WO2019196793A1 - Image processing method and apparatus, electronic device, and computer-readable storage medium - Google Patents
- Publication number: WO2019196793A1 (application PCT/CN2019/081743)
- Authority: WO — WIPO (PCT)
- Prior art keywords: image, face, image data, application, processing
Classifications
- G06V40/173—Classification, e.g. identification; face re-identification, e.g. recognising unknown faces across different face tracks
- G06F21/32—User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
- G06T7/521—Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
- G06V40/171—Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
- G06V40/45—Detection of the body part being alive
- G06T2207/10028—Range image; Depth image; 3D point clouds
- G06T2207/10048—Infrared image
Definitions
- the present application relates to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, an electronic device, and a computer readable storage medium.
- the electronic device can collect the face image and the 3D information of the face, and can perform face payment, face unlocking, and the like according to the collected face image and the face 3D information.
- Embodiments of the present application provide an image processing method, an image processing apparatus, an electronic device, and a computer readable storage medium.
- the image processing method of the embodiment of the present application includes: receiving target information associated with a face; determining an operating environment corresponding to the target information according to a security level of the target information; and performing processing related to the face in the operating environment.
- the image processing apparatus of the embodiment of the present application includes a receiving module and a processing module.
- the receiving module is configured to receive target information associated with the human face;
- the processing module is configured to determine an operating environment corresponding to the target information according to the security level of the target information, and perform processing related to the face in the operating environment.
- the electronic device of the embodiment of the present application includes a camera module, a first processing unit, and a second processing unit, where the first processing unit is configured to: receive target information associated with a human face; determine, according to a security level of the target information, an operating environment corresponding to the target information; and perform processing related to the face in the operating environment.
- a computer readable storage medium according to an embodiment of the present application stores a computer program that, when executed by a processor, implements the steps of the image processing method described above.
- FIG. 1 is a schematic flow chart of an image processing method according to some embodiments of the present application.
- FIG. 2 is a block diagram showing the structure of an electronic device according to some embodiments of the present application.
- FIG. 3 is a schematic structural diagram of an electronic device according to some embodiments of the present application.
- FIGS. 4 to 9 are schematic flowcharts of an image processing method according to some embodiments of the present application.
- FIG. 10 is a schematic diagram of a scene of a structured light measurement depth according to some embodiments of the present application.
- FIG. 11 is a flow chart of an image processing method according to some embodiments of the present application.
- FIGS. 12 to 15 are block diagrams of an image processing apparatus according to some embodiments of the present application.
- FIG. 16 is a block diagram of an electronic device according to some embodiments of the present application.
- Image processing methods include:
- 001 Receive target information associated with a face.
- 002 Determine an operating environment corresponding to the target information according to the security level of the target information, and perform processing related to the face in the operating environment.
- the image processing method of the embodiment of the present application determines an operating environment corresponding to the target information according to the security level of the target information, so that the face can be executed in the determined operating environment. Relevant processing to ensure the security of the relevant information of the face.
- the target information associated with the face includes image data for acquiring face depth information and attribute information of an application that invokes face recognition.
- the electronic device 10 may be a mobile phone, a tablet computer, a personal digital assistant or a wearable device, or the like.
- the electronic device 10 can include a camera module 110, a first processing unit 120, and a second processing unit 130.
- the first processing unit 120 can be a CPU (Central Processing Unit).
- the second processing unit 130 may be an MCU (Microcontroller Unit) or the like.
- the second processing unit 130 is connected between the first processing unit 120 and the camera module 110.
- the second processing unit 130 can control the laser camera 112, the floodlight 114, and the laser light 118 in the camera module 110.
- the first processing unit 120 can control the RGB (Red/Green/Blue color mode) camera 116 in the camera module 110.
- the camera module 110 includes a laser camera 112, a floodlight 114, an RGB camera 116, and a laser light 118.
- the laser camera 112 is an infrared camera for acquiring an infrared image.
- the floodlight 114 is a point source that emits infrared light;
- the laser light 118 is a point source that emits laser light, and the emitted laser light forms a pattern.
- the laser camera 112 can acquire an infrared image according to the reflected light.
- when the laser light 118 emits laser light, the laser camera 112 can acquire a speckle image based on the reflected light.
- the speckle image is an image in which the pattern formed by the laser light emitted by the laser light 118 is deformed upon reflection.
- the first processing unit 120 may include a CPU core running in a TEE (Trusted Execution Environment) environment and a CPU core running in a REE (Rich Execution Environment) environment.
- the TEE environment and the REE environment are operating modes of an ARM (Advanced RISC Machines) processor.
- the security level of the TEE environment is high, and only one CPU core in the first processing unit 120 can run in the TEE environment at the same time.
- operations with a higher security level in the electronic device 10 need to be performed in the CPU core in the TEE environment, while operations with a lower security level can be performed in a CPU core in the REE environment.
- the second processing unit 130 includes a PWM (Pulse Width Modulation) module 132, an SPI/I2C (Serial Peripheral Interface/Inter-Integrated Circuit) interface 134, a RAM (Random Access Memory) module 136, and a depth engine 138.
- the PWM module 132 can transmit a pulse to the camera module 110 to control the floodlight 114 or the laser light 118 to be turned on, so that the laser camera 112 can acquire an infrared image or a speckle image.
- the SPI/I2C interface 134 is configured to receive an image acquisition instruction sent by the first processing unit 120.
- the depth engine 138 can process the speckle image to obtain a depth disparity map.
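As an illustration of what a depth disparity map encodes, depth can be recovered from disparity by the standard structured-light triangulation relation. This is a hedged sketch, not the patented depth engine: the focal length and baseline values are hypothetical calibration parameters.

```python
# Illustrative disparity-to-depth conversion: with a calibrated focal
# length f (in pixels) and a baseline b (in mm) between the laser
# projector and the camera, depth is inversely proportional to the
# measured disparity at each pixel.
def disparity_to_depth(disparity_px, focal_px=500.0, baseline_mm=50.0):
    """Convert a disparity value in pixels to depth in millimetres."""
    if disparity_px <= 0:
        return None  # no valid correspondence at this pixel
    return focal_px * baseline_mm / disparity_px

# A larger disparity means the point is closer to the camera.
near = disparity_to_depth(100.0)  # 500 * 50 / 100 = 250 mm
far = disparity_to_depth(25.0)    # 500 * 50 / 25 = 1000 mm
```

A real depth engine applies this per pixel in hardware after the disparity map has been corrected for the camera's internal and external parameters.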
- an image acquisition instruction can be sent to the second processing unit 130 by the CPU core running in the TEE environment.
- the PWM module 132 can emit pulse waves to control the floodlight 114 in the camera module 110 to turn on so that an infrared image is acquired through the laser camera 112, and can likewise control the laser light 118 in the camera module 110 to turn on so that a speckle image is acquired through the laser camera 112.
- the camera module 110 can transmit the collected infrared image and speckle image to the second processing unit 130.
- the second processing unit 130 may process the received infrared image to obtain an infrared parallax map, and may also process the received speckle image to obtain a speckle disparity map or a depth disparity map.
- the processing of the infrared image and the speckle image by the second processing unit 130 refers to correcting the infrared image or the speckle image, and removing the influence of the internal and external parameters of the camera module 110 on the image.
- the second processing unit 130 can be set to different modes, and different modes output different images.
- when the second processing unit 130 is set to the speckle map mode, it processes the speckle image to obtain a speckle disparity map, from which a target speckle image can be obtained; when the second processing unit 130 is set to the depth map mode, it processes the speckle image to obtain a depth disparity map, from which a depth image can be obtained. The depth image refers to an image with depth information.
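The mode switch described above can be sketched as a simple dispatcher. This is an illustration only: the image types are represented as plain strings, and the real unit performs the processing in hardware.

```python
# Hedged sketch of the second processing unit's output modes: the same
# speckle input yields a different intermediate image depending on the
# configured mode.
def process_speckle(mode, speckle_image):
    if mode == "speckle":
        # later corrected into a target speckle image
        return "speckle_disparity_map"
    elif mode == "depth":
        # later converted into a depth image (an image with depth info)
        return "depth_disparity_map"
    raise ValueError(f"unknown mode: {mode!r}")
```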
- the second processing unit 130 may send the infrared parallax map and the speckle disparity map to the first processing unit 120, and the second processing unit 130 may also send the infrared parallax map and the depth disparity map to the first processing unit 120.
- the first processing unit 120 may acquire a target infrared image according to the infrared disparity map described above, and acquire a depth image according to the depth disparity map described above. Further, the first processing unit 120 may perform face recognition, face matching, living body detection, and acquiring depth information of the detected face according to the target infrared image and the depth image.
- the communication between the second processing unit 130 and the first processing unit 120 is through a fixed security interface to ensure the security of the transmitted data.
- the data sent by the first processing unit 120 to the second processing unit 130 passes through the SECURE SPI/I2C 140, and the data sent by the second processing unit 130 to the first processing unit 120 passes through the SECURE MIPI (Mobile Industry Processor Interface) 150.
- the second processing unit 130 may also acquire the target infrared image according to the infrared disparity map, calculate the acquired depth image by using the depth disparity map, and send the target infrared image and the depth image to the first processing unit 120.
- the second processing unit 130 may perform face recognition, face matching, living body detection, and acquiring depth information of the detected face according to the target infrared image and the depth image.
- the sending, by the second processing unit 130, the image to the first processing unit 120 means that the second processing unit 130 sends the image to the CPU core in the TEE environment in the first processing unit 120.
- the electronic device includes a laser camera 112, a floodlight 114, a visible light camera 116 (i.e., RGB camera 116), a laser light 118, a micro control unit MCU 130 (i.e., the second processing unit 130), and a processor 120 (i.e., the first processing unit 120).
- the laser camera 112, the floodlight 114, the visible light camera 116, and the laser light 118 are connected to the micro control unit MCU 130, respectively.
- the micro control unit MCU 130 is coupled to the processor 120.
- when the target information is image data for acquiring face depth information, step 001 of receiving the target information associated with the face includes step 011.
- the foregoing instruction may be sent to the second processing unit 130 connected to the first processing unit 120, so that the second processing unit 130 controls the camera module 110 to collect the infrared image and the speckle image; the first processing unit 120 in the electronic device 10 can also directly control the camera module 110 according to the instruction for acquiring face data, controlling it to collect the infrared image and the speckle image.
- the instruction for acquiring the face data further includes acquiring the visible light image
- the first processing unit 120 in the electronic device 10 may further control the camera module 110 to collect the visible light image, that is, the RGB image.
- the first processing unit 120 is an integrated circuit for processing data in the electronic device 10, for example, a CPU.
- the second processing unit 130 is connected to the first processing unit 120 and the camera module 110, and can pre-process the face image collected by the camera module 110.
- the intermediate image obtained by the pre-processing is sent to the first processing unit 120.
- the second processing unit 130 may be an MCU.
- the camera module 110 may transmit the image to the second processing unit 130 or the first processing unit 120 after acquiring the image according to the above instruction.
- the camera module 110 can transmit the infrared image and the speckle image to the second processing unit 130, and transmit the RGB image to the first processing unit 120.
- the camera module 110 can also transmit the infrared image, the speckle image, and the RGB image all to the first processing unit 120.
- the second processing unit 130 may process the acquired images to obtain an infrared disparity map and a depth disparity map, and then transmit the obtained infrared disparity map and depth disparity map to the first processing unit 120.
- the received image data may be classified by security level.
- the security level corresponding to each image data may be preset in the first processing unit 120.
- the image data received by the first processing unit 120 may include an infrared image, a speckle image, an infrared parallax map, a depth disparity map, and an RGB image.
- the three security levels preset in the first processing unit 120 include a first level, a second level, and a third level, and the security level is gradually decreased from the first level to the third level.
- the face depth information can be obtained according to the speckle image and the depth disparity map, so the speckle image and the depth disparity map can be set to the first level; the infrared image and the infrared disparity map can be used for face recognition, so they can be set to the second level; the RGB image can be set to the third level.
- 012 Determine an operating environment corresponding to the image data according to the security level, where the operating environment is an operating environment of the first processing unit 120.
- the first processing unit 120 can operate in different operating environments, such as a TEE environment or a REE environment.
- when the CPU in the electronic device 10 includes a plurality of CPU cores, one and only one CPU core can run in the TEE environment, and the other CPU cores run in the REE environment.
- when a CPU core runs in the TEE environment, it has a higher security level; when a CPU core runs in the REE environment, it has a lower security level. The electronic device 10 can determine that the first-level image data corresponds to the TEE operating environment, the third-level image data corresponds to the REE operating environment, and the second-level image data corresponds to either the TEE or the REE operating environment.
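The level assignment and the level-to-environment mapping described above can be summarized in a short sketch. The dictionary follows the example in the text; the names and the default choice for second-level data are illustrative assumptions.

```python
# Three preset security levels for the image data (level 1 is highest),
# following the example above: data that yields face depth information
# is level 1, data used for face recognition is level 2, RGB is level 3.
IMAGE_SECURITY_LEVEL = {
    "speckle_image": 1,
    "depth_disparity_map": 1,
    "infrared_image": 2,
    "infrared_disparity_map": 2,
    "rgb_image": 3,
}

def operating_environment(image_type, prefer_tee_for_level2=True):
    """Map an image type to the operating environment it is processed in."""
    level = IMAGE_SECURITY_LEVEL[image_type]
    if level == 1:
        return "TEE"
    if level == 3:
        return "REE"
    # Second-level data may be handled in either environment.
    return "TEE" if prefer_tee_for_level2 else "REE"
```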
- 013 Allocate the image data to the first processing unit 120 in the corresponding operating environment for processing, obtaining the face depth information.
- the electronic device 10 may allocate the acquired image data to the first processing unit 120 in the corresponding operating environment for processing.
- the speckle image and the depth disparity map may be allocated to the first processing unit 120 in the TEE environment for processing, the RGB image may be allocated to the first processing unit 120 in the REE environment for processing, and the infrared image and the infrared disparity map may be allocated to the first processing unit 120 in either the TEE environment or the REE environment for processing.
- the first processing unit 120 may perform face recognition according to the infrared image or the infrared disparity map, detecting whether the acquired infrared image or infrared disparity map includes a human face. If it does, the electronic device 10 may match the face included in the infrared image or the infrared disparity map with the face stored in the electronic device 10 to detect whether it is a stored face.
- the first processing unit 120 may acquire depth information of a face according to a speckle image or a depth disparity map, where the depth information of the face refers to three-dimensional information of the face.
- the first processing unit 120 may also perform face recognition based on the RGB image, detect whether there is a human face in the RGB image, and whether the face in the RGB image matches the stored face.
- when the first processing unit 120 is a CPU, its processing efficiency is relatively low.
- the first processing unit 120 may therefore classify the image data by security level, determine the operating environment corresponding to the image data according to its security level, and allocate the image data to the first processing unit 120 in the corresponding operating environment for processing, improving processing efficiency through the differentiated handling of the image data.
- the image data includes a face image acquired by the camera module 110 and/or an intermediate image processed by the second processing unit 130 on the face image.
- the camera module 110 in the electronic device 10 can acquire infrared images, speckle images, and RGB images.
- the camera module 110 can directly send the collected infrared image and speckle image to the first processing unit 120, or directly send the collected infrared image, speckle image, and RGB image to the first processing unit 120; the camera module 110 can also send the infrared image and the speckle image to the second processing unit 130 and send the RGB image to the first processing unit 120, in which case the second processing unit 130 processes the infrared image and the speckle image and sends the resulting intermediate images to the first processing unit 120.
- the image data includes an infrared image and a speckle image acquired by the camera module 110.
- the time interval between the first time when the camera module 110 collects the infrared image and the second time when the speckle image is collected is less than the first threshold.
- the second processing unit 130 can control the floodlight 114 in the camera module 110 to be turned on and collect the infrared image through the laser camera 112.
- the second processing unit 130 can also control the laser light 118 in the camera module 110 to be turned on and collect the speckle image through the laser camera 112.
- the time interval between the first moment when the camera module 110 acquires the infrared image and the second moment when it acquires the speckle image should be less than the first threshold; for example, the time interval between the first moment and the second moment is less than 5 milliseconds.
- a floodlight controller and a laser light controller may be disposed in the camera module 110, and the second processing unit 130 may, by controlling the time interval at which the floodlight controller or the laser light controller transmits pulse waves, keep the interval between the infrared image and the speckle image collected by the laser camera 112 below the first threshold. This ensures the consistency of the collected infrared image and speckle image, avoids large errors between them, and improves the accuracy of data processing.
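The capture-consistency rule above reduces to a simple timestamp check. A minimal sketch, assuming timestamps in milliseconds and the 5 ms example threshold from the text:

```python
# An infrared/speckle pair is only considered consistent if the two
# frames were captured within the first threshold of each other.
FIRST_THRESHOLD_MS = 5.0

def frames_consistent(infrared_ts_ms, speckle_ts_ms,
                      threshold_ms=FIRST_THRESHOLD_MS):
    """True if the two capture moments differ by less than the threshold."""
    return abs(infrared_ts_ms - speckle_ts_ms) < threshold_ms
```

In the device this constraint is enforced at the source, by timing the pulse waves sent to the floodlight and laser light controllers rather than by rejecting frames after the fact.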
- the image data includes an infrared image and an RGB image captured by the camera module 110.
- the infrared image and the RGB image are images acquired by the camera module 110 at the same time.
- the second processing unit 130 controls the laser camera 112 to acquire the infrared image
- the first processing unit 120 controls the RGB camera 116 to acquire the RGB image.
- a timing synchronization line can be added between the laser camera 112 and the RGB camera 116, so that the camera module 110 can simultaneously acquire infrared images and RGB images.
- the infrared image and the RGB image are simultaneously acquired by controlling the camera module 110, so that the collected infrared image and the RGB image are consistent, thereby improving the accuracy of the image processing.
- allocating the image data to the first processing unit 120 in the corresponding operating environment for processing comprises: extracting a feature set from the image data, and allocating the feature set to the first processing unit 120 in the operating environment corresponding to the image data for processing.
- the first processing unit 120 may extract the feature set from the image data and then allocate the feature set to the first processing unit 120 in the operating environment corresponding to the image data for processing.
- the first processing unit 120 may identify the face region in each image in the received image data and extract the face region for processing by the first processing unit 120 in the operating environment corresponding to each image.
- the first processing unit 120 may further extract the information of the facial feature points in each image and then allocate that information to the first processing unit 120 in the corresponding operating environment for processing.
- when the first processing unit 120 allocates a feature set to the first processing unit 120 in the corresponding operating environment, it first looks up the image data from which the feature set was extracted, then acquires the operating environment corresponding to that image data, and finally allocates the extracted feature set to the first processing unit 120 in that operating environment for processing.
- by extracting the feature set from the image data and allocating only the feature set for processing, the processing load of the first processing unit 120 is reduced and the processing efficiency is improved.
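A minimal sketch of the feature-set idea: rather than forwarding a whole image, only the face region (here a simple bounding-box crop) is handed to the core in the corresponding environment, which is what reduces the processing load. The crop helper and the (x, y, w, h) bounding-box format are illustrative assumptions.

```python
# Extract only the face region from an image before dispatching it,
# so the receiving core processes a smaller feature set, not the
# full frame. The image is a plain 2-D list-of-lists for illustration.
def extract_face_region(image, bbox):
    """Crop a face bounding box (x, y, w, h) from a 2-D image."""
    x, y, w, h = bbox
    return [row[x:x + w] for row in image[y:y + h]]

image = [[0] * 8 for _ in range(8)]          # stand-in 8x8 frame
face = extract_face_region(image, (2, 2, 4, 3))  # 3 rows of 4 pixels
```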
- the image processing method further includes:
- the first processing unit 120 may perform face recognition and living body detection based on the image data.
- the first processing unit 120 can detect whether there is a human face in the infrared image or the infrared parallax map.
- the first processing unit 120 may match the face existing in the infrared image or the infrared parallax map with the stored face, and detect the person present in the infrared image or the infrared parallax map. Whether the face matches the stored face successfully.
- the first processing unit 120 may acquire a face depth image according to the speckle image or the depth disparity map, and perform living body detection according to the face depth image.
- performing the living body detection according to the face depth image includes: searching for the face region in the face depth image, and detecting whether the face region has depth information and whether the depth information conforms to the face stereo rule. If the face region in the face depth image has depth information and the depth information conforms to the face stereo rule, the face belongs to a living body.
- the face stereo rule is a rule requiring that the face carry three-dimensional depth information.
- the first processing unit may further perform artificial intelligence recognition on the image data by using an artificial intelligence model to obtain the texture of the face surface, and detect whether the direction, density, and width of the texture conform to face rules; if they conform to the face rules, it is determined that the face belongs to a living body.
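The depth-based part of the liveness rule can be sketched as follows. This is a hedged illustration of the idea, not the patented detector: a real face spans a range of depths (nose versus cheeks versus ears) while a flat photo is near-planar, and the 10 mm variation threshold is an assumed parameter.

```python
# Depth-based liveness sketch: the face region must carry depth
# information, and that depth must vary the way a three-dimensional
# face does; a photograph held up to the camera has almost uniform depth.
def is_live_face(face_depth_values, min_depth_range_mm=10.0):
    values = [v for v in face_depth_values if v is not None]
    if not values:
        return False  # no depth information in the face region
    return (max(values) - min(values)) >= min_depth_range_mm
```

Here `face_depth_values` stands in for the per-pixel depths sampled from the face region of the depth image.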
- the image processing method further includes:
- 014 Get the type of the application that receives the face depth information
- the first processing unit 120 may send the acquired face depth information to the application, and the application program performs operations such as face unlocking, face payment, and the like.
- the first processing unit 120 may transmit the depth image to the application through a secure channel or a normal channel; the secure channel has a higher security level, and the normal channel has a lower security level.
- when data is transmitted through the secure channel, the data can be encrypted to prevent leakage or theft.
- the electronic device 10 can set a corresponding data channel according to the type of the application.
- an application with high security requirements can correspond to a secure channel
- an application with low security requirements can correspond to a normal channel.
- the payment application corresponds to a secure channel
- the image application corresponds to a normal channel.
- the first processing unit 120 may preset the type of each application and the data channel corresponding to each type; after obtaining the data channel corresponding to the type of the application, it may send the face depth information to the application through the corresponding data channel, allowing the application to proceed to the next step based on the depth image described above.
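The per-type channel selection above can be sketched in a few lines. The mapping follows the examples in the text (payment-class applications use the secure channel, image applications the normal one); the type names and the encryption placeholder are illustrative assumptions.

```python
# Preset mapping from application type to data channel, as described
# above; unknown types fall back to the normal channel here (an
# assumption for the sketch).
CHANNEL_BY_APP_TYPE = {
    "payment": "secure",
    "banking": "secure",
    "authentication": "secure",
    "image": "normal",
}

def send_depth_info(app_type, depth_info):
    """Pick the channel for this app type; encrypt on the secure channel."""
    channel = CHANNEL_BY_APP_TYPE.get(app_type, "normal")
    payload = f"encrypted({depth_info})" if channel == "secure" else depth_info
    return channel, payload
```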
- the image processing method of the embodiment of the present application selects the corresponding data channel to transmit data according to the type of the application, which can ensure the security of data transmission for applications with high security requirements and increase the speed of data transmission for applications with low security requirements.
- when the target information is attribute information of an application that invokes face recognition, step 001 of receiving the target information associated with the face includes step 021.
- step 002 determines an operating environment corresponding to the target information according to the security level of the target information, and performs the process related to the face in the operating environment, including step 022, step 023, and step 024.
- face recognition has been widely applied to various scenarios, such as identity authentication scenarios, mobile payment scenarios, simulated dynamic expression scenarios, video conference scenarios, and the like.
- the above scenarios all recognize a face by calling a hardware device such as a camera and respond according to the recognition result.
- Some scenarios require high security for operations, such as identity authentication scenarios and mobile payment scenarios.
- Some scenarios require relatively low security for operations, such as simulating dynamic emoticon scenes and video conferencing scenarios.
- the image processing method in the embodiment of the present application can ensure information security in a scenario with high security requirements and fast response in a scenario with relatively low security requirements.
- attribute information of an application that invokes face recognition may be first acquired.
- the attribute information of the application may include the security level of the application. For example, the security level of banking applications, payment applications, and authentication applications is high; the security level of ordinary camera applications is low.
- if the security level of the application is high, the face recognition process needs to be performed under the TEE; if the security level of the application is low, the face recognition process does not need to be executed under the TEE and can simply be executed under the REE.
- in a payment scenario, for example, the camera needs to be turned on to recognize the face and compare it with a pre-stored face; if the comparison is consistent, the verification succeeds and the payment operation is performed. Because leaked personal payment information may cause the loss of the user's personal property, this process needs to be executed under the TEE to ensure security.
- In a simulated dynamic expression scenario, the camera is turned on to recognize the face, but only the facial expression is captured, and various interesting dynamic expressions are then simulated to enhance the fun, so the process only needs to be executed under the REE.
- the application is controlled to enter the trusted execution environment TEE mode, and the steps of the face recognition process are performed serially under the trusted execution environment TEE.
- the face recognition process may include a step of detecting a face, a step of acquiring a face element, a step of detecting a face living body, and a step of recognizing a face. Due to the security features of the TEE, the step of detecting a face, the step of acquiring a face element, the step of detecting a face living body, and the step of recognizing a face are performed serially in sequence under the TEE. Under the REE, the security requirements are not high, and the step of detecting a face, the step of acquiring a face element, the step of detecting a face living body, and the step of recognizing a face can be performed in parallel.
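The serial-under-TEE versus parallel-under-REE scheduling above can be sketched as a small dispatcher. The thread-pool realization is an assumption (the disclosure does not say how parallelism is implemented); `steps` here are zero-argument callables standing in for the four recognition steps.

```python
import concurrent.futures

def run_pipeline(steps, environment: str):
    """Run the face-recognition steps serially under TEE, in parallel under REE.

    `steps` is a list of zero-argument callables; results are returned in
    step order in both modes.
    """
    if environment == "TEE":
        # Strict serial execution, one step after another.
        return [step() for step in steps]
    # Under REE the steps may run concurrently; map() preserves result order.
    with concurrent.futures.ThreadPoolExecutor() as pool:
        return list(pool.map(lambda s: s(), steps))
```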
- the detecting the face step refers to a process of emitting infrared light by calling the floodlight 114, receiving an infrared image by the laser camera 112, and determining whether it is a human face by using the infrared image.
- the step of acquiring the face element is a process of capturing a face with the visible light camera 116, and acquiring features related to image quality such as face brightness, face shooting angle, and the like.
- the face living body detection step combines infrared light and structured light: infrared light is emitted through the floodlight 114, structured light is emitted by the laser light 118, the infrared image and the structured light image are received through the laser camera, and the images are then analyzed to determine whether the face is a living body.
- In the face recognition step, the face is photographed by the visible light camera 116, and features such as the size of the eyes, the position of the nose on the face, and the length of the eyebrows are acquired to perform face recognition.
- the step of detecting the face, the step of acquiring the face element, the step of detecting the face living body, and the step of recognizing the face are not limited in execution order.
- the above steps can also be combined arbitrarily according to the actual application of the application to achieve the corresponding functions.
- For example, the face living body detection step may be combined with the face recognition step without the face element acquisition step. It should be understood that the above scenarios are merely examples and are not intended to limit the technical solutions.
- Under the REE, the face recognition process does not require high security; therefore, the steps of the face recognition process can be performed in parallel to improve the execution speed of the face recognition process.
- the step of detecting a face, the step of acquiring a face element, the step of detecting a face living body, and the step of recognizing a face in the face recognition process are explained in detail below.
- the step of detecting a face includes:
- infrared light can be emitted through the floodlight 114.
- Capture an infrared image passing through the subject.
- An infrared image passing through the subject is captured by the laser camera 112.
- the infrared image is sent to the micro control unit MCU130, and the infrared image is processed by the micro control unit MCU130 to obtain infrared image data.
- the infrared image is sent to the micro control unit MCU 130 and processed by the hardware micro control unit MCU 130, which prevents the raw data from being intercepted on the application side and can effectively improve data security.
- the infrared image data is provided to the application through a preset interface, so that the application calls the infrared image data.
- the infrared image data can be called by the application to implement the detection of the face.
- the preset interface is a bus interface that conforms to a preset standard, and includes a MIPI (Mobile Industry Processor Interface) bus interface, an I2C synchronous serial bus interface, and an SPI bus interface.
- the step of acquiring a face element includes:
- a visible light image of the subject can be captured by the visible light camera 116.
- the visible light image is sent to the micro control unit MCU 130, and the visible light image is processed by the micro control unit MCU 130 to obtain visible light image data.
- the visible image data is provided to the application through a preset interface, so that the application calls the visible image data.
- the application program may call the visible light image data to extract the face elements, such as acquiring features related to image quality like face brightness and face shooting angle, as the data basis for the next operation.
- the step of detecting a face living body includes:
- structured light may be generated by PWM modulation in the micro control unit MCU 130, structured light is emitted by the laser light 118, and infrared light is emitted to the object through the flood light 114 (infrared light emitter).
- Capture an infrared image and a structured light image passing through the subject.
- an infrared image and a structured light image passing through the subject may be captured by a laser camera 112 (infrared light receiver).
- the infrared image and the structured light image are sent to the micro control unit MCU 130, and the infrared image and the structured light image are processed by the micro control unit MCU130 to obtain infrared image data and depth of field data.
- the infrared image and the structured light image may be processed by a depth engine in the micro control unit MCU 130 to acquire infrared image data and depth of field data.
- the infrared image data and the depth of field data are provided to the application through a preset interface, so that the application calls the infrared image data and the depth of field data for secure verification.
- structured light can be modulated by the micro control unit MCU 130, and structured light is emitted to the object by the laser light 118 (structured light projection device).
- the subject may be the user's face.
- the structured light is deformed due to the shape characteristic of the object, and by collecting the structured light information, a structured light image having the subject contour and depth can be obtained.
- the type of structured light may include laser stripes, Gray code, sinusoidal stripes, or non-uniform speckles.
- the following takes the widely used fringe projection technique as an example; the fringe projection technique belongs to surface structured light in a broad sense.
- a sinusoidal fringe is generated by the micro control unit MCU 130, the sinusoidal fringe is projected to the object by the laser light 118 (structured light projection device), and the fringe is photographed by the laser camera 112. The fringe is bent to a degree modulated by the object; the bent fringe is demodulated to obtain the phase, and the phase is then converted to the height of the whole field.
- the structured light used in the embodiments of the present application may be other arbitrary patterns in addition to the above-mentioned stripes, depending on the specific application scenario.
- the laser camera 112 can transmit the structured light image to the micro control unit MCU 130, and use the micro control unit MCU 130 to calculate the depth of field data corresponding to the subject.
- the structured light image may be sent to the micro control unit MCU 130 and processed in the hardware micro control unit MCU 130; compared with handing the raw data directly to the application for processing, the data has already been calculated in hardware, which prevents the raw data from being intercepted on the application side.
- the micro control unit MCU 130 may be used to demodulate the phase information corresponding to the deformed position pixel in the structured light image, and then convert the phase information into height information, and finally determine the depth of field data corresponding to the object according to the height information.
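The demodulate-phase, convert-to-height sequence above can be illustrated with a classic four-step phase-shifting scheme. Both the four-step scheme and the linear `scale` factor (standing in for a full system calibration) are illustrative assumptions; the disclosure only states that phase is demodulated and converted to height. Plain per-pixel lists are used to keep the sketch self-contained.

```python
import math

def phase_to_height(i0, i1, i2, i3, scale=1.0):
    """Four-step phase-shifting demodulation followed by a linear
    phase-to-height mapping.

    i0..i3 are per-pixel intensity sequences of the same fringe pattern
    captured with phase shifts of 0, pi/2, pi, and 3*pi/2. The wrapped
    phase of the bent fringe is atan2(i3 - i1, i0 - i2); multiplying by
    `scale` models the conversion to the height of the whole field.
    """
    return [math.atan2(d3 - d1, d0 - d2) * scale
            for d0, d1, d2, d3 in zip(i0, i1, i2, i3)]
```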
- the processor 120 provides the depth of field data acquired from the micro control unit MCU 130 to the application through a preset interface to cause the application to call the depth of field data. For example, based on the structured light image of the face, data such as the contour and height of the face are calculated.
- the above data information has already been calculated in the micro control unit MCU 130, and the application only needs to invoke the data to perform feature comparison with the prestored data to implement identity verification. If the verification is passed, the user gains access to the application and can perform further operations in it.
- For example, the living body detection may use structured light to create a three-dimensional face model, acquire three-dimensional features of the face in combination with infrared image data or multi-frame facial dynamic features, and the like, so as to detect that the currently photographed face is a living body rather than a planar photo.
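A toy version of the photo-versus-living-face distinction above: a printed photo is nearly planar, so depth values sampled across face landmarks show little spread, while a real face has millimetre-scale relief (nose versus cheeks). The 5 mm threshold and the spread-based test are illustrative assumptions, not the patented detection method.

```python
def is_live_face(depth_samples, min_relief_mm=5.0):
    """Naive liveness cue: accept only if the depth spread across the
    sampled face landmarks exceeds a minimum relief (assumed 5 mm)."""
    return max(depth_samples) - min(depth_samples) >= min_relief_mm
```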
- the face recognition step includes:
- the visible light image is sent to the micro control unit MCU, and the visible light image is processed by the micro control unit MCU 130 to obtain facial feature data.
- the facial feature data may include the size of the eyes, the position information of the nose on the face, the length of the eyebrows, and the like.
- the face feature data is provided to the application through a preset interface, so that the application calls the face feature data.
- the visible light image data can be called by the application to perform face recognition.
- the image processing method of the embodiment of the present application obtains the attribute information of the application that invokes the face recognition, and determines whether the face recognition process needs to be executed under the trusted execution environment TEE according to the attribute information. If the face recognition process needs to be executed under the trusted execution environment TEE, the application is controlled to enter the TEE mode, and the steps of the face recognition process are performed serially under the TEE; if the face recognition process does not need to be executed under the TEE, the application is controlled to enter the normal execution environment REE mode, and the steps of the face recognition process are performed in parallel under the REE. This can ensure information security in scenarios with high security requirements and realize fast response in scenarios with relatively low security requirements.
- the present application further provides an image processing apparatus 20 that can be applied to the electronic device 10 shown in FIG. 2 or FIG.
- the image processing device 20 includes a receiving total module 201 and a processing total module 202.
- the receiving total module 201 is configured to receive target information associated with a human face.
- the processing total module 202 is configured to determine an operating environment corresponding to the target information according to the security level of the target information, and perform processing related to the face in the operating environment.
- the target information is image data for acquiring face depth information.
- the receiving total module 201 includes a receiving module 211.
- the processing module 202 includes a determination module 212 and a first processing module 213.
- the receiving module 211 is configured to classify the image data into a security level if the image data for acquiring the face depth information is received.
- the determining module 212 is configured to determine an operating environment corresponding to the image data according to the security level; the operating environment is an operating environment of the first processing unit.
- the first processing module 213 is configured to dispatch the image data to the first processing unit in the corresponding operating environment for processing, to obtain the face depth information.
- the image data includes a face image acquired by the camera module 110 and/or an intermediate image processed by the second processing unit on the face image.
- the image data includes an infrared image and a speckle image acquired by the camera module 110.
- the time interval between the first moment of acquiring the infrared image and the second moment of acquiring the speckle image is less than the first threshold.
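The pairing condition above (infrared and speckle frames captured within the first threshold of each other) can be expressed as a simple validity check. The 50 ms value is an illustrative assumption; the disclosure only refers to a "first threshold".

```python
FIRST_THRESHOLD_MS = 50  # assumed value; the disclosure leaves it unspecified

def frames_paired(t_infrared_ms: int, t_speckle_ms: int,
                  threshold_ms: int = FIRST_THRESHOLD_MS) -> bool:
    """Accept an infrared/speckle frame pair only if both were captured
    close enough in time to describe the same face pose."""
    return abs(t_speckle_ms - t_infrared_ms) < threshold_ms
```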
- the image data includes an infrared image and an RGB image captured by the camera module 110.
- the infrared image and the RGB image are images acquired simultaneously by the camera module.
- the first processing module 213 is further configured to extract a feature set in the image data, and dispatch the feature set to the first processing unit 120 in the operating environment corresponding to the image data for processing.
- the determining module 212 is further configured to perform face recognition and living body detection according to the image data before the face depth information is obtained, and determine that the face in the image data passes recognition and that the detected face is a living body.
- the processing total module 202 further includes a first obtaining module 214 and a sending module 215 .
- the first obtaining module 214 is configured to acquire a type of an application that receives the face depth information.
- the determining module 212 is further configured to determine a data channel corresponding to the application according to the type.
- the sending module 215 is configured to send the face depth information to the application through the corresponding data transmission channel.
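The three modules above (acquire the application type, pick a channel from the type, send the depth information through it) can be sketched as a small dispatch. The type names and the "secure"/"normal" channel labels are assumptions; the disclosure only states that the data channel is determined from the application type.

```python
def channel_for(app_type: str) -> str:
    """Pick a data transmission channel for face depth information
    based on the application type (secure types get the secure channel)."""
    secure_types = {"payment", "unlock", "authentication"}  # assumed set
    return "secure" if app_type in secure_types else "normal"

def send_face_depth(app_type: str, depth_info) -> str:
    """Stub sender: returns a description of the dispatch performed."""
    return f"sent {len(depth_info)} depth values via {channel_for(app_type)} channel"
```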
- the first processing unit 120 may classify the image data into a security level, determine an operating environment corresponding to the image data according to the security level, and dispatch the image data to the first processing unit 120 in the corresponding operating environment for processing, improving the efficiency of processing the image data by dispatching different image data differently.
- the target information is attribute information of an application that invokes face recognition.
- the receiving total module 201 includes a second obtaining module 221.
- the processing module 202 includes a determination module 222 and a second processing module 223.
- the second obtaining module 221 is configured to acquire attribute information of an application that invokes face recognition.
- the determining module 222 is configured to determine, according to the attribute information, whether the face recognition process needs to be executed under the trusted execution environment TEE.
- the second processing module 223 is configured to: if the face recognition process needs to be executed under the trusted execution environment TEE, control the application to enter the trusted execution environment TEE mode and perform the steps of the face recognition process serially under the trusted execution environment TEE; if the face recognition process does not need to be executed under the trusted execution environment TEE, control the application to enter the normal execution environment REE mode and perform the steps of the face recognition process in parallel under the normal execution environment REE.
- the attribute information of the application includes the security level of the application.
- the face recognition process includes a step of detecting a face, a step of acquiring a face element, a step of detecting a face living body, and a step of recognizing a face.
- the second processing module 223 is further configured to: emit infrared light to the object; capture an infrared image passing through the object; send the infrared image to the micro control unit MCU 130, and perform the infrared image by using the micro control unit MCU130 Processing to obtain infrared image data; providing the infrared image data to the application through a preset interface, so that the application calls the infrared image data.
- the second processing module 223 is further configured to: capture a visible light image of the object; send the visible light image to the micro control unit MCU 130, and process the visible light image by using the micro control unit MCU 130 to obtain visible light image data;
- the visible light image data is provided to the application through a preset interface to cause the application to call the visible light image data.
- the second processing module 223 is further configured to: emit infrared light and structured light to the object; capture an infrared image and a structured light image passing through the object; and send the infrared image and the structured light image to the micro control unit MCU130 And using the micro control unit MCU130 to process the infrared image and the structured light image to obtain infrared image data and depth of field data; and providing the infrared image data and the depth of field data to the application through a preset interface, so that the application calls the infrared image data And depth of field data.
- the second processing module 223 is configured to send the infrared image and the structured light image to the micro control unit MCU 130, and process the infrared image and the structured light image by using the micro control unit MCU130 to obtain infrared image data and depth of field data, specifically for : demodulating the phase information corresponding to the pixel of the deformed position in the structured light image; converting the phase information into height information; and determining the depth of field data corresponding to the object according to the height information.
- the second processing module 223 is further configured to: capture a visible light image of the object; send the visible light image to the micro control unit MCU 130, and process the visible light image by using the micro control unit MCU 130 to obtain facial feature data;
- the face feature data is provided to the application through a preset interface to cause the application to call the face feature data.
- the preset interface is a bus interface that conforms to a preset standard, and includes an MIPI bus interface, an I2C synchronous serial bus interface, and an SPI bus interface.
- the image processing apparatus 20 of the embodiment of the present application obtains the attribute information of the application that invokes the face recognition, and determines whether the face recognition process needs to be executed under the trusted execution environment TEE according to the attribute information. If the face recognition process needs to be executed under the trusted execution environment TEE, the application is controlled to enter the TEE mode, and the steps of the face recognition process are performed serially under the TEE; if the face recognition process does not need to be executed under the TEE, the application is controlled to enter the normal execution environment REE mode, and the steps of the face recognition process are performed in parallel under the REE. This can ensure information security in scenarios with high security requirements and realize fast response in scenarios with relatively low security requirements.
- each module in the image processing apparatus 20 described above is for illustrative purposes only. In other embodiments, the image processing apparatus 20 may be divided into different modules as needed to complete all or part of the functions of the image processing apparatus 20.
- the implementation of the various modules in the image processing apparatus 20 provided in the embodiments of the present application may be in the form of a computer program.
- the computer program can run on a terminal or server.
- the program modules of the computer program can be stored on the memory of the terminal or server.
- the steps of the image processing method described in various embodiments of the present application are implemented when the computer program is executed by a processor.
- the present application also provides an electronic device 10.
- the electronic device 10 includes a first processing unit 120, a second processing unit 130, and a camera module 110.
- the first processing unit 120 is configured to receive target information associated with a human face, and determine an operating environment corresponding to the target information according to a security level of the target information, and execute a face related to the face in the operating environment. deal with.
- the second processing unit is respectively connected to the first processing unit and the camera module.
- the target information includes image data for acquiring face depth information.
- the first processing unit 120 is configured to: if the image data for acquiring the face depth information is received, classify the image data into a security level; determine an operating environment corresponding to the image data according to the security level, where the operating environment is the operating environment of the first processing unit 120; and dispatch the image data to the first processing unit 120 in the corresponding operating environment for processing, to obtain the face depth information.
- the image data includes a face image acquired by the camera module 110 and/or an intermediate image processed by the second processing unit 130 on the face image.
- the image data includes an infrared image and a speckle image acquired by the camera module 110.
- the time interval between the first moment of acquiring the infrared image and the second moment of acquiring the speckle image is less than the first threshold.
- the image data includes an infrared image and an RGB image collected by the camera module 110.
- the infrared image and the RGB image are images acquired by the camera module 110 at the same time.
- the first processing unit 120 dispatching the image data to the first processing unit 120 in the corresponding operating environment for processing includes: extracting a feature set in the image data, and dispatching the feature set to the first processing unit 120 in the operating environment corresponding to the image data for processing.
- the first processing unit 120 is further configured to perform face recognition and living body detection according to the image data, and determine that the face in the image data passes recognition and that the detected face is a living body.
- the first processing unit 120 is further configured to acquire a type of an application that receives the face depth information, determine a data channel corresponding to the application according to the type, and send the face depth information to the application by using a corresponding data transmission channel. .
- the first processing unit 120 may classify the image data into a security level, determine an operating environment corresponding to the image data according to the security level, and dispatch the image data to the first processing unit 120 in the corresponding operating environment for processing, improving the efficiency of processing the image data by dispatching different image data differently.
- the camera module 110 includes a laser camera 112 , a floodlight 114 , a visible light camera 116 , and a laser light 118 .
- the second processing unit 130 is a micro control unit MCU 130
- the first processing unit 120 is a processor 120.
- the target information includes attribute information of an application that invokes face recognition.
- the laser camera 112, the floodlight 114, the visible light camera 116, and the laser light 118 are connected to the micro control unit MCU 130, respectively.
- the micro control unit MCU 130 is coupled to the processor 120.
- the processor 120 is specifically configured to: obtain attribute information of an application that invokes face recognition; determine, according to the attribute information, whether the face recognition process needs to be executed in a trusted execution environment TEE; if the face recognition process needs to be performed in a trusted manner Executing under the environment TEE, the control application enters the trusted execution environment TEE mode, and the steps of the face recognition process are serially executed under the trusted execution environment TEE; if the face recognition process does not need to be executed under the trusted execution environment TEE, Then, the control application enters the normal execution environment REE mode, and the steps of the face recognition process are executed in parallel under the normal execution environment REE.
- the attribute information of the application includes a security level of the application.
- the face recognition process includes a step of detecting a face, a step of acquiring a face element, a step of detecting a face living body, and a step of recognizing a face.
- the floodlight 114 emits infrared light to the subject.
- the laser camera 112 captures an infrared image that passes through the subject and transmits the infrared image to the micro control unit MCU 130.
- the micro control unit MCU 130 processes the infrared image to acquire infrared image data.
- the processor 120 provides the infrared image data to the application through a preset interface to cause the application to call the infrared image data.
- the visible light camera 116 captures a visible light image of the subject and transmits the visible light image to the micro control unit MCU 130.
- the micro control unit MCU 130 processes the visible light image to acquire visible light image data.
- the processor 120 provides visible light image data to the application via a preset interface to cause the application to invoke visible light image data.
- the floodlight 114 emits infrared light to the subject, and the laser light 118 emits structured light to the subject.
- the laser camera 112 captures an infrared image and a structured light image passing through the subject, and transmits the infrared image and the structured light image to the micro control unit MCU 130.
- the micro control unit MCU 130 processes the infrared image and the structured light image to acquire infrared image data and depth of field data.
- the processor 120 provides the infrared image data and the depth of field data to the application through a preset interface to cause the application to call the infrared image data and the depth of field data.
- when processing the infrared image and the structured light image to obtain the infrared image data and the depth of field data, the micro control unit MCU 130 specifically performs the steps of demodulating the phase information corresponding to the deformed position pixels in the structured light image, converting the phase information into height information, and determining the depth of field data corresponding to the object according to the height information.
- the visible light camera 116 captures a visible light image of the subject and transmits the visible light image to the micro control unit MCU 130.
- the micro control unit MCU 130 processes the visible light image to acquire facial feature data.
- the processor 120 provides the face feature data to the application through a preset interface to cause the application to invoke the face feature data.
- the electronic device 10 of the embodiment of the present application obtains the attribute information of the application that invokes the face recognition, and determines whether the face recognition process needs to be executed under the trusted execution environment TEE according to the attribute information. If the face recognition process needs to be executed under the trusted execution environment TEE, the application is controlled to enter the TEE mode, and the steps of the face recognition process are performed serially under the TEE; if the face recognition process does not need to be executed under the TEE, the application is controlled to enter the normal execution environment REE mode, and the steps of the face recognition process are performed in parallel under the REE. This can ensure information security in scenarios with high security requirements and realize fast response in scenarios with relatively low security requirements.
- Electronic device 30 includes one or more processors 31, memory 32, and one or more programs.
- One or more of the programs are stored in memory 32 and are configured to be executed by one or more processors 31.
- the program includes instructions for executing the image processing method described in any of the above embodiments.
- the program includes instructions for performing an image processing method of the following steps:
- 012 Determine an operating environment corresponding to the image data according to the security level, where the operating environment is an operating environment of the first processing unit 120;
- the image data is dispatched to the first processing unit 120 in the corresponding operating environment for processing, and the face depth information is obtained.
- the program includes instructions for performing the image processing method of the steps:
- the application also provides a computer readable storage medium.
- when the stored computer program is executed by a processor, the processor is caused to perform the steps of the image processing method described in any one of the embodiments herein.
- the present application also provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the steps of the image processing method of any one of the embodiments of the present application.
- Non-volatile memory can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
- Volatile memory can include random access memory (RAM), which acts as an external cache.
- RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
Abstract
An image processing method, an image processing apparatus (20), an electronic device (10), and a computer-readable storage medium. The image processing method includes: (001) receiving target information associated with a human face; and (002) determining an operating environment corresponding to the target information according to the security level of the target information, and performing processing related to the face in the operating environment.
Description
Priority Information

This application claims priority to and the benefit of Chinese Patent Application No. 201810327407.3, filed with the China National Intellectual Property Administration on April 12, 2018, and Chinese Patent Application No. 201810403022.0, filed with the China National Intellectual Property Administration on April 28, 2018, the entire contents of which are incorporated herein by reference.

This application relates to the field of image processing technologies, and in particular to an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium.

With the development of face recognition technology and structured light technology, face unlocking, face payment, and the like have become increasingly common in electronic devices. Through structured light technology, an electronic device can collect a face image and 3D information of the face, and face payment, face unlocking, and the like can be performed according to the collected face image and face 3D information.
Summary

Embodiments of the present application provide an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium.

The image processing method of the embodiments of the present application includes: receiving target information associated with a human face; and determining an operating environment corresponding to the target information according to the security level of the target information, and performing processing related to the face in the operating environment.

The image processing apparatus of the embodiments of the present application includes a receiving total module and a processing total module. The receiving total module is configured to receive target information associated with a human face; the processing total module is configured to determine an operating environment corresponding to the target information according to the security level of the target information, and perform processing related to the face in the operating environment.

The electronic device of the embodiments of the present application includes a camera module, a first processing unit, and a second processing unit. The first processing unit is configured to: receive target information associated with a human face; and determine an operating environment corresponding to the target information according to the security level of the target information, and perform processing related to the face in the operating environment.

The computer-readable storage medium of the embodiments of the present application stores a computer program thereon; when the computer program is executed by a processor, the steps of the above image processing method are implemented.
本申请实施方式的附加方面和优点将在下面的描述中部分给出,部分将从下面的描述中变得明显,或通过本申请的实践了解到。
本申请的上述和/或附加的方面和优点可以从结合下面附图对实施方式的描述中将变得明显和容易理解,其中:
图1是本申请某些实施方式的图像处理方法的流程示意图。
图2是本申请某些实施方式的电子设备的结构框图。
图3是本申请某些实施方式的电子设备的结构示意图。
图4至图9是本申请某些实施方式的图像处理方法的流程示意图。
图10是本申请某些实施方式的结构光测量深度的场景示意图。
图11是本申请某些实施方式的图像处理方法的流程示意图。
图12至图15是本申请某些实施方式的图像处理装置的模块示意图。
图16是本申请某些实施方式的电子设备的模块示意图。
为了使本申请的目的、技术方案及优点更加清楚明白,以下结合附图及实施例,对本申请进行进一步详细说明。应当理解,此处所描述的具体实施例仅仅用以解释本申请,并不用于限定本申请。
请参阅图1,本申请提供一种图像处理方法,可应用于电子设备10。图像处理方法包括:
001:接收与人脸相关联的目标信息;和
002:根据目标信息的安全等级确定与目标信息对应的运行环境,并在运行环境下执行与人脸相关的处理。
目前人脸识别已广泛应用于身份认证、移动支付、模拟动态表情等场景。而如何提高诸如身份认证、移动支付等操作的安全性,降低用户的信息被盗取的风险已成为亟需解决的问题。本申请实施方式的图像处理方法在接收到与人脸相关联的目标信息后,根据目标信息的安全等级确定出与目标信息对应的运行环境,从而可以在确定出的运行环境下执行与人脸相关的处理,保证人脸的相关信息的安全性。与人脸相关联的目标信息包括用于获取人脸深度信息的图像数据及调用人脸识别的应用程序的属性信息。
当目标信息为用于获取人脸深度信息的图像数据时,如图2所示,电子设备10可为手机、平板电脑、个人数字助理或可穿戴设备等。电子设备10可包括摄像头模组110、第一处理单元120、及第二处理单元130。第一处理单元120可为CPU(Central Processing Unit,中央处理器)。第二处理单元130可为MCU(Microcontroller Unit,微控制单元)等。其中,第二处理单元130连接在第一处理单元120和摄像头模组110之间,第二处理单元130可控制摄像头模组110中激光摄像头112、泛光灯114和镭射灯118,第一处理单元120可控制摄像头模组110中RGB(Red/Green/Blue,红/绿/蓝色彩模式)摄像头116。
摄像头模组110中包括激光摄像头112、泛光灯114、RGB摄像头116和镭射灯118。激光摄像头112为红外摄像头,用于获取红外图像。泛光灯114为可发射红外光的点光源;镭射灯118为可发射激光且发射的激光可形成图案的点光源。其中,当泛光灯114发射红外光时,激光摄像头112可根据反射回的光线获取红外图像。当镭射灯118发射激光时,激光摄像头112可根据反射回的光线获取散斑图像。上述散斑图像是镭射灯118发射形成图案的激光被反射后图案发生形变的图像。
第一处理单元120可包括在TEE(Trusted execution environment,可信运行环境)环境下运行的CPU内核和在REE(Rich Execution Environment,自然运行环境)环境下运行的CPU内核。其中,TEE环境和REE环境均为ARM模块(Advanced RISC Machines,高级精简指令集处理器)的运行模式。其中,TEE环境的安全级别较高,第一处理单元120中有且仅有一个CPU内核可同时运行在TEE环境下。通常情况下,电子设备10中安全级别较高的操作行为需要在TEE环境下的CPU内核中执行,安全级别较低的操作行为可在REE环境下的CPU内核中执行。
第二处理单元130包括PWM(Pulse Width Modulation,脉冲宽度调制)模块132、SPI/I2C(Serial Peripheral Interface/Inter-Integrated Circuit,串行外设接口/双向二线制同步串行接口)接口134、RAM(Random Access Memory,随机存取存储器)模块136和深度引擎138。PWM模块132可向摄像头模组110发射脉冲,控制泛光灯114或镭射灯118开启,使得激光摄像头112可采集到红外图像或散斑图像。SPI/I2C接口134用于接收第一处理单元120发送的图像采集指令。深度引擎138可对散斑图像进行处理得到深度视差图。
当第一处理单元120接收到应用程序的数据获取请求时,例如,当应用程序需要进行人脸解锁、人脸支付时,可通过运行在TEE环境下的CPU内核向第二处理单元130发送图像采集指令。当第二处理单元130接收到图像采集指令后,可通过PWM模块132发射脉冲波控制摄像头模组110中的泛光灯114开启并通过激光摄像头112采集红外图像、控制摄像头模组110中的镭射灯118开启并通过激光摄像头112采集散斑图像。摄像头模组110可将采集到的红外图像和散斑图像发送给第二处理单元130。第二处理单元130可对接收到的红外图像进行处理得到红外视差图,还可对接收到的散斑图像进行处理得到散斑视差图或深度视差图。其中,第二处理单元130对红外图像和散斑图像进行处理是指对红外图像或散斑图像进行校正,去除摄像头模组110中内外参数对图像的影响。其中,第二处理单元130可设置成不同的模式,不同模式输出的图像不同。当第二处理单元130设置为散斑图模式时,第二处理单元130对散斑图像处理得到散斑视差图,根据上述散斑视差图可得到目标散斑图像;当第二处理单元130设置为深度图模式时,第二处理单元130对散斑图像处理得到深度视差图,根据上述深度视差图可得到深度图像,深度图像是指带有深度信息的图像。第二处理单元130可将上述红外视差图和散斑视差图发送给第一处理单元120,第二处理单元130也可将上述红外视差图和深度视差图发送给第一处理单元120。第一处理单元120可根据上述红外视差图获取目标红外图像、根据上述深度视差图获取深度图像。进一步地,第一处理单元120可根据目标红外图像、深度图像来进行人脸识别、人脸匹配、活体检测以及获取检测到的人脸的深度信息。
第二处理单元130与第一处理单元120之间通过固定的安全接口进行通信,用以确保传输数据的安全性。如图2所示,第一处理单元120发送给第二处理单元130的数据是通过SECURE SPI/I2C 140,第二处理单元130发送给第一处理单元120的数据是通过SECURE MIPI(Mobile Industry Processor Interface,移动产业处理器接口)150。
可选地,第二处理单元130也可根据上述红外视差图获取目标红外图像、上述深度视差图计算获取深度图像,再将上述目标红外图像、深度图像发送给第一处理单元120。
可选地,第二处理单元130可根据上述目标红外图像、深度图像进行人脸识别、人脸匹配、活体检测以及获取检测到的人脸的深度信息。其中,第二处理单元130将图像发送给第一处理单元120是指第二处理单元130将图像发送给第一处理单元120中处于TEE环境下的CPU内核。
当目标信息为调用人脸识别的应用程序的属性信息时,如图3所示,电子设备包括激光摄像头112、泛光灯114、可见光摄像头116(即RGB摄像头116)、镭射灯118、微控制单元MCU130(即第二处理单元130)、处理器120(即第一处理单元120)。激光摄像头112、泛光灯114、可见光摄像头116、镭射灯118分别与微控制单元MCU130相连。微控制单元MCU130与处理器120相连。
请参阅图2和图4,在一个实施例中,目标信息为用于获取人脸深度信息的图像数据,步骤001接收与人脸相关联的目标信息包括步骤011。步骤002根据目标信息的安全等级确定与目标信息对应的运行环境,并在运行环境下执行与人脸相关的处理包括步骤012和步骤013。
011:若接收到用于获取人脸深度信息的图像数据,对图像数据划分安全等级。
当电子设备10中的第一处理单元120接收到应用程序侧获取人脸数据的指令后,可将上述指令发送给与第一处理单元120连接的第二处理单元130,使第二处理单元130控制摄像头模组110采集红外图像和散斑图像;电子设备10中的第一处理单元120还可根据获取人脸数据的指令直接控制摄像头模组110,控制摄像头模组110采集红外图像和散斑图像。可选地,若上述获取人脸数据的指令中还包括获取可见光图像,则电子设备10中的第一处理单元120还可控制摄像头模组110采集可见光图像,即RGB图像。第一处理单元120为电子设备10中处理数据的集成电路,例如CPU;第二处理单元130分别连接第一处理单元120与摄像头模组110,可对摄像头模组110采集的人脸图像进行预处理,再将预处理得到的中间图像发送给第一处理单元120。可选地,第二处理单元130可为MCU。
摄像头模组110在根据上述指令采集到图像后,可将图像传送给第二处理单元130或第一处理单元120。可选地,摄像头模组110可将红外图像和散斑图像传送给第二处理单元130,将RGB图像传送给第一处理单元120;摄像头模组110也可将红外图像、散斑图像和RGB图像都传送给第一处理单元120。其中,当摄像头模组110将红外图像和散斑图像传送给第二处理单元130时,第二处理单元130可对获取的图像进行处理得到红外视差图和深度视差图,再将获取的红外视差图和深度视差图传送给第一处理单元120。
当第一处理单元120接收到摄像头模组110直接传输的图像数据或经过第二处理单元130处理后的中间图像时,可对接收到的图像数据划分安全等级。其中,第一处理单元120中可预设各个图像数据对应的安全等级。可选地,第一处理单元120接收到的图像数据可包括红外图像、散斑图像、红外视差图、深度视差图和RGB图像。第一处理单元120中可预设三个安全等级,包括第一等级、第二等级和第三等级,由第一等级到第三等级安全级别逐渐降低。根据散斑图像和深度视差图可得到人脸深度信息,因此可将散斑图像和深度视差图设定为第一等级;根据红外图像和红外视差图可进行人脸识别,因此可将红外图像和红外视差图设定为第二等级;RGB图像可设定为第三等级。
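上述按图像数据类型预设安全等级的做法,可以用一个简单的映射来示意。以下为假设性的Python示意代码,其中SECURITY_LEVELS、classify等名称均为说明而虚构,并非本申请的实际实现:

```python
# 假设性示意:按图像数据类型预设安全等级(数值越小,安全级别越高)
SECURITY_LEVELS = {
    "speckle": 1,          # 散斑图像:可得到人脸深度信息,第一等级
    "depth_disparity": 1,  # 深度视差图:同样可得到深度信息,第一等级
    "infrared": 2,         # 红外图像:可用于人脸识别,第二等级
    "ir_disparity": 2,     # 红外视差图:第二等级
    "rgb": 3,              # RGB图像:第三等级
}

def classify(image_type: str) -> int:
    """返回图像数据的安全等级;未知类型按最高安全级别(第一等级)处理。"""
    return SECURITY_LEVELS.get(image_type, 1)
```

例如,`classify("speckle")`返回1,`classify("rgb")`返回3。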
012:根据安全等级确定图像数据对应的运行环境,运行环境是第一处理单元120的运行环境。
第一处理单元120可在不同的运行环境下运行,例如TEE环境和REE环境。以第一处理单元120是CPU为例,当电子设备10中CPU包括多个CPU内核时,有且仅有一个CPU内核可运行在TEE环境下,其他CPU内核可运行在REE环境下。其中,当CPU内核运行在TEE环境下时,CPU内核的安全级别较高;当CPU内核运行在REE环境下时,CPU内核的安全级别较低。可选地,电子设备10可确定第一等级的图像数据对应TEE运行环境,第三等级的图像数据对应REE运行环境,第二等级的图像数据对应TEE运行环境或REE运行环境。
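安全等级到运行环境的对应关系可以示意如下。这是一段假设性的Python示意,runtime_env为虚构名称;其中"第二等级在TEE空闲时优先TEE"仅为一种示例策略,并非本申请限定的唯一做法:

```python
def runtime_env(level: int, tee_busy: bool = False) -> str:
    """根据图像数据的安全等级确定运行环境:
    第一等级必须在TEE下处理,第三等级在REE下处理,
    第二等级可在TEE或REE下处理(此处假设TEE空闲时优先TEE)。"""
    if level == 1:
        return "TEE"
    if level == 3:
        return "REE"
    return "REE" if tee_busy else "TEE"
```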
013:将图像数据划分到对应运行环境下的第一处理单元120进行处理,得到人脸深度信息。
在获取到各个图像数据的安全等级以及安全等级对应的运行环境后,电子设备10可将获取的图像数据划分到对应运行环境下的第一处理单元120进行处理。可选地,上述散斑图像和深度视差图可划分到TEE环境下的第一处理单元120进行处理,RGB图像可划分到REE环境下的第一处理单元120进行处理,红外图像和红外视差图可划分到TEE环境下的第一处理单元120进行处理或REE环境下的第一处理单元120进行处理。其中,第一处理单元120可根据红外图像或红外视差图进行人脸识别,检测获取的红外图像或红外视差图中是否包含人脸,若红外图像或红外视差图中包含人脸,则电子设备10可将红外图像或红外视差图中包含的人脸与电子设备10已存储的人脸进行匹配,检测红外图像或红外视差图中包含的人脸是否是已存储的人脸。第一处理单元120可根据散斑图像或深度视差图获取人脸的深度信息,人脸的深度信息是指人脸的三维立体信息。第一处理单元120也可根据RGB图像进行人脸识别,检测RGB图像中是否存在人脸以及RGB图像中人脸与已存储的人脸是否匹配。
通常情况下,当第一处理单元120为CPU时,CPU中有且仅有一个CPU内核可运行在TEE环境中,当图像数据全由TEE环境中CPU处理时,CPU处理效率较为低下。
本申请实施例的图像处理方法,第一处理单元120在获取到图像数据后,可对图像数据划分安全等级,根据图像数据的安全等级确定图像数据对应的运行环境,将图像数据划分到对应运行环境下的第一处理单元120进行处理,通过对图像数据的不同划分提高了对图像数据处理的效率。
可选地,图像数据包括摄像头模组110采集的人脸图像和/或第二处理单元130对人脸图像处理得到的中间图像。
电子设备10中的摄像头模组110可采集红外图像、散斑图像和RGB图像。其中,摄像头模组110可直接将采集到的红外图像和散斑图像发送给第一处理单元120,或摄像头模组110可直接将采集到的红外图像、散斑图像和RGB图像发送给第一处理单元120;摄像头模组110也可将红外图像和散斑图像发送给第二处理单元130,将RGB图像发送给第一处理单元120,第二处理单元130再将对红外图像和散斑图像进行处理得到的中间图像发送给第一处理单元120。
可选地,图像数据包括摄像头模组110采集的红外图像和散斑图像。其中,摄像头模组110采集红外图像的第一时刻与采集散斑图像的第二时刻之间的时间间隔小于第一阈值。
第二处理单元130可控制摄像头模组110中泛光灯114开启并通过激光摄像头112采集红外图像,第二处理单元130还可控制摄像头模组110中的镭射灯118开启并通过激光摄像头112采集散斑图像。为保证红外图像和散斑图像的画面内容的一致,摄像头模组110采集红外图像的第一时刻与采集散斑图像的第二时刻之间的时间间隔应小于第一阈值。例如,第一时刻与第二时刻之间的时间间隔小于5毫秒。其中,在摄像头模组110中可设置泛光灯控制器和镭射灯控制器,第二处理单元130通过控制泛光灯控制器或镭射灯控制器发射脉冲波的时间间隔可控制激光摄像头112采集红外图像的第一时刻与采集散斑图像的第二时刻之间的时间间隔。
本申请实施例的图像处理方法,激光摄像头112采集到的红外图像和散斑图像之间的时间间隔低于第一阈值,可保证采集到的红外图像和散斑图像的一致,避免红外图像和散斑图像之间存在较大的误差,提高了对数据处理的准确性。
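上述时间间隔校验可以示意如下。以下为假设性的Python示意,frames_consistent为虚构名称,5毫秒取自文中的示例阈值:

```python
THRESHOLD_MS = 5  # 第一阈值,示例取文中的5毫秒

def frames_consistent(t_ir_ms: float, t_speckle_ms: float,
                      threshold_ms: float = THRESHOLD_MS) -> bool:
    """判断采集红外图像的第一时刻与采集散斑图像的第二时刻之间的
    时间间隔是否小于第一阈值,以保证两幅图像画面内容一致。"""
    return abs(t_ir_ms - t_speckle_ms) < threshold_ms
```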
可选地,图像数据包括摄像头模组110采集的红外图像和RGB图像。其中,红外图像和RGB图像是摄像头模组110同时采集的图像。
当图像采集指令中还包括采集RGB图像时,第二处理单元130控制激光摄像头112采集红外图像,第一处理单元120控制RGB摄像头116采集RGB图像。为确保采集图像的一致,可在激光摄像头112和RGB摄像头116之间添加时序同步线,使得摄像头模组110可同时采集红外图像和RGB图像。
本申请实施例的图像处理方法,通过控制摄像头模组110同时采集红外图像和RGB图像,使得采集的红外图像和RGB图像一致,提高了图像处理的准确性。
可选地,将图像数据划分到对应运行环境下的第一处理单元120进行处理包括:提取图像数据中特征集;将特征集划分到图像数据对应运行环境下的第一处理单元120进行处理。
第一处理单元120获取到图像数据后,可提取图像数据中特征集,再将图像数据中特征集划分到图像数据对应运行环境下的第一处理单元120进行处理。可选地,第一处理单元120可识别接收到的图像数据中各个图像中人脸区域,将人脸区域提取出再划分到各个图像数据对应运行环境下的第一处理单元120进行处理。进一步地,第一处理单元120还可提取各个图像数据中人脸特征点的信息,再将各个图像数据中人脸特征点的信息划分到图像数据对应运行环境下的第一处理单元120进行处理。其中,第一处理单元120在将特征集划分到图像数据对应运行环境下的第一处理单元120时,先查找提取了特征集的图像数据,再获取上述图像数据对应的运行环境,再将从上述图像数据中提取的特征集划分到图像数据对应的运行环境中第一处理单元120进行处理。
本申请实施例的图像处理方法,第一处理单元120在接收到图像数据后,可提取出图像数据中特征集,将图像数据的特征集划分到第一处理单元120进行处理,减少了第一处理单元120的处理量,提高了处理效率。
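以人脸区域裁剪为例,特征集提取可以示意如下。这是一段假设性的Python示意,extract_feature_set、face_box等名称均为虚构,图像以二维列表表示仅为便于说明:

```python
def extract_feature_set(image, face_box):
    """从图像数据中提取特征集:此处以裁剪人脸区域为例。
    image为二维列表(行优先的像素矩阵),
    face_box为(top, bottom, left, right)表示的人脸区域。
    只将裁剪出的人脸区域送入对应运行环境处理,可减少处理量。"""
    top, bottom, left, right = face_box
    return [row[left:right] for row in image[top:bottom]]
```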
可选地,在得到人脸深度信息之前,图像处理方法还包括:
根据图像数据进行人脸识别和活体检测;和
确定对图像数据人脸识别通过且检测到的人脸具有生物活性。
第一处理单元120在接收到图像数据时,可根据图像数据进行人脸识别和活体检测。其中,第一处理单元120可检测红外图像或红外视差图中是否存在人脸。当红外图像或红外视差图中存在人脸时,第一处理单元120可将红外图像或红外视差图中存在的人脸与已存储人脸进行匹配,检测红外图像或红外视差图中存在的人脸与已存储人脸是否匹配成功。若匹配成功,则第一处理单元120可根据散斑图像或深度视差图获取人脸深度图像,根据人脸深度图像进行活体检测。其中,根据人脸深度图像进行活体检测包括:在人脸深度图像中查找人脸区域,检测人脸区域是否有深度信息,且深度信息是否符合人脸立体规则。若人脸深度图像中人脸区域有深度信息,且深度信息符合人脸立体规则,则人脸具有生物活性。人脸立体规则是带有人脸三维深度信息的规则。可选地,第一处理单元120还可采用人工智能模型对图像数据进行人工智能识别,获取人脸表面的纹理,检测纹理的方向、纹理的密度、纹理的宽度等是否符合人脸规则,若符合人脸规则,则判定人脸具有生物活性。
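"人脸区域有深度信息且符合人脸立体规则"这一判断,可以用一个极简的示意来说明。以下为假设性的Python示意:is_live_face为虚构名称,用"深度起伏是否超过阈值"近似人脸立体规则(平面照片各点深度几乎相同),其中5.0毫米的阈值是为说明而虚构的示例值:

```python
def is_live_face(face_depth):
    """假设性的活体检测示意。face_depth为人脸区域的深度值列表(毫米)。
    先检查人脸区域是否带有有效深度信息,再检查深度起伏是否符合
    "真实人脸应有立体起伏、平面照片近似等深"这一简化规则。"""
    valid = [d for d in face_depth if d > 0]
    if not valid:  # 人脸区域没有深度信息
        return False
    # 虚构的示例阈值:深度起伏大于5毫米视为具有立体性
    return max(valid) - min(valid) > 5.0
```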
可选地,请参阅图5,图像处理方法还包括:
014:获取接收人脸深度信息的应用程序的类型;
015:根据类型确定应用程序对应的数据通道;和
016:将人脸深度信息通过对应的数据传输通道发送给应用程序。
第一处理单元120可将获取到的人脸深度信息发送给应用程序,供应用程序进行人脸解锁、人脸支付等操作。可选地,第一处理单元120可通过安全通道或普通通道将深度图像传输给应用程序,安全通道和普通通道的安全级别不同。其中,安全通道的安全级别较高,普通通道的安全级别较低。当数据在安全通道中传输时,可对数据进行加密,避免数据泄露或被窃取。电子设备10可根据应用程序的类型设置对应的数据通道。可选地,安全性要求高的应用程序可对应安全通道,安全性要求低的应用程序可对应普通通道。例如,支付类应用程序对应安全通道,图像类应用程序对应普通通道。第一处理单元120中可预设各个应用程序的类型以及各个类型对应的数据通道,在获取应用程序的类型对应的数据通道后,可将人脸深度信息通过对应的数据通道发送给应用程序,使得应用程序根据上述深度图像进行下一步操作。
本申请实施例的图像处理方法,根据应用程序的类型选取对应的数据通道来传输数据,既可以保证对安全性要求高的应用程序进行数据传输的安全性,也提高了对安全性要求低的应用程序进行传输数据的速度。
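按应用程序类型选取数据通道的逻辑可以示意如下。以下为假设性的Python示意,SECURE_APP_TYPES、data_channel以及具体的类型名称均为说明而虚构:

```python
# 虚构的示例:安全性要求高的应用程序类型集合
SECURE_APP_TYPES = {"payment", "banking", "identity"}

def data_channel(app_type: str) -> str:
    """根据应用程序的类型确定对应的数据通道:
    安全性要求高的应用(如支付类)走安全通道(传输时可对数据加密),
    安全性要求低的应用(如图像类)走普通通道以提高传输速度。"""
    return "secure" if app_type in SECURE_APP_TYPES else "normal"
```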
请参阅图3和图6,在另一个实施例中,目标信息为调用人脸识别的应用程序的属性信息,步骤001接收与人脸相关联的目标信息包括步骤021。步骤002根据目标信息的安全等级确定与目标信息对应的运行环境,并在运行环境下执行与人脸相关的处理包括步骤022、步骤023和步骤024。
021:获取调用人脸识别的应用程序的属性信息。
目前,人脸识别已经广泛应用到各个场景中,如身份认证场景、移动支付场景、模拟动态表情场景、电视会议场景等等。上述场景均是通过调用摄像头等硬件设备来对人脸进行识别,从而实现相应的功能。其中,有的场景对操作的安全性要求较高,例如身份认证场景、移动支付场景等;有的场景对操作的安全性要求相对不高,如模拟动态表情场景、电视会议场景等。为此,本申请实施例的图像处理方法既能够保证安全性要求较高的场景下的信息安全,又能够实现安全性要求相对不高的场景下的快速响应。
在本申请的一个实施例中,首先可获取调用人脸识别的应用程序的属性信息。其中,应用程序的属性信息可包括应用程序的安全级别。例如:银行类应用程序、支付类应用程序、身份验证类应用程序的安全级别为高;普通的摄像类应用程序等的安全级别为低。
022:根据属性信息判断人脸识别过程是否需要在可信执行环境TEE下执行。
在本申请的一个实施例中,如果应用程序的安全级别为高,则人脸识别过程需要在TEE下执行;如果应用程序的安全级别为低,则人脸识别过程则无需在TEE下执行,只需在REE下执行即可。例如用户在使用微信、支付宝等应用程序进行支付时,需要开启摄像头对人脸进行识别,与预存的人脸进行比对,如果比对结果一致,则验证成功,进行支付操作。由于涉及到个人付款的信息等,可能会造成用户的个人财产损失,因此,需要在TEE下执行,来保证安全性。而模拟动态表情场景下,开启摄像头对人脸进行识别,只是对人脸的表情进行跟踪捕捉,然后模拟出各种有趣的动态表情,增强趣味性,因此只需在REE下执行即可。
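"按应用程序安全级别决定是否需要在TEE下执行"的判断可以示意如下。以下为假设性的Python示意,APP_SECURITY、needs_tee及各应用名称均为说明而虚构;未知应用默认按高安全级别处理,这只是一种保守的示例策略:

```python
# 虚构的示例:应用程序属性信息中预设的安全级别
APP_SECURITY = {
    "bank_app": "high",    # 银行类:安全级别高
    "pay_app": "high",     # 支付类:安全级别高
    "camera_app": "low",   # 普通摄像类:安全级别低
    "emoji_app": "low",    # 模拟动态表情类:安全级别低
}

def needs_tee(app_name: str) -> bool:
    """安全级别为高的应用,人脸识别过程需要在TEE下执行;
    安全级别为低的应用只需在REE下执行。未知应用默认按高处理。"""
    return APP_SECURITY.get(app_name, "high") == "high"
```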
023:如果人脸识别过程需要在可信执行环境TEE下执行,则控制应用程序进入可信执行环境TEE模式,并在可信执行环境TEE下串行执行人脸识别过程的步骤。
其中,人脸识别过程可包括检测人脸步骤、获取人脸元素步骤、人脸活体检测步骤以及人脸识别步骤。由于TEE的安全特性,检测人脸步骤、获取人脸元素步骤、人脸活体检测步骤以及人脸识别步骤在TEE下是依次串行执行的。而在REE下,对安全性的要求不高,检测人脸步骤、获取人脸元素步骤、人脸活体检测步骤以及人脸识别步骤是可以并行执行的。其中,检测人脸步骤指的是通过调用泛光灯114发射红外光,由激光摄像头112接收红外图像,利用红外图像判断是否为人脸的过程。获取人脸元素步骤是利用可见光摄像头116对人脸进行拍摄,获取诸如人脸亮度、人脸拍摄角度等与图像质量相关的特征的过程。人脸活体检测步骤则是将红外光和结构光结合,通过泛光灯114发射红外光,镭射灯118发射结构光,通过激光摄像头112接收红外图像和结构光图像,然后对上述图像进行分析,获取如人脸皮肤纹理特征、建立人脸三维模型、多帧人脸动态特征等,来进行活体检测。人脸识别步骤则是利用可见光摄像头116对人脸进行拍摄,获取诸如眼睛大小、鼻子在人脸的位置信息、眉毛长短等特征,来进行人脸识别。
在本实施例中,检测人脸步骤、获取人脸元素步骤、人脸活体检测步骤以及人脸识别步骤在串行执行时,并不限定其执行顺序。也可以根据应用程序的实际应用情况,对上述步骤进行任意组合,实现相应的功能。如人脸支付场景,只需人脸活体检测步骤结合人脸识别步骤,而无需获取人脸元素步骤。应当理解的是,上述情景仅为示例,并不作为对技术方案的限定。
024:如果人脸识别过程无需在可信执行环境TEE下执行,则控制应用程序进入普通执行环境REE模式,并在普通执行环境REE下并行执行人脸识别过程的步骤。
在本申请的一个实施例中,在REE下,人脸识别过程对安全性的要求不高,因此,可以并行执行人脸识别过程的步骤,以提高人脸识别过程的执行速度。
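"TEE下串行执行、REE下并行执行"的调度方式可以示意如下。以下为假设性的Python示意,run_face_pipeline为虚构名称,用线程池近似REE下的并行执行,仅用于说明两种执行方式的差别:

```python
from concurrent.futures import ThreadPoolExecutor

def run_face_pipeline(steps, in_tee: bool):
    """执行人脸识别过程的各步骤。steps为无参可调用对象列表
    (如检测人脸、获取人脸元素、人脸活体检测、人脸识别),
    TEE下依次串行执行以保证安全性;REE下并行执行以提高速度。
    返回各步骤结果组成的列表(保持steps的顺序)。"""
    if in_tee:
        return [step() for step in steps]           # TEE:串行执行
    with ThreadPoolExecutor() as pool:              # REE:并行执行
        return list(pool.map(lambda step: step(), steps))
```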
下面详细解释人脸识别过程中的检测人脸步骤、获取人脸元素步骤、人脸活体检测步骤以及人脸识别步骤。
如图3和图7所示,检测人脸步骤,包括:
0251:向被摄物发射红外光。
具体地,可通过泛光灯114发射红外光。
0252:捕获经过被摄物的红外图像。
由激光摄像头112捕获经过被摄物的红外图像。
0253:将红外图像发送至微控制单元MCU130,并利用微控制单元MCU130对红外图像进行处理,以获取红外图像数据。
由于黑客行为,大多数都是基于应用程序侧的入侵,因此将红外图像发送至微控制单元MCU130,由硬件的微控制单元MCU130对红外图像进行处理,防止原始的数据在应用程序侧被截取,能够有效地提高数据的安全性。
0254:将红外图像数据通过预设接口提供给应用程序,以使应用程序调用红外图像数据。
本实施例中,可利用应用程序调用红外图像数据来实现人脸的检测。
其中,预设接口为符合预设标准的总线接口,包括MIPI(移动产业处理器接口,Mobile Industry Processor Interface)总线接口、I2C同步串行总线接口、SPI总线接口。
如图3和图8所示,获取人脸元素步骤,包括:
0255:拍摄被摄物的可见光图像。
具体地,可通过可见光摄像头116拍摄被摄物的可见光图像。
0256:将可见光图像发送至微控制单元MCU130,并利用微控制单元MCU130对可见光图像进行处理,以获取可见光图像数据。
由于黑客行为,大多数都是基于应用程序侧的入侵,因此将可见光图像发送至微控制单元MCU130,由硬件的微控制单元MCU130对可见光图像进行处理,防止原始的数据在应用程序侧被截取,能够有效地提高数据的安全性。
0257:将可见光图像数据通过预设接口提供给应用程序,以使应用程序调用可见光图像数据。
本实施例中,可利用应用程序调用可见光图像数据来提取人脸元素,如获取人脸亮度、人脸拍摄角度等与图像质量相关的特征,以此作为下一步操作的数据基础。
如图3和图9所示,人脸活体检测步骤,包括:
0258:向被摄物发射红外光和结构光。
在本申请的一个实施例中,可利用微控制单元MCU130中的PWM调制生成结构光,通过镭射灯118发射结构光,并通过泛光灯114(红外光发射器)向被摄物发射红外光。
0259:捕获经过被摄物的红外图像和结构光图像。
在本申请的一个实施例中,可通过激光摄像头112(红外光接收器)捕获经过被摄物的红外图像和结构光图像。
0260:将红外图像和结构光图像发送至微控制单元MCU130,并利用微控制单元MCU130对红外图像和结构光图像进行处理,以获取红外图像数据和景深数据。
具体地,可利用微控制单元MCU130中的深度引擎对红外图像和结构光图像进行处理,以获取红外图像数据和景深数据。
0261:将红外图像数据和景深数据通过预设接口提供给应用程序,以使应用程序调用红外图像数据和景深数据进行安全验证。
在本申请的一个实施例中,可利用微控制单元MCU130调制生成结构光,通过镭射灯118(结构光投影设备)向被摄物发射结构光。假设当前场景为身份验证场景,则被摄物可以是用户的人脸。结构光在照射到被摄物后,由于被摄物的形状特性,结构光会发生形变,通过采集上述结构光信息,可以得到一个具有被摄物轮廓和深度的结构光图像。
其中,结构光的类型可包括,激光条纹、格雷码、正弦条纹、或者,非均匀散斑等。
下面以一种应用广泛的条纹投影技术为例来阐述其具体原理,其中,条纹投影技术属于广义上的面结构光。
在使用面结构光投影的时候,如图10所示,通过微控制单元MCU130产生正弦条纹,将该正弦条纹通过镭射灯118(结构光投影设备)投影至被摄物,利用激光摄像头112拍摄条纹受物体调制的弯曲程度,解调该弯曲条纹得到相位,再将相位转化为全场的高度。
应当理解的是,在实际应用中,根据具体应用场景的不同,本申请实施例中所采用的结构光除了上述条纹之外,还可以是其他任意图案。
在此之后,激光摄像头112可将结构光图像发送至微控制单元MCU130,并利用微控制单元MCU130计算获得被摄物对应的景深数据。为了进一步提高安全性,可将结构光图像发送至微控制单元MCU130,在硬件的微控制单元MCU130中对结构光图像进行处理,相比于直接发送给应用程序处理,数据已经在硬件中运算,黑客无法获取原始的数据,因此更加安全。具体地,可利用微控制单元MCU130解调结构光图像中变形位置像素对应的相位信息,然后将相位信息转化为高度信息,最后根据高度信息确定被摄物对应的景深数据。最后,处理器120将从微控制单元MCU130处获取的景深数据通过预设接口提供至应用程序,以使应用程序调用景深数据。例如:基于人脸的结构光图像,计算得到人脸的轮廓、高度等数据信息。上述数据信息已经在微控制单元MCU130中计算,应用程序只需调用上述数据信息,与预存的数据进行特征比对,即可实现身份验证。如果验证通过,用户就可以获取该应用程序的权限,对应用程序进行更进一步地操作。例如,利用结构光建立人脸三维模型,获取人脸的三维特征,结合红外图像数据或多帧人脸动态特征等,进行活体检测,检测当前拍摄的人脸是活体,而非平面的照片。
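"相位信息→高度信息→景深数据"这一转化链可以用一个高度简化的示意来说明。以下为假设性的Python示意:phase_to_depth为虚构名称,wavelength_mm、baseline_scale为虚构的标定参数,且这里假设相位与高度、高度与景深均为线性关系,实际系统的标定模型通常更复杂:

```python
import math

def phase_to_depth(phase, wavelength_mm=2.0, baseline_scale=0.5):
    """条纹投影的简化示意:将解调得到的各像素相位(弧度)先转化为
    高度信息,再由高度信息确定对应的景深数据。
    参数均为虚构的示例标定值,仅用于说明转化链条。"""
    # 相位→高度:一个条纹周期(2π)对应wavelength_mm毫米的高度
    height = [p / (2 * math.pi) * wavelength_mm for p in phase]
    # 高度→景深:按虚构的基线比例系数换算
    depth = [h / baseline_scale for h in height]
    return depth
```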
如图3和图11所示,人脸识别步骤,包括:
0262:拍摄被摄物的可见光图像。
0263:将可见光图像发送至微控制单元MCU130,并利用微控制单元MCU130对可见光图像进行处理,以获取人脸特征数据。
由于黑客行为,大多数都是基于应用程序侧的入侵,因此将可见光图像发送至微控制单元MCU130,由硬件的微控制单元MCU130对可见光图像进行处理,防止原始的数据在应用程序侧被截取,能够有效地提高数据的安全性。
其中,人脸特征数据可包括眼睛大小、鼻子在人脸的位置信息、眉毛长短等。
0264:将人脸特征数据通过预设接口提供给应用程序,以使应用程序调用人脸特征数据。
本实施例中,可利用应用程序调用可见光图像数据来进行人脸识别。
本申请实施例的图像处理方法,通过获取调用人脸识别的应用程序的属性信息,并根据属性信息判断人脸识别过程是否需要在可信执行环境TEE下执行,如果人脸识别过程需要在可信执行环境TEE下执行,则控制应用程序进入可信执行环境TEE模式,并在可信执行环境TEE下串行执行人脸识别过程的步骤,如果人脸识别过程无需在可信执行环境TEE下执行,则控制应用程序进入普通执行环境REE模式,并在普通执行环境REE下并行执行人脸识别过程的步骤,能够保证安全性要求较高的场景下的信息安全,又能够实现安全性要求相对不高的场景下的快速响应。
应该理解的是,虽然上述流程图中的各个步骤按照箭头的指示依次显示,但是这些步骤并不是必然按照箭头指示的顺序依次执行。除非本文中有明确的说明,这些步骤的执行并没有严格的顺序限制,这些步骤可以以其它的顺序执行。而且,上述流程图中的至少一部分步骤可以包括多个子步骤或者多个阶段,这些子步骤或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,这些子步骤或者阶段的执行顺序也不必然是依次进行,而是可以与其它步骤或者其它步骤的子步骤或者阶段的至少一部分轮流或者交替地执行。
请参阅图12,本申请还提供一种图像处理装置20,可应用于图2或图3所示的电子设备10。图像处理装置20包括接收总模块201和处理总模块202。接收总模块201用于接收与人脸相关联的目标信息。处理总模块202用于根据目标信息的安全等级确定与目标信息对应的运行环境,并在运行环境下执行与人脸相关的处理。
请参阅图13,在一个实施例中,目标信息为用于获取人脸深度信息的图像数据。接收总模块201包括接收模块211。处理总模块202包括确定模块212和第一处理模块213。接收模块211用于若接收到用于获取人脸深度信息的图像数据,对图像数据划分安全等级。确定模块212用于根据安全等级确定图像数据对应的运行环境;运行环境是第一处理单元的运行环境。第一处理模块213用于将图像数据划分到对应运行环境下的第一处理单元进行处理,得到人脸深度信息。
可选地,图像数据包括摄像头模组110采集的人脸图像和/或第二处理单元对人脸图像处理得到的中间图像。
可选地,图像数据包括摄像头模组110采集的红外图像和散斑图像。其中,采集红外图像的第一时刻与采集散斑图像的第二时刻之间的时间间隔小于第一阈值。
可选地,图像数据包括摄像头模组110采集的红外图像和RGB图像。其中,红外图像和RGB图像是摄像头模组同时采集的图像。
可选地,第一处理模块213还用于提取图像数据中特征集;将特征集划分到图像数据对应运行环境下的第一处理单元120进行处理。
可选地,确定模块212还用于在得到人脸深度信息之前,根据图像数据进行人脸识别和活体检测,以及确定对图像数据人脸识别通过且检测到的人脸具有生物活性。
可选地,请参阅图14,处理总模块202还包括第一获取模块214和发送模块215。第一获取模块214用于获取接收人脸深度信息的应用程序的类型。确定模块212还用于根据类型确定应用程序对应的数据通道。发送模块215用于将人脸深度信息通过对应的数据传输通道发送给应用程序。
本申请实施例的图像处理装置20,第一处理单元120在获取到图像数据后,可对图像数据划分安全等级,根据图像数据的安全等级确定图像数据对应的运行环境,将图像数据划分到对应运行环境下的第一处理单元120进行处理,通过对图像数据的不同划分提高了对图像数据处理的效率。
请参阅图15,在另一个实施例中,目标信息为调用人脸识别的应用程序的属性信息。接收总模块201包括第二获取模块221。处理总模块202包括判断模块222和第二处理模块223。第二获取模块221用于获取调用人脸识别的应用程序的属性信息。判断模块222用于根据属性信息判断人脸识别过程是否需要在可信执行环境TEE下执行。
第二处理模块223用于如果人脸识别过程需要在可信执行环境TEE下执行,则控制应用程序进入可信执行环境TEE模式,并在可信执行环境TEE下串行执行人脸识别过程的步骤,如果人脸识别过程无需在可信执行环境TEE下执行,则控制应用程序进入普通执行环境REE模式,并在普通执行环境REE下并行执行人脸识别过程的步骤。
其中,应用程序的属性信息包括所述应用程序的安全级别。人脸识别过程包括检测人脸步骤、获取人脸元素步骤、人脸活体检测步骤以及人脸识别步骤。
可选地,第二处理模块223还用于:向被摄物发射红外光;捕获经过被摄物的红外图像;将红外图像发送至微控制单元MCU130,并利用微控制单元MCU130对红外图像进行处理,以获取红外图像数据;将红外图像数据通过预设接口提供给应用程序,以使应用程序调用红外图像数据。
可选地,第二处理模块223还用于:拍摄被摄物的可见光图像;将可见光图像发送至微控制单元MCU130,并利用微控制单元MCU130对可见光图像进行处理,以获取可见光图像数据;将可见光图像数据通过预设接口提供给所述应用程序,以使应用程序调用所述可见光图像数据。
可选地,第二处理模块223还用于:向被摄物发射红外光和结构光;捕获经过被摄物的红外图像和结构光图像;将红外图像和结构光图像发送至微控制单元MCU130,并利用微控制单元MCU130对红外图像和结构光图像进行处理,以获取红外图像数据和景深数据;将红外图像数据和景深数据通过预设接口提供给应用程序,以使应用程序调用红外图像数据和景深数据。
第二处理模块223用于将红外图像和结构光图像发送至微控制单元MCU130,并利用微控制单元MCU130对红外图像和结构光图像进行处理,以获取红外图像数据和景深数据时,具体用于:解调结构光图像中变形位置像素对应的相位信息;将相位信息转化为高度信息;根据高度信息确定被摄物对应的景深数据。
可选地,第二处理模块223还用于:拍摄被摄物的可见光图像;将可见光图像发送至微控制单元MCU130,并利用微控制单元MCU130对可见光图像进行处理,以获取人脸特征数据;将人脸特征数据通过预设接口提供给应用程序,以使应用程序调用人脸特征数据。
可选地,预设接口为符合预设标准的总线接口,包括MIPI总线接口、I2C同步串行总线接口、SPI总线接口。
本申请实施例的图像处理装置20,通过获取调用人脸识别的应用程序的属性信息,并根据属性信息判断人脸识别过程是否需要在可信执行环境TEE下执行,如果人脸识别过程需要在可信执行环境TEE下执行,则控制应用程序进入可信执行环境TEE模式,并在可信执行环境TEE下串行执行人脸识别过程的步骤,如果人脸识别过程无需在可信执行环境TEE下执行,则控制应用程序进入普通执行环境REE模式,并在普通执行环境REE下并行执行人脸识别过程的步骤,能够保证安全性要求较高的场景下的信息安全,又能够实现安全性要求相对不高的场景下的快速响应。
上述图像处理装置20中各个模块的划分仅用于举例说明,在其他实施例中,可将图像处理装置20按照需要划分为不同的模块,以完成图像处理装置20的全部或部分功能。
本申请实施例中提供的图像处理装置20中的各个模块的实现可为计算机程序的形式。该计算机程序可在终端或服务器上运行。该计算机程序构成的程序模块可存储在终端或服务器的存储器上。该计算机程序被处理器执行时,实现本申请各个实施例中所描述的图像处理方法的步骤。
请参阅图2和图3,本申请还提供了一种电子设备10。电子设备10包括:第一处理单元120、第二处理单元130和摄像头模组110。第一处理单元120可用于接收与人脸相关联的目标信息,以及根据所述目标信息的安全等级确定与所述目标信息对应的运行环境,并在所述运行环境下执行与人脸相关的处理。
请参阅图2,在一个实施例中,第二处理单元130分别连接上述第一处理单元120和摄像头模组110。目标信息包括用于获取人脸深度信息的图像数据。第一处理单元120用于:若接收到用于获取人脸深度信息的图像数据,对图像数据划分安全等级;根据安全等级确定图像数据对应的运行环境;运行环境是第一处理单元120的运行环境;将图像数据划分到对应运行环境下的第一处理单元120进行处理,得到人脸深度信息。
可选地,图像数据包括摄像头模组110采集的人脸图像和/或第二处理单元130对人脸图像处理得到的中间图像。
可选地,图像数据包括摄像头模组110采集的红外图像和散斑图像。其中,采集红外图像的第一时刻与采集散斑图像的第二时刻之间的时间间隔小于第一阈值。
可选地,图像数据包括摄像头模组110采集的红外图像和RGB图像。其中,红外图像和RGB图像是摄像头模组110同时采集的图像。
可选地,第一处理单元120将图像数据划分到对应运行环境下的第一处理单元120进行处理包括:提取图像数据中特征集;将特征集划分到图像数据对应运行环境下的第一处理单元120进行处理。
可选地,在得到人脸深度信息之前,第一处理单元120还用于根据图像数据进行人脸识别和活体检测,以及确定对图像数据人脸识别通过且检测到的人脸具有生物活性。
可选地,第一处理单元120还用于获取接收人脸深度信息的应用程序的类型,根据类型确定应用程序对应的数据通道,以及将人脸深度信息通过对应的数据传输通道发送给应用程序。
本申请实施例的电子设备10,第一处理单元120在获取到图像数据后,可对图像数据划分安全等级,根据图像数据的安全等级确定图像数据对应的运行环境,将图像数据划分到对应运行环境下的第一处理单元120进行处理,通过对图像数据的不同划分提高了对图像数据处理的效率。
请参阅图3,在另一个实施例中,摄像头模组110包括激光摄像头112、泛光灯114、可见光摄像头116、镭射灯118。第二处理单元130为微控制单元MCU130,所述第一处理单元120为处理器120。目标信息包括调用人脸识别的应用程序的属性信息。激光摄像头112、泛光灯114、可见光摄像头116、镭射灯118分别与微控制单元MCU130相连。微控制单元MCU130与处理器120相连。其中,处理器120具体用于:获取调用人脸识别的应用程序的属性信息;根据属性信息判断人脸识别过程是否需要在可信执行环境TEE下执行;如果人脸识别过程需要在可信执行环境TEE下执行,则控制应用程序进入可信执行环境TEE模式,并在可信执行环境TEE下串行执行人脸识别过程的步骤;如果人脸识别过程无需在可信执行环境TEE下执行,则控制应用程序进入普通执行环境REE模式,并在普通执行环境REE下并行执行人脸识别过程的步骤。
可选地,应用程序的属性信息包括所述应用程序的安全级别。可选地,人脸识别过程包括检测人脸步骤、获取人脸元素步骤、人脸活体检测步骤以及人脸识别步骤。
可选地,在检测人脸步骤中,泛光灯114向被摄物发射红外光。激光摄像头112捕获经过被摄物的红外图像,并将红外图像发送至微控制单元MCU130。微控制单元MCU130对红外图像进行处理,以获取红外图像数据。处理器120将红外图像数据通过预设接口提供给应用程序,以使应用程序调用所述红外图像数据。
可选地,在获取人脸元素步骤中,可见光摄像头116拍摄被摄物的可见光图像,并将可见光图像发送至微控制单元MCU130。微控制单元MCU130对可见光图像进行处理,以获取可见光图像数据。处理器120将可见光图像数据通过预设接口提供给所述应用程序,以使应用程序调用可见光图像数据。
可选地,在人脸活体检测步骤中,泛光灯114向被摄物发射红外光,镭射灯118向被摄物发射结构光。激光摄像头112捕获经过被摄物的红外图像和结构光图像,并将红外图像和结构光图像发送至微控制单元MCU130。微控制单元MCU130对红外图像和结构光图像进行处理,以获取红外图像数据和景深数据。处理器120将红外图像数据和景深数据通过预设接口提供给应用程序,以使应用程序调用红外图像数据和景深数据。
可选地,微控制单元MCU130对红外图像和结构光图像进行处理,以获取红外图像数据和景深数据时,微控制单元MCU130具体执行解调结构光图像中变形位置像素对应的相位信息,将相位信息转化为高度信息,以及根据高度信息确定被摄物对应的景深数据的步骤。
可选地,在人脸识别步骤中,可见光摄像头116拍摄被摄物的可见光图像,并将可见光图像发送至微控制单元MCU130。微控制单元MCU130对可见光图像进行处理,以获取人脸特征数据。处理器120将人脸特征数据通过预设接口提供给应用程序,以使应用程序调用人脸特征数据。
本申请实施例的电子设备10,通过获取调用人脸识别的应用程序的属性信息,并根据属性信息判断人脸识别过程是否需要在可信执行环境TEE下执行,如果人脸识别过程需要在可信执行环境TEE下执行,则控制应用程序进入可信执行环境TEE模式,并在可信执行环境TEE下串行执行人脸识别过程的步骤,如果人脸识别过程无需在可信执行环境TEE下执行,则控制应用程序进入普通执行环境REE模式,并在普通执行环境REE下并行执行人脸识别过程的步骤,能够保证安全性要求较高的场景下的信息安全,又能够实现安全性要求相对不高的场景下的快速响应。
如图16所示,本申请还提供一种电子设备30。电子设备30包括一个或多个处理器31、存储器32和一个或多个程序。其中一个或多个程序被存储在存储器32中,并且被配置成由一个或多个处理器31执行。程序包括用于执行上述任意一个实施例所述的图像处理方法的指令。
例如,程序包括用于执行以下步骤的图像处理方法的指令:
011:若接收到用于获取人脸深度信息的图像数据,对图像数据划分安全等级;
012:根据安全等级确定图像数据对应的运行环境,运行环境是第一处理单元120的运行环境;
013:将图像数据划分到对应运行环境下的第一处理单元120进行处理,得到人脸深度信息。
再例如,程序包括用于执行步骤的图像处理方法的指令:
021:获取调用人脸识别的应用程序的属性信息;
022:根据属性信息判断人脸识别过程是否需要在可信执行环境TEE下执行;
023:如果人脸识别过程需要在可信执行环境TEE下执行,则控制应用程序进入可信执行环境TEE模式,并在可信执行环境TEE下串行执行人脸识别过程的步骤;
024:如果人脸识别过程无需在可信执行环境TEE下执行,则控制应用程序进入普通执行环境REE模式,并在普通执行环境REE下并行执行人脸识别过程的步骤。
本申请还提供了一种计算机可读存储介质。当计算机可执行指令被一个或多个处理器执行时,使得处理器执行本申请任意一个实施例所述的图像处理方法的步骤。
本申请还提供了一种包含指令的计算机程序产品,当其在计算机上运行时,使得计算机执行本申请任意一个实施例所述的图像处理方法的步骤。
本申请所使用的对存储器、存储、数据库或其它介质的任何引用可包括非易失性和/或易失性存储器。合适的非易失性存储器可包括只读存储器(ROM)、可编程ROM(PROM)、电可编程ROM(EPROM)、电可擦除可编程ROM(EEPROM)或闪存。易失性存储器可包括随机存取存储器(RAM),它用作外部高速缓冲存储器。作为说明而非局限,RAM以多种形式可得,诸如静态RAM(SRAM)、动态RAM(DRAM)、同步DRAM(SDRAM)、双数据率SDRAM(DDRSDRAM)、增强型SDRAM(ESDRAM)、同步链路(Synchlink)DRAM(SLDRAM)、存储器总线(Rambus)直接RAM(RDRAM)、直接存储器总线动态RAM(DRDRAM)、以及存储器总线动态RAM(RDRAM)。
以上实施例仅表达了本申请的几种实施方式,其描述较为具体和详细,但并不能因此而理解为对本申请专利范围的限制。应当指出的是,对于本领域的普通技术人员来说,在不脱离本申请构思的前提下,还可以做出若干变形和改进,这些都属于本申请的保护范围。因此,本申请专利的保护范围应以所附权利要求为准。
Claims (52)
- 一种图像处理方法,其特征在于,包括:接收与人脸相关联的目标信息;和根据所述目标信息的安全等级确定与所述目标信息对应的运行环境,并在所述运行环境下执行与人脸相关的处理。
- 根据权利要求1所述的图像处理方法,其特征在于,所述目标信息包括用于获取人脸深度信息的图像数据,所述接收与人脸相关联的目标信息,包括:若接收到用于获取人脸深度信息的所述图像数据,对所述图像数据划分安全等级;所述根据所述目标信息的安全等级确定与所述目标信息对应的运行环境,并在所述运行环境下执行与人脸相关的处理,包括:根据所述安全等级确定所述图像数据对应的运行环境,所述运行环境是第一处理单元的运行环境;和将所述图像数据划分到对应运行环境下的所述第一处理单元进行处理,得到人脸深度信息。
- 根据权利要求2所述的图像处理方法,其特征在于,所述图像数据包括摄像头模组采集的人脸图像和/或第二处理单元对所述人脸图像处理得到的中间图像。
- 根据权利要求2所述的图像处理方法,其特征在于,所述图像数据包括摄像头模组采集的红外图像和散斑图像;其中,采集所述红外图像的第一时刻与采集所述散斑图像的第二时刻之间的时间间隔小于第一阈值。
- 根据权利要求2所述的图像处理方法,其特征在于,所述图像数据包括摄像头模组采集的红外图像和RGB图像;其中,所述红外图像和所述RGB图像是所述摄像头模组同时采集的图像。
- 根据权利要求2所述的图像处理方法,其特征在于,将所述图像数据划分到对应运行环境下的所述第一处理单元进行处理包括:提取所述图像数据中的特征集;和将所述特征集划分到所述图像数据对应运行环境下的所述第一处理单元进行处理。
- 根据权利要求2至6中任一项所述的图像处理方法,其特征在于,在所述得到人脸深度信息之前,所述图像处理方法还包括:根据所述图像数据进行人脸识别和活体检测;和确定对所述图像数据人脸识别通过且检测到的人脸具有生物活性。
- 根据权利要求2至6中任一项所述的图像处理方法,其特征在于,所述图像处理方法还包括:获取接收所述人脸深度信息的应用程序的类型;根据所述类型确定所述应用程序对应的数据通道;将所述人脸深度信息通过所述对应的数据传输通道发送给所述应用程序。
- 根据权利要求1所述的图像处理方法,其特征在于,所述目标信息包括调用人脸识别的应用程序的属性信息,所述接收与人脸相关联的目标信息包括:获取调用人脸识别的应用程序的所述属性信息;所述根据所述目标信息的安全等级确定与所述目标信息对应的运行环境,并在所述运行环境下执行与人脸相关的处理,包括:根据所述属性信息判断人脸识别过程是否需要在可信执行环境TEE下执行;如果所述人脸识别过程需要在所述可信执行环境TEE下执行,则控制所述应用程序进入可信执行环境TEE模式,并在所述可信执行环境TEE下串行执行所述人脸识别过程的步骤;和如果所述人脸识别过程无需在所述可信执行环境TEE下执行,则控制所述应用程序进入普通执行环境REE模式,并在普通执行环境REE下并行执行所述人脸识别过程的步骤。
- 根据权利要求9所述的图像处理方法,其特征在于,所述应用程序的属性信息包括所述应用程序的安全级别。
- 根据权利要求9所述的图像处理方法,其特征在于,所述人脸识别过程包括检测人脸步骤、获取人脸元素步骤、人脸活体检测步骤以及人脸识别步骤。
- 根据权利要求11所述的图像处理方法,其特征在于,所述检测人脸步骤,包括:向被摄物发射红外光;捕获经过所述被摄物的红外图像;将所述红外图像发送至微控制单元MCU,并利用所述微控制单元MCU对所述红外图像进行处理,以获取红外图像数据;和将所述红外图像数据通过预设接口提供给所述应用程序,以使所述应用程序调用所述红外图像数据。
- 根据权利要求11所述的图像处理方法,其特征在于,所述获取人脸元素步骤,包括:拍摄被摄物的可见光图像;将所述可见光图像发送至微控制单元MCU,并利用所述微控制单元MCU对所述可见光图像进行处理,以获取可见光图像数据;和将所述可见光图像数据通过预设接口提供给所述应用程序,以使所述应用程序调用所述可见光图像数据。
- 根据权利要求11所述的图像处理方法,其特征在于,所述人脸活体检测步骤,包括:向被摄物发射红外光和所述结构光;捕获经过所述被摄物的红外图像和所述结构光图像;将所述红外图像和所述结构光图像发送至微控制单元MCU,并利用所述微控制单元MCU对所述红外图像和所述结构光图像进行处理,以获取红外图像数据和所述景深数据;和将所述红外图像数据和所述景深数据通过预设接口提供给所述应用程序,以使所述应用程序调用所述红外图像数据和所述景深数据。
- 根据权利要求11所述的图像处理方法,其特征在于,所述人脸识别步骤,包括:拍摄被摄物的可见光图像;将所述可见光图像发送至微控制单元MCU,并利用所述微控制单元MCU对所述可见光图像进行处理,以获取人脸特征数据;和将所述人脸特征数据通过预设接口提供给所述应用程序,以使所述应用程序调用所述人脸特征数据。
- 根据权利要求14所述的图像处理方法,其特征在于,所述利用所述微控制单元MCU对所述红外图像和所述结构光图像进行处理,以获取红外图像数据和所述景深数据,包括:解调所述结构光图像中变形位置像素对应的相位信息;将所述相位信息转化为高度信息;和根据所述高度信息确定所述被摄物对应的景深数据。
- 根据权利要求12至15任一项所述的图像处理方法,其特征在于,所述预设接口为符合预设标准的总线接口,包括MIPI总线接口、I2C同步串行总线接口、SPI总线接口。
- 一种图像处理装置,其特征在于,包括:接收总模块,用于接收与人脸相关联的目标信息;和处理总模块,用于根据所述目标信息的安全等级确定与所述目标信息对应的运行环境,并在所述运行环境下执行与人脸相关的处理。
- 根据权利要求18所述的图像处理装置,其特征在于,所述目标信息包括用于获取人脸深度信息的图像数据,所述接收总模块包括:接收模块,用于若接收到用于获取人脸深度信息的所述图像数据,对所述图像数据划分安全等级;所述处理总模块包括:确定模块,用于根据所述安全等级确定所述图像数据对应的运行环境,所述运行环境是第一处理单元的运行环境;和第一处理模块,用于将所述图像数据划分到对应运行环境下的所述第一处理单元进行处理,得到人脸深度信息。
- 根据权利要求19所述的图像处理装置,其特征在于,所述图像数据包括摄像头模组采集的人脸图像和/或第二处理单元对所述人脸图像处理得到的中间图像。
- 根据权利要求19所述的图像处理装置,其特征在于,所述图像数据包括摄像头模组采集的红外图像和散斑图像;其中,采集所述红外图像的第一时刻与采集所述散斑图像的第二时刻之间的时间间隔小于第一阈值。
- 根据权利要求19所述的图像处理装置,其特征在于,所述图像数据包括摄像头模组采集的红外图像和RGB图像;其中,所述红外图像和所述RGB图像是所述摄像头模组同时采集的图像。
- 根据权利要求19所述的图像处理装置,其特征在于,所述第一处理模块还用于:提取所述图像数据中的特征集;和将所述特征集划分到所述图像数据对应运行环境下的所述第一处理单元进行处理。
- 根据权利要求19-23任一项所述的图像处理装置,其特征在于,在所述得到人脸深度信息之前,所述确定模块还用于:根据所述图像数据进行人脸识别和活体检测;和确定对所述图像数据人脸识别通过且检测到的人脸具有生物活性。
- 根据权利要求19-23任一项所述的图像处理装置,其特征在于,在所述得到人脸深度信息之前,所述处理总模块还包括:第一获取模块,用于获取接收所述人脸深度信息的应用程序的类型;所述确定模块还用于根据所述类型确定所述应用程序对应的数据通道;发送模块,用于将所述人脸深度信息通过所述对应的数据传输通道发送给所述应用程序。
- 根据权利要求18所述的图像处理装置,其特征在于,所述目标信息包括调用人脸识别的应用程序的属性信息,所述接收总模块包括:第二获取模块,用于获取调用人脸识别的应用程序的所述属性信息;所述处理总模块包括:判断模块,用于根据所述属性信息判断人脸识别过程是否需要在可信执行环境TEE下执行;和第二处理模块,用于如果所述人脸识别过程需要在所述可信执行环境TEE下执行,则控制所述应用程序进入可信执行环境TEE模式,并在所述可信执行环境TEE下串行执行所述人脸识别过程的步骤,如果所述人脸识别过程无需在所述可信执行环境TEE下执行,则控制所述应用程序进入普通执行环境REE模式,并在普通执行环境REE下并行执行所述人脸识别过程的步骤。
- 根据权利要求26所述的图像处理装置,其特征在于,所述应用程序的属性信息包括所述应用程序的安全级别。
- 根据权利要求26所述的图像处理装置,其特征在于,所述人脸识别过程包括检测人脸步骤、获取人脸元素步骤、人脸活体检测步骤以及人脸识别步骤。
- 根据权利要求28所述的图像处理装置,其特征在于,所述第二处理模块,具体用于:向被摄物发射红外光;捕获经过所述被摄物的红外图像;将所述红外图像发送至微控制单元MCU,并利用所述微控制单元MCU对所述红外图像进行处理,以获取红外图像数据;和将所述红外图像数据通过预设接口提供给所述应用程序,以使所述应用程序调用所述红外图像数据。
- 根据权利要求28所述的图像处理装置,其特征在于,所述第二处理模块,具体用于:拍摄被摄物的可见光图像;将所述可见光图像发送至微控制单元MCU,并利用所述微控制单元MCU对所述可见光图像进行处理,以获取可见光图像数据;和将所述可见光图像数据通过预设接口提供给所述应用程序,以使所述应用程序调用所述可见光图像数据。
- 根据权利要求28所述的图像处理装置,其特征在于,所述第二处理模块,具体用于:向被摄物发射红外光和所述结构光;捕获经过所述被摄物的红外图像和所述结构光图像;将所述红外图像和所述结构光图像发送至微控制单元MCU,并利用所述微控制单元MCU对所述红外图像和所述结构光图像进行处理,以获取红外图像数据和所述景深数据;和将所述红外图像数据和所述景深数据通过预设接口提供给所述应用程序,以使所述应用程序调用所述红外图像数据和所述景深数据。
- 根据权利要求28所述的图像处理装置,其特征在于,所述第二处理模块,具体用于:拍摄被摄物的可见光图像;将所述可见光图像发送至微控制单元MCU,并利用所述微控制单元MCU对所述可见光图像进行处理,以获取人脸特征数据;和将所述人脸特征数据通过预设接口提供给所述应用程序,以使所述应用程序调用所述人脸特征数据。
- 根据权利要求31所述的图像处理装置,其特征在于,所述第二处理模块,具体用于:解调所述结构光图像中变形位置像素对应的相位信息;将所述相位信息转化为高度信息;和根据所述高度信息确定所述被摄物对应的景深数据。
- 根据权利要求29至32任一项所述的图像处理装置,其特征在于,所述预设接口为符合预设标准的总线接口,包括MIPI总线接口、I2C同步串行总线接口、SPI总线接口。
- 一种电子设备,其特征在于,包括摄像头模组、第一处理单元及第二处理单元,所述第一处理单元用于:接收与人脸相关联的目标信息;和根据所述目标信息的安全等级确定与所述目标信息对应的运行环境,并在所述运行环境下执行与人脸相关的处理。
- 根据权利要求35所述的电子设备,其特征在于,所述第二处理单元分别连接所述第一处理单元和所述摄像头模组;所述目标信息包括用于获取人脸深度信息的图像数据;所述第一处理单元用于:若接收到用于获取人脸深度信息的所述图像数据,对所述图像数据划分安全等级;根据所述安全等级确定所述图像数据对应的运行环境;所述运行环境是第一处理单元的运行环境;和将所述图像数据划分到对应运行环境下的所述第一处理单元进行处理,得到人脸深度信息。
- 根据权利要求36所述的电子设备,其特征在于,所述图像数据包括摄像头模组采集的人脸图像和/或所述第二处理单元对所述人脸图像处理得到的中间图像。
- 根据权利要求36所述的电子设备,其特征在于,所述图像数据包括所述摄像头模组采集的红外图像和散斑图像;其中,采集所述红外图像的第一时刻与采集所述散斑图像的第二时刻之间的时间间隔小于第一阈值。
- 根据权利要求36所述的电子设备,其特征在于,所述图像数据包括所述摄像头模组采集的红外图像和RGB图像;其中,所述红外图像和所述RGB图像是所述摄像头模组同时采集的图像。
- 根据权利要求36所述的电子设备,其特征在于,所述第一处理单元将所述图像数据划分到对应运行环境下的所述第一处理单元进行处理包括:提取所述图像数据中特征集;将所述特征集划分到所述图像数据对应运行环境下的所述第一处理单元进行处理。
- 根据权利要求36至40中任一项所述的电子设备,其特征在于,所述第一处理单元还用于:在所述得到人脸深度信息之前,根据所述图像数据进行人脸识别和活体检测;和确定对所述图像数据人脸识别通过且检测到的人脸具有生物活性。
- 根据权利要求36至40中任一项所述的电子设备,其特征在于,所述第一处理单元还用于:获取接收所述人脸深度信息的应用程序的类型;根据所述类型确定所述应用程序对应的数据通道;和将所述人脸深度信息通过所述对应的数据传输通道发送给所述应用程序。
- 根据权利要求35所述的电子设备,其特征在于,所述目标信息包括调用人脸识别的应用程序的属性信息,所述摄像头模组包括激光摄像头、泛光灯、可见光摄像头、镭射灯,所述第二处理单元为微控制单元MCU,所述第一处理单元为处理器,其中,所述处理器用于:获取调用人脸识别的应用程序的属性信息;根据所述属性信息判断人脸识别过程是否需要在可信执行环境TEE下执行;如果所述人脸识别过程需要在所述可信执行环境TEE下执行,则控制所述应用程序进入可信执行环境TEE模式,并在所述可信执行环境TEE下串行执行所述人脸识别过程的步骤;和如果所述人脸识别过程无需在所述可信执行环境TEE下执行,则控制所述应用程序进入普通执行环境REE模式,并在普通执行环境REE下并行执行所述人脸识别过程的步骤。
- 根据权利要求43所述的电子设备,其特征在于,所述应用程序的属性信息包括所述应用程序的安全级别。
- 根据权利要求43所述的电子设备,其特征在于,所述人脸识别过程包括检测人脸步骤、获取人脸元素步骤、人脸活体检测步骤以及人脸识别步骤。
- 根据权利要求45所述的电子设备,其特征在于,在所述检测人脸步骤中,所述泛光灯向被摄物发射红外光;所述激光摄像头捕获经过所述被摄物的红外图像,并将所述红外图像发送至所述微控制单元MCU;所述微控制单元MCU对所述红外图像进行处理,以获取红外图像数据;所述处理器将所述红外图像数据通过预设接口提供给所述应用程序,以使所述应用程序调用所述红外图像数据。
- 根据权利要求45所述的电子设备,其特征在于,在所述获取人脸元素步骤中,所述可见光摄像头拍摄被摄物的可见光图像,并将所述可见光图像发送至所述微控制单元MCU;所述微控制单元MCU对所述可见光图像进行处理,以获取可见光图像数据;所述处理器将所述可见光图像数据通过预设接口提供给所述应用程序,以使所述应用程序调用所述可见光图像数据。
- 根据权利要求45所述的电子设备,其特征在于,在所述人脸活体检测步骤中,所述泛光灯向被摄物发射红外光,所述镭射灯向所述被摄物发射结构光;所述激光摄像头捕获经过所述被摄物的红外图像和所述结构光图像,并将所述红外图像和所述结构光图像发送至微控制单元MCU;所述微控制单元MCU对所述红外图像和所述结构光图像进行处理,以获取红外图像数据和所述景深数据;所述处理器将所述红外图像数据和所述景深数据通过预设接口提供给所述应用程序,以使所述应用程序调用所述红外图像数据和所述景深数据。
- 根据权利要求45所述的电子设备,其特征在于,在所述人脸识别步骤中,所述可见光摄像头拍摄被摄物的可见光图像,并将所述可见光图像发送至所述微控制单元MCU;所述微控制单元MCU对所述可见光图像进行处理,以获取人脸特征数据;所述处理器将所述人脸特征数据通过预设接口提供给所述应用程序,以使所述应用程序调用所述人脸特征数据。
- 根据权利要求48所述的电子设备,其特征在于,所述微控制单元MCU还用于:解调所述结构光图像中变形位置像素对应的相位信息;将所述相位信息转化为高度信息;和根据所述高度信息确定所述被摄物对应的景深数据。
- 根据权利要求46至49任一项所述的电子设备,其特征在于,所述预设接口为符合预设标准的总线接口,包括MIPI总线接口、I2C同步串行总线接口、SPI总线接口。
- 一种计算机可读存储介质,其上存储有计算机程序,其特征在于,所述计算机程序被处理器执行时实现如权利要求1至17中任一项所述的图像处理方法的步骤。
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP19784964.9A EP3633546A4 (en) | 2018-04-12 | 2019-04-08 | IMAGE PROCESSING METHOD AND DEVICE, ELECTRONIC DEVICE AND COMPUTER READABLE STORAGE MEDIUM |
US16/742,378 US11170204B2 (en) | 2018-04-12 | 2020-01-14 | Data processing method, electronic device and computer-readable storage medium |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810327407.3 | 2018-04-12 | ||
CN201810327407.3A CN108595928A (zh) | 2018-04-12 | 2018-04-12 | 人脸识别的信息处理方法、装置及终端设备 |
CN201810403022.0A CN108846310B (zh) | 2018-04-28 | 2018-04-28 | 图像处理方法、装置、电子设备和计算机可读存储介质 |
CN201810403022.0 | 2018-04-28 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/742,378 Continuation US11170204B2 (en) | 2018-04-12 | 2020-01-14 | Data processing method, electronic device and computer-readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019196793A1 true WO2019196793A1 (zh) | 2019-10-17 |
Family
ID=68163472
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/081743 WO2019196793A1 (zh) | 2018-04-12 | 2019-04-08 | 图像处理方法及装置、电子设备和计算机可读存储介质 |
Country Status (3)
Country | Link |
---|---|
US (1) | US11170204B2 (zh) |
EP (1) | EP3633546A4 (zh) |
WO (1) | WO2019196793A1 (zh) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109960582B (zh) * | 2018-06-19 | 2020-04-28 | 华为技术有限公司 | 在tee侧实现多核并行的方法、装置及系统 |
CN113780090B (zh) * | 2021-08-12 | 2023-07-28 | 荣耀终端有限公司 | 数据处理方法及装置 |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2005057472A1 (fr) * | 2003-12-12 | 2005-06-23 | Authenmetric Co., Ltd | Procede de reconnaissance des visages et systeme d'acquisition d'images |
CN106548077A (zh) * | 2016-10-19 | 2017-03-29 | 沈阳微可信科技有限公司 | 通信系统和电子设备 |
CN107169483A (zh) * | 2017-07-12 | 2017-09-15 | 深圳奥比中光科技有限公司 | 基于人脸识别的任务执行 |
CN107527036A (zh) * | 2017-08-29 | 2017-12-29 | 努比亚技术有限公司 | 一种环境安全检测方法、终端及计算机可读存储介质 |
CN107832598A (zh) * | 2017-10-17 | 2018-03-23 | 广东欧珀移动通信有限公司 | 解锁控制方法及相关产品 |
CN108595928A (zh) * | 2018-04-12 | 2018-09-28 | Oppo广东移动通信有限公司 | 人脸识别的信息处理方法、装置及终端设备 |
CN108846310A (zh) * | 2018-04-28 | 2018-11-20 | Oppo广东移动通信有限公司 | 图像处理方法、装置、电子设备和计算机可读存储介质 |
Family Cites Families (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10270748B2 (en) * | 2013-03-22 | 2019-04-23 | Nok Nok Labs, Inc. | Advanced authentication techniques and applications |
CN105812332A (zh) | 2014-12-31 | 2016-07-27 | 北京握奇智能科技有限公司 | 数据保护方法 |
CN106200891B (zh) * | 2015-05-08 | 2019-09-06 | 阿里巴巴集团控股有限公司 | 显示用户界面的方法、装置及系统 |
US10248772B2 (en) * | 2015-09-25 | 2019-04-02 | Mcafee, Llc | Secure communication between a virtual smartcard enclave and a trusted I/O enclave |
CN205318544U (zh) | 2015-12-30 | 2016-06-15 | 四川川大智胜软件股份有限公司 | 一种基于三维人脸识别的atm机防欺诈装置及系统 |
CN106845285B (zh) | 2016-12-28 | 2023-04-07 | 北京握奇智能科技有限公司 | 一种tee系统与ree系统配合以实现服务的方法及终端设备 |
DE102017200888B4 (de) * | 2017-01-19 | 2021-07-29 | Thyssenkrupp Ag | Elektrisch verstellbare Lenksäule für ein Kraftfahrzeug |
CN206672174U (zh) | 2017-04-17 | 2017-11-24 | 深圳奥比中光科技有限公司 | 深度计算处理器以及3d图像设备 |
CN206674128U (zh) | 2017-09-08 | 2017-11-24 | 深圳奥比中光科技有限公司 | 结构稳定的3d成像装置 |
CN107729836B (zh) * | 2017-10-11 | 2020-03-24 | Oppo广东移动通信有限公司 | 人脸识别方法及相关产品 |
WO2019206020A1 (zh) * | 2018-04-28 | 2019-10-31 | Oppo广东移动通信有限公司 | 图像处理方法、装置、计算机可读存储介质和电子设备 |
EP3621293B1 (en) * | 2018-04-28 | 2022-02-09 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Image processing method, apparatus and computer-readable storage medium |
CN208110631U (zh) | 2018-05-20 | 2018-11-16 | 蔡畅 | 一种人脸3d图像采集装置 |
WO2019228097A1 (zh) * | 2018-05-29 | 2019-12-05 | Oppo广东移动通信有限公司 | 验证系统、电子装置、验证方法、计算机可读存储介质及计算机设备 |
WO2019228020A1 (zh) * | 2018-05-30 | 2019-12-05 | Oppo广东移动通信有限公司 | 激光投射器的控制系统和移动终端 |
US20200136818A1 (en) * | 2018-10-25 | 2020-04-30 | International Business Machines Corporation | System for generating personalized service content |
US20200394531A1 (en) * | 2019-06-11 | 2020-12-17 | Valorsec Sa | Handling of distributed ledger objects among trusted agents through computational argumentation and inference in the interest of their managers |
Non-Patent Citations (1)
Title |
---|
See also references of EP3633546A4 * |
Also Published As
Publication number | Publication date |
---|---|
US20200151428A1 (en) | 2020-05-14 |
US11170204B2 (en) | 2021-11-09 |
EP3633546A4 (en) | 2020-10-21 |
EP3633546A1 (en) | 2020-04-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11238270B2 (en) | 3D face identity authentication method and apparatus | |
TWI736883B (zh) | 影像處理方法和電子設備 | |
US11200404B2 (en) | Feature point positioning method, storage medium, and computer device | |
CN108804895B (zh) | 图像处理方法、装置、计算机可读存储介质和电子设备 | |
CN108805024B (zh) | 图像处理方法、装置、计算机可读存储介质和电子设备 | |
CN111126146B (zh) | 图像处理方法、装置、计算机可读存储介质和电子设备 | |
US11256903B2 (en) | Image processing method, image processing device, computer readable storage medium and electronic device | |
WO2019080580A1 (zh) | 3d人脸身份认证方法与装置 | |
US20200065562A1 (en) | Method and Device for Processing Image, Computer Readable Storage Medium and Electronic Device | |
CN109213610B (zh) | 数据处理方法、装置、计算机可读存储介质和电子设备 | |
WO2020243969A1 (zh) | 人脸识别的装置、方法和电子设备 | |
CN108711054B (zh) | 图像处理方法、装置、计算机可读存储介质和电子设备 | |
CN110602379A (zh) | 一种拍摄证件照的方法、装置、设备及存储介质 | |
CN111523499B (zh) | 图像处理方法、装置、电子设备和计算机可读存储介质 | |
JP6157165B2 (ja) | 視線検出装置及び撮像装置 | |
TW201944290A (zh) | 人臉識別方法以及移動終端 | |
TWI731503B (zh) | 活體臉部辨識系統與方法 | |
CN108595928A (zh) | 人脸识别的信息处理方法、装置及终端设备 | |
CN108764053A (zh) | 图像处理方法、装置、计算机可读存储介质和电子设备 | |
WO2019196793A1 (zh) | 图像处理方法及装置、电子设备和计算机可读存储介质 | |
CN112668547B (zh) | 图像处理方法、装置、电子设备和计算机可读存储介质 | |
CN108564033A (zh) | 基于结构光的安全验证方法、装置及终端设备 | |
WO2020024619A1 (zh) | 数据处理方法、装置、计算机可读存储介质和电子设备 | |
US10460153B2 (en) | Automatic identity detection | |
CN109145772B (zh) | 数据处理方法、装置、计算机可读存储介质和电子设备 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19784964 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2019784964 Country of ref document: EP Effective date: 20191230 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |