
CN108629745B - Image processing method and device based on structured light and mobile terminal - Google Patents

Image processing method and device based on structured light and mobile terminal

Info

Publication number
CN108629745B
Authority
CN
China
Prior art keywords
imaging
image processing
area
visible light
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201810326349.2A
Other languages
Chinese (zh)
Other versions
CN108629745A (en)
Inventor
黄杰文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810326349.2A
Publication of CN108629745A
Application granted
Publication of CN108629745B
Legal status: Expired - Fee Related (current)
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/73Deblurring; Sharpening
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/20Processor architectures; Processor configuration, e.g. pipelining

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The application provides an image processing method and device based on structured light, and a mobile terminal. The method comprises the following steps: acquiring a visible light image of an imaging object; acquiring depth data indicated by a structured light image of the imaging object; identifying, according to the depth data, a first imaging area of the imaging object in the visible light image and a second imaging area of an article worn by the imaging object; determining, according to the relative positions of the first imaging area and the second imaging area, an operation area for an image processing operation in the visible light image; and performing the image processing operation on the operation area. When the image processing operation is face beautification, the method avoids blurring the article worn by the imaging object, thereby improving the display effect of the worn article. When the image processing operation is background blurring, the worn article is not mistakenly blurred, thereby improving the blurring effect of the visible light image. At the same time, the imaging effect of the captured photo can be improved.

Description

Image processing method and device based on structured light, and mobile terminal

Technical Field

The present application relates to the technical field of mobile terminals, and in particular to an image processing method and device based on structured light, and to a mobile terminal.

Background

With the continuous development of mobile terminal technology, more and more users choose to take photos with mobile terminals. To achieve a better shooting effect, the captured image can also be processed with related image processing techniques.

In actual image processing, however, the image quality sometimes deteriorates after processing. Taking background blurring as an example, when a user wears earrings, hairpins, or similar items, the user may want the worn items to remain sharp rather than be blurred; in practice, however, such accessories are often blurred and lose a large amount of image detail. A similar situation also occurs when the user turns on the camera to take a beautified photo.

Therefore, in the prior art, in some scenarios the image quality deteriorates after image processing, and the processing effect is poor.

Summary of the Invention

The present application aims to solve, at least to some extent, one of the technical problems in the related art.

To this end, the present application proposes an image processing method based on structured light, so that when the image processing operation is face beautification, blurring of the article worn by the imaging object can be avoided, thereby improving the display effect of the worn article; and when the image processing operation is background blurring, the worn article is not mistakenly blurred, thereby improving the blurring effect of the visible light image. Furthermore, by identifying the first imaging area of the imaging object in the visible light image and the second imaging area of the article worn by the imaging object according to the depth data indicated by the structured light image, and then determining the operation area and performing the image processing operation, the imaging effect of the captured photo is improved on the one hand, and the accuracy of the depth data is improved on the other, so that the image processing effect is better.

The present application further proposes an image processing device based on structured light.

The present application further proposes a mobile terminal.

The present application further proposes a computer-readable storage medium.

To achieve the above purpose, an embodiment of the first aspect of the present application proposes an image processing method based on structured light, comprising:

acquiring a visible light image of an imaging object;

acquiring depth data indicated by a structured light image of the imaging object;

identifying, according to the depth data, a first imaging area of the imaging object in the visible light image and a second imaging area of an article worn by the imaging object;

determining, according to the relative positions of the first imaging area and the second imaging area, an operation area for an image processing operation in the visible light image; and

performing the image processing operation on the operation area.

With the image processing method based on structured light of the embodiment of the present application, a visible light image of an imaging object is acquired; depth data indicated by a structured light image of the imaging object is acquired; a first imaging area of the imaging object in the visible light image and a second imaging area of an article worn by the imaging object are identified according to the depth data; an operation area for an image processing operation is determined in the visible light image according to the relative positions of the first imaging area and the second imaging area; and the image processing operation is performed on the operation area. In this way, when the image processing operation is face beautification, blurring of the article worn by the imaging object can be avoided, thereby improving the display effect of the worn article; when the image processing operation is background blurring, the worn article is not mistakenly blurred, thereby improving the blurring effect of the visible light image. At the same time, by identifying the first imaging area of the imaging object in the visible light image and the second imaging area of the worn article according to the depth data indicated by the structured light image, and then determining the operation area and performing the image processing operation, the imaging effect of the captured photo is improved on the one hand, and the accuracy of the depth data is improved on the other, so that the image processing effect is better.

To achieve the above purpose, an embodiment of the second aspect of the present application proposes an image processing device based on structured light, comprising:

an acquisition module, configured to acquire a visible light image of an imaging object and to acquire depth data indicated by a structured light image of the imaging object;

an identification module, configured to identify, according to the depth data, a first imaging area of the imaging object in the visible light image and a second imaging area of an article worn by the imaging object;

a determination module, configured to determine, according to the relative positions of the first imaging area and the second imaging area, an operation area for an image processing operation in the visible light image; and

a processing module, configured to perform the image processing operation on the operation area.

With the image processing device based on structured light of the embodiment of the present application, a visible light image of an imaging object is acquired; depth data indicated by a structured light image of the imaging object is acquired; a first imaging area of the imaging object in the visible light image and a second imaging area of an article worn by the imaging object are identified according to the depth data; an operation area for an image processing operation is determined in the visible light image according to the relative positions of the first imaging area and the second imaging area; and the image processing operation is performed on the operation area. In this way, when the image processing operation is face beautification, blurring of the article worn by the imaging object can be avoided, thereby improving the display effect of the worn article; when the image processing operation is background blurring, the worn article is not mistakenly blurred, thereby improving the blurring effect of the visible light image. At the same time, by identifying the first imaging area of the imaging object in the visible light image and the second imaging area of the worn article according to the depth data indicated by the structured light image, and then determining the operation area and performing the image processing operation, the imaging effect of the captured photo is improved on the one hand, and the accuracy of the depth data is improved on the other, so that the image processing effect is better.

To achieve the above purpose, an embodiment of the third aspect of the present application proposes a mobile terminal, comprising an imaging sensor, a memory, a processor, and a computer program stored in the memory and executable on the processor. When executing the program, the processor implements, based on the visible light image or the structured light image acquired from the imaging sensor, the image processing method based on structured light according to the embodiment of the first aspect of the present application.

To achieve the above purpose, an embodiment of the fourth aspect of the present application proposes a computer-readable storage medium on which a computer program is stored. When the program is executed by a processor, the image processing method based on structured light according to the embodiment of the first aspect of the present application is implemented.

Additional aspects and advantages of the present application will be set forth in part in the following description, and in part will become apparent from the following description or be learned through practice of the present application.

Description of the Drawings

The above and/or additional aspects and advantages of the present application will become apparent and readily understood from the following description of embodiments taken in conjunction with the accompanying drawings, in which:

FIG. 1 is a schematic flowchart of an image processing method based on structured light provided by Embodiment 1 of the present application;

FIG. 2 is a schematic structural diagram of an electronic device provided by Embodiment 2 of the present application;

FIG. 3 is a schematic flowchart of an image processing method based on structured light provided by Embodiment 3 of the present application;

FIG. 4 is a schematic structural diagram of an image processing device based on structured light provided by an embodiment of the present application;

FIG. 5 is a schematic structural diagram of another image processing device based on structured light provided by an embodiment of the present application;

FIG. 6 is a schematic structural diagram of a mobile terminal provided by an embodiment of the present application;

FIG. 7 is a schematic structural diagram of another mobile terminal provided by an embodiment of the present application.

Detailed Description

Embodiments of the present application are described in detail below, and examples of the embodiments are illustrated in the accompanying drawings, in which the same or similar reference numerals denote the same or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the accompanying drawings are exemplary; they are intended to explain the present application and should not be construed as limiting it.

The image processing method and device based on structured light and the mobile terminal according to the embodiments of the present application are described below with reference to the accompanying drawings.

FIG. 1 is a schematic flowchart of an image processing method based on structured light provided by Embodiment 1 of the present application.

As shown in FIG. 1, the image processing method based on structured light includes the following steps.

Step 101: acquire a visible light image of the imaging object.

In the embodiment of the present application, the electronic device may include a visible light image sensor, and the visible light image may be obtained by imaging the visible light reflected by the imaging object with this sensor. Specifically, the visible light image sensor may include a visible light camera, which captures the visible light reflected by the imaging object to obtain the visible light image.

Step 102: acquire depth data indicated by the structured light image of the imaging object.

In the embodiment of the present application, the electronic device may further include a structured light image sensor, and the structured light image of the imaging object may be acquired based on this sensor. Specifically, the structured light image sensor may include a laser light and a laser camera. A pulse width modulation (PWM) signal modulates the laser light so that it projects structured light onto the imaging object, and the laser camera captures the structured light reflected by the imaging object to obtain a structured light image. A depth engine then calculates the depth data of the imaging object from the structured light image: it demodulates the phase information of the pixels at deformed positions in the structured light image, converts the phase information into height information, and determines the depth data of the subject from the height information.
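
As a rough illustration of the phase-to-depth step described above (not part of the patent text), the sketch below converts a demodulated phase map into per-pixel depth using generic structured-light triangulation; the function name, parameters, and constants are assumptions for illustration only, and the patent's own phase-to-height conversion may differ.

```python
import numpy as np

def depth_from_structured_light(phase_map, baseline_mm, focal_px, period_px):
    """Sketch: turn a demodulated phase map into per-pixel depth (millimetres).

    phase_map   : 2D array of phase shifts measured by the laser camera
    baseline_mm : projector-to-camera baseline, assumed known from calibration
    focal_px    : camera focal length in pixels, assumed known from calibration
    period_px   : fringe period in pixels on the reference plane (assumed)
    """
    # The phase shift relative to the reference pattern encodes the disparity.
    disparity_px = phase_map * period_px / (2.0 * np.pi)
    # Avoid division by zero where no pattern deformation was detected.
    disparity_px = np.where(np.abs(disparity_px) < 1e-6, np.nan, disparity_px)
    # Classic triangulation: depth is inversely proportional to disparity.
    return baseline_mm * focal_px / disparity_px
```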

Step 103: identify, according to the depth data, the first imaging area of the imaging object in the visible light image and the second imaging area of the article worn by the imaging object.

In the embodiment of the present application, in order to prevent articles worn by the imaging object, such as earrings, ear pendants, or hairpins, from being mistakenly blurred during background blurring of the visible light image, which would degrade the blurring effect, or to prevent worn articles such as a necklace, forehead ornament, or nose ring from being blurred during face beautification, the first imaging area of the imaging object and the second imaging area of the article worn by the imaging object can be identified in the visible light image according to the depth data.

As a possible implementation, after the depth data indicated by the structured light image of the imaging object is acquired, whether each object in the structured light image belongs to the foreground or the background can be determined from its depth data. In general, when the depth data indicates that an object is close to the plane of the camera, that is, its depth value is small, the object can be determined to be foreground; otherwise, it is background. The foreground part and the background part of the visible light image can then be determined from these objects. After determining the foreground and background parts, a human body detection algorithm can be applied to the foreground part to determine whether the imaging object is a human body; if so, the part enclosed by the human body contour is taken as the first imaging area. Specifically, the edge pixels of the image, together with pixels whose pixel value differences are smaller than a preset threshold (that is, pixels with similar pixel values), can be extracted from the visible light image to obtain the human body contour, and the part enclosed by that contour is taken as the first imaging area.
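
A minimal sketch of this foreground and body segmentation step follows; it is illustrative only, the depth threshold is an assumed value, and a real implementation would run the human body detection algorithm mentioned above rather than simply taking the largest foreground contour.

```python
import numpy as np
import cv2  # OpenCV is used here only to illustrate contour extraction

def first_imaging_area(depth_map, depth_threshold_mm=1500.0):
    """Sketch: split foreground/background by depth, then take the largest
    foreground contour as the first imaging area (the human body)."""
    # Pixels closer to the camera than the threshold are treated as foreground.
    foreground_mask = (depth_map < depth_threshold_mm).astype(np.uint8) * 255
    # Extract the outer contours of the foreground region.
    contours, _ = cv2.findContours(foreground_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    # Assumption: the largest contour corresponds to the body outline.
    body_contour = max(contours, key=cv2.contourArea)
    first_area_mask = np.zeros_like(foreground_mask)
    cv2.drawContours(first_area_mask, [body_contour], -1, 255, thickness=-1)
    return first_area_mask
```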

It should be noted that the article worn by the imaging object may lie inside the first imaging area, such as a necklace, ear studs, a nose ring, or a forehead ornament, or may lie within a preset distance of the first imaging area, such as earrings, ear pendants, or hairpins. Therefore, in the embodiment of the present application, the second imaging area can be identified only inside the first imaging area and in the foreground part whose distance from the first imaging area is smaller than the preset distance, which reduces the amount of computation and improves processing efficiency. As a possible implementation, depth data of various kinds of accessories can be collected as sample data and used to train a recognition model for identifying accessories; the trained recognition model can then be used to identify the article worn by the imaging object. After the worn article is identified, the part enclosed by the contour of the worn article is taken as the second imaging area.

It can be understood that the second imaging area differs in color from the skin, and that the depth corresponding to the first pixel in the first imaging area that is closest to the second imaging area differs from the depth corresponding to the second pixel in the second imaging area that is closest to the first imaging area by no more than a preset depth range. If there are multiple such first pixels, the depth corresponding to the first pixel can be taken as the mean of the depths of those pixels; similarly, if there are multiple such second pixels, the depth corresponding to the second pixel can be taken as the mean of their depths.
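
The boundary-depth check described above can be sketched as follows; the maximum allowed gap is an assumed value, not one given in the patent.

```python
import numpy as np

def depths_are_adjacent(first_boundary_depths, second_boundary_depths,
                        preset_depth_range_mm=50.0):
    """Sketch: compare the mean depth of the body pixels nearest the accessory
    with the mean depth of the accessory pixels nearest the body; the two
    belong together when the difference stays within the preset range."""
    first_depth = float(np.mean(first_boundary_depths))
    second_depth = float(np.mean(second_boundary_depths))
    return abs(first_depth - second_depth) <= preset_depth_range_mm
```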

Step 104: determine, according to the relative positions of the first imaging area and the second imaging area, the operation area of the image processing operation in the visible light image.

In the embodiment of the present application, the user can perform image processing operations on the visible light image as needed. The image processing operation may be, for example, background blurring or face beautification (acne removal, face slimming, face brightening, skin smoothing, and so on). It can be understood that for different image processing operations, the operation areas may be the same or different.

For example, when the image processing operation is face beautification and the relative position is such that the first imaging area contains the second imaging area or the first imaging area overlaps the second imaging area, for instance when the worn article is a necklace, nose ring, or forehead ornament, the operation area may be the part of the first imaging area other than the second imaging area, so that the worn article is not blurred when the imaging object is beautified.

Alternatively, when the image processing operation is face beautification and the relative position is such that the first imaging area is adjacent to the second imaging area, for instance when the worn article is a hairpin or ear pendant, beautifying the imaging object does not affect the display of the worn article; therefore, the operation area may be the first imaging area in the visible light image.

For another example, when the image processing operation is background blurring and the relative position is such that the first imaging area contains the second imaging area, for instance when the worn article is a necklace, nose ring, or forehead ornament, blurring the part outside the first imaging area does not affect the display of the worn article; therefore, the operation area may be the part of the visible light image other than the first imaging area.

Alternatively, when the image processing operation is background blurring and the relative position is such that the first imaging area is adjacent to or overlaps the second imaging area, for instance when the worn article is a hairpin or ear pendant, blurring the part outside the first imaging area would mistakenly blur the worn article and seriously degrade its display. Therefore, in the embodiment of the present application, the operation area may be the part of the visible light image other than the first imaging area and the second imaging area.
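
The four cases above can be summarized in a short piece of selection logic; the sketch below is illustrative, the masks are assumed to be boolean arrays over the visible light image, and the enum names are placeholders rather than terms from the patent.

```python
from enum import Enum, auto

class Operation(Enum):
    BEAUTIFY = auto()
    BACKGROUND_BLUR = auto()

class RelativePosition(Enum):
    CONTAINS = auto()   # first imaging area contains the second imaging area
    OVERLAPS = auto()   # first and second imaging areas overlap
    ADJACENT = auto()   # second imaging area lies next to the first imaging area

def operation_area(image_mask, first_mask, second_mask, op, rel):
    """Sketch: pick the operation area from the relative position and operation."""
    if op is Operation.BEAUTIFY:
        if rel in (RelativePosition.CONTAINS, RelativePosition.OVERLAPS):
            return first_mask & ~second_mask                 # body minus accessory
        return first_mask                                    # adjacent: whole body
    if op is Operation.BACKGROUND_BLUR:
        if rel is RelativePosition.CONTAINS:
            return image_mask & ~first_mask                  # everything outside the body
        return image_mask & ~(first_mask | second_mask)      # also spare the accessory
    raise ValueError("unsupported image processing operation")
```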

Step 105: perform the image processing operation on the operation area.

In the embodiment of the present application, after the operation area is determined, the image processing operation can be performed on the operation area.

For example, when the user wears a nose ring, if the user wants to beautify the visible light image, the beautification can be applied to the body region except the nose ring; if the user wants to blur the background of the visible light image, the blurring can be applied to the region outside the user's body.

The image processing method based on structured light in the embodiments of the present application can be configured in an image processing device based on structured light, and the device can be applied to an electronic device.

As a possible implementation, the structure of the electronic device may be as shown in FIG. 2, which is a schematic structural diagram of the electronic device provided by Embodiment 2 of the present application.

As shown in FIG. 2, the electronic device includes a laser camera, a floodlight, a visible light camera, a laser light, and a microcontroller unit (MCU). The MCU includes a PWM module, a depth engine, a bus interface, and a random access memory (RAM). The electronic device further includes a processor that has a trusted execution environment; the MCU is dedicated hardware of the trusted execution environment, and trusted applications run in that environment. The processor may also have a normal execution environment, and the normal execution environment and the trusted execution environment are isolated from each other.

It should be noted that, as those skilled in the art will appreciate, the method corresponding to FIG. 1 is not only applicable to the electronic device shown in FIG. 2; FIG. 2 is merely illustrative, and the method corresponding to FIG. 1 can also be applied to other electronic devices that have a trusted execution environment and dedicated hardware for that environment, which is not limited in this embodiment.

The PWM module modulates the floodlight to emit infrared light and modulates the laser light to emit structured light; the laser camera collects the structured light image or the visible light image of the imaging object; the depth engine calculates the depth data of the imaging object from the structured light image; and the bus interface sends the depth data to the processor, where a trusted application running on the processor uses the depth data to perform the corresponding operations. The bus interface includes a Mobile Industry Processor Interface (MIPI), an I2C synchronous serial bus interface, and a Serial Peripheral Interface (SPI).

In the embodiments of the present application, the trusted execution environment is a secure area on the main processor of an electronic device (such as a smartphone or tablet computer). Compared with the normal execution environment, it guarantees the security, confidentiality, and integrity of the code and data loaded into it. The trusted execution environment provides an isolated execution environment whose security features include isolated execution, integrity of trusted applications, confidentiality of trusted data, and secure storage. In short, the execution space provided by the trusted execution environment offers a higher level of security than common mobile operating systems such as iOS or Android.

With the image processing method based on structured light of this embodiment, a visible light image of the imaging object is acquired; depth data indicated by the structured light image of the imaging object is acquired; the first imaging area of the imaging object in the visible light image and the second imaging area of the article worn by the imaging object are identified according to the depth data; the operation area of the image processing operation is determined in the visible light image according to the relative positions of the first imaging area and the second imaging area; and the image processing operation is performed on the operation area. In this way, when the image processing operation is face beautification, blurring of the article worn by the imaging object can be avoided, thereby improving the display effect of the worn article; when the image processing operation is background blurring, the worn article is not mistakenly blurred, thereby improving the blurring effect of the visible light image. At the same time, by identifying the first imaging area of the imaging object in the visible light image and the second imaging area of the worn article according to the depth data indicated by the structured light image, and then determining the operation area and performing the image processing operation, the imaging effect of the captured photo is improved on the one hand, and the accuracy of the depth data is improved on the other, so that the image processing effect is better.

In the embodiment of the present application, the second imaging area may be adjacent to a target sub-area of the first imaging area, or the second imaging area may lie inside the target sub-area, where the target sub-area is used to image the neck, ear, nose, lips, or forehead.

It should be noted that, in the embodiment of the present application, the target sub-area may be used not only to image the neck, ear, nose, lips, or forehead, but also to image parts such as a finger, wrist, navel, or ankle. For example, when the user wears a ring on a finger and a watch on the wrist and makes a "yeah" gesture beside the face while taking a selfie, the user likewise does not want the ring and the watch to be blurred when the image processing operation is face beautification; therefore, during beautification, the operation area is the part of the first imaging area other than the second imaging area.

As a possible implementation, after step 105, the expressiveness of the second imaging area can also be enhanced. Specifically, sharpening can be applied, and the contrast and/or saturation can be adjusted, within the second imaging area, thereby enhancing the display effect of the worn article.
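
A hedged sketch of this enhancement step is given below; the blur sigma, sharpening amount, and saturation gain are assumed values for illustration and are not taken from the patent.

```python
import numpy as np
import cv2

def enhance_worn_article(image_bgr, second_area_mask,
                         sharpen_amount=1.0, saturation_gain=1.2):
    """Sketch: sharpen and boost saturation only inside the second imaging area."""
    # Unsharp masking: original + amount * (original - blurred).
    blurred = cv2.GaussianBlur(image_bgr, (0, 0), sigmaX=2.0)
    sharpened = cv2.addWeighted(image_bgr, 1.0 + sharpen_amount,
                                blurred, -sharpen_amount, 0)
    # Boost saturation in HSV space.
    hsv = cv2.cvtColor(sharpened, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 1] = np.clip(hsv[..., 1] * saturation_gain, 0, 255)
    enhanced = cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
    # Keep the rest of the image unchanged.
    mask = (second_area_mask > 0)[..., None]
    return np.where(mask, enhanced, image_bgr)
```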

As a possible implementation, referring to FIG. 3, on the basis of the embodiment shown in FIG. 1, step 104 may specifically include the following sub-steps.

Step 201: determine, among multiple image processing operations, the image processing operation to be performed.

In the embodiment of the present application, the user can determine the image processing operation to be performed, such as background blurring or face beautification, from multiple image processing operations according to the user's needs.

As a possible implementation, the electronic device may provide controls for different image processing operations, and the user determines the image processing operation to be performed by triggering the corresponding control.

Step 202: determine the corresponding operation area in the visible light image according to the image processing operation to be performed and the relative position.

As a possible implementation, the mapping relationship among the image processing operation, the relative position, and the operation area can be established in advance, and a mapping table can be configured according to this relationship; the mapping table indicates the mapping among the image processing operation, the relative position, and the operation area. After the image processing operation to be performed is determined, the mapping table can be queried with that operation and the relative position to determine the operation area, which is simple to operate and easy to implement.
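
Such a mapping table can be as simple as a keyed lookup; the sketch below mirrors the four cases from Embodiment 1, and the key strings and region labels are placeholders rather than values defined in the patent.

```python
# Illustrative preconfigured mapping table: (operation, relative position) -> operation area.
OPERATION_AREA_TABLE = {
    ("beautify", "contains"): "first_area_minus_second_area",
    ("beautify", "overlaps"): "first_area_minus_second_area",
    ("beautify", "adjacent"): "first_area",
    ("blur",     "contains"): "outside_first_area",
    ("blur",     "overlaps"): "outside_first_and_second_areas",
    ("blur",     "adjacent"): "outside_first_and_second_areas",
}

def lookup_operation_area(operation, relative_position):
    """Sketch: query the preconfigured mapping table for the operation area."""
    return OPERATION_AREA_TABLE[(operation, relative_position)]
```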

The image processing method based on structured light of this embodiment determines the image processing operation to be performed among multiple image processing operations, and determines the corresponding operation area in the visible light image according to that operation and the relative position; the approach is simple to operate and easy to implement.

To implement the above embodiments, the present application further proposes an image processing device based on structured light.

FIG. 4 is a schematic structural diagram of an image processing device based on structured light provided by an embodiment of the present application.

As shown in FIG. 4, the image processing device 100 based on structured light includes an acquisition module 110, an identification module 120, a determination module 130, and a processing module 140.

The acquisition module 110 is configured to acquire a visible light image of an imaging object and to acquire depth data indicated by a structured light image of the imaging object.

The identification module 120 is configured to identify, according to the depth data, the first imaging area of the imaging object in the visible light image and the second imaging area of the article worn by the imaging object.

As a possible implementation, the identification module 120 is specifically configured to: identify the foreground part and the background part in the visible light image according to the depth data, where the depth of the foreground part is smaller than the depth of the background part; detect whether the imaging object in the foreground part is a human body; if so, take the part of the foreground enclosed by the human body contour as the first imaging area; and identify the second imaging area inside the first imaging area and in the foreground part whose distance from the first imaging area is smaller than the preset distance. The second imaging area differs in color from the skin, and the depth corresponding to the first pixel in the first imaging area that is closest to the second imaging area differs from the depth corresponding to the second pixel in the second imaging area that is closest to the first imaging area by no more than a preset depth range.

In the embodiment of the present application, the second imaging area is adjacent to a target sub-area of the first imaging area or lies inside the target sub-area; the target sub-area is used to image the neck, ear, nose, lips, or forehead.

The determination module 130 is configured to determine, according to the relative positions of the first imaging area and the second imaging area, the operation area of the image processing operation in the visible light image.

As a possible implementation, the determination module 130 is specifically configured to determine, among multiple image processing operations, the image processing operation to be performed, and to determine the corresponding operation area in the visible light image according to that operation and the relative position.

Optionally, the determination module 130 is further configured to obtain a preconfigured mapping table that indicates the mapping among the image processing operation, the relative position, and the operation area, and to determine the operation area by querying the mapping table with the image processing operation to be performed and the relative position.

As another possible implementation, the determination module 130 is specifically configured as follows: if the relative position is such that the first imaging area contains the second imaging area or the first imaging area overlaps the second imaging area, and the image processing operation is face beautification, the operation area includes the part of the first imaging area other than the second imaging area; if the relative position is such that the first imaging area contains the second imaging area and the image processing operation is background blurring, the operation area includes the part of the visible light image other than the first imaging area; if the relative position is such that the first imaging area is adjacent to the second imaging area and the image processing operation is face beautification, the operation area includes the first imaging area; and if the relative position is such that the first imaging area is adjacent to or overlaps the second imaging area and the image processing operation is background blurring, the operation area includes the part of the visible light image other than the first imaging area and the second imaging area.

The processing module 140 is configured to perform the image processing operation on the operation area.

Further, in a possible implementation of the embodiment of the present application, referring to FIG. 5, on the basis of the embodiment shown in FIG. 4, the image processing device 100 based on structured light may further include an enhancement module 150.

The enhancement module 150 is configured to enhance the expressiveness of the second imaging area.

As a possible implementation, the enhancement module 150 is specifically configured to perform sharpening, and to adjust the contrast and/or saturation, within the second imaging area.

It should be noted that the foregoing explanation of the embodiments of the image processing method based on structured light also applies to the image processing device 100 based on structured light of this embodiment, and is not repeated here.

With the image processing device based on structured light of this embodiment, a visible light image of the imaging object is acquired; depth data indicated by the structured light image of the imaging object is acquired; the first imaging area of the imaging object in the visible light image and the second imaging area of the article worn by the imaging object are identified according to the depth data; the operation area of the image processing operation is determined in the visible light image according to the relative positions of the first imaging area and the second imaging area; and the image processing operation is performed on the operation area. In this way, when the image processing operation is face beautification, blurring of the article worn by the imaging object can be avoided, thereby improving the display effect of the worn article; when the image processing operation is background blurring, the worn article is not mistakenly blurred, thereby improving the blurring effect of the visible light image. At the same time, by identifying the first imaging area of the imaging object in the visible light image and the second imaging area of the worn article according to the depth data indicated by the structured light image, and then determining the operation area and performing the image processing operation, the imaging effect of the captured photo is improved on the one hand, and the accuracy of the depth data is improved on the other, so that the image processing effect is better.

To implement the above embodiments, the present application further proposes a mobile terminal.

FIG. 6 is a schematic structural diagram of a mobile terminal provided by an embodiment of the present application.

In this embodiment, the mobile terminal includes, but is not limited to, devices such as a mobile phone and a tablet computer.

As shown in FIG. 6, the mobile terminal includes an imaging sensor 210, a memory 220, a processor 230, and a computer program (not shown in FIG. 6) stored in the memory 220 and executable on the processor 230. When executing the program, the processor 230 implements, based on the visible light image or the structured light image acquired from the imaging sensor 210, the image processing method based on structured light proposed in the foregoing embodiments of the present application.

In a possible implementation of the embodiment of the present application, referring to FIG. 7, on the basis of the embodiment shown in FIG. 6, the mobile terminal may further include a microcontroller unit (MCU) 240.

The processor 230 has a trusted execution environment, and the program runs in the trusted execution environment.

The MCU 240 is dedicated hardware of the trusted execution environment. It is connected to the imaging sensor 210 and the processor 230, and is configured to control the imaging sensor 210 to perform imaging, to send the visible light image obtained by imaging to the processor 230, and to send the depth data indicated by the structured light image obtained by imaging to the processor 230.

In a possible implementation of this embodiment, the imaging sensor 210 may include an infrared sensor, a structured light image sensor, and a visible light image sensor.

The infrared sensor includes a laser camera and a floodlight; the structured light image sensor includes a laser light and the laser camera shared with the infrared sensor; and the visible light image sensor includes a visible light camera.

In a possible implementation of this embodiment, the MCU 240 includes a PWM module, a depth engine, a bus interface, and a random access memory (RAM).

The PWM module is used to modulate the floodlight so that it emits infrared light, and to modulate the laser light so that it emits structured light;

the laser camera is used to collect the structured light image of the imaging object;

the depth engine is used to calculate the depth data of the imaging object from the structured light image; and

the bus interface is used to send the depth data to the processor 230, where a trusted application running on the processor 230 uses the depth data to perform the corresponding operations.

For example, the first imaging area of the imaging object in the visible light image and the second imaging area of the article worn by the imaging object can be identified according to the depth data, and the operation area of the image processing operation can be determined in the visible light image according to the relative positions of the first imaging area and the second imaging area, so that the image processing operation is performed on the operation area. For the specific process, reference may be made to the above embodiments, which is not repeated here.

To implement the above embodiments, the present application further proposes a computer-readable storage medium on which a computer program is stored. When the program is executed by a processor, the image processing method based on structured light proposed in the foregoing embodiments of the present application is implemented.

In the description of this specification, references to the terms "one embodiment", "some embodiments", "example", "specific example", or "some examples" mean that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the specific features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, those skilled in the art may combine the different embodiments or examples described in this specification, and the features of the different embodiments or examples, provided they do not contradict each other.

In addition, the terms "first" and "second" are used for descriptive purposes only and should not be construed as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "a plurality of" means at least two, for example two or three, unless otherwise expressly and specifically defined.

Any process or method description in the flowcharts or otherwise described herein may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing the steps of a custom logic function or process, and the scope of the preferred embodiments of the present application includes additional implementations in which functions may be executed out of the order shown or discussed, including substantially concurrently or in the reverse order, depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present application belong.

The logic and/or steps represented in the flowcharts or otherwise described herein, for example, may be considered an ordered listing of executable instructions for implementing logical functions, and may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch instructions from the instruction execution system, apparatus, or device and execute them). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate, or transport the program for use by, or in connection with, the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of computer-readable media include: an electrical connection with one or more wires (an electronic device), a portable computer diskette (a magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.

It should be understood that the various parts of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they may be implemented by any one of the following techniques known in the art, or a combination thereof: a discrete logic circuit having logic gates for implementing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gates, a programmable gate array (PGA), a field-programmable gate array (FPGA), and so on.

Those of ordinary skill in the art will understand that all or part of the steps carried by the methods of the above embodiments may be completed by instructing the relevant hardware through a program, and the program may be stored in a computer-readable storage medium. When executed, the program performs one of the steps of the method embodiments or a combination thereof.

In addition, the functional units in the embodiments of the present application may be integrated into one processing module, each unit may exist physically on its own, or two or more units may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.

The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like. Although the embodiments of the present application have been shown and described above, it should be understood that the above embodiments are exemplary and should not be construed as limiting the present application; those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present application.

Claims (12)

1. A method for structured light based image processing, the method comprising the steps of:
acquiring a visible light image of an imaging object;
acquiring depth data indicated by a structured light image of the imaging object;
identifying, according to the depth data, a first imaging area of the imaging object in the visible light image and a second imaging area of an article worn by the imaging object;
determining an operation area for an image processing operation in the visible light image according to the relative position of the first imaging area and the second imaging area, wherein the operation area depends on both the relative position and the image processing operation; when the image processing operation is background blurring: if the relative position is that the first imaging area contains the second imaging area, the operation area comprises the portion of the visible light image outside the first imaging area; if the relative position is that the first imaging area is adjacent to or overlaps the second imaging area, the operation area comprises the portion of the visible light image outside both the first imaging area and the second imaging area;
and executing the image processing operation on the operation area.
2. The image processing method according to claim 1, wherein there are a plurality of image processing operations, and determining the operation area for an image processing operation in the visible light image according to the relative position of the first imaging area and the second imaging area comprises:
determining, from among the plurality of image processing operations, an image processing operation to be executed;
and determining a corresponding operation area in the visible light image according to the image processing operation to be executed and the relative position.
3. The image processing method according to claim 2, wherein determining a corresponding operation area in the visible light image according to the image processing operation to be executed and the relative position comprises:
acquiring a pre-configured mapping table, wherein the mapping table indicates the mapping relationship among the image processing operation, the relative position, and the operation area;
and querying the mapping table according to the image processing operation to be executed and the relative position to determine the operation area.
4. The image processing method according to claim 1, wherein determining the operation area for an image processing operation in the visible light image according to the relative position of the first imaging area and the second imaging area comprises:
when the image processing operation is a beautification operation: if the relative position is that the first imaging area contains the second imaging area, or the first imaging area overlaps the second imaging area, the operation area comprises the portion of the first imaging area other than the second imaging area;
if the relative position is that the first imaging area is adjacent to the second imaging area, the operation area comprises the first imaging area.
5. The image processing method according to any one of claims 1 to 4, wherein identifying, according to the depth data, the first imaging area of the imaging object in the visible light image and the second imaging area of the article worn by the imaging object comprises:
identifying, according to the depth data, a foreground portion and a background portion in the visible light image, wherein the depth of the foreground portion is smaller than that of the background portion;
identifying whether the imaging object in the foreground portion is a human body;
if the imaging object in the foreground portion is a human body, taking the part of the foreground portion enclosed by the human body contour as the first imaging area;
identifying the second imaging area within the first imaging area and within the portion of the foreground whose distance from the first imaging area is smaller than a preset distance, wherein the second imaging area has a color difference from the skin color, and the difference between the depth corresponding to a first pixel point, being the pixel point in the first imaging area closest to the second imaging area, and the depth corresponding to a second pixel point, being the pixel point in the second imaging area closest to the first imaging area, is within a preset depth range.
6. The image processing method according to claim 5, wherein
the second imaging area is adjacent to or inside a target sub-area of the first imaging area, and the target sub-area is used to image the neck, the ears, the nose, the lips, or the forehead.
7. The image processing method according to any one of claims 1 to 4, further comprising, after performing the image processing operation on the operation area:
enhancing the visual expressiveness of the second imaging area.
8. The image processing method according to claim 7, wherein enhancing the visual expressiveness of the second imaging area comprises:
performing sharpening, and/or adjusting the contrast and/or saturation, within the second imaging area.
9. A structured light based image processing apparatus, comprising:
an acquisition module, configured to acquire a visible light image of an imaging object and to acquire depth data indicated by a structured light image of the imaging object;
an identification module, configured to identify, according to the depth data, a first imaging area of the imaging object in the visible light image and a second imaging area of an article worn by the imaging object;
a determining module, configured to determine an operation area for an image processing operation in the visible light image according to the relative position of the first imaging area and the second imaging area, wherein the operation area depends on both the relative position and the image processing operation; when the image processing operation is background blurring: if the relative position is that the first imaging area contains the second imaging area, the operation area comprises the portion of the visible light image outside the first imaging area; if the relative position is that the first imaging area is adjacent to or overlaps the second imaging area, the operation area comprises the portion of the visible light image outside both the first imaging area and the second imaging area;
and a processing module, configured to execute the image processing operation on the operation area.
10. A mobile terminal, comprising an imaging sensor, a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the structured light based image processing method according to any one of claims 1 to 8 based on a visible light image or a structured light image acquired from the imaging sensor.
11. The mobile terminal according to claim 10, further comprising a micro-processing chip (MCU), wherein the processor has a trusted execution environment and the program runs in the trusted execution environment;
and the MCU is dedicated hardware of the trusted execution environment, is connected to the imaging sensor and the processor, and is configured to control the imaging sensor to perform imaging, to send the visible light image obtained by imaging to the processor, and to send the depth data indicated by the structured light image obtained by imaging to the processor.
12. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the structured light based image processing method according to any one of claims 1 to 8.
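
To make the claimed selection of the operation area concrete, the following is a minimal sketch of how the decision logic of claims 1 and 4 could be realized once the first and second imaging areas have been obtained as boolean masks. It is an illustration only, not the patent's implementation: the names (relative_position, operation_area, first_area, second_area) are hypothetical, the masks are assumed to have already been segmented from the depth data along the lines of claim 5, and NumPy is used purely for convenience.

```python
# Sketch under the assumptions stated above: two aligned boolean masks over the
# visible light image, first_area covering the human body and second_area the worn article.
import numpy as np


def relative_position(first_area: np.ndarray, second_area: np.ndarray) -> str:
    """Classify how the second imaging area (worn article) relates to the first (human body)."""
    overlap = first_area & second_area
    if second_area.any() and overlap.sum() == second_area.sum():
        return "contained"
    if overlap.any():
        return "overlapping"
    # 4-neighbourhood dilation of the first area, used only to test adjacency
    padded = np.pad(first_area, 1)
    dilated = (first_area | padded[:-2, 1:-1] | padded[2:, 1:-1]
               | padded[1:-1, :-2] | padded[1:-1, 2:])
    return "adjacent" if (dilated & second_area).any() else "separate"


def operation_area(operation: str, position: str,
                   first_area: np.ndarray, second_area: np.ndarray) -> np.ndarray:
    """Return the pixel mask that the image processing operation is allowed to modify."""
    if operation == "background_blur":
        if position == "contained":
            return ~first_area                 # blur everything outside the subject
        return ~(first_area | second_area)     # adjacent/overlapping: also spare the article
    if operation == "beautification":
        if position in ("contained", "overlapping"):
            return first_area & ~second_area   # subject minus the worn article
        return first_area                      # adjacent article: the whole subject
    raise ValueError(f"unsupported operation: {operation}")


if __name__ == "__main__":
    first = np.zeros((8, 8), dtype=bool)
    first[2:7, 2:6] = True                     # stand-in for the human body region
    second = np.zeros((8, 8), dtype=bool)
    second[5:7, 3:5] = True                    # stand-in for a necklace inside that region
    pos = relative_position(first, second)
    print(pos)                                 # -> "contained"
    print(operation_area("beautification", pos, first, second).astype(int))
```

The three-way branching in operation_area is one in-code equivalent of the pre-configured mapping table of claim 3: the same information could instead be stored as a lookup keyed by the pair (image processing operation, relative position) and queried at run time.
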
CN201810326349.2A 2018-04-12 2018-04-12 Image processing method and device based on structured light and mobile terminal Expired - Fee Related CN108629745B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810326349.2A CN108629745B (en) 2018-04-12 2018-04-12 Image processing method and device based on structured light and mobile terminal

Publications (2)

Publication Number Publication Date
CN108629745A CN108629745A (en) 2018-10-09
CN108629745B (en) 2021-01-19

Family

ID=63705197

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810326349.2A Expired - Fee Related CN108629745B (en) 2018-04-12 2018-04-12 Image processing method and device based on structured light and mobile terminal

Country Status (1)

Country Link
CN (1) CN108629745B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112601005B (en) * 2020-09-25 2022-06-24 维沃移动通信有限公司 Shooting method and device
CN114255175B (en) * 2020-09-25 2025-05-27 广州虎牙科技有限公司 Face beautification method, device, electronic device and storage medium
CN117314794B (en) * 2023-11-30 2024-03-01 深圳市美高电子设备有限公司 A live broadcast beauty method, device, electronic equipment and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107316281A (en) * 2017-06-16 2017-11-03 广东欧珀移动通信有限公司 image processing method, device and terminal device
CN107454332A (en) * 2017-08-28 2017-12-08 厦门美图之家科技有限公司 Image processing method, device and electronic equipment
CN107370958A (en) * 2017-08-29 2017-11-21 广东欧珀移动通信有限公司 Image virtualization processing method, device and shooting terminal
CN107493432A (en) * 2017-08-31 2017-12-19 广东欧珀移动通信有限公司 Image processing method, device, mobile terminal, and computer-readable storage medium
CN107846556A (en) * 2017-11-30 2018-03-27 广东欧珀移动通信有限公司 Imaging method, device, mobile terminal and storage medium

Also Published As

Publication number Publication date
CN108629745A (en) 2018-10-09

Similar Documents

Publication Publication Date Title
CN106991654A (en) Depth-based human body beautification method and device and electronic device
EP3284011B1 (en) Two-dimensional infrared depth sensing
CN107209849B (en) Eye tracking
CN110168562B (en) Depth-based control method, depth-based control device and electronic device
US9317772B2 (en) Method for improving tracking using dynamic background compensation with centroid compensation
CN108596061A (en) Face recognition method, device, mobile terminal, and storage medium
CN108881691A (en) Control method, microprocessor, computer-readable storage medium, and computer device
CN108764180A (en) Face recognition method and device, electronic equipment and readable storage medium
CN106682620A (en) Human face image acquisition method and device
US11410458B2 (en) Face identification method and apparatus, mobile terminal and storage medium
KR20180138300A (en) Electronic device for providing property information of external light source for interest object
CN107016348B (en) Face detection method and device combined with depth information and electronic device
CN109191393B (en) Beauty method based on 3D model
CN108629745B (en) Image processing method and device based on structured light and mobile terminal
CN108876709A (en) Method for beautifying faces, device, electronic equipment and readable storage medium storing program for executing
CN106991688A (en) Human body tracking method, human body tracking device and electronic device
CN106997457B (en) Human body recognition method, human body recognition device and electronic device
CN108616688A (en) Image processing method and device, mobile terminal and storage medium
CN110580102A (en) Screen brightening method, device, mobile terminal and storage medium
JP2002216129A (en) Apparatus and method for detecting face area and computer-readable recording medium
CN106951833A (en) Face image acquisition device and face image acquisition method
WO2019011110A1 (en) Human face region processing method and apparatus in backlight scene
CN116109828B (en) Image processing method and electronic device
CN108777784B (en) Depth acquisition method and apparatus, electronic apparatus, computer device, and storage medium
CN110879983A (en) Face feature key point extraction method and face image synthesis method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210119