
CN114240963B - Image processing method, device, storage medium and electronic device - Google Patents

Image processing method, device, storage medium and electronic device

Info

Publication number
CN114240963B
Authority
CN
China
Prior art keywords
feature
region
image
area
depth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111417587.2A
Other languages
Chinese (zh)
Other versions
CN114240963A
Inventor
林子尧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202111417587.2A priority Critical patent/CN114240963B/en
Publication of CN114240963A publication Critical patent/CN114240963A/en
Application granted granted Critical
Publication of CN114240963B publication Critical patent/CN114240963B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/11 - Region-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/194 - Segmentation; Edge detection involving foreground-background segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/50 - Depth or shape recovery
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10004 - Still image; Photographic image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10024 - Color image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10028 - Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract


The present application discloses an image processing method, device, storage medium and electronic device, wherein the method comprises: determining a first feature region in an image, obtaining a depth image of the image and a segmented image of a second feature region in the image, the second feature region including the first feature region; obtaining feature depth information of the first feature region according to the depth image; and correcting the second feature region in the segmented image according to the feature depth information of the first feature region. By adopting the present application, the accuracy of the feature region in the segmented image can be enhanced, and the reliability of image processing can be improved.

Description

Image processing method, device, storage medium and electronic equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular to an image processing method, an image processing device, a storage medium, and an electronic device.
Background
In recent years, with the development of social networks and self-media, photography has become an indispensable part of the digital age, and image processing technologies have developed alongside it. Image segmentation, as an important part of image processing technology, is becoming ever more important: for example, it is used to segment the portrait region in an image, separate the portrait region from the background, and apply a highlighting or blurring operation to the portrait region.
Disclosure of Invention
The embodiment of the application provides an image processing method, an image processing device, a storage medium and electronic equipment, which can enhance the accuracy of feature areas in a segmented image and improve the reliability of image processing. The technical scheme is as follows:
In a first aspect, an embodiment of the present application provides an image processing method, including:
Determining a first characteristic region in an image, and acquiring a depth image of the image and a segmented image of a second characteristic region in the image, wherein the second characteristic region comprises the first characteristic region;
Acquiring characteristic depth information corresponding to the first characteristic region according to the depth image;
and correcting the second characteristic region in the segmented image according to the characteristic depth information corresponding to the first characteristic region.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
the image acquisition module is used for determining a first characteristic area in an image, acquiring a depth image of the image and a segmented image of a second characteristic area in the image, wherein the second characteristic area comprises the first characteristic area;
the information acquisition module is used for acquiring the characteristic depth information corresponding to the first characteristic region according to the depth image;
And the image correction module is used for correcting the second characteristic region in the segmented image according to the characteristic depth information corresponding to the first characteristic region.
In a third aspect, embodiments of the present application provide a computer storage medium storing a plurality of instructions adapted to be loaded by a processor and to perform the above-described method steps.
In a fourth aspect, an embodiment of the present application provides an electronic device, which may include a processor and a memory, wherein the memory stores a computer program adapted to be loaded by the processor and to perform the above-described method steps.
The technical solutions provided by the embodiments of the application yield at least the following beneficial effects:
The second feature region obtained by segmenting the image is corrected according to the feature depth information of the first feature region; that is, the depth information provides a reference basis for image segmentation, so that a more accurate second feature region is obtained. This mitigates the errors and instability caused in the related art by relying solely on depth information or on a single segmentation algorithm, prevents mis-segmentation from degrading the imaging quality of the image, and keeps computational cost low while improving the reliability of image processing.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions in the prior art, the drawings required by the embodiments or by the description of the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the application, and a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a schematic image diagram of image processing according to an embodiment of the present application;
FIG. 2 is a schematic flow chart of an image processing method according to an embodiment of the present application;
FIG. 3A is a schematic image of a first feature region according to an embodiment of the present application;
FIG. 3B is a schematic view of a depth image corresponding to an image according to an embodiment of the present application;
FIG. 3C is a schematic view of a segmented image including a second feature region according to an embodiment of the present application;
FIG. 4 is a schematic flow chart of an image processing method according to an embodiment of the present application;
FIG. 5 is a schematic illustration of image and depth image alignment provided by an embodiment of the present application;
FIG. 6 is a schematic view of a depth image including depth feature information according to an embodiment of the present application;
FIG. 7 is a schematic flow chart of an image processing method according to an embodiment of the present application;
FIG. 8 is a schematic diagram of modifying a second feature region according to depth feature information according to an embodiment of the present application;
FIG. 9 is a schematic flow chart of an image processing method according to an embodiment of the present application;
FIG. 10 is a schematic structural view of an image processing apparatus according to an embodiment of the present application;
FIG. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
In the description of the present application, it should be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. In the description of the present application, it should also be noted that, unless expressly specified and limited otherwise, "comprise" and "have" and any variations thereof are intended to cover non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements, but may include other steps or elements not listed or inherent to such process, method, article, or apparatus. The specific meanings of the above terms in the present application will be understood in specific cases by those of ordinary skill in the art. Furthermore, in the description of the present application, unless otherwise indicated, "a plurality" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate three cases: A alone, both A and B, and B alone. The character "/" generally indicates that the associated objects are in an "or" relationship.
The present application will be described in detail with reference to specific examples.
In recent years, with the development of social networks and self-media, photography has become an indispensable part of the digital age, and image processing technologies have developed alongside it. Image segmentation, as an important part of image processing technologies, is becoming ever more important; for example, it is used to segment the portrait region in an image, separate the portrait region from the background, and apply a highlighting or blurring operation to the portrait region.
As shown in FIG. 1, a schematic diagram of image processing provided by an embodiment of the present application, the left image is a photograph taken by a user. When the user wants to use a "background blurring" function, that is, when the processor receives a trigger on the background-blurring control, the portrait in the left image must first be segmented to obtain the portrait region shown in the right image, so that the background region outside the portrait region can then be blurred.
Related image segmentation techniques fall mainly into three categories. The first comprises conventional methods such as color-space methods, edge-feature-based methods, and wavelet-transform-based methods. These struggle with regions of similar color, changes in light and shadow, and image noise; referencing a larger range of information can partially help, but raises the computational cost. The second is semantic segmentation and instance segmentation based on deep learning. Such algorithms depend mainly on the design of the network architecture and on the training data; in particular, the training set should cover all target scenes as far as possible. However, shooting behavior is highly variable: viewing angles differ, portraits are irregularly occluded (multi-person scenes), and texture features are uncertain, so many factors can affect the segmentation result, and stability is insufficient. The third is depth-map-based segmentation, which can distinguish objects at different distances but cannot separate different objects at the same distance; moreover, depth generated from stereoscopic vision suffers from parallax occlusion and is not fully reliable.
Therefore, based on the above-mentioned problems, the present application provides an image processing method, which can enhance the accuracy of feature regions in a segmented image and improve the reliability of image processing.
In one embodiment, as shown in FIG. 2, a flowchart of an image processing method according to an embodiment of the present application is provided. The method may be implemented by a computer program and executed on an image processing apparatus based on the von Neumann architecture. The computer program may be integrated into an application or run as a stand-alone tool-class application.
Specifically, the image processing method includes:
S101, acquiring a depth image of an image and a segmentation image of a second characteristic region in the image according to the first characteristic region in the image, wherein the second characteristic region comprises the first characteristic region.
An image is a depiction or representation of a natural or objective object (a person, animal, plant, landscape, etc.) that contains information about the object being described. An image is usually a picture with a visual effect, and may be a photograph, a drawing, clip art, a map, a satellite cloud image, a video frame, an X-ray film, an electroencephalogram, an electrocardiogram, and the like.
The first feature region in the image may be understood as a specific sub-region within a region of interest of the image, i.e., within a second feature region; the second feature region comprises the first feature region, which may be a single pixel or a set of multiple pixels. For example, if the region of interest is a portrait region, the first feature region may be the face region; if the region of interest is a vehicle region, the first feature region may be the vehicle-head region; if the region of interest is a chest region, the first feature region may be the heart region.
It can be understood that, in the following figures, the first feature region is exemplified as a face region and the second feature region as a portrait region; the application also covers other types and contents of first and second feature regions, and the image processing method is applicable to any image containing a first feature region and a second feature region.
The first feature region in the image may be acquired using any one or more feature extraction algorithms in the related art, for example a facial feature extraction algorithm, a deep learning network, or a local binary pattern (Local Binary Pattern, LBP) algorithm; the present application places no limitation on this. FIG. 3A is a schematic image diagram including a first feature region according to an embodiment of the present application, where the first feature region 101 is the image region corresponding to the face in the figure.
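The patent does not fix a particular extraction algorithm. As a hedged illustration only, the sketch below obtains a face box (a candidate first feature region) with OpenCV's bundled Haar cascade; the image path and detector parameters are assumptions made for the sketch, not part of the disclosure.

```python
# Illustrative sketch only: one possible way to obtain a first feature
# region (a face box), using OpenCV's bundled Haar cascade detector.
import cv2

img = cv2.imread("photo.jpg")                       # image to be processed (assumed path)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
# Each (x, y, w, h) box is a candidate first feature region.
for (x, y, w, h) in faces:
    print("first feature region:", x, y, w, h)
```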
A depth image of the image is also acquired. The depth image contains depth information, which may be understood as the distance or depth value from any point in the image to the camera, i.e., three-dimensional coordinate information of that point. The depth information may be obtained by any one or more methods such as stereoscopic image matching, a depth camera, or deep learning; the present application places no limitation on this. FIG. 3B is a schematic diagram of the depth image corresponding to an image according to an embodiment of the present application, where regions at different depths correspond to different depth values.
A segmented image of a second feature region in the image is acquired. The segmented image comprises the second feature region and at least one other region, and may be obtained by any one or more methods such as histogram thresholding, region growing, image-based random field models, or relaxation labeling segmentation; the present application places no limitation on this.
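As a minimal sketch of the first listed option, histogram thresholding, the following uses Otsu's method to produce a binary segmented image. This is one possible realization under an assumed grayscale input, not the patent's prescribed segmenter.

```python
# Minimal sketch: histogram thresholding (Otsu) to produce a binary
# segmented image; nonzero pixels form a candidate second feature region.
import cv2

gray = cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE)   # assumed input path
_, seg = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
```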
As shown in fig. 3C, in an embodiment of the present application, a segmented image schematic diagram including a second feature region is provided, where the second feature region 102 is a portrait region obtained based on an image segmentation method, and the segmented image shown in fig. 3C further includes a first region 103, a second region 104, a third region 105, and a fourth region 106. The fourth region 106 is a background region obtained by an image segmentation method. It will be appreciated that the division of the multiple zones shown in fig. 3C is illustrative only.
S102, acquiring feature depth information corresponding to the first feature region according to the depth image.
The feature depth information corresponding to the first feature region may be understood as information characterizing the distance between the first feature region and the camera. It is obtained from the depth image together with the first feature region extracted from the image.
For example, for the first feature region 101 in the image shown in FIG. 3A, the feature depth information corresponding to the first feature region 101 is obtained from the depth map shown in FIG. 3B.
S103, correcting a second characteristic region in the segmented image according to the characteristic depth information corresponding to the first characteristic region.
In related image segmentation techniques, the segmentation accuracy of the second feature region in the segmented image is often unsatisfactory; the second feature region in the segmented image is therefore corrected according to the feature depth information corresponding to the first feature region.
The correction principle can be understood as follows. The first and second feature regions are associated: the second feature region lies on the same plane as the first feature region, or the difference between their depth values is smaller than a preset threshold (for example, when a person takes a selfie, the face and the person are on the same plane). The second feature region can therefore be estimated in the depth map from the feature depth information corresponding to the first feature region; this depth-derived second feature region is compared with the second feature region in the segmented image, and the latter is corrected according to the comparison result.
In one embodiment, the correction method comprises: determining a target region belonging to the second feature region in the segmented image according to the feature depth information corresponding to the first feature region, and correcting the second feature region according to the target region. There may be one or more target regions, each being a single pixel or a set of multiple pixels.
For example, a second feature region and a background region are obtained in the depth image shown in FIG. 3B according to the feature depth information corresponding to the first feature region 101. The second feature region in the depth image of FIG. 3B is compared with the segmented image of FIG. 3C, which contains the second feature region 102, the first region 103, the second region 104, the third region 105, and the fourth region 106, where the fourth region 106 is the background region. The target region is determined to be the first region 103, the second region 104, and the third region 105, and the original second feature region 102 is corrected according to this target region.
Correcting the second feature region of the segmented image in this embodiment also addresses the converse problem: a second feature region obtained purely from the feature depth information of the first feature region in a depth image cannot distinguish different objects at the same distance. Specifically, the portrait region (the second feature region) obtained from the depth image of FIG. 3B according to the depth feature information of the face region (the first feature region) includes a star-shaped decoration region 107, which may be understood as a number of hanging star-shaped lamps among which the user is taking a selfie. When the portrait region derived from the depth image of FIG. 3B is used to correct the portrait region 102 of the segmented image of FIG. 3C, the star-shaped decoration region belongs to the background region 106 of the segmented image rather than to the portrait region 102, so the target region used to correct the portrait region 102 does not include the star-shaped decoration region. The application thus resolves the inaccuracy that arises when segmenting only according to the depth image, namely the inability to distinguish different objects at the same distance.
In one embodiment, modifying the second feature region based on the target region comprises fusing the target region into the second feature region. For example, in the segmented image shown in FIG. 3C, the first region 103, the second region 104, and the third region 105, as target regions, are fused into the second feature region 102.
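Treating the regions as binary masks, the fusion described above amounts to a pixel-wise union. A hedged NumPy sketch follows; the mask names are illustrative, not from the patent.

```python
# Hedged sketch: fusing target regions into the second feature region.
# All masks are boolean arrays of the same shape as the image.
import numpy as np

def fuse(second_feature_mask: np.ndarray, *target_masks: np.ndarray) -> np.ndarray:
    """Return the corrected second feature region as the union of the masks."""
    corrected = second_feature_mask.copy()
    for t in target_masks:
        corrected |= t          # fusion = pixel-wise logical OR
    return corrected
```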
In one embodiment, the image processing technique provided by the application can be applied to the field of video coding. Specifically, each video frame of a video is segmented by the image processing technique into a background region and a region of interest, the region of interest being the second feature region, and the two are encoded independently; for example, a low-distortion coding mode is used for the region of interest, and a simple, efficient coding mode is used for the background region. This embodiment improves video coding efficiency and saves storage space.
According to the application, the second feature region obtained by segmenting the image is corrected according to the feature depth information of the first feature region; that is, the depth information provides a reference basis for image segmentation, so that a more accurate second feature region is obtained. This mitigates the errors and instability caused in the related art by relying solely on depth information or on a single segmentation algorithm, prevents mis-segmentation from degrading the imaging quality of the image, and keeps computational cost low while improving the reliability of image processing.
In one embodiment, as shown in FIG. 4, a flowchart of an image processing method according to an embodiment of the present application is provided. The method may be implemented by a computer program and executed on an image processing apparatus based on the von Neumann architecture. The computer program may be integrated into an application or run as a stand-alone tool-class application.
S201, acquiring a depth image of an image and a segmentation image of a second characteristic region in the image according to a first characteristic region in the image, wherein the second characteristic region comprises the first characteristic region.
S201 refers to S101, which is not described herein.
S202, acquiring a depth image area corresponding to the first characteristic area according to the aligned depth image and the image.
The depth image is aligned with the image, and the depth image region of the first feature region in the depth image is obtained from the aligned depth image and image. The alignment method may combine one or more of feature extraction methods such as ORB (Oriented FAST and Rotated BRIEF, whose descriptor BRIEF stands for Binary Robust Independent Elementary Features) or SURF (Speeded-Up Robust Features), a feature matching algorithm, homography matrix computation, and an image warping algorithm; the present application places no limitation on this.
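A condensed sketch of the ORB feature extraction, feature matching, homography computation, and image warping pipeline named above is given below. The feature count, match count, RANSAC threshold, and 8-bit normalization of the depth map are assumptions made for the sketch, not values from the patent.

```python
# Condensed sketch of the ORB + matching + homography + warp route listed
# above, aligning the depth image into the coordinates of the color image.
import cv2
import numpy as np

def align_depth_to_image(depth: np.ndarray, gray: np.ndarray) -> np.ndarray:
    # ORB needs 8-bit input, so normalize the depth map first (assumption).
    depth8 = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(depth8, None)
    kp2, des2 = orb.detectAndCompute(gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    matches = sorted(matches, key=lambda m: m.distance)[:50]   # keep best 50
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = gray.shape[:2]
    return cv2.warpPerspective(depth, H, (w, h))   # depth in image coordinates
```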
As shown in FIG. 5, a schematic diagram of image and depth image alignment provided by an embodiment of the present application, the upper part of FIG. 5 shows the image containing the first feature region being aligned with the depth image, and the depth image region corresponding to the first feature region, namely depth image region 201, is obtained from the aligned depth image and image.
S203, acquiring feature depth information corresponding to the first feature region according to the depth image region corresponding to the first feature region.
In one embodiment, according to the depth image area corresponding to the first feature area, average calculation is performed on the depth value corresponding to the depth image area to obtain average depth information corresponding to the first feature area, and according to the average depth information, feature depth information corresponding to the first feature area is obtained.
For example, as shown in FIG. 5, the depth image region 201 includes N pixels, each pixel corresponding to a depth value X_i; averaging the N depth values yields the average depth value corresponding to depth image region 201, D = (X_1 + X_2 + ... + X_N) / N.
In another embodiment, the depth values of the depth image region corresponding to the first feature region are weighted to obtain the depth information corresponding to the first feature region, from which the feature depth information corresponding to the first feature region is obtained. For example, the weight is inversely related to the distance between a target pixel and the central pixel of the first feature region: the farther a target pixel lies from the central pixel, the smaller the weight given to its depth value in the calculation. The application also encompasses any other weighted calculation method.
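The following sketch covers both variants: the plain average D = (X_1 + ... + X_N) / N and a distance-weighted average. The specific inverse-distance weight form is an assumption made for the sketch; the patent only requires that the weight decrease as the distance from the central pixel grows.

```python
# Sketch of both variants above: plain average depth and a distance-weighted
# average whose weights shrink away from the region's central pixel.
import numpy as np

def average_depth(depth: np.ndarray, mask: np.ndarray) -> float:
    vals = depth[mask]                  # depth values X_i inside the region
    return float(vals.mean())           # D = (X_1 + ... + X_N) / N

def weighted_depth(depth: np.ndarray, mask: np.ndarray) -> float:
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()                    # central pixel of region
    dist = np.hypot(ys - cy, xs - cx)
    w = 1.0 / (1.0 + dist)                           # farther -> smaller weight
    return float((w * depth[ys, xs]).sum() / w.sum())
```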
In one embodiment, obtaining the feature depth information corresponding to the first feature region from the average depth information comprises performing a compensation calculation on the depth value corresponding to the average depth information. The compensation calculation performs range compensation on that depth value to produce the feature depth information corresponding to the first feature region.
For example, as shown in FIG. 5, the average depth value D of the depth image region 201 corresponding to the first feature region is obtained, and a range compensation calculation is performed on D (the compensation value being only an example) to obtain the depth feature information corresponding to the first feature region as the depth-value range [0.9D, 1.1D]. Regions of the depth image whose depth values fall outside this range are marked as null, yielding the depth image shown in FIG. 6, a schematic diagram of a depth image including depth feature information according to an embodiment of the present application, which contains the second feature region 202 corresponding to the compensated depth feature information.
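A short sketch of this range compensation step follows, using the plus-or-minus 10% band from the example above; the compensation factor is illustrative only.

```python
# Sketch of range compensation: widen the average depth D into the band
# [0.9*D, 1.1*D] and keep only depth pixels inside it (others become null).
import numpy as np

def depth_band_mask(depth: np.ndarray, d_avg: float) -> np.ndarray:
    lo, hi = 0.9 * d_avg, 1.1 * d_avg
    return (depth >= lo) & (depth <= hi)   # True where depth fits the band
```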
By obtaining the feature depth information corresponding to the first feature region from the average depth information and performing compensation calculation on the corresponding depth value, the method makes the depth feature information corresponding to the first feature region more accurate and reliable.
S204, correcting a second characteristic region in the segmented image according to the characteristic depth information corresponding to the first characteristic region.
S204 is referred to S103 above, and will not be described here again.
According to the application, the second feature region obtained by segmenting the image is corrected according to the feature depth information of the first feature region; that is, the depth information provides a reference basis for image segmentation, so that a more accurate second feature region is obtained. This mitigates the errors and instability caused in the related art by relying solely on depth information or on a single segmentation algorithm, prevents mis-segmentation from degrading the imaging quality of the image, and keeps computational cost low while improving the reliability of image processing.
In one embodiment, as shown in FIG. 7, a flowchart of an image processing method according to an embodiment of the present application is provided. The method may be implemented by a computer program and executed on an image processing apparatus based on the von Neumann architecture. The computer program may be integrated into an application or run as a stand-alone tool-class application.
S301, determining a first characteristic region in the image, and acquiring a depth image of the image and a segmented image of a second characteristic region in the image.
S301 is referred to S101 above, and will not be described here again.
S302, obtaining feature depth information corresponding to the first feature region according to the depth image.
S302 refers to S202 and S203 described above, and will not be described here again.
S303A, determining a first candidate region with the confidence coefficient in a first confidence coefficient range in the segmented image.
The segmented image comprises the second feature region and at least one other region, each region having a confidence that it belongs to the second feature region. The distribution of the confidence follows a normal distribution model, and the confidence is divided into three ranges: a first confidence range (0, T1), a second confidence range (T1, T2), and a third confidence range [T2, 1], where T1 and T2 are values set according to requirements.
In the application, when correcting the second feature region in the segmented image, regions with confidence equal to or tending toward 0 are not considered; regions with confidence in the third confidence range are accepted as the second feature region, or whether a region belongs to the second feature region may be recomputed using other related image segmentation algorithms and image segmentation models.
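A sketch of this three-way confidence split is given below; T1 and T2 are the tunable thresholds described above, and the 0.3 and 0.7 values follow the examples used later in this section (assumed, not mandated).

```python
# Sketch of the three-way confidence split described above.
import numpy as np

T1, T2 = 0.3, 0.7   # thresholds; values follow this section's examples

def split_by_confidence(conf: np.ndarray):
    ignore = conf <= 0.0                            # confidence 0: not considered
    first_candidates = (conf > 0.0) & (conf < T1)   # first range (0, T1)
    second_candidates = (conf >= T1) & (conf < T2)  # second range (T1, T2)
    accepted = conf >= T2                           # third range [T2, 1]
    return ignore, first_candidates, second_candidates, accepted
```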
FIG. 8 is a schematic diagram of correcting a second feature region according to depth feature information according to an embodiment of the present application. The segmented image shown in the upper half of FIG. 8 includes the second feature region 102, the first region 103, the second region 104, the third region 105, and the fourth region 106. The confidence that the first region 103 belongs to the second feature region 102 is 0.5; that of the second region 104 is 0.6; that of the third region 105 is 0.2; and that of the fourth region 106 is 0, so the fourth region 106 is determined to be a background region.
A first candidate region whose confidence lies in the first confidence range (0, T1) is determined in the segmented image, where T1 is a value set as desired. For example, with the first confidence range (0, 0.3), the first candidate region in the segmented image shown in the upper half of FIG. 8 is the third region 105, whose confidence is 0.2. It will be appreciated that the first candidate region may be a single pixel or a set of pixels; the segmented regions illustrated in the present application are merely examples.
S304A, determining a first target area belonging to a second characteristic area in the segmentation map according to the characteristic depth information corresponding to the first characteristic area and the first candidate area.
A first target region belonging to the second feature region is determined within the first candidate region according to the feature depth information corresponding to the first feature region and the first candidate region. The condition for a first target region is that its corresponding depth image region is contained in the region determined by the feature depth information, i.e., its depth values fall within the depth-value range in the feature depth information corresponding to the first feature region.
For example, the third region 105, the first candidate region in the segmented image shown in the upper half of FIG. 8, is contained in the region 301 determined in the depth image by the feature depth information corresponding to the first feature region, and therefore belongs to the first target region.
S303B, determining a second candidate region with the confidence in a second confidence range in the segmented image.
Steps S303A and S304A obtain the first target region from the first candidate region whose confidence lies in the first confidence range; steps S303B, S304B, and S305B obtain the second target region from the second candidate region whose confidence lies in the second confidence range. It is understood that the application also includes other fusion algorithms for determining the second target region; the following is only one possible implementation.
A second candidate region whose confidence lies in the second confidence range (T1, T2) is determined in the segmented image, where T1 and T2 are values set as required. For example, with the second confidence range (0.3, 0.7), the second candidate regions in the segmented image shown in the upper half of FIG. 8 are the first region 103 with confidence 0.5 and the second region 104 with confidence 0.6. It is understood that the second candidate region may be a single pixel or a set of pixels; the segmented regions shown in the present application are merely examples.
S304B, obtaining feature depth information corresponding to the second candidate region according to the depth image, and recalculating the confidence coefficient of the second candidate region according to the feature depth information corresponding to the second candidate region.
The confidence of the second candidate region is recalculated according to the feature depth information corresponding to the second candidate region: the confidence value of the second candidate region is fused with its corresponding depth value, and the result of this fusion calculation gives the recalculated confidence of the second candidate region.
For example, the absolute error α between the average depth value of the region 202 in the upper half of the depth image of FIG. 8 and the average depth value of the face region is calculated, and α is then fused into the first region 103 and the second region 104 contained in the second candidate region using the following formula:
S(x)=(1-α)×M(x)+α×P(x);
where M(x) indicates whether the region x of the second candidate region belongs to the second feature region determined in the depth image by the feature depth information, with M(x) = 1 if it does and M(x) = 0 if it does not, and P(x) denotes the confidence of the region x of the second candidate region.
In other words, the confidence of each region in the second candidate region is weighted according to the error between its corresponding depth value and the average depth value of the face region (i.e., the first feature region): the greater the error, the lower the resulting confidence, and vice versa.
For example, if the second candidate regions are the first region 103 with confidence 0.5 and the second region 104 with confidence 0.6, the confidence of the first region 103 is recalculated from its corresponding depth value as 0.6, and the confidence of the second region 104 is recalculated from its corresponding depth value as 0.7.
It can be understood that the above is only one example of a fusion calculation that combines the confidence value with the depth value to recalculate the confidence of the second candidate region.
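A sketch of the fusion formula S(x) = (1 - α) × M(x) + α × P(x) follows. The normalization of the depth error into α in [0, 1] via a depth_scale parameter is an assumption made for the sketch; the patent does not fix how α is scaled.

```python
# Sketch of the fusion formula S(x) = (1 - alpha) * M(x) + alpha * P(x).
# alpha is the absolute error between a candidate region's depth and the
# face region's average depth, clipped to [0, 1] here as an assumption.
def recalc_confidence(p: float, m: int, region_depth: float,
                      face_depth: float, depth_scale: float) -> float:
    alpha = min(abs(region_depth - face_depth) / depth_scale, 1.0)
    return (1.0 - alpha) * m + alpha * p

# Purely illustrative numbers: with M = 1 and alpha = 0.8, a confidence of
# 0.5 becomes (1 - 0.8) * 1 + 0.8 * 0.5 = 0.6, matching the worked example.
print(recalc_confidence(0.5, 1, region_depth=1.8, face_depth=1.0, depth_scale=1.0))
```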
S305B, determining a second target region belonging to a second characteristic region in the segmented image according to the confidence coefficient of the second candidate region after recalculation.
According to the recalculated confidence of the second candidate region, regions whose confidence exceeds a correction threshold are determined to be second target regions belonging to the second feature region. For example, the second candidate regions are the first region 103 with confidence 0.5 and the second region 104 with confidence 0.6; the confidence of the first region 103 is recalculated from its depth value as 0.6, and that of the second region 104 as 0.7. Taking regions with confidence above 0.55 as second target regions, both the first region 103 and the second region 104 are determined to be second target regions.
S306, correcting the second characteristic area according to the target area.
In one embodiment, the second feature region is corrected based on both the first target region and the second target region. In another embodiment, the target region comprises only the first target region, or only the second target region, and the second feature region is corrected according to that target region.
The second feature region is corrected according to the target region by fusing the target region into the second feature region. As shown in FIG. 8, the third region 105, as the first target region, and the first region 103 and the second region 104, as the second target regions, are fused into the second feature region 102, yielding the second feature region 302 shown in the lower half of FIG. 8.
By separately processing the candidate regions whose confidences lie in different confidence ranges and recalculating their confidences with the depth information of the depth image fused in, the accuracy of correcting the second feature region in the segmented image is further improved.
According to the application, the second feature region obtained by segmenting the image is corrected according to the feature depth information of the first feature region; that is, the depth information provides a reference basis for image segmentation, so that a more accurate second feature region is obtained. This mitigates the errors and instability caused in the related art by relying solely on depth information or on a single segmentation algorithm, prevents mis-segmentation from degrading the imaging quality of the image, and keeps computational cost low while improving the reliability of image processing.
In one embodiment, as shown in FIG. 9, a flowchart of an image processing method according to an embodiment of the present application is provided. The method may be implemented by a computer program and executed on an image processing apparatus based on the von Neumann architecture. The computer program may be integrated into an application or run as a stand-alone tool-class application.
Specifically, the image processing method includes:
S401, acquiring a depth image of an image and a segmentation image of a second characteristic region in the image according to the first characteristic region in the image, wherein the second characteristic region comprises the first characteristic region.
S401 is referred to S101 above, and will not be described here again.
S402, acquiring a depth image area corresponding to the first characteristic area according to the aligned depth image and the image.
S402 is referred to S202 above, and will not be described here again.
S403, acquiring characteristic depth information corresponding to the first characteristic region according to the depth image region corresponding to the first characteristic region.
According to the depth image area corresponding to the first feature area, carrying out average calculation on the depth value corresponding to the depth image area to obtain average depth information corresponding to the first feature area; and carrying out compensation calculation on the depth value corresponding to the average depth information to obtain the characteristic depth information corresponding to the first characteristic region.
S403 refers to S203 above, and will not be described here again.
S404A, determining a first candidate region with the confidence in a first confidence range in the segmented image.
S404A refers to S303A described above, and will not be described here again.
S405A, determining a first target area belonging to a second characteristic area in the segmentation map according to the characteristic depth information corresponding to the first characteristic area and the first candidate area.
S405A is referred to S304A above, and will not be described here again.
S404B, determining a second candidate region with the confidence in a second confidence range in the segmented image.
S404B refers to S303B above, and is not described here again.
S405B, obtaining feature depth information corresponding to the second candidate region according to the depth image, and recalculating the confidence coefficient of the second candidate region according to the feature depth information corresponding to the second candidate region.
S405B refers to S304B, which is not described herein.
S406B, determining a second target region belonging to a second characteristic region in the segmented image according to the calculated confidence of the second candidate region.
S406B refers to S305B, which is not described herein.
S407, correcting the second characteristic region according to the target region.
S407 is referred to S306 above, and will not be described here again.
According to the application, the second feature region obtained by segmenting the image is corrected according to the feature depth information of the first feature region; that is, the depth information provides a reference basis for image segmentation, so that a more accurate second feature region is obtained. This mitigates the errors and instability caused in the related art by relying solely on depth information or on a single segmentation algorithm, prevents mis-segmentation from degrading the imaging quality of the image, and keeps computational cost low while improving the reliability of image processing.
The following are examples of the apparatus of the present application that may be used to perform the method embodiments of the present application. For details not disclosed in the embodiments of the apparatus of the present application, please refer to the embodiments of the method of the present application.
Referring to fig. 10, a schematic diagram of an image processing apparatus according to an exemplary embodiment of the present application is shown. The image processing apparatus may be implemented as all or part of the apparatus by software, hardware or a combination of both. The image processing apparatus includes an image acquisition module 1001, an information acquisition module 1002, and an image correction module 1003.
An image obtaining module 1001, configured to determine a first feature area in an image, and obtain a depth image of the image and a segmented image of a second feature area in the image, where the second feature area includes the first feature area;
An information obtaining module 1002, configured to obtain feature depth information corresponding to the first feature area according to the depth image;
and an image correction module 1003, configured to correct the second feature region in the segmented image according to feature depth information corresponding to the first feature region.
In one embodiment, the image modification module 1003 includes:
The target determining unit is used for determining a target area belonging to the second characteristic area in the segmented image according to the characteristic depth information corresponding to the first characteristic area;
and the target correction unit is used for correcting the second characteristic region according to the target region.
In one embodiment, the targeting unit comprises:
A first candidate subunit, configured to determine a first candidate region in the segmented image where the confidence coefficient is in a first confidence coefficient range;
And the first target subunit is used for determining a first target area belonging to the second characteristic area in the segmented image according to the characteristic depth information corresponding to the first characteristic area and the first candidate area.
In one embodiment, the targeting unit comprises:
A second candidate subunit, configured to determine a second candidate region in the segmented image where the confidence coefficient is in a second confidence coefficient range;
the second computing subunit is used for acquiring the characteristic depth information corresponding to the second candidate region according to the depth image, and re-computing the confidence coefficient of the second candidate region according to the characteristic depth information corresponding to the second candidate region;
and the second target subunit is used for determining a second target area belonging to the second characteristic area in the segmented image according to the recalculated confidence coefficient of the second candidate area.
In one embodiment, the second computing subunit is specifically configured to:
Determining weight information of the feature depth information corresponding to the second candidate region according to the feature depth information corresponding to the first feature region;
And recalculating the confidence coefficient of the second candidate region according to the weight information of the feature depth information corresponding to the second candidate region.
In one embodiment, the target modification unit is specifically configured to:
fusing the target region to the second feature region.
In one embodiment, the information acquisition module 1002 includes:
The alignment unit is used for acquiring a depth image area corresponding to the first characteristic area according to the aligned depth image and the image;
And the acquisition unit is used for acquiring the characteristic depth information corresponding to the first characteristic region according to the depth image region corresponding to the first characteristic region.
In one embodiment, the acquisition unit comprises:
an average calculating subunit, configured to perform average calculation on a depth value corresponding to the depth image area according to the depth image area corresponding to the first feature area, so as to obtain average depth information corresponding to the first feature area;
And the characteristic acquisition subunit is used for acquiring characteristic depth information corresponding to the first characteristic region according to the average depth information.
In one embodiment, the feature acquisition subunit is specifically configured to:
And carrying out compensation calculation on the depth value corresponding to the average depth information to obtain the characteristic depth information corresponding to the first characteristic region.
According to the application, the second feature region obtained by segmenting the image is corrected according to the feature depth information of the first feature region; that is, the depth information provides a reference basis for image segmentation, so that a more accurate second feature region is obtained. This mitigates the errors and instability caused in the related art by relying solely on depth information or on a single segmentation algorithm, prevents mis-segmentation from degrading the imaging quality of the image, and keeps computational cost low while improving the reliability of image processing.
It should be noted that, when the image processing apparatus provided by the above embodiments executes the image processing method, the division into the above functional modules is only an example; in practical applications, the functions may be assigned to different functional modules as needed, i.e., the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the image processing apparatus and the image processing method provided by the above embodiments belong to the same concept; for details of the implementation process, see the method embodiments, which are not repeated here.
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
The embodiment of the present application further provides a computer storage medium, where the computer storage medium may store a plurality of instructions, where the instructions are suitable for being loaded by a processor and executed by the processor to perform the image processing method according to the embodiment shown in fig. 1 to fig. 9, and the specific execution process may refer to the specific description of the embodiment shown in fig. 1 to fig. 9, which is not repeated herein.
The present application further provides a computer program product, where at least one instruction is stored, where the at least one instruction is loaded by the processor and executed by the processor to perform the image processing method according to the embodiment shown in fig. 1 to fig. 9, and the specific execution process may refer to the specific description of the embodiment shown in fig. 1 to fig. 9, which is not repeated herein.
Referring to fig. 11, a schematic structural diagram of an electronic device is provided in an embodiment of the present application. As shown in fig. 11, the electronic device 1100 may include a processor 1101, a network interface 1104, a user interface 1103, a memory 1105, a communication bus 1102.
Wherein communication bus 1102 is used to facilitate connection communications among the components.
The user interface 1103 may include a Display screen (Display) and a Camera (Camera), and the optional user interface 1103 may further include a standard wired interface and a wireless interface.
Network interface 1104 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), among others.
The processor 1101 may include one or more processing cores. Using various interfaces and lines, the processor 1101 connects the various parts of the electronic device 1100, and performs the various functions of the electronic device 1100 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 1105 and invoking data stored in the memory 1105. Optionally, the processor 1101 may be implemented in at least one hardware form of digital signal processing (DSP), field-programmable gate array (FPGA), or programmable logic array (PLA). The processor 1101 may integrate one or a combination of a central processing unit (CPU), a graphics processing unit (GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, application programs, and the like; the GPU renders and draws the content to be displayed on the display screen; and the modem handles wireless communication. It will be appreciated that the modem may also not be integrated into the processor 1101 and may instead be implemented by a separate chip.
The memory 1105 may include a random access memory (Random Access Memory, RAM) or a read-only memory (Read-Only Memory, ROM). Optionally, the memory 1105 includes a non-transitory computer-readable storage medium. The memory 1105 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 1105 may include a program storage area and a data storage area: the program storage area may store instructions for implementing the operating system, instructions for at least one function (such as a touch function, a sound playing function, and an image playing function), instructions for implementing the foregoing method embodiments, and the like; the data storage area may store the data involved in the foregoing method embodiments. Optionally, the memory 1105 may also be a storage device located remotely from the processor 1101. As shown in fig. 11, the memory 1105, as a computer storage medium, may include an operating system, a network communication module, a user interface module, and an image processing application program.
In the electronic device 1100 shown in fig. 11, the user interface 1103 is mainly used to provide the user with an input interface and to acquire the data entered by the user, while the processor 1101 may be used to invoke the image processing application program stored in the memory 1105 and specifically perform the following operations (illustrated by the code sketches that follow):
determining a first feature region in an image, and acquiring a depth image of the image and a segmented image of a second feature region in the image, wherein the second feature region includes the first feature region;
acquiring feature depth information corresponding to the first feature region according to the depth image;
and correcting the second feature region in the segmented image according to the feature depth information corresponding to the first feature region.
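By way of a non-limiting illustration, the three operations above can be sketched in Python with NumPy. Everything in this sketch is an assumption made for illustration only: the function name, the 0.5 mask threshold, the depth tolerance, and the use of a mean depth as the feature depth information are not prescribed by this application.

```python
import numpy as np

def correct_segmentation(depth: np.ndarray,
                         first_region: np.ndarray,
                         second_conf: np.ndarray) -> np.ndarray:
    """Refine the second feature region using depth statistics of the
    first feature region it contains (illustrative sketch only).

    depth:        per-pixel depth map aligned with the image
    first_region: boolean mask of the first feature region
    second_conf:  per-pixel confidence map of the second feature region
    """
    # Feature depth information of the first feature region
    # (here: mean depth; a compensation term could be added).
    feature_depth = float(depth[first_region].mean())

    # Initial second-feature-region mask from a confidence threshold.
    mask = second_conf > 0.5

    # Target region: uncertain pixels whose depth is close to the
    # feature depth are taken to belong to the second feature region.
    tolerance = 0.2 * feature_depth              # assumed tolerance
    target = (np.abs(depth - feature_depth) < tolerance) & (second_conf > 0.2)

    # Correction: fuse the target region into the mask.
    return np.logical_or(mask, target)
```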
In one embodiment, when correcting the second feature region in the segmented image according to the feature depth information corresponding to the first feature region, the processor 1101 specifically performs:
determining, according to the feature depth information corresponding to the first feature region, a target region in the segmented image belonging to the second feature region;
and correcting the second feature region according to the target region.
In one embodiment, when determining, according to the feature depth information corresponding to the first feature region, the target region in the segmented image belonging to the second feature region, the processor 1101 specifically performs:
determining a first candidate region in the segmented image whose confidence is within a first confidence range;
and determining, according to the feature depth information corresponding to the first feature region and the first candidate region, a first target region in the segmented image belonging to the second feature region.
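A minimal sketch of this first path, assuming the first confidence range covers moderately uncertain pixels and that depth proximity to the feature depth is the acceptance test; the range bounds and the tolerance are invented for illustration.

```python
import numpy as np

def first_target_region(conf: np.ndarray,
                        depth: np.ndarray,
                        feature_depth: float,
                        conf_range: tuple = (0.3, 0.5),
                        depth_tol: float = 0.3) -> np.ndarray:
    low, high = conf_range
    # First candidate region: confidence within the first confidence range.
    candidate = (conf >= low) & (conf < high)
    # Accept candidates whose depth matches the feature depth.
    return candidate & (np.abs(depth - feature_depth) <= depth_tol)
```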
In one embodiment, when determining, according to the feature depth information corresponding to the first feature region, the target region in the segmented image belonging to the second feature region, the processor 1101 specifically performs:
determining a second candidate region in the segmented image whose confidence is within a second confidence range;
acquiring feature depth information corresponding to the second candidate region according to the depth image, and recalculating the confidence of the second candidate region according to the feature depth information corresponding to the second candidate region;
and determining, according to the recalculated confidence of the second candidate region, a second target region in the segmented image belonging to the second feature region.
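One way to read this second path is sketched below; the second confidence range, the blending weights, and the linear depth-agreement score are all assumptions, not details fixed by the application.

```python
import numpy as np

def second_target_region(conf: np.ndarray,
                         depth: np.ndarray,
                         feature_depth: float,
                         conf_range: tuple = (0.1, 0.3),
                         accept_at: float = 0.5) -> np.ndarray:
    low, high = conf_range
    # Second candidate region: confidence within the second confidence range.
    candidate = (conf >= low) & (conf < high)

    # Depth agreement in [0, 1]: 1 at the feature depth, decreasing with
    # distance (normalising by feature_depth assumes it is non-zero).
    agreement = np.clip(1.0 - np.abs(depth - feature_depth) / feature_depth,
                        0.0, 1.0)

    # Recalculated confidence: blend of the original score and depth evidence.
    new_conf = np.where(candidate, 0.5 * conf + 0.5 * agreement, conf)

    # Second target region: candidates whose recalculated confidence passes.
    return candidate & (new_conf >= accept_at)
```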
In one embodiment, when recalculating the confidence of the second candidate region according to the feature depth information corresponding to the second candidate region, the processor 1101 specifically performs:
determining, according to the feature depth information corresponding to the first feature region, weight information of the feature depth information corresponding to the second candidate region;
and recalculating the confidence of the second candidate region according to the weight information of the feature depth information corresponding to the second candidate region.
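A possible form of the weight information, assumed here to be a Gaussian falloff around the feature depth of the first feature region; the kernel shape and sigma are illustrative choices, not taught by the application.

```python
import numpy as np

def depth_weight(candidate_depth: np.ndarray,
                 feature_depth: float,
                 sigma: float = 0.25) -> np.ndarray:
    # Weight approaches 1 when a candidate pixel's depth is near the
    # feature depth and decays towards 0 as the depths diverge.
    return np.exp(-((candidate_depth - feature_depth) ** 2) / (2.0 * sigma ** 2))

def reweighted_confidence(conf: np.ndarray,
                          candidate_depth: np.ndarray,
                          feature_depth: float) -> np.ndarray:
    w = depth_weight(candidate_depth, feature_depth)
    # The weight pulls the confidence towards 1 for depth-consistent pixels
    # and leaves depth-inconsistent pixels at their original score.
    return (1.0 - w) * conf + w
```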
In one embodiment, when correcting the second feature region according to the target region, the processor 1101 specifically performs:
fusing the target region into the second feature region.
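Under the simplest reading, "fusing" is a union of masks; a one-operation sketch under that assumption:

```python
import numpy as np

def fuse(second_region: np.ndarray, target_region: np.ndarray) -> np.ndarray:
    # Corrected second feature region = original mask OR target region.
    return np.logical_or(second_region, target_region)
```

A soft-fusion variant, such as taking the per-pixel maximum of two confidence maps, would serve equally well where the segmentation is kept as a probability map rather than a hard mask.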
In one embodiment, when acquiring the feature depth information corresponding to the first feature region according to the depth image, the processor 1101 specifically performs:
acquiring a depth image region corresponding to the first feature region according to the depth image after it is aligned with the image;
and acquiring the feature depth information corresponding to the first feature region according to the depth image region corresponding to the first feature region.
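Assuming the depth map has been registered pixel-to-pixel with the image and the first feature region is available as a bounding box, the depth image region reduces to a crop; the (x, y, w, h) box layout is an assumption for illustration.

```python
import numpy as np

def depth_region_for(depth_aligned: np.ndarray, box: tuple) -> np.ndarray:
    # With per-pixel alignment, the depth image region of the first
    # feature region is the same rectangle cropped from the depth map.
    x, y, w, h = box
    return depth_aligned[y:y + h, x:x + w]
```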
In one embodiment, when acquiring the feature depth information corresponding to the first feature region according to the depth image region corresponding to the first feature region, the processor 1101 specifically performs:
averaging the depth values of the depth image region corresponding to the first feature region to obtain average depth information corresponding to the first feature region;
and obtaining the feature depth information corresponding to the first feature region according to the average depth information.
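A sketch of the averaging step; treating zero depth values as missing sensor samples is an assumption about the depth map's convention, not something the application specifies.

```python
import numpy as np

def average_depth(depth_region: np.ndarray) -> float:
    # Average over valid samples only; zeros are assumed to mark pixels
    # where the depth sensor returned no measurement.
    valid = depth_region[depth_region > 0]
    return float(valid.mean()) if valid.size else 0.0
```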
In one embodiment, when obtaining the feature depth information corresponding to the first feature region according to the average depth information, the processor 1101 specifically performs:
performing compensation calculation on the depth value corresponding to the average depth information to obtain the feature depth information corresponding to the first feature region.
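The application leaves the form of the compensation open; one plausible form, an affine correction for a systematic sensor bias, is sketched below, with both parameters being assumptions.

```python
def compensated_feature_depth(avg_depth: float,
                              offset: float = 0.0,
                              scale: float = 1.0) -> float:
    # Affine compensation: correct a constant bias (offset) and a
    # proportional error (scale) in the averaged depth value.
    return scale * avg_depth + offset
```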
According to the present application, the second feature region segmented from the image is corrected according to the feature depth information of the first feature region; that is, the depth information provides a reference basis for image segmentation, so that a more accurate second feature region is obtained. This mitigates the errors and instability caused in the related art by relying on depth information alone or on a single segmentation algorithm, prevents incorrect segmentation from degrading the imaging quality of the image, and keeps the computational cost low while improving the reliability of image processing.
Those skilled in the art will appreciate that all or part of the flows of the above method embodiments may be implemented by a computer program instructing the relevant hardware. The program may be stored in a computer-readable storage medium and, when executed, may include the flows of the above method embodiments. The storage medium may be a magnetic disk, an optical disc, a read-only memory, a random access memory, or the like.
The foregoing disclosure is illustrative of the present application and is not to be construed as limiting the scope of the application, which is defined by the appended claims.

Claims (10)

1. An image processing method, characterized in that the method comprises: determining a first feature region in an image, and acquiring a depth image of the image and a segmented image of a second feature region in the image, wherein the second feature region comprises the first feature region; acquiring feature depth information corresponding to the first feature region according to the depth image; and correcting the second feature region in the segmented image according to the feature depth information corresponding to the first feature region; wherein the correcting the second feature region in the segmented image according to the feature depth information corresponding to the first feature region comprises: determining, according to the feature depth information corresponding to the first feature region, a target region in the segmented image belonging to the second feature region; and correcting the second feature region according to the target region; and wherein the correcting the second feature region according to the target region comprises: fusing the target region into the second feature region.
2. The method according to claim 1, characterized in that the determining, according to the feature depth information corresponding to the first feature region, the target region in the segmented image belonging to the second feature region comprises: determining a first candidate region in the segmented image whose confidence is within a first confidence range; and determining, according to the feature depth information corresponding to the first feature region and the first candidate region, a first target region in the segmented image belonging to the second feature region.
3. The method according to claim 1 or 2, characterized in that the determining, according to the feature depth information corresponding to the first feature region, the target region in the segmented image belonging to the second feature region comprises: determining a second candidate region in the segmented image whose confidence is within a second confidence range; acquiring feature depth information corresponding to the second candidate region according to the depth image, and recalculating the confidence of the second candidate region according to the feature depth information corresponding to the second candidate region; and determining, according to the recalculated confidence of the second candidate region, a second target region in the segmented image belonging to the second feature region.
4. The method according to claim 3, characterized in that the recalculating the confidence of the second candidate region according to the feature depth information corresponding to the second candidate region comprises: determining, according to the feature depth information corresponding to the first feature region, weight information of the feature depth information corresponding to the second candidate region; and recalculating the confidence of the second candidate region according to the weight information of the feature depth information corresponding to the second candidate region.
5. The method according to claim 1, characterized in that the acquiring feature depth information corresponding to the first feature region according to the depth image comprises: acquiring a depth image region corresponding to the first feature region according to the depth image after it is aligned with the image; and acquiring the feature depth information corresponding to the first feature region according to the depth image region corresponding to the first feature region.
6. The method according to claim 5, characterized in that the acquiring the feature depth information corresponding to the first feature region according to the depth image region corresponding to the first feature region comprises: averaging the depth values of the depth image region corresponding to the first feature region to obtain average depth information corresponding to the first feature region; and obtaining the feature depth information corresponding to the first feature region according to the average depth information.
7. The method according to claim 6, characterized in that the obtaining the feature depth information corresponding to the first feature region according to the average depth information comprises: performing compensation calculation on the depth value corresponding to the average depth information to obtain the feature depth information corresponding to the first feature region.
8. An image processing apparatus, characterized in that the apparatus comprises: an image acquisition module, configured to determine a first feature region in an image and acquire a depth image of the image and a segmented image of a second feature region in the image, wherein the second feature region comprises the first feature region; an information acquisition module, configured to acquire feature depth information corresponding to the first feature region according to the depth image; and an image correction module, configured to correct the second feature region in the segmented image according to the feature depth information corresponding to the first feature region; wherein the image correction module comprises: a target determination unit, configured to determine, according to the feature depth information corresponding to the first feature region, a target region in the segmented image belonging to the second feature region; and a target correction unit, configured to correct the second feature region according to the target region; the target correction unit being specifically configured to fuse the target region into the second feature region.
9. A computer storage medium, characterized in that the computer storage medium stores a plurality of instructions, and the instructions are suitable for being loaded by a processor to execute the method steps according to any one of claims 1 to 7.
10. An electronic device, characterized by comprising a processor and a memory, wherein the memory stores a computer program, and the computer program is suitable for being loaded by the processor to execute the method steps according to any one of claims 1 to 7.
CN202111417587.2A 2021-11-25 2021-11-25 Image processing method, device, storage medium and electronic device Active CN114240963B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111417587.2A CN114240963B (en) 2021-11-25 2021-11-25 Image processing method, device, storage medium and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111417587.2A CN114240963B (en) 2021-11-25 2021-11-25 Image processing method, device, storage medium and electronic device

Publications (2)

Publication Number Publication Date
CN114240963A CN114240963A (en) 2022-03-25
CN114240963B 2025-02-11

Family

ID=80751608

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111417587.2A Active CN114240963B (en) 2021-11-25 2021-11-25 Image processing method, device, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN114240963B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105590312A (en) * 2014-11-12 2016-05-18 株式会社理光 Foreground image segmentation method and apparatus
CN106469446A (en) * 2015-08-21 2017-03-01 小米科技有限责任公司 The dividing method of depth image and segmenting device

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10275892B2 (en) * 2016-06-09 2019-04-30 Google Llc Multi-view scene segmentation and propagation
EP3343507B1 (en) * 2016-12-30 2022-08-10 Dassault Systèmes Producing a segmented image of a scene
CN107452003A (en) * 2017-06-30 2017-12-08 大圣科技股份有限公司 A kind of method and device of the image segmentation containing depth information
CN109360208A (en) * 2018-09-27 2019-02-19 华南理工大学 A medical image segmentation method based on single-pass multi-task convolutional neural network
CN110136079A (en) * 2019-05-05 2019-08-16 长安大学 Image dehazing method based on scene depth segmentation
CN110335224B (en) * 2019-07-05 2022-12-13 腾讯科技(深圳)有限公司 Image processing method, image processing device, computer equipment and storage medium
CN112132841B (en) * 2020-09-22 2024-04-09 上海交通大学 Medical image cutting method and device
CN112785492A (en) * 2021-01-20 2021-05-11 北京百度网讯科技有限公司 Image processing method, image processing device, electronic equipment and storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105590312A (en) * 2014-11-12 2016-05-18 株式会社理光 Foreground image segmentation method and apparatus
CN106469446A (en) * 2015-08-21 2017-03-01 小米科技有限责任公司 The dividing method of depth image and segmenting device

Also Published As

Publication number Publication date
CN114240963A (en) 2022-03-25

Similar Documents

Publication Publication Date Title
CN111126125B (en) Method, device, equipment and readable storage medium for extracting target text in certificate
US10657652B2 (en) Image matting using deep learning
CN111667520B (en) Registration method and device for infrared image and visible light image and readable storage medium
CN110135455B (en) Image matching method, device and computer readable storage medium
CN110648397B (en) Scene map generation method and device, storage medium and electronic equipment
CN111291584B (en) Method and system for identifying two-dimensional code position
US20190164312A1 (en) Neural network-based camera calibration
CN110147776B (en) Method and device for determining positions of key points of human face
CN112101386B (en) Text detection method, device, computer equipment and storage medium
CN112836625A (en) Face living body detection method and device and electronic equipment
CN111583381B (en) Game resource map rendering method and device and electronic equipment
CN111868786B (en) Cross-device monitoring computer vision system
CN111353325B (en) Key point detection model training method and device
CN112802081A (en) Depth detection method and device, electronic equipment and storage medium
CN115761826A (en) Palm vein effective area extraction method, system, medium and electronic device
CN115937003A (en) Image processing method, image processing device, terminal equipment and readable storage medium
CN113592706B (en) Method and device for adjusting homography matrix parameters
CN113298871B (en) Map generation method, positioning method, system thereof, and computer-readable storage medium
CN108734712B (en) Background segmentation method and device and computer storage medium
CN114240963B (en) Image processing method, device, storage medium and electronic device
CN113627210A (en) Method and device for generating bar code image, electronic equipment and storage medium
CN117115358B (en) Automatic digital person modeling method and device
CN115836322B (en) Image cropping method and device, electronic device and storage medium
CN113033256B (en) Training method and device for fingertip detection model
CN115223173A (en) Object identification method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 2023-08-02

Address after: No. 18 Wusha Beach Road, Chang'an Town, Dongguan, Guangdong 523860

Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

Address before: Room F, 11/F, Beihai Center, 338 Hennessy Road, Wan Chai District, Hong Kong

Applicant before: Sonar sky Information Consulting Co.,Ltd.

GR01 Patent grant