Detailed Description
The embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without inventive effort fall within the scope of the present application.
In the description of the present application, it should be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. It should also be noted that, unless expressly specified and limited otherwise, "comprise" and "have" and any variations thereof are intended to cover non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus. The specific meaning of the above terms in the present application will be understood in specific cases by those of ordinary skill in the art. Furthermore, in the description of the present application, unless otherwise indicated, "a plurality" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate three cases: A alone, both A and B, and B alone. The character "/" generally indicates that the associated objects are in an "or" relationship.
The present application will be described in detail with reference to specific examples.
In recent years, with the growth of social networks and self-media, photography has become an indispensable part of the digital age, and image processing technologies have developed accordingly. Image segmentation, an important part of image processing technology, is becoming increasingly important; for example, image segmentation is used to segment a portrait region in an image, separate the portrait region from the background, and perform a highlighting operation or a blurring operation on the portrait region.
As shown in fig. 1, which is an image schematic diagram of image processing provided by an embodiment of the present application, the left image is a photograph taken by a user. When the user wants to implement a "background blurring" function, that is, when a processor receives a trigger on a background blurring control, the portrait in the left image first needs to be segmented to obtain the portrait region shown in the right image, so that the background region other than the portrait region can then be blurred.
Related image segmentation techniques fall mainly into three categories. The first is conventional methods such as color-space methods, edge-feature-based methods, and wavelet-transform-based methods. However, such techniques have difficulty resolving regions of similar color, changes in light and shadow, or image noise; they can be partially improved by referencing a larger range of information, but at an increased computation cost. The second is semantic segmentation and instance segmentation based on deep learning. However, such deep learning algorithms depend mainly on the design of the network architecture and on the training data; in particular, the training data set should cover all target scenes as far as possible. Shooting behaviors are, however, changeable: viewing angles differ, portraits may be irregularly occluded (multi-person scenes), and texture features are uncertain, so many factors may affect the segmentation result and the stability is insufficient. The third is segmentation based on a depth map, which can distinguish objects at different distances but cannot distinguish different objects at the same distance. Moreover, depth generated from stereoscopic vision suffers from parallax occlusion, so the method is not completely reliable.
Therefore, based on the above-mentioned problems, the present application provides an image processing method, which can enhance the accuracy of feature regions in a segmented image and improve the reliability of image processing.
In one embodiment, as shown in fig. 2, a flowchart of an image processing method according to an embodiment of the present application is provided. The method may be implemented by a computer program and may be executed on an image processing apparatus based on the von Neumann architecture. The computer program may be integrated in an application or may run as a stand-alone tool-class application.
Specifically, the image processing method includes:
S101, acquiring a depth image of an image and a segmented image of a second feature region in the image according to a first feature region in the image, where the second feature region comprises the first feature region.
An image is a description or depiction of a natural or objective object (human, animal, plant, landscape, etc.); in other words, an image is a representation of such an object that contains information about the object being described. An image is usually a picture with a visual effect, and may be a photograph, a drawing, a clip art, a map, a satellite cloud image, a video frame, an X-ray film, an electroencephalogram, an electrocardiogram, or the like.
The first feature region in the image may be understood as a specific region within a region of interest of the image, i.e., the second feature region, which comprises the first feature region; either region may be a single pixel or a set of multiple pixels. For example, if the region of interest is a portrait region, the first feature region is a face region; if the region of interest is a vehicle region, the first feature region is the vehicle head region; if the region of interest is a chest region, the first feature region is a heart region.
It can be understood that in the following figures the first feature region is exemplified as a face region and the second feature region as a portrait region; the application also covers other types and contents of first feature regions and second feature regions, and the image processing method is applicable to any image that includes a first feature region and a second feature region.
The first feature region in the image may be acquired by any one or more feature extraction algorithms in the related art, for example, a face feature extraction algorithm, a deep learning network, a local binary pattern (LBP) algorithm, and the like; the present application is not limited in this respect. Fig. 3A is a schematic diagram of an image including a first feature region according to an embodiment of the present application, where the first feature region 101 is the image region corresponding to the face in the figure.
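As a non-limiting illustration, the face region serving as the first feature region may be extracted with an off-the-shelf detector. The following Python sketch uses OpenCV's Haar-cascade face detector; the detector choice, file name, and parameters are assumptions for illustration only and are not part of the method itself:

```python
# Hedged sketch: extract a first feature region (a face bounding box) with
# OpenCV's bundled Haar cascade. Parameters are illustrative defaults.
import cv2

def detect_face_region(image_bgr):
    """Return the bounding box (x, y, w, h) of the largest detected face, or None."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    # Keep the largest detection as the first feature region 101.
    return max(faces, key=lambda box: box[2] * box[3])
```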
A depth image of the image is also acquired. The depth image contains depth information, which may be understood as the distance value, or depth value, from any point in the image to the camera, i.e., the three-dimensional coordinate information of that point. The depth information may be obtained by any one or more methods such as stereoscopic image matching, a depth camera, or deep learning; the present application is not limited in this respect. Fig. 3B is a schematic diagram of the depth image corresponding to the image according to an embodiment of the present application, where areas of different depths correspond to different depth values.
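Purely as one hedged example of the stereoscopic-image-matching option, a disparity map (from which depth is proportional to focal length times baseline over disparity) can be computed from a rectified stereo pair; the matcher and its parameters below are illustrative assumptions:

```python
# Hedged sketch: disparity from a rectified 8-bit grayscale stereo pair using
# OpenCV's semi-global block matcher. All parameters are illustrative.
import cv2
import numpy as np

def disparity_map(left_gray, right_gray):
    matcher = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=64,  # must be divisible by 16
        blockSize=7)
    # StereoSGBM returns fixed-point disparity scaled by 16.
    return matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
```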
A segmented image of the second feature region in the image is acquired. The segmented image comprises the second feature region and at least one other region, and may be obtained by any one or more methods such as histogram thresholding, region growing, image-based random field models, or relaxation-labeling region segmentation; the present application is not limited in this respect.
As shown in fig. 3C, a schematic diagram of a segmented image including a second feature region is provided in an embodiment of the present application, where the second feature region 102 is a portrait region obtained by an image segmentation method, and the segmented image shown in fig. 3C further includes a first region 103, a second region 104, a third region 105, and a fourth region 106, the fourth region 106 being a background region obtained by the image segmentation method. It will be appreciated that the division into the multiple regions shown in fig. 3C is illustrative only.
S102, acquiring feature depth information corresponding to the first feature region according to the depth image.
The feature depth information corresponding to the first feature region may be understood as information characterizing the distance between the first feature region and the camera. It is obtained according to the depth image and the first feature region extracted from the image.
For example, for the first feature region 101 in the image shown in fig. 3A, the feature depth information corresponding to the first feature region 101 is obtained according to the depth map shown in fig. 3B.
S103, correcting the second feature region in the segmented image according to the feature depth information corresponding to the first feature region.
In related image segmentation techniques, the segmentation accuracy of the second feature region in the segmented image is often unsatisfactory; therefore, the second feature region in the segmented image is corrected according to the feature depth information corresponding to the first feature region.
The correction principle can be understood as follows. The first feature region and the second feature region are related: the two regions lie in the same plane, or the difference between their depth values is smaller than a preset threshold (for example, when a person takes a selfie, the face and the person lie in the same plane). Therefore, the second feature region is estimated in the depth map according to the feature depth information corresponding to the first feature region, the second feature region in the depth map is compared with the second feature region in the segmented image, and the second feature region in the segmented image is corrected according to the comparison result.
In one embodiment, the correction method comprises the steps of determining a target region belonging to the second feature region in the segmented image according to the feature depth information corresponding to the first feature region, and correcting the second feature region according to the target region. There may be one or more target regions, and each target region may be a single pixel or a set of multiple pixels.
For example, a second feature region and a background region are obtained from the depth image shown in fig. 3B according to the feature depth information corresponding to the first feature region 101. The second feature region in the depth image of fig. 3B is compared with the segmented image shown in fig. 3C, which includes the second feature region 102, the first region 103, the second region 104, the third region 105, and the fourth region 106, the fourth region 106 being a background region. The target region is determined to be the first region 103, the second region 104, and the third region 105, and the original second feature region 102 is corrected according to the target region.
In this embodiment, correcting the second feature region of the segmented image also avoids the problem that a second feature region obtained purely from the feature depth information of the first feature region in the depth image cannot distinguish different objects at the same distance. Specifically, the portrait region (i.e., the second feature region) obtained from the depth image shown in fig. 3B according to the depth feature information corresponding to the face region (i.e., the first feature region) includes a star-shaped decoration region 107, which may be understood as a number of hanging star-shaped small lamps among which the user is taking a selfie. When the portrait region corresponding to the depth image of fig. 3B is used to correct the portrait region 102 of the segmented image shown in fig. 3C, the star-shaped decoration region belongs to the background region 106 of the segmented image rather than to the portrait region 102, so the target region ultimately used to correct the portrait region 102 does not include the star-shaped decoration region. The application therefore resolves the inaccuracy that arises when an image is segmented only according to the depth image, namely that different objects at the same distance cannot be distinguished.
In one embodiment, correcting the second feature region based on the target region includes fusing the target region to the second feature region. For example, in the segmented image shown in fig. 3C, the first region 103, the second region 104, and the third region 105, as target regions, are fused to the second feature region 102.
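With regions represented as boolean masks, the fusion described above reduces to a per-pixel logical OR. The following sketch assumes this mask representation, which is an assumption rather than a data structure prescribed by the application:

```python
# Hedged sketch: fuse target regions into the second feature region, with all
# regions represented as boolean masks of the same image size.
import numpy as np

def fuse_regions(second_feature_mask, target_masks):
    fused = second_feature_mask.copy()
    for mask in target_masks:
        fused |= mask  # add each target region to the second feature region
    return fused

# Usage mirroring fig. 3C (mask names are hypothetical placeholders):
# corrected = fuse_regions(region_102, [region_103, region_104, region_105])
```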
In one embodiment, the image processing technology provided by the application can be applied to the field of video coding. Specifically, each video frame in a video is segmented into a background region and a region of interest (the second feature region) by the image processing technology, and the background region and the region of interest of each frame are coded independently; for example, a low-distortion coding mode is used for the region of interest and a simple, efficient coding mode for the background region. This embodiment improves video coding efficiency and saves storage space.
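As a much-simplified stand-in for real per-region video coding, the sketch below encodes the region of interest and the background of one frame at different JPEG qualities; an actual codec would instead vary quantization per block, and the masks and quality values here are assumptions:

```python
# Hedged sketch: unequal-quality coding of ROI and background for one frame.
# roi_mask is a boolean mask marking the region of interest (second feature region).
import cv2

def encode_frame_by_region(frame, roi_mask, roi_quality=95, bg_quality=40):
    roi = frame.copy()
    roi[~roi_mask] = 0  # keep only the region of interest
    bg = frame.copy()
    bg[roi_mask] = 0    # keep only the background
    _, roi_bytes = cv2.imencode(".jpg", roi, [cv2.IMWRITE_JPEG_QUALITY, roi_quality])
    _, bg_bytes = cv2.imencode(".jpg", bg, [cv2.IMWRITE_JPEG_QUALITY, bg_quality])
    return roi_bytes, bg_bytes  # low-distortion ROI, compact background
```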
According to the present application, the second feature region of the segmented image is corrected according to the feature depth information of the first feature region; that is, the depth information provides a reference basis for image segmentation, so that a more accurate second feature region is obtained. This mitigates the errors and instability caused in the related art by relying on depth information alone or on a single segmentation algorithm, avoids the effect of incorrect segmentation on the imaging performance of the image, and keeps the computation cost low while improving the reliability of image processing.
In one embodiment, as shown in fig. 4, a flowchart of an image processing method according to an embodiment of the present application is provided. The method may be implemented by a computer program and may be executed on an image processing apparatus based on the von Neumann architecture. The computer program may be integrated in an application or may run as a stand-alone tool-class application.
S201, acquiring a depth image of an image and a segmented image of a second feature region in the image according to a first feature region in the image, where the second feature region comprises the first feature region.
For S201, refer to S101 above; details are not repeated here.
S202, acquiring a depth image region corresponding to the first feature region according to the aligned depth image and image.
The depth image is aligned with the image, and the depth image region of the first feature region in the depth image is acquired according to the aligned depth image and image. Methods for aligning the image and the depth image include one or more of feature extraction methods such as ORB (Oriented FAST and Rotated BRIEF) and SURF (Speeded-Up Robust Features), feature matching algorithms, homography matrix computation, image warping algorithms, and the like; the present application is not limited in this respect.
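One hedged way to realize this alignment chain (feature extraction, matching, homography, warping) is sketched below with OpenCV; it assumes an 8-bit visualization of the depth map so that ORB can run on it, and all parameters are illustrative:

```python
# Hedged sketch: align the depth image to the color image via ORB features,
# brute-force matching, a RANSAC homography, and perspective warping.
import cv2
import numpy as np

def align_depth_to_image(depth_vis_8u, image_gray):
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(depth_vis_8u, None)
    kp2, des2 = orb.detectAndCompute(image_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:100]
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = image_gray.shape
    return cv2.warpPerspective(depth_vis_8u, H, (w, h))
```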
As shown in fig. 5, a schematic diagram of the alignment of an image and a depth image provided by an embodiment of the present application, the upper part of fig. 5 shows the image including the first feature region aligned with the depth image; the depth image region corresponding to the first feature region, namely the depth image region 201, is obtained according to the aligned depth image and image.
S203, acquiring feature depth information corresponding to the first feature region according to the depth image region corresponding to the first feature region.
In one embodiment, according to the depth image region corresponding to the first feature region, the depth values corresponding to the depth image region are averaged to obtain average depth information corresponding to the first feature region, and the feature depth information corresponding to the first feature region is obtained according to the average depth information.
For example, as shown in fig. 5, the depth image region 201 includes N pixels, each pixel corresponding to a depth value Xi, and the N depth values are averaged to obtain the average depth value corresponding to the depth image region 201: D = (X1 + X2 + ... + XN) / N.
In another embodiment, the depth values corresponding to the depth image region are weighted according to the depth image region corresponding to the first feature region to obtain depth information corresponding to the first feature region, and the feature depth information corresponding to the first feature region is obtained from that depth information. For example, the weight is inversely related to the distance between a target pixel and the central pixel of the first feature region: the farther a target pixel is from the central pixel, the smaller the weight given to its depth value in the calculation. The application also covers any other weighted calculation method.
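Both statistics can be sketched directly; the inverse-distance weighting function below is only one assumed instance of the inverse relation described above:

```python
# Hedged sketch: plain average depth D = (X1 + ... + XN) / N, and a weighted
# average whose weights shrink with distance from the region's central pixel.
import numpy as np

def average_depth(depth_region):
    return float(np.mean(depth_region))

def weighted_depth(depth_region):
    h, w = depth_region.shape
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.hypot(yy - (h - 1) / 2.0, xx - (w - 1) / 2.0)
    weights = 1.0 / (1.0 + dist)  # farther pixels receive smaller weights
    return float(np.sum(weights * depth_region) / np.sum(weights))
```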
In one embodiment, obtaining the feature depth information corresponding to the first feature region according to the average depth information includes performing a compensation calculation on the depth value corresponding to the average depth information to obtain the feature depth information corresponding to the first feature region. The compensation calculation performs range compensation on that depth value.
For example, as shown in fig. 5, the average depth value D of the depth image region 201 corresponding to the first feature region is obtained, and a range compensation calculation is performed on the average depth value D (the compensation values here are only an example) to obtain the depth feature information corresponding to the first feature region as the depth value range [0.9D, 1.1D]. Regions of the depth image whose depth values do not belong to this range are marked as null, yielding the depth image schematic diagram shown in fig. 6. Fig. 6 is a schematic diagram of a depth image including depth feature information according to an embodiment of the present application, and it includes the second feature region 202 corresponding to the depth feature information after the compensation calculation.
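A minimal sketch of this range compensation, using the plus-or-minus 10% margin of the [0.9D, 1.1D] example (the margin is itself only an example):

```python
# Hedged sketch: expand the average depth D into a depth-value range and mark
# pixels outside the range as null, as in the fig. 6 illustration.
import numpy as np

def depth_region_from_range(depth_map, d_avg, margin=0.1):
    lo, hi = (1.0 - margin) * d_avg, (1.0 + margin) * d_avg
    mask = (depth_map >= lo) & (depth_map <= hi)
    nulled = np.where(mask, depth_map, np.nan)  # out-of-range areas -> null
    return mask, nulled
```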
In this way, by obtaining the feature depth information corresponding to the first feature region from the average depth information, that is, by performing a compensation calculation on the depth value corresponding to the average depth information, the depth feature information corresponding to the first feature region is made more accurate and reliable.
S204, correcting the second feature region in the segmented image according to the feature depth information corresponding to the first feature region.
For S204, refer to S103 above; details are not repeated here.
According to the present application, the second feature region of the segmented image is corrected according to the feature depth information of the first feature region; that is, the depth information provides a reference basis for image segmentation, so that a more accurate second feature region is obtained. This mitigates the errors and instability caused in the related art by relying on depth information alone or on a single segmentation algorithm, avoids the effect of incorrect segmentation on the imaging performance of the image, and keeps the computation cost low while improving the reliability of image processing.
In one embodiment, as shown in fig. 7, a flowchart of an image processing method according to an embodiment of the present application is provided. The method may be implemented by a computer program and may be executed on an image processing apparatus based on the von Neumann architecture. The computer program may be integrated in an application or may run as a stand-alone tool-class application.
S301, determining a first feature region in the image, and acquiring a depth image of the image and a segmented image of a second feature region in the image.
For S301, refer to S101 above; details are not repeated here.
S302, obtaining feature depth information corresponding to the first feature region according to the depth image.
S302 refers to S202 and S203 described above, and will not be described here again.
S303A, determining a first candidate region whose confidence is in a first confidence range in the segmented image.
The segmented image comprises the second feature region and at least one other region, each region corresponding to a confidence that it belongs to the second feature region. The distribution of the confidence conforms to a normal distribution model, and the confidence is divided into three ranges: a first confidence range (0, T1), a second confidence range (T1, T2), and a third confidence range [T2, 1], where T1 and T2 are values set as required.
In the present application, when the second feature region in the segmented image is corrected, regions with a confidence of 0 or tending to 0 are not considered, and regions with a confidence in the third confidence range are corrected to the second feature region, or whether such regions belong to the second feature region is further calculated according to other related image segmentation algorithms and models.
Fig. 8 is a schematic diagram of correcting a second feature region according to depth feature information according to an embodiment of the present application. The segmented image shown in the upper half of fig. 8 includes the second feature region 102, the first region 103, the second region 104, the third region 105, and the fourth region 106. The confidence that the first region 103 belongs to the second feature region 102 is 0.5, the confidence of the second region 104 is 0.6, the confidence of the third region 105 is 0.2, and the confidence of the fourth region 106 is 0; the fourth region 106 is accordingly determined to be a background region.
A first candidate region whose confidence is in the first confidence range (0, T1) is determined in the segmented image, where T1 is a value set as desired. For example, if the first confidence range is (0, 0.3), then in the segmented image shown in the upper half of fig. 8 the first candidate region is the third region 105, with a confidence of 0.2. It will be appreciated that the first candidate region may be a single pixel or a set of pixels, and the segmented regions illustrated in the present application are merely examples.
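A minimal sketch of this partitioning, with illustrative thresholds T1 = 0.3 and T2 = 0.7 and the region confidences of the fig. 8 example (the names and values are taken from the example, not mandated):

```python
# Hedged sketch: partition regions of the segmented image by confidence range.
T1, T2 = 0.3, 0.7
confidences = {"region_103": 0.5, "region_104": 0.6,
               "region_105": 0.2, "region_106": 0.0}

first_candidates = [r for r, c in confidences.items() if 0.0 < c < T1]   # (0, T1)
second_candidates = [r for r, c in confidences.items() if T1 < c < T2]   # (T1, T2)
accepted = [r for r, c in confidences.items() if c >= T2]                # [T2, 1]
# first_candidates == ["region_105"]; second_candidates == ["region_103", "region_104"]
```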
S304A, determining a first target region belonging to the second feature region in the segmented image according to the feature depth information corresponding to the first feature region and the first candidate region.
The first target region belonging to the second feature region is determined in the first candidate region according to the feature depth information corresponding to the first feature region and the first candidate region. The condition for a first target region is that its corresponding depth image region is included in the region determined by the feature depth information; that is, its corresponding depth values belong to the depth value range in the feature depth information corresponding to the first feature region.
For example, the third region 105, which is the first candidate region in the segmented image shown in the upper half of fig. 8, is included in the region 301 determined in the depth image by the feature depth information corresponding to the first feature region, so the third region 105 belongs to the first target region.
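The membership test can be sketched as follows; since the text above states inclusion of the whole candidate region, the acceptance ratio parameter below (which would allow near-total rather than strictly total inclusion) is an added assumption:

```python
# Hedged sketch: a first candidate region is a first target region when its
# depth values lie inside the feature depth range [lo, hi] (e.g. [0.9D, 1.1D]).
import numpy as np

def is_first_target(depth_map, region_mask, lo, hi, ratio=1.0):
    vals = depth_map[region_mask]
    inside = np.mean((vals >= lo) & (vals <= hi))
    return inside >= ratio  # e.g. region 105 lies inside region 301 -> target
```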
S303B, determining a second candidate region whose confidence is in a second confidence range in the segmented image.
The first target region, among first candidate regions whose confidence is in the first confidence range, is obtained according to S303A and S304A; the second target region, among second candidate regions whose confidence is in the second confidence range, is obtained according to steps S303B, S304B, and S305B. It is understood that the present application also includes other fusion algorithms for determining the second target region; the following is only one possible implementation.
A second candidate region whose confidence is in the second confidence range (T1, T2) is determined in the segmented image, where T1 and T2 are values set as required. For example, if the second confidence range is (0.3, 0.7), then in the segmented image shown in the upper half of fig. 8 the second candidate regions are the first region 103, with a confidence of 0.5, and the second region 104, with a confidence of 0.6. It is understood that a second candidate region may be a single pixel or a set of pixels, and the segmented regions shown in the present application are merely examples.
S304B, obtaining feature depth information corresponding to the second candidate region according to the depth image, and recalculating the confidence of the second candidate region according to the feature depth information corresponding to the second candidate region.
The confidence of the second candidate region is recalculated according to the feature depth information corresponding to the second candidate region; that is, the confidence value of the second candidate region is fused with its corresponding depth value, and the fusion calculation yields the recalculated confidence.
For example, the absolute error α between the depth value of each region in the second candidate region and the average depth value of the face region (the region 202 in the upper half of the depth image of fig. 8) is calculated, and α is fused into the confidences of the first region 103 and the second region 104 included in the second candidate region according to the following formula:
S(x) = (1 - α) × M(x) + α × P(x);
where M(x) indicates whether the region x in the second candidate region belongs to the second feature region determined in the depth image by the feature depth information (if yes, M(x) = 1; if no, M(x) = 0), and P(x) represents the confidence of the region x in the second candidate region.
In other words, the confidence of each region in the second candidate region is weighted according to the error between the depth value corresponding to that region and the average depth value of the face region (i.e., the first feature region): the greater the error, the lower the resulting confidence, and vice versa.
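A direct sketch of this recalculation and of the subsequent thresholding of S305B; the normalization of α to [0, 1] and the example threshold are assumptions:

```python
# Hedged sketch: recalculate confidence S(x) = (1 - alpha) * M(x) + alpha * P(x)
# and keep regions whose recalculated confidence exceeds a correction threshold.
def recalculate_confidence(alpha, m_x, p_x):
    """alpha: normalized depth error vs. the face region's average depth;
    m_x: 1 if region x lies in the depth-determined second feature region, else 0;
    p_x: prior confidence of region x in the segmented image."""
    return (1.0 - alpha) * m_x + alpha * p_x

def second_targets(regions, correction_threshold=0.55):
    """regions: dict name -> (alpha, m_x, p_x); returns the second target regions."""
    return [name for name, (a, m, p) in regions.items()
            if recalculate_confidence(a, m, p) > correction_threshold]
```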
For example, the second candidate regions are the first region 103, with a confidence of 0.5, and the second region 104, with a confidence of 0.6; the confidence of the first region 103 is recalculated as 0.6 according to the depth value corresponding to the first region 103, and the confidence of the second region 104 is recalculated as 0.7 according to the depth value corresponding to the second region 104.
It can be understood that the above is only one example of a fusion calculation that fuses the confidence value with the depth value to recalculate the confidence of the second candidate region.
S305B, determining a second target region belonging to the second feature region in the segmented image according to the recalculated confidence of the second candidate region.
According to the recalculated confidence of the second candidate region, regions whose confidence is higher than a correction threshold are determined to be second target regions belonging to the second feature region. For example, the second candidate regions are the first region 103 and the second region 104; the confidence of the first region 103 is recalculated as 0.6 from its depth value, and the confidence of the second region 104 is recalculated as 0.7 from its depth value. With regions whose confidence is higher than 0.55 determined to be second target regions, the first region 103 and the second region 104 are both determined to be second target regions.
S306, correcting the second feature region according to the target region.
In one embodiment, the second feature region is corrected based on the first target region and the second target region. In another embodiment, the target region comprises only the first target region, or only the second target region, and the second feature region is corrected according to that target region.
The second feature region is corrected according to the target region by fusing the target region to the second feature region. As shown in fig. 8, the third region 105, as the first target region, and the first region 103 and the second region 104, as the second target regions, are fused to the second feature region 102, resulting in the second feature region 302 shown in the lower half of fig. 8.
By calculating candidate regions whose confidences lie in different confidence ranges in the segmented image separately, and by fusing the depth information of the depth image into the confidence calculation of the candidate regions, the accuracy of correcting the second feature region in the segmented image is further improved.
According to the present application, the second feature region of the segmented image is corrected according to the feature depth information of the first feature region; that is, the depth information provides a reference basis for image segmentation, so that a more accurate second feature region is obtained. This mitigates the errors and instability caused in the related art by relying on depth information alone or on a single segmentation algorithm, avoids the effect of incorrect segmentation on the imaging performance of the image, and keeps the computation cost low while improving the reliability of image processing.
In one embodiment, as shown in fig. 9, a flowchart of an image processing method according to an embodiment of the present application is provided. The method may be implemented by a computer program and may be executed on an image processing apparatus based on the von Neumann architecture. The computer program may be integrated in an application or may run as a stand-alone tool-class application.
Specifically, the image processing method includes:
S401, acquiring a depth image of an image and a segmented image of a second feature region in the image according to a first feature region in the image, where the second feature region comprises the first feature region.
For S401, refer to S101 above; details are not repeated here.
S402, acquiring a depth image region corresponding to the first feature region according to the aligned depth image and image.
For S402, refer to S202 above; details are not repeated here.
S403, acquiring feature depth information corresponding to the first feature region according to the depth image region corresponding to the first feature region.
According to the depth image region corresponding to the first feature region, the depth values corresponding to the depth image region are averaged to obtain average depth information corresponding to the first feature region, and a compensation calculation is performed on the depth value corresponding to the average depth information to obtain the feature depth information corresponding to the first feature region.
For S403, refer to S203 above; details are not repeated here.
S404A, determining a first candidate region whose confidence is in a first confidence range in the segmented image.
For S404A, refer to S303A above; details are not repeated here.
S405A, determining a first target region belonging to the second feature region in the segmented image according to the feature depth information corresponding to the first feature region and the first candidate region.
For S405A, refer to S304A above; details are not repeated here.
S404B, determining a second candidate region whose confidence is in a second confidence range in the segmented image.
For S404B, refer to S303B above; details are not repeated here.
S405B, obtaining feature depth information corresponding to the second candidate region according to the depth image, and recalculating the confidence of the second candidate region according to the feature depth information corresponding to the second candidate region.
For S405B, refer to S304B above; details are not repeated here.
S406B, determining a second target region belonging to the second feature region in the segmented image according to the recalculated confidence of the second candidate region.
For S406B, refer to S305B above; details are not repeated here.
S407, correcting the second feature region according to the target region.
For S407, refer to S306 above; details are not repeated here.
According to the present application, the second feature region of the segmented image is corrected according to the feature depth information of the first feature region; that is, the depth information provides a reference basis for image segmentation, so that a more accurate second feature region is obtained. This mitigates the errors and instability caused in the related art by relying on depth information alone or on a single segmentation algorithm, avoids the effect of incorrect segmentation on the imaging performance of the image, and keeps the computation cost low while improving the reliability of image processing.
The following are apparatus embodiments of the present application, which may be used to perform the method embodiments of the present application. For details not disclosed in the apparatus embodiments, refer to the method embodiments of the present application.
Referring to fig. 10, a schematic diagram of an image processing apparatus according to an exemplary embodiment of the present application is shown. The image processing apparatus may be implemented in whole or in part by software, hardware, or a combination of both. The image processing apparatus includes an image acquisition module 1001, an information acquisition module 1002, and an image correction module 1003.
An image acquisition module 1001, configured to determine a first feature region in an image, and acquire a depth image of the image and a segmented image of a second feature region in the image, where the second feature region comprises the first feature region;
an information acquisition module 1002, configured to acquire feature depth information corresponding to the first feature region according to the depth image;
and an image correction module 1003, configured to correct the second feature region in the segmented image according to the feature depth information corresponding to the first feature region.
In one embodiment, the image correction module 1003 includes:
a target determining unit, configured to determine a target region belonging to the second feature region in the segmented image according to the feature depth information corresponding to the first feature region;
and a target correction unit, configured to correct the second feature region according to the target region.
In one embodiment, the target determining unit includes:
a first candidate subunit, configured to determine a first candidate region in the segmented image whose confidence is in a first confidence range;
and a first target subunit, configured to determine a first target region belonging to the second feature region in the segmented image according to the feature depth information corresponding to the first feature region and the first candidate region.
In one embodiment, the target determining unit includes:
a second candidate subunit, configured to determine a second candidate region in the segmented image whose confidence is in a second confidence range;
a second computing subunit, configured to acquire feature depth information corresponding to the second candidate region according to the depth image, and recalculate the confidence of the second candidate region according to the feature depth information corresponding to the second candidate region;
and a second target subunit, configured to determine a second target region belonging to the second feature region in the segmented image according to the recalculated confidence of the second candidate region.
In one embodiment, the second computing subunit is specifically configured to:
determine weight information of the feature depth information corresponding to the second candidate region according to the feature depth information corresponding to the first feature region;
and recalculate the confidence of the second candidate region according to the weight information of the feature depth information corresponding to the second candidate region.
In one embodiment, the target correction unit is specifically configured to:
fuse the target region to the second feature region.
In one embodiment, the information acquisition module 1002 includes:
an alignment unit, configured to acquire a depth image region corresponding to the first feature region according to the aligned depth image and image;
and an acquisition unit, configured to acquire the feature depth information corresponding to the first feature region according to the depth image region corresponding to the first feature region.
In one embodiment, the acquisition unit includes:
an average calculating subunit, configured to average the depth values corresponding to the depth image region according to the depth image region corresponding to the first feature region, to obtain average depth information corresponding to the first feature region;
and a feature acquisition subunit, configured to acquire the feature depth information corresponding to the first feature region according to the average depth information.
In one embodiment, the feature acquisition subunit is specifically configured to:
perform a compensation calculation on the depth value corresponding to the average depth information to obtain the feature depth information corresponding to the first feature region.
According to the present application, the second feature region of the segmented image is corrected according to the feature depth information of the first feature region; that is, the depth information provides a reference basis for image segmentation, so that a more accurate second feature region is obtained. This mitigates the errors and instability caused in the related art by relying on depth information alone or on a single segmentation algorithm, avoids the effect of incorrect segmentation on the imaging performance of the image, and keeps the computation cost low while improving the reliability of image processing.
It should be noted that, when the image processing apparatus provided in the foregoing embodiments executes the image processing method, the division into the functional modules described above is merely an example; in practical applications, the above functions may be assigned to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the image processing apparatus and the image processing method provided in the foregoing embodiments belong to the same concept; the detailed implementation process is described in the method embodiments and is not repeated here.
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
An embodiment of the present application further provides a computer storage medium. The computer storage medium may store a plurality of instructions suitable for being loaded and executed by a processor to perform the image processing method of the embodiments shown in fig. 1 to fig. 9; for the specific execution process, refer to the description of the embodiments shown in fig. 1 to fig. 9, which is not repeated here.
The present application further provides a computer program product storing at least one instruction, the at least one instruction being loaded and executed by a processor to perform the image processing method of the embodiments shown in fig. 1 to fig. 9; for the specific execution process, refer to the description of the embodiments shown in fig. 1 to fig. 9, which is not repeated here.
Referring to fig. 11, a schematic structural diagram of an electronic device is provided in an embodiment of the present application. As shown in fig. 11, the electronic device 1100 may include a processor 1101, a network interface 1104, a user interface 1103, a memory 1105, a communication bus 1102.
The communication bus 1102 is used to enable connection and communication among these components.
The user interface 1103 may include a display screen (Display) and a camera (Camera); optionally, the user interface 1103 may further include a standard wired interface and a wireless interface.
The network interface 1104 may optionally include a standard wired interface and a wireless interface (e.g., a Wi-Fi interface).
The processor 1101 may comprise one or more processing cores. The processor 1101 connects the various parts of the electronic device 1100 using various interfaces and lines, and performs the various functions of the electronic device 1100 and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 1105 and invoking the data stored in the memory 1105. Optionally, the processor 1101 may be implemented in at least one hardware form of digital signal processing (DSP), field-programmable gate array (FPGA), or programmable logic array (PLA). The processor 1101 may integrate one or a combination of a central processing unit (CPU), a graphics processing unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU renders and draws the content to be displayed by the display screen; and the modem handles wireless communication. It will be appreciated that the modem may also not be integrated into the processor 1101 and may be implemented by a separate chip.
The memory 1105 may include a random access memory (RAM) or a read-only memory (ROM). Optionally, the memory 1105 includes a non-transitory computer-readable storage medium. The memory 1105 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 1105 may include a program storage area and a data storage area: the program storage area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the above method embodiments, and the like; the data storage area may store the data referred to in the above method embodiments. Optionally, the memory 1105 may also be a storage device located remotely from the processor 1101. As shown in fig. 11, as a computer storage medium, the memory 1105 may include an operating system, a network communication module, a user interface module, and an image correction application program.
In the electronic device 1100 shown in fig. 11, the user interface 1103 is mainly used as an interface for providing input for a user and acquiring data input by the user, while the processor 1101 may be used to call the image processing application program stored in the memory 1105 and specifically perform the following operations:
determining a first feature region in an image, and acquiring a depth image of the image and a segmented image of a second feature region in the image, where the second feature region comprises the first feature region;
acquiring feature depth information corresponding to the first feature region according to the depth image;
and correcting the second feature region in the segmented image according to the feature depth information corresponding to the first feature region.
In one embodiment, when correcting the second feature region in the segmented image according to the feature depth information corresponding to the first feature region, the processor 1101 specifically performs:
determining a target region belonging to the second feature region in the segmented image according to the feature depth information corresponding to the first feature region;
and correcting the second feature region according to the target region.
In one embodiment, when determining the target region belonging to the second feature region in the segmented image according to the feature depth information corresponding to the first feature region, the processor 1101 specifically performs:
determining a first candidate region of the segmented image whose confidence is in a first confidence range;
and determining a first target region belonging to the second feature region in the segmented image according to the feature depth information corresponding to the first feature region and the first candidate region.
In one embodiment, when determining the target region belonging to the second feature region in the segmented image according to the feature depth information corresponding to the first feature region, the processor 1101 specifically performs:
determining a second candidate region of the segmented image whose confidence is in a second confidence range;
acquiring feature depth information corresponding to the second candidate region according to the depth image, and recalculating the confidence of the second candidate region according to the feature depth information corresponding to the second candidate region;
and determining a second target region belonging to the second feature region in the segmented image according to the recalculated confidence of the second candidate region.
In one embodiment, when recalculating the confidence of the second candidate region according to the feature depth information corresponding to the second candidate region, the processor 1101 specifically performs:
determining weight information of the feature depth information corresponding to the second candidate region according to the feature depth information corresponding to the first feature region;
and recalculating the confidence of the second candidate region according to the weight information of the feature depth information corresponding to the second candidate region.
In one embodiment, when correcting the second feature region according to the target region, the processor 1101 specifically performs:
fusing the target region to the second feature region.
In one embodiment, when acquiring the feature depth information corresponding to the first feature region according to the depth image, the processor 1101 specifically performs:
acquiring a depth image region corresponding to the first feature region according to the aligned depth image and image;
and acquiring the feature depth information corresponding to the first feature region according to the depth image region corresponding to the first feature region.
In one embodiment, when acquiring the feature depth information corresponding to the first feature region according to the depth image region corresponding to the first feature region, the processor 1101 specifically performs:
averaging the depth values corresponding to the depth image region according to the depth image region corresponding to the first feature region, to obtain average depth information corresponding to the first feature region;
and acquiring the feature depth information corresponding to the first feature region according to the average depth information.
In one embodiment, when acquiring the feature depth information corresponding to the first feature region according to the average depth information, the processor 1101 specifically performs:
performing a compensation calculation on the depth value corresponding to the average depth information to obtain the feature depth information corresponding to the first feature region.
According to the present application, the second feature region of the segmented image is corrected according to the feature depth information of the first feature region; that is, the depth information provides a reference basis for image segmentation, so that a more accurate second feature region is obtained. This mitigates the errors and instability caused in the related art by relying on depth information alone or on a single segmentation algorithm, avoids the effect of incorrect segmentation on the imaging performance of the image, and keeps the computation cost low while improving the reliability of image processing.
Those skilled in the art will appreciate that all or part of the processes of the methods of the above embodiments may be implemented by a computer program stored on a computer-readable storage medium, and the program, when executed, may include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a read-only memory, a random access memory, or the like.
The foregoing disclosure is illustrative of the present application and is not to be construed as limiting the scope of the application, which is defined by the appended claims.