
WO2018082185A1 - Image processing method and apparatus - Google Patents

Image processing method and apparatus

Info

Publication number
WO2018082185A1
WO2018082185A1 (PCT/CN2016/113288)
Authority
WO
WIPO (PCT)
Prior art keywords
image
pixel
depth
area
value
Prior art date
Application number
PCT/CN2016/113288
Other languages
English (en)
French (fr)
Inventor
杨铭
Original Assignee
广州视源电子科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广州视源电子科技股份有限公司 filed Critical 广州视源电子科技股份有限公司
Publication of WO2018082185A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4092Image resolution transcoding, e.g. by using client-server architectures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20024Filtering details
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Definitions

  • the present invention relates to the field of image processing technologies, and in particular, to an image processing method and apparatus.
  • the focus of beautification is often the foreground part of the image.
  • the main beautification object is the face in the foreground.
  • higher image resolution and real-time performance are often required, for example for skin beautification of video images in live streaming, magic mirror and similar applications.
  • the present invention provides an image processing method and apparatus to solve the problems that images cannot be processed region by region and that high resolution and good real-time performance cannot be supported simultaneously, thereby realizing high-precision partitioned image processing that is both efficient and high-quality.
  • the embodiment of the present invention adopts the following technical solutions:
  • an embodiment of the present invention provides an image processing method, including:
  • the processed area image is fused with the RGB image.
  • the embodiment of the present invention further provides an image processing apparatus, including:
  • An initial image acquisition module configured to acquire a depth image and an RGB image in the same scene
  • An area image segmentation module configured to divide the RGB image according to a depth value in the depth image to obtain at least two area images
  • a downsampling module configured to perform a down sampling process on at least one of the at least two area images
  • a recursive bilateral filtering module configured to perform recursive bilateral filtering processing on the area image after the downsampling process
  • An image fusion module is configured to fuse the processed area image with the RGB image.
  • the depth image and the RGB image in the same scene are acquired, and the RGB image is segmented according to the depth value in the depth image to obtain at least two area images, and at least one of the at least two area images is downsampled.
  • the recursive bilateral filtering process is performed on the region image after the downsampling process, and the processed region image is merged with the RGB image.
  • the recursive bilateral filtering further increases image processing speed and achieves a good real-time effect; in particular, for high-resolution video it achieves a good processing effect on high-definition video images while guaranteeing processing speed, so that stuttering is prevented when processing high-definition video and the user experience is improved.
  • FIG. 1 is a schematic flow chart of an image processing method according to Embodiment 1 of the present invention.
  • FIG. 2A is a schematic flowchart of an image processing method according to Embodiment 2 of the present invention.
  • FIG. 2B is a schematic flow chart of an alternative embodiment of S220 of FIG. 2A;
  • FIG. 2C is a schematic flow chart of an alternative embodiment of S260 of FIG. 2A;
  • FIG. 2D is a schematic diagram of segmenting an RGB image into two area images according to Embodiment 2 of the present invention.
  • FIG. 3 is a schematic structural diagram of an image processing apparatus according to Embodiment 3 of the present invention.
  • FIG. 4A is a schematic structural diagram of an image processing apparatus according to Embodiment 4 of the present invention.
  • FIG. 4B is a block diagram of an alternative embodiment of the area image segmentation module 420 of FIG. 4A;
  • FIG. 4C is a block diagram of an alternative embodiment of the image fusion module 460 of FIG. 4A.
  • FIG. 1 is a schematic flowchart of an image processing method according to Embodiment 1 of the present invention.
  • the method can be executed by a terminal such as a smart phone or a personal computer, and is suitable for a scene in which an image is beautified.
  • an RGB image can be understood as a color image; the colors of an RGB image are obtained by varying the three reference color channels of red (R), green (G) and blue (B) and superimposing them on one another to produce a wide variety of colors.
  • the depth image can characterize the color resolution of the image through the image depth, i.e. the number of bits used to store each pixel; the magnitude of the pixel values in the depth image reflects the distance of the depth of field. In this embodiment, acquiring the depth image and the RGB image of the same scene makes richer scene information available, so that more accurate image processing can be performed on the image data.
  • the acquired image is related to factors such as shooting distance, shooting angle, shooting environment, and shooting time. Therefore, the depth image and the RGB image in the same scene may be obtained by acquiring the depth image and the RGB image of the same scene at the same time.
  • the depth image may be acquired by a depth camera, and the RGB image may be acquired by an ordinary camera; of course, the depth image and the RGB image may also be acquired by the same camera.
  • S120 Segment the RGB image according to the depth value in the depth image to obtain at least two area images.
  • segmenting the original RGB image according to the depth values in the depth image to obtain the at least two area images may be done as follows: determine second pixel values for the original RGB image according to the first pixel values of the depth image, and then segment the original RGB image according to the second pixel values to obtain at least two area images.
  • the segmented area images may also be labeled, for example labeling each area image based on the pixel values of its pixels. Optionally, the pixels of the same image area may be labeled with the same value.
  • the original RGB image may be segmented according to the depth values in the depth image to obtain two area images, wherein the pixel values of the pixels of each area image may be represented by 1 or 0 respectively; 1 represents the foreground area and 0 represents the background area. An image can have only one background area, but may have multiple foreground areas.
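The foreground/background labelling described above can be sketched as follows. This is a minimal illustration under assumed conventions (depth greater than a threshold marks foreground, as in S223 of Embodiment 2); the function and array names are hypothetical, not from the patent.

```python
import numpy as np

def segment_by_depth(depth, threshold):
    """Binary segmentation mask: 1 = foreground (depth > threshold), 0 = background."""
    return (depth > threshold).astype(np.uint8)

# Toy 3x3 depth map; only the upper-left block exceeds the threshold.
depth = np.array([[5, 5, 1],
                  [5, 9, 1],
                  [1, 1, 1]])
mask = segment_by_depth(depth, 3)
# mask marks pixels with depth > 3 as foreground (1) and the rest as background (0).
```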
  • S130 Perform downsampling processing on at least one of the at least two area images.
  • image downsampling can shrink an image and generate a thumbnail of the original, converting a high-resolution image into a low-resolution one, so that a high-resolution input can be processed at the speed of a low-resolution image.
  • one or several of the area images may be selected for downsampling according to the actual segmentation result and the user's personalized needs.
  • alternatively, the corresponding area images may be downsampled according to a default setting.
  • the foreground area and/or the background area can be processed according to whether the area image's pixel values are 1 or 0, and according to the user's personalized needs.
  • preferably, the foreground area whose pixel values are 1 may be downsampled by default.
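As one concrete reading of the downsampling step, a block-averaging reduction halves the resolution along each axis. This is only a sketch; the patent does not prescribe a particular downsampling algorithm, and the names are illustrative.

```python
import numpy as np

def downsample(img, factor=2):
    """Reduce resolution by averaging each factor x factor block of pixels."""
    h, w = img.shape
    h, w = h - h % factor, w - w % factor          # crop to a multiple of factor
    blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))                # average within each block

img = np.arange(16, dtype=float).reshape(4, 4)     # toy 4x4 "area image"
small = downsample(img)                            # 4x4 -> 2x2
```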
  • it should be noted that S130 can also be executed before S120 and would likewise improve processing speed; however, in S120 the RGB image is segmented according to pixel values, i.e. the image resolution affects the segmentation. Executing S130 first would change the resolution of the original image, which may introduce deviations when the image is segmented; therefore, in this embodiment, S120 is preferably performed before S130.
  • S140 Perform recursive bilateral filtering processing on the area image after the downsampling process.
  • although downsampling is employed in S130, the bilateral filtering technique of the prior art still cannot process high-definition video images in real time.
  • to maximize the image processing speed, this embodiment uses recursive bilateral filtering in place of the prior-art bilateral filtering technique.
  • in conventional bilateral filtering, the filter is composed of two functions: one determines the filter coefficients from the geometric spatial distance between pixels, S_{k,i} = S(k, i), and the other determines them from the pixel-value difference between pixels, R_{k,i} = R(x_k, x_i).
  • recursive bilateral filtering is similar to conventional bilateral filtering; both compute the output value for the i-th pixel according to
  •     y_i = Σ_k S_{k,i} R_{k,i} x_k / Σ_k S_{k,i} R_{k,i}
  • where R_{k,i} represents the pixel-value difference term between pixel k and pixel i, S_{k,i} the geometric spatial distance term between pixel k and pixel i, x_k the pixel value of pixel k, and y_i the output value after processing the i-th pixel. The two methods differ in how R_{k,i} is computed; this difference is what allows the filter to be evaluated recursively, greatly reducing computation time for high-resolution images.
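For orientation, the output formula above can be evaluated directly, i.e. as conventional (non-recursive) bilateral filtering. The sketch below does so on a 1-D signal with Gaussian choices for the S and R terms; the patent fixes neither these kernels nor the recursive computation of R_{k,i}, so this only illustrates the shared output formula.

```python
import math

def bilateral_1d(x, sigma_s=1.0, sigma_r=10.0):
    """Direct evaluation of y_i = sum_k S*R*x_k / sum_k S*R on a 1-D signal."""
    y = []
    for i, xi in enumerate(x):
        num = den = 0.0
        for k, xk in enumerate(x):
            s = math.exp(-((k - i) ** 2) / (2 * sigma_s ** 2))    # spatial term S_{k,i}
            r = math.exp(-((xk - xi) ** 2) / (2 * sigma_r ** 2))  # range term R_{k,i}
            num += s * r * xk
            den += s * r
        y.append(num / den)
    return y

# A sharp edge between two flat regions is smoothed within each side but
# preserved across the edge, the defining property of bilateral filtering.
signal = [10.0, 10.0, 10.0, 100.0, 100.0, 100.0]
out = bilateral_1d(signal)
```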
  • since the processed area image may cover every area image of the full picture or only the foreground area, in order to balance computation efficiency and fusion quality, the processed area image may optionally be fused with the area images that have not undergone any processing. This technical solution is particularly suitable when only individual area images are processed after the RGB image is segmented.
  • Image fusion can include pixel level fusion, feature level fusion, and decision level fusion.
  • pixel-level fusion includes spatial-domain algorithms and transform-domain algorithms. Spatial-domain algorithms offer several fusion rules, such as logical filtering, gray-scale weighted averaging and contrast modulation; transform-domain algorithms include pyramid decomposition fusion and the wavelet transform method.
  • fusing the processed area image with the RGB image may specifically mean fusing based on the pixel values of the processed area image and the pixel values of the RGB image, so as to preserve as much of the original data as possible.
  • in this technical solution, a depth image and an RGB image of the same scene are acquired; the RGB image is segmented according to the depth values in the depth image to obtain at least two area images; at least one of the at least two area images is downsampled; recursive bilateral filtering is applied to the downsampled area image; and the processed area image is fused with the RGB image.
  • segmenting the RGB image allows the same or different processing of different areas, meeting users' personalized needs; downsampling and recursive bilateral filtering greatly increase processing speed, with an especially marked effect for high-resolution images; fusing the processed area image with the original RGB image effectively reduces distortion after processing and preserves the integrity of the image information, while balancing processing efficiency and fusion quality.
  • the above technical solution can realize the personalized requirement of the user, greatly improve the image processing speed, and the image information distortion is small, especially when processing the high-definition video image, the picture jamming phenomenon is greatly reduced.
  • FIG. 2A is a schematic flowchart of an image processing method according to Embodiment 2 of the present invention.
  • the main difference between this embodiment and Embodiment 1 is that upsampling of the area image after recursive bilateral filtering is added, and optional implementations of S220 and S260 in FIG. 2A are further provided.
  • S220 Segment the RGB image according to the depth value in the depth image to obtain at least two area images.
  • S220 may include the following four steps S221, S222, S223 and S224, where:
  • S221 Calculate a mapping matrix of the first coordinates of each pixel of the depth image and the second coordinates of each pixel of the RGB image.
  • the mapping matrix between the first coordinates and the second coordinates may be calculated by prior-art methods, for example according to the parameters of the cameras that acquire the depth image and the RGB image; alternatively, the portions where features of the depth image and the RGB image coincide may be acquired, the correspondence between the features of the two images obtained from those coinciding portions, and the mapping matrix then calculated from that correspondence.
  • S222 Determine a second depth value of each pixel of the RGB image according to the second coordinate, the mapping matrix, and the first depth value of each pixel of the depth image.
  • specifically, the coordinates of each pixel of the RGB image may be multiplied by the mapping matrix to obtain the coordinates of the corresponding pixel of the depth image; that is, the depth image is aligned with the RGB image, and the second depth value of each pixel of the RGB image is obtained from the depth value of the corresponding pixel of the depth image.
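The coordinate mapping of S221/S222 can be illustrated with a 3x3 matrix applied to homogeneous pixel coordinates. The matrix below is a made-up translation, standing in for whatever calibrated mapping the two cameras would actually yield; all names are hypothetical.

```python
import numpy as np

M = np.array([[1.0, 0.0, 1.0],    # hypothetical mapping matrix: shift x by +1
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

depth = np.array([[0, 1, 2, 3],
                  [4, 5, 6, 7],
                  [8, 9, 10, 11]], dtype=float)   # toy depth image

def second_depth(x, y):
    """Second depth value of RGB pixel (x, y): map (x, y, 1) through M, then look up."""
    u, v, w = M @ np.array([x, y, 1.0])
    u, v = int(round(u / w)), int(round(v / w))
    return depth[v, u]          # rows indexed by v (y), columns by u (x)
```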
  • S223: Determine whether the second depth value is greater than a preset depth threshold; if so, take the position of the pixel corresponding to the second depth value as the foreground area, and if not, take the position of the pixel corresponding to the second depth value as the background area.
  • each pixel of the RGB image may be traversed, its second depth value compared with the preset depth threshold (or depth range), and the RGB image split into two area images according to the comparison result.
  • the depth threshold can be a specific point value or a range.
  • the specific value or interval of the depth threshold may be set by the user according to the image information of the acquired RGB image and actual needs, which is not limited herein.
  • segmenting the original RGB image according to the second depth values and a preset rule to obtain at least two area images may specifically be: determine whether the second depth value is greater than the preset depth threshold; if so, take the position of the pixel corresponding to the second depth value as the foreground area, and if not, take it as the background area; then segment the original RGB image according to the foreground area and the background area to obtain two area images.
  • alternatively, the positions of the pixels whose second depth values lie within the depth threshold range may be acquired as the foreground area, with the set of positions of the remaining pixels as the background area.
  • taking the case where the RGB image is segmented into two area images as an example, if the head portrait is to serve as the foreground area and the rest as the background area, the depth threshold range may be set according to the actual situation; it is then determined whether the second depth value of each pixel in the RGB image falls within the preset depth threshold range. If so, the pixel is taken as a pixel of the foreground area, i.e. the head portrait; if not, it is taken as a pixel of the background area.
  • the pixel values of the pixels in the foreground area may all be marked as 1 (the black area in FIG. 2D), and the set of remaining pixels taken as the background area, whose pixel values may all be marked as 0 (the grid area in FIG. 2D). The foreground area and/or background area can then be processed according to the pixel values of the respective pixels. It should be noted that the grid in FIG. 2D is only used to indicate that the pixel values of the background area are all marked as 0; it does not mean that a grid actually exists in the background area.
  • in addition, affected by noise in the depth image, the obtained area image to be processed may contain some hole-like regions; these can be filled by dilation and erosion operations to obtain the final area image.
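The dilation-and-erosion hole filling mentioned above amounts to a morphological closing. A small self-contained version (3x3 structuring element, binary masks; the names are illustrative, not from the patent) might look like:

```python
import numpy as np

def dilate(mask):
    """Binary dilation with a 3x3 square structuring element (zero padding)."""
    h, w = mask.shape
    p = np.pad(mask, 1)
    out = np.zeros_like(mask)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out |= p[dy:dy + h, dx:dx + w]   # OR of all 8-neighbour shifts
    return out

def erode(mask):
    """Binary erosion, via the complement of dilating the complement."""
    return (1 - dilate(1 - mask)).astype(np.uint8)

mask = np.ones((5, 5), dtype=np.uint8)   # foreground mask ...
mask[2, 2] = 0                           # ... with a one-pixel noise hole
closed = erode(dilate(mask))             # closing fills the hole
```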
  • the at least two area images may be two, three or more area images.
  • the advantage of this arrangement is that partitioned or partial processing of the image can be realized, which improves the efficiency of image processing and enriches its effects.
  • S230 Perform downsampling processing on at least one of the at least two area images.
  • S240 Perform recursive bilateral filtering processing on the area image after the downsampling process.
  • S250 Perform upsampling processing on the region image after the recursive bilateral filtering process.
  • since the downsampling in S230 lowers the resolution of the image, restoring the resolution before fusion gives a better fusion result with the original image.
  • that is, the low-resolution area image obtained after recursive bilateral filtering is converted into an image with the same resolution as the original.
  • specifically, the area image after recursive bilateral filtering is enlarged by upsampling, so that the resolution of the processed area image is the same as that of the original image.
  • image upsampling usually uses interpolation, i.e. inserting new elements between the pixels of the recursively bilaterally filtered area image using a suitable interpolation algorithm.
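The simplest such interpolation is pixel repetition (nearest-neighbour). A sketch, assuming an integer scale factor matching the earlier downsampling:

```python
import numpy as np

def upsample_nearest(img, factor=2):
    """Enlarge an image by repeating each pixel factor times along both axes."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

small = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
big = upsample_nearest(small)   # 2x2 -> 4x4, back at the "original" resolution
```

In practice bilinear or bicubic interpolation would give smoother results; the patent only requires that the processed area image be restored to the original resolution.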
  • S260 may include three steps S261, S262, and S263, where:
  • S261 Perform Gaussian smoothing on the processed area image to obtain smooth pixel values of each pixel of the area image.
  • it should be noted that the size of the segmented RGB image may be consistent with that of the original RGB image, with the pixels of the segmented RGB image corresponding one-to-one to those of the original; furthermore, the pixel value of each pixel in the segmented RGB image can use only 1 or 0 to indicate whether the corresponding position of the RGB image belongs to the area image to be processed.
  • for example, 1 can indicate the foreground area and 0 the background area.
  • Gaussian smoothing is performed on the segmented image, after which the pixel value a(x, y) of the smoothed segmentation image may take any value between 0 and 1. This operation can effectively remove noise in the image and ensure image quality.
  • S262 Calculate the fused pixel value of the fused image according to the smoothed pixel value and the pixel value of each pixel of the RGB image.
  • the smoothed pixel values and the pixel values of each pixel of the RGB image may be weighted and summed to calculate the fused pixel values of the fused image. For example, suppose the smoothed pixel value of the processed area image is I_B(x, y) and the pixel value of each pixel of the original RGB image is I_O(x, y).
  • a(x, y) represents the weight of the smoothed pixel value. The specific value of a(x, y) can be set according to the actual situation; it can be a fixed value or be calculated from other parameters, which is not limited herein.
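One natural reading of this weighted sum, with the smoothed mask a(x, y) as the per-pixel weight, is fused = a·I_B + (1 − a)·I_O. The sketch below assumes that reading (the patent does not spell the formula out), with toy single-channel values:

```python
import numpy as np

def fuse(a, processed, original):
    """Per-pixel weighted sum: a * processed + (1 - a) * original."""
    return a * processed + (1.0 - a) * original

a = np.array([[1.0, 0.5],
              [0.0, 0.25]])           # Gaussian-smoothed 0/1 mask, now in [0, 1]
processed = np.full((2, 2), 80.0)     # filtered (beautified) area image I_B
original = np.full((2, 2), 40.0)      # original RGB channel I_O
fused = fuse(a, processed, original)
# Where a = 1 the filtered image dominates; where a = 0 the original is kept,
# giving a soft transition at region boundaries.
```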
  • S263 Output the fused image according to the fused pixel value.
  • in this technical solution, a depth image and an RGB image of the same scene are acquired; the RGB image is segmented according to the depth values in the depth image to obtain at least two area images; at least one of the at least two area images is downsampled; recursive bilateral filtering is applied to the downsampled area image; the filtered area image is upsampled; and the processed area image is fused with the RGB image.
  • segmenting the RGB image allows the same or different processing of different areas, meeting users' personalized needs; downsampling and recursive bilateral filtering greatly increase processing speed, with an especially marked effect for high-resolution images; fusing the processed area image with the original RGB image effectively reduces distortion after processing and preserves the integrity of the image information, while balancing processing efficiency and fusion quality.
  • the above technical solution can realize the personalized requirement of the user, greatly improve the image processing speed, and the image information distortion is small, especially when processing the high-definition video image, the picture jamming phenomenon is greatly reduced.
  • the following are embodiments of an image processing apparatus provided by embodiments of the present invention.
  • the image processing apparatus and the image processing method belong to the same general inventive concept.
  • FIG. 3 is a schematic structural diagram of an image processing apparatus according to Embodiment 3 of the present invention.
  • the image processing apparatus 300 of this embodiment may include the following contents:
  • the initial image acquisition module 310 is configured to acquire a depth image and an RGB image in the same scene.
  • the area image segmentation module 320 is configured to divide the RGB image according to the depth value in the depth image to obtain at least two area images.
  • the downsampling module 330 is configured to perform downsampling processing on at least one of the at least two area images.
  • the recursive bilateral filtering module 340 is configured to perform recursive bilateral filtering processing on the downsampled region image.
  • the recursive bilateral filtering is performed on the downsampled area image specifically according to the formula given in Embodiment 1.
  • the image fusion module 350 is configured to fuse the processed area image with the RGB image.
  • in this technical solution, a depth image and an RGB image of the same scene are acquired; the RGB image is segmented according to the depth values in the depth image to obtain at least two area images; at least one of the at least two area images is downsampled; recursive bilateral filtering is applied to the downsampled area image; and the processed area image is fused with the RGB image.
  • segmenting the RGB image allows the same or different processing of different areas, meeting users' personalized needs; downsampling and recursive bilateral filtering greatly increase processing speed, with an especially marked effect for high-resolution images; fusing the processed area image with the original RGB image effectively reduces distortion after processing and preserves the integrity of the image information, while balancing processing efficiency and fusion quality.
  • the above technical solution can realize the personalized requirement of the user, greatly improve the image processing speed, and the image information distortion is small, especially when processing the high-definition video image, the picture jamming phenomenon is greatly reduced.
  • FIG. 4A is a schematic structural diagram of an image processing apparatus according to Embodiment 4 of the present invention.
  • the main difference between this embodiment and the third embodiment is that the upsampling module 450 is added, and an optional implementation of the area image segmentation module 420 and an optional implementation of the image fusion module 460 of FIG. 4A are further provided.
  • the image processing apparatus 400 of this embodiment may include the following contents:
  • the initial image acquisition module 410 is configured to acquire a depth image and an RGB image in the same scene.
  • the area image segmentation module 420 is configured to segment the RGB image according to the depth value in the depth image to obtain at least two area images.
  • the downsampling module 430 is configured to perform downsampling processing on at least one of the at least two area images.
  • the recursive bilateral filtering module 440 is configured to perform recursive bilateral filtering processing on the downsampled region image.
  • the upsampling module 450 is configured to perform upsampling processing on the recursive bilateral filtering processed region image.
  • the image fusion module 460 is configured to fuse the processed area image with the RGB image.
  • the area image segmentation module 420 includes:
  • the mapping matrix calculation unit 421 is configured to calculate a mapping matrix of the first coordinates of each pixel point of the depth image and the second coordinates of each pixel point of the RGB image.
  • the second depth value determining unit 422 is configured to determine a second depth value for each pixel point of the RGB image according to the second coordinates, the mapping matrix and the first depth value of each pixel point of the depth image.
  • the area image determining unit 423 is configured to determine whether the second depth value is greater than a preset depth threshold, and if yes, the position of the pixel corresponding to the second depth value is used as the foreground area, and if not, the second depth value The position of the corresponding pixel is used as the background area.
  • the area image dividing unit 424 is configured to divide the RGB image according to the determination result to obtain at least two area images.
  • the image fusion module 460 includes:
  • the Gaussian smoothing processing unit 461 is configured to perform Gaussian smoothing on the processed region image to acquire smooth pixel values of each pixel of the region image.
  • the fused pixel value calculation unit 462 is configured to calculate a fused pixel value of the fused image according to the smoothed pixel value and the pixel value of each pixel point of the RGB image.
  • the fused image output unit 463 is configured to output the fused image according to the fused pixel value.
  • in this technical solution, a depth image and an RGB image of the same scene are acquired; the RGB image is segmented according to the depth values in the depth image to obtain at least two area images; at least one of the at least two area images is downsampled; recursive bilateral filtering is applied to the downsampled area image; the filtered area image is upsampled; and the processed area image is fused with the RGB image.
  • segmenting the RGB image allows the same or different processing of different areas, meeting users' personalized needs; downsampling and recursive bilateral filtering greatly increase processing speed, with an especially marked effect for high-resolution images; fusing the processed area image with the original RGB image effectively reduces distortion after processing and preserves the integrity of the image information, while balancing processing efficiency and fusion quality.
  • the above technical solution can realize the personalized requirement of the user, greatly improve the image processing speed, and the image information distortion is small, especially when processing the high-definition video image, the picture jamming phenomenon is greatly reduced.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

An image processing method and apparatus. The image processing method includes: acquiring a depth image and an RGB image of the same scene (S110); segmenting the RGB image according to the depth values in the depth image to obtain at least two area images (S120); performing downsampling on at least one of the at least two area images (S130); performing recursive bilateral filtering on the downsampled area image (S140); and fusing the processed area image with the RGB image (S150). Segmenting the RGB image allows the same or different processing to be applied to different areas, meeting users' personalized needs; downsampling and recursive bilateral filtering of the area images greatly increase processing speed, with an especially marked effect for high-resolution images; fusing the processed area image with the original RGB image effectively reduces distortion after processing and preserves the integrity of the image information, while balancing processing efficiency and fusion quality.

Description

Image processing method and apparatus

Technical field

The present invention relates to the field of image processing technologies, and in particular to an image processing method and apparatus.

Background

With the popularity of mobile terminals, more and more users are accustomed to recording their lives and preserving memories with the camera function of a mobile terminal. To display pictures according to users' preferences, a great deal of image-beautification software, and terminals equipped with image-beautification functions, have appeared.

In many image-beautification scenarios, the main object of beautification is the foreground part of the image; for example, in face beautification the main object is the face in the foreground. In addition, many such scenarios demand high image resolution and real-time performance, for example skin beautification of video images in live streaming, magic mirror and similar applications. In the prior art, the image is generally beautified as a whole, processing the full picture without distinguishing regions or priorities; this is inefficient and gives poor results, and cannot simultaneously support high resolution (e.g. 1080P) and good real-time performance.
Summary of the invention

To solve the above problems in the related art, the present invention provides an image processing method and apparatus, so as to solve the problems that images cannot be processed region by region and that high resolution and good real-time performance cannot be supported simultaneously, thereby realizing high-precision partitioned image processing that is both efficient and high-quality.

To achieve the above purpose, the embodiments of the present invention adopt the following technical solutions:

In a first aspect, an embodiment of the present invention provides an image processing method, including:

acquiring a depth image and an RGB image of the same scene;

segmenting the RGB image according to the depth values in the depth image to obtain at least two area images;

performing downsampling on at least one of the at least two area images;

performing recursive bilateral filtering on the downsampled area image; and

fusing the processed area image with the RGB image.

In a second aspect, an embodiment of the present invention correspondingly provides an image processing apparatus, including:

an initial image acquisition module, configured to acquire a depth image and an RGB image of the same scene;

an area image segmentation module, configured to segment the RGB image according to the depth values in the depth image to obtain at least two area images;

a downsampling module, configured to perform downsampling on at least one of the at least two area images;

a recursive bilateral filtering module, configured to perform recursive bilateral filtering on the downsampled area image; and

an image fusion module, configured to fuse the processed area image with the RGB image.

The technical solutions provided by the embodiments of the present invention bring the following beneficial effects:

In this technical solution, a depth image and an RGB image of the same scene are acquired; the RGB image is segmented according to the depth values in the depth image to obtain at least two area images; at least one of the at least two area images is downsampled; recursive bilateral filtering is applied to the downsampled area image; and the processed area image is fused with the RGB image. By partitioning the image, the same or different processing can be applied to different areas, meeting personalized needs; downsampling converts a high-resolution image into a low-resolution one, effectively increasing processing speed; and recursive bilateral filtering further increases processing speed, achieving good real-time performance. For high-resolution video in particular, this achieves a good processing effect on high-definition video images while guaranteeing processing speed, so that no stuttering occurs when processing high-definition video and the user experience is improved.
Brief description of the drawings

To explain the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and a person of ordinary skill in the art can obtain other drawings from the content of the embodiments and these drawings without creative effort.

FIG. 1 is a schematic flowchart of an image processing method according to Embodiment 1 of the present invention;

FIG. 2A is a schematic flowchart of an image processing method according to Embodiment 2 of the present invention;

FIG. 2B is a schematic flowchart of an alternative implementation of S220 in FIG. 2A;

FIG. 2C is a schematic flowchart of an alternative implementation of S260 in FIG. 2A;

FIG. 2D is a schematic diagram of segmenting an RGB image into two area images according to Embodiment 2 of the present invention;

FIG. 3 is a schematic structural diagram of an image processing apparatus according to Embodiment 3 of the present invention;

FIG. 4A is a schematic structural diagram of an image processing apparatus according to Embodiment 4 of the present invention;

FIG. 4B is a schematic structural diagram of an alternative implementation of the area image segmentation module 420 in FIG. 4A;

FIG. 4C is a schematic structural diagram of an alternative implementation of the image fusion module 460 in FIG. 4A.
Detailed description

To make the technical problems solved, the technical solutions adopted and the technical effects achieved by the present invention clearer, the technical solutions of the embodiments of the present invention are described in further detail below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those skilled in the art without creative effort fall within the protection scope of the present invention.
Embodiment 1

Please refer to FIG. 1, a schematic flowchart of an image processing method according to Embodiment 1 of the present invention. The method can be executed by a terminal such as a smartphone or a personal computer, and is suitable for scenarios in which an image is beautified.

The image processing method of this embodiment may include the following steps:

S110: Acquire a depth image and an RGB image of the same scene.

Exemplarily, an RGB image can be understood as a color image; the colors of an RGB image are obtained by varying the three reference color channels of red (R), green (G) and blue (B) and superimposing them on one another to produce a wide variety of colors. A depth image can characterize the color resolution of an image through the image depth, i.e. the number of bits used to store each pixel; the magnitude of the pixel values in the depth image reflects the distance of the depth of field. In this embodiment, acquiring the depth image and the RGB image of the same scene makes richer scene information available, so that more accurate image processing can be performed on the image data.

In general, the acquired image is related to factors such as shooting distance, shooting angle, shooting environment and shooting time. Therefore, acquiring the depth image and the RGB image of the same scene may specifically mean acquiring the depth image and the RGB image of the same scene at the same moment. Specifically, the depth image may be acquired by a depth camera and the RGB image by an ordinary camera; of course, the depth image and the RGB image may also be acquired by the same camera.

S120: Segment the RGB image according to the depth values in the depth image to obtain at least two area images.

Exemplarily, segmenting the original RGB image according to the depth values in the depth image to obtain at least two area images may be done by determining second pixel values for the original RGB image according to the first pixel values of the depth image, and then segmenting the original RGB image according to the second pixel values to obtain at least two area images. Furthermore, the segmented area images may be labeled, for example labeling each area image based on the pixel values of its pixels. Optionally, the pixels of the same image area may be labeled with the same value.

For example, the original RGB image may be segmented according to the depth values in the depth image to obtain two area images, where the pixel values of the pixels of each area image may be represented by 1 or 0 respectively; 1 represents the foreground area and 0 the background area. An image may have only one background area but multiple foreground areas.
S130: Perform downsampling on at least one of the at least two area images.

Exemplarily, image downsampling can shrink an image and generate a thumbnail of the original, converting a high-resolution image into a low-resolution one, so that a high-resolution input can be processed at the speed of a low-resolution image. One or several area images may be selected for downsampling according to the actual segmentation result and the user's personalized needs; of course, the corresponding area images may also be downsampled according to a default setting. The foreground area and/or background area can be processed according to whether the area image's pixel values are 1 or 0 and according to the user's personalized needs. Preferably, the foreground area whose pixel values are 1 may be downsampled by default.

It should be noted that S130 can also be executed before S120 and would likewise improve processing speed; however, in S120 the RGB image is segmented according to pixel values, i.e. the image resolution affects the segmentation. Executing S130 first would change the resolution of the original image, which may introduce deviations when the image is segmented; therefore, in this embodiment, S120 is preferably performed before S130.
S140: Perform recursive bilateral filtering on the downsampled area image.

Exemplarily, although downsampling is employed in S130, the bilateral filtering technique of the prior art still cannot process high-definition video images in real time. To maximize the image processing speed, this embodiment uses recursive bilateral filtering in place of the prior-art bilateral filtering.

In conventional bilateral filtering, the filter is composed of two functions: one determines the filter coefficients from the geometric spatial distance between pixels, S_{k,i} = S(k, i), and the other determines the filter coefficients from the pixel-value difference between pixels, R_{k,i} = R(x_k, x_i).

Recursive bilateral filtering is similar to conventional bilateral filtering; both compute the output value for the i-th pixel according to the following formula:

    y_i = Σ_k S_{k,i} R_{k,i} x_k / Σ_k S_{k,i} R_{k,i}

where R_{k,i} denotes the pixel-value difference term between pixel k and pixel i, S_{k,i} denotes the geometric spatial distance term between pixel k and pixel i, x_k denotes the pixel value of pixel k, and y_i denotes the output value after processing the i-th pixel.

Recursive bilateral filtering differs from conventional bilateral filtering, however, in how the pixel-value difference term R_{k,i} between pixel k and pixel i is computed. On this basis, bilateral filtering can be implemented recursively, reducing computational complexity; for high-resolution images in particular, the computation time is greatly reduced.
S150:将处理后的区域图像与RGB图像进行融合。
示例性的,由于处理后的区域图像,可能针对全图的各个区域图像,也可能针对前景区域,因此为了同时兼顾计算效率与融合效果,可选是将处理后的区域图像与未经过任何处理的区域图像进行融合。该技术方案尤其适用于仅处理RGB图像分割后的个别区域图像的情况。
Image fusion includes pixel-level fusion, feature-level fusion, decision-level fusion, and the like. Pixel-level fusion comprises spatial-domain and transform-domain algorithms: spatial-domain algorithms offer various fusion rules such as logical filtering, gray-level weighted averaging, and contrast modulation, while transform-domain algorithms include pyramid-decomposition fusion, the wavelet transform, and so on. In this embodiment, fusing the processed region image with the RGB image may specifically mean fusing based on the pixel values of the processed region image and the pixel values of the RGB image, so as to preserve as much of the original data as possible.
In summary, in this technical solution, a depth image and an RGB image of the same scene are acquired; the RGB image is segmented into at least two region images according to the depth values in the depth image; at least one of the at least two region images is down-sampled; recursive bilateral filtering is performed on the down-sampled region image; and the processed region image is fused with the RGB image. Segmenting the RGB image makes it possible to apply the same or different processing to different regions and thus meet users' individual needs; down-sampling and recursive bilateral filtering greatly increase processing speed, with especially marked effect on high-resolution images; and fusing the processed region image with the original RGB image effectively reduces post-processing distortion and preserves the completeness of the image information while balancing processing efficiency and fusion quality. The above technical solution can satisfy users' individual needs, greatly increase image processing speed, and keep image distortion low; in particular, it greatly reduces stuttering when processing high-definition video frames.
Embodiment 2
Referring to FIGS. 2A, 2B, 2C, and 2D, FIG. 2A is a schematic flowchart of an image processing method provided by Embodiment 2 of the present invention. The main differences from Embodiment 1 are that up-sampling of the region image after recursive bilateral filtering is added, and that optional implementations of S220 and S260 in FIG. 2A are further provided.
The image processing method of this embodiment may include the following steps:
S210: Acquire a depth image and an RGB image of the same scene.
S220: Segment the RGB image into at least two region images according to the depth values in the depth image.
Optionally, as shown in FIG. 2B, S220 may include the following four steps S221, S222, S223, and S224:
S221: Compute a mapping matrix between the first coordinates of the pixels of the depth image and the second coordinates of the pixels of the RGB image.
Illustratively, the mapping matrix between the first and second coordinates may be computed by prior-art methods, for example from the parameters of the cameras used to capture the depth image and the RGB image. Alternatively, the part where the features of the depth image and of the RGB image coincide may be obtained, the correspondence between the features of the two images derived from that coinciding part, and the mapping matrix computed from the correspondence.
S222: Determine second depth values of the pixels of the RGB image according to the second coordinates, the mapping matrix, and the first depth values of the pixels of the depth image.
Illustratively, the depth image pixel corresponding to each RGB image pixel may be determined from the second coordinates and the mapping matrix, and the acquired first depth value of that depth image pixel is taken as the second depth value of the corresponding RGB image pixel. Specifically, the depth image pixel corresponding to an RGB image pixel may be determined by multiplying the coordinates of the RGB image pixel by the mapping matrix to obtain the coordinates of the corresponding depth image pixel. In other words, the depth image is aligned with the RGB image, and the depth value of each RGB image pixel is obtained from the depth values of the depth image pixels.
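Assuming the mapping matrix is a 3x3 matrix acting on homogeneous pixel coordinates (the patent does not fix its form, so this is an illustrative assumption), the coordinate multiplication and depth lookup can be sketched as:

```python
import numpy as np

def map_rgb_to_depth(coords_rgb, M):
    """Map RGB pixel coordinates (x, y) to depth-image coordinates by
    multiplying homogeneous coordinates with the mapping matrix M."""
    coords = np.asarray(coords_rgb, dtype=float)            # shape (N, 2)
    homog = np.hstack([coords, np.ones((coords.shape[0], 1))])
    mapped = homog @ M.T
    return mapped[:, :2] / mapped[:, 2:3]                   # back to Cartesian

def sample_depth(depth, coords):
    """Read the depth value at each (rounded, clamped) mapped coordinate;
    this gives the 'second depth value' of each RGB pixel."""
    ij = np.rint(coords).astype(int)
    ij[:, 0] = np.clip(ij[:, 0], 0, depth.shape[1] - 1)     # x -> column
    ij[:, 1] = np.clip(ij[:, 1], 0, depth.shape[0] - 1)     # y -> row
    return depth[ij[:, 1], ij[:, 0]]
```

With calibrated cameras, M would come from the camera parameters mentioned earlier; an identity matrix already demonstrates the lookup.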
S223: Determine whether the second depth value is greater than a preset depth threshold; if so, take the position of the pixel corresponding to that second depth value as the foreground region; if not, take it as the background region.
Illustratively, according to the second depth value of each pixel in the RGB image and a preset depth threshold (or depth range), all pixels of the RGB image may be traversed, the second depth value of each pixel compared with the depth threshold (or depth range), and the RGB image segmented into two region images according to the comparison result. The depth threshold may be a single point value or a range; its specific value or interval can be set by the user according to actual needs in combination with the image information of the acquired RGB image, and is not limited here.
In this embodiment, segmenting the original RGB image into at least two region images according to the second depth values and a preset rule may specifically be: determining whether a second depth value is greater than the preset depth threshold; if so, taking the position of the corresponding pixel as the foreground region, and if not, as the background region; and then segmenting the original RGB image into two region images according to the foreground and background regions. Alternatively, the positions of the pixels whose second depth values lie within the depth threshold range may be taken as the foreground region, and the set of the positions of the remaining pixels as the background region.
S224: Segment the RGB image into at least two region images according to the determination result.
Illustratively, this solution takes segmenting the RGB image into two region images as an example. As shown in FIG. 2D, if the head portrait is to be the foreground region and the rest the background region, a depth threshold range may be set according to the actual situation, and it is then determined whether the second depth value of each pixel in the RGB image falls within the preset depth threshold range; if so, the pixel is taken as a pixel of the foreground region, i.e., the head portrait; if not, it is taken as a pixel of the background region. The pixel values of the foreground region pixels may all be labeled 1 (the black area in FIG. 2D), and the set of the remaining pixels is taken as the background region, whose pixel values may all be labeled 0 (the grid area in FIG. 2D). The foreground region and/or the background region can then be processed separately according to the pixel values. It should be noted that the grid in FIG. 2D merely indicates that the pixel values of the background region are all labeled 0; it does not mean that a grid actually exists in the background region.
In actual operation, the region image to be processed may be affected by noise in the depth image and exhibit some hole-like areas; in that case, these areas can be filled by dilation and erosion operations to obtain the final region image.
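The dilation-erosion hole filling can be sketched as a 3x3 morphological closing in plain NumPy (a minimal illustration; the structuring-element size is an assumption, and a production pipeline would more likely call a morphology routine from an image library):

```python
import numpy as np

def dilate(mask):
    """3x3 binary dilation implemented with shifted copies of the mask."""
    padded = np.pad(mask, 1)
    out = np.zeros_like(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= padded[1 + dy : 1 + dy + mask.shape[0],
                          1 + dx : 1 + dx + mask.shape[1]]
    return out

def erode(mask):
    """3x3 binary erosion via the complement-dilate-complement identity."""
    return 1 - dilate(1 - mask)

def fill_holes_by_closing(mask):
    """Dilation followed by erosion (morphological closing) fills small
    hole-like regions left in the mask by depth noise."""
    return erode(dilate(mask))
```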
It can be understood that "at least two region images" covers two, three, or more region images. The advantage of such a configuration is that partitioned or local processing of the image becomes possible, which improves processing efficiency and enriches the processing effects.
S230: Down-sample at least one of the at least two region images.
S240: Perform recursive bilateral filtering on the down-sampled region image.
S250: Up-sample the region image after recursive bilateral filtering.
Illustratively, since the region image to be processed was down-sampled in S230 and its resolution thereby lowered, the resolution of the processed region image must be restored so that it fuses better with the original image; that is, the low-resolution region image obtained after recursive bilateral filtering is converted into an image with the same resolution as the original.
This embodiment enlarges the region image after recursive bilateral filtering by up-sampling, so that the resolution of the processed region image equals that of the original image. Image up-sampling usually uses interpolation: new elements are inserted between the pixels of the filtered region image with a suitable interpolation algorithm.
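For instance, bilinear interpolation, one common choice of "suitable interpolation algorithm", can be sketched as follows (the kernel choice and function name are assumptions; the patent only requires that the filtered region image be restored to the original resolution by interpolation):

```python
import numpy as np

def upsample_bilinear(img, out_h, out_w):
    """Bilinearly up-sample a single-channel image to (out_h, out_w) by
    blending the four samples surrounding each target coordinate."""
    img = np.asarray(img, dtype=float)
    in_h, in_w = img.shape
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0 = np.floor(ys).astype(int); x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, in_h - 1); x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy
```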
S260: Fuse the processed region image with the RGB image.
Optionally, as shown in FIG. 2C, S260 may include three steps S261, S262, and S263:
S261: Perform Gaussian smoothing on the processed region image to obtain a smoothed pixel value for each pixel of the region image.
Illustratively, during segmentation of the RGB image, the segmented RGB image may be kept the same size as the original RGB image, so that its pixels correspond one-to-one with those of the original. Further, each pixel value of the segmented RGB image may simply take 1 or 0, indicating whether the corresponding position of the RGB image belongs to the region image to be processed. As above, if the image is segmented into two region images, 1 may denote the foreground region and 0 the background region. On this basis, Gaussian smoothing is applied to the segmentation image, so that its pixel values a(x, y) may take any value between 0 and 1. This operation effectively removes noise from the image and ensures image quality.
S262: Compute the fused pixel values of the fused image from the smoothed pixel values and the pixel values of the pixels of the RGB image.
Illustratively, the smoothed pixel values and the pixel values of the RGB image pixels may be weighted and summed to compute the fused pixel values of the fused image. For example, suppose the smoothed pixel value of the processed region image is I_B(x, y) and the pixel value of the original RGB image is I_O(x, y); then I_B(x, y) and I_O(x, y) are weighted and summed to obtain the final fused pixel value I_R(x, y), i.e., I_R(x, y) = I_B(x, y)·a(x, y) + I_O(x, y)·(1 − a(x, y)), where a(x, y) denotes the weight of the smoothed pixel value. It can be understood that the specific value of a(x, y) can be set according to the actual situation; it may be a fixed value or computed from other parameters, and is not limited here.
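The smoothing and weighted summation can be sketched as below: a separable Gaussian turns the 0/1 segmentation image into blend weights a(x, y) in [0, 1], which then drive I_R = I_B · a + I_O · (1 − a). Kernel size, edge handling, and function names are assumptions for illustration:

```python
import numpy as np

def gaussian_kernel_1d(sigma, radius):
    """Normalized 1D Gaussian kernel of length 2*radius + 1."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def smooth_mask(mask, sigma=1.0, radius=2):
    """Separable Gaussian smoothing of a 0/1 mask, giving per-pixel
    blend weights a(x, y) in [0, 1]; edges are handled by clamping."""
    k = gaussian_kernel_1d(sigma, radius)
    padded = np.pad(mask.astype(float), radius, mode="edge")
    tmp = np.apply_along_axis(lambda v: np.convolve(v, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode="valid"), 0, tmp)

def fuse(processed, original, alpha):
    """I_R = I_B * a + I_O * (1 - a), the weighted sum from the text."""
    return processed * alpha + original * (1.0 - alpha)
```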
S263: Output the fused image according to the fused pixel values.
In summary, in this technical solution, a depth image and an RGB image of the same scene are acquired; the RGB image is segmented into at least two region images according to the depth values in the depth image; at least one of the at least two region images is down-sampled; recursive bilateral filtering is performed on the down-sampled region image; the filtered region image is up-sampled; and the processed region image is fused with the RGB image. Segmenting the RGB image makes it possible to apply the same or different processing to different regions and thus meet users' individual needs; down-sampling, recursive bilateral filtering, and up-sampling greatly increase processing speed, with especially marked effect on high-resolution images; and fusing the processed region image with the original RGB image effectively reduces post-processing distortion and preserves the completeness of the image information while balancing processing efficiency and fusion quality. The above technical solution can satisfy users' individual needs, greatly increase image processing speed, and keep image distortion low; in particular, it greatly reduces stuttering when processing high-definition video frames.
The following are embodiments of the image processing apparatus provided by the embodiments of the present invention. The image processing apparatus and the image processing method above belong to one and the same general inventive concept; for details not exhaustively described in the apparatus embodiments, refer to the embodiments of the image processing method above.
Embodiment 3
Referring to FIG. 3, which is a schematic structural diagram of an image processing apparatus provided by Embodiment 3 of the present invention.
The image processing apparatus 300 of this embodiment may include the following:
An initial image acquisition module 310, configured to acquire a depth image and an RGB image of the same scene.
A region image segmentation module 320, configured to segment the RGB image into at least two region images according to the depth values in the depth image.
A down-sampling module 330, configured to down-sample at least one of the at least two region images.
A recursive bilateral filtering module 340, configured to perform recursive bilateral filtering on the down-sampled region image.
Preferably, the recursive bilateral filtering on the down-sampled region image is performed according to the following formula:
$$y_i = \frac{\sum_{k} S_{k,i}\, R_{k,i}\, x_k}{\sum_{k} S_{k,i}\, R_{k,i}}$$
where
$$R_{k,i} = \prod_{j=k+1}^{i} R(x_{j-1}, x_j)$$
denotes the pixel value difference between pixel k and pixel i, S_{k,i} denotes the geometric spatial distance between pixel k and pixel i, x_k denotes the pixel value of pixel k, and y_i denotes the output value after the i-th pixel is processed.
An image fusion module 350, configured to fuse the processed region image with the RGB image.
In summary, in this technical solution, a depth image and an RGB image of the same scene are acquired; the RGB image is segmented into at least two region images according to the depth values in the depth image; at least one of the at least two region images is down-sampled; recursive bilateral filtering is performed on the down-sampled region image; and the processed region image is fused with the RGB image. Segmenting the RGB image makes it possible to apply the same or different processing to different regions and thus meet users' individual needs; down-sampling and recursive bilateral filtering greatly increase processing speed, with especially marked effect on high-resolution images; and fusing the processed region image with the original RGB image effectively reduces post-processing distortion and preserves the completeness of the image information while balancing processing efficiency and fusion quality. The above technical solution can satisfy users' individual needs, greatly increase image processing speed, and keep image distortion low; in particular, it greatly reduces stuttering when processing high-definition video frames.
Embodiment 4
Referring to FIGS. 4A, 4B, and 4C, FIG. 4A is a schematic structural diagram of an image processing apparatus provided by Embodiment 4 of the present invention. The main differences from Embodiment 3 are that an up-sampling module 450 is added, and that optional implementations of the region image segmentation module 420 and the image fusion module 460 in FIG. 4A are further provided.
The image processing apparatus 400 of this embodiment may include the following:
An initial image acquisition module 410, configured to acquire a depth image and an RGB image of the same scene.
A region image segmentation module 420, configured to segment the RGB image into at least two region images according to the depth values in the depth image.
A down-sampling module 430, configured to down-sample at least one of the at least two region images.
A recursive bilateral filtering module 440, configured to perform recursive bilateral filtering on the down-sampled region image.
An up-sampling module 450, configured to up-sample the region image after recursive bilateral filtering.
An image fusion module 460, configured to fuse the processed region image with the RGB image.
Preferably, as shown in FIG. 4B, the region image segmentation module 420 includes:
a mapping matrix computation unit 421, configured to compute a mapping matrix between the first coordinates of the pixels of the depth image and the second coordinates of the pixels of the RGB image;
a second depth value determination unit 422, configured to determine the second depth values of the pixels of the RGB image according to the second coordinates, the mapping matrix, and the first depth values of the pixels of the depth image;
a region image determination unit 423, configured to determine whether the second depth value is greater than a preset depth threshold and, if so, to take the position of the pixel corresponding to that second depth value as the foreground region, and if not, as the background region; and
a region image segmentation unit 424, configured to segment the RGB image into at least two region images according to the determination result.
Preferably, as shown in FIG. 4C, the image fusion module 460 includes:
a Gaussian smoothing unit 461, configured to perform Gaussian smoothing on the processed region image to obtain a smoothed pixel value for each pixel of the region image;
a fused pixel value computation unit 462, configured to compute the fused pixel values of the fused image from the smoothed pixel values and the pixel values of the pixels of the RGB image; and
a fused image output unit 463, configured to output the fused image according to the fused pixel values.
In summary, in this technical solution, a depth image and an RGB image of the same scene are acquired; the RGB image is segmented into at least two region images according to the depth values in the depth image; at least one of the at least two region images is down-sampled; recursive bilateral filtering is performed on the down-sampled region image; the filtered region image is up-sampled; and the processed region image is fused with the RGB image. Segmenting the RGB image makes it possible to apply the same or different processing to different regions and thus meet users' individual needs; down-sampling, recursive bilateral filtering, and up-sampling greatly increase processing speed, with especially marked effect on high-resolution images; and fusing the processed region image with the original RGB image effectively reduces post-processing distortion and preserves the completeness of the image information while balancing processing efficiency and fusion quality. The above technical solution can satisfy users' individual needs, greatly increase image processing speed, and keep image distortion low; in particular, it greatly reduces stuttering when processing high-definition video frames.
Note that the above are only preferred embodiments of the present invention and the technical principles applied. Those skilled in the art will understand that the present invention is not limited to the specific embodiments described here, and that various obvious changes, readjustments, and substitutions can be made without departing from the protection scope of the present invention. Therefore, although the present invention has been described in some detail through the above embodiments, it is not limited to the above embodiments and may include more other equivalent embodiments without departing from the inventive concept; the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

  1. An image processing method, characterized by comprising:
    acquiring a depth image and an RGB image of the same scene;
    segmenting the RGB image into at least two region images according to the depth values in the depth image;
    down-sampling at least one of the at least two region images;
    performing recursive bilateral filtering on the down-sampled region image; and
    fusing the processed region image with the RGB image.
  2. The method according to claim 1, characterized in that, after the recursive bilateral filtering is performed on the down-sampled region image, the method further comprises:
    up-sampling the region image after recursive bilateral filtering.
  3. The method according to claim 1 or 2, characterized in that segmenting the RGB image into at least two region images according to the depth values in the depth image comprises:
    computing a mapping matrix between the first coordinates of the pixels of the depth image and the second coordinates of the pixels of the RGB image;
    determining second depth values of the pixels of the RGB image according to the second coordinates, the mapping matrix, and the first depth values of the pixels of the depth image;
    determining whether a second depth value is greater than a preset depth threshold and, if so, taking the position of the pixel corresponding to that second depth value as the foreground region, and if not, as the background region; and
    segmenting the RGB image into at least two region images according to the determination result.
  4. The method according to claim 3, characterized in that the recursive bilateral filtering on the down-sampled region image is performed according to the following formula:
    $$y_i = \frac{\sum_{k} S_{k,i}\, R_{k,i}\, x_k}{\sum_{k} S_{k,i}\, R_{k,i}}$$
    where
    $$R_{k,i} = \prod_{j=k+1}^{i} R(x_{j-1}, x_j)$$
    denotes the pixel value difference between pixel k and pixel i;
    S_{k,i} denotes the geometric spatial distance between pixel k and pixel i;
    x_k denotes the pixel value of pixel k; and
    y_i denotes the output value after the i-th pixel is processed.
  5. The method according to claim 4, characterized in that fusing the processed region image with the RGB image comprises:
    performing Gaussian smoothing on the processed region image to obtain a smoothed pixel value for each pixel of the region image;
    computing fused pixel values of a fused image from the smoothed pixel values and the pixel values of the pixels of the RGB image; and
    outputting the fused image according to the fused pixel values.
  6. An image processing apparatus, characterized by comprising:
    an initial image acquisition module, configured to acquire a depth image and an RGB image of the same scene;
    a region image segmentation module, configured to segment the RGB image into at least two region images according to the depth values in the depth image;
    a down-sampling module, configured to down-sample at least one of the at least two region images;
    a recursive bilateral filtering module, configured to perform recursive bilateral filtering on the down-sampled region image; and
    an image fusion module, configured to fuse the processed region image with the RGB image.
  7. The apparatus according to claim 6, characterized by further comprising:
    an up-sampling module, configured to up-sample the region image after recursive bilateral filtering.
  8. The apparatus according to claim 6 or 7, characterized in that the region image segmentation module comprises:
    a mapping matrix computation unit, configured to compute a mapping matrix between the first coordinates of the pixels of the depth image and the second coordinates of the pixels of the RGB image;
    a second depth value determination unit, configured to determine second depth values of the pixels of the RGB image according to the second coordinates, the mapping matrix, and the first depth values of the pixels of the depth image;
    a region image determination unit, configured to determine whether a second depth value is greater than a preset depth threshold and, if so, to take the position of the pixel corresponding to that second depth value as the foreground region, and if not, as the background region; and
    a region image segmentation unit, configured to segment the RGB image into at least two region images according to the determination result.
  9. The apparatus according to claim 8, characterized in that the recursive bilateral filtering on the down-sampled region image is performed according to the following formula:
    $$y_i = \frac{\sum_{k} S_{k,i}\, R_{k,i}\, x_k}{\sum_{k} S_{k,i}\, R_{k,i}}$$
    where
    $$R_{k,i} = \prod_{j=k+1}^{i} R(x_{j-1}, x_j)$$
    denotes the pixel value difference between pixel k and pixel i;
    S_{k,i} denotes the geometric spatial distance between pixel k and pixel i;
    x_k denotes the pixel value of pixel k; and
    y_i denotes the output value after the i-th pixel is processed.
  10. The apparatus according to claim 9, characterized in that the image fusion module comprises:
    a Gaussian smoothing unit, configured to perform Gaussian smoothing on the processed region image to obtain a smoothed pixel value for each pixel of the region image;
    a fused pixel value computation unit, configured to compute fused pixel values of a fused image from the smoothed pixel values and the pixel values of the pixels of the RGB image; and
    a fused image output unit, configured to output the fused image according to the fused pixel values.
PCT/CN2016/113288 2016-11-03 2016-12-29 Image processing method and device WO2018082185A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610956184.8A 2016-11-03 2016-11-03 Image processing method and device
CN201610956184.8 2016-11-03

Publications (1)

Publication Number Publication Date
WO2018082185A1 (zh)



Also Published As

Publication number Publication date
CN106485720A (zh) 2017-03-08

