
WO2018082185A1 - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
WO2018082185A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
pixel
depth
area
value
Prior art date
Application number
PCT/CN2016/113288
Other languages
English (en)
Chinese (zh)
Inventor
杨铭
Original Assignee
广州视源电子科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广州视源电子科技股份有限公司
Publication of WO2018082185A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4092Image resolution transcoding, e.g. by using client-server architectures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20024Filtering details
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Definitions

  • the present invention relates to the field of image processing technologies, and in particular, to an image processing method and apparatus.
  • the focus of beautification is often the foreground part of the image.
  • the main beautification object is the face in the foreground.
  • higher image resolution and real-time performance are often required, for example, for skin beautification of video images in live broadcast, magic mirror applications, and the like.
  • the present invention provides an image processing method and apparatus, to solve the problem that existing image processing cannot be performed region by region and cannot simultaneously support high resolution and good real-time performance, thereby realizing high-precision region-based image processing and efficient, high-quality image processing.
  • the embodiment of the present invention adopts the following technical solutions:
  • an embodiment of the present invention provides an image processing method, including: acquiring a depth image and an RGB image in the same scene; segmenting the RGB image according to the depth value in the depth image to obtain at least two area images; performing downsampling processing on at least one of the at least two area images; performing recursive bilateral filtering processing on the area image after the downsampling processing; and fusing the processed area image with the RGB image.
  • the embodiment of the present invention further provides an image processing apparatus, including:
  • an initial image acquisition module, configured to acquire a depth image and an RGB image in the same scene;
  • an area image segmentation module, configured to divide the RGB image according to a depth value in the depth image to obtain at least two area images;
  • a downsampling module, configured to perform downsampling processing on at least one of the at least two area images;
  • a recursive bilateral filtering module, configured to perform recursive bilateral filtering processing on the area image after the downsampling processing;
  • an image fusion module, configured to fuse the processed area image with the RGB image.
  • the depth image and the RGB image in the same scene are acquired, and the RGB image is segmented according to the depth value in the depth image to obtain at least two area images, and at least one of the at least two area images is downsampled.
  • the recursive bilateral filtering process is performed on the region image after the downsampling process, and the processed region image is merged with the RGB image.
  • the image processing speed achieves a better real-time effect, especially for high-resolution video images: a good processing effect is obtained on high-definition video images while the processing speed is guaranteed, which prevents stuttering when processing high-definition video images and improves the user experience.
  • FIG. 1 is a schematic flow chart of an image processing method according to Embodiment 1 of the present invention.
  • FIG. 2A is a schematic flowchart of an image processing method according to Embodiment 2 of the present invention.
  • FIG. 2B is a schematic flow chart of an alternative embodiment of S220 of FIG. 2A;
  • FIG. 2C is a schematic flow chart of an alternative embodiment of S260 of FIG. 2A;
  • FIG. 2D is a schematic diagram of splitting an RGB image into two area images according to Embodiment 2 of the present invention.
  • FIG. 3 is a schematic structural diagram of an image processing apparatus according to Embodiment 3 of the present invention.
  • FIG. 4A is a schematic structural diagram of an image processing apparatus according to Embodiment 4 of the present invention.
  • FIG. 4B is a block diagram showing an alternative embodiment of the area image segmentation module 420 of FIG. 4A;
  • FIG. 4C is a block diagram of an alternative embodiment of the image fusion module 460 of FIG. 4A.
  • FIG. 1 is a schematic flowchart of an image processing method according to Embodiment 1 of the present invention.
  • the method can be executed by a terminal such as a smart phone or a personal computer, and is suitable for a scene in which an image is beautified.
  • an RGB image can be understood as a color image, in which a wide range of colors is obtained from the three reference color channels of red (R), green (G), and blue (B) and their mutual superposition.
  • the depth of an image can represent its color resolution, i.e., the number of bits used to store each pixel.
  • the magnitude of a pixel value in the depth image reflects the distance of the corresponding scene point (the depth of field). In this embodiment, by acquiring both the depth image and the RGB image in the same scene, more scene information can be acquired, so that more accurate image processing can be performed on the image data.
  • the acquired image is related to factors such as shooting distance, shooting angle, shooting environment, and shooting time. Therefore, the depth image and the RGB image in the same scene may be obtained by acquiring the depth image and the RGB image of the same scene at the same time.
  • the depth image may be acquired by a depth camera, and the RGB image may be acquired by an ordinary camera; of course, the depth image and the RGB image may also be acquired by the same camera.
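  • As a minimal sketch of this acquisition step (assuming the two images have already been captured and written to disk; the file names below are hypothetical), the depth image and the RGB image of the same scene can be loaded, for example, with OpenCV:

        import cv2

        # Hypothetical file names; a 16-bit PNG is a common container for depth maps.
        depth = cv2.imread("scene_depth.png", cv2.IMREAD_UNCHANGED)  # per-pixel depth values
        rgb = cv2.imread("scene_rgb.png", cv2.IMREAD_COLOR)          # 3-channel color image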
  • S120 Segment the RGB image according to the depth value in the depth image to obtain at least two area images.
  • dividing the original RGB image according to the depth value in the depth image to obtain the at least two area images may be done by determining a second pixel value for the original RGB image according to the first pixel value of the depth image, and then dividing the original RGB image into at least two area images according to the second pixel value.
  • the segmented area images may also be labeled; for example, each area image is labeled according to the pixel values of its pixels.
  • the pixel values of each pixel of the same image area may be labeled with the same value.
  • the original RGB image may be divided according to the depth value in the depth image to obtain two area images, wherein the pixel values of the pixels of each area image may be represented by 1 or 0, respectively, with 1 representing a foreground area and 0 representing the background area. There can be only one background area in an image, while there can be multiple foreground areas.
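  • A minimal sketch of this 1/0 labelling, assuming the depth image has already been aligned pixel-for-pixel with the RGB image (see S221-S222 in Embodiment 2) and that the comparison direction matches the depth-value convention of the camera:

        import numpy as np

        def label_by_depth(depth: np.ndarray, threshold: float) -> np.ndarray:
            """Label pixels whose depth value exceeds the threshold as foreground (1), the rest as background (0)."""
            return (depth > threshold).astype(np.uint8)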
  • S130 Perform downsampling processing on at least one of the at least two area images.
  • image downsampling reduces the image, generating a thumbnail of the original, and converts a high-resolution image into a low-resolution one, so that a high-resolution input can be processed at the speed of a low-resolution image.
  • one or more of the area images may be downsampled according to the actual segmentation result and the personalized requirements of the user.
  • the corresponding area image may be down-sampled according to the default setting.
  • the foreground area and/or the background area can be selected for processing according to the 1 or 0 pixel values of the area images and the user's personalized needs.
  • the foreground region of the pixel value of 1 may be down-sampled by default.
  • S130 could also be executed before S120, which would further improve the processing speed.
  • however, the RGB image is segmented according to pixel values, so the image resolution affects the segmentation; if S130 were executed first, the resolution of the image would be changed, and the segmentation might deviate from that of the original image. Therefore, in this embodiment, S120 is preferably performed before S130.
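  • As an illustrative sketch of the downsampling in S130 (the function name and the factor of 0.5 are illustrative assumptions), the selected area image can be reduced, for example, as follows:

        import cv2
        import numpy as np

        def downsample(area_img: np.ndarray, factor: float = 0.5) -> np.ndarray:
            """Reduce the resolution of an area image before filtering."""
            return cv2.resize(area_img, None, fx=factor, fy=factor,
                              interpolation=cv2.INTER_AREA)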
  • S140 Perform recursive bilateral filtering processing on the area image after the downsampling process.
  • the prior art bilateral filtering technique still does not enable real-time processing of high definition video images.
  • this embodiment uses a recursive bilateral filtering technique to replace the bilateral filtering technique in the prior art.
  • the recursive bilateral filtering is similar to the traditional bilateral filtering processing method; the output value for the i-th pixel is calculated according to the following formula:

        y_i = ( Σ_k S_{k,i} · R_{k,i} · x_k ) / ( Σ_k S_{k,i} · R_{k,i} )

    where S_{k,i} represents the geometric spatial distance weight between pixel k and pixel i; R_{k,i} = R(x_k, x_i) represents the pixel-value difference weight between pixel k and pixel i, and together S_{k,i} and R_{k,i} determine the filter coefficients; x_k represents the pixel value of pixel k; and y_i represents the output value after processing the i-th pixel.
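  • The following is a brute-force reference implementation of the weighting in the formula above, shown only to make the formula concrete (the Gaussian kernels and parameter names are illustrative assumptions); the recursive variant used by the method computes essentially the same weighted average with constant cost per pixel by replacing the windowed sums with recursive passes along rows and columns, and a library routine such as cv2.bilateralFilter can also be used in practice:

        import numpy as np

        def bilateral_filter_gray(img: np.ndarray, radius: int = 3,
                                  sigma_s: float = 2.0, sigma_r: float = 0.1) -> np.ndarray:
            """Brute-force bilateral filter on a grayscale image with values in [0, 1]."""
            h, w = img.shape
            out = np.zeros_like(img)
            pad = np.pad(img, radius, mode="edge")
            # Spatial weights S_{k,i} over a (2*radius+1)^2 window around each pixel.
            ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
            spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
            for i in range(h):
                for j in range(w):
                    window = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
                    # Range weights R_{k,i} from the pixel-value difference to the center pixel.
                    rng = np.exp(-((window - img[i, j]) ** 2) / (2 * sigma_r ** 2))
                    weights = spatial * rng
                    out[i, j] = np.sum(weights * window) / np.sum(weights)
            return out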
  • the processing may be applied to every area image of the full picture, or only to the foreground area; in order to take both computational efficiency and fusion effect into account, the processed area image may be blended with the area images that were not processed. This technical solution is particularly suitable for the case where only individual area images are processed after RGB image segmentation.
  • Image fusion can include pixel level fusion, feature level fusion, and decision level fusion.
  • pixel-level fusion includes spatial-domain algorithms and transform-domain algorithms; spatial-domain algorithms offer multiple fusion rules, such as the logic filtering method, the gray weighted average method, and the contrast modulation method, while transform-domain algorithms include the pyramid decomposition fusion method, the wavelet transform method, and the like.
  • the fusion of the processed area image with the RGB image may specifically blend the pixel values of the processed area image with the pixel values of the RGB image, so as to preserve the original data as much as possible.
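  • A minimal pixel-level fusion sketch, assuming mask is the 1/0 labelling from S120, processed is the filtered (and, if necessary, re-upsampled) area image, and original is the input RGB image, all of the same height and width (the function name is an illustrative assumption):

        import numpy as np

        def fuse_by_mask(processed: np.ndarray, original: np.ndarray,
                         mask: np.ndarray) -> np.ndarray:
            """Take processed pixels where mask == 1 and keep the original pixels elsewhere."""
            mask3 = mask.astype(bool)[..., None]   # broadcast the mask over the 3 color channels
            return np.where(mask3, processed, original)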
  • with the above technical solution, the depth image and the RGB image in the same scene are acquired, the RGB image is segmented according to the depth value in the depth image to obtain at least two area images, at least one of the at least two area images is downsampled, recursive bilateral filtering is performed on the area image after the downsampling process, and the processed area image is fused with the RGB image.
  • by segmenting the RGB image, different areas can be processed differently or in the same way, meeting the personalized needs of the user; by downsampling and recursive bilateral filtering the area image, the improvement in processing speed is obvious, especially for high-resolution images; and by fusing the processed area image with the original RGB image, distortion after image processing can be effectively reduced and the integrity of the image information preserved, while taking both processing efficiency and fusion effect into account.
  • the above technical solution can meet the personalized requirements of the user, greatly improve the image processing speed, and keep image information distortion small; in particular, when processing high-definition video images, picture stuttering is greatly reduced.
  • FIG. 2A is a schematic flowchart of an image processing method according to Embodiment 2 of the present invention.
  • the main difference between this embodiment and the first embodiment is that an upsampling process for the area image after the recursive bilateral filtering is added, and that optional implementations of S220 and S260 in FIG. 2A are further provided.
  • S220 Segment the RGB image according to the depth value in the depth image to obtain at least two area images.
  • S220 may include the following steps S221, S222, S223, and S224, where:
  • S221 Calculate a mapping matrix of the first coordinates of each pixel of the depth image and the second coordinates of each pixel of the RGB image.
  • the mapping matrix between the first coordinates and the second coordinates may be calculated by a prior-art method, for example according to the parameters of the cameras that acquire the depth image and the RGB image; alternatively, the coincident features of the depth image and the RGB image may be acquired, the correspondence between the features of the two images determined from the coincident portion, and the mapping matrix then calculated from this feature correspondence.
  • S222 Determine a second depth value of each pixel of the RGB image according to the second coordinate, the mapping matrix, and the first depth value of each pixel of the depth image.
  • specifically, the coordinates of each pixel of the RGB image may be multiplied by the mapping matrix to obtain the coordinates of the corresponding pixel of the depth image; in this way the depth image is aligned with the RGB image, and the second depth value of each pixel of the RGB image is obtained from the first depth value of the corresponding pixel of the depth image.
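  • As an illustrative sketch, assuming the mapping between the two views can be summarized by a 3x3 homography-style matrix H that takes depth-image pixel coordinates to RGB-image pixel coordinates (the general case with different intrinsics and a 3D offset requires the full camera parameters), the depth map can be resampled onto the RGB pixel grid so that every RGB pixel obtains its second depth value:

        import cv2
        import numpy as np

        def align_depth_to_rgb(depth: np.ndarray, H: np.ndarray, rgb_shape) -> np.ndarray:
            """Warp the depth image onto the RGB pixel grid using the mapping matrix H."""
            h_rgb, w_rgb = rgb_shape[:2]
            # Nearest-neighbour sampling avoids mixing depth values across object boundaries.
            return cv2.warpPerspective(depth, H, (w_rgb, h_rgb), flags=cv2.INTER_NEAREST)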
  • S223 Determine whether the second depth value is greater than a preset depth threshold; if yes, take the position of the pixel corresponding to the second depth value as a foreground area, and if not, take the position of the pixel corresponding to the second depth value as a background area.
  • that is, the second depth value is compared with the depth threshold (or depth threshold range), and the RGB image is split into two parts according to the comparison result.
  • the depth threshold can be a specific point value or a range.
  • the specific numerical value or numerical interval of the depth threshold may be set by the user according to the actual image of the acquired RGB image, which is not limited herein.
  • segmenting the original RGB image according to the second depth value and the preset rule to obtain the at least two area images may specifically be: determining whether the second depth value is greater than a preset depth threshold; if so, the position of the pixel corresponding to the second depth value is used as the foreground area, and if not, the position of the pixel corresponding to the second depth value is used as the background area; the original RGB image is then divided according to the foreground area and the background area to obtain two area images.
  • alternatively, the positions of the pixels whose second depth values fall within the depth threshold range may be acquired and used as the foreground area, and the set of positions of the remaining pixels is used as the background area.
  • in the following, dividing the RGB image into two area images is taken as an example.
  • the depth threshold range may be set according to the actual situation. It is determined whether the second depth value of each pixel in the RGB image is within the preset depth threshold range; if so, the pixel is taken as part of the foreground area, that is, the portrait portion, and if not, the pixel is taken as a pixel of the background area.
  • the pixel value of each pixel in the foreground area may be marked as 1 (the black area in FIG. 2D), the set of the remaining pixels is used as the background area, and the pixel value of each pixel in the background area may be marked as 0 (the grid area in FIG. 2D). Further, the foreground area and/or the background area can be processed according to these pixel values. It should be noted that the grid in FIG. 2D is only used to indicate that the pixel values of the background area are all marked as 0; no such grid exists in the actual image.
  • the obtained image of the area to be processed may be affected by noise in the depth image and contain some hole-like regions, which can be filled by dilation and erosion operations to obtain the final area image.
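  • A sketch of this clean-up step using OpenCV's dilation and erosion (a morphological closing); the kernel size is an illustrative assumption and mask is the 1/0 foreground labelling obtained above:

        import cv2
        import numpy as np

        def fill_holes(mask: np.ndarray, kernel_size: int = 5) -> np.ndarray:
            """Close small hole-like regions in the binary area mask (dilation followed by erosion)."""
            kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
            return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)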
  • the at least two area images may be two, three, or more area images.
  • the advantage of such a setting is that region-based or partial processing of the image can be realized, which improves the efficiency of image processing and enriches the processing effects.
  • S230 Perform downsampling processing on at least one of the at least two area images.
  • S240 Perform recursive bilateral filtering processing on the area image after the downsampling process.
  • S250 Perform upsampling processing on the region image after the recursive bilateral filtering process.
  • because the downsampling in S230 lowered the resolution of the area image, restoring the resolution allows the processed image to be fused with the original image with a better effect.
  • that is, the low-resolution area image obtained after the recursive bilateral filtering is converted back to an image with the same resolution as the original image: the area image after the recursive bilateral filtering is enlarged by upsampling so that its resolution matches that of the original image.
  • image upsampling usually uses interpolation, that is, new pixel values are inserted between the existing pixels of the recursively bilaterally filtered area image using a suitable interpolation algorithm.
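  • A sketch of this upsampling step (the function name is an illustrative assumption), restoring the filtered area image to the resolution of the original image with an interpolating resize:

        import cv2
        import numpy as np

        def upsample_to(area_img: np.ndarray, original_shape) -> np.ndarray:
            """Interpolate the filtered low-resolution area image back to the original resolution."""
            h, w = original_shape[:2]
            return cv2.resize(area_img, (w, h), interpolation=cv2.INTER_LINEAR)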
  • S260 may include three steps S261, S262, and S263, where:
  • S261 Perform Gaussian smoothing on the processed area image to obtain smooth pixel values of each pixel of the area image.
  • the size of the segmented RGB image (the segmentation result) may be consistent with the size of the original RGB image, with its pixels in one-to-one correspondence with the pixels of the original RGB image; furthermore, the pixel value of each pixel in the segmented image takes only the value 1 or 0, indicating whether the corresponding position of the RGB image belongs to the area image to be processed.
  • for example, 1 can be used to indicate the foreground area and 0 the background area.
  • Gaussian smoothing is performed on this segmented image, and the pixel value a(x, y) of the smoothed segmentation image may take an arbitrary value between 0 and 1. This operation can effectively remove noise in the image and ensure image quality.
  • S262 Calculate the fused pixel value of the fused image according to the smoothed pixel value and the pixel value of each pixel of the RGB image.
  • the smoothed pixel value and the pixel value of each pixel of the RGB image may be weighted and summed to calculate the fused pixel value of the fused image. For example, suppose the smoothed pixel value of the processed area image is I_B(x, y), and the pixel value of each pixel of the original RGB image is I_O(x, y).
  • a(x, y) represents the weight of the smoothed pixel value, so the fused pixel value can be computed, for example, as I_fused(x, y) = a(x, y) · I_B(x, y) + (1 − a(x, y)) · I_O(x, y). It can be understood that the specific value of a(x, y) can be set according to the actual situation, and can be a fixed value or calculated from other parameters, which is not limited herein.
  • S263 Output the fused image according to the fused pixel value.
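  • A minimal sketch of S261-S263, assuming mask is the 1/0 segmentation image, processed is the filtered and upsampled area image I_B, original is the original RGB image I_O (all with the same height and width), and the weight a(x, y) is taken from the Gaussian-smoothed mask (the kernel size and sigma are illustrative assumptions):

        import cv2
        import numpy as np

        def fuse_with_smoothed_mask(processed: np.ndarray, original: np.ndarray,
                                    mask: np.ndarray, ksize: int = 21,
                                    sigma: float = 7.0) -> np.ndarray:
            """Soft-blend the processed area into the original image with a Gaussian-smoothed mask."""
            a = cv2.GaussianBlur(mask.astype(np.float32), (ksize, ksize), sigma)  # a(x, y) in [0, 1]
            a = a[..., None]                                  # broadcast over the color channels
            fused = a * processed.astype(np.float32) + (1.0 - a) * original.astype(np.float32)
            return np.clip(fused, 0, 255).astype(original.dtype)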
  • with the above technical solution, the depth image and the RGB image in the same scene are acquired, the RGB image is segmented according to the depth value in the depth image to obtain at least two area images, at least one of the at least two area images is downsampled, recursive bilateral filtering is performed on the area image after the downsampling process, upsampling is performed on the recursively bilaterally filtered area image, and the processed area image is fused with the RGB image.
  • by downsampling and recursive bilateral filtering the area image, the improvement in processing speed is obvious, especially for high-resolution images; the processed area image is merged with the original RGB image, which effectively reduces distortion after image processing, preserves the integrity of the image information, and takes both processing efficiency and fusion effect into account.
  • the above technical solution can meet the personalized requirements of the user, greatly improve the image processing speed, and keep image information distortion small; in particular, when processing high-definition video images, picture stuttering is greatly reduced.
  • the following is an embodiment of an image processing apparatus according to an embodiment of the present invention.
  • the image processing apparatus and the image processing method described above belong to the same general inventive concept.
  • FIG. 3 is a schematic structural diagram of an image processing apparatus according to Embodiment 3 of the present invention.
  • the image processing apparatus 300 of this embodiment may include the following contents:
  • the initial image acquisition module 310 is configured to acquire a depth image and an RGB image in the same scene.
  • the area image segmentation module 320 is configured to divide the RGB image according to the depth value in the depth image to obtain at least two area images.
  • the downsampling module 330 is configured to perform downsampling processing on at least one of the at least two area images.
  • the recursive bilateral filtering module 340 is configured to perform recursive bilateral filtering processing on the downsampled region image.
  • the recursive bilateral filtering processing is performed on the area image after the downsampling process, specifically according to the formula given in Embodiment 1.
  • the image fusion module 350 is configured to fuse the processed area image with the RGB image.
  • with the above technical solution, the depth image and the RGB image in the same scene are acquired, the RGB image is segmented according to the depth value in the depth image to obtain at least two area images, at least one of the at least two area images is downsampled, recursive bilateral filtering is performed on the area image after the downsampling process, and the processed area image is fused with the RGB image.
  • by downsampling and recursive bilateral filtering the area image, the improvement in processing speed is obvious, especially for high-resolution images; the processed area image is merged with the original RGB image, which effectively reduces distortion after image processing, preserves the integrity of the image information, and takes both processing efficiency and fusion effect into account.
  • the above technical solution can meet the personalized requirements of the user, greatly improve the image processing speed, and keep image information distortion small; in particular, when processing high-definition video images, picture stuttering is greatly reduced.
  • FIG. 4A is a schematic structural diagram of an image processing apparatus according to Embodiment 4 of the present invention.
  • the main difference between this embodiment and the third embodiment is that the upsampling module 450 is added, and an optional implementation of the area image segmentation module 420 and an optional implementation of the image fusion module 460 of FIG. 4A are further provided.
  • the image processing apparatus 400 of this embodiment may include the following contents:
  • the initial image acquisition module 410 is configured to acquire a depth image and an RGB image in the same scene.
  • the area image segmentation module 420 is configured to segment the RGB image according to the depth value in the depth image to obtain at least two area images.
  • the downsampling module 430 is configured to perform downsampling processing on at least one of the at least two area images.
  • the recursive bilateral filtering module 440 is configured to perform recursive bilateral filtering processing on the downsampled region image.
  • the upsampling module 450 is configured to perform upsampling processing on the recursive bilateral filtering processed region image.
  • the image fusion module 460 is configured to fuse the processed area image with the RGB image.
  • the area image segmentation module 420 includes:
  • the mapping matrix calculation unit 421 is configured to calculate a mapping matrix of the first coordinates of each pixel point of the depth image and the second coordinates of each pixel point of the RGB image.
  • the second depth value determining unit 422 is configured to determine a second depth value of each pixel of the RGB image according to the second coordinates, the mapping matrix, and the first depth value of each pixel of the depth image.
  • the area image determining unit 423 is configured to determine whether the second depth value is greater than a preset depth threshold; if yes, the position of the pixel corresponding to the second depth value is used as the foreground area, and if not, the position of the pixel corresponding to the second depth value is used as the background area.
  • the area image dividing unit 424 is configured to divide the RGB image according to the determination result to obtain at least two area images.
  • the image fusion module 460 includes:
  • the Gaussian smoothing processing unit 461 is configured to perform Gaussian smoothing on the processed region image to acquire smooth pixel values of each pixel of the region image.
  • the fused pixel value calculation unit 462 is configured to calculate a fused pixel value of the fused image according to the smoothed pixel value and the pixel value of each pixel point of the RGB image.
  • the fused image output unit 463 is configured to output the fused image according to the fused pixel value.
  • with the above technical solution, the depth image and the RGB image in the same scene are acquired, the RGB image is segmented according to the depth value in the depth image to obtain at least two area images, at least one of the at least two area images is downsampled, recursive bilateral filtering is performed on the downsampled area image, upsampling is performed on the recursively bilaterally filtered area image, and the processed area image is fused with the RGB image.
  • by downsampling and recursive bilateral filtering the area image, the improvement in processing speed is obvious, especially for high-resolution images; the processed area image is merged with the original RGB image, which effectively reduces distortion after image processing, preserves the integrity of the image information, and takes both processing efficiency and fusion effect into account.
  • the above technical solution can meet the personalized requirements of the user, greatly improve the image processing speed, and keep image information distortion small; in particular, when processing high-definition video images, picture stuttering is greatly reduced.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to an image processing method and device. The image processing method comprises: obtaining a depth image and an RGB image in the same scene (S110); segmenting the RGB image according to a depth value in the depth image to obtain at least two area images (S120); performing downsampling on at least one of said area images (S130); performing recursive bilateral filtering on the downsampled area image (S140); and fusing the processed area image with the RGB image (S150). By segmenting the RGB image, different areas can be processed differently or identically to satisfy a personalized requirement of the user. By performing downsampling and recursive bilateral filtering on the area image, the processing speed can be considerably improved, in particular for a high-resolution image. By fusing the processed area image with the initial RGB image, distortion after image processing can be effectively reduced so as to preserve the integrity of the image information, while processing efficiency and fusion effect are both taken into account.
PCT/CN2016/113288 2016-11-03 2016-12-29 Procédé et dispositif de traitement d'image WO2018082185A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610956184.8A CN106485720A (zh) 2016-11-03 2016-11-03 图像处理方法和装置
CN201610956184.8 2016-11-03

Publications (1)

Publication Number Publication Date
WO2018082185A1 true WO2018082185A1 (fr) 2018-05-11

Family

ID=58271809

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/113288 WO2018082185A1 (fr) 2016-11-03 2016-12-29 Procédé et dispositif de traitement d'image

Country Status (2)

Country Link
CN (1) CN106485720A (fr)
WO (1) WO2018082185A1 (fr)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110517211A (zh) * 2019-07-31 2019-11-29 茂莱(南京)仪器有限公司 一种基于梯度域映射的图像融合方法
CN111524204A (zh) * 2020-05-06 2020-08-11 杭州趣维科技有限公司 一种人像发丝动漫化纹理生成方法
CN114155426A (zh) * 2021-12-13 2022-03-08 中国科学院光电技术研究所 一种基于局部多向梯度信息融合的弱小目标检测方法
CN114255203A (zh) * 2020-09-22 2022-03-29 中国农业大学 一种鱼苗数量估计方法及系统

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107369165A (zh) * 2017-07-10 2017-11-21 Tcl移动通信科技(宁波)有限公司 一种视频选择画面优化方法及存储介质、智能终端
EP3680853A4 (fr) 2017-09-11 2020-11-04 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Procédé et dispositif de traitement d'images, dispositif électronique et support d'informations lisible par ordinateur
CN107610077A (zh) * 2017-09-11 2018-01-19 广东欧珀移动通信有限公司 图像处理方法和装置、电子装置和计算机可读存储介质
CN107798654B (zh) * 2017-11-13 2022-04-26 北京小米移动软件有限公司 图像磨皮方法及装置、存储介质
CN108269280A (zh) * 2018-01-05 2018-07-10 厦门美图之家科技有限公司 一种深度图像的处理方法及移动终端
CN110766644B (zh) * 2018-07-27 2023-03-24 杭州海康威视数字技术股份有限公司 一种图像降采样方法及装置
CN109377499B (zh) * 2018-09-12 2022-04-15 中山大学 一种像素级物体分割方法及装置
CN110427742B (zh) * 2019-08-06 2021-05-25 北京如优教育科技有限公司 认证服务平台内容保存系统、方法及存储介质
CN110503704B (zh) * 2019-08-27 2023-07-21 北京迈格威科技有限公司 三分图的构造方法、装置和电子设备
CN111242863B (zh) * 2020-01-09 2023-05-23 合肥酷芯微电子有限公司 基于图像处理器实现的消除镜头横向色差的方法及介质
CN111275139B (zh) * 2020-01-21 2024-02-23 杭州大拿科技股份有限公司 手写内容去除方法、手写内容去除装置、存储介质
CN111861927B (zh) * 2020-07-24 2022-06-28 上海艾麒信息科技有限公司 图像场景还原方法及系统
CN115988311A (zh) * 2021-10-14 2023-04-18 荣耀终端有限公司 图像处理方法与电子设备

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103428409A (zh) * 2012-05-15 2013-12-04 深圳中兴力维技术有限公司 一种基于固定场景的视频降噪处理方法及装置
US20150002545A1 (en) * 2013-06-28 2015-01-01 Canon Kabushiki Kaisha Variable blend width compositing
CN104952036A (zh) * 2015-06-18 2015-09-30 福州瑞芯微电子有限公司 一种即时视频中的人脸美化方法和电子设备
US20160086316A1 (en) * 2014-09-19 2016-03-24 Kanghee Lee Real time skin smoothing image enhancement filter

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3992575B2 (ja) * 2002-09-13 2007-10-17 株式会社シマノ 釣竿
CN103795961A (zh) * 2012-10-30 2014-05-14 三亚中兴软件有限责任公司 会议电视网真系统及其图像处理方法
CN105869159A (zh) * 2016-03-28 2016-08-17 联想(北京)有限公司 一种图像分割方法及装置

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103428409A (zh) * 2012-05-15 2013-12-04 深圳中兴力维技术有限公司 一种基于固定场景的视频降噪处理方法及装置
US20150002545A1 (en) * 2013-06-28 2015-01-01 Canon Kabushiki Kaisha Variable blend width compositing
US20160086316A1 (en) * 2014-09-19 2016-03-24 Kanghee Lee Real time skin smoothing image enhancement filter
CN104952036A (zh) * 2015-06-18 2015-09-30 福州瑞芯微电子有限公司 一种即时视频中的人脸美化方法和电子设备

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110517211A (zh) * 2019-07-31 2019-11-29 茂莱(南京)仪器有限公司 一种基于梯度域映射的图像融合方法
CN111524204A (zh) * 2020-05-06 2020-08-11 杭州趣维科技有限公司 一种人像发丝动漫化纹理生成方法
CN111524204B (zh) * 2020-05-06 2023-06-27 杭州小影创新科技股份有限公司 一种人像发丝动漫化纹理生成方法
CN114255203A (zh) * 2020-09-22 2022-03-29 中国农业大学 一种鱼苗数量估计方法及系统
CN114255203B (zh) * 2020-09-22 2024-04-09 中国农业大学 一种鱼苗数量估计方法及系统
CN114155426A (zh) * 2021-12-13 2022-03-08 中国科学院光电技术研究所 一种基于局部多向梯度信息融合的弱小目标检测方法
CN114155426B (zh) * 2021-12-13 2023-08-15 中国科学院光电技术研究所 一种基于局部多向梯度信息融合的弱小目标检测方法

Also Published As

Publication number Publication date
CN106485720A (zh) 2017-03-08

Similar Documents

Publication Publication Date Title
WO2018082185A1 (fr) Procédé et dispositif de traitement d'image
CN108694705B (zh) 一种多帧图像配准与融合去噪的方法
CN110324664B (zh) 一种基于神经网络的视频补帧方法及其模型的训练方法
Kim et al. Optimized contrast enhancement for real-time image and video dehazing
Li et al. Edge-preserving decomposition-based single image haze removal
CN113034358B (zh) 一种超分辨率图像处理方法以及相关装置
US8059911B2 (en) Depth-based image enhancement
WO2018068420A1 (fr) Procédé et appareil de traitement d'images
WO2016206087A1 (fr) Procédé et dispositif de traitement d'images à faible éclairage
EP2863362B1 (fr) Procédé et appareil pour une segmentation de scène à partir d'images de pile focale
CN111311482B (zh) 背景虚化方法、装置、终端设备及存储介质
CN109767408B (zh) 图像处理方法、装置、存储介质及计算机设备
WO2021232965A1 (fr) Procédé et appareil de réduction de bruit vidéo, terminal mobile et support de stockage
Plath et al. Adaptive image warping for hole prevention in 3D view synthesis
CN113039576A (zh) 图像增强系统和方法
CN111353955A (zh) 一种图像处理方法、装置、设备和存储介质
CN114627034A (zh) 一种图像增强方法、图像增强模型的训练方法及相关设备
Arulkumar et al. Super resolution and demosaicing based self learning adaptive dictionary image denoising framework
CN115471413A (zh) 图像处理方法及装置、计算机可读存储介质和电子设备
CN103685858A (zh) 视频实时处理的方法及设备
CN102223545A (zh) 一种快速多视点视频颜色校正方法
CN116263942A (zh) 一种调整图像对比度的方法、存储介质及计算机程序产品
CN111369435B (zh) 基于自适应稳定模型的彩色图像深度上采样方法及系统
Lai et al. Single image dehazing with optimal transmission map
CN104754320B (zh) 一种3d‑jnd阈值计算方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16920542

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 04.10.2019)

122 Ep: pct application non-entry in european phase

Ref document number: 16920542

Country of ref document: EP

Kind code of ref document: A1