
CN107370958B - Image blurring processing method, device and camera terminal - Google Patents


Info

Publication number
CN107370958B
CN107370958B
Authority
CN
China
Prior art keywords
image
region
depth
pixel point
foreground
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710756408.5A
Other languages
Chinese (zh)
Other versions
CN107370958A (en)
Inventor
丁佳铭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201710756408.5A priority Critical patent/CN107370958B/en
Publication of CN107370958A publication Critical patent/CN107370958A/en
Priority to PCT/CN2018/101997 priority patent/WO2019042216A1/en
Application granted granted Critical
Publication of CN107370958B publication Critical patent/CN107370958B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/04Context-preserving transformations, e.g. by using an importance map
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • H04N23/611Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The present invention provides an image blurring processing method, an apparatus, and a camera terminal. The method includes: acquiring a depth image to be processed; determining a foreground region and a background region in the depth image; and, when blurring the background region, adjusting the weight value of each pixel point in a target filtering region if the target filtering region corresponding to a pixel point to be blurred is determined to contain pixel points located in the foreground region. By adjusting the weight values of the pixel points in target filtering regions that contain foreground-region pixel points, the boundary between the foreground region and the background region of the image remains clear after the blurring operation, the blurred edge of the background region is effectively protected, the overall image quality is improved, and the user experience is enhanced.

Description

Image blurring processing method and apparatus, and shooting terminal
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image blurring processing method and apparatus, and a shooting terminal.
Background
Conventionally, when a captured image is processed, the background portion of the image is often blurred in order to highlight the main subject. For example, the background portion may be blurred using a Gaussian blur technique while the subject portion is preserved.
Specifically, Gaussian blurring is a blurring algorithm that uses distance as the weight: for a point to be blurred, the pixel values of that point and its surrounding neighboring points are combined by weighted averaging to obtain the new pixel value of the point. However, the inventors found that after an image is processed with the Gaussian blur technique, the edge where the subject and the background meet is unclear and ghosting appears, so the overall quality of the processed captured image is poor, as shown in fig. 1.
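The weighted-average computation described above can be sketched as follows. This is a minimal illustration of conventional Gaussian blurring, not code from the patent; the function names and the sigma value are illustrative assumptions:

```python
import math

def gaussian_kernel(radius=1, sigma=1.0):
    """Build a (2r+1)x(2r+1) Gaussian kernel whose weights sum to 1."""
    kernel = [[math.exp(-(dx * dx + dy * dy) / (2.0 * sigma * sigma))
               for dx in range(-radius, radius + 1)]
              for dy in range(-radius, radius + 1)]
    total = sum(sum(row) for row in kernel)
    return [[w / total for w in row] for row in kernel]

def blur_pixel(image, x, y, kernel):
    """Weighted average of the pixel at (x, y) and its surrounding neighbors."""
    radius = len(kernel) // 2
    value = 0.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            value += kernel[dy + radius][dx + radius] * image[y + dy][x + dx]
    return value
```

Because every neighbor contributes regardless of which region it belongs to, foreground pixels leak into blurred background pixels near the boundary, which is the ghosting problem the patent sets out to fix.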
Disclosure of Invention
The present invention is directed to solving, at least to some extent, one of the technical problems in the related art.
Therefore, a first objective of the present invention is to provide an image blurring processing method in which the weight value of each pixel point in a target filtering region containing foreground-region pixel points is adjusted, so that the boundary between the foreground region and the background region of the image remains clear after the blurring operation, the blurred edge of the background region is effectively protected, the overall image quality is improved, and the user experience is enhanced.
A second object of the present invention is to provide an image blurring processing apparatus.
A third object of the present invention is to provide a photographing terminal.
A fourth object of the invention is to propose a computer-readable storage medium.
To achieve the above object, an embodiment of a first aspect of the present invention provides an image blurring processing method, including:
acquiring a depth-of-field image to be processed;
determining a foreground region and a background region in the depth image;
when blurring processing is performed on the background region, if it is determined that the target filtering region corresponding to a pixel point to be blurred contains pixel points located in the foreground region, adjusting the weight value of each pixel point in the target filtering region.
The image blurring processing method provided by the embodiment of the present invention determines a foreground region and a background region of an acquired depth image by acquiring the depth image to be processed, and then determines whether a pixel point located in the foreground region is included in a target filtering region corresponding to a pixel point to be blurred when blurring the determined background region, and adjusts a weight value of each pixel point in the target filtering region if the pixel point is included in the target filtering region. Therefore, the weighted values of the pixels in the target filtering area containing the pixels in the foreground area are adjusted, so that when blurring operation is performed on the background area, the boundary between the foreground area and the background area of the image is clearer, the blurring edge of the background area is effectively protected, the overall image quality of the image is improved, and the experience of a user is improved.
In addition, the image blurring processing method proposed by the above embodiment of the present invention may further have the following additional technical features:
in one embodiment of the present invention, the determining the foreground region and the background region in the depth image includes:
and determining a foreground region and a background region in the depth image according to a preset range.
In another embodiment of the present invention, the depth image to be processed includes a portrait;
the determining a foreground region and a background region in the depth image comprises:
performing face recognition on the depth-of-field image to determine a face region;
and determining a foreground region and a background region in the depth-of-field image according to the depth-of-field value corresponding to the face region.
In another embodiment of the present invention, the determining that the target filtering region corresponding to a pixel point to be blurred contains pixel points located in the foreground region includes:
if the depth-of-field value corresponding to any pixel point contained in the target filtering region falls within the preset range, determining that this pixel point is a foreground-region pixel point;
or,
if the difference between the depth-of-field values respectively corresponding to two adjacent pixel points in the target filtering region is greater than a preset value, determining that the target filtering region contains pixel points located in the foreground region.
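The two criteria above can be sketched as a single check, assuming the depth values of the target filtering region are available as a small 2-D list. The range and difference thresholds are placeholders, not values fixed by the patent:

```python
def contains_foreground(depths, preset_range=(10, 20), preset_diff=10):
    """Return True if the target filtering region (a rectangular grid of
    depth values) contains a foreground pixel, by either criterion:
    (1) any depth value falls inside the preset foreground range, or
    (2) two adjacent pixels differ in depth by more than a preset value."""
    lo, hi = preset_range
    # Criterion 1: depth value within the preset (foreground) range.
    if any(lo <= d <= hi for row in depths for d in row):
        return True
    # Criterion 2: adjacent depth difference exceeds the preset value.
    rows, cols = len(depths), len(depths[0])
    for y in range(rows):
        for x in range(cols):
            if x + 1 < cols and abs(depths[y][x] - depths[y][x + 1]) > preset_diff:
                return True
            if y + 1 < rows and abs(depths[y][x] - depths[y + 1][x]) > preset_diff:
                return True
    return False
```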
In another embodiment of the present invention, the adjusting the weight value of each pixel point in the target filtering region includes:
setting the weight values of the pixel points located in the foreground region to zero;
and adjusting the weight values of the remaining pixel points in the target filtering region so that the sum of their weight values equals one.
In another embodiment of the present invention, the adjusting the weight values of the remaining pixel points in the target filtering region includes:
scaling up the weight values of the remaining pixel points in equal proportion;
or,
adjusting the weight values of the remaining pixel points according to the depth-of-field values respectively corresponding to the remaining pixel points.
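The zero-and-renormalize adjustment described in these embodiments can be sketched as follows; this is a hypothetical helper for illustration, not the patent's implementation:

```python
def adjust_weights(weights, is_foreground):
    """Zero the weights of foreground pixels in the target filtering region,
    then scale the remaining weights proportionally so they sum to one."""
    adjusted = [[0.0 if fg else w for w, fg in zip(wrow, frow)]
                for wrow, frow in zip(weights, is_foreground)]
    remaining = sum(sum(row) for row in adjusted)
    if remaining == 0:
        raise ValueError("every pixel in the filtering region is foreground")
    return [[w / remaining for w in row] for row in adjusted]
```

With the adjusted weights, the weighted average draws only on background pixels, so foreground colors no longer bleed across the region boundary.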
To achieve the above object, a second aspect of the present invention provides an image blurring processing apparatus, including:
the acquisition module is used for acquiring a depth-of-field image to be processed;
the determining module is used for determining a foreground region and a background region in the depth image;
and the adjusting module is used for adjusting the weight value of each pixel point in the target filtering area if it is determined that the target filtering area corresponding to the pixel point to be blurred contains the pixel point positioned in the foreground area when blurring the background area.
The image blurring processing device provided in the embodiment of the present invention determines a foreground region and a background region of an acquired depth image by acquiring the depth image to be processed, and then determines whether a pixel point located in the foreground region is included in a target filtering region corresponding to a pixel point to be blurred when blurring the determined background region, and adjusts a weight value of each pixel point in the target filtering region if the pixel point is included in the target filtering region. Therefore, the weighted values of the pixels in the target filtering area containing the pixels in the foreground area are adjusted, so that when blurring operation is performed on the background area, the boundary between the foreground area and the background area of the image is clearer, the blurring edge of the background area is effectively protected, the overall image quality of the image is improved, and the experience of a user is improved.
In addition, the image blurring processing device according to the above embodiment of the present invention may further have the following additional technical features:
in an embodiment of the present invention, the determining module is specifically configured to:
and determining a foreground region and a background region in the depth image according to a preset range.
In another embodiment of the present invention, the depth image to be processed includes a portrait;
the determining module includes:
the first determining subunit is used for performing face recognition on the depth image and determining a face region;
and the second determining subunit is configured to determine a foreground region and a background region in the depth image according to the depth value corresponding to the face region.
In another embodiment of the present invention, the adjusting module is specifically configured to:
if the depth-of-field value corresponding to any pixel point contained in the target filtering region falls within the preset range, determining that this pixel point is a foreground-region pixel point;
or,
if the difference between the depth-of-field values respectively corresponding to two adjacent pixel points in the target filtering region is greater than a preset value, determining that the target filtering region contains pixel points located in the foreground region.
In another embodiment of the present invention, the adjusting module further comprises:
the first adjusting subunit is used for setting the weight values of the pixel points located in the foreground region to zero;
and the second adjusting subunit is used for adjusting the weight values of the remaining pixel points in the target filtering region so that the sum of their weight values equals one.
In another embodiment of the present invention, the second adjusting subunit is specifically configured to:
scale up the weight values of the remaining pixel points in equal proportion;
or,
adjust the weight values of the remaining pixel points according to the depth-of-field values respectively corresponding to the remaining pixel points.
To achieve the above object, a third aspect of the present invention provides a shooting terminal, including: a memory, a processor, and an image processing circuit;
the memory for storing executable instructions of the processor;
the image processing circuit is used for processing the depth image;
the processor is configured to call the program code in the memory, and implement the image blurring processing method according to the embodiment of the first aspect according to the depth image output by the image processing circuit.
The photographing terminal provided in the embodiment of the present invention determines a foreground region and a background region of an acquired depth image by acquiring the depth image to be processed, and then determines whether a target filtering region corresponding to a pixel point to be blurred includes a pixel point located in the foreground region when blurring the determined background region, and adjusts a weight value of each pixel point in the target filtering region if the target filtering region includes the pixel point. Therefore, the weighted values of the pixels in the target filtering area containing the pixels in the foreground area are adjusted, so that when blurring operation is performed on the background area, the boundary between the foreground area and the background area of the image is clearer, the blurring edge of the background area is effectively protected, the overall image quality of the image is improved, and the experience of a user is improved.
To achieve the above object, a fourth aspect of the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the image blurring processing method described in the first aspect.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a diagram illustrating a prior art image blurring process using Gaussian blur;
FIG. 2 is a flowchart of an image blurring processing method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating a determination of a target filtering region of a pixel to be blurred according to an embodiment of the present invention;
FIG. 4 is a flowchart of an image blurring processing method according to another embodiment of the present invention;
fig. 5 is a schematic diagram illustrating the processing of a target filtering region that contains foreground-region pixel points according to an embodiment of the present invention;
FIG. 6 is a diagram illustrating an effect of blurring an image according to the present invention;
FIG. 7 is a schematic structural diagram of an image blurring processing apparatus according to an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of an image blurring processing apparatus according to another embodiment of the present invention;
FIG. 9 is a schematic structural diagram of an image blurring processing apparatus according to yet another embodiment of the present invention;
fig. 10 is a schematic structural view of a photographing terminal according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of an image processing circuit according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
The present invention mainly addresses the problems in the prior art that, when an image is blurred using the Gaussian blur technique, the boundary where the foreground region and the background region of the processed image meet is unclear and ghosting appears.
The image processing method provided by the present invention first acquires a depth image to be processed and then determines the foreground region and the background region in the acquired depth image. When blurring the determined background region, it first determines whether the target filtering region corresponding to a pixel point to be blurred contains pixel points of the foreground region and, if so, adjusts the weight value of each pixel point in that target filtering region. Because the weight values of the pixel points in target filtering regions containing foreground-region pixel points are adjusted, the boundary between the foreground region and the background region remains clear after the blurring operation, the blurred edge of the background region is effectively protected, the overall image quality is improved, and the user experience is enhanced.
The following describes the image blurring processing method according to the embodiment of the present invention in detail with reference to the accompanying drawings.
Fig. 2 is a flowchart of an image blurring processing method according to an embodiment of the present invention.
As shown in fig. 2, the image blurring processing method may include the steps of:
step 201, acquiring a depth image to be processed.
The image blurring processing method provided by the embodiment of the invention can be executed by the image blurring processing device provided by the invention, and the device can be configured in a shooting terminal to realize image blurring processing on a shot image.
In the present application, the shooting terminal may be any hardware device having a shooting function, such as a camera, a smart phone, a Personal Digital Assistant (PDA), and the like.
In a specific implementation, the depth image to be processed may be acquired by a prior art means, and this embodiment will not be described in detail here.
It should be noted that the depth image acquired in this embodiment may be a landscape image, a person image, or a building image, and the like, and this application is not limited to this specifically.
Step 202, determining a foreground region and a background region in the depth image.
In specific implementation, the foreground region and the background region can be determined according to the type of the depth image, or the foreground region and the background region can be determined according to the brightness of a shot object in the depth image.
Specifically, if the foreground region and the background region are determined according to the type of the depth image, this embodiment first determines whether the depth image is a person image or a landscape image; after the specific type of the depth image to be processed is determined, its foreground region and background region may be determined in the following manners:
in the first mode, if the depth image to be processed is a landscape image or a building image, the foreground region and the background region in the depth image to be processed may be determined according to a preset range.
Specifically, the preset range in this embodiment may be set by a user, or may also be automatically configured by the shooting device according to the depth image, and is not specifically limited herein.
For example, suppose the preset range is 20 centimeters (cm) to 50 cm, and the depth image currently acquired by the terminal contains a flower whose depth range is 55 cm to 60 cm; the flower can then be determined to belong to the background region of the depth image.
In the second mode, if the depth-of-field image to be processed is a person image, face recognition may be performed on the depth-of-field image to determine a face region in the depth-of-field image, and then a foreground region and a background region in the depth-of-field image are determined according to a depth-of-field value corresponding to the face region.
For example, if the depth of field value of the detected face region is 15, the region with the depth of field value in the range of 12-17 may be determined as the foreground region, and the other regions may be determined as the background region.
It should be noted that the depth value of the face region may refer to an average of depth values corresponding to all pixel points in the face region, or may also refer to a depth value of any one pixel point in the face region, which is not limited in this embodiment.
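One plausible reading of the face-depth approach is sketched below, using the averaging option mentioned above plus a symmetric margin around the face depth. The margin value is purely an illustrative assumption (the example above, 12 to 17 around a face depth of 15, is not exactly symmetric, and the patent does not fix a formula):

```python
def foreground_range_from_face(depth_map, face_mask, margin=2.5):
    """Average the depth values over the detected face region and widen the
    result by +/- margin to obtain the foreground depth range."""
    face_depths = [d for drow, mrow in zip(depth_map, face_mask)
                   for d, m in zip(drow, mrow) if m]
    face_depth = sum(face_depths) / len(face_depths)
    return (face_depth - margin, face_depth + margin)
```

Pixels whose depth value falls inside the returned range would then be labeled foreground, and all others background.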
If the foreground region and the background region are determined according to the brightness of the object in the depth image, the brightness condition of the depth image is obtained, and the foreground region and the background region are determined according to the brightness condition.
Specifically, in an actual shooting scenario, fill light often has to be added for the object being shot because of environmental factors. During fill lighting, objects at different distances receive light of different intensities, so the brightness they show on the display screen of the shooting device differs: a region close to the shooting device appears brighter, while a region far from it appears darker.
Based on the analysis, the foreground region and the background region of the depth image can be determined according to the brightness conditions of different positions of the object in the image. That is, a region with high brightness in the captured image may be determined as a foreground region, and a region with low brightness may be determined as a background region.
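The brightness-based determination above reduces to a simple thresholding step; the sketch below is illustrative only, and the threshold value is an assumption, not one given by the patent:

```python
def segment_by_brightness(luma, threshold=128):
    """Mark bright pixels (close to the camera under fill light) as
    foreground and dark pixels as background; returns a boolean mask."""
    return [[lum >= threshold for lum in row] for row in luma]
```

A fixed threshold is the simplest choice; in practice the cut-off could also be derived from the image's brightness histogram.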
Step 203, when blurring the background region, if it is determined that the target filtering region corresponding to the pixel point to be blurred includes a pixel point located in the foreground region, adjusting a weight value of each pixel point in the target filtering region.
Specifically, after the foreground region and the background region of the depth image are determined, the determined background region may be blurred. When blurring is performed on the determined background region, a target filtering region of a pixel point to be blurred in the background region needs to be determined.
In this embodiment, the target filtering region of the pixel to be blurred may be determined according to a determination method in the existing gaussian blurring technique.
For example, as shown in fig. 3, a 3 × 3 matrix containing the pixel to be processed may be taken from the image; if the pixel to be processed is at coordinates (0, 0), the coordinates of the remaining 8 pixels are (-1,1), (0,1), (1,1), (-1,0), (1,0), (-1, -1), (0, -1), and (1, -1).
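The 3 × 3 target filtering region can be generated as coordinate offsets around the pixel to be processed; the traversal order (top row first) matches the listing above, and the function name is illustrative:

```python
def filtering_region_coords(x=0, y=0, radius=1):
    """Coordinates of the (2r+1)x(2r+1) target filtering region centered
    on the pixel to be blurred at (x, y), top row first."""
    return [(x + dx, y + dy)
            for dy in range(radius, -radius - 1, -1)
            for dx in range(-radius, radius + 1)]
```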
Further, after a target filtering region of the pixel to be blurred is determined, whether the target filtering region contains the pixel in the foreground region can be judged.
Specifically, whether the target filtering region includes a pixel point located in the foreground region may be determined in the following manner.
In one implementation situation, if the depth of field value corresponding to any pixel point contained in a target filtering area is within a preset range, determining any pixel point as a pixel point of a foreground area;
In a specific implementation, the depth-of-field values of all pixel points in the target filtering region can be obtained and then matched one by one against the preset range: if the depth-of-field value of a pixel point falls within the preset range, the pixel point belongs to the foreground region; otherwise, it belongs to the background region.
For example, if the depth of field value of a certain pixel in the target filtering region is 12 and the preset range is 10 to 20, it can be determined that the pixel is within the preset range through matching, that is, the pixel belongs to the foreground region.
For another example, if the depth-of-field value of a certain pixel in the target filtering region is 30, matching shows that the pixel belongs to the background region.
Or, in another implementation situation, if the difference between the depth of field values respectively corresponding to two adjacent pixels in the target filtering region is greater than a preset value, it is determined that the pixel located in the foreground region is included in the target filtering region.
Specifically, because the difference between the depth of field of the foreground region and the depth of field of the background region is usually large, in this embodiment, the region to which each pixel belongs may also be determined according to the depth of field variation value between adjacent pixels.
Specifically, the depth-of-field values corresponding to all pixel points in the target filtering region are first obtained; the depth-of-field values of adjacent pixel points are then subtracted from each other, and the difference is compared with a preset value. If the difference is greater than the preset value, the target filtering region contains pixel points of the foreground region.
For example, if the depth-of-field values of two adjacent pixel points in the target filtering region are 24 and 25, subtracting them gives a difference of 1. If the preset value is 10, the difference between the two pixel points is smaller than the preset value, indicating that no foreground-region pixel point is detected between them.
In this embodiment, if it is determined that the target filtering region corresponding to the pixel point to be blurred contains foreground-region pixel points, the weight value of each pixel point in the target filtering region is adjusted, so that when the pixel point is blurred, the foreground-region pixel points do not affect the blurring effect of the background region. As a result, the boundary between the foreground region and the background region remains clear after blurring, the blurred edge is protected, and the subject is more prominent.
The image blurring processing method provided by the embodiment of the present invention acquires a depth image to be processed, determines its foreground region and background region, and then, when blurring the determined background region, determines whether the target filtering region corresponding to a pixel point to be blurred contains pixel points located in the foreground region; if so, the weight value of each pixel point in the target filtering region is adjusted. Because the weight values of the pixel points in target filtering regions containing foreground-region pixel points are adjusted, the boundary between the foreground region and the background region in the blurred image is clean and clear, the blurred edge of the background region is effectively protected, the subject is more prominent, the overall image quality is improved, and the user experience is enhanced.
As the above analysis shows, when blurring the background region, it is first necessary to determine whether the target filtering region corresponding to the pixel point to be blurred contains pixel points of the foreground region; if it does, the weight value of each pixel point in the target filtering region is adjusted. In a possible implementation scenario of the present invention, these weight values can be adjusted in multiple ways, so that the boundary between the foreground region and the background region in the resulting blurred image is clean and clear. The image blurring processing method in this case is further described below with reference to fig. 4.
FIG. 4 is a flowchart of an image blurring processing method according to another embodiment of the present invention.
As shown in fig. 4, the image blurring processing method may include the steps of:
step 401, acquiring a depth image to be processed.
Step 402, determining a foreground region and a background region in the depth image.
Step 403, when blurring the background area, if it is determined that the target filtering area corresponding to the pixel point to be blurred includes a pixel point located in the foreground area, setting a weight value of the pixel point located in the foreground area to zero.
Step 404, adjusting the weighted values of the remaining pixel points in the target filtering region so that the sum of the weighted values of the remaining pixel points is equal to one.
Specifically, when it is determined that the target filtering region contains pixel points located in the foreground region, the weight values of those foreground pixel points may be set to zero, so that when the background region is blurred, information from the foreground pixel points inside the target filtering region cannot blur the boundary between the blurred background region and the foreground region, as shown in fig. 5.
As can be seen from fig. 5, the pixel point to be blurred is A, and a pixel point B in the foreground region lies near pixel point A; therefore, before blurring pixel point A, the weight of the part of the filtering region on the left that contains pixel point B is set to 0.
Further, after the weights of the foreground pixel points are set to zero, the weights of the remaining pixel points in the target filtering region can be adjusted to ensure that the blurred pixel still conforms to the preset blurring standard.
In specific implementation, the weighted values of the remaining pixels in the target filtering region can be adjusted in the following ways.
One implementation is to scale up the weights of the remaining pixels in equal proportion.

Specifically, assuming that the unified standard for blurring pixels in the background region of the image is that the kernel weights sum to 1, setting the weights of the foreground pixels to zero means that the weights in the target filtering region no longer sum to 1. Scaling the remaining weights up in equal proportion restores the sum to 1, so no blurring distortion occurs when the pixel to be blurred is processed; the boundary between the foreground region and the background region of the image becomes cleaner and clearer, and ghosting is avoided.
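This equal-proportion renormalization can be sketched as follows, using NumPy; the array layout and function name are illustrative, since the patent does not prescribe a concrete implementation:

```python
import numpy as np

def renormalize_kernel(weights, foreground_mask):
    """Zero the kernel weights that fall on foreground pixels, then scale
    the remaining weights in equal proportion so they again sum to 1."""
    adjusted = np.where(foreground_mask, 0.0, weights)
    total = adjusted.sum()
    if total == 0.0:
        raise ValueError("target filtering region contains only foreground pixels")
    return adjusted / total

# A 3x3 averaging kernel whose top-left sample falls on the foreground:
kernel = np.full((3, 3), 1.0 / 9.0)
mask = np.zeros((3, 3), dtype=bool)
mask[0, 0] = True
adjusted = renormalize_kernel(kernel, mask)
# The eight remaining weights each become 1/8, and the sum is 1 again.
```

The blurred value of the pixel is then the sum of the neighbourhood pixel values weighted by `adjusted`, exactly as with the unmodified kernel.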
Or, in another implementation, the weighted values of the remaining pixel points are adjusted according to the depth of field values respectively corresponding to the remaining pixel points.
Specifically, because the depth-of-field values of pixels in different areas of the image can differ greatly, the weights of the remaining pixels can be adjusted according to their respective depth-of-field values.

For example, the weight of a pixel at the same depth-of-field level as the pixel to be blurred may be increased by a larger amount, while the weight of a pixel at a different depth-of-field level may be increased by a smaller amount.

It can be understood that increasing the weights of the pixels at the same depth-of-field level by a larger amount ensures that, in the image obtained after blurring the background region, the background region is not distorted while the boundary between the foreground region and the background region remains clean and clear.
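The depth-aware redistribution described above can be sketched as follows; the tolerance defining "same depth-of-field level" and the 3:1 split of the freed weight are illustrative assumptions, not values from the description:

```python
import numpy as np

def redistribute_by_depth(weights, foreground_mask, depths, target_depth,
                          level_tol=0.2, same_level_share=0.75):
    """Redistribute the weight removed from foreground pixels, giving a
    larger share to pixels at the same depth-of-field level as the pixel
    being blurred and a smaller share to the rest."""
    removed = weights[foreground_mask].sum()
    adjusted = np.where(foreground_mask, 0.0, weights)
    same = (~foreground_mask) & (np.abs(depths - target_depth) <= level_tol)
    other = (~foreground_mask) & ~same
    # Both groups are assumed non-empty in this sketch.
    adjusted[same] += removed * same_level_share / same.sum()
    adjusted[other] += removed * (1.0 - same_level_share) / other.sum()
    return adjusted

weights = np.full((3, 3), 1.0 / 9.0)
fg = np.zeros((3, 3), dtype=bool)
fg[0, 0] = True                      # one foreground sample in the kernel
depths = np.full((3, 3), 3.0)
depths[2, :] = 4.0                   # bottom row lies at a farther depth level
adjusted = redistribute_by_depth(weights, fg, depths, target_depth=3.0)
```

Pixels at the target depth level end up with a larger weight than the farther ones, while the weights still sum to one.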
In summary, the effect of blurring the image by the image blurring processing method in the present application is as shown in fig. 6.
According to the image blurring processing method provided by the embodiment of the invention, the weight of each pixel point in a target filtering region containing foreground pixel points is adjusted, so that in the image obtained after blurring the background region, the boundary between the foreground region and the background region is clean and clear. The blurred edge of the background region is effectively protected, the subject is made more prominent, the overall image quality is improved, and the user experience is improved.
In order to implement the above embodiments, the present invention further provides an image blurring processing apparatus.
Fig. 7 is a schematic structural diagram of an image blurring processing apparatus according to an embodiment of the present invention.
Referring to fig. 7, the image blurring processing apparatus includes: an acquisition module 10, a determination module 20 and an adjustment module 30.
The acquiring module 10 is configured to acquire a depth image to be processed;
the determining module 20 is configured to determine a foreground region and a background region in the depth image;
in one possible implementation scenario, the determining module 20 is specifically configured to: and determining a foreground region and a background region in the depth image according to a preset range.
Further, as shown in fig. 8, when the depth image to be processed includes a portrait, the determining module 20 includes: a first determining subunit 21 and a second determining subunit 22.
The first determining subunit 21 is configured to perform face recognition on the depth image, and determine a face region;
the second determining subunit 22 is configured to determine a foreground region and a background region in the depth image according to the depth value corresponding to the face region.
The adjusting module 30 is configured to, when blurring the background region, adjust a weight value of each pixel point in the target filtering region if it is determined that the target filtering region corresponding to the pixel point to be blurred includes a pixel point located in the foreground region.
In an implementation form of the present invention, the adjusting module 30 is specifically configured to:
if the depth of field value corresponding to any pixel point in the target filtering area is within a preset range, determining any pixel point as a pixel point of the foreground area;
or,
and if the difference value of the depth of field values respectively corresponding to two adjacent pixel points in the target filtering area is greater than a preset value, determining that the pixel points located in the foreground area are contained in the target filtering area.
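These two foreground-detection criteria can be sketched as follows; the preset foreground depth range and the adjacent-difference threshold are illustrative assumptions:

```python
import numpy as np

def contains_foreground(depths, fg_range=(0.3, 1.5), jump_threshold=0.5):
    """Return True if the target filtering region contains a foreground
    pixel: either some depth value falls inside the preset foreground
    range, or an adjacent-pixel depth difference exceeds the preset value."""
    lo, hi = fg_range
    if np.any((depths >= lo) & (depths <= hi)):
        return True
    # Check horizontal and vertical neighbour differences for a depth jump.
    return bool(np.any(np.abs(np.diff(depths, axis=0)) > jump_threshold)
                or np.any(np.abs(np.diff(depths, axis=1)) > jump_threshold))

flat_background = np.full((3, 3), 3.0)   # uniform far depth: no foreground
near_subject = flat_background.copy()
near_subject[0, 0] = 1.0                 # one pixel inside the foreground range
```

Only regions for which this test returns True need their kernel weights adjusted; all-background regions are filtered with the unmodified kernel.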
In another possible implementation case, as shown in fig. 9, the adjusting module 30 in the apparatus further includes: a first adjusting subunit 31 and a second adjusting subunit 32.
The first adjusting subunit 31 is configured to set a weight value of a pixel point located in the foreground region to zero;
the second adjusting subunit 32 is configured to adjust the weighted values of the remaining pixel points in the target filtering region, so that the sum of the weighted values of the remaining pixel points is equal to one.
Further, the second adjusting subunit 32 is specifically configured to:
expanding the weight values of the residual pixel points in equal proportion;
or,
and adjusting the weight values of the residual pixel points according to the depth of field values respectively corresponding to the residual pixel points.
It should be noted that, for the implementation process and the technical principle of the image blurring processing apparatus of this embodiment, reference is made to the foregoing explanation of the embodiment of the image blurring processing method, and details are not repeated here.
The image blurring processing device provided in the embodiment of the present invention acquires a depth image to be processed, determines a foreground region and a background region of the depth image, and then, when blurring the determined background region, determines whether the target filtering region corresponding to a pixel point to be blurred includes a pixel point located in the foreground region; if so, the weight of each pixel point in the target filtering region is adjusted. In this way, by adjusting the weight of each pixel in a target filtering region that contains foreground pixels, the boundary between the foreground region and the background region in the blurred image is clean and clear; the blurred edge of the background region is effectively protected, the subject is made more prominent, the overall image quality is improved, and the user experience is enhanced.
In order to implement the above embodiment, the present invention further provides a shooting terminal.
Fig. 10 is a schematic structural diagram of a photographing terminal according to an embodiment of the present invention.
As shown in fig. 10, the photographing terminal 100 includes a memory 111, a processor 112, and an image processing circuit 113.
Wherein, the memory 111 is used for storing executable instructions of the processor 112;
the processor 112 is configured to call the program code in the memory 111 and implement the image blurring processing method according to the depth image output by the image processing circuit 113.
Specifically, the Image Processing circuit 113 may be implemented using hardware and/or software components, and may include various Processing units defining an ISP (Image Signal Processing) pipeline.
FIG. 11 is a schematic diagram of an image processing circuit in one embodiment. As shown in fig. 11, for convenience of explanation, only aspects of the image processing technique related to the embodiment of the present invention are shown.
As shown in fig. 11, the image processing circuit 113 includes an imaging device 1140, an ISP processor 1150 and control logic 1160. The imaging device 1140 may include a camera with one or more lenses 1141, an image sensor 1142, and a structured light projector 1143. The structured light projector 1143 projects the structured light to the object to be measured. The structured light pattern may be a laser stripe, a gray code, a sinusoidal stripe, or a randomly arranged speckle pattern. The image sensor 1142 captures a structured light image projected onto the object to be measured, and transmits the structured light image to the ISP processor 1150, and the ISP processor 1150 demodulates the structured light image to obtain the depth information of the object to be measured. Meanwhile, the image sensor 1142 may capture color information of the measured object. Of course, the two image sensors 1142 may capture the structured light image and the color information of the measured object, respectively.
Taking speckle structured light as an example, the ISP processor 1150 demodulates the structured light image as follows: a speckle image of the measured object is extracted from the structured light image, and image data calculation is performed between this speckle image and a reference speckle image according to a predetermined algorithm to obtain the displacement of each speckle point of the measured object relative to the corresponding reference speckle point. The depth value of each speckle point is then calculated by triangulation, and the depth information of the measured object is obtained from these depth values.
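As an illustration of the triangulation step, a simplified projector-camera pinhole model gives depth = baseline × focal length / disparity. Real speckle systems (and the arrangement described here) work relative to a reference plane, which adds an offset term; the baseline and focal-length constants below are assumptions for the example:

```python
def triangulate_depth(disparity_px, baseline_m=0.05, focal_px=580.0):
    """Simplified pinhole triangulation: the displacement (disparity, in
    pixels) of a speckle relative to its reference position maps to depth."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return baseline_m * focal_px / disparity_px

# A speckle displaced by 29 px maps to a depth of about 1.0 m
# with these illustrative constants.
depth_m = triangulate_depth(29.0)
```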
Of course, the depth information may also be acquired by binocular vision or by a time-of-flight (TOF) method; the method is not limited in this respect: any method by which the depth information of the object to be measured can be acquired or calculated falls within the scope of the present embodiment.
After the ISP processor 1150 receives the color information of the object to be measured captured by the image sensor 1142, the image data corresponding to that color information may be processed. The ISP processor 1150 analyzes the image data to obtain image statistics that may be used to determine one or more control parameters of the imaging device 1140. The image sensor 1142 may include a color filter array (e.g., a Bayer filter); it may acquire the light intensity and wavelength information captured by each of its imaging pixels and provide a set of raw image data that can be processed by the ISP processor 1150.
ISP processor 1150 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and ISP processor 1150 may perform one or more image processing operations on the raw image data, collecting image statistics about the image data. Wherein the image processing operations may be performed with the same or different bit depth precision.
ISP processor 1150 may also receive pixel data from image memory 1170. The image memory 1170 may be part of a memory device, a storage device, or a separate dedicated memory within an electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving the raw image data, ISP processor 1150 may perform one or more image processing operations.
After the ISP processor 1150 obtains the color information and the depth information of the object to be measured, they may be fused to obtain a three-dimensional image. Features of the object to be measured may be extracted by at least one of an appearance contour extraction method or a contour feature extraction method, for example by the active shape model (ASM), active appearance model (AAM), principal component analysis (PCA), or discrete cosine transform (DCT) methods, which are not limited herein. The features of the measured object extracted from the depth information and those extracted from the color information are then subjected to registration and feature fusion processing. The fusion processing may directly combine the features extracted from the depth information and the color information, may combine the same features in different images after setting weights, or may generate the three-dimensional image from the fused features in another fusion mode.
Image data for a three-dimensional image may be sent to image memory 1170 for additional processing before being displayed. ISP processor 1150 receives processed data from image memory 1170 and performs image data processing on it in the raw domain and in the RGB and YCbCr color spaces. Image data for a three-dimensional image may be output to display 1180 for viewing by a user and/or further processing by a graphics processing unit (GPU). Further, the output of ISP processor 1150 may also be sent to image memory 1170, and display 1180 may read image data from image memory 1170. In one embodiment, image memory 1170 may be configured to implement one or more frame buffers. Further, the output of ISP processor 1150 may be transmitted to encoder/decoder 1190 to encode/decode the image data; the encoded image data may be saved and decompressed before being displayed on display 1180. The encoder/decoder 1190 may be implemented by a CPU, GPU, or coprocessor.
The image statistics determined by ISP processor 1150 may be sent to control logic 1160 unit. The control logic 1160 may include a processor and/or microcontroller that executes one or more routines (e.g., firmware) that may determine control parameters of the imaging device 1140 based on the received image statistics.
The photographing terminal provided in the embodiment of the present invention acquires a depth image to be processed, determines a foreground region and a background region of the depth image, and then, when blurring the determined background region, determines whether the target filtering region corresponding to a pixel point to be blurred includes a pixel point located in the foreground region; if so, the weight of each pixel point in the target filtering region is adjusted. In this way, by adjusting the weight of each pixel in a target filtering region that contains foreground pixels, the boundary between the foreground region and the background region in the blurred image is clean and clear; the blurred edge of the background region is effectively protected, the subject is made more prominent, the overall image quality is improved, and the user experience is enhanced.
In order to implement the above embodiments, the present invention further provides a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the image blurring processing method of the first aspect.
The computer-readable storage medium provided in the embodiment of the present invention may be deployed in a photographing terminal that needs to perform image blurring. When the terminal device performs image blurring, it acquires a depth image to be processed, determines a foreground region and a background region of the depth image, and then, when blurring the determined background region, determines whether the target filtering region corresponding to a pixel point to be blurred includes a pixel point located in the foreground region; if so, the weight of each pixel point in the target filtering region is adjusted. In this way, by adjusting the weight of each pixel in a target filtering region that contains foreground pixels, the boundary between the foreground region and the background region in the blurred image is clean and clear; the blurred edge of the background region is effectively protected, the subject is made more prominent, the overall image quality is improved, and the user experience is enhanced.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like.

Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention; variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (10)

1. An image blurring processing method, comprising:
acquiring a depth-of-field image to be processed;
determining a foreground region and a background region in the depth image;
when blurring processing is carried out on the background area, if it is determined that a target filtering area corresponding to a pixel point to be blurred contains a pixel point located in the foreground area, the weighted value of each pixel point in the target filtering area is adjusted.
2. The method of claim 1, wherein said determining foreground and background regions in said depth image comprises:
and determining a foreground region and a background region in the depth image according to a preset range.
3. The method according to claim 1, wherein the depth image to be processed includes a portrait;
the determining a foreground region and a background region in the depth image comprises:
carrying out face recognition on the depth-of-field image to determine a face area;
and determining a foreground region and a background region in the depth-of-field image according to the depth-of-field value corresponding to the face region.
4. The method of claim 2, wherein the determining that the target filtering region corresponding to the pixel point to be blurred includes a pixel point located in the foreground region comprises:
if the depth of field value corresponding to any pixel point contained in the target filtering area is within the preset range, determining that any pixel point is a pixel point of the foreground area;
or,
and if the difference value of the depth of field values respectively corresponding to two adjacent pixel points in the target filtering area is greater than a preset value, determining that the target filtering area contains the pixel points positioned in the foreground area.
5. The method according to any one of claims 1 to 3, wherein the adjusting the weight value of each pixel point in the target filtering region includes:
setting the weighted value of the pixel point positioned in the foreground area to be zero;
and adjusting the weighted values of the residual pixel points in the target filtering area to enable the sum of the weighted values of the residual pixel points to be equal to one.
6. The method of claim 5, wherein the adjusting the weight values of the remaining pixels in the target filtering region comprises:
expanding the weight values of the residual pixel points in equal proportion;
or,
and adjusting the weight values of the residual pixel points according to the depth of field values respectively corresponding to the residual pixel points.
7. An image blurring processing apparatus, comprising:
the acquisition module is used for acquiring a depth-of-field image to be processed;
the determining module is used for determining a foreground region and a background region in the depth image;
and the adjusting module is used for adjusting the weight value of each pixel point in the target filtering area if it is determined that the target filtering area corresponding to the pixel point to be blurred contains the pixel point positioned in the foreground area when blurring the background area.
8. The apparatus of claim 7, wherein the adjustment module is specifically configured to:
if the depth of field value corresponding to any pixel point contained in the target filtering area is within a preset range, determining that any pixel point is a pixel point of the foreground area;
or,
and if the difference value of the depth of field values respectively corresponding to two adjacent pixel points in the target filtering area is greater than a preset value, determining that the target filtering area contains the pixel points positioned in the foreground area.
9. A photographing terminal, comprising: a memory, a processor, and an image processing circuit;
the memory for storing executable instructions of the processor;
the image processing circuit is used for processing the depth image;
the processor is used for calling the executable instructions in the memory and realizing the image blurring processing method according to any one of claims 1 to 6 according to the depth image output by the image processing circuit.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out an image blurring processing method according to any one of claims 1 to 6.
CN201710756408.5A 2017-08-29 2017-08-29 Image blurs processing method, device and camera terminal Active CN107370958B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201710756408.5A CN107370958B (en) 2017-08-29 2017-08-29 Image blurs processing method, device and camera terminal
PCT/CN2018/101997 WO2019042216A1 (en) 2017-08-29 2018-08-23 Image blurring processing method and device, and photographing terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710756408.5A CN107370958B (en) 2017-08-29 2017-08-29 Image blurs processing method, device and camera terminal

Publications (2)

Publication Number Publication Date
CN107370958A CN107370958A (en) 2017-11-21
CN107370958B true CN107370958B (en) 2019-03-29

Family

ID=60310445

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710756408.5A Active CN107370958B (en) 2017-08-29 2017-08-29 Image blurs processing method, device and camera terminal

Country Status (2)

Country Link
CN (1) CN107370958B (en)
WO (1) WO2019042216A1 (en)

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107370958B (en) * 2017-08-29 2019-03-29 Oppo广东移动通信有限公司 Image blurs processing method, device and camera terminal
CN108024058B (en) * 2017-11-30 2019-08-02 Oppo广东移动通信有限公司 Image blurs processing method, device, mobile terminal and storage medium
CN108322639A (en) * 2017-12-29 2018-07-24 维沃移动通信有限公司 A kind of method, apparatus and mobile terminal of image procossing
CN110009556A (en) 2018-01-05 2019-07-12 广东欧珀移动通信有限公司 Image background weakening method, device, storage medium and electronic equipment
CN108495030A (en) * 2018-03-16 2018-09-04 维沃移动通信有限公司 A kind of image processing method and mobile terminal
CN108629745B (en) * 2018-04-12 2021-01-19 Oppo广东移动通信有限公司 Image processing method and device based on structured light and mobile terminal
CN113056906A (en) * 2018-11-26 2021-06-29 Oppo广东移动通信有限公司 System and method for taking tele-like images
TWI701639B (en) * 2018-12-11 2020-08-11 緯創資通股份有限公司 Method of identifying foreground object in image and electronic device using the same
TWI693576B (en) * 2019-02-26 2020-05-11 緯創資通股份有限公司 Method and system for image blurring processing
CN110349080B (en) * 2019-06-10 2023-07-04 北京迈格威科技有限公司 Image processing method and device
CN112395912B (en) * 2019-08-14 2022-12-13 中移(苏州)软件技术有限公司 Face segmentation method, electronic device and computer readable storage medium
CN112395922A (en) * 2019-08-16 2021-02-23 杭州海康威视数字技术股份有限公司 Face action detection method, device and system
CN112785487B (en) * 2019-11-06 2023-08-04 RealMe重庆移动通信有限公司 Image processing method and device, storage medium and electronic equipment
CN110910304B (en) * 2019-11-08 2023-12-22 北京达佳互联信息技术有限公司 Image processing method, device, electronic equipment and medium
WO2021102704A1 (en) * 2019-11-26 2021-06-03 深圳市大疆创新科技有限公司 Image processing method and apparatus
CN111161136B (en) * 2019-12-30 2023-11-03 深圳市商汤科技有限公司 Image blurring method, image blurring device, equipment and storage device
CN113138387B (en) * 2020-01-17 2024-03-08 北京小米移动软件有限公司 Image acquisition method and device, mobile terminal and storage medium
CN111242843B (en) * 2020-01-17 2023-07-18 深圳市商汤科技有限公司 Image blurring method, image blurring device, equipment and storage device
CN111314602B (en) * 2020-02-17 2021-09-17 浙江大华技术股份有限公司 Target object focusing method, target object focusing device, storage medium and electronic device
CN111275045B (en) * 2020-02-28 2024-02-06 Oppo广东移动通信有限公司 Image main body recognition method and device, electronic equipment and medium
WO2021223144A1 (en) * 2020-05-07 2021-11-11 深圳市大疆创新科技有限公司 Image processing method and apparatus
CN111626924B (en) * 2020-05-28 2023-08-15 维沃移动通信有限公司 Image blurring processing method and device, electronic equipment and readable storage medium
CN113938578B (en) * 2020-07-13 2024-07-30 武汉Tcl集团工业研究院有限公司 Image blurring method, storage medium and terminal equipment
CN113965663B (en) * 2020-07-21 2024-09-20 深圳Tcl新技术有限公司 Image quality optimization method, intelligent terminal and storage medium
CN114079725B (en) * 2020-08-13 2023-02-07 华为技术有限公司 Video anti-shake method, terminal device, and computer-readable storage medium

Citations (4)

Publication number Priority date Publication date Assignee Title
CN1766927A (en) * 2004-05-20 2006-05-03 豪威科技有限公司 Methods and systems for locally adaptive image processing filters
CN105100615A (en) * 2015-07-24 2015-11-25 青岛海信移动通信技术股份有限公司 Image preview method, apparatus and terminal
CN105574818A (en) * 2014-10-17 2016-05-11 中兴通讯股份有限公司 Depth-of-field rendering method and device
CN105979165A (en) * 2016-06-02 2016-09-28 广东欧珀移动通信有限公司 Blurred photos generation method, blurred photos generation device and mobile terminal

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
TR200907868A2 (en) * 2009-10-16 2011-05-23 Vestel Elektronik Sanayi Ve Ticaret Anonim Sirketi Automated test method with black transparent regions
US8466980B2 (en) * 2010-04-06 2013-06-18 Alcatel Lucent Method and apparatus for providing picture privacy in video
CN105744173B (en) * 2016-02-15 2019-04-16 Oppo广东移动通信有限公司 A kind of method, device and mobile terminal of differentiation image front and back scene area
CN105629631B (en) * 2016-02-29 2020-01-10 Oppo广东移动通信有限公司 Control method, control device and electronic device
CN107370958B (en) * 2017-08-29 2019-03-29 Oppo广东移动通信有限公司 Image blurs processing method, device and camera terminal

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
CN1766927A (en) * 2004-05-20 2006-05-03 豪威科技有限公司 Methods and systems for locally adaptive image processing filters
CN105574818A (en) * 2014-10-17 2016-05-11 中兴通讯股份有限公司 Depth-of-field rendering method and device
CN105100615A (en) * 2015-07-24 2015-11-25 青岛海信移动通信技术股份有限公司 Image preview method, apparatus and terminal
CN105979165A (en) * 2016-06-02 2016-09-28 广东欧珀移动通信有限公司 Blurred photos generation method, blurred photos generation device and mobile terminal

Also Published As

Publication number Publication date
CN107370958A (en) 2017-11-21
WO2019042216A1 (en) 2019-03-07

Similar Documents

Publication Publication Date Title
CN107370958B (en) Image blurs processing method, device and camera terminal
CN107864337B (en) Sketch image processing method, device and equipment and computer readable storage medium
CN107680128B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN107948517B (en) Preview picture blurring processing method, device and equipment
CN108111749B (en) Image processing method and device
CN108055452B (en) Image processing method, device and equipment
EP3496383A1 (en) Image processing method, apparatus and device
WO2019105262A1 (en) Background blur processing method, apparatus, and device
CN113766125B (en) Focusing method and device, electronic equipment and computer readable storage medium
CN107509031B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
CN107451969B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
KR101662846B1 (en) Apparatus and method for generating bokeh in out-of-focus shooting
US10304164B2 (en) Image processing apparatus, image processing method, and storage medium for performing lighting processing for image data
CN107945105B (en) Background blurring processing method, device and equipment
CN107493432B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
CN108989699B (en) Image synthesis method, image synthesis device, imaging apparatus, electronic apparatus, and computer-readable storage medium
CN111932587B (en) Image processing method and device, electronic equipment and computer readable storage medium
TWI462054B (en) Estimation Method of Image Vagueness and Evaluation Method of Image Quality
KR20200031169A (en) Image processing method and device
JP6833415B2 (en) Image processing equipment, image processing methods, and programs
CN113822942B (en) Method for measuring object size by monocular camera based on two-dimensional code
CN107820019B (en) Blurred image acquisition method, blurred image acquisition device and blurred image acquisition equipment
CN108156369B (en) Image processing method and device
WO2020057248A1 (en) Image denoising method and apparatus, and device and storage medium
CN113674303B (en) Image processing method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 523860 No. 18, Wu Sha Beach Road, Changan Town, Dongguan, Guangdong

Applicant after: OPPO Guangdong Mobile Communications Co., Ltd.

Address before: 523860 No. 18, Wu Sha Beach Road, Changan Town, Dongguan, Guangdong

Applicant before: Guangdong OPPO Mobile Communications Co., Ltd.

GR01 Patent grant
GR01 Patent grant