
CN113673474B - Image processing method, device, electronic equipment and computer readable storage medium - Google Patents

Image processing method, device, electronic equipment and computer readable storage medium

Info

Publication number
CN113673474B
CN113673474B
Authority
CN
China
Prior art keywords
image
face
region
mask
pixel point
Prior art date
Legal status
Active
Application number
CN202111015385.5A
Other languages
Chinese (zh)
Other versions
CN113673474A (en)
Inventor
李章宇
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202111015385.5A
Publication of CN113673474A
Application granted
Publication of CN113673474B
Legal status: Active

Classifications

    • G Physics > G06 Computing; Calculating or Counting > G06T Image data processing or generation, in general
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T5/70 Denoising; Smoothing
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The embodiments of the present application disclose an image processing method and device, an electronic device, and a computer readable storage medium. The method comprises the following steps: performing face recognition on a first image to obtain face information of each face contained in the first image; generating a weight mask according to the portrait mask of the first image and the face information, wherein the weight mask is used for determining a face clear region corresponding to each face in the first image and a portrait blur transition region corresponding to each face; acquiring a blurred image and a background blurred image corresponding to the first image; and fusing the blurred image and the background blurred image according to the weight mask to obtain a second image. The image processing method and device, the electronic device, and the computer readable storage medium can avoid the lack of a visual focal point caused by keeping every portrait area in an image sharp, and improve the visual display effect of the image.

Description

Image processing method, device, electronic equipment and computer readable storage medium
Technical Field
The present invention relates to the field of image technology, and in particular, to an image processing method, an image processing device, an electronic device, and a computer readable storage medium.
Background
In the field of image technology, the background area of a portrait image is usually blurred in order to highlight the portrait, achieving the effect of a sharply captured main subject. Existing blurring techniques generally keep all portrait areas in the image sharp, so the image lacks a visual focal point and its visual effect is poor.
Disclosure of Invention
The embodiments of the present application disclose an image processing method and device, an electronic device, and a computer readable storage medium, which can avoid the lack of a visual focal point caused by keeping every portrait area in an image sharp, and improve the visual display effect of the image.
The embodiment of the application discloses an image processing method, which comprises the following steps:
performing face recognition on the first image to obtain face information of each face contained in the first image;
generating a weight mask according to the portrait mask of the first image and the face information, wherein the weight mask is used for determining a face clear region corresponding to each face in the first image and a portrait blur transition region corresponding to each face; the face clear region refers to a region of the portrait region of the first image that needs to remain sharp, and the portrait blur transition region refers to the region of the portrait region that changes from sharp to blurred;
acquiring a blurred image and a background blurred image corresponding to the first image, wherein the blur degree of the blurred image is smaller than the blur degree of the background area in the background blurred image;
and fusing the blurred image and the background blurred image according to the weight mask to obtain a second image.
The embodiment of the application discloses an image processing device, including:
the face recognition module is used for performing face recognition on the first image to obtain face information of each face contained in the first image;
the weight generation module is used for generating a weight mask according to the portrait mask of the first image and the face information, wherein the weight mask is used for determining a face clear region corresponding to each face in the first image and a portrait blur transition region corresponding to each face; the face clear region refers to a region of the portrait region of the first image that needs to remain sharp, and the portrait blur transition region refers to the region of the portrait region that changes from sharp to blurred;
the image acquisition module is used for acquiring a blurred image and a background blurred image corresponding to the first image, wherein the blur degree of the blurred image is smaller than the blur degree of the background area in the background blurred image;
and the fusion module is used for fusing the blurred image and the background blurred image according to the weight mask to obtain a second image.
The embodiments of the present application disclose an electronic device comprising a memory and a processor, wherein the memory stores a computer program which, when executed by the processor, causes the processor to implement the method described above.
The present embodiments disclose a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method as described above.
According to the image processing method and device, the electronic device, and the computer readable storage medium disclosed in the embodiments of the present application, face recognition is performed on a first image to obtain face information of each face contained in the first image, a weight mask is generated according to the portrait mask of the first image and the face information, and a blurred image and a background blurred image corresponding to the first image are then fused based on the weight mask to obtain a second image.
In addition, when the first image contains a plurality of faces, every face in the resulting second image can be kept sharp, the situation where only one person is in focus while the other faces are blurred can be avoided, and the blurring effect of the image is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a block diagram of an image processing circuit in one embodiment;
FIG. 2 is a flow chart of an image processing method in one embodiment;
FIG. 3 is a flow diagram of generating a weight mask in one embodiment;
FIG. 4A is a schematic diagram of a clear face region in one embodiment;
FIG. 4B is a schematic diagram of a weight mask in one embodiment;
FIG. 5 is a flowchart of an image processing method in another embodiment;
FIG. 6 is a block diagram of an image processing apparatus in one embodiment;
FIG. 7 is a block diagram of an electronic device in one embodiment.
Detailed Description
The following description of the technical solutions in the embodiments of the present application will be made clearly and completely with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
It should be noted that the terms "comprising" and "having" and any variations thereof in the embodiments and figures herein are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
It will be understood that the terms "first," "second," and the like, as used herein, may be used to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another element. For example, a first image may be referred to as a second image, and similarly, a second image may be referred to as a first image, without departing from the scope of the present application. Both the first image and the second image are images, but they are not the same image.
In the prior art, the background area of a portrait image is typically identified and then blurred, so all portrait areas of the image remain sharp and the result lacks a visual focal point. In some technical solutions, a portrait image is blurred based on a high-precision depth map, but this approach can only keep one person sharp; other people not at the same depth level are blurred, and the blurring effect of the image is poor.
The embodiments of the present application disclose an image processing method and device, an electronic device, and a computer readable storage medium, which can avoid the lack of a visual focal point caused by keeping every portrait area in an image sharp, improve the visual display effect of the image, avoid the situation where only one person is in focus while the other faces are blurred, and improve the blurring effect of the image.
Embodiments of the present application provide an electronic device that may include, but is not limited to, a mobile phone, an intelligent wearable device, a tablet computer, a PC (Personal Computer), a vehicle-mounted terminal, a digital camera, etc.; the embodiments of the present application are not limited thereto. The electronic device includes an image processing circuit, which may be implemented using hardware and/or software components, and may include various processing units defining an ISP (Image Signal Processing) pipeline. FIG. 1 is a block diagram of an image processing circuit in one embodiment. For ease of illustration, FIG. 1 illustrates only the aspects of the image processing technique related to the embodiments of the present application.
As shown in FIG. 1, the image processing circuit includes an ISP processor 140 and control logic 150. Image data captured by the imaging device 110 is first processed by the ISP processor 140, which analyzes the image data to capture image statistics that may be used to determine one or more control parameters of the imaging device 110. The imaging device 110 may include one or more lenses 112 and an image sensor 114. The image sensor 114 may include a color filter array (e.g., a Bayer filter), and may acquire the light intensity and wavelength information captured by each imaging pixel and provide a set of raw image data that can be processed by the ISP processor 140. The attitude sensor 120 (e.g., a tri-axis gyroscope, Hall sensor, or accelerometer) may provide acquired image processing parameters (e.g., anti-shake parameters) to the ISP processor 140 based on the attitude sensor 120 interface type. The attitude sensor 120 interface may employ an SMIA (Standard Mobile Imaging Architecture) interface, other serial or parallel camera interfaces, or a combination of the above.
It should be noted that, although only one imaging device 110 is shown in fig. 1, in the embodiment of the present application, at least two imaging devices 110 may be included, where each imaging device 110 may correspond to one image sensor 114, or a plurality of imaging devices 110 may correspond to one image sensor 114, which is not limited herein. The operation of each imaging device 110 may be as described above.
In addition, the image sensor 114 may also send raw image data to the attitude sensor 120; the attitude sensor 120 may provide the raw image data to the ISP processor 140 based on the attitude sensor 120 interface type, or may store the raw image data in the image memory 130.
The ISP processor 140 processes the raw image data on a pixel-by-pixel basis in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and ISP processor 140 may perform one or more image processing operations on the raw image data, collecting statistical information about the image data. Wherein the image processing operations may be performed with the same or different bit depth precision.
The ISP processor 140 may also receive image data from the image memory 130. For example, the attitude sensor 120 interface sends the raw image data to the image memory 130, and the raw image data in the image memory 130 is then provided to the ISP processor 140 for processing. The image memory 130 may be part of a memory device, a storage device, or a separate dedicated memory within the electronic device, and may include DMA (Direct Memory Access) features.
Upon receiving raw image data from the image sensor 114 interface, the attitude sensor 120 interface, or the image memory 130, the ISP processor 140 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to the image memory 130 for additional processing before being displayed. The ISP processor 140 receives the processing data from the image memory 130 and performs image data processing in the raw domain and in the RGB and YCbCr color spaces on the processing data. The image data processed by the ISP processor 140 may be output to the display 160 for viewing by a user and/or for further processing by a graphics engine or GPU (Graphics Processing Unit). In addition, the output of the ISP processor 140 may also be sent to the image memory 130, and the display 160 may read image data from the image memory 130. In one embodiment, the image memory 130 may be configured to implement one or more frame buffers.
The statistics determined by ISP processor 140 may be sent to control logic 150. For example, the statistics may include image sensor 114 statistics such as vibration frequency of gyroscope, auto-exposure, auto-white balance, auto-focus, flicker detection, black level compensation, lens 112 shading correction, etc. Control logic 150 may include a processor and/or microcontroller that executes one or more routines (e.g., firmware) that may determine control parameters of imaging device 110 and control parameters of ISP processor 140 based on the received statistics. For example, the control parameters of the imaging device 110 may include attitude sensor 120 control parameters (e.g., gain, integration time for exposure control, anti-shake parameters, etc.), camera flash control parameters, camera anti-shake displacement parameters, lens 112 control parameters (e.g., focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balancing and color adjustment (e.g., during RGB processing), as well as lens 112 shading correction parameters.
The image processing method provided in the embodiments of the present application is described below with reference to the image processing circuit of FIG. 1. The ISP processor 140 may acquire a first image from the imaging device 110 or the image memory 130 and perform face recognition on the first image to obtain face information of each face included in the first image. The ISP processor 140 may generate a weight mask according to the portrait mask and the face information of the first image, where the weight mask may be used to determine a face clear region corresponding to each face in the first image and a portrait blur transition region corresponding to each face. The ISP processor 140 may obtain a blurred image and a background blurred image corresponding to the first image, where the blur degree of the blurred image is smaller than the blur degree of the background region in the background blurred image, and then fuse the blurred image and the background blurred image according to the weight mask to obtain the second image.
Alternatively, the ISP processor 140 may send the second image to the image memory 130 for storage, and may also send the second image to the display 160 for display.
It should be noted that the image processing method provided in the embodiments of the present application may also be implemented by other processors of the electronic device, such as a CPU (Central Processing Unit) or GPU (Graphics Processing Unit); these other processors may obtain the image data processed by the ISP processor 140, that is, obtain the first image, and implement the image processing method provided in the embodiments of the present application.
As shown in fig. 2, in one embodiment, an image processing method is provided, which can be applied to the electronic device, and the method may include the following steps:
step 210, face recognition is performed on the first image to obtain face information of each face included in the first image.
The first image may be an image including a person, and the first image may be an image requiring image processing. The first image may be a color image, for example, an image in RGB (Red Green Blue) format or an image in YUV (Y represents brightness, U and V represent chromaticity) format, or the like. The first image may be an image stored in a memory of the electronic device in advance, or may be an image acquired by the electronic device in real time through a camera.
The electronic device may perform face recognition on the first image to obtain face information of each face included in the first image, where the face information may include, but is not limited to, one or more of a face area of each face, a center point coordinate of each face area, a radius of each face area, and the like.
In some embodiments, the manner of recognizing the first image may include, but is not limited to, recognition based on a face template, recognition based on a classifier, recognition through a deep neural network, and the like.
For example, the electronic device may perform face recognition on the first image using a convolutional neural network, which may be trained on a face sample set comprising a plurality of face images labeled with face regions. The convolutional neural network can extract the face feature points in the first image and determine the face detection frame corresponding to each face in the first image according to the face feature points; the image area where the face detection frame corresponding to each face is located can be used as the face region.
Alternatively, the shape of the face detection frame may include, but is not limited to, square, rectangular, circular, etc. If the face detection frame is square, rectangle, etc., the radius of the corresponding face region can be the radius of the circumcircle of the face detection frame, and if the face detection frame is circular, the radius of the face region is the radius of the face detection frame. The coordinates of the central point of the face area are the coordinates of the central pixel point of the face detection frame, if the face detection frame is square, rectangle or other quadrangle, the horizontal coordinates of the central pixel point can be half of the width of the face detection frame, and the vertical coordinates of the central pixel point can be half of the height of the face detection frame. If the face detection frame is circular, the center point of the face area is the center of the face detection frame.
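For illustration, assuming the face detection frame is an axis-aligned rectangle given as (x, y, w, h) in pixel coordinates (an assumption; the embodiment allows other frame shapes), a minimal Python sketch of the center point and circumscribed-circle radius might read:

import math

def face_region_info(box):
    # Center point and circumscribed-circle radius of a rectangular face detection box.
    # box is assumed to be (x, y, w, h) with (x, y) the top-left corner.
    x, y, w, h = box
    cx = x + w / 2.0                  # half the box width from its left edge
    cy = y + h / 2.0                  # half the box height from its top edge
    radius = math.hypot(w, h) / 2.0   # circumscribed-circle radius = half the diagonal
    return (cx, cy), radius

# Example: a 100 x 120 box whose top-left corner is at (40, 60)
center, r = face_region_info((40, 60, 100, 120))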
Step 220, generating a weight mask according to the portrait mask and the face information of the first image.
In the embodiments of the present application, the weight mask may be used to determine a face clear region corresponding to each face in the first image and a portrait blur transition region corresponding to each face, where the face clear region refers to a region of the portrait region of the first image that needs to remain sharp, and the portrait blur transition region refers to the region of the portrait region that transitions from sharp to blurred.
The portrait mask of the first image can be obtained, the portrait mask can be used for representing the position of a portrait region in the first image, and pixel points belonging to the portrait region in the first image can be marked. Alternatively, in the portrait mask, different pixel values may be used to represent the portrait region and the non-portrait region (i.e., background region), for example, a pixel value of 255 indicates that the pixel belongs to the portrait region, a pixel value of 0 indicates that the pixel belongs to the background region, or a pixel value of 0 indicates that the pixel belongs to the portrait region, a pixel value of 255 indicates that the pixel belongs to the background region, and so on, but is not limited thereto.
In some embodiments, the portrait mask may be pre-stored in a memory, and after the electronic device acquires the first image, the electronic device may acquire the corresponding portrait mask from the memory according to an image identifier of the first image, where the image identifier may include, but is not limited to, information such as an image number, an image acquisition time, an image name, and the like. The portrait mask may be generated by performing portrait identification on the first image after the electronic device acquires the first image. The manner of performing the portrait identification on the first image may include, but is not limited to, the following ways:
In the first mode, the portrait region of the first image is identified based on a depth map of the first image to obtain the portrait mask. Depth estimation may be performed on the first image to obtain the depth map, which may include depth information corresponding to each pixel point in the first image; the depth information can represent the distance between a point on the photographed object and the camera, with larger depth information indicating a greater distance. Since the difference in depth information between the portrait region and the background region is large, the portrait region of the first image may be identified from the depth map. For example, a region composed of pixels whose depth information is smaller than a first depth threshold may be determined as the portrait region, a region composed of pixels whose depth information is greater than a second depth threshold may be determined as the background region, and the first depth threshold may be smaller than or equal to the second depth threshold.
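A minimal sketch of this first mode, with placeholder threshold values (the embodiment does not fix them):

import numpy as np

def portrait_mask_by_threshold(depth, t1=1.5, t2=2.5):
    # Pixels closer than the first depth threshold t1 form the portrait region;
    # pixels farther than the second depth threshold t2 form the background (t1 <= t2).
    mask = np.zeros(depth.shape, dtype=np.uint8)
    mask[depth < t1] = 255   # portrait region
    # background region pixels (depth > t2) keep the value 0
    return mask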
The manner of estimating the depth of the first image by the electronic device may be a software depth estimation manner, or may be a manner of calculating depth information by combining with a hardware device, or the like. The depth estimation mode of the software may include, but is not limited to, a mode of performing depth estimation by using a neural network such as a depth estimation model, wherein the depth estimation model may be obtained by training a depth training set, and the depth training set may include a plurality of sample images and a depth map corresponding to each sample image. The depth estimation manner in connection with the hardware device may include, but is not limited to, depth estimation with multiple cameras (e.g., dual cameras), depth estimation with structured light, depth estimation with TOF (Time of flight), and the like. The embodiment of the application does not limit the manner of depth estimation.
In some embodiments, the face information of each face in the first image may be combined with the depth map, the depth information of each face region in the depth map may be determined, a depth difference between the depth information of each pixel in the depth map and the depth information of each face region may be calculated, if the depth difference between the pixel and any face region is less than a difference threshold, it may be determined that the pixel belongs to a person image region corresponding to the face region, and accuracy of person image region identification may be improved.
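In the same spirit, a sketch of the face-depth refinement described above, assuming the face regions are available as (x, y, w, h) boxes and using a made-up difference threshold:

import numpy as np

def portrait_mask_from_face_depth(depth, face_boxes, diff_thresh=0.3):
    # Mark a pixel as portrait if its depth is close to the depth of any face region.
    mask = np.zeros(depth.shape, dtype=np.uint8)
    for (x, y, w, h) in face_boxes:
        face_depth = np.median(depth[y:y + h, x:x + w])   # representative depth of the face region
        mask[np.abs(depth - face_depth) < diff_thresh] = 255
    return mask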
And secondly, performing image segmentation processing on the first image to obtain a portrait mask. Methods of image segmentation processing may include, but are not limited to, methods using graph theory-based image segmentation methods, cluster-based image segmentation methods, semantic-based image segmentation methods, instance-based image segmentation methods, deeplab-series network model-based image segmentation methods, U-Net-based segmentation methods, or full convolution network (Fully Convolutional Network, FCN) -based image segmentation methods, and the like.
Taking the case where the electronic device performs portrait segmentation on the first image through a portrait segmentation model to obtain the portrait mask as an example, the portrait segmentation model can be a model with a U-Net structure comprising an encoder and a decoder, where the encoder can include a plurality of downsampling layers and the decoder can include a plurality of upsampling layers. The portrait segmentation model can first perform multiple downsampling convolution operations on the first image through the downsampling layers of the encoder, and then perform multiple upsampling operations through the upsampling layers of the decoder to obtain the portrait mask. In the portrait segmentation model, skip connections can be established between the downsampling layer and the upsampling layer at the same resolution, and the features of these layers are fused, so that the upsampling process is more accurate.
Alternatively, the portrait segmentation model may be trained from a portrait sample set, which may include a plurality of portrait sample images carrying portrait tags that may be used to label portrait regions in the portrait sample images, e.g., the portrait tags may include sample portrait masks, etc.
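The patent does not fix a concrete network, so the following PyTorch sketch is only a toy stand-in showing the encoder/decoder structure with a skip connection between layers of the same resolution; all layer sizes are illustrative:

import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class TinyPortraitUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(3, 16)          # full-resolution encoder features
        self.enc2 = conv_block(16, 32)         # downsampled encoder features
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec1 = conv_block(32 + 16, 16)    # skip connection: concatenate encoder features
        self.out = nn.Conv2d(16, 1, 1)         # single-channel portrait mask logits

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return torch.sigmoid(self.out(d1))     # values in [0, 1]; threshold to obtain the mask

mask = TinyPortraitUNet()(torch.rand(1, 3, 256, 256))   # shape (1, 1, 256, 256)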
In some embodiments, the face clear region corresponding to each face may be determined in the portrait mask of the first image according to the face information of each face. Optionally, the face clear region of a face may be the face region itself, the circumscribed-circle region of the face region, or a region defined within the face region. Further, the center point of the face clear region may coincide with the center point of the face region to ensure that the face remains sharp.
The portrait blur transition region in the portrait region corresponding to each face may be determined based on each face clear region to generate the weight mask. As an embodiment, the pixel points in the weight mask that belong to a face clear region may correspond to a first pixel value, the pixel points of the portrait region that belong to neither a face clear region nor a portrait blur transition region may correspond to a second pixel value, and the pixel values of the pixel points belonging to a blur transition region may lie between the first pixel value and the second pixel value; for example, the first pixel value may be 0 and the second pixel value may be 255, but this is not limited thereto.
Step 230, obtaining a blurred image and a background blurred image corresponding to the first image.
The blurred image is an image obtained by blurring the first image, and the background blurred image is an image obtained by blurring the background region of the first image. Optionally, the blurring process may include, but is not limited to, Gaussian blurring, mean blurring, median blurring, and the like, which is not limited herein.
The blur degree of the blurred image may be less than the blur degree of the background region in the background blurred image, and in some embodiments, the blurred image may be an image obtained by blurring the first image based on the first blur radius, and the background blurred image may be an image obtained by blurring the background region of the first image based on the second blur radius. Wherein the blur radius may be used to characterize the degree of blur, the greater the blur radius, the stronger the blur effect, and thus the first blur radius may be smaller than the second blur radius.
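A sketch of this step, assuming OpenCV and Gaussian blurring (one of the options listed above); the radii below are placeholders satisfying first blur radius < second blur radius:

import cv2
import numpy as np

def make_blur_layers(first_image, portrait_mask, r1=5, r2=21):
    # Lightly blurred copy of the whole image (first blur radius r1) and a copy whose
    # background only is strongly blurred (second blur radius r2 > r1).
    blurred = cv2.GaussianBlur(first_image, (2 * r1 + 1, 2 * r1 + 1), 0)
    strong = cv2.GaussianBlur(first_image, (2 * r2 + 1, 2 * r2 + 1), 0)
    m = (portrait_mask.astype(np.float32) / 255.0)[..., None]   # 1 inside the portrait region
    background_blurred = (m * first_image + (1.0 - m) * strong).astype(np.uint8)
    return blurred, background_blurred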
And step 240, fusing the blurred image and the background blurred image according to the weight mask to obtain a second image.
Because the face clear region corresponding to each face and the portrait blur transition region corresponding to each face are marked in the weight mask, fusing the blurred image and the background blurred image with the weight mask yields a second image in which the face clear regions and the background region correspond to the image content of the background blurred image, the portrait regions other than the face clear regions and the portrait blur transition regions correspond to the image content of the blurred image, and the portrait blur transition regions combine content from both, producing a sharp-to-blurred transition within the portrait. Since the portrait region in the background blurred image is sharp and the blurred image is blurred overall, each face in the second image remains sharp, the other portrait regions are blurred to a small degree, and a transition region lies between the sharp and blurred parts, making the overall image effect more natural.
In the embodiments of the present application, since the face clear region corresponding to each face in the first image and the portrait blur transition region corresponding to each face are marked in the weight mask, the second image obtained by fusing, based on the weight mask, images with different blur degrees (the blurred image and the background blurred image) can keep each face sharp while the other portrait regions transition gradually from sharp to blurred; the lack of a visual focal point caused by keeping every portrait area sharp can be avoided, the image effect is more natural, and the visual display effect of the image is improved. In addition, when the first image contains a plurality of faces, every face in the resulting second image can be kept sharp, the situation where only one person is in focus while the other faces are blurred can be avoided, and the blurring effect of the image is improved.
As shown in FIG. 3, in one embodiment, the step of generating a weight mask according to the portrait mask and the face information of the first image may include the following steps:
step 302, determining a corresponding face clear area of each face in a face mask of the first image according to the face information.
The face information may include information such as a face region, coordinates of a center point of the face region, a radius of a circumcircle of the face region, and the like, and as an embodiment, an image region formed by the circumcircle of the face region may be used as a face clear region of the face region. The clear face region corresponding to each face in the face mask is a circular region taking the center point coordinates of the face region corresponding to each face as a center point and taking the circumscribed circle radius of the face region corresponding to each face as the region radius.
Taking a first face in the first image as an example, the first face is any face in the first image, the face clear area of the first face may be a circular area with a center point coordinate of a face area corresponding to the first face as a center point and a circumcircle radius of the face area corresponding to the first face as an area radius, that is, the face clear area of the first face may be a circumcircle area of the face area of the first face.
Illustratively, fig. 4A is a schematic diagram of a clear face region in one embodiment. As shown in fig. 4A, the white area in the portrait mask is a portrait area, the black area is a background area, and the portrait mask includes two faces, and the face clear area 412 and the face clear area 414 can be determined based on the face area of each face respectively.
Step 304, determining a first distance between each pixel point of the portrait region in the portrait mask and a center point of the corresponding target face clear region.
The target face clear region corresponding to each pixel point of the human image region may be a target face clear region closest to each pixel point of the human image region. In some embodiments, the number of faces included in the first image may be determined according to a recognition result of performing face recognition on the first image, if only one face is included in the first image, the face clear area corresponding to the face is the target face clear area corresponding to all pixel points of the face area in the face mask, and it may be determined that a distance between each pixel point of the face area in the face mask and a center point of the face clear area corresponding to the face is a first distance.
If the first image contains at least two faces, a target face clear area with the nearest pixel point distance of the face area in the face mask can be determined, and a first distance between each pixel point and the center point of the corresponding target face clear area is calculated. The distances between the pixel points and the face clear areas can be calculated respectively, and the face clear area with the smallest distance is selected as the target face clear area.
In some embodiments, the step of determining a target face-definition area in the portrait mask, where each pixel point of the portrait area is closest to the nearest target face-definition area, and calculating a first distance between each pixel point of the portrait area and a center point of the corresponding target face-definition area, respectively, may include: calculating a third distance from the target pixel point to each face clear region according to a second distance between the target pixel point and the center point of each face clear region and the region radius of each face clear region; determining a human face clear area with the minimum third distance as a target human face clear area corresponding to the target pixel point; and determining the first distance between the target pixel point and the center point of the clear target face region according to the third distance between the target pixel point and the target face region and the region radius of the target face region.
The target pixel point can be any pixel point of the portrait region in the portrait mask. The center point coordinates of each face clear region can be obtained, and the second distance from the target pixel point to the center point of each face clear region can be calculated from the coordinates of the target pixel point and the center point coordinates using the two-point distance formula. The region radius of each face clear region can then be subtracted from the corresponding second distance to obtain the third distance from the target pixel point to each face clear region.
Specifically, the third distance from the target pixel point to each face clear region may be calculated as shown in formula (1):
d3_k(x, y) = d2_k(x, y) - r_k    formula (1);
where d3_k(x, y) represents the third distance from the pixel point (x, y) to the k-th face clear region, d2_k(x, y) represents the second distance between the pixel point (x, y) and the center point of the k-th face clear region, and r_k represents the region radius of the k-th face clear region. k may be a positive integer greater than or equal to 2.
After the third distances from the target pixel point to the respective face clear regions are obtained, the smallest third distance can be selected, and the face clear region corresponding to the smallest third distance is used as the target face clear region of the target pixel point. The third distance from the target pixel point to the target face clear region is then added to the region radius of the target face clear region to obtain the first distance between the target pixel point and the center point of the target face clear region.
Specifically, the first distance between the target pixel point and the center point of the target face clear region may be calculated as shown in formula (2):
d1(x, y) = MIN(d3_k(x, y)) + r_k′    formula (2);
where MIN(d3_k(x, y)) represents the smallest of the third distances from the pixel point (x, y) to the respective face clear regions, r_k′ represents the region radius of the face clear region corresponding to the smallest third distance (i.e., the target face clear region), and d1(x, y) represents the first distance from the pixel point (x, y) to the center point of the target face clear region.
All the pixel points of the portrait region in the portrait mask can be traversed, and the first distance from each pixel point of the portrait region to the center point of its target face clear region can be calculated according to the method described in the above embodiment. Taking FIG. 4A as an example, the target face clear region corresponding to each pixel point of the white area (i.e., the portrait region) in FIG. 4A is determined to be either face clear region 412 or face clear region 414, that is, the attribution of each pixel point of the white area is determined, and the first distance between each pixel point of the white area and the center point of its target face clear region is then calculated.
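A vectorized NumPy sketch of formulas (1) and (2); the arrays centers and radii are assumed to hold the center points and region radii of the face clear regions obtained from the face information:

import numpy as np

def first_distance_map(shape, centers, radii):
    # shape: (height, width); centers: array (K, 2) of (cx, cy); radii: array (K,).
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    # second distance d2_k(x, y) from every pixel to the center of every face clear region
    d2 = np.sqrt((xs[None] - centers[:, 0, None, None]) ** 2 +
                 (ys[None] - centers[:, 1, None, None]) ** 2)
    d3 = d2 - radii[:, None, None]             # formula (1): d3_k = d2_k - r_k
    k_star = np.argmin(d3, axis=0)             # index of the target face clear region per pixel
    d1 = np.min(d3, axis=0) + radii[k_star]    # formula (2): d1 = MIN(d3_k) + r_k'
    return d1, k_star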
And 306, normalizing the first distance of each pixel point according to the human image transition range of the target human face clear area corresponding to each pixel point to obtain a normalized value corresponding to each pixel point, and generating a weight mask according to the normalized value and the human image mask.
The portrait transition range may be a preset region range of a portrait blurring transition region, and the shape and size of the region of the portrait blurring transition region may be set according to actual requirements, which is not limited in the embodiment of the present application.
Face clear regions of different sizes may correspond to portrait transition ranges of different sizes, and the size of the portrait transition range may be related to the region radius of the corresponding face clear region. Illustratively, the portrait transition range may be an annular range immediately outside the face clear region, whose inner radius is the region radius of the face clear region and whose outer radius may be 2 times that region radius. That is, the portrait transition range may extend from the region radius to 2 times the region radius, measured from the center point of the face clear region.
For each pixel point of a human image region in a human image mask, normalization processing can be performed on the first distance of each pixel point of the human image region to obtain a normalization value corresponding to each pixel point, wherein the normalization processing can refer to mapping a numerical value to a value in a range of 0-1.
Taking the target pixel point in the portrait region as an example (the target pixel point can be any pixel point of the portrait region), the portrait transition range corresponding to the target face clear region can be determined according to the region radius of the target face clear region corresponding to the target pixel point. For example, the portrait transition range may be [r_k′, 2r_k′], i.e., the range from the region radius to 2 times the region radius, measured from the center point of the target face clear region.
The first distance from the target pixel point to the center point of the target face clear region can be calculated, the difference between this first distance and the region radius of the target face clear region can be determined, the ratio of this difference to the width of the portrait transition range can be determined, and the ratio can then be normalized to obtain the normalized value of the target pixel point.
Specifically, taking the portrait transition range [r_k′, 2r_k′] as an example, the ratio may be calculated as shown in formula (3):
F(x, y) = (d1(x, y) - r_k′) / r_k′    formula (3);
where d1(x, y) represents the first distance from the pixel point (x, y) to the center point of the target face clear region, r_k′ represents the region radius of the target face clear region corresponding to the pixel point (x, y), and F(x, y) represents the ratio corresponding to the pixel point (x, y).
The ratio corresponding to the target pixel point can then be normalized. As an implementation, it can be judged whether the ratio is smaller than 0 or larger than 1. If the ratio is smaller than 0, the target pixel point belongs to a face clear region, and its normalized value can be determined to be 0. If the ratio is greater than 1, the target pixel point is outside the portrait transition range corresponding to the target face clear region and belongs to neither the portrait blur transition region nor the face clear region, and its normalized value can be determined to be 1. If the ratio is greater than or equal to 0 and less than or equal to 1, the target pixel point is within the portrait transition range corresponding to the target face clear region and belongs to the portrait blur transition region, and the ratio itself can be used as its normalized value.
After the normalized value of each pixel point of the portrait area in the portrait mask is calculated according to the above manner, a weight mask can be generated according to the normalized value of each pixel point of the portrait area and the portrait mask. In some embodiments, the pixel value of each pixel of the portrait area may be multiplied by the normalized value of each pixel of the portrait area to obtain the weight value corresponding to each pixel of the portrait area, so as to generate the weight mask.
Specifically, the weight value corresponding to each pixel point of the portrait region in the portrait mask may be calculated as shown in formula (4):
W_Mask(x, y) = P_Mask(x, y) · f(x, y)    formula (4);
where W_Mask(x, y) represents the weight value of the pixel point (x, y), P_Mask(x, y) represents the pixel value of the pixel point (x, y) in the portrait mask, and f(x, y) represents the normalized value corresponding to the pixel point (x, y).
Illustratively, FIG. 4B is a schematic diagram of a weight mask in one embodiment. As shown in FIG. 4B, the weight mask corresponds to the portrait mask of FIG. 4A and includes two faces: the left face corresponds to face clear region 412 and portrait blur transition region 422 (the part of the left portrait region that gradually changes from black to white), and the right face corresponds to face clear region 414 and portrait blur transition region 424 (the part of the right portrait region that gradually changes from black to white); each face has its corresponding face clear region and portrait blur transition region. It should be noted that FIG. 4B is only an aid for explaining the regions in the weight mask and does not mean that the actual weight mask carries these patterns.
In some embodiments, after generating the weight mask according to the normalized value corresponding to each pixel point of the portrait area in the portrait mask and the portrait mask, the weight mask may be further subjected to blurring processing, where the blurring processing may include median blurring processing, so that the weight mask after blurring processing is smoother, and then the blurred image and the background blurring image are fused according to the weight mask after blurring processing to obtain a second image, so that the obtained second image is smoother and natural.
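A sketch covering formulas (3) and (4) together with the optional median-blur smoothing, assuming the first-distance map and the per-pixel target region radii have been computed as in the earlier sketch:

import cv2
import numpy as np

def build_weight_mask(portrait_mask, d1, target_radius, median_ksize=9):
    # d1: first-distance map; target_radius: region radius of each pixel's target face
    # clear region (e.g. radii[k_star] from first_distance_map); portrait_mask: 0/255.
    ratio = (d1 - target_radius) / target_radius       # formula (3), transition range [r, 2r]
    f = np.clip(ratio, 0.0, 1.0)                       # <0 -> face clear region, >1 -> rest of portrait
    weight = (portrait_mask.astype(np.float32) * f).astype(np.uint8)   # formula (4)
    return cv2.medianBlur(weight, median_ksize)        # optional smoothing of the weight mask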
In the embodiment of the application, according to the portrait mask of the first image and the face information of each face in the first image, a corresponding weight mask can be accurately generated, and the clear face region and the blurred transition region of each face in the first image are accurately marked through the weight mask, so that the visual display effect of the second image obtained by subsequent fusion can be improved.
In another embodiment, as shown in fig. 5, an image processing method is provided, which may include the steps of:
step 502, face recognition is performed on the first image, so as to obtain face information of each face contained in the first image.
Step 504, generating a weight mask according to the portrait mask and the face information of the first image.
The descriptions of steps 502 to 504 may refer to the related descriptions in the above embodiments, and the detailed descriptions are not repeated here.
In some embodiments, the weight mask may be further generated according to the portrait mask of the first image, the depth map of the first image, and the face information described above, where the depth map may include depth information corresponding to each pixel point in the first image. As an embodiment, depth information of each pixel point of a portrait area in a portrait mask may be obtained according to a depth map, and the depth information of each pixel point is used as a weight value to generate a weight mask.
As another embodiment, depth information of a face region in the first image may be determined, and a weight value of each pixel point of the face region may be determined according to a depth difference between the depth information of each pixel point of the face region and the depth information of the face region in the face mask.
Alternatively, the normalization process may be performed on the depth difference, where the normalization value corresponding to the pixel point with the depth difference smaller than the first threshold may be 0, the normalization value corresponding to the pixel point with the depth difference greater than the second threshold may be 1, and the normalization value corresponding to the pixel point with the depth difference between the first threshold and the second threshold may be a ratio of the depth difference to the difference of the second threshold minus the first threshold. The first threshold and the second threshold can be set according to actual demands, and the second threshold is larger than the first threshold.
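A sketch of this two-threshold normalization; the threshold values are placeholders:

import numpy as np

def normalize_depth_difference(depth, face_depth, t1=0.2, t2=1.0):
    # Differences below the first threshold map to 0, above the second threshold map to 1,
    # and values in between are taken as the ratio of the difference to (t2 - t1).
    diff = np.abs(depth - face_depth)
    norm = np.clip(diff / (t2 - t1), 0.0, 1.0)
    norm[diff < t1] = 0.0
    norm[diff > t2] = 1.0
    return norm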
In some embodiments, in the case that the first image includes multiple faces, there may be a case that depth information of the face regions is inconsistent, so that the depth information of each face region may be obtained, and based on the depth information of each face region, nonlinear stretching processing is performed on the face region in the face mask, so that the depth information of the whole face region is stretched to the same level, and then weight values of each pixel point are determined according to the depth information after stretching processing of each pixel point in the face region. The depth information of the face region may be average depth information of all pixels included in the face region, or may be depth information of a center point of the face region.
The depth map, the human face mask, the human face information and the like of the first image are used for generating a weight mask, so that a human image area which is closer to the human face in the second image obtained by subsequent fusion is clearer, a human image area which is farther from the human face is more blurred, and a relatively real human image refocusing effect is realized.
And step 506, blurring the first image based on the first blur radius to obtain a blurred image.
And step 508, blurring the background area of the first image based on the second blur radius to obtain a background blurring image.
The descriptions of steps 506 to 508 may refer to the related descriptions in the above embodiments, and the detailed descriptions are not repeated here.
In some embodiments, the background region of the first image may be divided according to the depth map of the first image, with pixels whose depth information is the same or similar grouped into the same region, yielding a plurality of background sub-regions. The second blur radius of each background sub-region can be determined according to the depth information of that background sub-region, and the depth information of a background sub-region can be positively correlated with its second blur radius; that is, the larger the depth information of the background sub-region, the larger the second blur radius, producing a more blurred rendering of more distant content and improving the image effect of the background blurred image.
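One possible sketch of this division, with an assumed bin count and radius range; the embodiment only requires that the second blur radius grow with the depth information:

import numpy as np

def background_blur_radii(depth, background_mask, n_bins=4, r_min=7, r_max=31):
    # Split the background into depth bins and assign a larger second blur radius to deeper bins.
    bg_depth = depth[background_mask > 0]
    edges = np.quantile(bg_depth, np.linspace(0.0, 1.0, n_bins + 1))
    radii = np.linspace(r_min, r_max, n_bins).astype(int)
    labels = np.clip(np.digitize(depth, edges[1:-1]), 0, n_bins - 1)   # depth bin index per pixel
    return labels, radii   # blur each background sub-region k with radius radii[k]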
The step sequence of steps 506 to 508 is not limited in the embodiment of the present application, and may be performed in parallel, sequentially, or before or after the weight mask is generated.
And 510, taking the weight mask as the Alpha value of the blurred image, and carrying out Alpha fusion on the blurred image and the background blurred image to obtain a second image.
In some embodiments, the blurred image and the background blurred image may be fused by Alpha fusion processing, which assigns an Alpha value to each pixel point in the blurred image and the background blurred image so that they have different transparencies. The weight mask can be used as the Alpha value of the blurred image, and Alpha fusion can be performed on the blurred image and the background blurred image based on the weight mask to obtain the second image.
Specifically, the Alpha fusion of the blurred image and the background blurred image may be expressed as shown in formula (5):
I = α·I1 + (1 - α)·I2    formula (5);
where I1 represents the blurred image, α represents the weight mask, I2 represents the background blurred image, and I represents the fused second image. Further, assume that the pixel value of each pixel point of the portrait region in the portrait mask is 255 and the pixel value of each pixel point of the background region is 0. Then, in the weight mask, the weight value of a face clear region is 0, the weight value of a portrait blur transition region changes gradually within the interval 0 to 255, the weight values of the other portrait regions outside the face clear regions and the portrait blur transition regions are 255, and the weight value of a background region pixel point is 0. Therefore, in the fused second image, the face clear regions and the background region correspond to the image content of the background blurred image, the other portrait regions outside the face clear regions and the portrait blur transition regions correspond to the image content of the blurred image, and the portrait blur transition regions are a fusion of the background blurred image and the blurred image, presenting a sharp-to-blurred transition effect and making the second image more natural.
In the embodiments of the present application, each face in the second image can be kept sharp while the portrait regions other than the faces are blurred to a small degree, realizing portrait refocusing; the lack of a visual focal point caused by keeping every portrait area sharp can be avoided, the situation where only one person is in focus while the other faces are blurred can be avoided, and the blurring effect of the image is improved.
As shown in fig. 6, in one embodiment, an image processing apparatus 600 is provided and may be applied to the above electronic device, where the image processing apparatus 600 may include a face recognition module 610, a weight generation module 620, an image acquisition module 630, and a fusion module 640.
The face recognition module 610 is configured to perform face recognition on the first image to obtain face information of each face included in the first image.
The weight generating module 620 is configured to generate a weight mask according to the portrait mask and the face information of the first image, where the weight mask is used to determine a face clear region corresponding to each face in the first image and a portrait blur transition region corresponding to each face.
The image obtaining module 630 is configured to obtain a blurred image and a background blurred image corresponding to the first image, where the blur degree of the blurred image is smaller than the blur degree of a background area in the background blurred image.
And the fusion module 640 is configured to fuse the blurred image and the background blurred image according to the weight mask, so as to obtain a second image.
In the embodiments of the present application, since the face clear region corresponding to each face in the first image and the face blurring transition region corresponding to each face are marked in the weight mask, the second image obtained by fusing the blurred image and the background blurred image (which have different degrees of blur) based on the weight mask can keep each face clear while the other portrait regions gradually transition from clear to blurred. This avoids the problem that the image has no visual focus when all portrait regions are clear, makes the image effect more natural, and improves the visual display effect of the image. In addition, when the first image contains a plurality of faces, each face in the obtained second image can be kept clear, the situation in which only one person is in focus while the other faces are blurred is avoided, and the image blurring effect is improved.
In one embodiment, the weight generation module 620 includes a clear region determination unit, a distance determination unit, and a normalization unit.
And the clear region determining unit is used for determining a face clear region corresponding to each face in the face mask of the first image according to the face information.
In one embodiment, the face information includes coordinates of a center point of the face region, and a radius of a circumcircle of the face region; the clear face region corresponding to each face in the face mask is a circular region taking the center point coordinates of the face region corresponding to each face as a center point and taking the circumscribed circle radius of the face region corresponding to each face as the region radius.
The distance determining unit is configured to determine a first distance between each pixel point of the portrait region in the portrait mask and the center point of the corresponding target face clear region, where the target face clear region corresponding to a pixel point of the portrait region is the face clear region closest to that pixel point.
In one embodiment, the distance determining unit is further configured to: if the first image includes only one face, determine the distance between each pixel point of the portrait region in the portrait mask and the center point of the face clear region corresponding to that face as the first distance; and if the first image includes at least two faces, determine the target face clear region closest to each pixel point of the portrait region in the portrait mask, and calculate the first distance between each pixel point of the portrait region and the center point of its corresponding target face clear region.
In one embodiment, the distance determining unit is further configured to calculate a third distance from a target pixel point to each face clear region according to a second distance between the target pixel point and the center point of each face clear region and the region radius of each face clear region, where the target pixel point is any pixel point of the portrait region in the portrait mask; determine the face clear region with the minimum third distance as the target face clear region corresponding to the target pixel point; and determine the first distance between the target pixel point and the center point of the target face clear region according to the third distance between the target pixel point and the target face clear region and the region radius of the target face clear region.
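As an informal sketch of how such a distance determining unit might select the target face clear region, the following Python function (hypothetical name; the regions are assumed to be given as center coordinates plus radii) computes the second, third, and first distances described above.

```python
import math

def first_distance_to_target_region(pixel, face_regions):
    """pixel: (x, y); face_regions: list of (center_x, center_y, region_radius)."""
    best_third, best_radius = None, None
    for cx, cy, radius in face_regions:
        second = math.hypot(pixel[0] - cx, pixel[1] - cy)  # second distance: pixel to region center
        third = second - radius                            # third distance: pixel to region boundary
        if best_third is None or third < best_third:       # keep the region with the minimum third distance
            best_third, best_radius = third, radius
    # first distance: third distance to the target region plus that region's radius
    return best_third + best_radius
```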
The normalization unit is used for carrying out normalization processing on the first distance of each pixel point of the human image area according to the human image transition range of the target human face clear area corresponding to each pixel point of the human image area in the human image mask, obtaining the normalization value corresponding to each pixel point of the human image area, and generating the weight mask according to the normalization value and the human image mask.
In one embodiment, the normalization unit is further configured to determine a portrait transition range corresponding to the target face clear region according to the region radius of the target face clear region corresponding to a target pixel point, where the target pixel point is any pixel point of the portrait region in the portrait mask; calculate a difference value between the first distance of the target pixel point and the region radius, and determine a ratio between the difference value and the portrait transition range; and normalize the ratio to obtain a normalized value of the target pixel point.
In one embodiment, the normalization unit is further configured to determine that the normalization value of the target pixel point is 0 if the ratio is less than 0; if the ratio is greater than 1, determining that the normalized value of the target pixel point is 1; if the ratio is greater than or equal to 0 and less than or equal to 1, the ratio is taken as the normalized value of the target pixel point.
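A minimal sketch of the normalization just described, assuming (hypothetically) that the portrait transition range is derived from the region radius by a fixed scale factor:

```python
def normalized_value(first_dist, region_radius, transition_scale=1.5):
    # The portrait transition range is assumed proportional to the region radius.
    transition_range = transition_scale * region_radius
    ratio = (first_dist - region_radius) / transition_range
    # Clamp as in the embodiment: ratio < 0 -> 0, ratio > 1 -> 1, otherwise keep the ratio.
    return min(max(ratio, 0.0), 1.0)
```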
In one embodiment, the normalization unit is further configured to multiply a pixel value of each pixel of the portrait area in the portrait mask with a normalization value of each pixel of the portrait area to obtain a weight value corresponding to each pixel of the portrait area, so as to generate a weight mask.
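For illustration, the weight mask could then be assembled by multiplying the portrait mask with the per-pixel normalized values, as in the following sketch (array shapes and dtypes are assumptions):

```python
import numpy as np

def build_weight_mask(portrait_mask, normalized_values):
    # portrait_mask: HxW uint8, 255 inside the portrait and 0 in the background.
    # normalized_values: HxW float array in [0, 1], one value per pixel.
    weights = portrait_mask.astype(np.float32) * normalized_values
    # The face clear region (normalized value 0) stays 0; the rest of the portrait
    # keeps weights up to 255; the background remains 0.
    return np.clip(weights, 0, 255).astype(np.uint8)
```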
In one embodiment, the weight generation module 620 includes a blurring unit in addition to the clear region determination unit, the distance determination unit, and the normalization unit.
And the blurring unit is used for blurring the weight mask.
And the fusion module 640 is further configured to fuse the blurred image and the background blurred image according to the weight mask after the blurring process, so as to obtain a second image.
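Where the weight mask is blurred before fusion, a simple Gaussian blur is one possible choice; the kernel size below is a hypothetical value (cv2.GaussianBlur requires an odd kernel size):

```python
import cv2

def soften_weight_mask(weight_mask, kernel_size=15):
    # Smooth the weight mask so the clear-to-blurred boundary is less abrupt.
    return cv2.GaussianBlur(weight_mask, (kernel_size, kernel_size), 0)
```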
In the embodiment of the application, according to the portrait mask of the first image and the face information of each face in the first image, a corresponding weight mask can be accurately generated, and the clear face region and the blurred transition region of each face in the first image are accurately marked through the weight mask, so that the visual display effect of the second image obtained by subsequent fusion can be improved.
In one embodiment, the weight generating module 620 is further configured to generate a weight mask according to the portrait mask of the first image, the depth map of the first image, and the face information, where the depth map includes depth information corresponding to each pixel point in the first image.
In one embodiment, the image obtaining module 630 is further configured to blur the first image based on the first blur radius to obtain a blurred image; and blurring the background area of the first image based on the second blurring radius to obtain a background blurring image, wherein the first blurring radius is smaller than the second blurring radius.
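As a rough sketch of how the image obtaining module might produce its two outputs, the following uses Gaussian blur with two hypothetical kernel sizes (the first smaller than the second) and the portrait mask to keep the portrait sharp in the background blurred image; the blur type, radii, and mask convention are assumptions, not the only way to implement this step:

```python
import cv2
import numpy as np

def make_blurred_images(first_image, portrait_mask, r1=5, r2=21):
    # Lightly blur the whole first image with the first (smaller) blur radius.
    blurred = cv2.GaussianBlur(first_image, (r1, r1), 0)
    # Strongly blur a copy with the second (larger) blur radius.
    heavy = cv2.GaussianBlur(first_image, (r2, r2), 0)
    # Keep the portrait sharp and apply the heavy blur only to the background.
    portrait = (portrait_mask > 0)[..., None]
    background_blurred = np.where(portrait, first_image, heavy)
    return blurred, background_blurred
```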
In one embodiment, the fusion module 640 is further configured to use the weight mask as an Alpha value of the blurred image, and perform Alpha fusion on the blurred image and the background blurred image to obtain the second image.
In the embodiments of the present application, the face of each person in the second image can be kept clear while the portrait regions other than the faces are blurred to a small extent, so that refocusing of the portrait is realized. This avoids the problem that the image has no visual focus when all portrait regions are clear, and also avoids the situation in which only one person is in focus while the other faces are blurred, thereby improving the blurring effect of the image.
Fig. 7 is a block diagram of an electronic device in one embodiment. As shown in fig. 7, the electronic device 700 may include one or more of the following components: processor 710, memory 720 coupled to processor 710, wherein memory 720 may store one or more computer programs that may be configured to implement methods as described in the various embodiments above when executed by one or more processors 710.
Processor 710 may include one or more processing cores. The processor 710 uses various interfaces and lines to connect the various parts of the electronic device 700, performs various functions of the electronic device 700, and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 720 and by invoking the data stored in the memory 720. Alternatively, the processor 710 may be implemented in hardware in at least one of digital signal processing (Digital Signal Processing, DSP), field programmable gate array (Field-Programmable Gate Array, FPGA), and programmable logic array (Programmable Logic Array, PLA). The processor 710 may integrate one or a combination of a central processing unit (Central Processing Unit, CPU), a graphics processing unit (Graphics Processing Unit, GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU is responsible for rendering and drawing display content; and the modem is used to handle wireless communication. It will be appreciated that the modem may also not be integrated into the processor 710 and may instead be implemented by a separate communication chip.
The memory 720 may include random access memory (Random Access Memory, RAM) or read-only memory (Read-Only Memory, ROM). The memory 720 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 720 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, an image playing function, and the like), instructions for implementing the various method embodiments described above, and the like. The data storage area may also store data created by the electronic device 700 in use, and the like.
It is to be appreciated that the electronic device 700 may include more or fewer structural elements than those shown in the above structural block diagram, for example, a power module, physical keys, a Wi-Fi (Wireless Fidelity) module, a speaker, a Bluetooth module, a sensor, and the like, which is not limited herein.
The present application discloses a computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the method as described in the above embodiments.
The present embodiments disclose a computer program product comprising a non-transitory computer readable storage medium storing a computer program, which when executed by a processor, implements a method as described in the above embodiments.
Those skilled in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by a computer program for instructing relevant hardware, where the program may be stored in a non-volatile computer readable storage medium, and where the program, when executed, may include processes in the embodiments of the methods described above. Wherein the storage medium may be a magnetic disk, an optical disk, a ROM, etc.
Any reference to memory, storage, database, or other medium as used herein may include non-volatile and/or volatile memory. Suitable non-volatile memory can include ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (Electrically Erasable PROM, EEPROM), or flash memory. Volatile memory can include random access memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (Dynamic Random Access Memory, DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (Enhanced Synchronous DRAM, ESDRAM), synchronous link DRAM (SLDRAM), memory bus direct RAM (Rambus DRAM), and direct memory bus dynamic RAM (DRDRAM).
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Those skilled in the art will also appreciate that the embodiments described in the specification are all alternative embodiments and that the acts and modules referred to are not necessarily required in the present application.
In various embodiments of the present application, it should be understood that the size of the sequence numbers of the above processes does not mean that the execution sequence of the processes is necessarily sequential, and the execution sequence of the processes should be determined by the functions and internal logic thereof, and should not constitute any limitation on the implementation process of the embodiments of the present application.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The foregoing has described in detail the image processing method, apparatus, electronic device, and computer-readable storage medium disclosed in the embodiments of the present application, and specific examples have been used herein to illustrate the principles and implementations of the present application; the above description of the embodiments is only intended to help understand the method and core idea of the present application. Meanwhile, those skilled in the art may make modifications to the specific implementations and the scope of application in accordance with the ideas of the present application; in view of the above, the contents of this description should not be construed as limiting the present application.

Claims (15)

1. An image processing method, comprising:
performing face recognition on the first image to obtain face information of each face contained in the first image;
generating a weight mask according to the face mask of the first image and the face information, wherein the weight mask is used for determining a face clear region corresponding to each face in the first image and a face fuzzy transition region corresponding to each face; the human face clear region refers to a region which needs to be kept clear in a human image region of the first image, and the human image blurring transition region refers to a change region from clear to blurring in the human image region;
acquiring a blurred image and a background blurred image corresponding to the first image, wherein the blurring degree of the blurred image is smaller than that of a background area in the background blurred image; the blurred image is an image obtained by blurring the first image, and the background blurred image is an image obtained by blurring a background area of the first image;
and fusing the blurred image and the background blurring image according to the weight mask to obtain a second image.
2. The method of claim 1, wherein generating a weight mask from the face mask of the first image and the face information comprises:
determining a face clear area corresponding to each face in a face mask of the first image according to the face information;
respectively determining first distances between each pixel point of a portrait region in the portrait mask and a central point of a corresponding target face clear region, wherein the target face clear region corresponding to each pixel point is the face clear region with the nearest distance between each pixel point;
and normalizing the first distance of each pixel point according to the human image transition range of the target human face clear area corresponding to each pixel point to obtain a normalized value corresponding to each pixel point, and generating a weight mask according to the normalized value and the human image mask.
3. The method according to claim 2, wherein determining the first distance between each pixel point of the portrait region in the portrait mask and the center point of the corresponding target face clear region includes:
if the first image only comprises one face, determining the distance between each pixel point of a portrait area in the portrait mask and the center point of a clear face area corresponding to the face as a first distance;
if the first image comprises at least two faces, determining a target face clear area with the nearest pixel point distance, and respectively calculating a first distance between each pixel point and a central point of the corresponding target face clear area.
4. A method according to claim 3, wherein determining the closest target face-definition region of each pixel point, and calculating the first distance between each pixel point and the center point of the corresponding target face-definition region, respectively, comprises:
calculating a third distance from a target pixel point to each face clear region according to a second distance between the target pixel point and the center point of each face clear region and the region radius of each face clear region, wherein the target pixel point is any pixel point of a human image region in the human image mask;
determining a human face clear region with the minimum third distance as a target human face clear region corresponding to the target pixel point;
and determining a first distance between the target pixel point and the center point of the target face clear region according to the third distance between the target pixel point and the target face clear region and the region radius of the target face clear region.
5. The method according to any one of claims 2 to 4, wherein the face information includes coordinates of a center point of a face region, and a radius of a circumcircle of the face region; the clear face region of each face in the face mask is a circular region with the center point coordinates of the face region corresponding to each face as the center point and the circumcircle radius of the face region corresponding to each face as the region radius.
6. The method of claim 2, wherein normalizing the first distance of each pixel according to the image transition range of the target face clear region corresponding to each pixel to obtain a normalized value corresponding to each pixel comprises:
determining a human image transition range corresponding to a target human face clear region according to the region radius of the target human face clear region corresponding to a target pixel point, wherein the target pixel point is any pixel point of a human image region in the human image mask;
calculating a difference value between the first distance of the target pixel point and the region radius, and determining a ratio between the difference value and the portrait transition range;
and carrying out normalization processing on the ratio to obtain a normalized value of the target pixel point.
7. The method of claim 6, wherein normalizing the ratio to obtain a normalized value of the target pixel comprises:
if the ratio is smaller than 0, determining that the normalized value of the target pixel point is 0;
if the ratio is greater than 1, determining that the normalized value of the target pixel point is 1;
and if the ratio is greater than or equal to 0 and less than or equal to 1, taking the ratio as a normalized value of the target pixel point.
8. The method according to claim 6 or 7, wherein generating a weight mask from the normalized values and the portrait mask comprises:
and multiplying the pixel value of each pixel point of the portrait region in the portrait mask with the normalized value of each pixel point to obtain the weight value corresponding to each pixel point so as to generate a weight mask.
9. The method of any one of claims 2-4, wherein after the generating a weight mask from the normalized values and the portrait mask, the method further comprises:
performing blurring processing on the weight mask;
the fusion of the blurred image and the background blurring image according to the weight mask is carried out to obtain a second image, and the method comprises the following steps:
and fusing the blurred image and the background blurring image according to the weight mask after blurring processing to obtain a second image.
10. The method of claim 1, wherein generating a weight mask from the face mask of the first image and the face information comprises:
and generating a weight mask according to the portrait mask of the first image, the depth map of the first image and the face information, wherein the depth map comprises depth information corresponding to each pixel point in the first image.
11. The method according to any one of claims 1 to 4, wherein the acquiring the blurred image and the background blurred image corresponding to the first image includes:
performing blurring processing on the first image based on a first blurring radius to obtain a blurred image;
and blurring the background area of the first image based on a second blurring radius to obtain a background blurring image, wherein the first blurring radius is smaller than the second blurring radius.
12. The method according to any one of claims 1 to 4, wherein the fusing the blurred image and the background blurred image according to the weight mask to obtain a second image includes:
and taking the weight mask as the Alpha value of the blurred image, and carrying out Alpha fusion on the blurred image and the background blurred image to obtain a second image.
13. An image processing apparatus, comprising:
the face recognition module is used for recognizing the face of the first image to obtain face information of each face contained in the first image;
the weight generation module is used for generating a weight mask according to the face mask of the first image and the face information, wherein the weight mask is used for determining a face clear area corresponding to each face in the first image and a face fuzzy transition area corresponding to each face; the human face clear region refers to a region which needs to be kept clear in a human image region of the first image, and the human image blurring transition region refers to a change region from clear to blurring in the human image region;
the image acquisition module is used for acquiring a blurred image and a background blurred image corresponding to the first image, and the blurring degree of the blurred image is smaller than that of a background area in the background blurred image; the blurred image is an image obtained by blurring the first image, and the background blurring image is an image obtained by blurring a background area of the first image;
And the fusion module is used for fusing the blurred image and the background blurring image according to the weight mask to obtain a second image.
14. An electronic device comprising a memory and a processor, the memory having stored therein a computer program which, when executed by the processor, causes the processor to implement the method of any of claims 1 to 12.
15. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the method according to any one of claims 1 to 12.
CN202111015385.5A 2021-08-31 2021-08-31 Image processing method, device, electronic equipment and computer readable storage medium Active CN113673474B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111015385.5A CN113673474B (en) 2021-08-31 2021-08-31 Image processing method, device, electronic equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111015385.5A CN113673474B (en) 2021-08-31 2021-08-31 Image processing method, device, electronic equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN113673474A CN113673474A (en) 2021-11-19
CN113673474B true CN113673474B (en) 2024-01-12

Family

ID=78547791

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111015385.5A Active CN113673474B (en) 2021-08-31 2021-08-31 Image processing method, device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN113673474B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115223022B (en) * 2022-09-15 2022-12-09 平安银行股份有限公司 Image processing method, device, storage medium and equipment
CN115526809B (en) * 2022-11-04 2023-03-10 山东捷瑞数字科技股份有限公司 Image processing method and device, electronic equipment and storage medium

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107704798A (en) * 2017-08-09 2018-02-16 广东欧珀移动通信有限公司 Image weakening method, device, computer-readable recording medium and computer equipment
WO2019105214A1 (en) * 2017-11-30 2019-06-06 Oppo广东移动通信有限公司 Image blurring method and apparatus, mobile terminal and storage medium
CN108234882A (en) * 2018-02-11 2018-06-29 维沃移动通信有限公司 A kind of image weakening method and mobile terminal
CN111402111A (en) * 2020-02-17 2020-07-10 深圳市商汤科技有限公司 Image blurring method, device, terminal and computer readable storage medium
CN112561822A (en) * 2020-12-17 2021-03-26 苏州科达科技股份有限公司 Beautifying method and device, electronic equipment and storage medium
CN112884637A (en) * 2021-01-29 2021-06-01 北京市商汤科技开发有限公司 Special effect generation method, device, equipment and storage medium
CN112991208A (en) * 2021-03-11 2021-06-18 Oppo广东移动通信有限公司 Image processing method and device, computer readable medium and electronic device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
无监督深度学习模型的多聚焦图像融合算法;王长城等;《计算机工程与应用》;全文 *

Also Published As

Publication number Publication date
CN113673474A (en) 2021-11-19

Similar Documents

Publication Publication Date Title
CN111402135B (en) Image processing method, device, electronic equipment and computer readable storage medium
CN111741211B (en) Image display method and apparatus
CN113313661B (en) Image fusion method, device, electronic equipment and computer readable storage medium
CN110149482B (en) Focusing method, focusing device, electronic equipment and computer readable storage medium
CN110248096B (en) Focusing method and device, electronic equipment and computer readable storage medium
CN113888437A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN108810418B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
CN113766125B (en) Focusing method and device, electronic equipment and computer readable storage medium
WO2021022983A1 (en) Image processing method and apparatus, electronic device and computer-readable storage medium
WO2021057474A1 (en) Method and apparatus for focusing on subject, and electronic device, and storage medium
CN113658197B (en) Image processing method, device, electronic equipment and computer readable storage medium
CN108259770B (en) Image processing method, image processing device, storage medium and electronic equipment
WO2022261828A1 (en) Image processing method and apparatus, electronic device, and computer-readable storage medium
US12039767B2 (en) Subject detection method and apparatus, electronic device, and computer-readable storage medium
CN113313626A (en) Image processing method, image processing device, electronic equipment and storage medium
CN108616700B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN113673474B (en) Image processing method, device, electronic equipment and computer readable storage medium
CN110956679B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN112017137B (en) Image processing method, device, electronic equipment and computer readable storage medium
CN113610865B (en) Image processing method, device, electronic equipment and computer readable storage medium
CN113674303B (en) Image processing method, device, electronic equipment and storage medium
CN113610884B (en) Image processing method, device, electronic equipment and computer readable storage medium
CN113379609B (en) Image processing method, storage medium and terminal equipment
CN110365897B (en) Image correction method and device, electronic equipment and computer readable storage medium
CN116437222B (en) Image processing method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant