CN111028137A - Image processing method, image processing device, electronic equipment and computer readable storage medium - Google Patents
- Publication number
- CN111028137A (application CN201811177952.5A)
- Authority
- CN
- China
- Prior art keywords
- image
- processed
- target
- processing
- binary
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/04—Context-preserving transformations, e.g. by using an importance map
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The application relates to an image processing method and apparatus, an electronic device, and a computer-readable storage medium. The method comprises: acquiring a first binary image generated according to a target area of an image to be processed; fusing the first binary image with a stylized image to obtain a first image, the stylized image being obtained by stylizing the image to be processed; fusing a second binary image generated from the first binary image with the image to be processed to obtain a second image; and synthesizing the first image and the second image to obtain a target image. Because the first binary image generated from the target area is fused with the stylized image, and the second binary image is fused with the image to be processed, the synthesized target image confines the stylized effect to the target area, which improves the accuracy of image processing.
Description
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, an electronic device, and a computer-readable storage medium.
Background
With the development of image processing technology, image stylization has emerged. An electronic device can stylize an image while shooting or editing it, so that the processed image imitates the effect of a real artistic creation. However, conventional methods suffer from low image-processing accuracy.
Disclosure of Invention
The embodiment of the application provides an image processing method, an image processing device, electronic equipment and a computer readable storage medium, which can improve the accuracy of image processing.
An image processing method comprising:
acquiring a first binary image generated according to a target area of an image to be processed;
performing fusion processing on the first binary image and the stylized image to obtain a first image, wherein the stylized image is obtained by stylizing the image to be processed;
performing fusion processing on a second binary image generated according to the first binary image and the image to be processed to obtain a second image;
and synthesizing the first image and the second image to obtain a target image.
An image processing apparatus comprising:
the binary image generation module is used for acquiring a first binary image generated according to a target area of an image to be processed;
the first fusion processing module is used for carrying out fusion processing on the first binary image and the stylized image to obtain a first image, wherein the stylized image is obtained by stylizing the image to be processed;
the second fusion processing module is used for carrying out fusion processing on a second binary image generated according to the first binary image and the image to be processed to obtain a second image;
and the synthesis processing module is used for synthesizing the first image and the second image to obtain a target image.
An electronic device comprising a memory and a processor, the memory having stored therein a computer program that, when executed by the processor, causes the processor to perform the steps of:
acquiring a first binary image generated according to a target area of an image to be processed;
performing fusion processing on the first binary image and the stylized image to obtain a first image, wherein the stylized image is obtained by stylizing the image to be processed;
performing fusion processing on a second binary image generated according to the first binary image and the image to be processed to obtain a second image;
and synthesizing the first image and the second image to obtain a target image.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring a first binary image generated according to a target area of an image to be processed;
performing fusion processing on the first binary image and the stylized image to obtain a first image, wherein the stylized image is obtained by stylizing the image to be processed;
performing fusion processing on a second binary image generated according to the first binary image and the image to be processed to obtain a second image;
and synthesizing the first image and the second image to obtain a target image.
According to the image processing method, the image processing apparatus, the electronic device, and the computer-readable storage medium, a first binary image generated according to the target area of the image to be processed is acquired; the first binary image is fused with a stylized image, obtained by stylizing the image to be processed, to obtain a first image; a second binary image generated from the first binary image is fused with the image to be processed to obtain a second image; and the first and second images are synthesized to obtain the target image. Because the stylized effect is confined to the target area while the rest of the picture is retained, the accuracy of image processing can be improved.
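The overall flow above can be sketched in a few lines of numpy, under the assumption that "fusion processing" is pixel-wise masking and "synthesis processing" is pixel-wise addition (the patent leaves the exact fusion and synthesis rules open); function and variable names here are illustrative, not from the patent:

```python
import numpy as np

def stylize_target_area(image, mask, stylized):
    """image: H x W x 3 image to be processed; stylized: its stylized version;
    mask: H x W first binary image (1 inside the target area, 0 outside)."""
    mask3 = mask[..., np.newaxis].astype(image.dtype)  # broadcast over RGB channels
    first = mask3 * stylized          # first image: stylized pixels in target area
    second = (1 - mask3) * image      # second image: original pixels elsewhere
    return first + second             # target image: synthesis of the two
```

With a mask covering the left column of a 2x2 image, the output keeps stylized values on the left and original values on the right.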
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a diagram of an application environment of an image processing method in one embodiment;
FIG. 2 is a flow diagram of a method of image processing in one embodiment;
FIG. 3 is a flow diagram that illustrates stylizing an image to be processed, according to one embodiment;
FIG. 4 is a flow diagram of a method of image processing in one embodiment;
FIG. 5 is a flow diagram of smoothing a target image in one embodiment;
FIG. 6 is a diagram illustrating processing of an image using an image processing method according to one embodiment;
FIG. 7 is a block diagram showing the configuration of an image processing apparatus according to an embodiment;
FIG. 8 is a block diagram showing an internal configuration of an electronic apparatus according to an embodiment;
FIG. 9 is a schematic diagram of an image processing circuit in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It will be understood that, as used herein, the terms "first," "second," and the like may be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another. For example, a first image may be referred to as a second image, and similarly, a second image may be referred to as a first image, without departing from the scope of the present application. The first image and the second image are both images, but they are not the same image.
FIG. 1 is a diagram of an embodiment of an application environment of an image processing method. As shown in fig. 1, the electronic apparatus 100 has a camera mounted thereon. Specifically, the electronic device 100 may obtain an image to be processed by shooting through a camera, obtain a first binary image generated according to a target area of the image to be processed, perform fusion processing on the first binary image and a stylized image obtained by stylizing the image to be processed to obtain a first image, perform fusion processing on a second binary image generated according to the first binary image and the image to be processed to obtain a second image, and perform synthesis processing on the first image and the second image to obtain a target image. It is understood that the electronic device 100 may be a mobile phone, a computer, a wearable device, etc., and is not limited thereto.
FIG. 2 is a flow diagram of a method of image processing in one embodiment. The image processing method in the present embodiment is described by taking the electronic device 100 in fig. 1 as an example. As shown in fig. 2, the image processing method includes steps 202 to 208. Wherein:
Step 202, acquiring a first binary image generated according to a target area of an image to be processed.
The image to be processed is an image composed of a plurality of pixels. Specifically, the image to be processed may be generated by the electronic device capturing a picture of the current scene in real time through a camera; it may also be an image stored locally in the electronic device, or an image downloaded from a network. The target area of the image to be processed is the area that carries the stylized effect after processing. Specifically, the target area may be the area where the target object is located, a background area in the image to be processed, or the whole area of the image to be processed. The electronic device generates a first binary image according to the target area; that is, it sets different pixel values for the pixels inside and outside the target area, where all pixels inside the target area share one pixel value and all pixels outside it share another. The target area may be selected by the user on the image to be processed, or obtained by the electronic device detecting the image to be processed. The electronic device may acquire the first binary image generated from the target area of the image to be processed.
Step 204, performing fusion processing on the first binary image and the stylized image to obtain a first image, wherein the stylized image is obtained by stylizing the image to be processed.
Stylization refers to an operation that gives an image an artistic style. When depicting a subject, an artist may deviate from the actual picture, for example through pencil, oil, or watercolor brushwork, or by exaggerating, deforming, distorting, or fragmenting the actual picture, so that the resulting picture carries a distinct artistic style. The electronic device can stylize the image to be processed according to different artistic styles so that it takes on the corresponding effect. Specifically, the electronic device may stylize the image through one or more of altering image pixels, increasing contrast, increasing saturation or edge saturation, adding lines to the image, color filling, and image decomposition and recomposition. For example, the stylized image may be given a sketch, watercolor, oil painting, comic, or mosaic effect, among others, without limitation. The first image is an image that carries the stylization effect in the area corresponding to the target area of the image to be processed. The electronic device can stylize the image to be processed and fuse the resulting stylized image with the first binary image to obtain the first image.
Fusion processing merges the image information of several images, so the fused image can carry more information. Specifically, the electronic device may fuse two or more images, for example fusing 2 images into 1. The fusion may use an image fusion algorithm, or combine statistics of the pixels in the images such as the mean pixel value, entropy, standard deviation, and average gradient.
By fusing the first binary image generated from the target area with the stylized image obtained from the stylization processing, the electronic device obtains a first image that carries the stylization effect.
Step 206, performing fusion processing on a second binary image generated according to the first binary image and the image to be processed to obtain a second image.
The electronic device may generate the second binary image from the first binary image; specifically, it may process the pixel value of every pixel of the first binary image to obtain the second binary image. The second image is an image in which part of the picture of the image to be processed is retained. The electronic device may fuse the second binary image with the image to be processed to obtain the second image.
Step 208, synthesizing the first image and the second image to obtain a target image.
The target image is an image whose target area carries the stylization effect. Synthesis is the operation of generating a final composite image from several images according to a predetermined rule. The electronic device synthesizes the first image and the second image into the target image; specifically, it may superimpose the pixel values of corresponding pixels of the first and second images according to a certain rule to generate the target image.
The superposition may be linear or non-linear. Linear superposition may be a direct summation of the pixel values of pixels in the first image and the corresponding pixels in the second image, or a weighted summation of the two. Specifically, a first weight may be assigned to the pixel values of the first image and a second weight to the corresponding pixel values of the second image; the pixel value of a pixel in the first image multiplied by the first weight, plus the pixel value of the corresponding pixel in the second image multiplied by the second weight, gives the pixel value of the corresponding pixel in the target image. When the image to be processed is an RGB (Red, Green, Blue) image, the electronic device may further obtain the pixel values of the first and second images in each of the R, G and B channels and superimpose the three channels separately.
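The weighted linear superposition described above can be sketched as follows; the equal default weights are an illustrative choice, not specified by the patent:

```python
import numpy as np

def synthesize(first, second, w1=0.5, w2=0.5):
    """Weighted linear superposition of corresponding pixels, applied
    independently to each RGB channel via element-wise arithmetic."""
    out = w1 * first.astype(np.float64) + w2 * second.astype(np.float64)
    # Clamp back into the valid 8-bit pixel range before converting.
    return np.clip(out, 0, 255).astype(np.uint8)
```

For example, superimposing a uniform 100-valued image and a uniform 200-valued image with equal weights yields a uniform 150-valued image.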
When the electronic device stylizes an image, applying the desired stylization effect to the whole image gives every part of the picture the same treatment; the image subject cannot be highlighted, and the processing is inaccurate. In the image processing method provided by the embodiments of the application, a first binary image generated according to the target area of the image to be processed is acquired; the first binary image is fused with the stylized image obtained by stylizing the image to be processed to obtain a first image; a second binary image generated from the first binary image is fused with the image to be processed to obtain a second image; and the first and second images are synthesized into the target image. The target image contains both the stylized effect and the actual picture, so the target area can be highlighted and the accuracy of image processing is improved. In addition, because the first image is obtained by fusing the first binary image with the stylized image, the stylized effect of the target image stays consistent with that of the stylized image.
In an embodiment, in the provided image processing method, the electronic device may further perform synthesis processing on the first image and the image to be processed to obtain a target image. Specifically, the electronic device may perform cutout processing on the first image according to the target region to obtain a stylized region corresponding to the target region in the first image, and replace the target region in the image to be processed with the stylized region to obtain the target image; the electronic device may also perform superposition processing on the first image and other areas outside the target area in the image to be processed, and replace the target area in the image to be processed with the area corresponding to the target area in the first image to obtain the target image, where the target image is an image including an artistic style and an actual picture.
In one embodiment, before the obtaining of the first binary image generated according to the target area of the image to be processed, the image processing method further includes: detecting a target area of an image to be processed through a target detection model; setting the pixel value of the pixel point in the target area as a first pixel value, and setting the pixel value of the pixel point outside the target area as a second pixel value to obtain a first binary image, wherein the first pixel value is different from the second pixel value.
The electronic device can train a target detection model that performs target detection on an image and outputs the target area of the image to be processed. Specifically, the model may be trained with a deep learning architecture such as an FCN (Fully Convolutional Network). During training, training images annotated with the area where the target object is located are input to the model; the model detects each training image and outputs a predicted target area, a loss function is computed from the predicted and annotated areas, and the model parameters are adjusted according to the loss so that the trained target detection model outputs accurate target areas.
The electronic device can detect the image to be processed with the target detection model to obtain its target area, then set the pixel values of all pixels inside the target area to a first pixel value and those of all pixels outside it to a second pixel value, obtaining a first binary image that distinguishes the target area. The first and second pixel values differ, and both can be chosen according to actual requirements, without limitation. For example, they may be 1 and 0, 0 and 1, or 255 and 0, respectively.
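Building the first binary image from a detected target area can be sketched as below; the rectangular-box form of the detected area is an assumption for illustration (the patent only requires a target area, which could equally be a segmentation mask):

```python
import numpy as np

def make_first_binary_image(shape, region, first_value=1, second_value=0):
    """shape: (H, W) of the image to be processed;
    region: hypothetical (top, left, bottom, right) box from the detector."""
    mask = np.full(shape, second_value, dtype=np.uint8)  # second value outside
    top, left, bottom, right = region
    mask[top:bottom, left:right] = first_value           # first value inside
    return mask
```

A 2x2 target box inside a 4x4 image produces a mask with exactly four first-value pixels.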
In one embodiment, the electronic device may also segment the image to be processed with an image segmentation model. The electronic device can input training images annotated with segmentation regions and train the model so that the trained image segmentation model segments images accurately; the model may be an FCN. Specifically, the image segmentation model may output a gray image containing the target region, in which the pixels of the target region keep their original pixel values from the image to be processed, while all pixels outside the target region share the same pixel value, namely the second pixel value. After obtaining the gray image output by the model, the electronic device sets the pixels inside the target region to the first pixel value and the pixels outside it to the second pixel value, obtaining the first binary image. Using a segmentation model that outputs a gray image containing the target region reduces the model's computational cost and improves the efficiency and accuracy of the segmentation.
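The gray-image-to-binary conversion described above can be sketched as follows, under the assumption that the uniform value outside the target region is 0 (a target pixel that happens to equal that outside value would be misclassified, a corner case the patent does not address):

```python
import numpy as np

def gray_to_binary(gray, first_value=1, second_value=0):
    # Pixels that differ from the uniform outside value kept their
    # original values, so they belong to the target region.
    return np.where(gray != second_value, first_value, second_value)
```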
In one embodiment, the process of detecting the target area of the image to be processed by the target detection model in the provided image processing method includes: acquiring at least one candidate region obtained by detecting the image to be processed with the target detection model; and acquiring the depth information corresponding to each candidate region, and taking the candidate region with the smallest depth information as the target area.
Depth information represents the distance between a photographed object in the image to be processed and the camera. The electronic device obtains the depth information of each candidate region; specifically, it may obtain the depth of every pixel in a region and take the mean, median, maximum, minimum, or mode of those values as the region's depth. When the target detection model detects the image to be processed, it can yield the candidate regions where one or more target objects are located; the electronic device then obtains each region's depth and takes the region with the smallest depth as the target area of the image to be processed. In one embodiment, after obtaining the depth of each candidate region, the electronic device may instead take a preset number of candidate regions whose depth is below a depth threshold as target areas. In everyday shooting, the subject is usually the object closest to the camera, so taking the candidate region with the smallest depth among those detected as the target area improves its accuracy.
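Selecting the closest candidate region can be sketched as below, using the mean pixel depth as the region depth (the patent also allows median, minimum, maximum, or mode) and hypothetical box-shaped candidates:

```python
import numpy as np

def pick_target_region(candidates, depth_map):
    """candidates: list of (top, left, bottom, right) boxes from the
    detector; depth_map: H x W distances to the camera."""
    def region_depth(box):
        t, l, b, r = box
        return depth_map[t:b, l:r].mean()  # mean depth over the region
    return min(candidates, key=region_depth)  # closest region wins
```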
In one embodiment, the provided image processing method further comprises: performing inversion processing on the first binary image to obtain the second binary image.
Inverting the first binary image means swapping the pixel values of all of its pixels. Specifically, the electronic device replaces pixel values equal to the first pixel value with the second pixel value, and pixel values equal to the second pixel value with the first pixel value: it may obtain the pixels of the first binary image whose value is the first pixel value and set them to the second pixel value, then obtain the pixels whose value is the second pixel value and set them to the first pixel value, thereby obtaining the second binary image. For example, when the pixels inside the target area of the first binary image have the value 1 and those outside have the value 0, inversion replaces the 1s inside the target area with 0s and the 0s outside it with 1s, so that every pixel of the second binary image has the opposite value to the corresponding pixel of the first.
Inverting the first binary image yields the second binary image; fusing the second binary image with the image to be processed yields a second image in which the picture outside the target area is retained; and synthesizing the first and second images yields a target image containing both the artistic style and the actual picture. The image processing process is simple, which improves its efficiency.
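The inversion step can be sketched for arbitrary first/second pixel values (255 and 0 here, one of the value pairs the patent mentions):

```python
import numpy as np

def invert_binary(mask, first_value=255, second_value=0):
    """Swap the two pixel values of the first binary image to obtain
    the second binary image."""
    out = mask.copy()
    out[mask == first_value] = second_value  # inside target area -> outside value
    out[mask == second_value] = first_value  # outside target area -> inside value
    return out
```

For the special case of 0/1 masks this is simply `1 - mask`, as used in the pipeline sketch earlier.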
In one embodiment, before the fusion processing of the first binary image and the stylized image, the provided image processing method further includes: performing stylization processing on the image to be processed through a neural network model to obtain the stylized image.
Stylization refers to an operation that gives an image an artistic style. The electronic device can stylize the image to be processed according to different artistic styles so that it takes on the corresponding effect, for example a sketch, watercolor, oil painting, comic, or mosaic effect, among others, without limitation. The electronic device may train a neural network model for stylizing images. Specifically, the model may be trained with deep learning architectures such as VGG (Visual Geometry Group) networks or CNNs (Convolutional Neural Networks), and the image to be processed is then stylized by the model to obtain the stylized image. During training, a large number of training images together with target stylized images are input to the model; the model is trained against the target stylized images, and its parameters are adjusted according to the stylized images it outputs for the training images, so that the trained neural network model accurately outputs stylized images matching the targets. One or more target stylized images with different styles may be used in training, so that the trained model can output stylized images of different styles. Stylizing the image to be processed with a neural network model improves the efficiency of the stylization and gives a better stylization effect.
As shown in fig. 3, in an embodiment, before the fusion processing of the first binary image and the stylized image, the image processing method further includes steps 302 to 306. Wherein:
Step 302, acquiring a target label corresponding to the target area.
Specifically, the electronic device may obtain the target label corresponding to the target area when the target detection model outputs the target area of the image to be processed. Target labels may include, but are not limited to, portraits, text, gourmet food, and animals such as cats and dogs.
Step 304, acquiring a corresponding style conversion type according to the target label.
Specifically, the electronic device may preset style conversion types corresponding to different target labels, so that when the target label of the image to be processed is obtained, the style conversion type corresponding to that label is obtained. For example, the electronic device may preset the style conversion type of a portrait as cartoon, the style conversion type of gourmet food as watercolor, and so on. In one embodiment, the electronic device may also obtain a style conversion type selected by the user. Specifically, the electronic device may offer multiple style conversion types on the display page of the image to be processed, for example by providing effect previews of the different style conversion types, receive the user's selection instruction on an effect preview, and obtain the corresponding style conversion type according to the selection instruction.
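A minimal sketch of the preset label-to-style lookup described above; the label names, style names, and fallback behavior are illustrative assumptions, not values fixed by the patent:

```python
# Hypothetical mapping from detected target labels to style
# conversion types; the actual associations are configurable.
STYLE_BY_LABEL = {
    "portrait": "cartoon",
    "gourmet": "watercolor",
    "text": "sketch",
}

def style_for_label(label, default="oil_painting"):
    """Return the preset style conversion type for a target label,
    falling back to a default (or a user-selected style) when the
    label has no preset entry."""
    return STYLE_BY_LABEL.get(label, default)

print(style_for_label("portrait"))  # cartoon
print(style_for_label("cat"))       # oil_painting (fallback)
```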
Step 306, performing stylization processing on the image to be processed according to the style conversion type to obtain the stylized image.
When the electronic device performs stylization processing on the image to be processed, the image may be processed through a stylization processing algorithm. Specifically, the electronic device may pre-store style processing parameters corresponding to different style conversion types, obtain the style processing parameters matching the style conversion type of the image to be processed, and perform stylization processing accordingly. In one embodiment, the electronic device may train a separate neural network model for each style conversion type; after acquiring the style conversion type of the image to be processed, the electronic device performs stylization processing through the neural network model corresponding to that type. In another embodiment, a single neural network model trained by the electronic device may output stylized images of multiple style conversion types. Specifically, the electronic device may adjust a parameter value of the neural network model according to the acquired style conversion type, so that the adjusted model outputs a stylized image corresponding to that type.
By obtaining the target label corresponding to the target area, obtaining the corresponding style conversion type according to the target label, and performing stylization processing on the image to be processed through the neural network model corresponding to that style conversion type to obtain the stylized image, the electronic device can improve the accuracy of the stylization processing and obtain a better stylization effect.
In one embodiment, before the synthesizing of the first image and the second image, the provided image processing method further includes: acquiring a color adjustment parameter corresponding to the style conversion type, and adjusting the pixel value of each pixel point in the second image according to the color adjustment parameter.
A color adjustment parameter is a parameter for adjusting the pixel values of pixel points in an image according to actual requirements, and may include parameters corresponding to hue, saturation, color temperature, and the like. The electronic device can pre-store color adjustment parameters corresponding to different style conversion types, so that when the style conversion type corresponding to the stylized image is obtained, the corresponding color adjustment parameter is obtained according to that type and the pixel value of each pixel point in the second image is adjusted accordingly. The color adjustment parameters may be set according to actual application requirements and are not limited herein. By obtaining the color adjustment parameter corresponding to the style conversion type, adjusting the pixel values in the second image, and then synthesizing the second image with the first image to obtain the target image, the electronic device can improve the processing effect of the target image. For example, when the pre-stored style conversion type is comic, the corresponding color adjustment parameter may be one that reduces saturation; the second image is adjusted according to this parameter, and after the first image and the second image are synthesized into the target image, the saturation of the area where the image to be processed is retained is reduced, so that the comic-styled target area stands out and the processing effect is better.
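The saturation-reducing adjustment mentioned for the comic style can be sketched as follows. Blending each pixel toward its Rec. 601 luma value is one common way to scale saturation; the choice of luma coefficients and the clipping are assumptions here, not the patent's stored parameters:

```python
import numpy as np

def adjust_saturation(rgb, factor):
    """Scale saturation by blending each pixel toward its luma.
    factor < 1 desaturates (e.g. to de-emphasize the non-stylized
    area); factor = 1 leaves the image approximately unchanged."""
    img = np.asarray(rgb, dtype=np.float32)
    # Rec. 601 luma weights for R, G, B.
    luma = img @ np.array([0.299, 0.587, 0.114], dtype=np.float32)
    blended = luma[..., None] + factor * (img - luma[..., None])
    return np.clip(blended, 0, 255).astype(np.uint8)

# factor=0 collapses a pure-red pixel to its gray value.
pixel = np.array([[[255, 0, 0]]], dtype=np.uint8)
print(adjust_saturation(pixel, 0.0).tolist())  # [[[76, 76, 76]]]
```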
As shown in fig. 4, in one embodiment, the provided image processing method includes steps 402 to 408. Wherein:
The electronic device can obtain the pixel value of each pixel point in the first binary image and the pixel value of the corresponding pixel point in the stylized image, and use the product of the two as the pixel value of the corresponding pixel point in the first image.
The electronic device can obtain the pixel value of each pixel point in the second binary image and the pixel value of the corresponding pixel point in the image to be processed, and use the product of the two as the pixel value of the corresponding pixel point in the second image.
Step 408, superimposing the first image and the second image to obtain the target image.
The electronic device obtains the target area of the image to be processed and generates the first binary image according to the target area; multiplies the pixel values of the pixel points in the first binary image by the pixel values of the corresponding pixel points in the stylized image to obtain the first image; performs inversion processing on the first binary image to obtain the second binary image; multiplies the pixel values of the pixel points in the second binary image by the pixel values of the corresponding pixel points in the image to be processed to obtain the second image; and superimposes the first image and the second image to obtain the target image with the stylized processing effect, thereby realizing stylized processing of the image and meeting personalized requirements.
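The multiply-invert-multiply-add pipeline above can be sketched in a few lines of numpy; `fuse` and its toy 2x2 arrays are illustrative, assuming a binary mask with value 1 inside the target area and 0 outside:

```python
import numpy as np

def fuse(to_process, stylized, mask):
    """Fusion as described: multiply the first binary image by the
    stylized image, invert the mask to get the second binary image,
    multiply it by the image to be processed, then add the two
    partial images to form the target image."""
    mask = mask.astype(to_process.dtype)
    first = mask * stylized            # stylized inside the target area
    second = (1 - mask) * to_process   # original outside the target area
    return first + second

orig = np.array([[10, 20], [30, 40]], dtype=np.uint8)
styl = np.array([[90, 80], [70, 60]], dtype=np.uint8)
mask = np.array([[1, 0], [0, 1]], dtype=np.uint8)  # target area = diagonal
print(fuse(orig, styl, mask).tolist())  # [[90, 20], [30, 60]]
```

Because the two masks are complementary, each output pixel comes from exactly one source image, so the uint8 sum cannot overflow.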
In one embodiment, the process of superimposing the first image and the second image in the provided image processing method includes: acquiring a first pixel value of a pixel point in the first image and a second pixel value of the corresponding pixel point in the second image; and adding the first pixel value and the second pixel value.
The electronic device may acquire the first pixel value of a pixel point in the first image and, by matching the position of that pixel point to the pixel point at the same position in the second image, acquire the corresponding second pixel value. The electronic device may also establish a mapping relationship between the pixel points in the first image and the pixel points in the second image, and obtain the first pixel value and the second pixel value of each pair of corresponding pixel points according to the mapping relationship.
The adding process may directly sum the first pixel value of a pixel point in the first image and the corresponding second pixel value in the second image, or may weight the first pixel value and the second pixel value before summing them. By superimposing the first image, in which the target area carries the stylized processing effect, with the second image, which retains the image to be processed outside the target area, the electronic device obtains the stylized target image and improves the accuracy of image processing.
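The weighted variant of the superposition can be sketched as below; the weight names `w1`/`w2` and the clipping to the 8-bit range are assumptions for illustration:

```python
import numpy as np

def weighted_superpose(first, second, w1=1.0, w2=1.0):
    """Weighted superposition: scale the first and second pixel
    values before summing, then clip back to the 8-bit range.
    With w1 = w2 = 1 this is the plain addition case."""
    out = w1 * first.astype(np.float32) + w2 * second.astype(np.float32)
    return np.clip(out, 0, 255).astype(np.uint8)

a = np.array([[100, 0]], dtype=np.uint8)
b = np.array([[0, 50]], dtype=np.uint8)
print(weighted_superpose(a, b, w1=0.5, w2=1.0).tolist())  # [[50, 50]]
```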
As shown in fig. 5, in one embodiment, the provided image processing method further includes steps 502 to 506. Wherein:
Step 502, detecting the area corresponding to the target area in the target image.
There may be one or more target areas. The electronic device detects the area corresponding to each target area; specifically, the electronic device may acquire each target area output when target detection is performed on the image to be processed, and calculate the area corresponding to each target area.
Step 504, acquiring a target area contour corresponding to a target area whose area exceeds an area threshold.
The area threshold is the area size used to determine whether to smooth the contour of a target area, and can be set according to actual requirements. For example, the area threshold may be 5%, 10%, 18%, or 20% of the image area, without being limited thereto. The electronic device acquires the target areas whose area exceeds the area threshold, and then acquires the target area contour corresponding to each such target area. The target area contour is composed of all the pixel points at the edge of the target area.
Step 506, smoothing each pixel point of the target area contour.
Smoothing is an operation of adjusting pixel values so that the brightness of the adjusted image changes gradually, improving image quality. The electronic device smooths each pixel point of the target area contour; specifically, it may do so through interpolation, linear smoothing, convolution, and the like, so that the target area transitions smoothly into the other areas of the processed target image, optimizing the image processing effect.
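The contour smoothing guarded by the area threshold can be approximated by feathering the binary mask with a small box filter; this `feather_mask` sketch (kernel size, edge padding, pixel-count area measure) is an illustrative assumption, not the patent's specific interpolation or convolution method:

```python
import numpy as np

def feather_mask(mask, area_threshold, k=3):
    """If the target region's pixel count exceeds the threshold,
    soften the mask edge with a k x k box filter so the target area
    blends smoothly into its surroundings; small regions are left
    untouched (they keep a hard edge)."""
    if mask.sum() <= area_threshold:
        return mask.astype(np.float32)
    padded = np.pad(mask.astype(np.float32), k // 2, mode="edge")
    out = np.zeros(mask.shape, dtype=np.float32)
    for dy in range(k):            # accumulate the k*k shifted copies,
        for dx in range(k):        # i.e. an unnormalized box filter
            out += padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out / (k * k)

m = np.zeros((3, 3)); m[1, 1] = 1.0
soft = feather_mask(m, area_threshold=0)   # every pixel becomes 1/9
hard = feather_mask(m, area_threshold=5)   # below threshold: unchanged
```

The feathered mask can replace the hard binary mask in the fusion step, turning the seam between the stylized and retained areas into a gradual transition.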
Fig. 6 is a schematic diagram illustrating the effect of processing an image to be processed with the image processing method to obtain a target image in one embodiment. Taking the first pixel value as 1 and the second pixel value as 0 as an example, the specific steps are as follows:
first, the electronic device may detect a target area 612 in the image to be processed 610, and generate a first binary image 620 according to the target area 612. That is, the electronic device sets the pixel value of each pixel point in the target area 612 in the first binary image 620 to 1, and sets the pixel value of each pixel point outside the target area to 0.
Then, the electronic device performs stylization on the image to be processed 610 to obtain a stylized image 630, so as to multiply the stylized image 630 and the first binary image 620 to obtain a first image 640, where the target area 612 in the first image 640 is a stylized processing effect, and the pixel value of a pixel outside the target area 612 is 0.
Then, the electronic device performs an inversion process on the first binary image 620, that is, the pixel value of the pixel point with the original pixel value of 1 in the first binary image 620 is replaced by 0, and the pixel value of the pixel point with the original pixel value of 0 is replaced by 1, so as to obtain a second binary image 650, and multiplies the second binary image 650 by the image to be processed 610, so as to obtain a second image 660.
Then, the electronic device adds the first image 640 and the second image 660 to obtain a target image 670, wherein the target area 612 in the target image 670 has a stylized processing effect, and the effect of the image to be processed 610 is retained outside the target area 612.
In an embodiment, after obtaining the first image 640, the electronic device may instead synthesize the image to be processed 610 with the first image 640 directly: according to the target area 612, the target area of the image to be processed 610 is replaced with the corresponding area of the first image 640, while the other areas of the image to be processed 610 are retained, to obtain the target image 670.
It should be understood that although the various steps in the flowcharts of figs. 2-5 are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise, there is no strict restriction on the order of these steps, and they may be performed in other orders. Moreover, at least some of the steps in figs. 2-5 may include multiple sub-steps or stages that are not necessarily completed at the same moment but may be performed at different moments, and the order of performing these sub-steps or stages is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
Fig. 7 is a block diagram of an image processing apparatus according to an embodiment. As shown in fig. 7, the image processing apparatus includes a binary image generation module 702, a first fusion processing module 704, a second fusion processing module 706, and a synthesis processing module 708. Wherein:
a binary image generating module 702, configured to obtain a first binary image generated according to a target area of an image to be processed.
The first fusion processing module 704 is configured to perform fusion processing on the first binary image and the stylized image to obtain a first image, where the stylized image is obtained by performing stylized processing on an image to be processed.
The second fusion processing module 706 is configured to perform fusion processing on the second binary image generated according to the first binary image and the image to be processed to obtain a second image.
And a synthesis processing module 708, configured to perform synthesis processing on the first image and the second image to obtain a target image.
In one embodiment, the binary image generation module 702 may be further configured to detect a target region of the image to be processed through a target detection model; setting the pixel value of the pixel point in the target area as a first pixel value, and setting the pixel value of the pixel point outside the target area as a second pixel value to obtain a first binary image, wherein the first pixel value is different from the second pixel value.
In an embodiment, the binary image generating module 702 may be further configured to obtain at least one candidate region obtained by detecting the image to be processed by the target detection model, obtain depth information corresponding to each candidate region, use the candidate region with the smallest depth information as the target region, set a pixel value of a pixel point in the target region as a first pixel value, and set a pixel value of a pixel point outside the target region as a second pixel value, so as to obtain a first binary image, where the first pixel value is different from the second pixel value.
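The depth-based selection described above can be sketched as follows, assuming each candidate region is represented as a boolean mask over a per-pixel depth map (smaller depth meaning closer to the camera); these representations are illustrative assumptions:

```python
import numpy as np

def pick_nearest_region(candidates, depth_map):
    """Choose the candidate region whose mean depth is smallest,
    i.e. the candidate closest to the camera, as the target region."""
    def mean_depth(mask):
        return depth_map[mask].mean()  # average depth inside the mask
    return min(candidates, key=mean_depth)

depth = np.array([[1.0, 5.0], [5.0, 9.0]])
near = np.array([[True, False], [False, False]])  # mean depth 1
far = np.array([[False, False], [False, True]])   # mean depth 9
chosen = pick_nearest_region([far, near], depth)
print(bool(chosen[0, 0]))  # True, the nearer region wins
```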
In an embodiment, the binary image generating module 702 may further be configured to perform an inversion process on the first binary image to obtain a second binary image.
In one embodiment, the provided image processing apparatus further includes a stylization processing module 710. The stylization processing module 710 is configured to obtain a target label corresponding to the target area, obtain a corresponding style conversion type according to the target label, and perform stylization processing on the image to be processed according to the style conversion type to obtain the stylized image.
In an embodiment, the synthesis processing module 708 may be further configured to obtain a color adjustment parameter corresponding to the style conversion type, adjust a pixel value of each pixel in the second image according to the color adjustment parameter, and perform synthesis processing on the adjusted second image and the first image to obtain the target image.
In an embodiment, the provided image processing apparatus further includes a smoothing module 712, where the smoothing module 712 is configured to detect a region area corresponding to a target region in the target image, obtain a target region contour corresponding to the target region whose region area exceeds an area threshold, and perform smoothing on each pixel point of the target region contour.
The image processing device provided by the embodiment of the application can acquire the first binary image generated according to the target area of the image to be processed, perform fusion processing on the first binary image and the stylized image obtained by stylizing the image to be processed to obtain the first image, perform fusion processing on the second binary image generated according to the first binary image and the image to be processed to obtain the second image, and perform synthesis processing on the first image and the second image to obtain the target image, so that the accuracy of image processing is improved.
The division of the modules in the image processing apparatus is only for illustration, and in other embodiments, the image processing apparatus may be divided into different modules as needed to complete all or part of the functions of the image processing apparatus.
Fig. 8 is a schematic diagram of the internal structure of an electronic device in one embodiment. As shown in fig. 8, the electronic device includes a processor and a memory connected by a system bus. The processor provides computing and control capability to support the operation of the entire electronic device. The memory may include a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The computer program can be executed by the processor to implement the image processing method provided in the embodiments of the present application. The internal memory provides a cached execution environment for the operating system and computer programs in the non-volatile storage medium. The electronic device may be a mobile phone, a tablet computer, a personal digital assistant, a wearable device, or the like.
The implementation of each module in the image processing apparatus provided in the embodiment of the present application may be in the form of a computer program. The computer program may be run on a terminal or a server. The program modules constituted by the computer program may be stored on the memory of the terminal or the server. Which when executed by a processor, performs the steps of the method described in the embodiments of the present application.
The embodiment of the application also provides the electronic equipment. The electronic device includes therein an Image Processing circuit, which may be implemented using hardware and/or software components, and may include various Processing units defining an ISP (Image Signal Processing) pipeline. FIG. 9 is a schematic diagram of an image processing circuit in one embodiment. As shown in fig. 9, for convenience of explanation, only aspects of the image processing technique related to the embodiments of the present application are shown.
As shown in fig. 9, the image processing circuit includes an ISP processor 940 and a control logic 950. The image data captured by the imaging device 910 is first processed by the ISP processor 940, and the ISP processor 940 analyzes the image data to capture image statistics that may be used to determine and/or control one or more parameters of the imaging device 910. The imaging device 910 may include a camera having one or more lenses 912 and an image sensor 914. Image sensor 914 may include an array of color filters (e.g., Bayer filters), and image sensor 914 may acquire light intensity and wavelength information captured with each imaging pixel of image sensor 914 and provide a set of raw image data that may be processed by ISP processor 940. The sensor 920 (e.g., a gyroscope) may provide parameters of the acquired image processing (e.g., anti-shake parameters) to the ISP processor 940 based on the type of interface of the sensor 920. The sensor 920 interface may utilize an SMIA (Standard Mobile Imaging Architecture) interface, other serial or parallel camera interfaces, or a combination of the above.
In addition, image sensor 914 may also send raw image data to sensor 920, sensor 920 may provide raw image data to ISP processor 940 based on the type of interface of sensor 920, or sensor 920 may store raw image data in image memory 930.
The ISP processor 940 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 940 may perform one or more image processing operations on the raw image data, collecting statistical information about the image data. Wherein the image processing operations may be performed with the same or different bit depth precision.
Upon receiving raw image data from the image sensor 914 interface, the sensor 920 interface, or the image memory 930, the ISP processor 940 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to the image memory 930 for additional processing before being displayed. The ISP processor 940 receives processed data from the image memory 930 and performs image data processing on it in the raw domain and in the RGB and YCbCr color spaces. The image data processed by the ISP processor 940 may be output to the display 970 for viewing by a user and/or further processed by a Graphics Processing Unit (GPU). Further, the output of the ISP processor 940 may also be sent to the image memory 930, and the display 970 may read image data from the image memory 930. In one embodiment, the image memory 930 may be configured to implement one or more frame buffers. In addition, the output of the ISP processor 940 may be transmitted to an encoder/decoder 960 for encoding/decoding the image data. The encoded image data may be saved, and decompressed before being displayed on the display 970. The encoder/decoder 960 may be implemented by a CPU, a GPU, or a coprocessor.
The statistical data determined by the ISP processor 940 may be transmitted to the control logic 950 unit. For example, the statistical data may include image sensor 914 statistics such as auto-exposure, auto-white balance, auto-focus, flicker detection, black level compensation, lens 912 shading correction, and the like. The control logic 950 may include a processor and/or microcontroller that executes one or more routines (e.g., firmware) that may determine control parameters of the imaging device 910 and control parameters of the ISP processor 940 based on the received statistical data. For example, the control parameters of imaging device 910 may include sensor 920 control parameters (e.g., gain, integration time for exposure control, anti-shake parameters, etc.), camera flash control parameters, lens 912 control parameters (e.g., focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (e.g., during RGB processing), as well as lens 912 shading correction parameters.
The electronic device may implement the image processing method described in the embodiments of the present application according to the image processing technology described above.
The embodiment of the application also provides a computer readable storage medium. One or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the steps of the image processing method.
A computer program product comprising instructions which, when run on a computer, cause the computer to perform an image processing method.
Any reference to memory, storage, a database, or another medium used by the embodiments of the present application may include non-volatile and/or volatile memory. Suitable non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The above-mentioned embodiments express only several implementations of the present application, and their description is specific and detailed, but should not therefore be construed as limiting the scope of the present application. It should be noted that, for a person skilled in the art, several variations and improvements can be made without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.
Claims (10)
1. An image processing method, comprising:
acquiring a first binary image generated according to a target area of an image to be processed;
performing fusion processing on the first binary image and the stylized image to obtain a first image, wherein the stylized image is obtained by stylizing the image to be processed;
performing fusion processing on a second binary image generated according to the first binary image and the image to be processed to obtain a second image;
and synthesizing the first image and the second image to obtain a target image.
2. The method of claim 1, wherein before the acquiring of the first binary image generated according to the target area of the image to be processed, the method further comprises:
detecting a target area of the image to be processed through a target detection model;
setting the pixel value of the pixel point in the target area as a first pixel value, and setting the pixel value of the pixel point outside the target area as a second pixel value to obtain the first binary image, wherein the first pixel value is different from the second pixel value.
3. The method according to claim 2, wherein the detecting the target region of the image to be processed by the target detection model comprises:
obtaining at least one candidate region obtained by detecting the image to be processed by the target detection model;
and acquiring depth information corresponding to each candidate region, and taking the candidate region with the minimum depth information as the target region.
4. The method of claim 1, further comprising:
and performing inversion processing on the first binary image to obtain a second binary image.
5. The method according to claim 1, wherein before the fusion processing of the first binary image and the stylized image, the method further comprises:
acquiring a target label corresponding to the target area;
acquiring a corresponding style conversion type according to the target label;
and performing stylization processing on the image to be processed according to the style conversion type to obtain the stylized image.
6. The method of claim 5, wherein prior to the combining the first image with the second image, further comprising:
acquiring color adjustment parameters corresponding to the style conversion types;
and adjusting the pixel value of each pixel point in the second image according to the color adjustment parameter.
7. The method according to any one of claims 1 to 6, further comprising:
detecting the area corresponding to the target area in the target image;
acquiring a target area outline corresponding to the target area with the area exceeding the area threshold;
and smoothing each pixel point of the target area outline.
8. An image processing apparatus characterized by comprising:
the binary image generation module is used for acquiring a first binary image generated according to a target area of an image to be processed;
the first fusion processing module is used for carrying out fusion processing on the first binary image and the stylized image to obtain a first image, wherein the stylized image is obtained by stylizing the image to be processed;
the second fusion processing module is used for carrying out fusion processing on a second binary image generated according to the first binary image and the image to be processed to obtain a second image;
and the synthesis processing module is used for synthesizing the first image and the second image to obtain a target image.
9. An electronic device comprising a memory and a processor, the memory having stored therein a computer program that, when executed by the processor, causes the processor to perform the steps of the image processing method according to any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811177952.5A CN111028137B (en) | 2018-10-10 | 2018-10-10 | Image processing method, apparatus, electronic device, and computer-readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111028137A true CN111028137A (en) | 2020-04-17 |
CN111028137B CN111028137B (en) | 2023-08-15 |
Family
ID=70191633
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811177952.5A Active CN111028137B (en) | 2018-10-10 | 2018-10-10 | Image processing method, apparatus, electronic device, and computer-readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111028137B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130120393A1 (en) * | 2009-09-04 | 2013-05-16 | Holger Winnemoeller | Methods and Apparatus for Marker-Based Stylistic Rendering |
CN105227865A (*) | 2015-10-29 | 2016-01-06 | 努比亚技术有限公司 | Image processing method and terminal |
CN106778928A (en) * | 2016-12-21 | 2017-05-31 | 广州华多网络科技有限公司 | Image processing method and device |
CN107154030A (en) * | 2017-05-17 | 2017-09-12 | 腾讯科技(上海)有限公司 | Image processing method and device, electronic equipment and storage medium |
CN107743199A (en) * | 2017-10-30 | 2018-02-27 | 努比亚技术有限公司 | Image processing method, mobile terminal and computer-readable recording medium |
CN110956679A (en) * | 2018-09-26 | 2020-04-03 | Oppo广东移动通信有限公司 | Image processing method and device, electronic equipment and computer readable storage medium |
History
- 2018-10-10: Application CN201811177952.5A filed in China (CN); granted as patent CN111028137B, status Active
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112070708A (en) * | 2020-08-21 | 2020-12-11 | 杭州睿琪软件有限公司 | Image processing method, image processing apparatus, electronic device, and storage medium |
CN111986076A (en) * | 2020-08-21 | 2020-11-24 | 深圳市慧鲤科技有限公司 | Image processing method and device, interactive display device and electronic equipment |
US11985287B2 (en) | 2020-08-21 | 2024-05-14 | Hangzhou Glority Software Limited | Image processing method, image processing device, electronic apparatus and storage medium |
CN112070708B (en) * | 2020-08-21 | 2024-03-08 | 杭州睿琪软件有限公司 | Image processing method, image processing apparatus, electronic device, and storage medium |
WO2022068363A1 (en) * | 2020-09-30 | 2022-04-07 | 中兴通讯股份有限公司 | Image processing method and apparatus, and terminal and medium |
CN112862686B (en) * | 2021-02-19 | 2023-10-27 | 杭州国科微电子有限公司 | Demosaicing method, device, equipment and medium based on bright and dark lines |
CN112862686A (en) * | 2021-02-19 | 2021-05-28 | 成都国科微电子有限公司 | Demosaicing method, device, equipment and medium based on bright and dark lines |
CN113194245A (en) * | 2021-03-25 | 2021-07-30 | 上海闻泰电子科技有限公司 | Image processing method, device, equipment and storage medium |
CN113160297A (en) * | 2021-04-25 | 2021-07-23 | Oppo广东移动通信有限公司 | Image depth estimation method and device, electronic equipment and computer-readable storage medium |
CN113160039A (en) * | 2021-04-28 | 2021-07-23 | 北京达佳互联信息技术有限公司 | Image style migration method and device, electronic equipment and storage medium |
CN113160039B (en) * | 2021-04-28 | 2024-03-26 | 北京达佳互联信息技术有限公司 | Image style migration method and device, electronic equipment and storage medium |
CN113469923A (en) * | 2021-05-28 | 2021-10-01 | 北京达佳互联信息技术有限公司 | Image processing method and device, electronic equipment and storage medium |
CN113469923B (en) * | 2021-05-28 | 2024-05-24 | 北京达佳互联信息技术有限公司 | Image processing method and device, electronic equipment and storage medium |
CN115082298A (en) * | 2022-07-15 | 2022-09-20 | 北京百度网讯科技有限公司 | Image generation method, image generation device, electronic device, and storage medium |
CN115272146B (en) * | 2022-07-27 | 2023-04-07 | 天翼爱音乐文化科技有限公司 | Stylized image generation method, system, device and medium |
CN115272146A (en) * | 2022-07-27 | 2022-11-01 | 天翼爱音乐文化科技有限公司 | Stylized image generation method, system, device and medium |
Also Published As
Publication number | Publication date |
---|---|
CN111028137B (en) | 2023-08-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111028137B (en) | Image processing method, apparatus, electronic device, and computer-readable storage medium | |
EP3477931B1 (en) | Image processing method and device, readable storage medium and electronic device | |
CN110149482B (en) | Focusing method, focusing device, electronic equipment and computer readable storage medium | |
CN110428366B (en) | Image processing method and device, electronic equipment and computer readable storage medium | |
CN110248096B (en) | Focusing method and device, electronic equipment and computer readable storage medium | |
CN108805103B (en) | Image processing method and device, electronic equipment and computer readable storage medium | |
CN108810418B (en) | Image processing method, image processing device, mobile terminal and computer readable storage medium | |
CN110473185B (en) | Image processing method and device, electronic equipment and computer readable storage medium | |
CN110572573B (en) | Focusing method and device, electronic equipment and computer readable storage medium | |
CN108805265B (en) | Neural network model processing method and device, image processing method and mobile terminal | |
CN108810413B (en) | Image processing method and device, electronic equipment and computer readable storage medium | |
EP3496383A1 (en) | Image processing method, apparatus and device | |
CN107730444B (en) | Image processing method, image processing device, readable storage medium and computer equipment | |
CN108537749B (en) | Image processing method, image processing device, mobile terminal and computer readable storage medium | |
US20200045219A1 (en) | Control method, control apparatus, imaging device, and electronic device | |
CN108961302B (en) | Image processing method, image processing device, mobile terminal and computer readable storage medium | |
CN108419028B (en) | Image processing method, image processing device, computer-readable storage medium and electronic equipment | |
CN108717530B (en) | Image processing method, image processing device, computer-readable storage medium and electronic equipment | |
CN113313661B (en) | Image fusion method, device, electronic equipment and computer readable storage medium | |
CN109360254B (en) | Image processing method and device, electronic equipment and computer readable storage medium | |
CN107993209B (en) | Image processing method, image processing device, computer-readable storage medium and electronic equipment | |
CN110956679B (en) | Image processing method and device, electronic equipment and computer readable storage medium | |
CN107862658B (en) | Image processing method, image processing device, computer-readable storage medium and electronic equipment | |
CN109712177B (en) | Image processing method, image processing device, electronic equipment and computer readable storage medium | |
CN107948617B (en) | Image processing method, image processing device, computer-readable storage medium and computer equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||