CN111784611B - Portrait whitening method, device, electronic equipment and readable storage medium - Google Patents
- Publication number
- CN111784611B (application CN202010636778.7A)
- Authority
- CN
- China
- Prior art keywords
- portrait
- image
- whitening
- mask
- network
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06T5/00 — Image enhancement or restoration
- G06T5/77 — Retouching; Inpainting; Scratch removal
- G06T2207/10024 — Color image
- G06T2207/20081 — Training; Learning
- G06T2207/30196 — Human being; Person
- G06T2207/30201 — Face
Abstract
The embodiment of the application provides a portrait whitening method, a portrait whitening apparatus, an electronic device, and a readable storage medium, relating to the technical field of image processing. First, an image to be whitened containing a portrait is acquired; the image is then input into a portrait whitening model for whitening to obtain a whitening result image. The portrait whitening model is obtained by training, with portrait images as training samples, a pre-constructed portrait processing network comprising a portrait whitening main network and a portrait mask sub-network, and retaining the trained portrait whitening main network. The trained portrait whitening main network thus whitens the portrait in the image to be whitened while retaining more image details, preventing the image from being distorted by whitening.
Description
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method and apparatus for whitening a portrait, an electronic device, and a readable storage medium.
Background
Portrait whitening is a retouching operation frequently performed by beauty-conscious users. Manual retouching requires first matting out the skin area of the portrait, then adjusting the skin color, and finally smoothing the matted edge area so that the transition between the skin area and the other areas is coordinated and natural. Such retouching requires a certain level of skill, and retouching a single image takes a lot of time, which is unfriendly to most amateur users. There is therefore an urgent need for an algorithm that can intelligently whiten a person's skin with one click.
Currently, a filter is generally used to whiten a portrait, that is, the color of the whole image is adjusted. However, this method cannot confine the adjustment to the skin area: it affects the color of the image background, produces a strong filter look, and rarely achieves the desired retouching effect.
How to retain more image details while whitening the portrait is therefore a problem worth studying.
Disclosure of Invention
In view of the above, the present application provides a portrait whitening method, apparatus, electronic device, and readable storage medium to solve the above problems.
Embodiments of the application may be implemented as follows:
in a first aspect, an embodiment of the present application provides a method for whitening a portrait, including:
acquiring an image to be whitened containing a portrait;
and inputting the image to be whitened into a portrait whitening model for whitening to obtain a whitening result image, wherein the portrait whitening model is obtained by training, with portrait images as training samples, a pre-constructed portrait processing network comprising a portrait whitening main network and a portrait mask sub-network, and retaining the trained portrait whitening main network.
In an alternative embodiment, the portrait whitening model is trained by the following steps:
acquiring a portrait image and a target image, wherein the target image is obtained by carrying out face whitening on a portrait in the portrait image;
taking the portrait image as a training sample, taking the target image as a label, and training the portrait processing network by adopting a pre-constructed loss function to obtain a trained portrait processing network;
and taking the portrait whitening main network in the trained portrait processing network as the portrait whitening model.
In an alternative embodiment, the target image includes a target mask image and a target portrait image, and the portrait whitening main network includes a portrait mask perception sub-network and a portrait whitening sub-network;
the step of training the portrait processing network by taking the portrait image as a training sample and the target image as a label and adopting a pre-constructed loss function to obtain a trained portrait processing network comprises the following steps of:
inputting the portrait image into the portrait mask perception sub-network, and carrying out mask perception on the portrait image by utilizing the portrait mask perception sub-network to obtain a mask perception image;
inputting the mask perceived image into the portrait mask sub-network, and carrying out portrait mask processing on the mask perceived image by using the portrait mask sub-network to obtain a preliminary mask image;
inputting the mask perceived image into the portrait whitening sub-network, and carrying out face whitening on the mask perceived image by using the portrait whitening sub-network to obtain a primary result image after whitening;
calculating a loss value of the loss function according to the preliminary result image, the preliminary mask image, the target mask image and the target portrait image;
and updating parameters of the portrait processing network according to the loss value until the loss value meets a preset condition, so as to obtain the portrait processing network after training.
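The training procedure above can be sketched as follows. Everything below is an illustrative stand-in: the placeholder functions merely mimic the data flow of the three sub-networks (real implementations would be convolutional networks), and the semantic loss term is omitted for brevity.

```python
import numpy as np

# Placeholder sub-networks: simple elementwise stand-ins that only
# illustrate the data flow, not real convolutional layers.
def mask_perception_subnet(image):
    return image * 0.5                            # mask perceived image

def mask_subnet(features):
    return (features > 0.2).astype(np.float64)    # preliminary mask image

def whitening_subnet(features):
    return np.clip(features * 1.5, 0.0, 1.0)      # preliminary result image

def training_step(portrait, target_portrait, target_mask):
    features = mask_perception_subnet(portrait)   # step 1: mask perception
    prelim_mask = mask_subnet(features)           # step 2: portrait mask processing
    prelim_result = whitening_subnet(features)    # step 3: face whitening
    # step 4: loss terms (the semantic loss is omitted in this sketch)
    l1 = float(np.mean(np.abs(target_portrait - prelim_result)))
    l2 = float(np.mean((target_mask - prelim_mask) ** 2))
    return prelim_result, prelim_mask, l1, l2
```

In step 5, a real training loop would backpropagate these loss values to update the network parameters until the stopping condition is met.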
In an alternative embodiment, the loss function includes a semantic loss function, an L1 loss function, and a first L2 loss function, and the loss values include a first output value of the semantic loss function, a second output value of the L1 loss function, and a third output value of the first L2 loss function;
the step of calculating the loss value of the loss function according to the preliminary result image, the preliminary mask image, the target mask image and the target portrait image comprises the following steps:
calculating a first output value of the semantic loss function by using the preliminary result image and the target portrait image;
calculating a second output value of the L1 loss function by using the preliminary result image and the target portrait image;
and calculating a third output value of the first L2 loss function by using the preliminary mask image and the target mask image.
In an optional embodiment, the step of updating the parameters of the portrait processing network according to the loss value until the loss value meets a preset condition, and obtaining the portrait processing network after training includes:
calculating a weighted sum of the first output value, the second output value and the third output value;
judging whether the weighted sum is smaller than a preset threshold value or not;
if yes, stopping updating parameters of the portrait processing network to obtain the portrait processing network after training;
if not, updating parameters of the portrait processing network according to the first output value, the second output value and the third output value, and repeatedly executing the steps until the weighted sum is smaller than the preset threshold value, thereby obtaining the portrait processing network after training.
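The stopping criterion above might be expressed as follows; the weights and threshold here are placeholders, not values prescribed by the patent.

```python
def should_stop(first, second, third,
                weights=(1.0, 1e5, 1e5), threshold=0.5):
    """Stop training when the weighted sum of the three loss output
    values falls below a preset threshold (illustrative values)."""
    weighted_sum = (weights[0] * first
                    + weights[1] * second
                    + weights[2] * third)
    return weighted_sum < threshold
```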
In an alternative embodiment, the semantic loss function includes a pre-trained VGG model and a second L2 loss function;
the step of calculating the first output value of the semantic loss function using the preliminary result image and the target portrait image includes:
inputting the preliminary result image into the VGG model to obtain a first feature map;
inputting the target portrait image into the VGG model to obtain a second feature map;
and calculating an output value of the second L2 loss function by using the first feature map and the second feature map, and taking the output value as the first output value.
In a second aspect, an embodiment of the present application provides a portrait whitening apparatus, including:
the first acquisition module is used for acquiring an image to be whitened containing a portrait;
the whitening module is used for inputting the image to be whitened into a portrait whitening model for whitening to obtain a whitening result image, wherein the portrait whitening model is obtained by training, with portrait images as training samples, a pre-constructed portrait processing network comprising a portrait whitening main network and a portrait mask sub-network, and retaining the trained portrait whitening main network.
In an alternative embodiment, the apparatus further comprises:
the second acquisition module is used for acquiring a portrait image and a target image, wherein the target image is obtained by carrying out face whitening on a portrait in the portrait image;
the training module is used for taking the portrait image as a training sample, taking the target image as a label, and training the portrait processing network by adopting a pre-constructed loss function to obtain a trained portrait processing network;
and the portrait whitening model acquisition module is used for taking a portrait whitening main network in the trained portrait processing network as the portrait whitening model.
In a third aspect, an embodiment of the present application provides an electronic device, where the electronic device includes a processor, a memory, and a bus, where the memory stores machine-readable instructions executable by the processor, and when the electronic device is running, the processor and the memory communicate with each other through the bus, and the processor executes the machine-readable instructions to perform the steps of the portrait whitening method according to any one of the foregoing embodiments.
In a fourth aspect, an embodiment of the present application provides a readable storage medium having stored therein a computer program that, when executed, implements the portrait whitening method according to any one of the foregoing embodiments.
The embodiment of the application provides a portrait whitening method, a portrait whitening apparatus, an electronic device, and a readable storage medium. The trained portrait whitening main network is used to whiten the portrait while retaining more image details, so that the image is not distorted by whitening.
In order to make the above objects, features and advantages of the present application more comprehensible, several embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a block diagram of an electronic device according to an embodiment of the present application.
Fig. 2 is a flowchart of a method for whitening a portrait according to an embodiment of the present application.
Fig. 3 is a training schematic diagram of a portrait whitening model according to an embodiment of the present application.
Fig. 4 is a diagram of one of portrait images provided in an embodiment of the present application.
Fig. 5 is a view showing one of target images corresponding to the portrait image shown in fig. 4 provided in the embodiment of the present application.
Fig. 6 is a target mask image included in the target image shown in fig. 5.
Fig. 7 is a functional block diagram of a figure whitening apparatus according to an embodiment of the present application.
Icon: 100 - electronic device; 110 - memory; 120 - processor; 130 - portrait whitening apparatus; 131 - first acquisition module; 132 - whitening module.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments of the present application. The components of the embodiments of the present application generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the application, as presented in the figures, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
In the description of the present application, it should be noted that terms such as "upper", "lower", "inner", and "outer", if used, indicate orientations or positional relationships based on those shown in the drawings, or those in which the product is conventionally placed in use. They are used merely for convenience of describing the present application and simplifying the description, and do not indicate or imply that the apparatus or element referred to must have a specific orientation or be constructed and operated in a specific orientation; they should therefore not be construed as limiting the present application.
Furthermore, the terms "first," "second," and the like, if any, are used merely for distinguishing between descriptions and not for indicating or implying a relative importance.
It should be noted that the features of the embodiments of the present application may be combined with each other without conflict.
As introduced in the background art, manual portrait whitening requires matting out the skin area, adjusting the skin color, and smoothing the matted edges, which demands retouching skill and considerable time per image. Filter-based whitening, the current common approach, adjusts the color of the whole image: it cannot confine the adjustment to the skin area, affects the background color, and produces a strong filter look. How to retain more image details while whitening the portrait therefore remains a problem worth studying.
In view of the above, the embodiments of the present application provide a portrait whitening method, apparatus, electronic device, and readable storage medium, where a portrait whitening result image is obtained by inputting an image to be whitened including a portrait into a pre-trained portrait whitening model. The above-described scheme is explained in detail below.
Referring to fig. 1, fig. 1 is a block diagram of an electronic device 100 according to an embodiment of the application. The device may comprise a processor 120, a memory 110, a portrait whitening means 130, and a bus, the memory 110 storing machine-readable instructions executable by the processor 120, the processor 120 and the memory 110 communicating via the bus when the electronic device 100 is running, the processor 120 executing the machine-readable instructions and performing the steps of a portrait whitening method.
The memory 110, the processor 120, and other elements are electrically connected directly or indirectly to each other to achieve signal transmission or interaction.
For example, the components may be electrically connected to each other via one or more communication buses or signal lines. The portrait-whitening device 130 includes at least one software function module that may be stored in the memory 110 in the form of software or firmware (firmware). The processor 120 is configured to execute executable modules stored in the memory 110, such as software functional modules or computer programs included in the portrait whitening device 130.
The memory 110 may be, but is not limited to, random access memory (RAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), etc.
The processor 120 may be an integrated circuit chip with signal processing capabilities. The processor 120 may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), etc.
It may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
In an embodiment of the present application, the memory 110 is configured to store a program, and the processor 120 is configured to execute the program after receiving an execution instruction. The method of defining a flow disclosed in any implementation of the embodiments of the present application may be applied to the processor 120, or implemented by the processor 120.
In an embodiment of the present application, the electronic device 100 may be, but is not limited to, a smart phone, a personal computer, a tablet computer, and other devices with processing functions.
It will be appreciated that the structure shown in fig. 1 is merely illustrative. The electronic device 100 may also have more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1. The components shown in fig. 1 may be implemented in hardware, software, or a combination thereof.
As a possible implementation manner, the embodiment of the present application provides a method for whitening a portrait, please refer to fig. 2 in combination, and fig. 2 is a flowchart of the method for whitening a portrait provided by the embodiment of the present application.
The detailed description is provided below in connection with the specific flow shown in fig. 2.
Step S1, obtaining an image to be whitened containing a portrait.
Step S2, inputting the image to be whitened into a portrait whitening model for whitening to obtain a whitening result image, wherein the portrait whitening model is obtained by training, with portrait images as training samples, a pre-constructed portrait processing network comprising a portrait whitening main network and a portrait mask sub-network, and retaining the trained portrait whitening main network.
The image to be whitened may be captured by the current electronic device, or may be stored in the memory in advance by the current electronic device and retrieved from the memory when needed.
As a possible implementation scenario, after the image to be whitened including the portrait is obtained through the two modes, the image to be whitened is sent to the portrait whitening model, and the whitening result image can be obtained.
A pre-constructed portrait processing network comprising a portrait whitening main network and a portrait mask sub-network is trained, and only the trained portrait whitening main network is retained after training is completed. On the one hand, jointly training the two networks in the training stage retains more image details while whitening the portrait and avoids image distortion; on the other hand, deleting the trained portrait mask sub-network entirely when the model is actually used preserves the whitening effect of the portrait whitening model while improving the execution speed of whitening the image to be whitened.
It can be understood that the portrait whitening model may be obtained by training in advance in other electronic devices and then migrating to the current electronic device, or may be obtained by training in advance in the current electronic device and storing.
It should be understood that, in other embodiments, the sequence of part of the steps in the image whitening method according to the embodiment of the present application may be interchanged according to actual needs, or part of the steps may be omitted or deleted.
As an alternative embodiment, referring to fig. 3, the portrait whitening model is trained by the following steps:
step S100, a portrait image and a target image are obtained, wherein the target image is obtained by performing face whitening on a portrait in the portrait image.
Step S200, taking the portrait image as a training sample, taking the target image as a label, and training the portrait processing network by adopting a pre-constructed loss function to obtain a trained portrait processing network.
And step S300, taking the portrait whitening main network in the trained portrait processing network as the portrait whitening model.
Here, face whitening refers to whitening the facial skin of the portrait contained in the portrait image.
For example, as shown in fig. 4 and 5, fig. 4 is one of the portrait images provided in the embodiment of the present application, and fig. 5 is one of the target images corresponding to the portrait image shown in fig. 4 in the embodiment of the present application.
The target image in fig. 5 is obtained by locally whitening the facial skin of the portrait included in the portrait image, and does not disturb the background image other than the portrait.
The portrait processing network is trained by adopting the portrait images and the target images, so that the trained portrait processing network has the effect of whitening faces aiming at the portraits, and more image details are reserved.
Further, the target image comprises a target mask image and a target portrait image, and the portrait whitening main network comprises a portrait mask perception sub-network and a portrait whitening sub-network.
As shown in fig. 6, fig. 6 is a target mask image included in the target image shown in fig. 5.
As a possible implementation manner, please refer to table 1 in combination, table 1 is a schematic structural diagram of a portrait processing network in an embodiment of the present application.
TABLE 1
Wherein WSkM represents the portrait processing network (Whitening Skin Model). ConX_ReLU means that convolution layer X performs a convolution operation followed by a ReLU activation. Skip_LayerX_LayerY indicates that the output of the LayerX layer (after activation) is added to the output of the LayerY layer (after activation); for example, WSkM_Skip_Dec4_Con6 indicates that the output of the WSkM_Dec4_ReLU layer is added to the output of the WSkM_Con6_ReLU layer.
Kernel is a convolution Kernel, padding is a filling parameter, stride is a step size of the convolution Kernel step, imaps is the number of input channels, omaps is the number of Output channels, output is Output, and Mask is a preliminary Mask image.
As shown in the table, the portrait mask sub-network includes the WSkM_Con11_ReLU layer to the WSkM_Con15 layer. The portrait whitening main network comprises the WSkM_Con1_ReLU layer to the WSkM_Con15 layer, the portrait mask perception sub-network comprises the WSkM_Con1_ReLU layer to the WSkM_Dec5_ReLU layer, and the portrait whitening sub-network comprises the WSkM_Dec6_ReLU layer to the WSkM_Dec10 layer.
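The ConX_ReLU and Skip_LayerX_LayerY building blocks can be illustrated as follows; an elementwise scale-and-shift stands in for the real convolution, so this is only a sketch of the wiring, with all names and operations being illustrative.

```python
import numpy as np

def con_relu(x, weight, bias):
    # ConX_ReLU: a convolution followed by ReLU activation; a pointwise
    # scale-and-shift stands in for the actual convolution here.
    return np.maximum(weight * x + bias, 0.0)

def skip_add(layer_x_out, layer_y_out):
    # Skip_LayerX_LayerY: the activated output of LayerX is added
    # elementwise to the activated output of LayerY.
    return layer_x_out + layer_y_out
```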
Thus, in combination with the structure of the portrait processing network, the portrait processing network can be trained as follows to obtain the trained portrait processing network:
firstly, inputting a portrait image into a portrait mask perception sub-network, and carrying out mask perception on the portrait image by utilizing the portrait mask perception sub-network to obtain a mask perception image.
And then, inputting the mask perceived image into a portrait mask sub-network, and carrying out portrait mask processing on the mask perceived image by using the portrait mask sub-network to obtain a preliminary mask image.
And then, inputting the mask perceived image into a human image whitening sub-network, and carrying out face whitening on the mask perceived image by utilizing the human image whitening sub-network to obtain a whitened preliminary result image.
And then, calculating a loss value of the loss function according to the preliminary result image, the preliminary mask image, the target mask image and the target portrait image.
And finally, updating parameters of the portrait processing network according to the loss value until the loss value meets the preset condition, and obtaining the portrait processing network after training.
When multiple portrait images are input into the portrait mask perception sub-network, their sizes may be adjusted in advance to a random value between 256 and 512 by random scaling, which increases the diversity of the training samples and improves the robustness of the trained portrait processing network.
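The random-scaling step could be sketched as a nearest-neighbour resize to a random square size in the stated 256-512 range; the resizing method and function name below are assumptions for illustration only.

```python
import random
import numpy as np

def random_rescale(image, low=256, high=512):
    """Nearest-neighbour resize of an H x W image to a random square
    size in [low, high], standing in for the random scaling step."""
    size = random.randint(low, high)
    h, w = image.shape[:2]
    rows = np.arange(size) * h // size   # source row index per output row
    cols = np.arange(size) * w // size   # source column index per output column
    return image[rows][:, cols]
```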
Further, the penalty function includes a semantic penalty function, an L1 penalty function, and a first L2 penalty function, and the penalty values include a first output value of the semantic penalty function, a second output value of the L1 penalty function, and a third output value of the first L2 penalty function.
As an alternative embodiment, the loss value of the loss function may be calculated according to the preliminary result image, the preliminary mask image, the target mask image, and the target portrait image by:
first, a first output value of a semantic loss function is calculated using the preliminary result image and the target portrait image.
Then, a second output value of the L1 loss function is calculated by using the preliminary result image and the target portrait image.
Finally, a third output value of the first L2 loss function is calculated by using the preliminary mask image and the target mask image.
Wherein, the L1 loss function is:

$$D_{L1} = \frac{1}{n}\sum_{i=1}^{n}\left|y_i - f(x_i)\right|$$

where $D_{L1}$ is the L1 loss function, $y_i$ is the $i$-th pixel value in the target portrait image, $f(x_i)$ is the $i$-th pixel value in the preliminary result image, and $n$ is the number of pixels in the target portrait image or the preliminary result image.
The first L2 loss function is:

$$D_{L2} = \frac{1}{n}\sum_{i=1}^{n}\left(z_i - g(x_i)\right)^2$$

where $D_{L2}$ is the first L2 loss function, $z_i$ is the $i$-th pixel value in the target mask image, $g(x_i)$ is the $i$-th pixel value in the preliminary mask image, and $n$ is the number of pixels in the preliminary mask image or the target mask image.
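The L1 loss over the portrait images and the first L2 loss over the mask images are mean absolute and mean squared pixel differences, which can be sketched directly in NumPy (function names are illustrative):

```python
import numpy as np

def l1_loss(target_portrait, prelim_result):
    # D_L1 = (1/n) * sum_i |y_i - f(x_i)|
    return float(np.mean(np.abs(target_portrait - prelim_result)))

def l2_loss(target_mask, prelim_mask):
    # D_L2 = (1/n) * sum_i (z_i - g(x_i))^2
    return float(np.mean((target_mask - prelim_mask) ** 2))
```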
Further, the semantic loss function includes a pre-trained VGG model and a second L2 loss function.
As an alternative embodiment, the first output value of the semantic loss function may be calculated by:
first, inputting the preliminary result image into a VGG model to obtain a first functional diagram.
And then, inputting the target portrait image into the VGG model to obtain a second functional diagram.
Finally, calculating the output value of the second L2 loss function by using the first functional diagram and the second functional diagram, and taking the output value as a first output value.
The second L2 loss function is:

$$D_{L3} = \frac{1}{n}\sum_{i=1}^{n}\left(p_i - q(x_i)\right)^2$$

where $D_{L3}$ is the second L2 loss function, $p_i$ is the $i$-th pixel value in the first feature map, $q(x_i)$ is the $i$-th pixel value in the second feature map, and $n$ is the number of pixels in the first feature map or the second feature map.
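A minimal sketch of the semantic loss computation follows. The real pipeline uses a pre-trained VGG model's activations; the stub `feature_extractor` below (a channel mean and standard deviation) only imitates that role, and every name here is an assumption for illustration.

```python
import numpy as np

def feature_extractor(image):
    # Stand-in for the pre-trained VGG model: returns a tiny "feature
    # map" (mean and standard deviation) instead of real activations.
    return np.array([image.mean(), image.std()])

def semantic_loss(prelim_result, target_portrait):
    p = feature_extractor(prelim_result)     # first feature map
    q = feature_extractor(target_portrait)   # second feature map
    return float(np.mean((p - q) ** 2))      # second L2 loss, D_L3
```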
As a possible implementation manner, the parameters of the portrait processing network may be updated according to the loss value until the loss value meets a preset condition by the following method, so as to obtain a portrait processing network after training:
first, a weighted sum of the first, second and third output values is calculated.
And secondly, judging whether the weighted sum is smaller than a preset threshold value.
If yes, stopping updating parameters of the portrait processing network, and obtaining the portrait processing network after training.
If not, updating parameters of the portrait processing network according to the first output value, the second output value and the third output value, and repeatedly executing the steps until the weighted sum is smaller than a preset threshold value, thereby obtaining the portrait processing network after training.
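The update-until-threshold loop above can be sketched as plain Python. The `step_fn` interface (one parameter update that returns the three output values) and the weight values are assumptions made for illustration:

```python
def combined_loss(first, second, third, weights=(1.0, 1e5, 1e5)):
    # Weighted sum of the three loss outputs; the example weights follow
    # the values suggested in the description (assumed, tunable).
    w1, w2, w3 = weights
    return w1 * first + w2 * second + w3 * third

def train_until_converged(step_fn, threshold, max_iters=10000):
    # step_fn performs one forward pass plus parameter update and returns
    # (first, second, third) output values; training stops once the
    # weighted sum falls below the preset threshold.
    for it in range(1, max_iters + 1):
        first, second, third = step_fn()
        if combined_loss(first, second, third) < threshold:
            return it  # number of update steps performed
    return max_iters
```

In practice the weighted sum would also be the quantity back-propagated through the portrait processing network; here only the stopping logic is shown.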
Wherein, as an alternative embodiment, the weight of the first output value may be 1, and the weights of the second and third output values may each be 10^5.
Therefore, the semantic loss function keeps details such as textures and edges consistent between the preliminary result image and the target image. The L1 loss function supervises the color information between the preliminary result image and the target image, ensuring that their colors are similar. The first L2 loss function supervises the portrait mask perception sub-network, so that the network perceives the facial skin region that is to be whitened, and whitening can be applied to the face region without changing other image details.
Based on the same inventive concept, and with combined reference to FIG. 7, an embodiment of the present application further provides a portrait whitening apparatus corresponding to the above-mentioned portrait whitening method, where the apparatus includes:
the first obtaining module 131 is configured to obtain an image to be whitened including a portrait.
The whitening module 132 is configured to input the image to be whitened into a portrait whitening model for whitening to obtain a whitening result image, where the portrait whitening model is obtained by taking a portrait image as a training sample, training a pre-constructed portrait processing network including a portrait whitening main network and a portrait mask sub-network, and taking the trained portrait whitening main network as the model.
Further, the apparatus further comprises:
and the second acquisition module is used for acquiring a portrait image and a target image, wherein the target image is obtained by carrying out face whitening on the portrait in the portrait image.
The training module is used for taking the portrait image as a training sample, taking the target image as a label, and training the portrait processing network by adopting a pre-constructed loss function to obtain a trained portrait processing network.
And the portrait whitening model acquisition module is used for taking a portrait whitening main network in the trained portrait processing network as a portrait whitening model.
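At inference time, only the trained main network is kept: the mask perception sub-network locates the skin region and the whitening sub-network brightens it. The toy stand-in below is purely illustrative (the real mapping is learned end to end); it only shows the intended behavior of whitening the masked face region while leaving other pixels untouched:

```python
import numpy as np

class WhiteningMainNetwork:
    """Illustrative stand-in for the trained portrait whitening main network.

    The mask perception sub-network is injected as a callable returning a
    soft mask in [0, 1]; the additive-gain brightening rule is an assumed
    placeholder, not the patent's learned whitening sub-network.
    """

    def __init__(self, mask_subnet, gain=0.2):
        self.mask_subnet = mask_subnet  # portrait mask perception sub-network
        self.gain = gain                # assumed brightening strength

    def __call__(self, image):
        mask = self.mask_subnet(image)
        # Brighten only where the perceived skin mask is high, leaving
        # other image details unchanged; clip to the valid pixel range.
        return np.clip(image + self.gain * mask, 0.0, 1.0)
```

A pixel under a zero mask passes through unchanged, which is exactly the "whiten the face without altering other details" property the first L2 loss supervises during training.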
Because the principle by which the apparatus in the embodiment of the present application solves the problem is similar to that of the above-mentioned portrait whitening method, the implementation principle of the apparatus may refer to the implementation principle of the method, and details are not repeated here.
The embodiment of the application also provides a readable storage medium, wherein a computer program is stored in the readable storage medium, and the method for whitening the portrait is realized when the computer program is executed.
In summary, the embodiments of the present application provide a portrait whitening method, apparatus, electronic device, and readable storage medium. An image to be whitened including a portrait is first acquired, and the image to be whitened is then input into a portrait whitening model for whitening to obtain a whitening result image, where the portrait whitening model takes a portrait image as a training sample, trains a pre-constructed portrait processing network including a portrait whitening main network and a portrait mask sub-network, and takes the trained portrait whitening main network as the model. In this way, the trained portrait whitening main network whitens the portrait while retaining more image details, so that distortion of the image due to whitening is avoided.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners; the apparatus embodiments described above are merely illustrative. It is also noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatuses, methods, and computer program products according to embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any changes or substitutions easily contemplated by those skilled in the art within the scope of the present application should be included in the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (7)
1. A portrait whitening method, the method comprising:
acquiring an image to be whitened containing a portrait;
inputting the image to be whitened into a portrait whitening model to whiten, and obtaining a whitening result image, wherein the portrait whitening model takes a portrait image as a training sample, trains a portrait processing network which is built in advance and comprises a portrait whitening main network and a portrait mask sub-network, and obtains the trained portrait whitening main network; the portrait whitening main network comprises a portrait mask perception sub-network and a portrait whitening sub-network;
the portrait whitening model is obtained by training the following steps:
acquiring a portrait image and a target image, wherein the target image is obtained by carrying out face whitening on a portrait in the portrait image; the target image comprises a target mask image and a target portrait image;
inputting the portrait image into the portrait mask perception sub-network, and carrying out mask perception on the portrait image by utilizing the portrait mask perception sub-network to obtain a mask perception image;
inputting the mask perceived image into the portrait mask sub-network, and carrying out portrait mask processing on the mask perceived image by using the portrait mask sub-network to obtain a preliminary mask image;
inputting the mask perceived image into the portrait whitening sub-network, and carrying out face whitening on the mask perceived image by using the portrait whitening sub-network to obtain a primary result image after whitening;
calculating a loss value of a loss function according to the preliminary result image, the preliminary mask image, the target mask image and the target portrait image;
updating parameters of the portrait processing network according to the loss value until the loss value meets a preset condition, so as to obtain the portrait processing network after training;
and taking the figure whitening main network in the trained figure processing network as the figure whitening model.
2. The method of claim 1, wherein the loss function comprises a semantic loss function, an L1 loss function, and a first L2 loss function, the loss values comprising a first output value of the semantic loss function, a second output value of the L1 loss function, and a third output value of the first L2 loss function;
the step of calculating the loss value of the loss function according to the preliminary result image, the preliminary mask image, the target mask image and the target portrait image comprises the following steps:
calculating a first output value of the semantic loss function by using the preliminary result image and the target portrait image;
calculating a second output value of the L1 loss function by using the preliminary result image and the target portrait image;
and calculating a third output value of the first L2 loss function by using the preliminary mask image and the target mask image.
3. The method according to claim 2, wherein the step of updating parameters of the portrait processing network according to the loss value until the loss value satisfies a preset condition, and obtaining the trained portrait processing network includes:
calculating a weighted sum of the first output value, the second output value and the third output value;
judging whether the weighted sum is smaller than a preset threshold value or not;
if yes, stopping updating parameters of the portrait processing network to obtain the portrait processing network after training;
if not, updating parameters of the portrait processing network according to the first output value, the second output value and the third output value, and repeatedly executing the steps until the weighted sum is smaller than the preset threshold value, thereby obtaining the portrait processing network after training.
4. The method of claim 2, wherein the semantic loss function comprises a pre-trained VGG model and a second L2 loss function;
the step of calculating the first output value of the semantic loss function using the preliminary result image and the target portrait image includes:
inputting the preliminary result image into the VGG model to obtain a first functional diagram;
inputting the target portrait image into the VGG model to obtain a second functional diagram;
and calculating an output value of the second L2 loss function by using the first functional diagram and the second functional diagram, and taking the output value as the first output value.
5. A portrait whitening device, the device comprising:
the first acquisition module is used for acquiring an image to be whitened containing a portrait;
the system comprises a whitening module, a human image processing module and a human image processing module, wherein the whitening module is used for inputting the image to be whitened into a human image whitening model to whiten to obtain a whitening result image, the human image whitening model takes a human image as a training sample, and trains a human image processing network which is constructed in advance and comprises a human image whitening main network and a human image mask secondary network, and the human image whitening main network after training is obtained; the portrait whitening main network comprises a portrait mask perception sub-network and a portrait whitening sub-network;
the second acquisition module is used for acquiring a portrait image and a target image, wherein the target image is obtained by carrying out face whitening on a portrait in the portrait image; the target image comprises a target mask image and a target portrait image;
the training module is used for inputting the portrait image into the portrait mask perception sub-network, and performing mask perception on the portrait image by utilizing the portrait mask perception sub-network to obtain a mask perception image; inputting the mask perceived image into the portrait mask sub-network, and carrying out portrait mask processing on the mask perceived image by using the portrait mask sub-network to obtain a preliminary mask image; inputting the mask perceived image into the portrait whitening sub-network, and carrying out face whitening on the mask perceived image by using the portrait whitening sub-network to obtain a primary result image after whitening; calculating a loss value of a loss function according to the preliminary result image, the preliminary mask image, the target mask image and the target portrait image; updating parameters of the portrait processing network according to the loss value until the loss value meets a preset condition, so as to obtain the portrait processing network after training;
and the portrait whitening model acquisition module is used for taking a portrait whitening main network in the trained portrait processing network as the portrait whitening model.
6. An electronic device comprising a processor, a memory and a bus, wherein the memory stores machine-readable instructions executable by the processor, the processor and the memory in communication via the bus when the electronic device is in operation, the processor executing the machine-readable instructions to perform the steps of the portrait whitening method according to any one of claims 1 to 4.
7. A readable storage medium, wherein a computer program is stored in the readable storage medium, which when executed implements the portrait whitening method according to any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010636778.7A CN111784611B (en) | 2020-07-03 | 2020-07-03 | Portrait whitening method, device, electronic equipment and readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111784611A CN111784611A (en) | 2020-10-16 |
CN111784611B true CN111784611B (en) | 2023-11-03 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||