CN112083863A - Image processing method and device, electronic equipment and readable storage medium
- Publication number: CN112083863A (application number CN202010981348.9A)
- Authority: CN (China)
- Prior art keywords: image, input, makeup, feature, area
- Prior art date
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04845—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/04817—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
- G06F3/0482—Interaction with lists of selectable items, e.g. menus
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Image Processing (AREA)
Abstract
The application discloses an image processing method and apparatus, belonging to the technical field of image processing. The method comprises the following steps: receiving a first input to a first image, the first input comprising selection of a first area; acquiring, in response to the first input, a first image feature of a candidate object corresponding to the first area; receiving a second input to the first image feature, the second input comprising selection of a first image feature of at least one target object from the first image features of a plurality of candidate objects; and processing, in response to the second input, a target area in a second image that matches the target object according to the first image feature of the at least one target object. The application can improve the flexibility of changing the makeup of an image.
Description
Technical Field
The present application belongs to the field of image processing technologies, and in particular relates to an image processing method, an image processing apparatus, an electronic device, and a readable storage medium.
Background
With the improvement of quality of life, users have gradually shifted from "appreciating beauty" to "creating beauty", and applying makeup has become a common way to improve one's appearance. Meanwhile, with the rapid development of Internet technology, various social applications and information websites provide convenient channels for users, so that users can process their own pictures through applications provided by electronic devices and thereby imitate the attractive looks of others.
In the related art, makeup imitation is mainly realized by a one-key makeup-changing scheme: the whole set of makeup (including but not limited to facial makeup, clothing matching, and the like) in a target makeup picture is copied, and the copied set is applied to the user's image so as to imitate the makeup.
However, the above scheme applies an entire set of makeup to the image; such makeup changing is not flexible enough and does not give the user more personalized options.
Disclosure of Invention
An object of the embodiments of the present application is to provide an image processing method, an image processing apparatus, an electronic device, and a readable storage medium, which can solve the problem in the related art that changing the makeup of an image with a whole set of makeup is inflexible.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides an image processing method, including:
receiving a first input to a first image, the first input comprising: selecting a first area;
acquiring a first image feature of a candidate object corresponding to the first region in response to the first input;
receiving a second input to the first image feature, the second input comprising: selecting a first image feature of at least one target object from a plurality of first image features of the candidate objects;
and processing, in response to the second input, a target area in a second image that matches the target object according to the first image feature of the at least one target object.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
a first receiving module to receive a first input to a first image, the first input comprising: selecting a first area;
an obtaining module, configured to obtain, in response to the first input, a first image feature of a candidate object corresponding to the first region;
a second receiving module for receiving a second input to the first image feature, the second input comprising: selecting a first image feature of at least one target object from a plurality of first image features of the candidate objects;
and the processing module is used for responding to the second input and processing a target area matched with the target object in the second image according to the first image characteristic of the at least one target object.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored in the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiments of the present application, by receiving a first input on a region of interest in a first image, the first image feature of the candidate object corresponding to that region can be acquired; that is, the image feature of the local region the user is interested in can be extracted from the whole image. Then, when the user wants to apply makeup to a second image, the first image feature of at least one target object can be selected from the first image features of the candidate objects of interest, and the target area in the second image that matches the target object can be made up according to the first image feature of the at least one target object. The user can therefore apply any local makeup of interest to a local area of the image without transferring the entire makeup of the first image to the second image, which improves the flexibility of changing the makeup of an image and gives the user more options for local makeup.
Drawings
FIG. 1 is a flow diagram of an image processing method according to one embodiment of the present application;
FIG. 2 is one of the schematic diagrams of the graphical interface of one embodiment of the present application;
FIG. 3 is a second schematic diagram of an exemplary graphical interface;
FIG. 4 is a third schematic diagram of an exemplary graphical interface;
FIG. 5 is a fourth illustration of a graphical interface in accordance with an embodiment of the present application;
FIG. 6 is a fifth schematic view of a graphical interface according to one embodiment of the present application;
FIG. 7 is a sixth schematic view of a graphical interface according to an embodiment of the present application;
FIG. 8 is a seventh illustration of a graphical interface diagram according to an embodiment of the present application;
FIG. 9 is an eighth schematic view of a graphical interface according to one embodiment of the present application;
FIG. 10 is a ninth illustration of a graphical interface diagram of one embodiment of the present application;
FIG. 11 is a block diagram of an image processing apparatus according to an embodiment of the present application;
FIG. 12 is a diagram of a hardware configuration of an electronic device according to an embodiment of the present application;
fig. 13 is a hardware configuration diagram of an electronic device according to another embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second" and the like in the description and claims of the present application are used to distinguish between similar elements and not necessarily to describe a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances, so that the embodiments of the application can be practiced in sequences other than those illustrated or described herein. The terms "first", "second" and the like generally denote one class of objects and do not limit their number; for example, a first object may be one or more than one. In addition, "and/or" in the description and claims means at least one of the connected objects, and the character "/" generally indicates that the preceding and succeeding related objects are in an "or" relationship.
The image processing method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
Referring to fig. 1, a flowchart of an image processing method according to an embodiment of the present application is shown, where the method may specifically include the following steps:
Step 101, receiving a first input to a first image, the first input comprising: selecting a first area.
In one example, the first image may be an image with makeup, and the following embodiments are described taking the makeup as facial makeup.
In one example, as shown in fig. 2, a mobile phone interface displays an image 11 of a user and provides a candidate object list including "eyebrows", "eyes", "cheeks", and "mouth". When the user likes the makeup of a candidate object in the image 11, the user may click the icon of that candidate object in the candidate list of fig. 2; here, the user clicking the "mouth" icon corresponds to selecting the mouth region of the "mouth" object in the image 11 (i.e., an example of the first area).
Of course, the first area selected by the user is not limited to one, and a plurality of first areas may be selected simultaneously or sequentially.
In other examples, when the image 11 is displayed on the mobile phone interface, the first areas of different candidate objects may be selected by the number of times the phone is shaken; for example, shaking once selects the eyebrow region of the "eyebrow" object by default, and shaking twice selects the eye region of the "eye" object by default.
Step 102, acquiring, in response to the first input, a first image feature of a candidate object corresponding to the first area.
The number of candidate objects corresponding to the first area may be one or more. In the example of fig. 2, the candidate object is the "mouth", so this step may extract a first image feature of the "mouth" object from the image 11.
It should be noted that the first image feature, the second image feature, the third image feature, the fourth image feature and the fifth image feature described herein are all features that express the makeup of the user (including but not limited to makeup features, clothing features, etc. of the face area). The first image feature, the second image feature, the third image feature and the fourth image feature are all partial makeup features for expressing the area of a certain candidate object in the image, and the fifth image feature is a whole makeup feature (equivalent to a set of partial makeup features of a plurality of candidate objects) for expressing the makeup of a user in the whole image.
Optionally, after the clicking operation in fig. 2, the method of the embodiment of the present application may further display an interface shown in fig. 3, in which a thumbnail 12 of a first region (i.e., a region of the mouth) selected by the user from the image 11 may be displayed, and an icon 13 about the "mouth" object may be displayed in the interface of fig. 3.
Alternatively, in the embodiment of fig. 1 and the following embodiments, when the image feature of a certain candidate object is extracted from an image, the image may be subjected to makeup analysis according to a deep learning algorithm, so as to obtain makeup features (i.e., the image features) of respective areas of different candidate objects in the image.
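As an illustrative sketch only (not the patented implementation), the per-area makeup analysis described above could be organized as follows in Python; `parse_face` (a face-parsing model returning a label map) and `encode_makeup` (a feature encoder) are hypothetical names standing in for the deep learning components.

```python
import numpy as np

# Hypothetical label values for the candidate objects; purely illustrative.
CANDIDATE_LABELS = {"eyebrows": 1, "eyes": 2, "cheeks": 3, "mouth": 4}

def extract_makeup_features(image, parse_face, encode_makeup):
    """Extract one makeup feature per candidate object (area) of a face image."""
    label_map = parse_face(image)               # HxW array of facial-part labels
    features = {}
    for name, label in CANDIDATE_LABELS.items():
        mask = (label_map == label)             # binary mask of this area
        if mask.any():
            region = image * mask[..., None]    # keep only the masked pixels
            features[name] = encode_makeup(region, mask)
        else:
            features[name] = None               # area not found in this image
    return features
```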
Optionally, after step 102, before step 103, the first image feature of the candidate object may be stored in a makeup material library.
In addition, during storage, the first image feature may be stored in association with the candidate object (for example, forming a makeup feature list of that candidate object), so that whenever the user likes the makeup of a certain local area (i.e., a candidate object), the makeup feature of that candidate object can be added to the corresponding makeup feature list; the local makeup features in the makeup material library can thus be continuously expanded as the user repeatedly triggers step 101.
Candidate objects can be preset in the makeup material library. When the candidate object corresponding to step 101 triggered by the user is one of the preset candidate objects, the first image feature obtained in step 102 can be stored directly in the makeup feature list of that candidate object in the makeup material library; when it is not a preset candidate object, the icon 13 in the interface of fig. 3 may be displayed as a "+" icon, so that the user can click the "+" icon to define a name for the candidate object of step 102 and store its makeup features in the library.
In the embodiments of the present application, creating parallel makeup feature lists for different makeup areas (i.e., different candidate objects) in the makeup material library realizes an ordered classification of the local-area makeup data that the user personally customizes and likes.
Optionally, when the makeup features of the candidate object are stored in the library, a thumbnail of the second area (see below) to which the candidate object belongs in the first image, or a thumbnail of the first area corresponding to the candidate object (e.g., thumbnail 12 in fig. 3), may be stored in association with them.
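A minimal sketch of how the makeup material library described above might be organized, with one feature list per candidate object plus a "set" list for whole makeups; the class and field names are assumptions for illustration, not the patent's data model.

```python
import numpy as np
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class MakeupEntry:
    feature: np.ndarray          # makeup feature of one candidate object
    thumbnail: Optional[bytes]   # thumbnail expressing the makeup effect

@dataclass
class MakeupMaterialLibrary:
    # Parallel feature lists, one per candidate object ("mouth", "eyes", ...).
    lists: Dict[str, List[MakeupEntry]] = field(default_factory=dict)
    # Whole-makeup "set" list: each option holds per-object features plus a
    # thumbnail of the entire makeup image.
    sets: List[dict] = field(default_factory=list)

    def add_feature(self, candidate: str, entry: MakeupEntry) -> None:
        """Store a local makeup feature under its candidate object."""
        self.lists.setdefault(candidate, []).append(entry)
```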
Step 103, receiving a second input to the first image feature, the second input comprising: selecting a first image feature of at least one target object from the first image features of a plurality of candidate objects.
In one example, the makeup material library stores a makeup feature list for each of a plurality of candidate objects (e.g., eyes, mouth, cheek), where each list may include at least one set of makeup features. To let the user know the makeup effect of each makeup feature, each set of makeup features can be associated with a thumbnail that expresses the makeup effect of the candidate object, so that the user can select a makeup feature (or its corresponding thumbnail) from the makeup feature list of a candidate object of interest (i.e., a target object) in the makeup material library.
Of course, the number of candidate objects of interest selected by the user may be one or more, that is, the number of target objects may be one or more, but the first image feature of each target object corresponds to one makeup effect.
Step 104, processing, in response to the second input, a target area in the second image that matches the target object according to the first image feature of the at least one target object.
In one example, a position parameter of the target area to which a target object belongs may be identified in the second image through deep learning; the first image feature of the target object is then migrated to that target area based on the position parameter, also through a deep learning algorithm, so that the makeup effect of the target area of the second image is consistent with the makeup effect of the area to which the target object belongs in the first image, thereby achieving local makeup.
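A hedged sketch of this local transfer step: locate the target area in the second image, then migrate each stored feature into that area. `locate_region` and `apply_makeup_feature` stand in for the deep-learning models and are assumptions, not the patent's actual networks.

```python
def transfer_local_makeup(second_image, target_features,
                          locate_region, apply_makeup_feature):
    """Apply each selected target object's makeup feature to the matching
    area of the second image, leaving all other areas untouched."""
    result = second_image.copy()
    for candidate, feature in target_features.items():
        # Position parameters (e.g. a mask or bounding box) of the target area.
        region = locate_region(result, candidate)
        if region is None:
            continue                      # candidate not present in this face
        # Migrate the stored makeup feature into just that region.
        result = apply_makeup_feature(result, region, feature)
    return result
```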
Alternatively, the second image may be a face image without makeup;
alternatively, the first and second images may be images of the same user with and without makeup.
In the embodiments of the present application, by receiving a first input on a region of interest in a first image, the first image feature of the candidate object corresponding to that region can be acquired; that is, the image feature of the local region the user is interested in can be extracted from the whole image. Then, when the user wants to apply makeup to a second image, the first image feature of at least one target object can be selected from the first image features of the candidate objects of interest, and the target area in the second image that matches the target object can be made up according to the first image feature of the at least one target object. The user can therefore apply any local makeup of interest to a local area of the image without transferring the entire makeup of the first image to the second image, which improves the flexibility of changing the makeup of an image and gives the user more options for local makeup.
In addition, compared with the technical scheme of transferring an entire makeup, the technical scheme of the embodiments of the present application increases the user's interactive selection of local makeup of interest by means of the first input, and makes adding makeup to an image more interesting and playable.
Alternatively, the first image may be an image obtained after the second image has been entirely changed to the makeup of a target makeup picture (a fourth image described below). Therefore, before step 101, the method of the embodiment of the present application may further include: step 201, step 202, and step 203.
Step 201, receiving a third input, where the third input comprises a selection input for a fourth image and the second image;
The system may provide a preset interface to receive the fourth image and the second image selected by the user.
In one example, the fourth image may be an image of a makeup effect selected by the user, and may be an image of another user; the second image is the user's own image, which may be without makeup.
The third input may be an input operation representing that the second image is entirely changed with the makeup effect of the fourth image.
Step 202, in response to the third input, extracting a fifth image feature from the fourth image, and extracting a face feature from the second image;
For example, if the fourth image is a face image, a deep learning algorithm may be used to extract the makeup features of the whole face from the fourth image, and a deep learning algorithm may likewise be used to extract the face features of the second image.
Step 203, performing a deconvolution operation on the fifth image feature and the face feature to generate the first image.
By performing a deconvolution operation on the two groups of features input to the deconvolution layer, the makeup of the face in the fourth image can be transferred in its entirety to the face area of the second image, generating the first image; the user's face in the first image may differ from the face in the fourth image, but the makeup effects of the two images are consistent.
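As one way such a deconvolution (transposed-convolution) decoder could be wired, here is a hedged PyTorch sketch; the layer sizes and architecture are arbitrary assumptions, since the patent does not specify a network.

```python
import torch
import torch.nn as nn

class MakeupDecoder(nn.Module):
    """Toy decoder: fuses a whole-makeup feature map with a face feature map
    and upsamples them with transposed convolutions into an output image."""
    def __init__(self, feat_channels: int = 256):
        super().__init__()
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(feat_channels * 2, 128, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, makeup_feat, face_feat):
        # Both features are assumed to be spatial maps of identical size.
        fused = torch.cat([makeup_feat, face_feat], dim=1)
        return self.decode(fused)
```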
In the embodiments of the present application, the user selects the target makeup picture and the user's own picture and triggers the third input representing the makeup transfer function, so that the entire makeup of the target makeup picture can be transferred to the user's picture to obtain the first image. The first input and the second input are then triggered on the first image, so that the user can, at any time and place, use the makeup of a favorite local area of the target makeup picture to change the makeup of the corresponding local area of the user's picture.
Alternatively, the first image generated in this embodiment may be the image 11 in fig. 4. The system may further display the dialog box shown in fig. 4; if the user selects the "yes" option in the dialog box, indicating that the user likes this entire set of makeup, an interface as shown in fig. 5 may be displayed, in which a thumbnail 14 of the image 11 and a "set" icon 15 for this entire set of makeup of the image 11 may be shown.
Optionally, after the user selects the "yes" option in fig. 4, the method of the embodiment of the present application may further store the fifth image feature and the thumbnail 15 in the above-mentioned makeup material library in association with each other. To distinguish them from the makeup features of candidate objects in local areas, the fifth image feature and the thumbnail 15 may be stored as one option in a "set" makeup feature list, where each option in the list includes a group of fifth image features and a thumbnail of the image of the entire makeup (for example, the thumbnail 15 in fig. 5). In this way, the makeup features of every whole set of makeup the user likes can be stored in the "set" makeup feature list; when the user wants to change the whole makeup of a picture with one key, selecting an option in the "set" makeup feature list changes the picture to the whole set of makeup corresponding to that option.
Alternatively, if the user does not like the entire set of makeup in fig. 4 but only likes the makeup effect of one or more local areas, the user may select the "no" option in fig. 4 and proceed to step 101; for example, the interface shown in fig. 3 may be displayed, providing the user with a candidate object list from which to select the favorite local areas.
Alternatively, when the above step 102 is executed, it may be implemented by S301 and S302:
S301, in response to the first input, identifying a second area to which a candidate object belongs in the first image, where the candidate object is an object in the first image that matches the first area;
the first region and the second region are associated with the same candidate object, but are different from each other in that the second region is an image region within an accurate contour of the candidate object in the first image; while the first region may be the second region, it may also include image regions other than just within the exact outline of the candidate object, such as a thumbnail 12 of the first region of the "mouth" object in fig. 3.
In one example, when the first input is an input (e.g., a click input) of an icon of a candidate object in the candidate object list with a finger as shown in fig. 2, an outline of the mouth object of the image 11 shown in fig. 2 may be identified, and an area within the outline may be identified as the second area to which the mouth object belongs.
In another example, when the first input is a smearing input performed on the image 11 with a finger as shown in fig. 6, the position information of the smeared region (i.e., the first area) corresponding to the smearing input in the image 11 may be identified. Furthermore, face parsing may be performed on the image 11 or the second image to generate position information of each candidate object in the user's face; the position information of the smeared region is then intersected with the position information of each candidate object, so that the second area to which each candidate object related to the smeared region belongs in the image 11 can be determined (there may be one or more such candidate objects; in fig. 6 the smeared region covers the eyes and eyebrows). For example, after the smearing operation in fig. 6, the interface of fig. 7 may be displayed, in which the second area 16 of the eye object related to the smeared region in fig. 6 is marked. Of course, the candidate objects related to the smeared region in fig. 6 also include the eyebrows, so fig. 7 would also mark a second area of the eyebrow object, which is not shown here.
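A small sketch of the intersection step just described: given the user's smear mask and a face-parsing label map, find which candidate objects the smear touches and return their precise second areas. The label values and threshold are illustrative assumptions.

```python
import numpy as np

PART_LABELS = {"eyebrows": 1, "eyes": 2, "cheeks": 3, "mouth": 4}

def candidates_under_smear(smear_mask, label_map, min_overlap=1):
    """Return {candidate name: exact-region mask} for every candidate object
    whose parsed region intersects the smeared (first) area."""
    hits = {}
    for name, label in PART_LABELS.items():
        part_mask = (label_map == label)
        overlap = np.logical_and(smear_mask, part_mask).sum()
        if overlap >= min_overlap:
            hits[name] = part_mask        # the second area of this candidate
    return hits
```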
S302, acquiring a first image characteristic of the candidate object based on the image characteristic of the second area.
The image feature of the corresponding candidate object can be extracted from the second area of the first image (this image feature again represents the makeup), and the first image feature of the candidate object is then obtained based on the extracted image feature.
In the embodiments of the present application, the candidate object corresponding to the first area selected by the user in the first image can be identified, and the second area to which that candidate object belongs can be identified in the first image, i.e., the precise position in the first image of the candidate object the user is interested in. The first image feature of the candidate object is then acquired based on the image feature of the second area, so that the first image feature corresponds to the image feature of the candidate object of the user-selected first area in the first image. Consequently, when the second image is processed with the first image feature of the user-selected target object, the local makeup represented by the image features of the processed second image matches the local makeup the user is interested in from the first image, realizing personalized local makeup of the user's image.
Alternatively, in one embodiment, in performing S302, an image feature of the candidate object may be extracted from the second region in the first image, and the image feature may be used as the first image feature of the candidate object.
Optionally, the first image feature may be stored in the makeup material library.
In the embodiments of the present application, the makeup material of a certain area of an existing makeup can be stored in the library. When applying makeup to the user's second image, different materials can be freely selected from the library to generate second images with various favorite local makeups. Storing the makeup features of the candidate objects of interest in the first image whose makeup the user likes into the makeup material library helps the user conveniently expand a personalized makeup material library of their own (associated with the user) and select the favorite local makeup that suits them best, which increases the user's choices and improves user participation.
Optionally, in another embodiment, when executing S302, a fourth input to the second area may be received, and in response to the fourth input the second area is edited based on an editing parameter corresponding to the fourth input; a second image feature is then extracted from the edited second area as the first image feature of the candidate object.
Whether the first input is implemented by selecting a candidate object from the candidate object list with a finger as shown in fig. 2, by smearing the region of the candidate object of interest in the image with a finger as shown in fig. 6, or in another manner, the second area can be edited using the technical solution of this embodiment.
Taking fig. 6 and fig. 7 as an example, fig. 7 may not only mark the second area 16 to which the eye object belongs, but also provide several editing options (e.g., skin smoothing, whitening, and fine-tuning). The user can trigger a fourth input to the second area 16 of the eye object by selecting any editing option, and after selecting an option can adjust its editing parameter; for example, the skin-smoothing option has parameters of different smoothing degrees, and the second area 16 is edited with the selected smoothing parameter. After fig. 7 is displayed, the second area 16 marked in fig. 7 is in an editable state; the makeup feature is then re-extracted from the edited second area 16 as the first image feature of the eye object.
The editing parameters corresponding to the fourth input may be provided so that the user can perform personalized editing on the area to which a candidate object of interest belongs. The editing content mainly refers to adjustment of makeup parameters, including but not limited to: editing the makeup color of the current area to be darker or lighter, and editing the texture to be glossy or matte.
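A hedged sketch of this edit-then-re-extract flow: adjust the makeup parameters of the second area, then extract the second image feature from the edited area. The parameter names, ranges, and the simple pixel operations are illustrative assumptions, not the patent's editing algorithm.

```python
import numpy as np

def edit_and_reextract(image, region_mask, encode_makeup,
                       color_shift=0.0, matte=0.0):
    """Edit the makeup parameters of the second area (darker/lighter colour,
    more matte), then re-extract its makeup feature as the first image feature."""
    edited = image.astype(np.float32).copy()
    area = region_mask.astype(bool)
    # Shade adjustment: shift brightness of the masked pixels.
    edited[area] = np.clip(edited[area] * (1.0 + color_shift), 0, 255)
    # A crude "matte" effect: blend the area toward its mean colour.
    if matte > 0:
        mean = edited[area].mean(axis=0)
        edited[area] = (1 - matte) * edited[area] + matte * mean
    edited = edited.astype(np.uint8)
    return encode_makeup(edited, region_mask), edited
```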
Optionally, the first image feature of the candidate object may be stored in the makeup material library.
In the embodiments of the present application, after the makeup parameters of the makeup material of a certain area of an existing makeup (including but not limited to the makeup features and a thumbnail of that area's makeup) are adjusted, the adjusted makeup material of the area is stored in the library. When applying makeup to the user's second image, makeup materials of different candidate objects can be freely selected from the library to generate second images with various favorite local makeups. Adjusting and then storing the makeup features of the candidate objects of interest in the first image allows the user to edit and customize image areas, helps the user conveniently expand a personalized makeup material library of their own (associated with the user), and lets the user select the favorite local makeup that suits them best, which increases the user's choices and improves user participation.
Optionally, in a further embodiment, when performing S302, a third image feature of the second area may be extracted from the first image; a third image matching the third image feature is then acquired and output, where the third image is an image in which the fourth image feature of the third area to which the candidate object belongs is similar to the third image feature; finally, the fourth image feature of each of a plurality of candidate objects is extracted from the target image selected by the user among the third images, and these fourth image features are used as the first image features of the respective candidate objects.
For example, for the second area to which the candidate object of interest selected by the user in the first image belongs, a makeup feature of that area (here called the third image feature) may be extracted, and a makeup request carrying the makeup feature and the corresponding candidate object (for example, makeup feature 1 of the mouth object) may be sent to a server. The server may obtain or pre-store a plurality of makeup pictures, each corresponding to its own set of makeup features, where the set of makeup features of one makeup picture may be extracted from the made-up face image in that picture and includes the makeup feature of each candidate object. In response to the makeup request, the server may search the plurality of sets of makeup features for at least one set of target makeup features whose makeup feature of the mouth object is similar to makeup feature 1 (two image features may be considered similar, for example, when their cosine similarity satisfies a threshold condition). The at least one set of target makeup features and the corresponding at least one makeup picture (i.e., the third image) are then sent back to the requesting client, and the client can output the at least one makeup picture. For example, the interface shown in fig. 8 displays four makeup pictures recommended by the server, and the user can select a favorite picture (i.e., a target image) from them; the method of the embodiment of the present application may then extract, from the target image, the makeup feature of each of the plurality of candidate objects (e.g., facial candidate objects such as eyes, nose, and mouth) as the first image feature of each candidate object. Of course, if the server returns the set of target makeup features corresponding to each makeup picture, the makeup features of each candidate object can be obtained directly from the set of target makeup features corresponding to the target image and used as the first image features of the candidate objects.
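A minimal sketch of how such a server-side search could rank candidate makeup pictures by cosine similarity of the requested candidate object's feature; the gallery layout and threshold are assumptions for illustration.

```python
import numpy as np

def cosine_similarity(a, b):
    a = np.asarray(a, dtype=np.float32)
    b = np.asarray(b, dtype=np.float32)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def recommend_makeup_pictures(query_feature, candidate, gallery, threshold=0.8):
    """gallery: list of (picture_id, {candidate: feature}) pairs.
    Return pictures whose stored feature for `candidate` (e.g. the mouth)
    is similar enough to the queried makeup feature."""
    matches = []
    for picture_id, feature_set in gallery:
        feature = feature_set.get(candidate)
        if feature is None:
            continue
        score = cosine_similarity(query_feature, feature)
        if score >= threshold:
            matches.append((picture_id, score))
    # Best matches first; these become the recommended third images.
    return sorted(matches, key=lambda m: m[1], reverse=True)
```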
The distinction between cosmetic features may be reflected in color parameter values and shape profiles.
In one example, since makeup differences are mainly reflected in differences of color parameter values, the third image may be determined based on the color statistics of the regions to which the same candidate object belongs in the two images, i.e., the second area and the third area. If the difference between the color statistics of the two regions is smaller than a threshold, i.e., the colors of the two regions are close (for example, the mouth colors, i.e., the lipstick colors, are close), the two images are determined to be a group of images in which the image features of that candidate object are similar.
In another example, for an object such as the eyebrow, whether the makeup appearance is similar may be reflected not only in the above color statistics but also in the eyebrow shape; thus, if the contours of the regions to which the eyebrow object belongs in the two images are similar and their color statistics are close, the two images are a group of images in which the image features of the eyebrow object are similar.
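A hedged sketch of this color-plus-shape comparison for the regions of the same candidate object in two images; the thresholds and the crude aspect-ratio shape check are illustrative assumptions only.

```python
import numpy as np

def regions_similar(region_a, region_b, mask_a, mask_b,
                    color_thresh=20.0, shape_thresh=0.3):
    """Decide whether the same candidate object (e.g. the eyebrow) looks
    similar in two images, using mean-colour distance and a rough shape
    comparison of the region masks."""
    mean_a = region_a[mask_a.astype(bool)].mean(axis=0)
    mean_b = region_b[mask_b.astype(bool)].mean(axis=0)
    color_close = np.linalg.norm(mean_a - mean_b) < color_thresh

    # Crude shape check: compare the height/width ratios of the two masks.
    def aspect(mask):
        ys, xs = np.nonzero(mask)
        return (ys.max() - ys.min() + 1) / (xs.max() - xs.min() + 1)

    shape_close = abs(aspect(mask_a) - aspect(mask_b)) < shape_thresh
    return bool(color_close and shape_close)
```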
Alternatively, the first image features of the plurality of candidate objects may be stored in the library separately.
In this embodiment of the application, for the second area to which the candidate object selected by the user from the first image belongs, a third image feature (including but not limited to color statistics and contour information) of the second area can be identified, so that a third image whose fourth image feature of the third area of that candidate object is similar to the third image feature can be obtained; that is, a third image can be obtained whose whole set of makeup matches the makeup of that candidate object in the first image. For example, a third image whose mouth makeup color is close can be found in a preset collection on the server, i.e., a third image whose whole set of makeup matches the mouth color the user selected in the first image. In this way, whole sets of makeup matching the third image feature of the candidate object can be found, realizing intelligent recommendation of whole-makeup images based on the area selected by the user. Finally, the makeup features of the areas corresponding to each candidate object in the target image that the user selects from the recommended third images are stored in the library, enriching the makeup materials. In addition, the makeup pictures (third images) recommended by the system are pictures whose candidate objects have makeup features similar to those of the candidate object of the first image, and the makeup materials of differently made-up makeup pictures in the cloud can be combined and recommended according to the area selected by the user in the first image.
It should be noted that the three specific embodiments related to S302 may be arbitrarily combined to form a new implementation, and details are not described here.
Optionally, in one example, after fig. 3 or fig. 7 is displayed (i.e., after step 102), since the makeup material library stores a makeup feature list for each candidate object (such as eyes, mouth, cheeks), the user can select a group of makeup features from the makeup feature list of each candidate object; the selected combination of makeup features of the candidate objects, together with the thumbnail of the whole makeup image corresponding to that combination (e.g., picture M), can be stored as an option in the makeup feature list named "set" in the makeup material library.
Then, in the above step 103, the interface shown in fig. 9 may be displayed. In fig. 9, thumbnails of the whole-makeup images in the "set" makeup feature list of the makeup material library may be displayed; the picture M in the "set" makeup feature list is shown schematically. When the user clicks the picture M with a finger, icon 17, icon 18, and icon 19 can be displayed, where the enlarged face image in fig. 9 is the original of picture M, and icon 17, icon 18, and icon 19 represent the three candidate objects mouth, cheek, and eyebrow respectively. The user can select the first image feature of a target object from the combination of makeup features corresponding to picture M by clicking any of the three candidate-object icons. For example, if icon 17 and icon 18 are selected, indicating that the selection of the first image feature of the mouth and the first image feature of the cheek from the combination corresponding to picture M has been received, then when step 104 is executed, the first image features of the two selected candidate objects can be migrated to the areas of the corresponding candidate objects in the second image.
In this example, a set of makeup customized by the user can be generated by combining the makeup contents of the candidate objects in the makeup material library and stored in the "set" makeup feature list. When the user wants to apply makeup to their own image, the "set" makeup feature list can be opened, and the makeup features of some candidate objects can be selected from the user-customized makeup combinations in that list to apply makeup to the image; for example, areas such as the mouth, cheeks, and eyebrows can be selected for makeup.
Alternatively, in the example of fig. 10, in step 103, the favorite makeup contents need not be combined into a set before being applied; instead, the makeup features of each candidate object, i.e., the area makeup contents, can be selected directly from the makeup feature list of each candidate object in the makeup material library. As shown in fig. 10, a second image 20 may be displayed, with icons of the candidate objects in the makeup material library displayed below it (icons of 5 candidate objects are shown). When the user clicks an icon with a finger (here, the eye icon), the second area to which the candidate object corresponding to the selected icon belongs can be marked in the second image 20 with a dotted frame, and the makeup effect of one group of makeup features from the eye object's makeup feature list in the makeup material library is displayed in that area. When the user clicks the eye icon again, the makeup effect in the dotted frame switches to the makeup effect of the next makeup feature in the eye object's list. In this way, the user can select a group of makeup features of interest for each candidate object from the makeup feature list of each candidate object in the makeup material library, i.e., the second input selecting the first image feature of at least one target object from the first image features of a plurality of candidate objects is received.
In this example, the user can directly select a group of makeup features for each candidate object of interest from the stored makeup feature list of each candidate object to obtain the makeup features of each area; the corresponding areas of the user's original photo are then made up area by area using those makeup features.
It should be noted that, in the image processing method provided in the embodiment of the present application, the execution subject may be an image processing apparatus, or a control module in the image processing apparatus for executing the image processing method. The image processing apparatus provided in the embodiment of the present application is described with an example in which an image processing apparatus executes an image processing method.
Referring to fig. 11, a block diagram of an image processing apparatus according to an embodiment of the present application is shown. The image processing apparatus includes:
a first receiving module 31, configured to receive a first input to a first image, where the first input includes: selecting a first area;
an obtaining module 32, configured to obtain, in response to the first input, a first image feature of a candidate object corresponding to the first region;
a second receiving module 33, configured to receive a second input of the first image feature, where the second input includes: selecting a first image feature of at least one target object from a plurality of first image features of the candidate objects;
and the processing module 34 is configured to, in response to the second input, process a target region in the second image, which is matched with the target object, according to the first image feature of the at least one target object.
Optionally, the obtaining module 32 includes:
the identification submodule is used for responding to the first input and identifying a second area to which a candidate object belongs in the first image, wherein the candidate object is an object matched with the first area in the first image;
and the acquisition sub-module is used for acquiring the first image characteristic of the candidate object based on the image characteristic of the second area.
Optionally, the obtaining sub-module includes:
a receiving unit, configured to receive a fourth input to the second area;
the editing unit is used for responding to the fourth input and editing the second area based on an editing parameter corresponding to the fourth input;
and a first extraction unit, configured to extract a second image feature of the second region after the editing as a first image feature of the candidate object.
Optionally, the obtaining sub-module includes:
a second extraction unit configured to extract a third image feature of the second region from the first image;
an obtaining unit, configured to obtain and output a third image matched with the third image feature, where the third image is an image in which a fourth image feature of a third region to which the candidate object belongs is similar to the third image feature;
and the third extraction unit is used for extracting the fourth image characteristic of each candidate object in a plurality of candidate objects from the target image selected by the user in the third image, and the fourth image characteristics are respectively used as the first image characteristics of the candidate objects.
Optionally, the apparatus further comprises:
a third receiving module to receive a third input, the third input comprising a selection input for a fourth image and the second image;
the extraction module is used for responding to the third input, extracting fifth image characteristics from the fourth image and extracting face characteristics from the second image;
and the operation module is used for carrying out deconvolution operation on the fifth image characteristic and the human face characteristic to generate the first image.
In the embodiments of the present application, by receiving a first input on a region of interest in a first image, the first image feature of the candidate object corresponding to that region can be acquired; that is, the image feature of the local region the user is interested in can be extracted from the whole image. Then, when the user wants to apply makeup to a second image, the first image feature of at least one target object can be selected from the first image features of the candidate objects of interest, and the target area in the second image that matches the target object can be made up according to the first image feature of the at least one target object. The user can therefore apply any local makeup of interest to a local area of the image without transferring the entire makeup of the first image to the second image, which improves the flexibility of changing the makeup of an image and gives the user more options for local makeup.
The image processing apparatus in the embodiment of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a Personal Computer (PC), a Television (TV), a teller machine, a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The image processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system (Android), an iOS operating system, or other possible operating systems, which is not specifically limited in the embodiments of the present application.
The image processing apparatus provided in the embodiment of the present application can implement each process implemented by the foregoing method embodiment, and is not described here again to avoid repetition.
Optionally, as shown in fig. 12, an electronic device 2000 is further provided in this embodiment of the present application, and includes a processor 2002, a memory 2001, and a program or an instruction stored in the memory 2001 and executable on the processor 2002, where the program or the instruction implements each process of the above-mentioned embodiment of the image processing method when executed by the processor 2002, and can achieve the same technical effect, and no further description is provided here to avoid repetition.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
Fig. 13 is a schematic hardware structure diagram of an electronic device implementing an embodiment of the present application.
The electronic device 1000 includes, but is not limited to: a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, and a processor 1010.
Those skilled in the art will appreciate that the electronic device 1000 may further comprise a power source (e.g., a battery) for supplying power to various components, and the power source may be logically connected to the processor 1010 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 13 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is omitted here.
The user input unit 1007 is configured to receive a first input for a first image, where the first input includes: selecting a first area;
a processor 1010 configured to obtain, in response to the first input, a first image feature of a candidate object corresponding to the first region;
a user input unit 1007 configured to receive a second input for the first image feature, the second input including: selecting a first image feature of at least one target object from a plurality of first image features of the candidate objects;
a processor 1010, configured to process, in response to the second input, a target region in the second image that matches the target object according to the first image feature of the at least one target object.
In the embodiments of the present application, by receiving a first input on a region of interest in a first image, the first image feature of the candidate object corresponding to that region can be acquired; that is, the image feature of the local region the user is interested in can be extracted from the whole image. Then, when the user wants to apply makeup to a second image, the first image feature of at least one target object can be selected from the first image features of the candidate objects of interest, and the target area in the second image that matches the target object can be made up according to the first image feature of the at least one target object. The user can therefore apply any local makeup of interest to a local area of the image without transferring the entire makeup of the first image to the second image, which improves the flexibility of changing the makeup of an image and gives the user more options for local makeup.
Optionally, the processor 1010 is configured to, in response to the first input, identify a second area in the first image to which a candidate object belongs, where the candidate object is an object in the first image that matches the first area; and acquire a first image feature of the candidate object based on the image feature of the second area.
In the embodiments of the present application, the candidate object corresponding to the first area selected by the user in the first image can be identified, and the second area to which that candidate object belongs can be identified in the first image, i.e., the precise position in the first image of the candidate object the user is interested in. The first image feature of the candidate object is then acquired based on the image feature of the second area, so that the first image feature corresponds to the image feature of the candidate object of the user-selected first area in the first image. Consequently, when the second image is processed with the first image feature of the user-selected target object, the local makeup represented by the image features of the processed second image matches the local makeup the user is interested in from the first image, realizing personalized local makeup of the user's image.
Optionally, a user input unit 1007 for receiving a fourth input to the second region;
a processor 1010, configured to, in response to the fourth input, edit the second area based on an editing parameter corresponding to the fourth input; and extracting a second image characteristic of the second area after the editing as a first image characteristic of the candidate object.
In the embodiments of the present application, after the makeup parameters of the makeup material of a certain area of an existing makeup (including but not limited to the makeup features and a thumbnail of that area's makeup) are adjusted, the adjusted makeup material of the area is stored in the library. When applying makeup to the user's second image, makeup materials of different candidate objects can be freely selected from the library to generate second images with various favorite local makeups. Adjusting and then storing the makeup features of the candidate objects of interest in the first image allows the user to edit and customize image areas, helps the user conveniently expand a personalized makeup material library of their own (associated with the user), and lets the user select the favorite local makeup that suits them best, which increases the user's choices and improves user participation.
Optionally, the processor 1010 is configured to extract a third image feature of the second region from the first image; acquire and output a third image matched with the third image feature, where the third image is an image in which a fourth image feature of a third region to which the candidate object belongs is similar to the third image feature; and extract the fourth image feature of each of a plurality of candidate objects from a target image selected by the user among the third images, the fourth image features being respectively used as the first image features of the plurality of candidate objects.
In this embodiment of the application, for the second area to which the candidate object selected by the user in the first image belongs, a third image feature of that area (including but not limited to color statistics and contour information) can be identified, so that a third image whose fourth image feature for the same candidate object is similar to the third image feature can be obtained; that is, a third image carrying a whole set of makeup that matches the makeup of the candidate object in the first image can be found. For example, for the mouth object selected by the user in the first image, third images whose mouth color is close to the mouth makeup feature can be retrieved from a preset collection on a server, giving whole-makeup images that match the selected mouth color. In this way, whole sets of makeup matched to the third image feature of the candidate object are found, and intelligent recommendation of whole-makeup images for the area selected by the user is realized. Finally, the makeup features of the areas corresponding to each candidate object in the target image that the user selects from the recommended third images are stored in the library, enriching the makeup materials. In addition, because the makeup pictures (third images) recommended by the system are pictures whose candidate objects have makeup features similar to those of the candidate object in the first image, makeup materials from different makeup pictures in the cloud can be combined and recommended according to the area selected by the user in the first image.
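As a hedged illustration of this recommendation step, the colour-statistics part of the third image feature could be a normalised colour histogram and the matching a histogram-correlation search over a small gallery; the gallery layout and histogram choice are assumptions, not the patented matching method:

```python
# Sketch of the recommendation step: colour histogram as the colour-statistics
# feature, histogram correlation as the similarity measure (both assumptions).
import cv2
import numpy as np


def colour_histogram(image_bgr, mask=None):
    """Colour-statistics feature of a region (contour descriptors could be
    appended the same way)."""
    hist = cv2.calcHist([image_bgr], [0, 1, 2], mask, [8, 8, 8],
                        [0, 256, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()


def recommend_similar(query_feature, gallery, top_k=3):
    """Return identifiers of the reference (third) images whose fourth image
    feature is most similar to the query feature."""
    scores = {
        name: float(cv2.compareHist(query_feature.astype(np.float32),
                                    feat.astype(np.float32),
                                    cv2.HISTCMP_CORREL))
        for name, feat in gallery.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```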
Optionally, a user input unit 1007 configured to receive a third input, where the third input includes a selection input for a fourth image and the second image;
a processor 1010, configured to, in response to the third input, extract a fifth image feature from the fourth image and extract a face feature from the second image; and perform a deconvolution operation on the fifth image feature and the face feature to generate the first image.
In the embodiment of the application, the user selects a target makeup picture and a user picture and triggers the third input that represents the makeup transfer function, so that the whole makeup of the target makeup picture is transferred to the user picture to obtain the first image. The first input and the second input are then triggered on this first image, so the user can, at any time and place, use the makeup of a local area of a favorite target makeup picture to change the makeup of the corresponding local area of the user picture.
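The disclosure only states that a deconvolution operation is performed on the fifth image feature and the face feature; one plausible reading, sketched below with illustrative channel counts and no trained weights, is a decoder that concatenates the two feature maps and upsamples them with transposed convolutions:

```python
# Toy generator: concatenate the makeup feature map (fifth image feature) with
# the face feature map of the user image, then deconvolve back to image
# resolution. Channel sizes and depth are illustrative assumptions.
import torch
import torch.nn as nn


class MakeupFusionDecoder(nn.Module):
    def __init__(self, feat_channels: int = 64):
        super().__init__()
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(2 * feat_channels, 64, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 3, kernel_size=4, stride=2, padding=1),
            nn.Tanh(),  # output image in [-1, 1]
        )

    def forward(self, makeup_feat: torch.Tensor, face_feat: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([makeup_feat, face_feat], dim=1)  # (N, 2C, H/8, W/8)
        return self.decoder(fused)


# Shape check with random features (an untrained sketch, not the patented model):
decoder = MakeupFusionDecoder()
makeup_feat = torch.randn(1, 64, 32, 32)  # fifth image feature of the fourth image
face_feat = torch.randn(1, 64, 32, 32)    # face feature of the second image
first_image = decoder(makeup_feat, face_feat)  # -> (1, 3, 256, 256)
```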
It should be understood that in the embodiment of the present application, the input unit 1004 may include a graphics processing unit (GPU) 10041 and a microphone 10042, and the graphics processing unit 10041 processes image data of still pictures or videos obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 1006 may include a display panel 10061, and the display panel 10061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 1007 includes a touch panel 10071 and other input devices 10072. The touch panel 10071 is also referred to as a touch screen and may include two parts, a touch detection device and a touch controller. Other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 1009 may be used to store software programs as well as various data, including but not limited to application programs and an operating system. The processor 1010 may integrate an application processor, which mainly handles the operating system, user interface, applications, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor may not be integrated into the processor 1010.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the embodiment of the image processing method, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (10)
1. An image processing method, characterized in that the method comprises:
receiving a first input to a first image, the first input comprising: selecting a first area;
acquiring a first image feature of a candidate object corresponding to the first region in response to the first input;
receiving a second input to the first image feature, the second input comprising: selecting a first image feature of at least one target object from a plurality of first image features of the candidate objects;
and in response to the second input, processing a target area matched with the target object in the second image according to the first image feature of the at least one target object.
2. The method of claim 1, wherein said obtaining a first image feature of a candidate object corresponding to the first region in response to the first input comprises:
in response to the first input, identifying a second region in the first image to which a candidate object belongs, wherein the candidate object is an object in the first image that matches the first region;
and acquiring a first image feature of the candidate object based on the image feature of the second area.
3. The method of claim 2, wherein the obtaining the first image feature of the candidate object based on the image feature of the second region comprises:
receiving a fourth input to the second region;
in response to the fourth input, editing the second area based on an editing parameter corresponding to the fourth input;
and extracting a second image feature of the edited second area as the first image feature of the candidate object.
4. The method of claim 2, wherein the obtaining the first image feature of the candidate object based on the image feature of the second region comprises:
extracting a third image feature of the second region from the first image;
acquiring and outputting a third image matched with the third image characteristic, wherein the third image is an image with a fourth image characteristic similar to the third image characteristic of a third area to which the candidate object belongs;
and extracting the fourth image feature of each of a plurality of candidate objects from a target image selected by the user among the third images, wherein the fourth image features are respectively used as the first image features of the plurality of candidate objects.
5. The method of claim 1, wherein prior to receiving the first input for the first image, the method further comprises:
receiving a third input comprising a selection input for a fourth image and the second image;
in response to the third input, extracting fifth image features from the fourth image and extracting face features from the second image;
and performing a deconvolution operation on the fifth image feature and the face feature to generate the first image.
6. An image processing apparatus, characterized in that the apparatus comprises:
a first receiving module to receive a first input to a first image, the first input comprising: selecting a first area;
an obtaining module, configured to obtain, in response to the first input, a first image feature of a candidate object corresponding to the first region;
a second receiving module for receiving a second input to the first image feature, the second input comprising: selecting a first image feature of at least one target object from a plurality of first image features of the candidate objects;
and the processing module is used for responding to the second input and processing a target area matched with the target object in the second image according to the first image characteristic of the at least one target object.
7. The apparatus of claim 6, wherein the obtaining module comprises:
the identification submodule is used for responding to the first input and identifying a second area to which a candidate object belongs in the first image, wherein the candidate object is an object matched with the first area in the first image;
and the acquisition sub-module is used for acquiring the first image feature of the candidate object based on the image feature of the second area.
8. The apparatus of claim 7, wherein the acquisition submodule comprises:
a receiving unit, configured to receive a fourth input to the second area;
the editing unit is used for responding to the fourth input and editing the second area based on an editing parameter corresponding to the fourth input;
and a first extraction unit, configured to extract a second image feature of the edited second region as the first image feature of the candidate object.
9. The apparatus of claim 7, wherein the acquisition submodule comprises:
a second extraction unit configured to extract a third image feature of the second region from the first image;
an obtaining unit, configured to obtain and output a third image matched with the third image feature, where the third image is an image in which a fourth image feature of a third region to which the candidate object belongs is similar to the third image feature;
and the third extraction unit is used for extracting the fourth image feature of each of a plurality of candidate objects from a target image selected by the user among the third images, the fourth image features being respectively used as the first image features of the plurality of candidate objects.
10. The apparatus of claim 6, further comprising:
a third receiving module to receive a third input, the third input comprising a selection input for a fourth image and the second image;
the extraction module is used for responding to the third input, extracting fifth image characteristics from the fourth image and extracting face characteristics from the second image;
and the operation module is used for performing a deconvolution operation on the fifth image feature and the face feature to generate the first image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010981348.9A CN112083863A (en) | 2020-09-17 | 2020-09-17 | Image processing method and device, electronic equipment and readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112083863A true CN112083863A (en) | 2020-12-15 |
Family
ID=73736549
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010981348.9A Pending CN112083863A (en) | 2020-09-17 | 2020-09-17 | Image processing method and device, electronic equipment and readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112083863A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112818147A (en) * | 2021-02-22 | 2021-05-18 | 维沃移动通信有限公司 | Picture processing method, device, equipment and storage medium |
CN113793248A (en) * | 2021-08-02 | 2021-12-14 | 北京旷视科技有限公司 | Method and device for transferring makeup, and method and device for aligning human face |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107358241A (en) * | 2017-06-30 | 2017-11-17 | 广东欧珀移动通信有限公司 | Image processing method, device, storage medium and electronic equipment |
CN108053365A (en) * | 2017-12-29 | 2018-05-18 | 百度在线网络技术(北京)有限公司 | For generating the method and apparatus of information |
CN108803991A (en) * | 2018-06-12 | 2018-11-13 | 广州视源电子科技股份有限公司 | Object screening method and device, computer readable storage medium and electronic terminal |
CN109671034A (en) * | 2018-12-26 | 2019-04-23 | 维沃移动通信有限公司 | A kind of image processing method and terminal device |
CN110110118A (en) * | 2017-12-27 | 2019-08-09 | 广东欧珀移动通信有限公司 | Dressing recommended method, device, storage medium and mobile terminal |
CN110853119A (en) * | 2019-09-15 | 2020-02-28 | 北京航空航天大学 | Robust reference picture-based makeup migration method |
CN111080747A (en) * | 2019-12-26 | 2020-04-28 | 维沃移动通信有限公司 | Face image processing method and electronic equipment |
CN111127378A (en) * | 2019-12-23 | 2020-05-08 | Oppo广东移动通信有限公司 | Image processing method, image processing device, computer equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108537859B (en) | Image mask using deep learning | |
US10019779B2 (en) | Browsing interface for item counterparts having different scales and lengths | |
CN108121957B (en) | Method and device for pushing beauty material | |
US20190138851A1 (en) | Neural network-based image manipulation | |
US9478054B1 (en) | Image overlay compositing | |
CN108846792B (en) | Image processing method, image processing device, electronic equipment and computer readable medium | |
US10373348B2 (en) | Image processing apparatus, image processing system, and program | |
US12136173B2 (en) | Digital makeup palette | |
CN113330453B (en) | System and method for providing personalized video for multiple persons | |
US11776187B2 (en) | Digital makeup artist | |
CN113114841A (en) | Dynamic wallpaper acquisition method and device | |
EP3912086A2 (en) | Systems and methods for providing personalized videos | |
US20130301938A1 (en) | Human photo search system | |
CN112306347B (en) | Image editing method, image editing device and electronic equipment | |
WO2023197780A1 (en) | Image processing method and apparatus, electronic device, and storage medium | |
US11961169B2 (en) | Digital makeup artist | |
CN112083863A (en) | Image processing method and device, electronic equipment and readable storage medium | |
CN113453027A (en) | Live video and virtual makeup image processing method and device and electronic equipment | |
CN105335990B (en) | A kind of personal portrait material image generation method and device | |
US11321882B1 (en) | Digital makeup palette | |
CN112287817A (en) | Information acquisition method and device | |
CN113468372B (en) | Intelligent mirror and video recommendation method | |
CN110147511B (en) | Page processing method and device, electronic equipment and medium | |
CN116975337A (en) | Image searching method, device, electronic equipment and readable storage medium | |
CN117786176A (en) | Resource searching method, device, terminal equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | PB01 | Publication | |
 | SE01 | Entry into force of request for substantive examination | |
 | RJ01 | Rejection of invention patent application after publication | Application publication date: 20201215 |