WO2021120626A1 - Image processing method, terminal, and computer storage medium - Google Patents
Image processing method, terminal, and computer storage medium
- Publication number
- WO2021120626A1 (application PCT/CN2020/104638)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- portrait
- image
- processing
- feature
- processed
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/04—Context-preserving transformations, e.g. by using an importance map
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/77—Retouching; Inpainting; Scratch removal
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/166—Detection; Localisation; Normalisation using acquisition arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
- H04N23/631—Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
- H04N23/632—Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
Definitions
- This application relates to the field of terminals, and in particular to an image processing method, a terminal, and a computer storage medium.
- With the popularization of smart terminals such as smartphones for taking pictures, smart terminals have replaced cameras as the mainstream tool for ordinary users to take pictures.
- The beauty (beautification) function has gradually become a standard feature of camera applications in smart terminals.
- However, in beauty mode a smart terminal usually applies a unified set of beauty parameters to different users, or applies unified beauty parameters to the entire portrait.
- The beauty effect required for a particular feature may differ from that required for other features; if these effects are not distinguished, the user experience may be affected.
- The purpose of this application is to provide an image processing method, a terminal, and a computer storage medium that improve the user experience by performing personalized beauty processing according to the user's characteristics.
- This application first provides an image processing method applied to a terminal; the method includes:
- the features include at least one of the following: gender, face shape, eyes, race, and age.
- the performing of beautification processing on the region corresponding to the at least one portrait feature includes:
- the adopting of a beauty solution corresponding to the at least one portrait feature to perform beauty processing on the region corresponding to the at least one portrait feature includes:
- receiving a selection of the at least one beauty solution, and performing beauty processing on the region corresponding to the at least one portrait feature according to the selected beauty solution.
- the performing of beauty processing on the region corresponding to the at least one portrait feature and/or not performing beauty processing on the region corresponding to the at least one portrait feature includes at least one of the following processes:
- if the face shape is round, performing face-slimming beautification on the face region of the portrait in the image to be processed;
- if the eyes have single eyelids, performing eye beautification on the eye region of the portrait in the image to be processed;
- if the age is elderly, performing wrinkle-preserving processing on the facial region of the portrait in the image to be processed;
- if the age is young, performing lip-color rejuvenation processing on the lip region of the portrait in the image to be processed;
- the performing of beauty processing on the region corresponding to the at least one portrait feature and/or not performing beauty processing on the region corresponding to the at least one portrait feature includes:
- the performing of beauty reduction processing on the region corresponding to the at least one portrait feature includes:
- gradually reducing the corresponding beauty level when performing beauty processing from the edge to the center of the region corresponding to the at least one portrait feature; or
- restoring the state of the region corresponding to the at least one portrait feature in the image to be processed after the beautification processing to its state before the beautification processing.
- the performing feature recognition on the portrait in the to-be-processed image and acquiring at least two portrait features in the to-be-processed image includes:
- the neural network model is obtained by training on historical images and their corresponding historical portrait features.
- The terminal includes a processor and a memory for storing a program; when the program is executed by the processor, the processor implements the image processing method described above.
- The present application also provides a computer storage medium storing a computer program; when the computer program is executed by a processor, the image processing method described above is implemented.
- The image processing method, terminal, and computer storage medium of the present application acquire at least two portrait features obtained by feature recognition of a portrait in an image to be processed, and perform beauty processing on the region corresponding to at least one portrait feature and/or perform no beautification processing on the region corresponding to at least one portrait feature, so as to implement personalized beautification for the user according to the user's characteristics, thereby improving the user experience.
- The image processing method of the present application detects whether the image to be processed contains a preset feature identifier, and determines at least two portrait features in the image to be processed according to the detection result, which enables rapid and accurate feature recognition of the portrait in the image to be processed.
- The image processing method provided by the present application performs beautification processing on the region corresponding to a portrait feature through different protection strategies, so as to realize protection of that region.
- The image processing method provided by the present application displays at least one portrait-feature corresponding region, so that the user can select one or several of these regions for beauty processing; this is flexible to operate and further improves the user experience.
- The image processing method provided by this application is flexible and convenient, providing users with a variety of beauty solutions to choose from, and further improves the user experience.
- FIG. 1 is a schematic flowchart of an image processing method provided by an embodiment of this application.
- FIG. 2 is a schematic structural diagram of a terminal provided by an embodiment of this application.
- FIG. 3 is a schematic diagram of a specific flow of an image processing method provided by an embodiment of this application.
- FIG. 4 is a schematic diagram of a process of performing gender recognition based on face-region data in an embodiment of this application.
- FIG. 5 is a schematic diagram of a male beard area after beautification protection is performed in an embodiment of this application.
- FIG. 6 is a schematic diagram of a woman's eyebrow-mole and nose-decoration area after beautification protection is performed in an embodiment of this application.
- an image processing method provided by an embodiment of this application is applicable to the situation of performing beautification processing on an image.
- the image processing method can be performed by an image processing apparatus provided by an embodiment of the application.
- the image processing device may be implemented in software and/or hardware.
- the image processing device may be a terminal such as a smart phone, a personal digital assistant, or a tablet computer.
- the image processing method includes the following steps:
- Step S101: the terminal obtains an image to be processed.
- The image to be processed is a single-frame image.
- the image to be processed may be a preview image collected by the terminal through a camera device such as a camera.
- Taking the terminal as a mobile phone as an example, the mobile phone uses the preview image collected by the camera as the image to be processed after the camera application is opened and switched to beauty mode.
- The preview image is displayed on the shooting preview interface of the camera application of the mobile phone. That is, after receiving a camera start instruction, the terminal switches to beauty mode and takes the preview image collected by the camera as the image to be processed. If the mobile phone is shooting a person, an image including the person is displayed on the shooting preview interface of the camera application of the mobile phone.
- Step S102 The terminal performs feature recognition on the portrait in the image to be processed, and acquires at least two portrait features in the image to be processed;
- the terminal performs feature recognition on the portrait in the image to be processed obtained in step S101, so as to correspondingly acquire at least two portrait features in the image to be processed.
- the characteristics include at least one of the following: gender, face shape, eyes, race, and age.
- Race can refer to a classification based on skin color (for example the yellow, white, black, or brown race), or to a classification based on region (for example Asians, Europeans, Africans, or Americans).
- Age can refer to an age group, such as young people or the elderly.
- The terminal may first obtain the face image in the image to be processed by performing face detection on the image to be processed, and then implement feature recognition of the portrait in the image to be processed based on the face image.
- the recognition may be performed by adopting a feature recognition model established based on an artificial intelligence algorithm, or by detecting the identification features of the portrait.
- the terminal may collect the voice of the photographer, and obtain the gender recognition result of the voice of the photographer, so as to correspondingly realize the gender recognition of the portrait in the image to be processed.
- the terminal may also determine the portrait feature in the image to be processed according to the features input by the user.
- Step S103 The terminal performs beauty processing on the region corresponding to the at least one portrait feature and/or does not perform beauty processing on the region corresponding to the at least one portrait feature.
- the way of performing beautification processing on the image to be processed may be different, and correspondingly, the corresponding areas of the portrait feature that need to be beautified and not be beautified may be different.
- If the gender is male, the portrait-feature corresponding area may include a beard area; and/or, if the gender is female, the portrait-feature corresponding area may include an eyebrow-mole area and/or a nose-decoration area.
- For a round face, the face region of the portrait may be used as the corresponding region of the face shape; for single eyelids, the eye region of the portrait may be used as the corresponding region of the eyes.
- The performing of beauty processing on the region corresponding to the at least one portrait feature may be performing beauty reduction processing on that region when performing beauty processing on the image to be processed, or may be adopting a beauty solution corresponding to the at least one portrait feature to beautify that region. Understandably, if beauty reduction processing is performed on the region corresponding to the at least one portrait feature, then after the beauty processing of the to-be-processed image is completed, the effect in that region will differ from its previous appearance.
- Even so, the degree of change in the region corresponding to the at least one portrait feature will still be weaker than the degree of change in other regions; if a beauty solution corresponding to the portrait feature is used to beautify that feature region, then after the beauty processing of the image to be processed is completed, the effect of the region may better match the user's intent.
- The performing of beauty reduction processing on the region corresponding to the at least one portrait feature may include: gradually reducing the corresponding beauty level when performing beauty processing from the edge to the center of the region; or not performing beauty processing on the region; or restoring the state of the region in the image to be processed after the beauty processing to its state before the beauty processing.
- Taking dermabrasion (skin smoothing) as an example of the beautification treatment, the level of dermabrasion can be reduced when the male beard area is processed.
- Restoring the state of the region corresponding to the at least one portrait feature after the beautification processing to its state before the beautification processing can be understood as follows: after the beautification processing is performed on the image to be processed, the state of the region corresponding to the portrait feature is restored to its pre-beautification state, so that in effect no beautification is applied to that region, thereby protecting local details.
- If beauty processing is not performed on the region corresponding to the portrait feature, then after the beauty processing of the image to be processed is completed, the appearance of that region remains consistent with its previous appearance. In this way, by handling the region corresponding to the portrait feature with different protection strategies, protection of that region is realized.
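The edge-to-center reduction strategy described above can be sketched as follows. This is an illustrative reading only: the description does not fix a falloff curve, so a linear falloff over a rectangular region box (a stand-in for a real segmentation mask) is assumed.

```python
import numpy as np

def graduated_strength_mask(h, w, region):
    """Per-pixel beauty-strength mask for one protected region.

    Outside the region the strength is 1.0 (full beautification); inside,
    it falls off linearly from the region edge (1.0) to the region center
    (0.0), i.e. the beauty level is gradually reduced toward the center.
    `region` is a (top, left, bottom, right) box; a real implementation
    would use an arbitrary segmentation mask instead of a box.
    """
    mask = np.ones((h, w), dtype=np.float32)
    top, left, bottom, right = region
    cy, cx = (top + bottom) / 2.0, (left + right) / 2.0
    ry = max((bottom - top) / 2.0, 1e-6)
    rx = max((right - left) / 2.0, 1e-6)
    ys, xs = np.mgrid[top:bottom, left:right]
    # Normalized distance from the region center: 0 at center, ~1 at edge.
    dist = np.sqrt(((ys - cy) / ry) ** 2 + ((xs - cx) / rx) ** 2)
    mask[top:bottom, left:right] = np.clip(dist, 0.0, 1.0)
    return mask

def apply_beauty(image, beautified, mask):
    """Blend the fully beautified image with the original, weighted by mask."""
    m = mask[..., None] if image.ndim == 3 else mask
    return image * (1.0 - m) + beautified * m
```

With a zero original and an all-ones beautified image, the blended result stays at the original value in the region center and reaches the full beautified value outside the region.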
- The terminal acquires at least two portrait features obtained by feature recognition of the portrait in the image to be processed, and performs beauty processing on the region corresponding to at least one portrait feature and/or performs no beautification processing on the region corresponding to at least one portrait feature, so as to implement personalized beautification for the user according to the user's characteristics, thereby improving the user experience.
- The performing of feature recognition on the portrait in the to-be-processed image and acquiring at least two portrait features includes: inputting the to-be-processed image into a trained feature-recognition neural network model, and obtaining the at least two portrait features output by the model; the feature-recognition neural network model is obtained by training on historical images and the corresponding historical portrait features. Understandably, the terminal may pre-store a feature-recognition neural network model obtained by training on historical images and corresponding historical portrait features using a neural network algorithm; when the image to be processed is used as the input of this model, its output is the corresponding feature recognition result.
- Inputting the image to be processed into the trained feature-recognition neural network model may mean inputting the face data in the image to be processed into the model.
- For the establishment and training of the feature-recognition neural network model, reference can be made to the prior art, which will not be repeated here.
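The description leaves the network architecture to the prior art. Purely as an interface sketch, a multi-head classifier with one softmax head per portrait feature could look as follows; the layer sizes, head names, and the random stand-in weights are assumptions, standing in for parameters that would be learned from historical images and their feature labels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical multi-head classifier: one shared embedding, one softmax
# head per portrait feature (gender, face shape, eyes, race, age). The
# class counts per head are illustrative only.
HEADS = {"gender": 2, "face_shape": 3, "eyes": 2, "race": 4, "age": 3}
W_shared = rng.normal(size=(64 * 64, 16))  # stand-in for trained weights
W_heads = {name: rng.normal(size=(16, n)) for name, n in HEADS.items()}

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def recognize_features(face_crop):
    """Return a class index per portrait feature for a 64x64 face crop."""
    h = np.tanh(face_crop.reshape(-1) @ W_shared)  # shared embedding
    return {name: int(np.argmax(softmax(h @ W))) for name, W in W_heads.items()}
```

A call such as `recognize_features(face_crop)` then yields one predicted class per feature, which is the multi-feature output the method requires.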
- the terminal performing feature recognition on the portrait in the image to be processed, and acquiring at least two portrait features in the image to be processed includes:
- Detecting whether the portrait in the image to be processed includes a preset feature identifier, and obtaining a corresponding detection result
- the terminal detects whether the portrait in the image to be processed includes a preset feature identifier, and obtains a corresponding detection result, so as to obtain at least two portrait features in the image to be processed according to the detection result.
- A preset feature identifier is an identifier that can mark a certain feature of the user. For example, taking the feature of gender: for males, the gender identifier can be a beard, an Adam's apple, and so on; for females, the gender identifier can be an eyebrow mole, or facial accessories such as nose decorations and veils.
- When a male identifier such as a beard is detected, the gender corresponding to the portrait in the image to be processed can be determined to be male; when a female identifier such as an eyebrow mole or nose decoration is detected, the gender corresponding to the portrait can be determined to be female.
- Taking the feature of age as an example, the elderly and young people can be distinguished by detecting whether there are wrinkles on the face, and so on. In this way, by detecting whether the image to be processed contains a preset feature identifier and determining at least two portrait features according to the detection result, the features of the person in the image to be processed can be identified quickly and accurately.
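The identifier-to-gender logic described above can be sketched as a simple rule table. The identifier names follow the examples in the text (beard, Adam's apple, eyebrow mole, nose decoration, veil); the upstream detectors that produce them are assumed to exist.

```python
# Identifier sets taken from the examples in the description.
MALE_IDENTIFIERS = {"beard", "adams_apple"}
FEMALE_IDENTIFIERS = {"eyebrow_mole", "nose_decoration", "veil"}

def gender_from_identifiers(identifiers):
    """Map detected feature identifiers to a gender, or None if ambiguous.

    `identifiers` is the set of identifier names detected in the image;
    conflicting or absent evidence yields None so a fallback recognizer
    (e.g. the neural network model) can decide instead.
    """
    ids = set(identifiers)
    male = bool(ids & MALE_IDENTIFIERS)
    female = bool(ids & FEMALE_IDENTIFIERS)
    if male and not female:
        return "male"
    if female and not male:
        return "female"
    return None
```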
- Before the terminal performs feature recognition on the portrait in the image to be processed and acquires at least two portrait features, the method may further include: detecting whether the number of portraits in the image to be processed satisfies a preset quantity condition; if it does, performing the step of feature recognition on the portrait in the image to be processed and acquiring at least two portrait features.
- the preset quantity condition may be one or more portraits.
- each portrait may contain different features, so that different beautification processing solutions need to be adopted for different portraits.
- the image to be processed may contain different male and female faces at the same time, and different portrait features are acquired for faces of different genders, and a corresponding beauty processing method is adopted for each region corresponding to the portrait feature.
- the terminal may pre-store the corresponding relationship between the different portrait features and the corresponding beauty solutions.
- For example, if the eyes are Danfeng (phoenix) eyes, the corresponding beauty solution may be a Danfeng-eye beauty treatment; if the race is Asian, the corresponding beauty solution may be a yellow-skin beauty treatment, and so on.
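The pre-stored correspondence between portrait features and beauty solutions could be as simple as a lookup table. The keys and solution names below are illustrative stand-ins assembled from the examples in the description, not the patent's actual data structure.

```python
# Hypothetical feature -> beauty-solution correspondence table.
BEAUTY_SOLUTIONS = {
    ("face_shape", "round"): "face-slimming treatment",
    ("eyes", "single_eyelid"): "eye beautification",
    ("eyes", "danfeng"): "Danfeng (phoenix) eye treatment",
    ("race", "asian"): "yellow-skin beauty treatment",
    ("age", "elderly"): "wrinkle-preserving processing",
    ("age", "young"): "lip-color rejuvenation",
}

def solutions_for(features):
    """Return the stored beauty solution for each recognized feature.

    Features with no stored solution (e.g. gender, which drives region
    protection rather than a solution here) are simply skipped.
    """
    return {f: BEAUTY_SOLUTIONS[f] for f in features if f in BEAUTY_SOLUTIONS}
```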
- The performing of beauty processing on the region corresponding to the at least one portrait feature and/or not performing beauty processing on the region corresponding to the at least one portrait feature includes at least one of the following: if the face shape is round, performing face-slimming beautification on the face region of the portrait in the image to be processed; if the eyes have single eyelids, performing eye beautification on the eye region of the portrait; if the age is elderly, performing wrinkle-preserving processing on the facial region of the portrait; if the age is young, performing lip-color rejuvenation on the lip region of the portrait; if the gender is male, performing no beautification processing, or a corresponding beautification, on the beard area of the portrait; if the gender is female, performing no beautification processing, or a corresponding beautification, on the eyebrow-mole area and/or the nose-decoration area of the portrait.
- the following takes the feature as gender as an example to describe in detail the process of performing beauty processing according to the feature recognition result.
- The manner of performing beauty processing on the region corresponding to the at least one portrait feature and/or not performing beauty processing on that region may differ by gender.
- Performing the processing may include: performing feature-region recognition on the face in the image to be processed to obtain at least one facial feature region, where the facial feature regions include at least one region of the five sense organs; determining the gender-corresponding area in the face according to the gender and the at least one facial feature region; and protecting the gender-corresponding area when performing beauty processing on the image to be processed.
- The terminal may use an existing facial-feature-region recognition method to perform feature-region recognition on the face in the image to be processed, so as to obtain at least one facial feature region; the facial feature regions include at least one region of the five sense organs.
- The facial feature regions may include five-sense-organ regions such as the eyes, mouth, nose, ears, and eyebrows, as well as non-five-sense-organ regions such as the chin and forehead. Since the position of the region corresponding to a portrait feature is relatively fixed and can be determined from the facial feature regions, the area corresponding to the portrait feature in the face can be determined from the feature-recognition result and the at least one facial feature region.
- After determining the feature regions of the nose and mouth, that is, the positions of the nose and mouth in the face, the terminal can obtain the corresponding beard area from the area below the mouth and the area between the upper lip and the base of the nose.
- A beard is characteristically dark (for example black), i.e. lower in brightness than the skin area. Therefore, within the area below the mouth and the area between the upper lip and the base of the nose, pixels whose brightness is lower than the average pixel value of the face region by more than a preset first threshold can be taken as the pixels representing the beard, thereby obtaining the beard area.
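The brightness-threshold rule above can be sketched as follows; the rectangular candidate region is a hypothetical stand-in for the area derived from the nose and mouth positions, and the threshold value is the "preset first threshold" left unspecified in the text.

```python
import numpy as np

def beard_mask(gray_face, candidate_region, threshold):
    """Detect beard pixels per the described rule: within the candidate
    region (below the mouth, and between the upper lip and the base of
    the nose), a pixel counts as beard when its brightness is lower than
    the face-region mean by more than `threshold`.

    gray_face: 2-D array of face-region brightness values.
    candidate_region: (top, left, bottom, right) box (hypothetical layout;
    a real implementation derives it from the nose/mouth landmarks).
    """
    face_mean = gray_face.mean()
    top, left, bottom, right = candidate_region
    mask = np.zeros_like(gray_face, dtype=bool)
    sub = gray_face[top:bottom, left:right]
    # Beard pixels are darker than the face average by more than threshold.
    mask[top:bottom, left:right] = (face_mean - sub) > threshold
    return mask
```

On a mostly bright synthetic face, only pixels substantially darker than the mean (and inside the candidate box) end up in the mask, which is exactly the protection region then handed to the beauty pipeline.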
- the terminal performing beauty processing on the area corresponding to the at least one portrait feature and/or not performing beauty processing on the area corresponding to the at least one portrait feature includes:
- The terminal may determine multiple candidate portrait-feature corresponding regions according to the feature-recognition result, but the user may not want every candidate region to be treated as a target region for beautification; the user may need only one or a few of them. Therefore, after determining at least one portrait-feature corresponding region, the terminal can display the at least one region, and the user selects one or several of them as the target regions for beautification processing.
- When the terminal displays the at least one portrait-feature corresponding area, it can simultaneously put each area into a selectable state, for example by highlighting the border of each portrait-feature corresponding area.
- The terminal displays at least one portrait-feature corresponding region, and the user selects one or several of these regions for beautification processing; this is flexible to operate and further improves the user experience.
- said adopting the beauty solution corresponding to the at least one portrait feature to perform beauty processing on the region corresponding to the at least one portrait feature includes:
- a selection of the at least one beauty solution is received, and beauty processing is performed on the region corresponding to the at least one portrait feature according to the selected beauty solution.
- The terminal may store at least one beauty solution. Accordingly, when the terminal needs to perform beauty processing on the region corresponding to a certain portrait feature, it can output the at least one beauty solution corresponding to that portrait feature, and the user selects one or more solutions from it to apply to the region. For example, taking the feature as face shape, with a round face, the terminal may recommend various beauty solutions such as face-slimming treatment, freckle-removal treatment, and whitening treatment, and the user can choose the corresponding beauty solution as needed. In this way, by providing users with a variety of beauty solutions to choose from, the method is flexible and convenient, and the user experience is further improved.
- the method further includes:
- the state of the target area in the image to be processed after the beautification processing is restored to the state before the beautification processing.
- The terminal may receive the user's input on the to-be-processed image after the beautification processing, so as to determine the target area selected by the user on the beautified image, and then restore the state of the target area in the to-be-processed image to its state before the beautification processing.
- after beautifying the image to be processed, the terminal may display each area of the portrait, for example marking each area with a frame or by highlighting, to facilitate the user's selection.
- the terminal may also determine the area selected by a sliding operation trajectory input by the user as the target area. In this way, the terminal determines, according to the user's selection, the target area that does not require beautification processing, so as to protect that area; this improves flexibility and further improves the user experience.
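The restore operation amounts to copying pixels from the pre-beautification image back into the user-selected area. A minimal NumPy sketch under the assumption of a rectangular selection (the function name and region format are illustrative, not from the application):

```python
import numpy as np

def restore_region(original, beautified, region):
    """Restore a user-selected region of the beautified image to its
    pre-beautification state by copying pixels from the original image.

    region is (x, y, width, height) in pixel coordinates.
    """
    x, y, w, h = region
    result = beautified.copy()  # leave the beautified image itself untouched
    result[y:y + h, x:x + w] = original[y:y + h, x:x + w]
    return result

# Toy example: a 4x4 grayscale "image" where beautification set everything to 255.
original = np.zeros((4, 4), dtype=np.uint8)
beautified = np.full((4, 4), 255, dtype=np.uint8)
restored = restore_region(original, beautified, (1, 1, 2, 2))
print(restored[1, 1], restored[0, 0])  # selected pixel restored, others kept
```

A free-form sliding-trajectory selection would use a boolean mask instead of a rectangle, but the copy-back principle is the same.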
- the terminal includes: a processor 110 and a memory 111 for storing a computer program that can run on the processor 110;
- the processor 110 illustrated in FIG. 2 is not used to indicate that the number of processors 110 is one, but only to indicate the position of the processor 110 relative to other devices; in practice, the number of processors 110 may be one or more.
- similarly, the memory 111 illustrated in FIG. 2 has the same meaning, i.e., it only indicates the position of the memory 111 relative to other devices,
- and the number of memories 111 may be one or more.
- the processor 110 is configured to implement the image processing method applied to the foregoing terminal when running the computer program.
- the terminal may further include: at least one network interface 112.
- the various components in the terminal are coupled together through the bus system 113.
- the bus system 113 is used to implement connection and communication between these components.
- in addition to a data bus, the bus system 113 also includes a power bus, a control bus, and a status signal bus.
- for clarity, however, the various buses are all labeled as the bus system 113 in FIG. 2.
- the memory 111 may be a volatile memory or a non-volatile memory, and may also include both volatile and non-volatile memory.
- the non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a ferromagnetic random access memory (FRAM), flash memory, magnetic surface memory, an optical disc, or a compact disc read-only memory (CD-ROM); the magnetic surface memory may be disk storage or tape storage.
- the volatile memory may be a random access memory (RAM, Random Access Memory), which is used as an external cache.
- many forms of RAM are available, such as static random access memory (SRAM), synchronous static random access memory (SSRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), enhanced synchronous dynamic random access memory (ESDRAM), SyncLink dynamic random access memory (SLDRAM), and direct Rambus random access memory (DRRAM).
- the memory 111 in the embodiment of the present application is used to store various types of data to support the operation of the terminal.
- Examples of these data include: any computer programs used to operate on the terminal, such as operating systems and applications; contact data; phone book data; messages; pictures; videos, etc.
- the operating system contains various system programs, such as a framework layer, a core library layer, and a driver layer, which are used to implement various basic services and process hardware-based tasks.
- Applications can include various applications, such as media players (Media Player), browser (Browser), etc., used to implement various application services.
- the program that implements the method of the embodiment of the present application may be included in the application program.
- this embodiment also provides a computer storage medium in which a computer program is stored.
- the computer storage medium may be a memory such as a ferromagnetic random access memory (FRAM), a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), flash memory, magnetic surface memory, an optical disc, or a compact disc read-only memory (CD-ROM); it may also be any of various devices including one or any combination of the above memories, such as a mobile phone, computer, tablet device, or personal digital assistant.
- FIG. 3 is a schematic diagram of a specific flow of an image processing method provided by an embodiment of the application, including the following steps:
- Step S201 Switch to the beauty mode
- the terminal correspondingly switches the photographing mode of the camera application to the beauty mode.
- Step S202 Read the data of the preview frame image
- the terminal reads the data of the preview frame image displayed on the preview interface of the camera application.
- Step S203 face detection
- the terminal performs face detection on the preview frame image to obtain the number of faces.
- Step S204 determine whether the number of faces is 1; if yes, go to step S205, otherwise go to step S208;
- the terminal determines whether the number of faces in the preview frame image is 1; if so, step S205 is executed, otherwise step S208 is executed.
- Step S205 crop the face region data
- when the terminal detects that the number of faces in the preview frame image is 1, it crops the face region, converts it into face data in a specified format, and then extracts the feature points of the face, such as the main features of the eyes, nose, mouth, chin, and ears.
- Step S206 the gender recognition module performs gender recognition according to the face area data
- a gender recognition module may be provided in the terminal; the terminal inputs the face region data obtained in step S205 into the gender recognition module, which can be regarded as a neural network model.
- the process of performing gender recognition based on face region data in this embodiment can be seen in FIG. 4, which includes the following steps:
- Step S301 Input the formatted face image data
- Step S302 convolution calculation
- a convolution computation is performed for one convolutional layer of the neural network model at a time. This embodiment takes a neural network model containing 4 convolutional layers as an example; only one convolutional layer is computed at a time, i.e., the first convolutional layer of the neural network model performs the convolution computation on the formatted face image data.
- Step S303 batch standardization
- batch normalization processing is performed on the features obtained by convolution in step S302, so that the obtained feature data distribution conforms to the normal distribution, thereby accelerating the convergence speed of the model and increasing the generalization ability of the model.
- Step S304 feature activation
- Step S305 Maximum pooling
- after activation and pooling, the resulting new feature map is used as the input data of the next convolutional layer.
- Step S306 global average pooling
- after the 4 layers of convolution and pooling, global average pooling is applied to the resulting feature maps in place of a fully connected operation, so as to reduce the number of model parameters.
- Step S307 Output the probability array of gender classification.
- the probability array of gender classification is obtained after global average pooling, where the index with a larger array value is the final gender result of the gender recognition module, for example, the array index of male is 0, and the array index of female is 1.
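Steps S306 and S307 can be sketched with NumPy: global average pooling collapses each feature map to a single value, and the index of the larger probability gives the gender. This is an illustrative sketch only; the real model's layer shapes and final projection are not specified in the application beyond the 4 convolutional layers, so the pooled values are used directly as stand-in logits here:

```python
import numpy as np

def global_average_pool(feature_maps):
    """Collapse (height, width, channels) feature maps to one value per channel."""
    return feature_maps.mean(axis=(0, 1))

def classify_gender(feature_maps):
    """Map pooled features to a 2-class probability array via softmax.

    Index 0 = male, index 1 = female, as in the embodiment.
    """
    pooled = global_average_pool(feature_maps)
    # Assumption: the network has already reduced the features to 2 channels,
    # so the two pooled values act as the classification logits.
    logits = pooled[:2]
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    probs = exp / exp.sum()
    return probs, int(np.argmax(probs))

maps = np.zeros((8, 8, 2))
maps[..., 1] = 1.0  # channel 1 responds more strongly
probs, gender = classify_gender(maps)
print(gender)
```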
- Step S207 The gender recognition module delivers the gender recognition result and facial feature points to the beauty module
- the gender recognition module sends the gender type and facial feature point information of the face obtained through neural network algorithm calculation to the beauty module.
- a beautification module may be provided in the terminal to perform beautification processing on the image.
- Step S208 the beautification module adapts the corresponding beautification parameters
- when the number of detected faces is 0 or greater than 1, the beautification module selects a set of default beautification parameters to beautify the faces.
- when the number of detected faces is 1, the beautification module divides out the protection areas for the corresponding gender according to the gender information obtained in step S207 combined with the facial feature point information;
- when processing a protection area, the beautification module selects the corresponding beautification parameters to protect that area.
- Step S209 the beautification module processes the image according to the adapted beautification parameters.
- when the number of detected faces is 0 or greater than 1, the beautification module beautifies the faces according to the selected default beautification parameters.
- when the number of detected faces is 1 and the beautification module processes a protection area, it adopts a gradual processing scheme: it smooths the edges of the protection area and lightens the processing applied to the rest of the area, so as to protect local details.
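The gradual processing scheme can be approximated by alpha-blending the beautified and original images with a feathered mask that is strongest at the protection area's core and fades toward its edges. A hedged sketch: the application does not give the exact falloff, so a simple linear ramp over a `feather` width (an assumed parameter) is used:

```python
import numpy as np

def feathered_mask(shape, region, feather):
    """Build a protection mask in [0, 1]: 1 at the region's core (keep the
    original pixels fully), ramping down to 0 over `feather` pixels toward
    the region's edge."""
    x, y, w, h = region
    mask = np.zeros(shape, dtype=np.float32)
    for row in range(y, y + h):
        for col in range(x, x + w):
            # Distance to the nearest edge of the rectangular region.
            d = min(row - y, y + h - 1 - row, col - x, x + w - 1 - col)
            mask[row, col] = min(1.0, (d + 1) / max(feather, 1))
    return mask

def protect_region(original, beautified, region, feather=3):
    """Blend: core pixels come from the original, edge pixels are a smooth
    mix, everything outside the region stays fully beautified."""
    m = feathered_mask(original.shape, region, feather)
    return (m * original + (1.0 - m) * beautified).astype(original.dtype)

original = np.zeros((10, 10), dtype=np.float32)
beautified = np.full((10, 10), 100.0, dtype=np.float32)
out = protect_region(original, beautified, region=(2, 2, 6, 6), feather=3)
print(out[4, 4], out[0, 0])  # core pixel vs. pixel outside the region
```

A production implementation would use a vectorized distance transform and a free-form mask rather than a rectangle, but the blend itself is the same linear interpolation.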
- FIG. 5 is a schematic diagram of a man's beard region after beautification protection (that is, skin-smoothing is reduced in that region).
- FIG. 6 shows a woman's between-eyebrow mole and nose-decoration regions after the same protection.
- the screen of the terminal can display the image obtained by the beautification processing in real time, so that the user can see the effect of the beautification processing in real time.
- based on the deep-learning TensorFlow framework, a gender recognition algorithm model is trained on a large number of face image samples;
- the TensorFlow toco tool converts the server-side model into a TFLite model, which is then ported to the Android side to achieve offline real-time gender recognition on the mobile terminal.
- beard protection is added to the beautification of men,
- and between-eyebrow mole and nose-decoration protection is added to the beautification of women,
- thereby achieving gender-differentiated smart beautification. It can be summarized that real-time gender recognition can be realized offline on the mobile terminal, and the beautification parameters of the corresponding gender are adapted to beautify the face.
- with the image processing method, terminal, and computer storage medium of the present application, at least two portrait features obtained by feature recognition of a portrait in an image to be processed are acquired, and beautification processing is performed on the region corresponding to the at least one portrait feature and/or no beautification processing is performed on the region corresponding to the at least one portrait feature, so as to implement personalized beautification for the user according to the user's features, thereby improving the user experience.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Signal Processing (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Processing (AREA)
Abstract
The present application relates to an image processing method, a terminal, and a computer storage medium. The image processing method is applied to a terminal and includes: acquiring an image to be processed; performing feature recognition on a portrait in the image to be processed to acquire at least two portrait features in the image to be processed; and performing beautification processing on a region corresponding to the at least one portrait feature and/or performing no beautification processing on a region corresponding to the at least one portrait feature. With the image processing method, terminal, and computer storage medium provided by the present application, the terminal acquires at least two portrait features obtained by feature recognition of the portrait in the image to be processed, and performs beautification processing on the region corresponding to the at least one portrait feature and/or performs no beautification processing on the region corresponding to the at least one portrait feature, thereby implementing personalized beautification for the user according to the user's features and improving the user experience.
Description
This patent application claims priority to Chinese Patent Application No. 201911296529.1, filed on December 16, 2019 by the applicant Shanghai Transsion Information Technology Co., Ltd. and entitled "Image Processing Method, Terminal, and Computer Storage Medium", the entire contents of which are incorporated herein by reference.
The present application relates to the field of terminals, and in particular to an image processing method, a terminal, and a computer storage medium.
With the rapid development of mobile communication technology and the rapid popularization of smart terminals, smart terminals such as smartphones are used for photography in more and more scenarios; to some extent, smart terminals have replaced cameras as the mainstream photography tool for ordinary users. To meet users' demand for high-quality images, the beautification function has gradually become a standard feature of camera applications in smart terminals. In the related art, in beautification mode a smart terminal usually applies uniform beautification parameters to different users, or applies uniform beautification parameters to the entire person.
For different users, or for different features of the same user, the beautification effect required for a particular feature may differ from that of other features; if no distinction is made between beautification effects, the user experience may suffer.
The purpose of the present application is to provide an image processing method, a terminal, and a computer storage medium that improve the user experience by performing personalized beautification on a user according to the user's features.
The present application first provides an image processing method applied to a terminal, the method including:
acquiring an image to be processed;
performing feature recognition on a portrait in the image to be processed to acquire at least two portrait features in the image to be processed;
performing beautification processing on a region corresponding to the at least one portrait feature and/or performing no beautification processing on a region corresponding to the at least one portrait feature.
Further, the features include at least one of the following: gender, face shape, eyes, ethnicity, and age.
Further, performing beautification processing on the region corresponding to the at least one portrait feature includes:
when performing beautification processing on the image to be processed, performing beautification-reduction processing on the region corresponding to the at least one portrait feature and/or performing beautification processing on the region using a beautification scheme corresponding to the at least one portrait feature.
Further, performing beautification processing on the region corresponding to the at least one portrait feature using a beautification scheme corresponding to the at least one portrait feature includes:
acquiring and outputting at least one beautification scheme corresponding to the at least one portrait feature;
receiving a selection of the at least one beautification scheme, and performing beautification processing on the region corresponding to the at least one portrait feature according to the selected scheme.
Further, performing beautification processing on the region corresponding to the at least one portrait feature and/or performing no beautification processing on the region includes at least one of the following:
if the face shape is round, performing face-slimming beautification on the face region of the portrait in the image to be processed;
if the eyes have single eyelids, performing phoenix-eye beautification on the eye region of the portrait in the image to be processed;
if the age is elderly, performing wrinkle-preserving processing on the facial region of the portrait in the image to be processed;
if the age is young, performing lip-color rejuvenation processing on the lip region of the portrait in the image to be processed;
if the gender is male, performing no beautification processing, or corresponding beautification processing, on the beard region of the portrait in the image to be processed;
if the gender is female, performing no beautification processing, or corresponding beautification processing, on the between-eyebrow mole region and/or nose-decoration region of the portrait in the image to be processed.
Further, performing beautification processing on the region corresponding to the at least one portrait feature and/or performing no beautification processing on the region includes:
displaying the region corresponding to the at least one portrait feature;
receiving a user's selection operation, and performing beautification processing and/or no beautification processing on the region corresponding to the portrait feature selected by the selection operation.
Further, performing beautification-reduction processing on the region corresponding to the at least one portrait feature includes:
gradually reducing the beautification level from the edge to the center of the region corresponding to the at least one portrait feature;
or, performing no beautification processing on the region corresponding to the at least one portrait feature; or,
restoring the state of the region corresponding to the at least one portrait feature in the beautified image to be processed to its state before beautification.
Further, performing feature recognition on the portrait in the image to be processed to acquire at least two portrait features in the image to be processed includes:
inputting the image to be processed into a trained portrait-feature-recognition neural network model, and obtaining the at least two portrait features in the image to be processed output by the trained model; wherein the portrait-feature-recognition neural network model is obtained by training on historical images and corresponding historical portrait features.
The present application further provides a terminal including a processor and a memory for storing a program; when the program is executed by the processor, the processor implements the image processing method described above.
The present application further provides a computer storage medium storing a computer program which, when executed by a processor, implements the image processing method described above.
With the image processing method, terminal, and computer storage medium of the present application, at least two portrait features obtained by feature recognition of the portrait in the image to be processed are acquired, and beautification processing is performed on the region corresponding to the at least one portrait feature and/or no beautification processing is performed on that region, so as to implement personalized beautification for the user according to the user's features and thereby improve the user experience.
Further, the image processing method of the present application detects whether the image to be processed contains a preset feature identifier and determines at least two portrait features in the image to be processed according to the detection result, enabling fast and accurate feature recognition of the portrait in the image to be processed.
Furthermore, the image processing method provided by the present application performs beautification processing on the regions corresponding to portrait features using different protection strategies, thereby protecting those regions.
Moreover, the image processing method provided by the present application displays at least one region corresponding to a portrait feature so that the user can select one or several such regions for beautification processing; this is flexible to operate and further improves the user experience.
In addition, the image processing method provided by the present application offers the user a variety of beautification schemes to choose from, which is flexible and convenient and further improves the user experience.
FIG. 1 is a schematic flowchart of an image processing method provided by an embodiment of the present application;
FIG. 2 is a schematic structural diagram of a terminal provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of a specific flow of an image processing method provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of the process of gender recognition based on face region data in an embodiment of the present application;
FIG. 5 is a schematic diagram of a man's beard region after beautification protection in an embodiment of the present application;
FIG. 6 is a schematic diagram of a woman's between-eyebrow mole and nose-decoration regions after beautification protection in an embodiment of the present application.
Best Mode of Carrying Out the Present Application
The technical solutions of the present application are described in further detail below with reference to the accompanying drawings and specific embodiments. Unless otherwise defined, all technical and scientific terms used herein have the same meanings as commonly understood by those skilled in the technical field of the present application. The terms used in the specification of the present application are for the purpose of describing specific embodiments only and are not intended to limit the present application. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Referring to FIG. 1, an image processing method provided by an embodiment of the present application is applicable to performing beautification processing on an image. The image processing method may be executed by an image processing apparatus provided by an embodiment of the present application, which may be implemented in software and/or hardware; in specific applications, the image processing apparatus may be a terminal such as a smartphone, a personal digital assistant, or a tablet computer. In this embodiment, taking the terminal as the executor of the image processing method as an example, the method includes the following steps:
Step S101: the terminal acquires an image to be processed.
Here, the image to be processed is a single-frame image, and may be a preview image captured by the terminal through a camera device such as a camera. Taking a mobile phone as the terminal as an example, after the camera application is started and switched to beautification mode, the phone takes the preview image captured by the camera as the image to be processed, and the preview image is displayed on the shooting preview interface of the phone's camera application. That is, after receiving a camera start instruction, the terminal switches to beautification mode and takes the preview image captured by the camera as the image to be processed. If the phone is photographing one person, an image containing that person is displayed on the shooting preview interface of the phone's camera application.
Step S102: the terminal performs feature recognition on the portrait in the image to be processed to acquire at least two portrait features in the image to be processed.
Specifically, the terminal performs feature recognition on the portrait in the image to be processed obtained in step S101, so as to acquire at least two portrait features in the image to be processed.
In this embodiment, the features include at least one of the following: gender, face shape, eyes, ethnicity, and age. Here, ethnicity may refer to a classification by skin color, specifically yellow, white, black, and brown, or a classification by region, specifically Asian, European, African, American, and so on; age may refer to an age category, such as young or elderly. Here, the terminal may first perform face detection on the image to be processed to obtain a face image therein, and then perform feature recognition on the portrait in the image to be processed based on the face image. It should be noted that the terminal's feature recognition of the portrait may be performed using a feature recognition model built on an artificial-intelligence algorithm, or by detecting identifying features of the portrait. In addition, for gender recognition, the terminal may capture the photographer's voice and obtain a gender recognition result for that voice, thereby realizing gender recognition of the portrait in the image to be processed. In practical applications, the terminal may also determine the portrait features in the image to be processed according to features input by the user.
Step S103: the terminal performs beautification processing on the region corresponding to the at least one portrait feature and/or performs no beautification processing on the region corresponding to the at least one portrait feature.
Here, depending on the portrait features, the manner of beautification processing of the image to be processed may differ, and accordingly the regions that require beautification processing and those that do not may also differ. Regarding gender: if the gender is male, the region corresponding to the portrait feature may include a beard region; and/or, if the gender is female, the region corresponding to the portrait feature may include a between-eyebrow mole region and/or a nose-decoration region. For example, for a round face shape, the face region of the portrait may serve as the region corresponding to the face-shape feature; for single-eyelid eyes, the eye region of the portrait may serve as the region corresponding to the eye feature. In one implementation, performing beautification processing on the region corresponding to the at least one portrait feature may be performing beautification-reduction processing on that region while beautifying the image to be processed, or performing beautification processing on that region using a beautification scheme corresponding to the at least one portrait feature. Understandably, if beautification-reduction processing is performed on the region, then after beautification of the image to be processed is completed, the effect in that region will differ from its original appearance, but the degree of change will still be weaker than in other regions; if the region is beautified using a scheme corresponding to the portrait feature, then after beautification is completed, the effect in that region may better match the user's intent. It should be noted that performing beautification-reduction processing on the region may include: gradually reducing the beautification level from the edge to the center of the region; or performing no beautification processing on the region; or restoring the state of the region in the beautified image to its state before beautification. Taking skin-smoothing as an example of the beautification processing, when smoothing a man's beard region, the smoothing level may be reduced. Restoring the state of the region in the beautified image to its state before beautification can be understood as: after beautifying the image to be processed, the state of the region in the resulting beautified image is restored to its pre-beautification state, so that the region is effectively not beautified, thereby protecting local details. Here, if the region corresponding to the portrait feature is not beautified, then after beautification of the image to be processed is completed, the effect in that region remains consistent with its original appearance. In this way, by applying different protection strategies when beautifying the regions corresponding to portrait features, those regions are protected.
In summary, in the image processing method provided by the above embodiment, the terminal acquires at least two portrait features obtained by feature recognition of the portrait in the image to be processed, and performs beautification processing on the region corresponding to the at least one portrait feature and/or performs no beautification processing on that region, thereby implementing personalized beautification for the user according to the user's features and improving the user experience.
Embodiments of the Present Application
Based on the foregoing embodiments, in one implementation, performing feature recognition on the portrait in the image to be processed to acquire at least two portrait features includes: inputting the image to be processed into a trained feature-recognition neural network model, and obtaining the at least two portrait features in the image to be processed output by the trained model; wherein the feature-recognition neural network model is obtained by training on historical images and corresponding historical portrait features. Understandably, the terminal may pre-store a feature-recognition neural network model obtained by training on historical images and corresponding historical portrait features with a neural network algorithm; when the image to be processed is fed to the model as input, the model's output is the corresponding portrait-feature recognition result. It should be noted that inputting the image to be processed into the trained model may mean inputting the face data in the image to be processed into the trained model. The construction and training of the feature-recognition neural network model can follow the prior art and are not repeated here.
In one implementation, the terminal performing feature recognition on the portrait in the image to be processed to acquire at least two portrait features includes:
detecting whether the portrait in the image to be processed contains a preset feature identifier, and obtaining a corresponding detection result;
acquiring the at least two portrait features in the image to be processed according to the detection result.
Specifically, the terminal detects whether the portrait in the image to be processed contains a preset feature identifier, obtains the corresponding detection result, and thereby acquires the at least two portrait features according to the detection result. Here, a preset feature identifier is an identifier that can mark a particular feature of the user. For example, taking gender as the feature: for men, the gender identifier may be a beard, an Adam's apple, and so on; for women, it may be a mole between the eyebrows, or facial decorations such as nose ornaments and veils. Therefore, when a male identifier such as a beard or Adam's apple is detected in the image to be processed, the gender of the portrait can be determined to be male; when a female identifier such as a between-eyebrow mole or nose ornament is detected, the gender can be determined to be female. Taking age as the feature: the elderly and the young can be distinguished by detecting whether the face has wrinkles, among other cues. In this way, by detecting whether the image to be processed contains a preset feature identifier and determining at least two portrait features according to the detection result, fast and accurate feature recognition of the portrait can be achieved.
In one implementation, before the terminal performs feature recognition on the portrait in the image to be processed to acquire at least two portrait features, the method may further include: detecting whether the number of portraits in the image to be processed satisfies a preset number condition; and if so, executing the step of performing feature recognition on the portrait to acquire the at least two portrait features. Here, the preset number condition may be that the number of portraits is one or more. When the image to be processed contains two portraits, each portrait may have different features, so different beautification schemes are needed for different portraits. For example, the image to be processed may contain both male and female faces; different portrait features are acquired for faces of different genders, and the corresponding beautification method is applied to the region corresponding to each portrait feature.
It should be noted that, for different portrait features, the terminal may pre-store correspondences between portrait features and beautification schemes. For example, if the eyes have single eyelids, the corresponding scheme may be phoenix-eye beautification; if the ethnicity is Asian, the corresponding scheme may be yellow-skin-tone beautification, and so on. In a specific embodiment, performing beautification processing on the region corresponding to the at least one portrait feature and/or performing no beautification processing on the region includes at least one of the following: if the face shape is round, performing face-slimming beautification on the face region of the portrait in the image to be processed; if the eyes have single eyelids, performing phoenix-eye beautification on the eye region; if the age is elderly, performing wrinkle-preserving processing on the facial region; if the age is young, performing lip-color rejuvenation on the lip region; if the gender is male, performing no beautification processing, or corresponding beautification processing, on the beard region; if the gender is female, performing no beautification processing, or corresponding beautification processing, on the between-eyebrow mole region and/or nose-decoration region.
The following takes gender as the feature to describe in detail the beautification process based on the feature recognition result. Performing beautification processing on the region corresponding to the at least one portrait feature and/or performing no beautification processing on the region may be: performing feature-region recognition on the face in the image to be processed to acquire at least one facial feature region in the face, wherein the facial feature region includes at least one region of the five sense organs; determining the gender-corresponding region in the face according to the gender and the at least one facial feature region; and protecting the gender-corresponding region when beautifying the image to be processed. Here, the terminal may use an existing facial-feature-region recognition method to recognize feature regions of the face in the image to be processed, thereby acquiring at least one facial feature region, which includes at least one region of the five sense organs. It should be noted that the facial feature regions may include regions of the five sense organs, such as the eyes, mouth, nose, ears, and eyebrows, as well as regions outside the five sense organs, such as the chin and forehead. Since the position of the region corresponding to a portrait feature is fixed and can be determined from the facial feature regions, the region corresponding to the portrait feature in the face can be determined according to the portrait-feature recognition result and the at least one facial feature region. Taking a male gender and a beard region as an example: after determining the feature regions of the nose and mouth, i.e., their positions in the face, the terminal can obtain the corresponding beard region from the area below the mouth and the area between the mouth and the nose. Understandably, a beard usually appears dark, e.g., black, with lower brightness than the skin; accordingly, in the area below the mouth and the area between the mouth and the nose, pixels whose brightness is lower than the average pixel value of the face region by more than a preset first threshold can be taken as beard pixels, thereby obtaining the beard region.
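The brightness test for beard pixels can be sketched directly in NumPy: a pixel in the candidate area counts as beard when its brightness falls below the face's mean by more than the preset first threshold. This is an illustrative sketch; the application does not give the threshold value or the exact candidate-area geometry, so both are assumed here:

```python
import numpy as np

def beard_pixels(gray_face, candidate_mask, threshold=30):
    """Return a boolean mask of beard pixels inside the candidate area
    (below the mouth, and between the mouth and the nose).

    A pixel is 'beard' when its brightness is lower than the mean face
    brightness and the difference exceeds the preset first threshold.
    """
    face_mean = gray_face.mean()
    diff = face_mean - gray_face.astype(np.float32)
    return candidate_mask & (diff > threshold)

# Toy example: a bright 6x6 face patch with a dark 2x2 "beard" spot.
face = np.full((6, 6), 200, dtype=np.uint8)
face[4:6, 2:4] = 40
candidate = np.zeros((6, 6), dtype=bool)
candidate[3:6, :] = True  # stand-in for the area below the mouth
mask = beard_pixels(face, candidate)
print(mask.sum())
```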
In one implementation, the terminal performing beautification processing on the region corresponding to the at least one portrait feature and/or performing no beautification processing on the region includes:
displaying the region corresponding to the at least one portrait feature;
receiving a user's selection operation, and performing beautification processing and/or no beautification processing on the region corresponding to the portrait feature selected by the selection operation.
Understandably, the terminal may determine multiple candidate regions corresponding to portrait features according to the feature recognition result, but the user may not need to beautify every candidate region as a target region; instead, only one or several of the candidate regions need to be beautified as target regions. Therefore, after determining at least one region corresponding to a portrait feature, the terminal may display the at least one region so that the user can select one or several of them as target regions for beautification. It should be noted that, after displaying the at least one region, the terminal may also keep it in a selectable state, for example by highlighting the boundary of each region. In this way, the terminal displays at least one region corresponding to a portrait feature for the user to choose from, which is flexible to operate and further improves the user experience.
In one implementation, performing beautification processing on the region corresponding to the at least one portrait feature using a beautification scheme corresponding to the at least one portrait feature includes:
acquiring and outputting at least one beautification scheme corresponding to the at least one portrait feature;
receiving a selection of the at least one beautification scheme, and performing beautification processing on the region corresponding to the at least one portrait feature according to the selected scheme.
Understandably, for any portrait feature, the terminal may store at least one beautification scheme. Accordingly, when the terminal needs to beautify a region corresponding to a particular portrait feature, it can output the at least one beautification scheme corresponding to that feature, and the user selects one or more schemes from it to apply to the region. For example, taking the feature as face shape and the face shape as round, the terminal may recommend several beautification schemes to the user, such as face-slimming, freckle removal, and whitening, and the user can choose the appropriate scheme as needed. By offering users a variety of beautification schemes to choose from, the method is flexible and convenient and further improves the user experience.
In one implementation, after performing beautification processing on the region corresponding to the at least one portrait feature and/or performing no beautification processing on the region, the method further includes:
receiving the user's input on the beautified image to be processed;
in response to the input, determining the target area selected on the beautified image to be processed;
restoring the state of the target area in the beautified image to be processed to its state before beautification.
Understandably, to let the user flexibly choose the areas to protect, the terminal may receive the user's input on the beautified image to be processed, determine the target area the user has selected on it, and then restore the state of that target area in the beautified image to its state before beautification. Here, after beautifying the image to be processed, the terminal may display the areas of the portrait, for example marking each area with a frame or by highlighting, to facilitate the user's selection. In addition, the terminal may determine the area selected by a sliding operation trajectory input by the user as the target area. In this way, the terminal determines, according to the user's selection, the target area that does not require beautification, so as to protect that area; this improves flexibility and further improves the user experience.
Based on the same inventive concept as the foregoing embodiments, an embodiment of the present application provides a terminal. As shown in FIG. 2, the terminal includes a processor 110 and a memory 111 for storing a computer program that can run on the processor 110. The processor 110 illustrated in FIG. 2 is not used to indicate that the number of processors 110 is one, but only to indicate the position of the processor 110 relative to other devices; in practice, the number of processors 110 may be one or more. Likewise, the memory 111 illustrated in FIG. 2 only indicates its position relative to other devices, and in practice the number of memories 111 may be one or more. The processor 110 is configured to implement the image processing method applied to the above terminal when running the computer program.
The terminal may further include at least one network interface 112. The components of the terminal are coupled together through a bus system 113. Understandably, the bus system 113 is used to implement connection and communication between these components. In addition to a data bus, the bus system 113 also includes a power bus, a control bus, and a status signal bus. For clarity, however, the various buses are all labeled as the bus system 113 in FIG. 2.
The memory 111 may be a volatile memory or a non-volatile memory, and may also include both volatile and non-volatile memory. The non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a ferromagnetic random access memory (FRAM), flash memory, magnetic surface memory, an optical disc, or a compact disc read-only memory (CD-ROM); the magnetic surface memory may be disk storage or tape storage. The volatile memory may be a random access memory (RAM), which serves as an external cache. By way of example and not limitation, many forms of RAM are available, such as static random access memory (SRAM), synchronous static random access memory (SSRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), enhanced synchronous dynamic random access memory (ESDRAM), SyncLink dynamic random access memory (SLDRAM), and direct Rambus random access memory (DRRAM). The memory 111 described in the embodiments of the present application is intended to include, without being limited to, these and any other suitable types of memory.
The memory 111 in the embodiments of the present application is used to store various types of data to support the operation of the terminal. Examples of such data include: any computer program for operating on the terminal, such as an operating system and applications; contact data; phone book data; messages; pictures; videos; and so on. The operating system contains various system programs, such as a framework layer, a core library layer, and a driver layer, used to implement various basic services and handle hardware-based tasks. The applications may include various applications, such as a media player and a browser, used to implement various application services. Here, the program implementing the method of the embodiments of the present application may be contained in the applications.
Based on the same inventive concept as the foregoing embodiments, this embodiment further provides a computer storage medium in which a computer program is stored. The computer storage medium may be a memory such as a ferromagnetic random access memory (FRAM), a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), flash memory, magnetic surface memory, an optical disc, or a compact disc read-only memory (CD-ROM); it may also be any of various devices including one or any combination of the above memories, such as a mobile phone, computer, tablet device, or personal digital assistant. When the computer program stored in the computer storage medium is run by a processor, the image processing method applied to the above terminal is implemented. For the specific steps implemented when the computer program is executed by the processor, refer to the description of the embodiment shown in FIG. 1, which is not repeated here.
Based on the same inventive concept as the foregoing embodiments, this embodiment describes the technical solutions of the foregoing embodiments in detail through a specific example, taking gender as the feature. FIG. 3 is a schematic diagram of a specific flow of an image processing method provided by an embodiment of the present application, including the following steps:
Step S201: switch to beautification mode.
Specifically, when the user starts the camera application on the terminal and chooses to switch the camera application's photographing mode to beautification mode, the terminal switches the photographing mode of the camera application to beautification mode accordingly.
Step S202: read the data of the preview frame image.
Specifically, the terminal reads the data of the preview frame image displayed on the preview interface of the camera application.
Step S203: face detection.
Specifically, the terminal performs face detection on the preview frame image to obtain the number of faces.
Step S204: determine whether the number of faces is 1; if so, execute step S205, otherwise execute step S208.
Specifically, the terminal determines whether the number of faces in the preview frame image is 1; if so, step S205 is executed, otherwise step S208 is executed.
Step S205: crop the face region data.
Specifically, when the terminal detects that the number of faces in the preview frame image is 1, it crops the face region, converts it into face data in a specified format, and then extracts the feature points of the face, such as the main features of the eyes, nose, mouth, chin, and ears.
Step S206: the gender recognition module performs gender recognition according to the face region data.
Here, the terminal may be provided with a gender recognition module; the terminal inputs the face region data obtained in step S205 into the gender recognition module, which can be regarded as a neural network model. The process of gender recognition based on face region data in this embodiment is shown in FIG. 4 and includes the following steps:
Step S301: input the formatted face image data.
Specifically, the formatted face image data is input into the input layer of the neural network model.
Step S302: convolution computation.
Specifically, a convolution computation is performed for one convolutional layer of the neural network model. This embodiment takes a model containing 4 convolutional layers as an example; only one convolutional layer is computed at a time, i.e., the first convolutional layer performs the convolution computation on the formatted face image data.
Step S303: batch normalization.
Specifically, batch normalization is applied to the features obtained by the convolution in step S302, so that the resulting feature distribution conforms to a normal distribution, thereby speeding up the model's convergence and increasing its generalization ability.
Step S304: feature activation.
Step S305: max pooling.
Here, after activation and pooling, the resulting new feature map is used as the input data of the next convolutional layer.
Step S306: global average pooling.
Here, after the 4 layers of convolution and pooling, global average pooling is applied to the resulting feature maps in place of a fully connected operation, so as to reduce the number of model parameters.
Step S307: output the probability array of gender classification.
Here, the probability array of gender classification is obtained after global average pooling; the index with the larger array value is the gender recognition module's final result, for example array index 0 for male and 1 for female.
Step S207: the gender recognition module delivers the gender recognition result and facial feature points to the beautification module.
Specifically, the gender recognition module delivers to the beautification module the gender type of the face obtained through the neural network algorithm and the facial feature point information. Here, the terminal may be provided with a beautification module for performing beautification processing on the image.
Step S208: the beautification module adapts the corresponding beautification parameters.
Here, when the number of detected faces is 0 or greater than 1, the beautification module selects a set of default beautification parameters to beautify the faces. When the number of detected faces is 1, the beautification module divides out the protection areas for the corresponding gender according to the gender information obtained in step S207 combined with the facial feature point information; when processing a protection area, it selects the corresponding beautification parameters to protect that area.
Step S209: the beautification module processes the image according to the adapted beautification parameters.
Here, when the number of detected faces is 0 or greater than 1, the beautification module beautifies the faces according to the selected default parameters. When the number of detected faces is 1 and the beautification module processes a protection area, it adopts a gradual processing scheme: it smooths the edges of the protection area and lightens the processing applied to the rest of the area, so as to protect local details. FIG. 5 is a schematic diagram of a man's beard region after beautification protection (skin-smoothing is reduced in that region); FIG. 6 shows a woman's between-eyebrow mole and nose-decoration regions after the same protection (skin-smoothing is reduced in those regions), so that these local decorative details are preserved while the rest of the face is beautified. Here, the terminal's screen may display the beautified image in real time so that the user can see the beautification effect in real time.
In summary, in the image processing method provided by the above embodiment, a gender recognition algorithm model is trained on a large number of face image samples based on the deep-learning TensorFlow framework; the TensorFlow toco tool converts the server-side model into a TFLite model, which is then ported to the Android side to achieve offline real-time gender recognition on the mobile terminal. In the Android camera, the gender recognition model is loaded and its execution entry is invoked through TensorFlow's Java API; the detected face image is cropped, converted in format, and passed to the model, so that each frame is processed by the algorithm and a gender recognition result is output. Finally, the beautification function for the corresponding gender is adapted, so that different genders receive different beautification effects: beard protection is added to the beautification of men, and between-eyebrow mole and nose-decoration protection is added to the beautification of women, thereby achieving gender-differentiated smart beautification. It can be summarized that real-time gender recognition can be realized offline on the mobile terminal, and the beautification parameters of the corresponding gender are adapted to beautify the face.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in the combinations of these technical features, they should be considered within the scope of this specification.
Herein, the terms "comprise", "include", or any other variant thereof are intended to cover non-exclusive inclusion, so that in addition to the listed elements, other elements not explicitly listed may also be included.
The above are only specific embodiments of the present application, but the protection scope of the present application is not limited thereto. Any person skilled in the art can easily conceive of changes or substitutions within the technical scope disclosed in the present application, and all such changes or substitutions shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
With the image processing method, terminal, and computer storage medium of the present application, at least two portrait features obtained by feature recognition of the portrait in the image to be processed are acquired, and beautification processing is performed on the region corresponding to the at least one portrait feature and/or no beautification processing is performed on that region, so as to implement personalized beautification for the user according to the user's features and thereby improve the user experience.
Claims (10)
- An image processing method applied to a terminal, wherein the method includes: acquiring an image to be processed; performing feature recognition on a portrait in the image to be processed to acquire at least two portrait features in the image to be processed; and performing beautification processing on a region corresponding to the at least one portrait feature and/or performing no beautification processing on a region corresponding to the at least one portrait feature.
- The image processing method according to claim 1, wherein the features include at least one of the following: gender, face shape, eyes, ethnicity, and age.
- The image processing method according to claim 1, wherein performing beautification processing on the region corresponding to the at least one portrait feature includes: when performing beautification processing on the image to be processed, performing beautification-reduction processing on the region corresponding to the at least one portrait feature and/or performing beautification processing on the region using a beautification scheme corresponding to the at least one portrait feature.
- The image processing method according to claim 3, wherein performing beautification processing on the region corresponding to the at least one portrait feature using a beautification scheme corresponding to the at least one portrait feature includes: acquiring and outputting at least one beautification scheme corresponding to the at least one portrait feature; receiving a selection of the at least one beautification scheme, and performing beautification processing on the region corresponding to the at least one portrait feature according to the selected scheme.
- The image processing method according to claim 2, wherein performing beautification processing on the region corresponding to the at least one portrait feature and/or performing no beautification processing on the region includes at least one of the following: if the face shape is round, performing face-slimming beautification on the face region of the portrait in the image to be processed; if the eyes have single eyelids, performing phoenix-eye beautification on the eye region; if the age is elderly, performing wrinkle-preserving processing on the facial region; if the age is young, performing lip-color rejuvenation on the lip region; if the gender is male, performing no beautification processing, or corresponding beautification processing, on the beard region; if the gender is female, performing no beautification processing, or corresponding beautification processing, on the between-eyebrow mole region and/or nose-decoration region.
- The image processing method according to claim 1, wherein performing beautification processing on the region corresponding to the at least one portrait feature and/or performing no beautification processing on the region includes: displaying the region corresponding to the at least one portrait feature; receiving a user's selection operation, and performing beautification processing and/or no beautification processing on the region corresponding to the portrait feature selected by the selection operation.
- The image processing method according to claim 3, wherein performing beautification-reduction processing on the region corresponding to the at least one portrait feature includes: gradually reducing the beautification level from the edge to the center of the region; or performing no beautification processing on the region; or restoring the state of the region in the beautified image to be processed to its state before beautification.
- The image processing method according to claim 1, wherein performing feature recognition on the portrait in the image to be processed to acquire at least two portrait features includes: inputting the image to be processed into a trained feature-recognition neural network model, and obtaining the at least two portrait features output by the trained model; wherein the feature-recognition neural network model is obtained by training on historical images and corresponding historical portrait features.
- A terminal, including a processor and a memory for storing a computer program that can run on the processor, wherein when the processor runs the computer program, the image processing method according to claim 1 is implemented.
- A computer storage medium storing a computer program, wherein when the computer program is executed by a processor, the image processing method according to claim 1 is implemented.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911296529.1 | 2019-12-16 | ||
CN201911296529.1A CN111161131A (zh) | 2019-12-16 | 2019-12-16 | 一种图像处理方法、终端及计算机存储介质 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021120626A1 true WO2021120626A1 (zh) | 2021-06-24 |
Family
ID=70557199
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/104638 WO2021120626A1 (zh) | 2019-12-16 | 2020-07-24 | 一种图像处理方法、终端及计算机存储介质 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111161131A (zh) |
WO (1) | WO2021120626A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113572955A (zh) * | 2021-06-25 | 2021-10-29 | 维沃移动通信(杭州)有限公司 | 图像处理方法、装置及电子设备 |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111161131A (zh) * | 2019-12-16 | 2020-05-15 | 上海传英信息技术有限公司 | 一种图像处理方法、终端及计算机存储介质 |
CN111784611B (zh) * | 2020-07-03 | 2023-11-03 | 厦门美图之家科技有限公司 | 人像美白方法、装置、电子设备和可读存储介质 |
CN112565601B (zh) * | 2020-11-30 | 2022-11-04 | Oppo(重庆)智能科技有限公司 | 图像处理方法、装置、移动终端及存储介质 |
CN113096049A (zh) * | 2021-04-26 | 2021-07-09 | 北京京东拓先科技有限公司 | 一种图片处理方案的推荐方法和装置 |
CN114973727B (zh) * | 2022-08-02 | 2022-09-30 | 成都工业职业技术学院 | 一种基于乘客特征的智能驾驶方法 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107274354A (zh) * | 2017-05-22 | 2017-10-20 | 奇酷互联网络科技(深圳)有限公司 | 图像处理方法、装置和移动终端 |
CN107578380A (zh) * | 2017-08-07 | 2018-01-12 | 北京金山安全软件有限公司 | 一种图像处理方法、装置、电子设备及存储介质 |
CN108012081A (zh) * | 2017-12-08 | 2018-05-08 | 北京百度网讯科技有限公司 | 智能美颜方法、装置、终端和计算机可读存储介质 |
CN108229278A (zh) * | 2017-04-14 | 2018-06-29 | 深圳市商汤科技有限公司 | 人脸图像处理方法、装置和电子设备 |
US10303933B2 (en) * | 2016-07-29 | 2019-05-28 | Samsung Electronics Co., Ltd. | Apparatus and method for processing a beauty effect |
CN111161131A (zh) * | 2019-12-16 | 2020-05-15 | 上海传英信息技术有限公司 | 一种图像处理方法、终端及计算机存储介质 |
-
2019
- 2019-12-16 CN CN201911296529.1A patent/CN111161131A/zh active Pending
-
2020
- 2020-07-24 WO PCT/CN2020/104638 patent/WO2021120626A1/zh active Application Filing
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10303933B2 (en) * | 2016-07-29 | 2019-05-28 | Samsung Electronics Co., Ltd. | Apparatus and method for processing a beauty effect |
CN108229278A (zh) * | 2017-04-14 | 2018-06-29 | 深圳市商汤科技有限公司 | 人脸图像处理方法、装置和电子设备 |
CN107274354A (zh) * | 2017-05-22 | 2017-10-20 | 奇酷互联网络科技(深圳)有限公司 | 图像处理方法、装置和移动终端 |
CN107578380A (zh) * | 2017-08-07 | 2018-01-12 | 北京金山安全软件有限公司 | 一种图像处理方法、装置、电子设备及存储介质 |
CN108012081A (zh) * | 2017-12-08 | 2018-05-08 | 北京百度网讯科技有限公司 | 智能美颜方法、装置、终端和计算机可读存储介质 |
CN111161131A (zh) * | 2019-12-16 | 2020-05-15 | 上海传英信息技术有限公司 | 一种图像处理方法、终端及计算机存储介质 |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113572955A (zh) * | 2021-06-25 | 2021-10-29 | 维沃移动通信(杭州)有限公司 | 图像处理方法、装置及电子设备 |
Also Published As
Publication number | Publication date |
---|---|
CN111161131A (zh) | 2020-05-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021120626A1 (zh) | 一种图像处理方法、终端及计算机存储介质 | |
CN110929651B (zh) | 图像处理方法、装置、电子设备及存储介质 | |
US20210192858A1 (en) | Electronic device for generating image including 3d avatar reflecting face motion through 3d avatar corresponding to face and method of operating same | |
US10438329B2 (en) | Image processing method and image processing apparatus | |
WO2022078041A1 (zh) | 遮挡检测模型的训练方法及人脸图像的美化处理方法 | |
WO2020019904A1 (zh) | 图像处理方法、装置、计算机设备及存储介质 | |
CN107958439B (zh) | 图像处理方法及装置 | |
CN112712470B (zh) | 一种图像增强方法及装置 | |
CN105825486A (zh) | 美颜处理的方法及装置 | |
CN108921856B (zh) | 图像裁剪方法、装置、电子设备及计算机可读存储介质 | |
CN107730448B (zh) | 基于图像处理的美颜方法及装置 | |
CN108876732A (zh) | 人脸美颜方法及装置 | |
CN114175113A (zh) | 提供头像的电子装置及其操作方法 | |
EP3328062A1 (en) | Photo synthesizing method and device | |
CN114007099A (zh) | 一种视频处理方法、装置和用于视频处理的装置 | |
WO2024021742A9 (zh) | 一种注视点估计方法及相关设备 | |
CN113850726A (zh) | 图像变换方法和装置 | |
KR20180109217A (ko) | 얼굴 영상 보정 방법 및 이를 구현한 전자 장치 | |
CN111723803A (zh) | 图像处理方法、装置、设备及存储介质 | |
CN114187166A (zh) | 图像处理方法、智能终端及存储介质 | |
CN110378839A (zh) | 人脸图像处理方法、装置、介质及电子设备 | |
CN113850709A (zh) | 图像变换方法和装置 | |
CN112184540A (zh) | 图像处理方法、装置、电子设备和存储介质 | |
CN113313026B (zh) | 一种基于隐私保护的人脸识别交互方法、装置以及设备 | |
CN111373409B (zh) | 获取颜值变化的方法及终端 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20903979 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20903979 Country of ref document: EP Kind code of ref document: A1 |