
CN113344812A - Image processing method and device and electronic equipment

Info

Publication number
CN113344812A
Authority
CN
China
Prior art keywords: image, comment, information, pieces, feature information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110604658.3A
Other languages
Chinese (zh)
Inventor
王运
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Hangzhou Co Ltd
Original Assignee
Vivo Mobile Communication Hangzhou Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Hangzhou Co Ltd
Priority to CN202110604658.3A
Publication of CN113344812A
Legal status: Pending (current)

Classifications

    • G: PHYSICS
        • G06: COMPUTING; CALCULATING OR COUNTING
            • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T 5/00: Image enhancement or restoration
                    • G06T 5/77: Retouching; Inpainting; Scratch removal
                • G06T 3/00: Geometric image transformations in the plane of the image
                    • G06T 3/04: Context-preserving transformations, e.g. by using an importance map
                • G06T 2207/00: Indexing scheme for image analysis or image enhancement
                    • G06T 2207/10: Image acquisition modality
                        • G06T 2207/10004: Still image; Photographic image

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an image processing method, an image processing apparatus, and an electronic device, and belongs to the field of communication technologies. The method includes the following steps: the electronic device acquires M pieces of object feature information of M objects in a target image, where each object corresponds to one piece of object feature information and M is a positive integer; the electronic device determines N image correction models according to N pieces of object feature information among the M pieces, where each piece of object feature information corresponds to one image correction model and N is a positive integer less than or equal to M; and the electronic device performs image processing on N image areas of the target image based on each of the N image correction models, where the N image areas are the image areas in which the N objects corresponding to the N pieces of object feature information are located.

Description

Image processing method and device and electronic equipment
Technical Field
The application belongs to the field of communication technologies, and in particular relates to an image processing method, an image processing apparatus, and an electronic device.
Background
With the development of electronic devices (e.g., mobile phones and tablet computers), taking pictures has become one of their most commonly used functions.
In the related art, to improve the pictures shot by an electronic device, a user can retouch them with the device's retouching software, which automatically adjusts a picture according to a preset adjustment mode to produce a result closer to the user's aesthetic.
However, when the electronic device shoots many objects, the retouching software applies the same preset adjustment mode to all of them, so it cannot adjust every object appropriately; some objects may end up rendered poorly, which degrades the effect of the whole picture.
Disclosure of Invention
Embodiments of the application aim to provide an image processing method, an image processing apparatus, and an electronic device that solve the above problem: when an electronic device shoots multiple objects, the identical preset adjustment modes of the retouching software prevent it from adjusting every object appropriately, so some objects are rendered poorly and the effect of the whole picture suffers.
In a first aspect, an embodiment of the application provides an image processing method, including: acquiring, by the electronic device, M pieces of object feature information of M objects in a target image, where each object corresponds to one piece of object feature information and M is a positive integer; determining, by the electronic device, N image correction models according to N pieces of object feature information among the M pieces, where each piece corresponds to one image correction model and N is a positive integer less than or equal to M; and performing, by the electronic device, image processing on N image areas of the target image based on each of the N image correction models, where the N image areas are the image areas in which the N objects corresponding to the N pieces of object feature information are located.
In a second aspect, an embodiment of the application provides an image processing apparatus, including an acquiring module, a determining module, and a processing module. The acquiring module is configured to acquire M pieces of object feature information of M objects in a target image, where each object corresponds to one piece of object feature information and M is a positive integer. The determining module is configured to determine N image correction models according to N pieces of object feature information among the M pieces acquired by the acquiring module, where each piece corresponds to one image correction model and N is a positive integer less than or equal to M. The processing module is configured to perform image processing on N image regions of the target image based on each of the N image correction models determined by the determining module, where the N image regions are the image regions in which the N objects corresponding to the N pieces of object feature information are located.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiments of the application, the electronic device may first acquire the object feature information of all objects in the target image, determine N image correction models according to the object feature information of some or all of those objects (the N objects), and then, based on the N image correction models, process the N image regions in which the N objects corresponding to the N pieces of object feature information are located. In this way, when the electronic device processes multiple objects in the target image, the image area of each object can be processed with the image correction model best suited to that object, which improves the rendering of every object in the target image and, ultimately, the effect of the whole picture.
Drawings
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 4 is a second schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms "first", "second", and the like in the description and claims of the application are used to distinguish between similar elements and not necessarily to describe a particular sequence or chronological order. It should be appreciated that data so labeled may be interchanged where appropriate, so that embodiments of the application can be practiced in orders other than those illustrated or described herein. Moreover, "first", "second", and the like do not limit the number of elements; for example, a first element may be one element or more than one. In addition, "and/or" in the description and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the objects it connects.
The image processing method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
The present embodiment provides an image processing method, as shown in fig. 1, including the following steps 301 to 303:
step 301: the image processing apparatus acquires M object feature information of M objects in a target image.
In the embodiment of the application, each object corresponds to one object feature information, and M is a positive integer.
In this embodiment, the target image may be an image stored in the electronic device, or an image acquired by the electronic device from another source.
In one example, an image acquired from another source may be obtained from a network disk, from the cloud service of the electronic device, or from an application installed on the electronic device, for example a chat application.
In this embodiment, the M objects may be all of the portraits in the target image, or objects of other types. Example 1: if the target image contains the faces of object A, object B, and object C, the target image contains 3 pieces of object information.
In this embodiment, the object feature information may be image parameter information corresponding to an object in the target image.
For example, the image parameter information may include original parameter information of the image and/or corrected parameter information of the image after being corrected.
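To make step 301 concrete, here is a minimal sketch in Python. It assumes the objects are faces and uses OpenCV's stock Haar-cascade detector, which is an illustrative choice; the application itself does not name a detector, and the feature fields shown (region, brightness) are placeholders for whatever original and correction parameter information an implementation actually tracks.

```python
import cv2

def acquire_object_features(image_path: str) -> list[dict]:
    """Step 301 sketch: one feature record per detected object (face)."""
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)

    features = []
    for (x, y, w, h) in faces:
        region = image[y:y + h, x:x + w]
        features.append({
            "region": (x, y, w, h),          # image area where the object sits
            "original_params": {             # original parameter information
                "mean_brightness": float(region.mean()),
            },
            "correction_params": None,       # corrected parameter info, if any
        })
    return features  # M entries, one per object
```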
Step 302: the image processing device determines N image correction models according to N object feature information in the M object feature information, wherein N is a positive integer less than or equal to M.
In the embodiment of the application, each object feature information corresponds to one image correction model, and N is a positive integer.
In this embodiment, the N object feature information includes original parameter information and corrected parameter information of N objects in the target image.
It can be understood that, after acquiring the target image, the image processing apparatus may identify all objects in it (i.e., the above M objects) using image recognition and acquire the object feature information of each of the M objects. Because object feature information may or may not include correction parameter information, only the objects whose feature information includes correction parameter information end up with a corresponding image correction model, which is why M is greater than or equal to N.
In this embodiment, each of the N image correction models may be used to correct an image area of an object corresponding to the image correction model on a target image.
In the embodiment of the application, the object feature information corresponding to each of the N objects may include one or more parameters for any dimension, where a dimension may be any feature of the object, for example a facial organ of an object in the target image. Example 2, continuing example 1: the object feature information of object A covers the eyes, the nose, and the mouth, which are 3 different dimensions; the eye dimension may correspond to one correction parameter, while the nose and mouth dimensions may each correspond to two different correction parameters.
In the embodiment of the application, an image correction model may consist of the set of correction parameter information across an object's different dimensions. Example 3, in combination with example 2: the image processing apparatus may determine the correction parameter corresponding to each of the 3 dimensions (eyes, nose, and mouth) and combine them into the image correction model corresponding to object A.
In the embodiment of the present application, in the N image correction models, the parameters corresponding to the same dimension may be completely different parameters, or may also be the same parameters.
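A correction model of this kind can be sketched as a per-dimension parameter set. The dataclass and the specific dimension names and parameter keys below (eyes/nose/mouth, "enlarge", "slim", and so on) are assumptions for illustration, continuing the face-detection sketch above.

```python
from dataclasses import dataclass, field

@dataclass
class CorrectionModel:
    # correction parameter information keyed by dimension; a dimension may
    # carry one parameter or several
    params: dict[str, dict[str, float]] = field(default_factory=dict)

def model_for(feature_info: dict) -> CorrectionModel:
    """Step 302 sketch: map one object's feature information to a model."""
    model = CorrectionModel()
    if feature_info.get("correction_params"):
        # the object's feature information already carries correction
        # parameter information, so reuse it (see the M >= N remark above)
        model.params.update(feature_info["correction_params"])
    else:
        # illustrative defaults: one parameter for the eyes, two each for
        # the nose and mouth, echoing example 2
        model.params["eyes"] = {"enlarge": 1.05}
        model.params["nose"] = {"slim": 0.95, "brighten": 1.10}
        model.params["mouth"] = {"saturate": 1.08, "smooth": 0.90}
    return model
```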
Step 303: the image processing apparatus performs image processing on the N image regions of the target image based on each of the N image correction models.
In the embodiment of the application, the N image areas are the image areas in which the N objects corresponding to the N pieces of object feature information are located.
In the embodiment of the application, the image processing may consist of correcting the image area of the object corresponding to each image correction model in the target image, using the parameters included in that model.
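Continuing the sketches above, step 303 then reduces to applying each object's model only inside that object's region. The `apply_params` body is a stand-in (a plain brightness gain), since the application does not specify the actual retouching operation:

```python
import numpy as np

def apply_params(region: np.ndarray, params: dict) -> np.ndarray:
    # stand-in correction: a simple per-region brightness gain
    gain = params.get("nose", {}).get("brighten", 1.0)
    return np.clip(region.astype(np.float32) * gain, 0, 255).astype(region.dtype)

def process_regions(image: np.ndarray, objects: list[dict],
                    models: list["CorrectionModel"]) -> np.ndarray:
    """Step 303 sketch: correct each of the N regions with its own model."""
    out = image.copy()
    for obj, model in zip(objects, models):  # N objects paired with N models
        x, y, w, h = obj["region"]
        out[y:y + h, x:x + w] = apply_params(out[y:y + h, x:x + w],
                                             model.params)
    return out
```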
In this embodiment, the image processing apparatus may first acquire the object feature information of all objects in the target image, then determine N image correction models according to the object feature information of some or all of those objects (the N objects), and process, based on the N image correction models, the N image regions in which the N objects corresponding to the N pieces of object feature information are located. In this way, when the electronic device processes multiple objects in the target image, the image area of each object can be processed with the image correction model best suited to that object, which improves the rendering of every object in the target image and, ultimately, the effect of the whole picture.
Optionally, in the embodiment of the application, before step 301, the image processing method may further include the following steps A1 to A3:
step A1: the image processing apparatus acquires K first images and K pieces of comment information of the K first images.
Step A2: the image processing device determines K first image correction models according to the K first images.
Illustratively, each first image corresponds to a respective one of the first image modification models.
Step A3: the image processing apparatus determines the N image correction models based on the K pieces of comment information and the K first image correction models, respectively.
Illustratively, each first image corresponds to one piece of comment information, the K first images include the N objects, one object corresponds to at least one first image, and K is a positive integer.
For example, the first image may include at least one of N objects.
For example, the comment information may be the comments corresponding to the first image in an application with a network connection in the electronic device. Example 4: the comments corresponding to the first image in the sharing area of a chat application.
Illustratively, "one piece of comment information" refers to all of the comments corresponding to a first image; in combination with example 4, it may comprise a single comment or multiple comments, which is not limited in the embodiments of the application.
Further, the comment information may include comments on any dimension of the N objects. Example 5, with reference to examples 1 and 2: if the first image is an image in the chat application that includes object A, object B, and object C, the comment information may include comments about object A's nose, object B's eyes, and object C's mouth.
It can be understood that:
firstly, after the image processing apparatus acquires a first image, it can extract the image correction model in that image using image recognition; when the first image includes a plurality of objects, the extracted model is the image correction model corresponding to those objects;
secondly, after the image processing apparatus acquires the first image and its comment information, it can recognize the comment information using language recognition, determine which comments are positive and which are negative, and then determine which parameters in the image correction model should be kept in use and which can be replaced by other parameters. Language recognition lets the electronic device classify the comment information by keyword matching, where the keywords may be preset in the electronic device or customized by the user.
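A minimal sketch of that keyword screening, assuming illustrative keyword lists (the application leaves the actual keywords to presets or user customization):

```python
POSITIVE_KEYWORDS = {"beautiful", "great", "nice", "love it"}
NEGATIVE_KEYWORDS = {"weird", "unnatural", "too much", "bad"}

def classify_comment(comment: str) -> str:
    """Label one comment positive/negative/neutral by keyword matching."""
    text = comment.lower()
    has_pos = any(k in text for k in POSITIVE_KEYWORDS)
    has_neg = any(k in text for k in NEGATIVE_KEYWORDS)
    if has_pos and not has_neg:
        return "positive"
    if has_neg and not has_pos:
        return "negative"
    return "neutral"  # ambiguous comments contribute to neither count
```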
In this way, the comment information allows the image processing apparatus to determine the genuinely effective image correction models more intelligently and accurately, so that the correction models finally used on the target image match the user's aesthetic, the picture adjustment is completed more efficiently, and the user's efficiency in using the electronic device improves.
Optionally, in the embodiment of the application, each of the K first image correction models is used to adjust the image area corresponding to first feature information and the image area corresponding to second feature information. On this basis, step A3 of the image processing method provided by the embodiment of the application may include the following steps B1 and B2:
step B1: the image processing apparatus determines, based on the Q pieces of comment information, comment scores of the Q pieces of first feature information and comment scores of the Q pieces of second feature information of the Q pieces of image correction models, respectively.
Step B2: the image processing apparatus determines a second image correction model based on the comment scores of the Q pieces of first feature information and the comment scores of the Q pieces of second feature information.
Illustratively, each piece of comment information corresponds to a comment score of the first feature information and a comment score of the second feature information, where Q is a positive integer less than or equal to K.
Illustratively, the Q pieces of comment information are, among the K pieces of comment information, the comment information of the Q first images that correspond to a first object; the Q first images are the images among the K first images that include the first object; and the first object is any one of the N objects.
Illustratively, the Q first image correction models are, among the K first image correction models, the image correction models corresponding to the Q first images, and the second image correction model is, among the N image correction models, the image correction model corresponding to the first object. The Q pieces of comment information may be some or all of the K pieces of comment information.
For example, the first feature information and the second feature information may be any feature information of the object, such as a facial organ, hairstyle, clothing color, or style.
For example, the first characteristic information and the second characteristic information may be different characteristic information.
In one example, in the case where the feature information is a facial organ, the above-mentioned facial organ may include eyes, a nose, a mouth, ears, a facial contour, a facial skin state, and the like.
Illustratively, the comment information may be comments on the feature information in the first image that the electronic device obtains directly, and the comments may take the form of text, voice, likes (praise), and so on.
It is to be understood that the comment information may include a single piece of comment content, or may include a plurality of pieces of comment content, and the comment information is all comment content for the first image.
For example, the comment score may be a score calculated by the image processing apparatus based on Q pieces of comment information.
In one example, the comment score may be calculated from the Q pieces of comment information by direct counting: for instance, counting the numbers of positive and negative comments on the first feature information, and the numbers of positive and negative comments on the second feature information, thereby obtaining the comment score of each.
In another example, the comment score may be calculated by weighting. Specifically, the electronic device prestores the weighted score proportion corresponding to each piece of feature information; for example, if the feature information consists of the facial organs eyes, nose, and mouth, the weighted proportion of the eyes may be 50%, that of the nose 20%, and that of the mouth 30%. On this basis, for any facial organ, the numbers of positive and negative comments on that organ can be counted in advance, and the organ's comment score in the image correction model is then obtained by applying the weighted proportion.
For example: assume the electronic device contains an image (i.e., the above first image) of one of the M objects, together with comment information about that image. Specifically, the comment information includes comments on three pieces of feature information: the nose, the mouth, and the eyes. For the nose, the electronic device counts a1 positive comments, b1 negative comments, and c1 likes; for the mouth, a2 positive comments, b2 negative comments, and c2 likes; for the eyes, a3 positive comments, b3 negative comments, and c3 likes. With the weighted score proportions of the nose, mouth, and eyes denoted W1, W2, and W3, the final score of the object's image correction model is W = (a1 × 2 − b1 + c1) × W1 + (a2 × 2 − b2 + c2) × W2 + (a3 × 2 − b3 + c3) × W3.
It can be understood that the way the comment score of an image correction model is calculated may be preset in the electronic device or adjusted by it according to network data. The electronic device may also adapt the calculation as newly acquired feature information changes in real time; for example, when the comments on a piece of feature information include both positive and negative comments, and the negative comments number more than twice the positive ones, the parameter for that feature information is not adopted.
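The weighted scoring just described can be sketched as follows, using the counting rule from the worked example (positive comments count double, negatives subtract, likes add) and the illustrative 50/20/30 weight proportions from earlier; all of these numbers are assumptions an implementation would tune:

```python
# prestored weighted score proportions (the illustrative 50/20/30 split)
WEIGHTS = {"eyes": 0.5, "nose": 0.2, "mouth": 0.3}

def organ_score(positive: int, negative: int, praise: int) -> float:
    # positives count double, negatives subtract, likes add
    return positive * 2 - negative + praise

def model_score(counts: dict[str, tuple[int, int, int]]) -> float:
    """counts maps organ -> (positive, negative, praise) comment counts."""
    return sum(organ_score(*counts[organ]) * weight
               for organ, weight in WEIGHTS.items() if organ in counts)

# e.g. model_score({"nose": (3, 1, 5), "mouth": (2, 0, 4), "eyes": (6, 2, 9)})
```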
Illustratively, after the comment scores of the Q pieces of first feature information and of the Q pieces of second feature information in the Q first image correction models are acquired, the second image correction model may be determined. The second image correction model is the model finally adopted by the image processing apparatus, that is, the model used to adjust the first object in the target image.
Optionally, in the embodiment of the application, the second image correction model may then be determined in either of the following two ways.
The first mode is as follows:
illustratively, each of the K image modification models includes: and the image processing parameter corresponding to the first characteristic information and the image processing parameter corresponding to the second characteristic information. On this basis, in the above step B2, the image processing method provided by the embodiment of the present application may include the following steps C1 and C2:
step C1: the image processing device determines target first feature information according to the comment scores of the Q pieces of first feature information, and determines target second feature information according to the comment scores of the Q pieces of second feature information.
For example, the target first feature information may be any one of the Q pieces of first feature information, and the target second feature information may be any one of the Q pieces of second feature information.
Step C2: the image processing apparatus determines the second image correction model based on the image processing parameter corresponding to the target first feature information and the image processing parameter corresponding to the target second feature information.
It can be understood that the first object may correspond to Q image correction models, each including the first feature information and the second feature information. The image processing apparatus may select, from the Q models, the correction parameter of the first feature information whose score meets a predetermined condition and the correction parameter of the second feature information whose score meets the predetermined condition, and use the selected parameters as the correction parameters of the first and second feature information in the final second image correction model.
Further, the predetermined condition may be having the highest score.
For example: the electronic device obtains from its chat application all 3 images that include user A (i.e., the K first images) together with the comment information of each of the 3 images. Each image yields one correction model, giving correction model 1, correction model 2, and correction model 3, and each correction model covers 3 facial organs: the nose, the eyes, and the mouth. The electronic device scores each facial organ of each correction model according to the corresponding comment information: in correction model 1 the nose, eyes, and mouth score 90, 80, and 75; in correction model 2 they score 80, 90, and 75; and in correction model 3 they score 75, 80, and 90. The electronic device therefore selects the nose correction parameters of correction model 1, the eye correction parameters of correction model 2, and the mouth correction parameters of correction model 3, and composes them into user A's image correction model (i.e., the second image correction model).
In this way, the image processing apparatus can take, for each facial organ of any object, the best correction parameters found across all of its correction models, fuse them, and obtain the best overall correction model, intelligently producing the best-adjusted image for the user.
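A sketch of this first mode, reusing the `CorrectionModel` type from the step 302 sketch; the per-organ score table is assumed to have been produced by the comment-scoring step:

```python
def fuse_best_per_organ(models: list["CorrectionModel"],
                        scores: list[dict[str, float]]) -> "CorrectionModel":
    """scores[i][organ] is model i's comment score for that organ."""
    fused = CorrectionModel()
    for organ in scores[0]:
        # index of the candidate model whose score for this organ is highest
        best = max(range(len(models)), key=lambda i: scores[i][organ])
        fused.params[organ] = models[best].params[organ]
    return fused
```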
The second mode is as follows:
illustratively, in the step B2, the image processing method provided by the embodiment of the present application may include the following steps D1 and D2:
step D1: the image processing apparatus determines the comment scores of the Q first image correction models based on the comment scores of the Q first feature information and the comment scores of the Q second feature information, respectively.
Step D2: the image processing apparatus determines a first image correction model having the highest comment score among the Q first image correction models as the second image correction model.
For example, as can be seen from the foregoing, the image processing apparatus may determine the comment scores of all the facial organs in each correction model according to the comment information after acquiring the Q first image correction models, and finally determine the comment score of each correction model.
For example: the electronic device obtains from its chat application all 3 images that include user A (i.e., the K first images) together with the comment information of each of the 3 images. Each image yields one correction model, giving correction model 1, correction model 2, and correction model 3, and each correction model covers 3 facial organs: the nose, the eyes, and the mouth. The electronic device scores the 3 facial organs of each correction model according to the comment information and computes each model's overall score by weighting: correction model 1 scores 90, correction model 2 scores 80, and correction model 3 scores 70, so the electronic device finally selects correction model 1 as user A's image correction model (i.e., the second image correction model).
In this way, when the image processing apparatus has acquired multiple image correction models for one object, it can intelligently select the model with the best effect in combination with the comment information and apply it to the target image, so that the user directly obtains the best-looking image.
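The second mode is simpler: score each candidate model as a whole and keep the argmax. A sketch, under the same assumptions as above:

```python
def pick_best_model(models: list["CorrectionModel"],
                    total_scores: list[float]) -> "CorrectionModel":
    """Return the whole candidate model with the highest comment score."""
    best = max(range(len(models)), key=lambda i: total_scores[i])
    return models[best]
```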
It should be noted that, in the image processing method provided in the embodiment of the present application, the execution subject may be an image processing apparatus, or a control module in the image processing apparatus for executing the image processing method. The image processing apparatus provided in the embodiment of the present application is described with an example in which an image processing apparatus executes an image processing method.
Fig. 2 is a schematic diagram of a possible structure of an image processing apparatus for implementing the embodiment of the application. As shown in fig. 2, the image processing apparatus 600 includes an acquiring module 601, a determining module 602, and a processing module 603. The acquiring module 601 is configured to acquire M pieces of object feature information of M objects in the target image, where each object corresponds to one piece of object feature information and M is a positive integer. The determining module 602 is configured to determine N image correction models according to N pieces of object feature information among the M pieces acquired by the acquiring module 601, where each piece corresponds to one image correction model and N is a positive integer less than or equal to M. The processing module 603 is configured to perform image processing on N image regions of the target image based on each of the N image correction models determined by the determining module 602, where the N image regions are the image regions in which the N objects corresponding to the N pieces of object feature information are located.
With the image processing apparatus provided in the embodiment of the application, the apparatus may first acquire the object feature information of all objects in the target image, determine N image correction models according to the object feature information of some or all of those objects (the N objects), and process, based on the N image correction models, the N image regions in which the N objects corresponding to the N pieces of object feature information are located. In this way, when multiple objects in the target image are processed, the image area of each object can be processed with the image correction model best suited to that object, which improves the rendering of every object and, ultimately, the effect of the whole picture.
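For illustration, the three modules of fig. 2 can be composed as below, reusing the hypothetical helpers from the method sketches above (`acquire_object_features`, `model_for`, `process_regions`):

```python
import cv2

class ImageProcessingApparatus:
    """Mirrors fig. 2: acquiring module 601, determining module 602,
    and processing module 603, composed in sequence."""

    def process(self, image_path: str):
        objects = acquire_object_features(image_path)   # module 601
        models = [model_for(obj) for obj in objects]    # module 602
        image = cv2.imread(image_path)
        return process_regions(image, objects, models)  # module 603
```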
Optionally, in the embodiment of the application, the acquiring module 601 is further configured to acquire K first images and the K pieces of comment information of the K first images, where each first image corresponds to one piece of comment information, the K first images include the N objects, one object corresponds to at least one first image, and K is an integer greater than or equal to N; the determining module 602 is further configured to determine K first image correction models according to the K first images acquired by the acquiring module 601, where each first image corresponds to one first image correction model; and the determining module 602 is further configured to determine the N image correction models based on the K pieces of comment information and the K first image correction models, respectively.
Optionally, in the embodiment of the application, each of the K first image correction models is used to adjust the image area corresponding to the first feature information and the image area corresponding to the second feature information. The determining module 602 is specifically configured to determine, according to the Q pieces of comment information, the comment scores of the Q pieces of first feature information and the comment scores of the Q pieces of second feature information of the Q first image correction models, respectively, where each piece of comment information corresponds to a comment score of the first feature information and a comment score of the second feature information, and Q is a positive integer less than or equal to K. The determining module 602 is further specifically configured to determine a second image correction model based on the comment scores of the Q pieces of first feature information and the comment scores of the Q pieces of second feature information. Here, the Q pieces of comment information are, among the K pieces of comment information, the comment information of the Q first images corresponding to a first object; the Q first images are the images among the K first images that include the first object; the first object is any one of the N objects; the Q first image correction models are, among the K first image correction models, the image correction models corresponding to the Q first images; and the second image correction model is, among the N image correction models, the image correction model corresponding to the first object.
Optionally, in an embodiment of the present application, each of the K first image correction models includes: the image processing parameter corresponding to the first characteristic information and the image processing parameter corresponding to the second characteristic information; the determining module 602 is specifically configured to determine target first feature information according to the comment scores of the Q pieces of first feature information, and determine target second feature information according to the comment scores of the Q pieces of second feature information; the determining module 602 is specifically configured to determine the second image modification model according to the image processing parameter corresponding to the target first characteristic information and the image processing parameter corresponding to the target second characteristic information.
Optionally, in this embodiment of the application, the determining module 602 is specifically configured to determine the comment scores of the Q first image correction models according to the comment scores of the Q first feature information and the comment scores of the Q second feature information, respectively; the determining module 602 is specifically configured to determine a first image correction model with a highest comment score among the Q first image correction models as the second image correction model.
The image processing apparatus in the embodiment of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The image processing apparatus in the embodiment of the application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the application.
The image processing apparatus provided in the embodiment of the present application can implement each process implemented in the method embodiment of fig. 1, and is not described here again to avoid repetition.
Optionally, as shown in fig. 3, an electronic device 700 is further provided in an embodiment of the present application, and includes a processor 701, a memory 702, and a program or an instruction stored in the memory 702 and executable on the processor 701, where the program or the instruction is executed by the processor 701 to implement each process of the image processing method embodiment, and can achieve the same technical effect, and no further description is provided here to avoid repetition.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 4 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 100 includes, but is not limited to: a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, and a processor 110.
Those skilled in the art will appreciate that the electronic device 100 may further include a power source (e.g., a battery) for supplying power to the various components; the power source may be logically connected to the processor 110 through a power management system, which manages charging, discharging, and power consumption. The electronic device structure shown in fig. 4 does not constitute a limitation of the electronic device: the electronic device may include more or fewer components than shown, combine some components, or arrange the components differently, which is not described again here.
The processor 110 is configured to acquire M pieces of object feature information of M objects in a target image, where each object corresponds to one piece of object feature information and M is a positive integer. The processor 110 is further configured to determine N image correction models according to N pieces of object feature information among the M pieces, where each piece corresponds to one image correction model and N is a positive integer less than or equal to M. The processor 110 is further configured to perform image processing on N image regions of the target image based on each of the N image correction models, where the N image regions are the image regions in which the N objects corresponding to the N pieces of object feature information are located.
In this embodiment, the electronic device may first acquire the object feature information of all objects in the target image, determine N image correction models according to the object feature information of some or all of those objects (the N objects), and process, based on the N image correction models, the N image regions in which the N objects corresponding to the N pieces of object feature information are located. In this way, when the electronic device processes multiple objects in the target image, the image area of each object can be processed with the image correction model best suited to that object, which improves the rendering of every object and, ultimately, the effect of the whole picture.
Optionally, the processor 110 is further configured to obtain K first images and K comment information of the K first images, where each first image corresponds to one comment information, each of the K first images includes the N objects, one object corresponds to at least one first image, and K is an integer greater than or equal to N; the processor 110 is further configured to determine K first image modification models according to the K first images, where each first image corresponds to one first image modification model; the processor 110 is further configured to determine the N image correction models based on the K pieces of comment information and the K pieces of first image correction models, respectively.
Optionally, each of the K first image correction models is used to adjust the image area corresponding to the first feature information and the image area corresponding to the second feature information. The processor 110 is specifically configured to determine, according to the Q pieces of comment information, the comment scores of the Q pieces of first feature information and the comment scores of the Q pieces of second feature information of the Q first image correction models, respectively, where each piece of comment information corresponds to a comment score of the first feature information and a comment score of the second feature information, and Q is a positive integer less than or equal to K. The processor 110 is further specifically configured to determine a second image correction model based on the comment scores of the Q pieces of first feature information and the comment scores of the Q pieces of second feature information. Here, the Q pieces of comment information are, among the K pieces of comment information, the comment information of the Q first images corresponding to a first object; the Q first images are the images among the K first images that include the first object; the first object is any one of the N objects; the Q first image correction models are, among the K first image correction models, the first image correction models corresponding to the Q first images; and the second image correction model is, among the N image correction models, the image correction model corresponding to the first object.
Optionally, each of the K first image modification models includes: the image processing parameter corresponding to the first characteristic information and the image processing parameter corresponding to the second characteristic information; the processor 110 is specifically configured to determine target first feature information according to the comment scores of the Q pieces of first feature information, and determine target second feature information according to the comment scores of the Q pieces of second feature information; the processor 110 is further specifically configured to determine the second image correction model according to the image processing parameter corresponding to the target first characteristic information and the image processing parameter corresponding to the target second characteristic information.
Optionally, the processor 110 is specifically configured to determine the comment scores of the Q first image correction models according to the comment scores of the Q pieces of first feature information and the comment scores of the Q pieces of second feature information, respectively; the processor 110 is further specifically configured to determine the first image correction model with the highest comment score among the Q first image correction models as the second image correction model.
It should be understood that, in the embodiment of the present application, the input Unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042, and the Graphics Processing Unit 1041 processes image data of a still picture or a video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071 is also referred to as a touch screen. The touch panel 1071 may include two parts of a touch detection device and a touch controller. Other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 109 may be used to store software programs as well as various data including, but not limited to, application programs and an operating system. The processor 110 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the embodiment of the image processing method, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chip mentioned in the embodiments of the application may also be referred to as a system-level chip, a system chip, a chip system, or a system-on-chip.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus comprising a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises that element. Furthermore, the methods and apparatus of the embodiments of the application are not limited to performing functions in the order illustrated or discussed; depending on the functions involved, they may also perform functions in a substantially simultaneous manner or in reverse order. For example, the described methods may be performed in an order different from the one described, and steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, or by hardware, though in many cases the former is the better implementation. Based on this understanding, the technical solutions of the application may be embodied in the form of a computer software product stored in a storage medium (such as a ROM/RAM, magnetic disk, or optical disk), including several instructions that cause a terminal (such as a mobile phone, computer, server, or network device) to execute the methods of the embodiments of the application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (11)

1. An image processing method, characterized in that the method comprises:
acquiring M object characteristic information of M objects in a target image, wherein each object corresponds to one object characteristic information, and M is a positive integer;
determining N image correction models according to N object feature information in the M object feature information, wherein each object feature information corresponds to one image correction model, and N is a positive integer less than or equal to M;
respectively carrying out image processing on N image areas of the target image based on each image correction model in the N image correction models;
wherein the N image regions are the image regions in which the N objects corresponding to the N pieces of object feature information are located.
2. The method of claim 1, wherein prior to obtaining the M object feature information for the M objects in the target image, the method further comprises:
acquiring K first images and K pieces of comment information of the K first images, wherein each first image corresponds to one piece of comment information, the K first images comprise the N objects, one object corresponds to at least one first image, and K is an integer greater than or equal to N;
determining K first image correction models according to the K first images, wherein each first image corresponds to one first image correction model;
determining the N image correction models based on the K pieces of comment information and the K pieces of first image correction models, respectively.
3. The method of claim 2, wherein each of the K first image correction models is used to: adjust an image area corresponding to first feature information and an image area corresponding to second feature information;
the determining the N image correction models based on the K pieces of comment information and the K first image correction models, respectively, includes:
according to the Q pieces of comment information, comment scores of the Q pieces of first feature information and comment scores of the Q pieces of second feature information of the Q pieces of first image correction models are respectively determined; each comment information corresponds to: a comment score of the first characteristic information and a comment score of the second characteristic information, Q being a positive integer less than or equal to K;
determining a second image correction model based on the comment scores of the Q pieces of first characteristic information and the comment scores of the Q pieces of second characteristic information;
wherein the Q pieces of comment information are, among the K pieces of comment information, the comment information of the Q first images corresponding to a first object; the Q first images are, among the K first images, the images that include the first object; and the first object is any one of the N objects;
the Q first image correction models are, among the K first image correction models, the image correction models corresponding to the Q first images;
the second image correction model is, among the N image correction models, the image correction model corresponding to the first object.
4. The method of claim 3, wherein each of the K first image correction models comprises: an image processing parameter corresponding to the first feature information and an image processing parameter corresponding to the second feature information;
the determining a second image correction model based on the comment scores of the Q pieces of first feature information and the comment scores of the Q pieces of second feature information comprises:
determining target first feature information according to the comment scores of the Q pieces of first feature information, and determining target second feature information according to the comment scores of the Q pieces of second feature information, wherein the target first feature information is any one of the Q pieces of first feature information, and the target second feature information is any one of the Q pieces of second feature information;
and determining the second image correction model according to the image processing parameter corresponding to the target first feature information and the image processing parameter corresponding to the target second feature information.
5. The method of claim 3, wherein the determining a second image correction model based on the comment scores of the Q pieces of first feature information and the comment scores of the Q pieces of second feature information comprises:
determining comment scores of the Q first image correction models respectively according to the comment scores of the Q pieces of first feature information and the comment scores of the Q pieces of second feature information;
and determining the first image correction model with the highest comment score among the Q first image correction models as the second image correction model.
6. An image processing apparatus, characterized in that the apparatus comprises: an acquisition module, a determination module and a processing module;
the acquisition module is configured to acquire M pieces of object feature information of M objects in a target image, wherein each object corresponds to one piece of object feature information, and M is a positive integer;
the determination module is configured to determine N image correction models according to N pieces of object feature information among the M pieces of object feature information acquired by the acquisition module, wherein each piece of object feature information corresponds to one image correction model, and N is a positive integer less than or equal to M;
the processing module is configured to perform image processing on N image areas of the target image respectively based on each of the N image correction models determined by the determination module;
wherein the N image areas are: the image areas where the N objects corresponding to the N pieces of object feature information are located.
7. The apparatus of claim 6, wherein
the acquisition module is further configured to acquire K first images and K pieces of comment information of the K first images, wherein each first image corresponds to one piece of comment information, the K first images comprise the N objects, one object corresponds to at least one first image, and K is an integer greater than or equal to N;
the determination module is further configured to determine K first image correction models according to the K first images acquired by the acquisition module, wherein each first image corresponds to one first image correction model;
the determination module is further configured to determine the N image correction models respectively based on the K pieces of comment information and the K first image correction models.
8. The apparatus of claim 7, wherein each of the K first image correction models is used to: adjust an image area corresponding to first feature information and an image area corresponding to second feature information;
the determination module is specifically configured to determine, according to Q pieces of comment information, comment scores of Q pieces of first feature information and comment scores of Q pieces of second feature information of Q first image correction models, respectively; wherein each piece of comment information corresponds to: a comment score of the first feature information and a comment score of the second feature information, and Q is a positive integer less than or equal to K;
the determination module is further specifically configured to determine a second image correction model based on the comment scores of the Q pieces of first feature information and the comment scores of the Q pieces of second feature information;
wherein the Q pieces of comment information are: the comment information, among the K pieces of comment information, of Q first images corresponding to a first object; the Q first images are: the images, among the K first images, that include the first object; and the first object is: any one of the N objects;
the Q first image correction models are: the image correction models, among the K first image correction models, corresponding to the Q first images; and
the second image correction model is: the image correction model, among the N image correction models, corresponding to the first object.
9. The apparatus of claim 8, wherein each of the K first image correction models comprises: an image processing parameter corresponding to the first feature information and an image processing parameter corresponding to the second feature information;
the determination module is specifically configured to determine target first feature information according to the comment scores of the Q pieces of first feature information, and determine target second feature information according to the comment scores of the Q pieces of second feature information;
the determination module is further specifically configured to determine the second image correction model according to the image processing parameter corresponding to the target first feature information and the image processing parameter corresponding to the target second feature information.
10. The apparatus of claim 8, wherein
the determination module is specifically configured to determine comment scores of the Q first image correction models respectively according to the comment scores of the Q pieces of first feature information and the comment scores of the Q pieces of second feature information;
the determination module is further specifically configured to determine, as the second image correction model, the first image correction model with the highest comment score among the Q first image correction models.
11. An electronic device, comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the image processing method according to any one of claims 1 to 5.
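
By way of illustration only, the two selection strategies of claims 4 and 5 might look roughly as follows. This is a minimal sketch, assuming Python, one piece of comment information per first image (and hence per first image correction model), and a hypothetical scorer score_comment(comment, feature) that turns a piece of comment information into a comment score for the given feature; none of these names or data layouts is prescribed by the claims.

from typing import Callable, Dict, List

def select_by_feature(models: List[Dict],
                      comments: List[str],
                      score_comment: Callable[[str, str], float]) -> Dict:
    """Claim 4 style: combine the best-scored image processing parameters per feature."""
    first_scores = [score_comment(c, "first") for c in comments]    # Q comment scores, first feature
    second_scores = [score_comment(c, "second") for c in comments]  # Q comment scores, second feature
    best_first = models[first_scores.index(max(first_scores))]      # model with target first feature information
    best_second = models[second_scores.index(max(second_scores))]   # model with target second feature information
    return {  # the second image correction model for this object
        "first_feature_params": best_first["first_feature_params"],
        "second_feature_params": best_second["second_feature_params"],
    }

def select_by_model(models: List[Dict],
                    comments: List[str],
                    score_comment: Callable[[str, str], float]) -> Dict:
    """Claim 5 style: score each whole model and keep the highest-scored one."""
    totals = [score_comment(c, "first") + score_comment(c, "second") for c in comments]
    return models[totals.index(max(totals))]

The two readings differ in granularity: claim 4 mixes the best parameters per feature across the Q candidate models, while claim 5 keeps the single best candidate model intact; the claims leave open which behavior a product exposes.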
CN202110604658.3A 2021-05-31 2021-05-31 Image processing method and device and electronic equipment Pending CN113344812A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110604658.3A CN113344812A (en) 2021-05-31 2021-05-31 Image processing method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN113344812A 2021-09-03

Family

ID=77473553

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108012081A (en) * 2017-12-08 2018-05-08 北京百度网讯科技有限公司 Intelligent face beautification method, apparatus, terminal and computer-readable recording medium
CN108958592A (en) * 2018-07-11 2018-12-07 Oppo广东移动通信有限公司 Video processing method and related product
CN110136198A (en) * 2018-02-09 2019-08-16 腾讯科技(深圳)有限公司 Image processing method and apparatus, device and storage medium
CN110192388A (en) * 2016-12-01 2019-08-30 夏普株式会社 Image processing apparatus, digital camera, image processing program and recording medium
CN111047511A (en) * 2019-12-31 2020-04-21 维沃移动通信有限公司 Image processing method and electronic device
CN111402157A (en) * 2020-03-12 2020-07-10 维沃移动通信有限公司 Image processing method and electronic device
CN111815504A (en) * 2020-06-30 2020-10-23 北京金山云网络技术有限公司 Image generation method and device
CN112598605A (en) * 2021-03-08 2021-04-02 江苏龙虎网信息科技股份有限公司 Photo cloud-transmission live retouching system based on face recognition
CN112785488A (en) * 2019-11-11 2021-05-11 宇龙计算机通信科技(深圳)有限公司 Image processing method and device, storage medium and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination