CN111402157A - Image processing method and electronic equipment - Google Patents
- Publication number
- CN111402157A (application CN202010170787.1A)
- Authority
- CN
- China
- Prior art keywords
- image
- user
- processing
- person image
- preset
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/77—Retouching; Inpainting; Scratch removal
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
- G06T2207/10016—Video; Image sequence
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The invention provides an image processing method and an electronic device. The image processing method includes: acquiring a first person image of a first user and a second person image of a second user in a preset interface; and processing the first person image and the second person image according to a preset processing rule; the first user is a target user, and the second user is a target user or a non-target user; the preset processing rule includes a preset value of the difference between a first image score corresponding to the processed first person image and a second image score corresponding to the processed second person image. With this scheme, when multiple people appear on the same interface, different beautification can be applied to different users according to the preset processing rule, which meets users' personalized requirements, improves the intelligence of image processing, and well solves the problem that image processing schemes in the prior art are not intelligent enough.
Description
Technical Field
The present invention relates to the field of electronic devices, and in particular, to an image processing method and an electronic device.
Background
Currently, selfie shooting is one of the most frequently used functions of a mobile terminal such as a mobile phone. When a user takes a selfie, it is very common for several people to appear in the field of view of the terminal camera at the same time. A person appearing in the field of view may be a passerby, or may be another user taking the picture together with the user.
In the related art, during selfie shooting the terminal applies processing such as beautification to every person in the camera's field of view. However, apart from the user, many of the other people appearing in the field of view may not need beautification at all. Moreover, conventional selfie beautification is applied without differentiation, so that all faces receive the same treatment regardless of the person, and the personalized requirements of users cannot be met.
As can be seen from the above, existing image processing schemes are not intelligent enough to meet the personalized requirements of users.
Disclosure of Invention
The invention aims to provide an image processing method and an electronic device, so as to solve the problem that image processing schemes in the prior art are not intelligent enough.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides an image processing method, which is applied to an electronic device, and the image processing method includes:
acquiring a first person image of a first user and a second person image of a second user in a preset interface;
processing the first person image and the second person image according to a preset processing rule;
the first user is a target user, and the second user is a target user or a non-target user;
the preset processing rule comprises a preset value of difference between a first image score corresponding to the processed first person image and a second image score corresponding to the processed second person image.
In a second aspect, an embodiment of the present invention further provides an electronic device, where the electronic device includes:
the first acquisition module is used for acquiring a first person image of a first user and a second person image of a second user in a preset interface;
the first processing module is used for processing the first person image and the second person image according to a preset processing rule;
the first user is a target user, and the second user is a target user or a non-target user;
the preset processing rule comprises a preset value of difference between a first image score corresponding to the processed first person image and a second image score corresponding to the processed second person image.
In a third aspect, an embodiment of the present invention further provides an electronic device, which includes a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the image processing method described above.
In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the image processing method described above.
In the embodiment of the invention, a first person image of a first user and a second person image of a second user in a preset interface are obtained; the first person image and the second person image are processed according to a preset processing rule; the first user is a target user, and the second user is a target user or a non-target user; and the preset processing rule includes a preset value of the difference between a first image score corresponding to the processed first person image and a second image score corresponding to the processed second person image. In this way, when multiple people appear on the same interface, different beautification can be applied to different users according to the preset processing rule, which meets users' personalized requirements, improves the intelligence of image processing, and well solves the problem that image processing schemes in the prior art are not intelligent enough.
Drawings
FIG. 1 is a flowchart illustrating an image processing method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a specific application flow of an image processing method according to an embodiment of the present invention;
FIG. 3 is a first schematic structural diagram of an electronic device according to an embodiment of the present invention;
FIG. 4 is a second schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Aiming at the problem that image processing schemes in the prior art are not intelligent enough, the present invention provides an image processing method applied to an electronic device. As shown in FIG. 1, the image processing method includes:
step 11: the method comprises the steps of obtaining a first person image of a first user and a second person image of a second user in a preset interface.
The preset interface may be any interface where a person image appears, such as a photographing interface, a conference video interface, or an image editing interface, and the person image may be an image including a face of a user, which is not limited herein.
Step 12: processing the first person image and the second person image according to a preset processing rule; the first user is a target user, and the second user is a target user or a non-target user; the preset processing rule comprises a preset value of difference between a first image score corresponding to the processed first person image and a second image score corresponding to the processed second person image.
The target user may be a user whose person image, as identified by the electronic device, matches a preset feature (such as long hair), or a specific user set in advance (for example, preset through a person image), which is not limited herein.
The preset processing rule may be a selected target processing rule; there may be a correspondence between preset processing rules and user identities, and the preset processing rule may be selected or adjusted before step 12 is executed.
The preset value may be a positive value, a negative value or zero, and is not limited herein.
The image processing method provided by the embodiment of the invention acquires a first person image of a first user and a second person image of a second user in a preset interface, and processes the first person image and the second person image according to a preset processing rule; the first user is a target user, and the second user is a target user or a non-target user; the preset processing rule includes a preset value of the difference between a first image score corresponding to the processed first person image and a second image score corresponding to the processed second person image. In this way, when multiple people appear on the same interface, different beautification can be applied to different users according to the preset processing rule, which meets users' personalized requirements, improves the intelligence of image processing, and well solves the problem that image processing schemes in the prior art are not intelligent enough.
In the case that the second user is a non-target user, acquiring the second person image of the second user in the preset interface includes: acquiring person images of all non-target users in the preset interface; performing scoring processing on each acquired person image to obtain an image score corresponding to each person image; obtaining a target image score from the obtained image scores; and determining the person image corresponding to the target image score as the second person image.
The target image score may be a highest numerical score, a lowest numerical score, or a numerically intermediate score, and is not limited herein.
This can facilitate accurate and quick determination of the second person image.
Specifically, the obtaining of the target image score from the obtained image scores includes: and acquiring the image score with the maximum value from the obtained image scores to serve as a target image score.
This can satisfy the image processing requirements of the user with respect to the person image with the highest score.
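By way of a non-limiting illustration, the selection of the second person image described above could be sketched in Python as follows; the data structure, its field names and the scoring callback are assumptions introduced purely for illustration and are not part of the disclosure.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class PersonImage:
    person_id: str         # assumed identifier for a detected person
    is_target_user: bool   # whether the person was recognized as a target user
    score: float = 0.0     # image score filled in by the scoring step


def select_second_person_image(persons: List[PersonImage],
                               score_fn: Callable[[PersonImage], float]) -> PersonImage:
    """Score every non-target person image and return the one with the
    highest (largest-valued) image score as the second person image."""
    non_targets = [p for p in persons if not p.is_target_user]
    for p in non_targets:
        p.score = score_fn(p)                       # scoring processing per person image
    return max(non_targets, key=lambda p: p.score)  # target image score = maximum value
```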
In an embodiment of the present invention, the processing the first person image and the second person image according to a preset processing rule includes: acquiring first input parameter information; processing the first person image and the second person image according to the first input parameter information and a preset processing rule; the first input parameter information is image processing parameter information for the first person image, or the first input parameter information is image processing parameter information for the second person image.
Processing the images according to both the input information and the preset processing rule can meet the requirement that, when the user adjusts one image, the remaining images are automatically adjusted accordingly, which speeds up processing.
Further, after the processing the first person image and the second person image according to the preset processing rule, the method further includes: acquiring second input parameter information; reprocessing the first person image and the second person image according to the second input parameter information and the preset processing rule; the second input parameter information is image processing parameter information for the first person image, or the second input parameter information is image processing parameter information for the second person image.
That is, after the images are automatically processed according to the processing rule, they can be adjusted again according to the input information, so that the user's requirements are better met; moreover, when the user adjusts one image, the remaining images are automatically adjusted accordingly, which speeds up processing.
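By way of a non-limiting illustration, the re-processing described above can be reduced to the following score bookkeeping; representing each person image by a single score, and the names used here, are simplifying assumptions made only for illustration.

```python
def reprocess_after_manual_edit(first_score: float, second_score: float,
                                edited: str, new_score: float,
                                preset_diff: float) -> tuple[float, float]:
    """When one of the two person images is manually adjusted to new_score,
    recompute the other image's score target so that the difference
    (first image score - second image score) still equals the preset value."""
    if edited == "first":
        first_score = new_score
        second_score = first_score - preset_diff
    else:
        second_score = new_score
        first_score = second_score + preset_diff
    return first_score, second_score


# With a preset difference of 3, manually raising the first image to 90
# automatically sets the second image's target to 87.
print(reprocess_after_manual_edit(70.0, 77.0, "first", 90.0, 3.0))  # (90.0, 87.0)
```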
In the embodiment of the present invention, the preset processing rule is a selected target processing rule, or the preset processing rule is a processing rule corresponding to a target user with the highest priority; and the target user with the highest priority level is the target user with the highest priority level in all the users in the preset interface.
Therefore, the personalized requirements of the user can be better met.
The image processing method provided by the embodiment of the present invention is further described below, taking a terminal as an example of the electronic device, the camera shooting interface of a multi-person selfie as an example of the preset interface, and user beautification (specifically, processing of the users' portraits) as an example of the image processing.
In view of the above technical problems, an embodiment of the present invention provides an image processing method that involves the following: with a beautification policy preset by the user (i.e., the preset processing rule), all faces in the camera's field of view (i.e., the camera shooting interface) are beautified intelligently in a multi-person selfie scene. Specifically, the terminal scores the face value of each portrait in the camera view and records each person's score; it then beautifies each person intelligently according to the preset beautification policy and each person's score. Moreover, when the user needs to adjust manually, only one person needs to be operated on manually, and the others are beautified automatically according to the preset beautification policy.
More specifically, one implementation of the scheme provided by the embodiment of the present invention is as follows: in a multi-person selfie scenario, performing differentiated beautification according to a user-defined beautification policy may include the following steps, as shown in FIG. 2:
step 21: a user firstly defines a beautifying strategy;
if the score of the color value of the user 1 (i.e. the target user) is at least 3 points higher than that of the non-user (i.e. the non-target user), and the score of the color value of the user 2 (i.e. the target user) is at least 5 points higher than that of the non-user.
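By way of a non-limiting illustration, the user-defined policy of step 21 could be represented as a simple mapping; the dictionary form and the user identifiers below are assumptions made only for illustration.

```python
# Each target user is mapped to the minimum number of points by which that
# user's post-beautification face-value score should exceed the non-users'.
beauty_policy = {
    "user_1": 3,  # user 1: at least 3 points higher than the non-users
    "user_2": 5,  # user 2: at least 5 points higher than the non-users
}
```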
Step 22: when the front camera is opened for a selfie, the terminal first identifies whether there is only one face in the camera's field of view; if so, proceed to step 23, otherwise proceed to step 24;
specifically, when the front camera is turned on for a selfie, the terminal first detects the number of faces in the camera's field of view; it jumps to step 23 when there is only one face in the view, and to step 24 when there are multiple (at least two) faces.
Step 23: perform beautification according to the normal flow;
that is, if there is only one face in the camera view, the face is beautified according to the normal flow (i.e., the non-differentiated beautification and image processing of the conventional process are performed).
Step 24: determine whether a user (i.e., a target user) exists in the camera's field of view; if not, proceed to step 25, and if so, proceed to step 26;
that is, if there are multiple faces in the camera view, the faces in the view are identified first; if there is no user, jump to step 25, and if there is a user, jump to step 26.
Step 25: perform beautification according to the normal flow;
that is, if there is no user (i.e., no target user) in the camera's field of view, normal beautification is applied.
Step 26: the terminal scores the face values of all people in the camera view and applies differentiated beautification to everyone appearing in the view according to the selected beautification policy;
here, two cases are involved: the case of only one user (case one) and the case of multiple (at least two) users (case two);
for case one, the terminal scores the face values of all people in the camera view and applies differentiated beautification to the user and the other people;
Specifically, if there is one user in the camera view, the terminal scores the face value of every person in the view and applies different beautification to the user and the others, so that the user's post-beautification score meets the policy preset by that user (a specific case of the selected beautification policy). For example, if the terminal identifies user 1 in the camera view, it beautifies user 1 differently so that user 1's post-beautification face-value score is higher, by at least 3 points, than the highest non-user face-value score in the camera view before beautification; that is, assume that before beautification user 1 scores 70 and non-user 1 (the highest-scoring non-user) scores 77; then after beautification user 1 scores 90 and non-user 1 scores 87.
For case two, the terminal scores the face values of all the people in the camera view and makes the beautification of each user's portrait in the view meet the beautification policy preset for that user (i.e., the selected beautification policy);
specifically, if there are multiple users in the camera view, the terminal scores everyone in the view and makes the beautification of each user's portrait meet the policy preset by that user. For example, if the terminal identifies user 1 and user 2 in the field of view, it applies different beautification to user 1 and user 2, so that user 1's post-beautification face-value score is higher, by at least 3 points, than the highest non-user face-value score in the view before beautification, and user 2's post-beautification face-value score is higher, by at least 5 points, than that same score; that is, assume that before beautification user 1 scores 70, user 2 scores 75, and non-user 1 scores 77; then after beautification user 1 scores 90, user 2 scores 92, and non-user 1 scores 87.
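By way of a non-limiting illustration, the worked numbers of the two cases above can be reproduced with the following sketch; the fixed lift applied to non-users (10 points) and measuring each margin against the beautified best non-user score are assumptions made only so that the example scores (70/75/77 before, 90/92/87 after) come out as described.

```python
def differentiated_targets(user_scores_before: dict[str, float],
                           best_non_user_before: float,
                           policy: dict[str, float],
                           non_user_lift: float = 10.0) -> tuple[dict[str, float], float]:
    """Sketch of step 26: non-users receive a generic lift, and each target
    user's post-beautification score target is set the policy margin above the
    beautified best non-user score."""
    non_user_after = best_non_user_before + non_user_lift          # 77 -> 87
    targets = {user: non_user_after + margin                       # 87+3=90, 87+5=92
               for user, margin in policy.items() if user in user_scores_before}
    return targets, non_user_after


# Case two of the example: user 1 (70) and user 2 (75) plus the best non-user (77).
targets, non_user_after = differentiated_targets(
    {"user_1": 70, "user_2": 75}, 77, {"user_1": 3, "user_2": 5})
# targets == {"user_1": 90.0, "user_2": 92.0}, non_user_after == 87.0
```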
Step 27: after step 26, when someone in the camera view is manually beautified, the terminal automatically beautifies the others in the view so that everyone's post-beautification face-value score still meets the preset policy (i.e., the selected beautification policy);
two cases are involved for the person being manually beautified: the person is a user (case one), or the person is a non-user (case two);
for case one, when a user is manually beautified, the terminal automatically applies differentiated beautification to the other people who are not manually beautified, so that the user's post-beautification face-value score meets the preset policy (i.e., the selected beautification policy);
specifically, when a user is manually beautified, the terminal automatically applies differentiated beautification to the other people who are not manually beautified so that the user's post-beautification face-value score meets the preset policy; for example, if user 1 is manually beautified, the terminal automatically adjusts the beautification effect of the other people (including other users and/or non-users) who are not manually beautified, so that user 1's post-beautification face-value score remains higher than that of the non-users.
For case two, when a non-user is manually beautified, the terminal automatically applies differentiated beautification to the other people who are not manually beautified;
specifically, when a non-user is manually beautified, the terminal automatically applies differentiated beautification to the other people who are not manually beautified (i.e., differentiated processing of the person images that were not manually adjusted); for example, if non-user 1 is manually beautified, the terminal automatically adjusts the beautification effect of the other non-users and applies differentiated beautification to user 1 among the remaining people, so that user 1's post-beautification face-value score remains higher, by at least 3 points, than the highest non-user face-value score.
It should be noted that the above-mentioned face-value score may be the image score corresponding to the person image of the corresponding user.
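By way of a non-limiting illustration, the automatic re-adjustment of step 27 could be sketched as the following score bookkeeping; the person identifiers and the score-only abstraction are assumptions made only for illustration, and the beautification that actually realizes a target score is outside this sketch.

```python
def readjust_after_manual_beauty(scores: dict[str, float], edited: str,
                                 edited_score: float,
                                 policy: dict[str, float]) -> dict[str, float]:
    """After one person is manually beautified, recompute everyone else's
    score target so that every margin in the beautification policy still holds."""
    scores = dict(scores)
    scores[edited] = edited_score                      # keep the manual result
    non_users = [p for p in scores if p not in policy]
    if edited in policy:
        # Case one: a target user was edited manually; tone non-users down if
        # needed so that the edited user's own margin is preserved.
        cap = edited_score - policy[edited]
        for p in non_users:
            scores[p] = min(scores[p], cap)
    best_non_user = max((scores[p] for p in non_users), default=0.0)
    # In both cases, the remaining target users are lifted back to their margins.
    for user, margin in policy.items():
        if user != edited:
            scores[user] = best_non_user + margin
    return scores
```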
As can be seen from the above, the scheme provided by the embodiment of the present invention mainly involves: confirming whether a target user exists in the camera's field of view; if so, performing differentiated beautification (i.e., the image processing method described above), and if not, performing normal beautification (i.e., the conventional image processing (beautification) method);
the differentiated beautification can be performed automatically according to the beautification policy; furthermore, it can also be performed according to the beautification policy in combination with input parameter information (specifically, the user input information corresponding to manual beautification).
In summary, the scheme provided by the embodiment of the invention can achieve the purpose of applying different beautification to different people according to the preset policy in a multi-person selfie scene.
An embodiment of the present invention further provides an electronic device, as shown in fig. 3, where the electronic device includes:
the first obtaining module 31 is configured to obtain a first person image of a first user and a second person image of a second user in a preset interface;
the first processing module 32 is configured to process the first person image and the second person image according to a preset processing rule;
the first user is a target user, and the second user is a target user or a non-target user;
the preset processing rule comprises a preset value of difference between a first image score corresponding to the processed first person image and a second image score corresponding to the processed second person image.
The electronic device provided by the embodiment of the invention acquires a first person image of a first user and a second person image of a second user in a preset interface, and processes the first person image and the second person image according to a preset processing rule; the first user is a target user, and the second user is a target user or a non-target user; the preset processing rule includes a preset value of the difference between a first image score corresponding to the processed first person image and a second image score corresponding to the processed second person image. In this way, when multiple people appear on the same interface, different beautification can be applied to different users according to the preset processing rule, which meets users' personalized requirements, improves the intelligence of image processing, and well solves the problem that image processing schemes in the prior art are not intelligent enough.
Wherein, in the case that the second user is a non-target user, the first obtaining module includes: the first obtaining sub-module, used for obtaining person images of all non-target users in the preset interface; the first processing submodule, used for carrying out scoring processing on each acquired person image to obtain an image score corresponding to each person image; the second obtaining submodule, used for obtaining a target image score from the obtained image scores; and the first determining submodule, used for determining the person image corresponding to the target image score as the second person image.
Specifically, the second obtaining sub-module includes: a first obtaining unit configured to obtain, as a target image score, the image score having a largest numerical value from the obtained image scores.
In an embodiment of the present invention, the first processing module includes: the third obtaining submodule is used for obtaining the first input parameter information; the second processing submodule is used for processing the first person image and the second person image according to the first input parameter information and a preset processing rule; the first input parameter information is image processing parameter information for the first person image, or the first input parameter information is image processing parameter information for the second person image.
Further, the electronic device further includes: the second acquisition module is used for acquiring second input parameter information after the first person image and the second person image are processed according to a preset processing rule; the second processing module is used for carrying out reprocessing on the first person image and the second person image according to the second input parameter information and the preset processing rule; the second input parameter information is image processing parameter information for the first person image, or the second input parameter information is image processing parameter information for the second person image.
In the embodiment of the present invention, the preset processing rule is a selected target processing rule, or the preset processing rule is a processing rule corresponding to a target user with the highest priority; and the target user with the highest priority level is the target user with the highest priority level in all the users in the preset interface.
The electronic device provided in the embodiment of the present invention can implement each process implemented by the electronic device in the method embodiments of fig. 1 to fig. 2, and is not described herein again to avoid repetition.
Fig. 4 is a schematic diagram of a hardware structure of an electronic device 40 for implementing various embodiments of the present invention, where the electronic device 40 includes, but is not limited to: radio frequency unit 41, network module 42, audio output unit 43, input unit 44, sensor 45, display unit 46, user input unit 47, interface unit 48, memory 49, processor 410, and power supply 411. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 4 does not constitute a limitation of the electronic device, and that the electronic device may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The processor 410 is configured to obtain a first person image of a first user and a second person image of a second user in a preset interface; processing the first person image and the second person image according to a preset processing rule; the first user is a target user, and the second user is a target user or a non-target user; the preset processing rule comprises a preset value of difference between a first image score corresponding to the processed first person image and a second image score corresponding to the processed second person image.
In the embodiment of the invention, a first person image of a first user and a second person image of a second user in a preset interface are obtained; the first person image and the second person image are processed according to a preset processing rule; the first user is a target user, and the second user is a target user or a non-target user; and the preset processing rule includes a preset value of the difference between a first image score corresponding to the processed first person image and a second image score corresponding to the processed second person image. In this way, when multiple people appear on the same interface, different beautification can be applied to different users according to the preset processing rule, which meets users' personalized requirements, improves the intelligence of image processing, and well solves the problem that image processing schemes in the prior art are not intelligent enough.
Optionally, in a case that the second user is a non-target user, the processor 410 is specifically configured to obtain a person image of each non-target user in the preset interface; performing scoring processing on each acquired person image to obtain image scores corresponding to each person image; obtaining a target image score from the obtained image scores; and determining the person image corresponding to the target image score as the second person image.
Optionally, the processor 410 is specifically configured to obtain the image score with the largest numerical value from the obtained image scores, as a target image score.
Optionally, the processor 410 is specifically configured to obtain first input parameter information; processing the first person image and the second person image according to the first input parameter information and a preset processing rule; the first input parameter information is image processing parameter information for the first person image, or the first input parameter information is image processing parameter information for the second person image.
Optionally, the processor 410 is further configured to, after processing the first person image and the second person image according to a preset processing rule, obtain second input parameter information; reprocessing the first person image and the second person image according to the second input parameter information and the preset processing rule; the second input parameter information is image processing parameter information for the first person image, or the second input parameter information is image processing parameter information for the second person image.
Optionally, the preset processing rule is a selected target processing rule, or the preset processing rule is a processing rule corresponding to a target user with the highest priority level; and the target user with the highest priority level is the target user with the highest priority level in all the users in the preset interface.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 41 may be used for receiving and sending signals during a message sending/receiving process or a call process; specifically, it receives downlink data from a base station and forwards the data to the processor 410 for processing, and transmits uplink data to the base station. In general, the radio frequency unit 41 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 41 can also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 42, such as to assist the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 43 may convert audio data received by the radio frequency unit 41 or the network module 42 or stored in the memory 49 into an audio signal and output as sound. Also, the audio output unit 43 may also provide audio output related to a specific function performed by the electronic apparatus 40 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 43 includes a speaker, a buzzer, a receiver, and the like.
The input unit 44 is for receiving an audio or video signal. The input unit 44 may include a Graphics Processing Unit (GPU) 441 and a microphone 442; the graphics processor 441 processes image data of still pictures or video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 46. The image frames processed by the graphics processor 441 may be stored in the memory 49 (or other storage medium) or transmitted via the radio frequency unit 41 or the network module 42. The microphone 442 may receive sound and may be capable of processing such sound into audio data. In the phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 41 and then output.
The electronic device 40 also includes at least one sensor 45, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor includes an ambient light sensor that adjusts the brightness of the display panel 461 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 461 and/or the backlight when the electronic device 40 moves to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of an electronic device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 45 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 46 may include a display panel 461, and the display panel 461 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like.
The user input unit 47 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 47 includes a touch panel 471 and other input devices 472. The touch panel 471, also referred to as a touch screen, may collect touch operations by a user (e.g., operations by a user on or near the touch panel 471 using a finger, a stylus, or any other suitable object or accessory). The touch panel 471 can include two parts, a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 410, receives a command from the processor 410, and executes the command. In addition, the touch panel 471 can be implemented by various types, such as resistive, capacitive, infrared, and surface acoustic wave. The user input unit 47 may include other input devices 472 in addition to the touch panel 471. Specifically, the other input devices 472 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a track ball, a mouse, and a joystick, which are not described herein again.
Further, the touch panel 471 can be overlaid on the display panel 461, and when the touch panel 471 detects a touch operation on or near the touch panel 471, the touch panel transmits the touch operation to the processor 410 to determine the type of the touch event, and then the processor 410 provides a corresponding visual output on the display panel 461 according to the type of the touch event. Although the touch panel 471 and the display panel 461 are shown as two separate components in fig. 4, in some embodiments, the touch panel 471 and the display panel 461 may be integrated to implement the input and output functions of the electronic device, and are not limited herein.
The interface unit 48 is an interface for connecting an external device to the electronic apparatus 40. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. Interface unit 48 may be used to receive input from external devices (e.g., data information, power, etc.) and transmit the received input to one or more elements within electronic apparatus 40 or may be used to transmit data between electronic apparatus 40 and external devices.
The memory 49 may be used to store software programs as well as various data. The memory 49 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 49 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 410 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, performs various functions of the electronic device and processes data by operating or executing software programs and/or modules stored in the memory 49 and calling data stored in the memory 49, thereby performing overall monitoring of the electronic device. Processor 410 may include one or more processing units; preferably, the processor 410 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 410.
The electronic device 40 may further include a power supply 411 (e.g., a battery) for supplying power to various components, and preferably, the power supply 411 may be logically connected to the processor 410 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
In addition, the electronic device 40 includes some functional modules that are not shown, and are not described in detail herein.
Preferably, an embodiment of the present invention further provides an electronic device, which includes a processor 410, a memory 49, and a computer program stored in the memory 49 and capable of running on the processor 410, and when the computer program is executed by the processor 410, the computer program implements each process of the above-mentioned embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not described here again.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling an electronic device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.
Claims (10)
1. An image processing method applied to an electronic device, the image processing method comprising:
acquiring a first person image of a first user and a second person image of a second user in a preset interface;
processing the first person image and the second person image according to a preset processing rule;
the first user is a target user, and the second user is a target user or a non-target user;
the preset processing rule comprises a preset value of difference between a first image score corresponding to the processed first person image and a second image score corresponding to the processed second person image.
2. The image processing method of claim 1, wherein in a case that the second user is a non-target user, acquiring a second person image of the second user in a preset interface comprises:
acquiring person images of all non-target users in the preset interface;
performing scoring processing on each acquired person image to obtain image scores corresponding to each person image;
obtaining a target image score from the obtained image scores;
and determining the person image corresponding to the target image score as the second person image.
3. The image processing method according to claim 2, wherein said obtaining a target image score from the obtained image scores comprises:
and acquiring the image score with the maximum value from the obtained image scores to serve as a target image score.
4. The image processing method according to claim 1, wherein the processing the first and second human images according to a preset processing rule comprises:
acquiring first input parameter information;
processing the first person image and the second person image according to the first input parameter information and a preset processing rule;
the first input parameter information is image processing parameter information for the first person image, or the first input parameter information is image processing parameter information for the second person image.
5. The image processing method according to claim 1, wherein the preset processing rule is a selected target processing rule, or the preset processing rule is a processing rule corresponding to a target user with a highest priority;
and the target user with the highest priority level is the target user with the highest priority level in all the users in the preset interface.
6. An electronic device, characterized in that the electronic device comprises:
the first acquisition module is used for acquiring a first person image of a first user and a second person image of a second user in a preset interface;
the first processing module is used for processing the first person image and the second person image according to a preset processing rule;
the first user is a target user, and the second user is a target user or a non-target user;
the preset processing rule comprises a preset value of difference between a first image score corresponding to the processed first person image and a second image score corresponding to the processed second person image.
7. The electronic device of claim 6, wherein in the case that the second user is a non-target user, the first obtaining module includes:
the first obtaining sub-module is used for obtaining person images of all non-target users in the preset interface;
the first processing submodule is used for carrying out scoring processing on each acquired person image to obtain an image score corresponding to each person image;
the second obtaining submodule is used for obtaining a target image score from the obtained image scores;
and the first determining submodule is used for determining the person image corresponding to the target image score as the second person image.
8. The electronic device of claim 7, wherein the second acquisition submodule comprises:
a first obtaining unit configured to obtain, as a target image score, the image score having a largest numerical value from the obtained image scores.
9. The electronic device of claim 6, wherein the first processing module comprises:
the third obtaining submodule is used for obtaining the first input parameter information;
the second processing submodule is used for processing the first person image and the second person image according to the first input parameter information and a preset processing rule;
the first input parameter information is image processing parameter information for the first person image, or the first input parameter information is image processing parameter information for the second person image.
10. The electronic device according to claim 6, wherein the preset processing rule is a selected target processing rule, or the preset processing rule is a processing rule corresponding to a target user with a highest priority;
and the target user with the highest priority level is the target user with the highest priority level in all the users in the preset interface.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010170787.1A CN111402157B (en) | 2020-03-12 | 2020-03-12 | Image processing method and electronic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010170787.1A CN111402157B (en) | 2020-03-12 | 2020-03-12 | Image processing method and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111402157A true CN111402157A (en) | 2020-07-10 |
CN111402157B CN111402157B (en) | 2024-04-09 |
Family
ID=71436193
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010170787.1A Active CN111402157B (en) | 2020-03-12 | 2020-03-12 | Image processing method and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111402157B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113344812A (en) * | 2021-05-31 | 2021-09-03 | 维沃移动通信(杭州)有限公司 | Image processing method and device and electronic equipment |
CN113473227A (en) * | 2021-08-16 | 2021-10-01 | 维沃移动通信(杭州)有限公司 | Image processing method, image processing device, electronic equipment and storage medium |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105512615A (en) * | 2015-11-26 | 2016-04-20 | 小米科技有限责任公司 | Picture processing method and apparatus |
CN107274355A (en) * | 2017-05-22 | 2017-10-20 | 奇酷互联网络科技(深圳)有限公司 | image processing method, device and mobile terminal |
CN107341762A (en) * | 2017-06-16 | 2017-11-10 | 广东欧珀移动通信有限公司 | Take pictures processing method, device and terminal device |
CN107424130A (en) * | 2017-07-10 | 2017-12-01 | 北京小米移动软件有限公司 | Picture U.S. face method and apparatus |
CN107463373A (en) * | 2017-07-10 | 2017-12-12 | 北京小米移动软件有限公司 | The management method and device of picture U.S. face method, good friend's face value |
CN108764334A (en) * | 2018-05-28 | 2018-11-06 | 北京达佳互联信息技术有限公司 | Facial image face value judgment method, device, computer equipment and storage medium |
CN110263737A (en) * | 2019-06-25 | 2019-09-20 | Oppo广东移动通信有限公司 | Image processing method, image processing apparatus, terminal device and readable storage medium storing program for executing |
CN110287809A (en) * | 2019-06-03 | 2019-09-27 | Oppo广东移动通信有限公司 | Image processing method and Related product |
Also Published As
Publication number | Publication date |
---|---|
CN111402157B (en) | 2024-04-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109461117B (en) | Image processing method and mobile terminal | |
CN110969981B (en) | Screen display parameter adjusting method and electronic equipment | |
CN108427873B (en) | Biological feature identification method and mobile terminal | |
CN108881782B (en) | Video call method and terminal equipment | |
CN111031253B (en) | Shooting method and electronic equipment | |
CN111401463B (en) | Method for outputting detection result, electronic equipment and medium | |
CN110825897A (en) | Image screening method and device and mobile terminal | |
CN109819166B (en) | Image processing method and electronic equipment | |
CN109656636B (en) | Application starting method and device | |
CN109727212B (en) | Image processing method and mobile terminal | |
CN111415722A (en) | Screen control method and electronic equipment | |
CN108174110B (en) | Photographing method and flexible screen terminal | |
CN107729100B (en) | Interface display control method and mobile terminal | |
CN110636225B (en) | Photographing method and electronic equipment | |
CN110602387B (en) | Shooting method and electronic equipment | |
CN111402157B (en) | Image processing method and electronic equipment | |
CN109949809B (en) | Voice control method and terminal equipment | |
CN109858447B (en) | Information processing method and terminal | |
CN107563353B (en) | Image processing method and device and mobile terminal | |
WO2021185142A1 (en) | Image processing method, electronic device and storage medium | |
CN107895108B (en) | Operation management method and mobile terminal | |
CN111261128B (en) | Screen brightness adjusting method and electronic equipment | |
CN110868537B (en) | Shooting method and electronic equipment | |
CN110443752B (en) | Image processing method and mobile terminal | |
CN109819331B (en) | Video call method, device and mobile terminal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |