CN107959789B - Image processing method and mobile terminal - Google Patents
- Publication number: CN107959789B (application CN201711107998.5A)
- Authority
- CN
- China
- Prior art keywords
- face information
- mobile terminal
- stored
- image processing
- user
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/611—Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/62—Control of parameters via user interfaces
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- Signal Processing (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Telephone Function (AREA)
- Studio Devices (AREA)
Abstract
Embodiments of the present invention provide an image processing method and a mobile terminal, relate to the field of communication technology, and aim to solve the problem that existing image processing methods cannot meet users' personalized requirements. The image processing method is applied to a mobile terminal and includes the following steps: in a shooting preview mode, if an instruction to enable regional image processing is received, acquiring pre-stored face information; recognizing the face information of all persons in the shooting preview interface; comparing the recognized face information of each person with the pre-stored face information, and taking the person matching the pre-stored face information as a target person; and performing image processing on the face area of the target person.
Description
Technical Field
Embodiments of the present invention relate to the field of communication technology, and in particular to an image processing method and a mobile terminal.
Background
At present, most mobile terminals have a shooting function. Because mobile terminals are easy to carry, more and more people choose to shoot with them. As the number of users grows, so do expectations for the shooting effect, and shooting functions are gradually improved to meet these demands.
For example, an image processing function has been added to mobile-terminal shooting so that the photographed persons can appear at their best in the output image. This effect is popular with users and largely satisfies their pursuit of beauty.
However, with such added functions, image processing can only be applied to the whole image at once, so users' personalized requirements cannot be satisfied.
Disclosure of Invention
An embodiment of the present invention provides an image processing method, which aims to solve the problem that existing image processing methods cannot meet users' personalized requirements.
In a first aspect, an embodiment of the present invention provides an image processing method applied to a mobile terminal, including: in a shooting preview mode, if an instruction to enable regional image processing is received, acquiring pre-stored face information; recognizing the face information of all persons in the shooting preview interface; comparing the recognized face information of each person with the pre-stored face information, and taking the person matching the pre-stored face information as a target person; and performing image processing on the face area of the target person.
In a second aspect, an embodiment of the present invention further provides a mobile terminal, including: a pre-stored face information acquisition module, configured to acquire pre-stored face information if an instruction to enable regional image processing is received in a shooting preview mode; a face information recognition module, configured to recognize the face information of all persons in the shooting preview interface; a target person confirmation module, configured to compare the recognized face information of each person with the pre-stored face information and take the person matching the pre-stored face information as a target person; and a regional image processing module, configured to perform image processing on the face area of the target person.
In a third aspect, an embodiment of the present invention further provides a mobile terminal, including a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the image processing method according to the first aspect.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the image processing method according to the first aspect.
In the embodiments of the present invention, in the shooting preview mode, a user can input an instruction to enable regional image processing. After receiving this instruction, the mobile terminal carries out the regional image processing function as follows. First, the mobile terminal automatically acquires pre-stored face information, that is, face information stored in the mobile terminal in advance; it may have been stored by the user's choice or recognized and stored automatically by the mobile terminal, and it may be the face information of a specific person, such as the user. After acquiring the pre-stored face information, the terminal recognizes the face information of every person in the shooting preview interface in turn and compares each recognized face with the pre-stored face information one by one. If a person's face information matches the pre-stored face information, that person is taken as the target person, i.e., the person selected according to the user's personalized requirements, and image processing is then applied to the target person's face area, thereby meeting those requirements. Compared with the prior art, the image processing method in the embodiments of the present invention can process a local area of the image in a targeted manner during shooting, such as the face area of a particular person. This is especially useful in group photos: if the user wants a shooting effect different from everyone else's, the embodiments of the present invention can satisfy that personalized requirement.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed for the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is one of the flow charts of an image processing method of an embodiment of the present invention;
FIG. 2 is a second flowchart of an image processing method according to an embodiment of the present invention;
FIG. 3 is a third flowchart of an image processing method according to an embodiment of the present invention;
FIG. 4 is a fourth flowchart of an image processing method according to an embodiment of the present invention;
FIG. 5 is a fifth flowchart of an image processing method according to an embodiment of the present invention;
FIG. 6 is one of the block diagrams of a mobile terminal of an embodiment of the present invention;
FIG. 7 is a second block diagram of a mobile terminal according to an embodiment of the present invention;
FIG. 8 is a third block diagram of a mobile terminal according to an embodiment of the present invention;
FIG. 9 is a fourth block diagram of a mobile terminal according to an embodiment of the present invention;
FIG. 10 is a fifth block diagram of a mobile terminal according to an embodiment of the present invention;
FIG. 11 is a sixth block diagram of a mobile terminal according to an embodiment of the present invention;
FIG. 12 is a schematic structural diagram of a mobile terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example one
Referring to fig. 1, there is shown a flowchart of an image processing method according to an embodiment of the present invention, the image processing method being applied to a mobile terminal, including:
step 101: and under the shooting preview mode, if an opening area image processing instruction is received, acquiring prestored face information.
In this step, the mobile terminal enters a shooting preview mode. In this mode, the mobile terminal can provide a regional image processing function and may display an input button or the like for enabling it on the display interface, so that the user can input the instruction to enable regional image processing. After receiving the instruction, the mobile terminal carries out the regional image processing function in the shooting preview mode, and acquiring the pre-stored face information is the prerequisite for doing so.
The pre-stored face information is specific face information stored in the mobile terminal in advance. It can be designated by the user for storage, such as the user's own face information, or stored automatically by the mobile terminal, for example by recognizing and screening the pictures saved in the picture library to obtain the face information they contain.
Step 102: face information of all persons within the photographing preview interface is recognized.
In this step, before the regional processing is performed on the image, the persons in the shooting preview interface are first recognized. A step of detecting the number of persons present in the shooting preview interface may be added here. If several persons are detected, the current shooting task can be regarded as a group photo, so the face information of all persons is recognized in turn in order to determine the target person of the image processing among them. If only one person is detected, that person's face information is recognized to determine whether he or she is the target person. If no person is present in the shooting preview interface, the regional image processing function need not be executed.
Step 103: and comparing the face information of all the identified figures with the pre-stored face information, and taking the figure matched with the pre-stored face information as a target figure.
In this step, the face information recognized in the preceding step is compared with the pre-stored face information one by one. If face information matching the pre-stored face information exists in the comparison result, the person with that face information is taken as the target person of the image processing; if no face information matches, there is no target person and no image processing is required.
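The one-by-one comparison in this step can be sketched as a nearest-match test on face feature vectors. The feature extraction itself is left abstract; the function name `match_target`, the Euclidean metric, and the `threshold` value are illustrative assumptions of this sketch, not part of the claimed method.

```python
from math import dist  # Euclidean distance (Python 3.8+)

def match_target(face_features, prestored_feature, threshold=0.6):
    """Step 103 sketch: return the indices of detected faces whose
    feature vector lies within `threshold` of the pre-stored one.
    An empty result means there is no target person and no image
    processing is required."""
    return [i for i, feat in enumerate(face_features)
            if dist(feat, prestored_feature) <= threshold]
```

With a tighter threshold fewer persons match; how the descriptor is actually computed is up to the terminal's recognition engine.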
Step 104: the image processing is performed on the face area of the target person.
Based on the target person determined in the previous step, image processing is performed on the target person's face area. In this step, only the target person in the shooting preview interface undergoes facial image processing, so in the output image only the target person's face area is processed. Local image processing is thus realized, the target person can look different from the other persons, and the user's personalized requirement is met.
In this embodiment, the mobile terminal is provided with an added regional image processing function, which can process a local area of the image during shooting and thus replaces the prior-art processing mode that can only handle the whole image. The function can be realized as follows: the face information of the persons present in the shooting preview interface is recognized one by one, and every recognized face is compared one by one with the face information pre-stored in the mobile terminal, so that the face information matching the pre-stored face information is picked out; the matching person is taken as the selected target person, and image processing is then applied only to the target person's face area in a targeted manner. Because the pre-stored face information is set according to user preferences and the like, selecting the target person realizes the user's personalized choice and meets the user's personalized requirement for image processing during shooting. In particular, when a group photo is taken, the user can obtain an image effect different from everyone else's with the image processing method in this embodiment, so that the user stands out in the group photo, meeting the user's shooting requirement.
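Steps 101 to 104 can be put together in one hedged sketch. Face detection, recognition, and the beauty filter are stubbed out as plain callables (`beautify` and the `face_id` pairing are hypothetical names introduced here); only the control flow, i.e. recognize every face and keep the processing local to matches, follows the method above.

```python
def process_preview(faces, prestored, beautify):
    """faces: list of (face_id, region) pairs from the preview
    interface. Only regions whose recognised identity matches the
    pre-stored face information are processed (steps 102-104);
    every other region passes through untouched."""
    return [(face_id, beautify(region) if face_id == prestored else region)
            for face_id, region in faces]
```

A group photo thus yields exactly one processed face area when exactly one person matches the pre-stored face information.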
In this embodiment, before the pre-stored face information is acquired, a step of storing it in advance may be added. The storing steps corresponding to three pre-storing manners are described below by way of example.
Referring to fig. 2, fig. 2 shows a flow chart of a first mode, which includes:
step 1051: and identifying the face information of all the shooting persons stored in the mobile terminal.
In this step, a manner of acquiring the face information of the photographed persons is first illustrated, taking self-portrait subjects as an example. When the mobile terminal shoots with the front camera, it can be considered to be taking a selfie, so after the selfie image is output, the mobile terminal can store it automatically. The mobile terminal then obtains the face information of the self-portrait subjects in all selfie images it has stored, so that this step can recognize the face information of all stored self-portrait subjects. In the same way, the mobile terminal can automatically store the images shot with the rear camera and acquire the face information of the photographed persons in all stored images, so that this step can recognize the face information of all photographed persons stored in the mobile terminal.
Step 1052: and taking the face information of at least one shooting person with the highest frequency of appearance as the pre-stored face information.
In this step, after the face information of all photographed persons is recognized, the face information of the at least one photographed person that appears most frequently is determined as the pre-stored face information and stored.
The first mode makes use of the shooting function of the mobile terminal: the user accounts for a large proportion of the photographed persons, especially in selfies, and based on this characteristic the mobile terminal automatically sets and stores the pre-stored face information. This mode therefore supports local image processing for the user. Moreover, if other persons occupy a large proportion of the images the user shoots, it can also support local image processing for persons other than the user, such as people closely related to the user.
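Assuming each stored photo yields a recognised face identity, the first mode's selection of the most frequent face(s) in steps 1051 and 1052 reduces to a frequency count. The `top_k` parameter corresponds to "at least one photographed person" and is an assumption of this sketch.

```python
from collections import Counter

def prestore_most_frequent(face_ids, top_k=1):
    """Steps 1051-1052 sketch: from the face identities recognised
    across all stored photos, keep the top_k most frequently
    appearing ones as the pre-stored face information."""
    return [face for face, _count in Counter(face_ids).most_common(top_k)]
```

`Counter.most_common` orders by count, so the owner's face, dominant in selfies, naturally comes first.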
Referring to fig. 3, fig. 3 shows a flow chart of a second mode, which includes:
step 1061: face information of a user who successfully unlocks the mobile terminal is obtained.
To protect the user's privacy and security, the mobile terminal is usually provided with an unlocking password, and the user can unlock the mobile terminal and enter the display interface only by entering the correct password. One way of verifying the unlocking credential is to recognize the user's face information through face recognition technology, and the second mode makes use of this process. The user unlocks the mobile terminal with his or her face information to log in to the display interface; when the mobile terminal recognizes the user's face information, it is unlocked successfully and enters the display interface. In this step, the mobile terminal can automatically acquire the face information of the user who successfully unlocked it.
Step 1062: and taking the face information of the user successfully unlocked as the pre-stored face information.
The acquired face information of the user who successfully unlocks the mobile terminal is taken as the pre-stored face information and stored.
The second mode makes use of the unlocking login function of the mobile terminal: usually, when unlocking the terminal to log in to the display interface, only the user's own face information will succeed. Based on this characteristic, the mobile terminal automatically sets and stores the pre-stored face information. This mode supports local image processing for the user himself or herself.
The display interface of the mobile terminal may include a main display interface, a payment display interface, and the like. That is, the pre-stored face information may be the face information of a user who successfully logs in to the main display interface of the mobile terminal, or of a user who successfully logs in to the payment interface.
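The second mode amounts to hooking the face-unlock success path: whichever face information just unlocked the terminal (main or payment interface alike) is recorded as the pre-stored face information. The dictionary-backed store and the function name below are illustrative only.

```python
def on_unlock_success(unlock_face, store):
    """Steps 1061-1062 sketch: after a successful face unlock, save
    the unlocking face information as the pre-stored face information,
    overwriting any earlier value."""
    store["prestored_face"] = unlock_face
    return store
```

Each successful unlock refreshes the stored value, so the pre-stored face tracks the person actually using the terminal.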
Referring to fig. 4, fig. 4 shows a flow chart of a third mode, which includes:
step 1051: and identifying the face information of all the shooting persons stored in the mobile terminal.
Step 1052: and taking the face information of at least one shooting person with the highest frequency of appearance as the pre-stored face information.
These two steps are the same as in the first mode: the pre-stored face information is set according to the shooting function.
Step 1053: and if the user successfully unlocks the mobile terminal, acquiring the face information of the user.
On the basis of the first mode, step 1053 is added. If the mobile terminal provides the function of unlocking the login interface through face recognition, it can be reasonably assumed that the face information that successfully unlocks the terminal is the user's own, so the mobile terminal can automatically acquire that face information.
Step 1054: and updating the face information of the user successfully unlocked into the pre-stored face information.
The face information of the user who successfully unlocks the terminal is directly updated as the pre-stored face information, replacing the pre-stored face information saved in step 1052. This improves the accuracy of identifying the user and avoids errors when the most frequently photographed person is not the user.
In the third mode, if the mobile terminal has stored both the face information that appears most frequently in shooting and the face information of the user who successfully unlocks the terminal, the latter is preferred as the pre-stored face information, since it identifies the user more accurately. The third mode can therefore meet the user's requirement for local image processing.
The third mode is particularly suitable for the case where the most frequently photographed face and the face of the user who successfully unlocks the terminal are not the same: the unlocking face identifies the user more reliably.
It can be seen that in the above three modes the mobile terminal automatically sets the pre-stored face information according to some of its own functions, and the pre-stored face information can be the user's own, so that the mobile terminal can process the user's face area separately during shooting. In further modes, the user can also enter pre-stored face information into the mobile terminal manually; the entered face information is not limited to the user and can be another person or several designated persons.
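The preference expressed in the third mode, trusting the unlocking face over the frequency-based candidate whenever both are available, can be stated in a few lines. Treating "no successful unlock yet" as `None` is an assumption of this sketch.

```python
def choose_prestored(most_frequent_face, unlock_face):
    """Steps 1053-1054 sketch: the face that successfully unlocked
    the terminal is presumed to be the user's own and replaces the
    candidate picked by shooting frequency; if no unlock face is
    available, the frequency-based candidate is kept."""
    return unlock_face if unlock_face is not None else most_frequent_face
```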
Referring to fig. 5, step 104 may include:
step 1041: and performing beautifying processing on the face area of the target person.
Preferably, the image processing here may be beauty processing that beautifies the face area of the target person to satisfy the user's aesthetic preferences. In a group photo in particular, the user, or a person closely related to the user, can be beautified automatically.
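As a stand-in for the beauty processing of step 1041, the sketch below mean-blurs only the pixels inside the target person's face box, leaving the rest of the image untouched. A real terminal would apply its own skin-smoothing filter, so the box-blur choice here is purely illustrative.

```python
def smooth_face_area(image, box, k=1):
    """Blur only the face area given by box = (top, left, bottom,
    right) on a 2D grayscale image (list of rows); pixels outside
    the box are copied unchanged, i.e. the processing stays local
    to the target person's face area."""
    top, left, bottom, right = box
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]  # copy so the preview frame is not mutated
    for y in range(top, bottom):
        for x in range(left, right):
            # mean over the (2k+1) x (2k+1) window, clipped at the borders
            window = [image[j][i]
                      for j in range(max(0, y - k), min(h, y + k + 1))
                      for i in range(max(0, x - k), min(w, x + k + 1))]
            out[y][x] = sum(window) / len(window)
    return out
```

Only the face box is touched, which mirrors how the output image differs from a whole-image filter.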
Example two
Referring to fig. 6, a block diagram of a mobile terminal of an embodiment of the present invention is shown, including:
the pre-stored face information acquiring module 10 is configured to, in the shooting preview mode, acquire pre-stored face information if an opening area image processing instruction is received;
a face information recognition module 20 for recognizing face information of all persons in the shooting preview interface;
a target person confirmation module 30 for comparing the face information of all the recognized persons with the pre-stored face information and taking the person matched with the pre-stored face information as a target person;
and the area image processing module 40 is used for carrying out image processing on the face area of the target person.
In this embodiment, the mobile terminal is provided with an added regional image processing function, which can process a local area of the image during shooting and thus replaces the prior-art processing mode that can only handle the whole image. The function can be realized as follows: the face information of the persons present in the shooting preview interface is recognized one by one, and every recognized face is compared one by one with the face information pre-stored in the mobile terminal, so that the face information matching the pre-stored face information is picked out; the matching person is taken as the selected target person, and image processing is then applied only to the target person's face area in a targeted manner. Because the pre-stored face information is set according to user preferences and the like, selecting the target person realizes the user's personalized choice and meets the user's personalized requirement for image processing during shooting. In particular, when a group photo is taken, the user can obtain an image different from everyone else's, so that the user stands out in the group photo, meeting the user's shooting requirement.
The mobile terminal in fig. 6 can implement each process implemented by the mobile terminal in the method embodiment of fig. 1, and is not described herein again to avoid repetition.
Referring to fig. 7, preferably, the mobile terminal further includes:
a photographed person recognition module 51 for recognizing face information of all photographed persons stored in the mobile terminal;
a first pre-stored face information determining module 52 for determining the face information of at least one photographed person having the highest frequency of appearance as pre-stored face information.
The mobile terminal in fig. 7 can implement each process implemented by the mobile terminal in the method embodiment of fig. 2, and is not described herein again to avoid repetition.
Referring to fig. 8, on the basis of fig. 7, the mobile terminal further includes:
a first unlocking user obtaining module 53, configured to obtain face information of the user if the user successfully unlocks the mobile terminal;
and a second pre-stored face information determining module 54, configured to update the face information of the user who successfully unlocks to the pre-stored face information.
The mobile terminal in fig. 8 can implement each process implemented by the mobile terminal in the method embodiment of fig. 3, and is not described herein again to avoid repetition.
Referring to fig. 9, preferably, the mobile terminal further includes:
a second unlocking user obtaining module 61, configured to obtain face information of a user who successfully unlocks the mobile terminal;
and a third pre-stored face information determining module 62, configured to use the face information of the user successfully unlocked as the pre-stored face information.
The mobile terminal in fig. 9 can implement each process implemented by the mobile terminal in the method embodiment of fig. 4, and is not described herein again to avoid repetition.
Referring to fig. 10, the area image processing module 40 includes:
a region beauty processing unit 41 for performing beauty processing on the face region of the target person.
The mobile terminal in fig. 10 can implement each process implemented by the mobile terminal in the method embodiment of fig. 5, and is not described here again to avoid repetition.
Another embodiment of the present invention further provides a mobile terminal, including a processor, a memory, and a computer program stored in the memory and capable of running on the processor, where the computer program, when executed by the processor, implements each process of any one of the embodiments of the image processing method described above, and can achieve the same technical effect, and therefore, in order to avoid repetition, details are not repeated here.
Another embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored. When the computer program is executed by a processor, it implements each process of any one of the embodiments of the image processing method above and can achieve the same technical effect; to avoid repetition, details are not repeated here. The computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Fig. 11 is a block diagram of a mobile terminal according to another embodiment of the present invention. The mobile terminal 800 shown in fig. 11 includes: at least one processor 801, memory 802, at least one network interface 804 and other user interfaces 803, a tele camera 806 and a wide camera 807. The various components in the mobile terminal 800 are coupled together by a bus system 805. It is understood that the bus system 805 is used to enable communications among the components connected. The bus system 805 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 805 in FIG. 11.
The user interface 803 may include, among other things, a display, a keyboard, or a pointing device (e.g., a mouse, trackball, touch pad, or touch screen).
It will be appreciated that the memory 802 in embodiments of the present invention may be volatile memory or non-volatile memory, or may include both. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which serves as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), and direct Rambus RAM (DRRAM). The memory 802 of the systems and methods described in the embodiments of the present invention is intended to comprise, without being limited to, these and any other suitable types of memory.
In some embodiments, memory 802 stores the following elements, executable modules or data structures, or a subset thereof, or an expanded set thereof: an operating system 8021 and application programs 8022.
The operating system 8021 includes various system programs, such as a framework layer, a core library layer, a driver layer, and the like, and is used for implementing various basic services and processing hardware-based tasks. The application 8022 includes various applications, such as a media player (MediaPlayer), a Browser (Browser), and the like, for implementing various application services. A program implementing a method according to an embodiment of the present invention may be included in application program 8022.
In this embodiment of the present invention, the mobile terminal 800 further includes an image processing control program stored in the memory 802 and executable on the processor 801, specifically an image processing control program in the application program 8022, which, when executed by the processor 801, implements the following steps: in a shooting preview mode, if an instruction to enable regional image processing is received, acquiring pre-stored face information; identifying the face information of all persons in the shooting preview interface; comparing the identified face information of all persons with the pre-stored face information, and taking the person matching the pre-stored face information as a target person; and performing image processing on the face region of the target person.
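The match-and-select step above can be sketched in Python. The face-embedding representation, the distance threshold, and all function names here are illustrative assumptions, not details taken from the embodiment; in practice the embeddings would come from a face recognition model, and only the comparison logic is shown.

```python
from math import dist  # Euclidean distance between two equal-length sequences

# Assumed distance below which two faces count as the same person.
MATCH_THRESHOLD = 0.6

def select_target_regions(detected_faces, pre_stored_embeddings):
    """Pick the face regions in the preview frame that match the pre-stored face information.

    detected_faces: list of (region, embedding) pairs produced by a face detector,
        where region is e.g. a bounding box and embedding is a feature vector.
    pre_stored_embeddings: feature vectors of the user-preferred person(s).
    """
    targets = []
    for region, embedding in detected_faces:
        # A person is a target if their embedding is close to any pre-stored one.
        if any(dist(embedding, stored) < MATCH_THRESHOLD
               for stored in pre_stored_embeddings):
            targets.append(region)
    return targets
```

Only the regions returned here would then be passed on to the regional image processing step; faces that match no pre-stored embedding are left untouched.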
The methods disclosed in the embodiments of the present invention described above may be implemented in, or by, the processor 801. The processor 801 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits in hardware or by instructions in the form of software in the processor 801. The processor 801 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or perform the various methods, steps, and logic blocks disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be directly implemented by a hardware decoding processor, or by a combination of hardware and software modules in the decoding processor. The software modules may reside in RAM, flash memory, ROM, PROM, EPROM, registers, or other computer-readable storage media known in the art. The computer-readable storage medium is located in the memory 802; the processor 801 reads the information in the memory 802 and, in combination with its hardware, completes the steps of the method. In particular, the computer-readable storage medium has stored thereon a computer program which, when executed by the processor 801, realizes the steps of any of the embodiments of the image processing method described above.
It is to be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or any combination thereof. For a hardware implementation, the processing units may be implemented within one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), general purpose processors, controllers, micro-controllers, microprocessors, other electronic units configured to perform the functions described herein, or a combination thereof.
For a software implementation, the techniques described in this disclosure may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described in this disclosure. The software codes may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
Optionally, when executed by the processor 801, the computer program may further implement the following steps: identifying the face information of all photographed persons stored in the mobile terminal; and taking the face information of the at least one photographed person that appears most frequently as the pre-stored face information.
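The frequency-based selection above can be sketched as follows. Treating each face recognized across the stored photos as an identity label is an assumption made purely for illustration.

```python
from collections import Counter

def most_frequent_face_ids(photo_face_ids, top_n=1):
    """Return the identity(ies) of the photographed person(s) appearing most often.

    photo_face_ids: one identity label per face recognized across the stored photos.
    top_n: how many of the most frequent identities to keep as pre-stored face information.
    """
    counts = Counter(photo_face_ids)
    return [face_id for face_id, _ in counts.most_common(top_n)]
```

For example, a user who mostly photographs themselves would have their own face identity returned here and therefore selected as the pre-stored face information.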
Optionally, when executed by the processor 801, the computer program may further implement the following steps: acquiring face information of the user if the user successfully unlocks the mobile terminal; and updating the pre-stored face information with the face information of the user.
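The update step can be sketched as a simple replacement, matching the later claim language in which the unlock user's face supersedes the frequency-derived entries; the list representation of the pre-stored face information is an assumption for illustration.

```python
def update_pre_stored_faces(pre_stored_faces, unlock_user_face):
    """Replace the frequency-derived pre-stored face information with the
    face information of the user who successfully unlocked the terminal."""
    pre_stored_faces.clear()            # drop the previous highest-frequency entries
    pre_stored_faces.append(unlock_user_face)
    return pre_stored_faces
```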
Alternatively, as another embodiment, when executed by the processor 801, the computer program may further implement the following steps: acquiring face information of a user who successfully unlocks the mobile terminal; and taking the face information of the user as the pre-stored face information.
Alternatively, as another embodiment, when executed by the processor 801, the computer program may further implement the following step: performing beautifying processing on the face area of the target person.
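The region-limited beautifying step can be sketched with a plain box blur standing in for any beautifying filter; the grayscale nested-list image format and the blur itself are illustrative assumptions, not the embodiment's actual filter.

```python
def beautify_face_region(image, box, radius=1):
    """Smooth only the pixels inside the face bounding box; the rest of the
    image is left untouched, mirroring the regional processing of the embodiment.

    image: 2D list of grayscale values; box: (top, left, bottom, right), half-open.
    """
    top, left, bottom, right = box
    height, width = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(top, bottom):
        for x in range(left, right):
            # Average over a (2*radius+1)^2 neighbourhood, clipped at the image edge.
            neighbourhood = [image[cy][cx]
                             for cy in range(max(0, y - radius), min(height, y + radius + 1))
                             for cx in range(max(0, x - radius), min(width, x + radius + 1))]
            out[y][x] = sum(neighbourhood) // len(neighbourhood)
    return out
```

Because the loop bounds come from the face box, every pixel outside the target person's face region is copied through unchanged, which is the essential difference from whole-image processing.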
The mobile terminal 800 can implement each process implemented by the mobile terminal in the foregoing embodiments, and details are not repeated here to avoid repetition.
In this embodiment, the mobile terminal is additionally provided with a regional image processing function that can process a local region of an image during shooting, replacing the prior-art mode of image processing that could only operate on the image as a whole. The function may be implemented as follows: the face information of each person present in the shooting preview interface is identified one by one, and all of the identified face information is compared with the face information pre-stored in the mobile terminal, so that the face information matching the pre-stored face information is determined among all of the face information; the person matching the pre-stored face information is taken as the selected target person, and image processing is then performed only on the face region of that target person. Because the pre-stored face information is set according to user preferences and the like, the selection of the target person is personalized to the user, meeting the user's individual requirements for image processing during shooting. In particular, when a group photo is taken, the user can use the image processing method of this embodiment to obtain an image that differs from the other people in it, so that the user stands out in the group photo, satisfying the user's shooting requirements.
Fig. 12 is a schematic structural diagram of a mobile terminal according to another embodiment of the present invention. Specifically, the mobile terminal 900 in fig. 12 may be a mobile phone, a tablet computer, a Personal Digital Assistant (PDA), or a vehicle-mounted computer.
The mobile terminal 900 in fig. 12 includes a radio frequency (RF) circuit 910, a memory 920, an input unit 930, a display unit 940, a processor 960, an audio circuit 970, a wireless fidelity (Wi-Fi) module 980, a power supply 990, a telephoto camera 9100, and a wide-angle camera 9200.
The input unit 930 may be used, among other things, to receive numeric or character information input by a user and to generate signal inputs related to user settings and function control of the mobile terminal 900. Specifically, in the embodiment of the present invention, the input unit 930 may include a touch panel 931. The touch panel 931, also referred to as a touch screen, may collect a touch operation performed by a user on or near it (for example, an operation performed on the touch panel 931 with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a preset program. Optionally, the touch panel 931 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends the coordinates to the processor 960; it can also receive and execute commands sent by the processor 960. In addition, the touch panel 931 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 931, the input unit 930 may also include other input devices 932, which may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
Among other things, the display unit 940 may be used to display information input by the user or information provided to the user, as well as the various menu interfaces of the mobile terminal 900. The display unit 940 may include a display panel 941; optionally, the display panel 941 may be configured in the form of a liquid crystal display (LCD) or an organic light-emitting diode (OLED) display.
It should be noted that the touch panel 931 may overlay the display panel 941 to form a touch display screen, and when the touch display screen detects a touch operation on or near the touch display screen, the touch display screen transmits the touch operation to the processor 960 to determine the type of the touch event, and then the processor 960 provides a corresponding visual output on the touch display screen according to the type of the touch event.
The touch display screen comprises an application interface display area and a common control display area. The arrangement of these two display areas is not limited; they may be arranged in any manner that distinguishes the two areas, such as one above the other or side by side. The application interface display area may be used to display the interface of an application. Each interface may contain at least one interface element, such as an icon and/or a desktop widget of an application. The application interface display area may also be an empty interface containing no content. The common control display area is used to display frequently used controls, such as setting buttons, interface numbers, scroll bars, phone book icons, and similar application icons.
The processor 960 is a control center of the mobile terminal 900, connects various parts of the entire mobile phone using various interfaces and lines, and performs various functions of the mobile terminal 900 and processes data by operating or executing software programs and/or modules stored in the first memory 921 and calling data stored in the second memory 922, thereby integrally monitoring the mobile terminal 900. Optionally, processor 960 may include one or more processing units.
In this embodiment of the present invention, the mobile terminal 900 further includes an image processing control program stored in the memory 920 and executable on the processor 960, specifically an image processing control program in an application program, which, when executed by the processor 960, implements the following steps: in a shooting preview mode, if an instruction to enable regional image processing is received, acquiring pre-stored face information; identifying the face information of all persons in the shooting preview interface; comparing the identified face information of all persons with the pre-stored face information, and taking the person matching the pre-stored face information as a target person; and performing image processing on the face region of the target person.
Optionally, when executed by the processor 960, the computer program may further implement the following steps: identifying the face information of all photographed persons stored in the mobile terminal; and taking the face information of the at least one photographed person that appears most frequently as the pre-stored face information.

Optionally, when executed by the processor 960, the computer program may further implement the following steps: acquiring face information of the user if the user successfully unlocks the mobile terminal; and updating the pre-stored face information with the face information of the user.

Optionally, when executed by the processor 960, the computer program may further implement the following steps: acquiring face information of a user who successfully unlocks the mobile terminal; and taking the face information of the user as the pre-stored face information.

Alternatively, as another embodiment, when executed by the processor 960, the computer program may further implement the following step: performing beautifying processing on the face area of the target person.
In this embodiment, the mobile terminal is additionally provided with a regional image processing function that can process a local region of an image during shooting, replacing the prior-art mode of image processing that could only operate on the image as a whole. The function may be implemented as follows: the face information of each person present in the shooting preview interface is identified one by one, and all of the identified face information is compared with the face information pre-stored in the mobile terminal, so that the face information matching the pre-stored face information is determined among all of the face information; the person matching the pre-stored face information is taken as the selected target person, and image processing is then performed only on the face region of that target person. Because the pre-stored face information is set according to user preferences and the like, the selection of the target person is personalized to the user, meeting the user's individual requirements for image processing during shooting. In particular, when a group photo is taken, the user can use the image processing method of this embodiment to obtain an image that differs from the other people in it, so that the user stands out in the group photo, satisfying the user's shooting requirements.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the part of the technical solution of the present invention that in essence contributes beyond the prior art may be embodied in the form of a software product stored in a computer-readable storage medium, which includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned computer-readable storage media include various media capable of storing program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
The above description covers only specific embodiments of the present invention, but the scope of the present invention is not limited thereto; any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the present invention, and all such changes or substitutions shall be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (8)
1. An image processing method applied to a mobile terminal, characterized by comprising the following steps:
identifying face information of all photographed persons stored in the mobile terminal;

taking the face information of the at least one photographed person with the highest frequency of occurrence as pre-stored face information;

in a shooting preview mode, if an instruction to enable regional image processing is received, acquiring the pre-stored face information;

identifying the face information of all persons in the shooting preview interface;

comparing the identified face information of all persons with the pre-stored face information, and taking the person matching the pre-stored face information as a target person;

performing image processing on the face area of the target person;

wherein, after the step of taking the face information of the at least one photographed person with the highest frequency of occurrence as the pre-stored face information, the method further comprises:

acquiring face information of the user if the user successfully unlocks the mobile terminal;

updating the pre-stored face information with the face information of the user;

wherein the updating the pre-stored face information with the face information of the user comprises: replacing the face information of the at least one photographed person with the highest frequency of occurrence with the face information of the user who successfully unlocked the mobile terminal, and taking it as the pre-stored face information.
2. The image processing method according to claim 1, wherein, before the step of acquiring the pre-stored face information if an instruction to enable regional image processing is received in the shooting preview mode, the method further comprises:
acquiring face information of a user who successfully unlocks the mobile terminal;
and taking the face information of the user as pre-stored face information.
3. The image processing method according to any one of claims 1 to 2, wherein the step of performing image processing on the face area of the target person includes:
and performing beautifying processing on the face area of the target person.
4. A mobile terminal, comprising:
the photographed person identification module is used for identifying the face information of all photographed persons stored in the mobile terminal;

the first pre-stored face information determining module is used for taking the face information of the at least one photographed person with the highest frequency of occurrence as pre-stored face information;

the pre-stored face information acquisition module is used for acquiring the pre-stored face information if an instruction to enable regional image processing is received in a shooting preview mode;

the face information identification module is used for identifying the face information of all persons in the shooting preview interface;

the target person confirmation module is used for comparing the identified face information of all persons with the pre-stored face information and taking the person matching the pre-stored face information as a target person;

the region image processing module is used for performing image processing on the face region of the target person;
wherein the mobile terminal further comprises:
the first unlocking user acquisition module is used for acquiring the face information of the user if the user successfully unlocks the mobile terminal;

the second pre-stored face information determining module is used for updating the pre-stored face information with the face information of the user;

the second pre-stored face information determining module is specifically configured to replace the face information of the at least one photographed person with the highest frequency of occurrence with the face information of the user who successfully unlocked the mobile terminal, as the pre-stored face information.
5. The mobile terminal of claim 4, wherein the mobile terminal further comprises:
the second unlocking user acquisition module is used for acquiring the face information of the user who successfully unlocks the mobile terminal;
and the third pre-stored face information determining module is used for taking the face information of the user as pre-stored face information.
6. The mobile terminal according to any of claims 4 to 5, wherein the area image processing module comprises:
and the area beautifying processing unit is used for performing beautifying processing on the face area of the target person.
7. A mobile terminal, characterized in that it comprises a processor, a memory, a computer program stored on said memory and executable on said processor, said computer program, when executed by said processor, implementing the steps of the image processing method according to any one of claims 1 to 3.
8. A computer-readable storage medium, characterized in that a computer program is stored thereon, which computer program, when being executed by a processor, carries out the steps of the image processing method according to any one of claims 1 to 3.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711107998.5A CN107959789B (en) | 2017-11-10 | 2017-11-10 | Image processing method and mobile terminal |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711107998.5A CN107959789B (en) | 2017-11-10 | 2017-11-10 | Image processing method and mobile terminal |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107959789A CN107959789A (en) | 2018-04-24 |
CN107959789B true CN107959789B (en) | 2020-03-06 |
Family
ID=61964623
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711107998.5A Active CN107959789B (en) | 2017-11-10 | 2017-11-10 | Image processing method and mobile terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107959789B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108924418A (en) * | 2018-07-02 | 2018-11-30 | 珠海市魅族科技有限公司 | A kind for the treatment of method and apparatus of preview image, terminal, readable storage medium storing program for executing |
CN109361861A (en) * | 2018-11-14 | 2019-02-19 | 维沃移动通信有限公司 | A kind of image processing method and mobile terminal |
CN112637477A (en) * | 2019-10-08 | 2021-04-09 | 华为技术有限公司 | Image processing method and electronic equipment |
CN111773676B (en) * | 2020-07-23 | 2024-06-21 | 网易(杭州)网络有限公司 | Method and device for determining virtual character actions |
CN115018698B (en) * | 2022-08-08 | 2022-11-08 | 深圳市联志光电科技有限公司 | Image processing method and system for man-machine interaction |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105827979A (en) * | 2016-04-29 | 2016-08-03 | 维沃移动通信有限公司 | Prompting photographing method and mobile terminal |
CN105915782A (en) * | 2016-03-29 | 2016-08-31 | 维沃移动通信有限公司 | Picture obtaining method based on face identification, and mobile terminal |
CN107123081A (en) * | 2017-04-01 | 2017-09-01 | 北京小米移动软件有限公司 | image processing method, device and terminal |
CN107274355A (en) * | 2017-05-22 | 2017-10-20 | 奇酷互联网络科技(深圳)有限公司 | image processing method, device and mobile terminal |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107301389B (en) * | 2017-06-16 | 2020-04-14 | Oppo广东移动通信有限公司 | Method, device and terminal for identifying user gender based on face features |
- 2017-11-10 CN CN201711107998.5A patent/CN107959789B/en active Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105915782A (en) * | 2016-03-29 | 2016-08-31 | 维沃移动通信有限公司 | Picture obtaining method based on face identification, and mobile terminal |
CN105827979A (en) * | 2016-04-29 | 2016-08-03 | 维沃移动通信有限公司 | Prompting photographing method and mobile terminal |
CN107123081A (en) * | 2017-04-01 | 2017-09-01 | 北京小米移动软件有限公司 | image processing method, device and terminal |
CN107274355A (en) * | 2017-05-22 | 2017-10-20 | 奇酷互联网络科技(深圳)有限公司 | image processing method, device and mobile terminal |
Also Published As
Publication number | Publication date |
---|---|
CN107959789A (en) | 2018-04-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107959789B (en) | Image processing method and mobile terminal | |
CN107528938B (en) | Video call method, terminal and computer readable storage medium | |
CN106055996B (en) | Multimedia information sharing method and mobile terminal | |
CN106126077B (en) | Display control method of application program icons and mobile terminal | |
CN106648382B (en) | A kind of picture browsing method and mobile terminal | |
EP3661187A1 (en) | Photography method and mobile terminal | |
CN107678644B (en) | Image processing method and mobile terminal | |
CN106056533B (en) | A kind of method and terminal taken pictures | |
CN107509030B (en) | focusing method and mobile terminal | |
CN106657793B (en) | A kind of image processing method and mobile terminal | |
CN107124543B (en) | Shooting method and mobile terminal | |
CN107613203B (en) | Image processing method and mobile terminal | |
CN107665434B (en) | Payment method and mobile terminal | |
CN107172346B (en) | Virtualization method and mobile terminal | |
CN106951174B (en) | A kind of method of adjustment and mobile terminal of dummy keyboard | |
EP3640828B1 (en) | Methods, mechanisms, and computer-readable storage media for unlocking applications on a mobile terminal with a sliding module | |
CN106383638B (en) | Payment mode display method and mobile terminal | |
CN106791437B (en) | Panoramic image shooting method and mobile terminal | |
CN107592458B (en) | Shooting method and mobile terminal | |
CN106445328B (en) | Unlocking method of mobile terminal screen and mobile terminal | |
CN106454086B (en) | Image processing method and mobile terminal | |
CN107172347B (en) | Photographing method and terminal | |
CN107370758B (en) | Login method and mobile terminal | |
CN107480500B (en) | Face verification method and mobile terminal | |
CN105825105A (en) | Method and electronic equipment for displaying objects hidden on interface |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |