
CN109903218B - Image processing method and terminal - Google Patents


Info

Publication number
CN109903218B
Authority
CN
China
Prior art keywords
tooth
target
determining
face image
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910151708.XA
Other languages
Chinese (zh)
Other versions
CN109903218A (en)
Inventor
张楚楚
周梦姣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN201910151708.XA priority Critical patent/CN109903218B/en
Publication of CN109903218A publication Critical patent/CN109903218A/en
Application granted granted Critical
Publication of CN109903218B publication Critical patent/CN109903218B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention provides an image processing method and a terminal. The method comprises the following steps: acquiring feature information of a face image, wherein the feature information comprises a face shape and/or facial features; determining dental type recommendation information according to the feature information of the face image; and processing the tooth region in the face image based on the dental type recommendation information. In this scheme, a suitable tooth model is matched according to the feature information of the face image and used to process the tooth region, so that the teeth in the tooth region are adjusted according to, and matched with, the facial features. The teeth thus become more attractive as a whole, meeting the user's need to beautify the tooth image.

Description

Image processing method and terminal
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method and a terminal.
Background
In modern society the pursuit of beauty never stops, and many people have had their teeth straightened. Women and children in particular often choose orthodontic treatment because of irregular teeth. However, dental correction is not equally effective for everyone: for a person with severely protruding (buck) teeth, orthodontic treatment is difficult. In addition, everyone defines beauty differently, and aesthetics change over time. For example, it was once generally held that neat, square teeth look best, whereas distinctive tooth shapes such as tiger teeth are now popular.
However, current beautifying schemes only whiten the color of the tooth image, and therefore often fail to meet the user's further image processing requirements for teeth.
Disclosure of Invention
Embodiments of the invention provide an image processing method and a terminal, to solve the problem that the prior art can only whiten the color of a tooth image and cannot meet the user's further image processing requirements for teeth.
In a first aspect, an embodiment of the present invention provides an image processing method, including:
acquiring feature information of a face image, wherein the feature information comprises a face shape and/or facial features;
determining dental type recommendation information according to the feature information of the face image;
and processing the tooth region in the face image based on the dental type recommendation information.
In a second aspect, an embodiment of the present invention provides a terminal, including:
a feature acquisition module, configured to acquire feature information of a face image, wherein the feature information comprises a face shape and/or facial features;
a recommendation information determining module, configured to determine dental type recommendation information according to the feature information of the face image;
and a first processing module, configured to process the tooth region in the face image based on the dental type recommendation information.
In a third aspect, embodiments of the present invention provide a terminal comprising a memory, a processor and a computer program stored on the memory and executable on the processor; the processor implements the image processing method as described above when executing the program.
In a fourth aspect, embodiments of the present invention provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements steps in an image processing method as described above.
The embodiment of the invention has the beneficial effects that:
according to the embodiment of the invention, the characteristic information of the face image can be acquired, then the tooth type recommended information is determined according to the acquired characteristic information of the face image, and the tooth area in the face image is processed based on the tooth type recommended information, namely, the embodiment of the invention can be matched with a proper tooth model according to the characteristic information of the face image, so that the tooth area in the face image is processed, the adjustment of the teeth in the tooth area according to the face characteristic is realized, the teeth in the tooth area are matched with the face characteristic, the teeth are more attractive as a whole, and the beautifying requirement of a user on the tooth image is met.
Drawings
FIG. 1 shows a flowchart of an image processing method according to an embodiment of the present invention;
fig. 2 shows a block diagram of a terminal according to an embodiment of the present invention;
fig. 3 shows a schematic hardware structure of a terminal according to an embodiment of the present invention.
Detailed Description
The following clearly and fully describes the embodiments of the present invention with reference to the accompanying drawings. It is evident that the described embodiments are some, but not all, of the embodiments of the invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without creative effort fall within the scope of the invention.
An embodiment of the present invention provides an image processing method, as shown in fig. 1, including:
step 101: and acquiring characteristic information of the face image.
Wherein the characteristic information comprises facial forms and/or five sense organs.
In addition, the face image is an image shot by a 3D camera. On top of the two-dimensional image, a 3D camera also measures the depth of the photographed object, that is, its three-dimensional position and size, thereby forming a three-dimensional image. The face image acquired in the embodiment of the invention is therefore a three-dimensional stereoscopic image, and the tooth region in the face image is likewise rendered with a three-dimensional stereoscopic effect.
Step 102: determine dental type recommendation information according to the feature information of the face image.
Step 103: process the tooth region in the face image based on the dental type recommendation information.
Thus, according to the embodiments of the invention, feature information of the face image is acquired, dental type recommendation information is determined according to the acquired feature information, and the tooth region in the face image is processed based on the dental type recommendation information. That is, a suitable tooth model can be matched according to the feature information of the face image and used to process the tooth region, so that the teeth in the tooth region are adjusted according to, and matched with, the facial features. The teeth thus become more attractive as a whole, meeting the user's need to beautify the tooth image.
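As a rough illustration of steps 101 to 103, the matching in step 102 could be sketched as below. The function name, the model library, and the "suits" field are all hypothetical; the patent does not specify a data format.

```python
# Minimal sketch of step 102, assuming a pre-stored tooth model library.
# All names ("suits", "recommend_tooth_model") are illustrative only.
def recommend_tooth_model(face_shape, model_library):
    """Return the first stored model whose suitable face shapes
    include the detected face shape."""
    for model in model_library:
        if face_shape in model["suits"]:
            return model
    return None  # no recommendation available

library = [
    {"name": "square_set", "suits": ["square", "oval"]},
    {"name": "tiger_tooth_set", "suits": ["round", "heart"]},
]
# Step 101 would supply the face shape; step 103 would apply the model.
print(recommend_tooth_model("round", library)["name"])  # tiger_tooth_set
```

In a real implementation the matching would compare richer feature information (face shape plus facial features), but the control flow would be of this form.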
Optionally, combined tooth models are pre-stored, each combined tooth model comprising face feature information and a plurality of single tooth models adjacent in position that correspond to that face feature information;
the determining dental type recommendation information according to the feature information of the face image comprises the following steps:
determining, from the pre-stored combined tooth models, a target combined tooth model matched with the feature information of the face image;
determining the target combined tooth model as the dental type recommendation information;
the processing the tooth region in the face image based on the dental type recommendation information comprises the following steps:
replacing the teeth in the tooth region with the target combined tooth model.
Each combined tooth model comprises a plurality of single tooth models adjacent in position, together with the facial features that those models suit. The embodiment of the invention can therefore match a corresponding combined tooth model according to the facial features, so that the teeth fit the face and the overall appearance is further improved. In addition, since several adjacent single teeth are replaced as a whole, the embodiment provides users with more ways to process the tooth image, further meeting the needs of different users.
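One possible, purely illustrative way to represent such a pre-stored combined tooth model — a group of adjacent single tooth models plus the facial features the group suits — is sketched below; none of the field names come from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class SingleToothModel:
    position: int   # index of the tooth position in the dental arch (assumed)
    mesh: str       # placeholder for the model's 3D mesh data

@dataclass
class CombinedToothModel:
    face_shapes: list                          # facial features the group suits
    teeth: list = field(default_factory=list)  # adjacent single tooth models

group = CombinedToothModel(face_shapes=["oval"],
                           teeth=[SingleToothModel(7, "m7"),
                                  SingleToothModel(8, "m8")])
print(len(group.teeth))  # 2
```

Replacing "teeth in the tooth region" with such a group would then swap all of its adjacent single tooth models at once.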
Optionally, the determining, from the pre-stored combined tooth models, a target combined tooth model matched with the feature information of the face image includes:
determining, from the pre-stored combined tooth models, at least one combined tooth model matched with the feature information of the face image, and displaying the matched combined tooth models;
receiving a first input;
in response to the first input, determining, from the displayed combined tooth models, the combined tooth model targeted by the first input, and taking it as the target combined tooth model.
The first input may be a touch operation performed by the user, on the touch screen of the terminal displaying the combined tooth models, on one of the displayed combined tooth models.
As described above, the embodiment of the invention can match corresponding combined tooth models according to the facial features, and the user can select a satisfactory one from the matched combined tooth models as the target combined tooth model, which then replaces the teeth in the tooth region of the face image. Because the user picks, according to actual needs, a satisfactory model from those matched to the facial features, the user's actual needs are better met and the user experience is improved.
In addition, an adaptive 3D fusion technique may be employed when replacing the teeth in the tooth region of the face image with the target combined tooth model.
Optionally, the image processing method further includes:
detecting a protruding portion in the tooth region;
and clearing the protruding portion in the event that a protruding portion is present in the tooth region.
Detecting the protruding portion in the tooth region means detecting whether a bucktooth (a protruding tooth) is present in the tooth region.
Specifically, the detecting of the protruding portion in the tooth region comprises:
processing the tooth region with a triangulation algorithm to obtain a triangulation network of the tooth region;
detecting whether a target mesh exists among the meshes of the triangulation network, the difference between the height of the target mesh and the height of its adjacent meshes being outside a preset range;
and determining the tooth portion formed by the target mesh as the protruding portion.
That is, the tooth region is processed with a triangulation algorithm, and each resulting triangulated mesh is checked for target meshes that are markedly higher than their adjacent meshes.
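A minimal sketch of this check, assuming the triangulation step has already produced per-mesh heights and an adjacency table (both hypothetical data structures), might look like:

```python
# Hedged sketch of the protrusion check: flag any mesh whose height
# differs from every adjacent mesh by more than a preset tolerance.
def find_protruding(heights, adjacency, tol=0.5):
    # heights: mesh index -> height; adjacency: mesh index -> neighbour indices
    protruding = []
    for i, h in heights.items():
        diffs = [abs(h - heights[j]) for j in adjacency[i]]
        if diffs and min(diffs) > tol:  # outside the range vs. all neighbours
            protruding.append(i)
    return protruding

heights = {0: 1.0, 1: 1.1, 2: 2.4, 3: 1.05}
adjacency = {0: [1, 3], 1: [0, 2, 3], 2: [1], 3: [0, 1]}
print(find_protruding(heights, adjacency))  # [2]
```

The tolerance plays the role of the patent's "preset range"; its value here is invented for the example.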
In addition, when a protruding portion is detected in the tooth region, it can be marked in the tooth region of the face image, making the position of the protruding portion clearer.
Optionally, the clearing the protruding portion includes:
processing the tooth region by adopting a triangulation algorithm;
the triangulated mesh of the protruding portion is adjusted to clear the protruding portion.
Regarding triangulation: for a curved surface, triangulation splits the surface into patches that must satisfy two conditions: (1) each patch is a curved triangle; (2) any two such curved triangles on the surface either do not intersect, or intersect exactly along one common edge (they cannot share two or more edges at the same time). Thus, in the embodiment of the invention, processing the tooth region of the face image with a triangulation algorithm yields a triangulated mesh.
Further, optionally, the adjusting the triangulation grid of the protruding portion comprises:
the height of the triangulation grid of the protruding part is adjusted, so that the difference between the height of the triangulation grid of the protruding part and the height of the triangulation grid adjacent to the protruding part is within a preset range;
or alternatively
Receiving a target input;
in response to the target input, adjusting a height of the triangulated mesh of the protruding portion according to the target input.
The target input may be a sliding operation performed by the user, on the touch screen of the terminal displaying the tooth region, on the protruding portion.
That is, when the triangulated mesh of the protruding portion is adjusted to clear the protrusion, the processing can be performed automatically by the terminal or adjusted manually by the user, meeting the usage needs of different users.
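The automatic clearing path might be sketched as follows, reusing the same hypothetical heights/adjacency representation as in the detection sketch: each flagged mesh is pulled back to the mean height of its adjacent meshes so that the height difference returns to the preset range. The tolerance value is illustrative.

```python
# Hedged sketch of the automatic clearing path (names are illustrative).
def clear_protrusion(heights, adjacency, targets, tol=0.5):
    adjusted = dict(heights)
    for i in targets:
        neighbour_mean = sum(heights[j] for j in adjacency[i]) / len(adjacency[i])
        adjusted[i] = neighbour_mean  # height difference now inside the range
    return adjusted

heights = {0: 1.0, 1: 1.1, 2: 2.4, 3: 1.05}
adjacency = {0: [1, 3], 1: [0, 2, 3], 2: [1], 3: [0, 1]}
print(clear_protrusion(heights, adjacency, targets=[2])[2])  # 1.1
```

The manual path would instead set the flagged mesh's height from the distance of the user's sliding gesture.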
Preferably, the method further comprises:
acquiring a ratio of individual tooth length to gum width in the tooth region;
and adjusting the length of the single tooth and/or the width of the gum when the ratio of the length of the single tooth to the width of the gum is outside a preset range.
A ratio of single-tooth length to gum width outside the preset range indicates that the proportion between the tooth length and the gum is unsuitable. In that case, the embodiment of the invention adjusts the length of the single tooth and/or the width of the gum so that the adjusted ratio falls within the preset range, bringing the proportion up to aesthetic requirements and making the teeth more attractive as a whole.
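As an illustration of this adjustment, one could clamp the single-tooth length so that the ratio falls back onto the nearest bound of the preset range; the 1.2-1.6 bounds used here are invented for the sketch, as the patent does not specify them.

```python
# Illustrative clamp of the single-tooth-length / gum-width ratio into a
# preset range [lo, hi]; the bounds are assumptions, not from the patent.
def adjust_tooth_to_gum(tooth_len, gum_width, lo=1.2, hi=1.6):
    ratio = tooth_len / gum_width
    if ratio < lo:
        return lo * gum_width   # lengthen the tooth
    if ratio > hi:
        return hi * gum_width   # shorten the tooth
    return tooth_len            # already within the preset range

print(adjust_tooth_to_gum(10.0, 5.0))  # 8.0 (ratio 2.0 clamped to hi = 1.6)
```

An equivalent variant could hold the tooth length fixed and adjust the gum width instead, as the patent allows either.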
Preferably, a plurality of individual tooth models are stored in advance; the image processing method further includes:
determining a target tooth position in the tooth region to be adjusted;
determining a target single tooth model matched with the target tooth position from a plurality of single tooth models stored in advance;
replacing teeth at the target tooth position in the tooth region with the target single tooth model.
Thus, the embodiment of the invention can also replace a single tooth with a single tooth model, providing users with more ways to process the tooth image and further meeting the needs of different users.
Further, the determining a target single tooth model matched with the target tooth position from a plurality of single tooth models stored in advance comprises:
determining at least one single tooth model matched with the target tooth position from a plurality of single tooth models stored in advance according to the target tooth position, and displaying the single tooth model;
receiving a second input;
and responding to the second input, determining a single tooth model aimed by the second input from the displayed single tooth models, and taking the single tooth model as a target single tooth model.
The second input may be a touch operation performed by the user, on the touch screen of the terminal displaying the single tooth models, on one of the displayed single tooth models.
As described above, the embodiment of the invention can select the single tooth to be replaced according to the user's actual operation and automatically recommend matched single tooth models according to the position of that tooth; the user can then select a satisfactory one from the recommended models as the target single tooth model, which replaces the tooth in the tooth region. Because the user picks a satisfactory model from the automatic recommendations according to actual needs, the user's actual needs are better met and the user experience is improved.
In addition, when the single tooth model to be adjusted is replaced by the target single tooth model, the adaptive 3D fusion technology can be adopted.
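The position-based matching of single tooth models described above might be sketched as below; the tooth-position numbering and field names are assumptions, not part of the patent.

```python
# Illustrative matching of pre-stored single tooth models to a target tooth
# position; the "positions" numbering is a made-up arch index.
def match_single_models(target_position, models):
    return [m for m in models if target_position in m["positions"]]

models = [
    {"name": "incisor_a", "positions": [7, 8, 9, 10]},
    {"name": "canine_b", "positions": [6, 11]},
]
# The matched list would be displayed for the user's second input.
print([m["name"] for m in match_single_models(11, models)])  # ['canine_b']
```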
Preferably, the image processing method further includes:
receiving a third input;
in response to the third input, the position and/or size of teeth in the tooth region is adjusted.
The third input may be a sliding operation performed by the user, on the touch screen of the terminal displaying the tooth region, within the tooth region.
That is, in the embodiment of the invention, the position and/or size of teeth in the tooth region of the face image can be adjusted according to the user's actual operation, further meeting the user's operating needs. For example, the user may fine-tune the teeth by pinching and dragging with the fingers on the touch screen of the terminal displaying the tooth region.
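As an example of such a size adjustment, a pinch gesture could be mapped to scaling a tooth's outline about its centroid; this is only one plausible implementation of the third input, not the patent's specified method.

```python
# Hypothetical handler for the third input: scale a tooth's 2D outline
# about its centroid by a pinch factor (>1 enlarges, <1 shrinks).
def scale_tooth(vertices, factor):
    cx = sum(x for x, _ in vertices) / len(vertices)
    cy = sum(y for _, y in vertices) / len(vertices)
    return [(cx + (x - cx) * factor, cy + (y - cy) * factor)
            for x, y in vertices]

# Shrink a triangular outline to half size around its centroid (1.0, 1.0).
print(scale_tooth([(0.0, 0.0), (2.0, 0.0), (1.0, 3.0)], 0.5))
# [(0.5, 0.5), (1.5, 0.5), (1.0, 2.0)]
```

Position adjustment would be the analogous translation of all vertices by the drag offset.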
From the above, it can be seen that the embodiments of the present invention can detect the protruding portion in the tooth region (i.e., detect whether a bucktooth exists) and remove it, adjust the ratio of tooth length to gum width, replace several adjacent single teeth with a combined tooth model, replace a single tooth with a single tooth model, and adjust the position and/or size of teeth according to the user's actual operation. It should be noted that these adjustment modes for teeth may be combined arbitrarily, and their order is not limited.
In summary, the embodiments of the invention can adjust the shape and size of teeth, adjust the gum width, replace the teeth to be adjusted with pre-stored combined tooth models and single tooth models, and fine-tune individual unsatisfactory teeth. Various kinds of image processing on tooth images are thus realized, making the tooth image more attractive as a whole and meeting the user's various adjustment requirements.
The embodiment of the present invention also provides a terminal, as shown in fig. 2, the terminal 200 includes:
a feature acquisition module 201, configured to acquire feature information of a face image, wherein the feature information comprises a face shape and/or facial features;
a recommendation information determining module 202, configured to determine dental type recommendation information according to feature information of the face image;
the first processing module 203 is configured to process a tooth area in the face image based on the dental type recommendation information.
Optionally, combined tooth models are pre-stored, each combined tooth model comprising face feature information and a plurality of single tooth models adjacent in position that correspond to that face feature information;
the recommendation information determining module 202 includes:
a first model matching unit for determining a target combined tooth model matched with the feature information of the face image from the pre-stored combined tooth models;
a recommendation information determining unit, configured to determine the target combined tooth model as the dental type recommendation information;
the first processing module 203 includes:
a replacement unit, configured to replace the teeth in the tooth region with the target combined tooth model.
Optionally, the first model matching unit includes:
a matching subunit, configured to determine, from the pre-stored combined tooth models, at least one combined tooth model matched with the feature information of the face image, and display the matched combined tooth models;
an input receiving subunit for receiving a first input;
and a model determining subunit, configured to determine, in response to the first input, the combined tooth model targeted by the first input from the displayed combined tooth models, and take it as the target combined tooth model.
Optionally, the terminal further includes:
a detection module for detecting a protruding portion in the tooth region;
a clearing module, configured to clear the protruding portion when a protruding portion exists in the tooth region.
Optionally, the clearing module includes:
the triangulation unit is used for processing the tooth area by adopting a triangulation algorithm;
an adjustment unit for adjusting the triangulated mesh of the protruding portion to clear the protruding portion.
Optionally, the terminal further includes:
the ratio acquisition module is used for acquiring the ratio of the length of a single tooth to the width of a gum in the tooth area;
and the second processing module is used for adjusting the length of the single tooth and/or the width of the gum when the ratio of the length of the single tooth to the width of the gum is out of a preset range.
Optionally, a plurality of individual tooth models are pre-stored; the terminal further comprises:
a tooth position determining module for determining a target tooth position in the tooth region to be adjusted;
the model matching module is used for determining a target single tooth model matched with the target tooth position from a plurality of single tooth models stored in advance;
a third processing module for replacing teeth at the target tooth position in the tooth region with the target single tooth model.
Optionally, the model matching module includes:
a second model matching unit, configured to determine, according to the target tooth position, at least one single tooth model that matches the target tooth position from a plurality of single tooth models stored in advance, and display the single tooth model;
an input receiving unit for receiving a second input;
and a model determining unit, configured to determine, in response to the second input, the single tooth model targeted by the second input from the displayed single tooth models, and take it as the target single tooth model.
Optionally, the terminal further includes:
an input receiving module for receiving a third input;
a fourth processing module for adjusting the position and/or size of teeth in the tooth region in response to the third input.
As can be seen from the above, the terminal 200 of the embodiment of the present invention can acquire feature information of a face image, determine dental type recommendation information according to the acquired feature information, and process the tooth region in the face image based on the dental type recommendation information. That is, a suitable tooth model can be matched according to the feature information of the face image and used to process the tooth region, so that the teeth in the tooth region are adjusted according to, and matched with, the facial features, making the teeth more attractive as a whole and meeting the user's need to beautify the tooth image.
Embodiments of the present invention also provide a terminal. As shown in fig. 3, the terminal 300 includes, but is not limited to: a radio frequency unit 301, a network module 302, an audio output unit 303, an input unit 304, a sensor 305, a display unit 306, a user input unit 307, an interface unit 308, a memory 309, a processor 310, and a power supply 311. Those skilled in the art will appreciate that the terminal structure shown in fig. 3 does not limit the terminal; the terminal may include more or fewer components than shown, combine certain components, or arrange components differently. In the embodiment of the invention, the terminal includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The processor 310 is configured to: acquire feature information of a face image, wherein the feature information comprises a face shape and/or facial features; determine dental type recommendation information according to the feature information of the face image; and process the tooth region in the face image based on the dental type recommendation information.
Thus, the terminal 300 of the embodiment of the present invention can acquire feature information of a face image, determine dental type recommendation information according to the acquired feature information, and process the tooth region in the face image based on the dental type recommendation information. That is, a suitable tooth model can be matched according to the feature information of the face image and used to process the tooth region, so that the teeth in the tooth region are adjusted according to, and matched with, the facial features, making the teeth more attractive as a whole and meeting the user's need to beautify the tooth image.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 301 may be used to receive and send signals during information transmission or a call; specifically, it receives downlink data from a base station and delivers it to the processor 310 for processing, and it transmits uplink data to the base station. Typically, the radio frequency unit 301 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 301 may communicate with networks and other devices through a wireless communication system.
The terminal provides wireless broadband internet access to the user through the network module 302, such as helping the user to send and receive e-mail, browse web pages, access streaming media, etc.
The audio output unit 303 may convert audio data received by the radio frequency unit 301 or the network module 302 or stored in the memory 309 into an audio signal and output as sound. Also, the audio output unit 303 may also provide audio output (e.g., a call signal reception sound, a message reception sound, etc.) related to a specific function performed by the terminal 300. The audio output unit 303 includes a speaker, a buzzer, a receiver, and the like.
The input unit 304 is used to receive an audio or video signal. The input unit 304 may include a graphics processor (Graphics Processing Unit, GPU) 3041 and a microphone 3042. The graphics processor 3041 processes image data of still pictures or video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 306, stored in the memory 309 (or another storage medium), or transmitted via the radio frequency unit 301 or the network module 302. The microphone 3042 may receive sound and process it into audio data; in a telephone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 301 for output.
The terminal 300 further comprises at least one sensor 305, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor, which can adjust the brightness of the display panel 3061 according to the brightness of ambient light, and a proximity sensor, which can turn off the display panel 3061 and/or the backlight when the terminal 300 is moved to the ear. As one kind of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in all directions (generally three axes) and, when stationary, the magnitude and direction of gravity; it can be used to recognize terminal gestures (such as landscape/portrait switching, related games, and magnetometer gesture calibration) and vibration-recognition functions (such as a pedometer and tap detection). The sensor 305 may further include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which are not described here.
The display unit 306 is used to display information input by a user or information provided to the user. The display unit 306 may include a display panel 3061, and the display panel 3061 may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 307 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the terminal. Specifically, the user input unit 307 includes a touch panel 3071 and other input devices 3072. The touch panel 3071, also referred to as a touch screen, may collect touch operations performed by the user on or near it (for example, operations performed on or near the touch panel 3071 with any suitable object or accessory such as a finger or a stylus). The touch panel 3071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position of the user's touch and the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 310, and receives and executes commands sent by the processor 310. In addition, the touch panel 3071 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 3071, the user input unit 307 may include other input devices 3072, which may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick; these are not described in detail here.
Further, the touch panel 3071 may be overlaid on the display panel 3061, and when the touch panel 3071 detects a touch operation thereon or thereabout, the touch operation is transmitted to the processor 310 to determine a type of touch event, and then the processor 310 provides a corresponding visual output on the display panel 3061 according to the type of touch event. Although in fig. 3, the touch panel 3071 and the display panel 3061 are two independent components to implement the input and output functions of the terminal, in some embodiments, the touch panel 3071 and the display panel 3061 may be integrated to implement the input and output functions of the terminal, which is not limited herein.
The interface unit 308 is an interface through which an external device is connected to the terminal 300. For example, the external devices may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 308 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the terminal 300 or may be used to transmit data between the terminal 300 and an external device.
Memory 309 may be used to store software programs as well as various data. The memory 309 may mainly include a program storage area and a data storage area; the program storage area may store an operating system and application programs required for at least one function (such as a sound playing function or an image playing function), and the data storage area may store data created according to the use of the terminal (such as audio data and a phonebook). In addition, the memory 309 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage devices.
The processor 310 is a control center of the terminal, connects various parts of the entire terminal using various interfaces and lines, and performs various functions of the terminal and processes data by running or executing software programs and/or modules stored in the memory 309 and calling data stored in the memory 309, thereby performing overall monitoring of the terminal. Processor 310 may include one or more processing units; preferably, the processor 310 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 310.
The terminal 300 may further include a power source 311 (e.g., a battery) for powering the various components, and preferably the power source 311 may be logically coupled to the processor 310 via a power management system that performs functions such as managing charging, discharging, and power consumption.
In addition, the terminal 300 includes some functional modules, which are not shown, and will not be described herein.
The embodiment of the invention also provides a computer-readable storage medium on which a computer program is stored. The computer program, when executed by a processor, implements the processes of the above image processing method embodiment and can achieve the same technical effects; to avoid repetition, details are not described here again. The computer-readable storage medium may be, for example, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the method according to the embodiments of the present invention.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the above-described embodiments, which are merely illustrative and not restrictive. In light of the present invention, those of ordinary skill in the art may make many other forms without departing from the spirit of the present invention and the scope of the claims, all of which fall within the protection of the present invention.

Claims (10)

1. An image processing method, comprising:
acquiring feature information of a face image, wherein the feature information comprises a face shape and/or five sense organs;
determining dental type recommendation information according to the characteristic information of the face image;
processing the tooth area in the face image based on the dental type recommended information;
the determining dental type recommendation information according to the feature information of the face image comprises the following steps:
determining a target combined tooth model matched with the characteristic information of the face image from the pre-stored combined tooth models;
determining the target combined tooth model as tooth recommendation information;
the processing the tooth area in the face image based on the dental recommendation information comprises the following steps:
replacing teeth in the tooth region with the target composite tooth model.
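The matching step of claim 1 — selecting a pre-stored combined tooth model that fits the acquired face-shape feature — can be illustrated with a minimal sketch. The model library, model names, and face-shape tags below are hypothetical placeholders, not part of the patent:

```python
from dataclasses import dataclass

@dataclass
class CombinedToothModel:
    name: str
    face_shapes: set  # face shapes this model is tagged as suiting, e.g. {"oval"}

# Hypothetical pre-stored combined tooth models (names/tags are illustrative).
MODEL_LIBRARY = [
    CombinedToothModel("natural", {"oval", "heart"}),
    CombinedToothModel("broad", {"square", "round"}),
]

def match_combined_model(face_shape, library=MODEL_LIBRARY):
    """Return the first pre-stored combined tooth model whose tags
    match the face shape extracted from the face image, or None."""
    for model in library:
        if face_shape in model.face_shapes:
            return model
    return None

print(match_combined_model("round").name)  # broad
```

In the claimed method the matched model then replaces the teeth in the detected tooth region; here the replacement step is omitted since it depends on the rendering pipeline.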
2. The image processing method according to claim 1, wherein the combined tooth model includes face information and a plurality of mutually adjacent single tooth models whose positions correspond to the face feature information.
3. The image processing method according to claim 2, wherein the determining a target combined tooth model matching the feature information of the face image from among the previously stored combined tooth models includes:
determining at least one combined tooth model matched with the characteristic information of the face image from the pre-stored combined tooth models, and displaying the combined tooth models;
receiving a first input;
in response to the first input, a combined tooth model for which the first input is directed is determined from the displayed combined tooth models and is used as a target combined tooth model.
4. The image processing method according to claim 1, characterized in that the image processing method further comprises:
detecting a protruding portion in the tooth region;
in the event that a protruding portion is present in the tooth region, the protruding portion is cleared.
5. The image processing method according to claim 4, wherein the removing the protruding portion includes:
processing the tooth region by adopting a triangulation algorithm;
the triangulated mesh of the protruding portion is adjusted to clear the protruding portion.
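The mesh adjustment of claim 5 can be suggested with a much-simplified 2-D stand-in: instead of a full triangulation, a vertex on the tooth contour that protrudes beyond the line through its neighbours is projected back onto that line. The function, threshold, and contour representation are illustrative assumptions, not the patent's algorithm:

```python
def clear_protrusion(contour, threshold=2.0):
    """Flatten any contour vertex that protrudes beyond the line through
    its two neighbours by more than `threshold` pixels (a crude stand-in
    for adjusting the triangulated mesh of the protruding portion)."""
    out = list(contour)
    for i in range(1, len(out) - 1):
        (x0, y0), (x1, y1), (x2, y2) = out[i - 1], out[i], out[i + 1]
        # Midpoint of the neighbours approximates the local edge line.
        my = (y0 + y2) / 2
        if abs(y1 - my) > threshold:   # vertex sticks out of the edge
            out[i] = (x1, my)          # project it back onto the edge
    return out

edge = [(0, 0), (1, 5), (2, 0)]        # middle vertex protrudes upward
print(clear_protrusion(edge))          # [(0, 0), (1, 0.0), (2, 0)]
```

A real implementation would triangulate the tooth region (e.g. with a Delaunay-style triangulation) and relocate the mesh vertices of the protruding portion in the same spirit.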
6. The image processing method according to claim 1, characterized in that the image processing method further comprises:
acquiring a ratio of individual tooth length to gum width in the tooth region;
and adjusting the length of the single tooth and/or the width of the gum when the ratio of the length of the single tooth to the width of the gum is outside a preset range.
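The ratio check of claim 6 amounts to clamping the tooth-length-to-gum-width ratio into a preset range. The bounds below (1.2–2.0) are invented for illustration; the patent does not specify the preset range:

```python
def adjust_tooth_gum(tooth_len, gum_width, lo=1.2, hi=2.0):
    """If tooth_len / gum_width falls outside the assumed preset range
    [lo, hi], rescale the tooth length so the ratio sits on the nearest
    bound; otherwise leave both measurements unchanged."""
    ratio = tooth_len / gum_width
    if ratio < lo:
        tooth_len = lo * gum_width
    elif ratio > hi:
        tooth_len = hi * gum_width
    return tooth_len, gum_width

print(adjust_tooth_gum(30, 10))  # ratio 3.0 > hi, so -> (20.0, 10)
```

The claim equally allows adjusting the gum width instead of (or alongside) the tooth length; the sketch fixes the gum width only to keep the example short.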
7. The image processing method according to claim 1, wherein a plurality of individual tooth models are stored in advance; the image processing method further includes:
determining a target tooth position in the tooth region to be adjusted;
determining a target single tooth model matched with the target tooth position from a plurality of single tooth models stored in advance;
replacing teeth at the target tooth position in the tooth region with the target single tooth model.
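The lookup in claims 7 and 8 — finding the stored single tooth models that match a target tooth position, then displaying them for selection — can be sketched as a keyed store. The positions and model names are hypothetical:

```python
# Hypothetical pre-stored single tooth models, keyed by tooth position.
SINGLE_TOOTH_MODELS = {
    "central_incisor": ["incisor_a", "incisor_b"],
    "canine": ["canine_a"],
}

def candidate_models(target_position):
    """Return the stored single tooth models matching the target tooth
    position, ready to be displayed for the user's second input."""
    return SINGLE_TOOTH_MODELS.get(target_position, [])

print(candidate_models("canine"))  # ['canine_a']
```

The user's second input would then pick one candidate as the target single tooth model that replaces the tooth at the target position.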
8. The image processing method according to claim 7, wherein the determining a target individual tooth model matching the target tooth position from among a plurality of individual tooth models stored in advance includes:
determining at least one single tooth model matched with the target tooth position from a plurality of single tooth models stored in advance according to the target tooth position, and displaying the single tooth model;
receiving a second input;
and responding to the second input, determining a single tooth model aimed by the second input from the displayed single tooth models, and taking the single tooth model as a target single tooth model.
9. The image processing method according to claim 1, wherein the image processing method further comprises:
receiving a third input;
in response to the third input, the position and/or size of teeth in the tooth region is adjusted.
10. A terminal, comprising:
the feature acquisition module is used for acquiring feature information of the face image, wherein the feature information comprises facial forms and/or five sense organs;
the recommendation information determining module is used for determining dental type recommendation information according to the characteristic information of the face image;
the first processing module is used for processing the tooth area in the face image based on the dental type recommended information;
the recommendation information determining module includes:
a first model matching unit for determining a target combined tooth model matched with the feature information of the face image from the pre-stored combined tooth models;
a recommendation information determining unit configured to determine the target combined tooth model as tooth recommendation information;
the first processing module includes:
a replacement unit for replacing teeth in the tooth region with the target composite tooth model.
CN201910151708.XA 2019-02-28 2019-02-28 Image processing method and terminal Active CN109903218B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910151708.XA CN109903218B (en) 2019-02-28 2019-02-28 Image processing method and terminal


Publications (2)

Publication Number Publication Date
CN109903218A CN109903218A (en) 2019-06-18
CN109903218B true CN109903218B (en) 2023-06-02

Family

ID=66945899

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910151708.XA Active CN109903218B (en) 2019-02-28 2019-02-28 Image processing method and terminal

Country Status (1)

Country Link
CN (1) CN109903218B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113436734B (en) * 2020-03-23 2024-03-05 北京好啦科技有限公司 Tooth health assessment method, equipment and storage medium based on face structure positioning
CN111918089A (en) * 2020-08-10 2020-11-10 广州繁星互娱信息科技有限公司 Video stream processing method, video stream display method, device and equipment
CN113096049A (en) * 2021-04-26 2021-07-09 北京京东拓先科技有限公司 Recommendation method and device for picture processing scheme

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101613159B1 (en) * 2014-12-31 2016-04-20 오스템임플란트 주식회사 Automatic dental image registration method, apparatus, and recording medium thereof
CN109272466A (en) * 2018-09-19 2019-01-25 维沃移动通信有限公司 A kind of tooth beautification method and device


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
3D face reconstruction method based on binocular stereo vision; Jia Beibei et al.; CAAI Transactions on Intelligent Systems (《智能系统学报》); 2009-12-15 (No. 06); full text *

Also Published As

Publication number Publication date
CN109903218A (en) 2019-06-18

Similar Documents

Publication Publication Date Title
CN109461117B (en) Image processing method and mobile terminal
CN108712603B (en) Image processing method and mobile terminal
CN108492246B (en) Image processing method and device and mobile terminal
CN109685915B (en) Image processing method and device and mobile terminal
CN111047511A (en) Image processing method and electronic equipment
CN107644396B (en) Lip color adjusting method and device
CN108683850B (en) Shooting prompting method and mobile terminal
CN109903218B (en) Image processing method and terminal
CN108881782B (en) Video call method and terminal equipment
CN107786811B (en) A kind of photographic method and mobile terminal
CN109167914A (en) A kind of image processing method and mobile terminal
CN110113532A (en) A kind of filming control method, terminal and computer readable storage medium
CN109688325B (en) Image display method and terminal equipment
CN109671034B (en) Image processing method and terminal equipment
CN108712574B (en) Method and device for playing music based on images
CN109542321A (en) A kind of control method and device of screen display content
CN107563353B (en) Image processing method and device and mobile terminal
CN109639981B (en) Image shooting method and mobile terminal
CN111091519A (en) Image processing method and device
CN108259756B (en) Image shooting method and mobile terminal
CN112733673B (en) Content display method and device, electronic equipment and readable storage medium
CN108536272B (en) Method for adjusting frame rate of application program and mobile terminal
CN109446993A (en) A kind of image processing method and mobile terminal
CN109144369A (en) A kind of image processing method and terminal device
CN109727191B (en) Image processing method and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TG01 Patent term adjustment