CN114972069A - Adjusting method, adjusting device and electronic equipment - Google Patents
- Publication number
- CN114972069A (application CN202210425536.2A)
- Authority
- CN
- China
- Prior art keywords
- points
- key
- curve
- point
- target
- Prior art date
- Legal status: Pending (the status listed is an assumption by Google Patents, not a legal conclusion; no legal analysis has been performed)
Classifications
- All under G—Physics; G06—Computing, calculating or counting; G06T—Image data processing or generation, in general:
- G06T 5/80: Geometric correction (under G06T 5/00, Image enhancement or restoration)
- G06T 7/11: Region-based segmentation (under G06T 7/00, Image analysis; G06T 7/10, Segmentation; edge detection)
- G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods (under G06T 7/70)
- G06T 2207/30201: Face (under G06T 2207/00, Indexing scheme for image analysis or image enhancement; G06T 2207/30196, Human being; person)
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Processing (AREA)
Abstract
The application discloses an adjustment method, an adjustment apparatus, and an electronic device, belonging to the technical field of image processing. The adjustment method includes: acquiring key points in a case that a target image is distorted, wherein the key points are frame points of a first target in the target image; acquiring a mutation point corresponding to each key point, wherein a mutation point is the point, among the boundary points of the face region in the target image, at the minimum distance from the corresponding key point; determining a distortion region in the face region based on the key points and the mutation points; and adjusting the pixel values of the distortion region to a target pixel value.
Description
Technical Field
The application belongs to the technical field of image processing, and particularly relates to an adjusting method, an adjusting device and electronic equipment.
Background
In a shooting scene, transparent objects such as cups, eyeglass lenses, and glass refract light, so that other objects appear deformed in the captured picture. Portraits are the most common shooting scene, and when a person wearing glasses is photographed, the refraction of the lenses combined with the shooting angle can deform the contour of the face, which not only spoils the appearance of the portrait but also increases the difficulty of face recognition. With conventional repair methods, the user must operate corresponding software after shooting to correct the deformed part, for example retouching with Photoshop, which is cumbersome.
Disclosure of Invention
An object of the embodiments of the present application is to provide an adjusting method, an adjusting apparatus, and an electronic device, which can solve the problem that an object in an image is deformed due to refraction.
In a first aspect, an embodiment of the present application provides an adjustment method, where the method includes:
acquiring key points in a case that a target image is distorted, wherein the key points are frame points of a first target in the target image;
acquiring a mutation point corresponding to each of the key points, wherein a mutation point is the point, among boundary points of the face region in the target image, at the minimum distance from the corresponding key point;
determining a distortion region in the face region based on the key points and the mutation points;
and adjusting pixel values of the distortion region to a target pixel value.
In a second aspect, an embodiment of the present application provides an adjustment apparatus, including:
the first determining module is used for acquiring a key point under the condition that a target image is distorted, wherein the key point is a frame point of a first target in the target image;
the second determining module is used for acquiring a mutation point corresponding to each of the key points, wherein a mutation point is the point, among boundary points of the face region in the target image, at the minimum distance from the corresponding key point;
a third determining module, configured to determine a distortion region where distortion occurs in the face region based on the key points and the mutation points;
and the first adjusting module is used for adjusting the pixel value of the distortion area to a target pixel value.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor and a memory, where the memory stores a program or instructions executable on the processor, and the program or instructions, when executed by the processor, implement the adjustment method according to the first aspect.
In a fourth aspect, an embodiment of the present application provides a readable storage medium on which a program or instructions are stored; when executed by a processor, the program or instructions implement the adjustment method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the adjustment method according to the first aspect.
In a sixth aspect, the present application provides a computer program product, which is stored in a storage medium and executed by at least one processor to implement the adjusting method according to the first aspect.
In the embodiments of the application, the distortion region of the face region is determined from the key points and mutation points of the face region in the target image, and the pixel values of that region are then adjusted, so that the distortion of the face region is eliminated. With this technical scheme, a distorted face can be corrected automatically whenever distortion occurs in an image, which improves correction efficiency and preserves the appearance of the image. Compared with schemes in which the user corrects the distortion manually, the distortion region that needs correction can be determined accurately, avoiding influence on other targets in the image, so the correction accuracy is higher.
Drawings
Fig. 1 is a flowchart of an adjusting method provided in an embodiment of the present application;
fig. 2 is a schematic diagram of an image to be processed in an adjustment method provided in an embodiment of the present application;
FIG. 3 is a schematic diagram of key points in an adjustment method provided in an embodiment of the present application;
FIG. 4 is a second flowchart of an adjusting method according to an embodiment of the present application;
FIG. 5 is a schematic diagram of key points and mutation points in an adjustment method provided in an embodiment of the present application;
fig. 6 is a third flowchart of an adjusting method provided in the embodiment of the present application;
FIG. 7 is a fourth flowchart of an adjusting method provided in the embodiments of the present application;
FIG. 8 is a schematic diagram of a target edge line in an adjustment method according to an embodiment of the present disclosure;
fig. 9 is a schematic effect diagram of an adjusting method provided in an embodiment of the present application;
FIG. 10 is a fifth flowchart of an adjustment method provided in the embodiments of the present application;
FIG. 11 is a schematic structural diagram of an adjusting apparatus according to an embodiment of the present disclosure;
fig. 12 is a schematic structural diagram of an electronic device provided in an embodiment of the present application;
fig. 13 is a second schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar elements and do not necessarily describe a particular sequence or chronological order. It should be appreciated that data so used may be interchanged under appropriate circumstances, so that embodiments of the application may be practiced in sequences other than those illustrated or described herein. Moreover, the terms "first", "second", and the like do not limit the number of elements; for example, a first element may be one element or more than one. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the preceding and succeeding objects.
The following describes embodiments of the present application in detail by using specific examples and application scenarios thereof with reference to the accompanying drawings.
The embodiment of the application first provides an adjusting method. For example, the adjustment method may be applied to an electronic device with a display function, such as a mobile phone, a tablet computer, a Personal Computer (PC), a wearable electronic device (e.g., a smart watch), an Augmented Reality (AR)/Virtual Reality (VR) device, and an in-vehicle device, which is not limited in this embodiment of the present application.
Distortion refers to the phenomenon in which the imaged shape of an object changes because of the behavior of light rays. When a transparent object is present in a shooting scene, it refracts light and thereby distorts other objects. When a user needs to remove the distortion from an image and restore the original shape of a distorted object, the image can be processed with the technical scheme provided by this embodiment.
Fig. 1 shows a flowchart of an adjusting method provided in an embodiment of the present application. As shown in fig. 1, the adjusting method may include the steps of:
step 100: under the condition that the target image is distorted, key points are obtained, and the key points are frame points of a first target in the target image.
The target image may be a plurality of types of images, such as a photo currently taken by a camera, an image in a video, an image stored on an electronic device, or an image received by an application, and the like, which is not particularly limited in this embodiment.
When processing the target image, it may first be determined whether the target image contains a distorted object. If no distortion exists in the target image, that is, no distorted object exists, the target image does not need to be adjusted. The first target refers to a transparent object in the target image that can cause light refraction, for example glass, a cup, or glasses.
If distortion exists in the target image because light is refracted by the transparent first target, the distorted key points on the first target can be determined. Points of the target image that intersect the first target may be distorted, i.e., shifted from their original positions, after the light is refracted. Key points are frame points of the first target, e.g., points on the frame of the glasses. Refraction distortion of the target image begins at the key points; that is, the key points, or points adjacent to them, may deviate from their original positions after refraction by the transparent object.
Whether the target image is distorted, and the distorted key points, can be determined by a trained distortion detection model. Specifically, the target image is input into the distortion detection model, which outputs a label and the key points of the target image. The label indicates whether the target image is distorted; illustratively, a label of 1 means the target image is distorted, and a label of 0 means no distortion exists.
In an exemplary embodiment, the distortion detection model may include a feature extraction layer, a classification layer, and a key-point detection layer. The target image is input into the distortion detection model; the feature extraction layer extracts features from the target image and outputs them. These features are fed into the classification layer, which outputs the label. The feature extraction layer is also connected to the key-point detection layer, so that feeding the extracted features into the key-point detection layer yields the output key points. When the label output by the classification layer indicates that the target image has no distortion, the key-point output is empty, i.e., no key points exist in the target image.
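By way of a non-limiting illustration, a minimal sketch of such a two-head network in PyTorch might look as follows; the convolutional backbone, the layer sizes, and the fixed count of four key points are assumptions of the sketch, not details given by this embodiment.

```python
import torch
import torch.nn as nn

class DistortionDetector(nn.Module):
    """Feature extraction layer shared by a classification head and a key-point head."""

    def __init__(self, num_keypoints: int = 4):
        super().__init__()
        self.num_keypoints = num_keypoints
        # Feature extraction layer: a tiny convolutional backbone (assumed).
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Classification layer: two logits (distorted / not distorted).
        self.classifier = nn.Linear(32, 2)
        # Key-point detection layer: (x, y) coordinates for each key point.
        self.keypoint_head = nn.Linear(32, num_keypoints * 2)

    def forward(self, x: torch.Tensor):
        f = self.features(x)
        return self.classifier(f), self.keypoint_head(f).view(-1, self.num_keypoints, 2)
```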
The distortion detection model can be obtained by training on annotated sample images. First, a certain number of sample images are collected; these may include images with and without distortion. A label and key points are then annotated for each sample image. The label indicates whether distortion exists in the sample image; the label value may be set in advance so that 1 indicates the sample image is distorted and 0 indicates it is not. Alternatively, other label values may characterize whether distortion exists, for example "yes" for a distorted sample image and "no" for an undistorted one.
The annotated sample images serve as the training data set of the distortion detection model. In each training iteration, a sample image is input into the model, which outputs a predicted label and predicted key points for that image. The parameters of the model are adjusted by computing the loss between the predicted label and the annotated label, and the loss between the predicted key points and the annotated key points. The adjusted parameters are then used to process the next sample image, and training repeats until the loss between the predicted and annotated labels, and the loss between the predicted and annotated key points, each fall below their corresponding preset values, at which point training of the distortion detection model is finished.
Various loss functions and optimizers may be used when training the distortion detection model: for example, the label loss and key-point loss may be computed with a cross-entropy loss function or a mean-square-error loss function, and the parameters may be adjusted with stochastic gradient descent or the Adam optimizer; this embodiment imposes no special limitation.
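Continuing the sketch above (again an illustration under stated assumptions, not the disclosed implementation), one training iteration with a cross-entropy label loss, a mean-square-error key-point loss, and an Adam optimizer might look like this:

```python
import torch
import torch.nn.functional as F

def train_step(model, optimizer, images, gt_labels, gt_keypoints):
    """One iteration: cross-entropy loss on the label, MSE loss on the key points."""
    optimizer.zero_grad()
    label_logits, pred_keypoints = model(images)
    loss_cls = F.cross_entropy(label_logits, gt_labels)
    # Assumption of this sketch: only distorted samples (label 1) carry
    # key-point supervision, so undistorted samples are masked out.
    mask = gt_labels.float().view(-1, 1, 1)
    loss_kpt = F.mse_loss(pred_keypoints * mask, gt_keypoints * mask)
    loss = loss_cls + loss_kpt
    loss.backward()
    optimizer.step()
    return loss.item()

# optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # Adam, as the text mentions
```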
The trained distortion detection model can then detect the target image; the first target is taken as glasses for illustration. When a user wears glasses while being photographed, the captured image may look like image 201 in fig. 2, and image 201 may serve as the target image. Inputting image 201 into the distortion detection model yields the label and key points of image 201. Illustratively, the classification layer may map the features output by the feature extraction layer to a value between 0 and 1 through a softmax function, and the type of image 201 is then determined by comparing the label value output by the classification layer against a preset value between 0 and 1: if the label value exceeds the preset value, image 201 is distorted. For example, with a preset value of 0.5, a label greater than 0.5 means distortion exists in image 201, and a label not greater than 0.5 means it does not. Other preset values, such as 0.6 or 0.7, may also be used; this embodiment imposes no special limitation.
If the label of image 201 indicates that there is distortion in image 201, the key points of the distortion detection model output are obtained. A plurality of keypoints may be included in image 201 and the distortion detection model may output the coordinates of each keypoint in image 201. The keypoints can be labeled at corresponding positions on the image 201 according to the coordinates of each keypoint. As shown in fig. 3, the image 201 has four key points, which are a key point a, a key point B, a key point C, and a key point D.
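As a hedged sketch of this inference path, using the hypothetical DistortionDetector above: the 0.5 threshold follows the example in the text, while the tensor shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def detect(model, image_tensor, threshold=0.5):
    """Softmax the classification logits and compare the distortion probability
    with the preset threshold (0.5 here, as in the example above)."""
    model.eval()
    with torch.no_grad():
        label_logits, keypoints = model(image_tensor.unsqueeze(0))
        p_distorted = F.softmax(label_logits, dim=1)[0, 1].item()
    if p_distorted <= threshold:
        return False, None          # no distortion: the key-point output is treated as empty
    return True, keypoints[0]       # coordinates of each key point in the image
```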
In this embodiment, machine learning is applied to the refraction-distortion detection process: a distortion detection model for detecting refraction-distortion key points is obtained through machine learning. This avoids the high error rate of manual processing of refraction distortion and improves the accuracy of refraction-distortion detection.
Step 200: acquiring a mutation point corresponding to each of the key points, wherein a mutation point is the point, among boundary points of the face region in the target image, at the minimum distance from the corresponding key point.
A mutation point is a distorted position point in the face region; that is, a mutation point is a distorted point on the boundary of the face region, a position revealed by the refraction of the transparent object. When the face shape is distorted, all the distorted boundary points on the contour line of the face region, i.e., the mutation points, can be determined from that contour line. Specifically, fig. 4 shows a flowchart for obtaining the mutation point corresponding to each key point.
As shown in fig. 4, the method includes:
Step 401: determining a connected region of the face region in the target image. Illustratively, the connected region of the face region may be obtained through a skin color detection algorithm; skin color detection algorithms can segment the skin regions of an image in various color spaces. For example, with image 201 as the target image, a connected region of the face in image 201 can be obtained through skin color detection. In addition, the connected region of the face region may also be obtained by other algorithms, such as the Two-Pass algorithm; this embodiment imposes no special limitation.
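A minimal sketch of such a skin-color-based connected-region step using OpenCV follows; the YCrCb thresholds are a common heuristic rather than disclosed values, and the helper name face_connected_region is hypothetical.

```python
import cv2
import numpy as np

def face_connected_region(image_bgr):
    """Skin mask in YCrCb space, keeping the largest connected component
    as the face region."""
    ycrcb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))   # heuristic Cr/Cb range
    num, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    if num <= 1:                       # only the background component was found
        return np.zeros_like(mask)
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])       # skip background label 0
    return np.where(labels == largest, 255, 0).astype(np.uint8)
```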
Step 402: acquiring a plurality of boundary points on the boundary curve of the connected region. Illustratively, the boundary curve of the connected region can be obtained by an edge detection algorithm. The curve is then differentiated at each point, and the pixel coordinates where the derivative equals 0 are taken as boundary points. A boundary curve may contain a plurality of boundary points.
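For illustration, a sketch of step 402 with OpenCV contours; the sign-change test is a finite-difference stand-in for the derivative-equals-zero criterion and is an assumption of the sketch.

```python
import cv2
import numpy as np

def boundary_points(face_mask):
    """Extract the boundary curve from the mask, then keep points where the
    discrete derivative changes sign, i.e. the local extrema of the curve."""
    contours, _ = cv2.findContours(face_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    curve = max(contours, key=cv2.contourArea).reshape(-1, 2)   # (N, 2) points (x, y)
    dy = np.diff(curve[:, 1].astype(np.int32))
    sign_change = np.where(np.diff(np.sign(dy)) != 0)[0] + 1    # derivative crosses zero
    return curve[sign_change]
```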
Step 403: calculating the distance between each boundary point and each key point, and determining from the boundary points, according to these distances, the mutation point corresponding to each key point, the key points corresponding to the mutation points one to one. Illustratively, given the coordinates of the key points and of the boundary points, the distance between each boundary point and each key point can be computed with, e.g., the cosine distance or the Euclidean distance between two points. For example, suppose 4 boundary points in total are determined on the boundary curve of the face in image 201. The distance between each boundary point and key point A is computed, and the boundary point at minimum distance from key point A is taken as the mutation point corresponding to key point A. Likewise, the boundary point closest to each remaining key point is determined in turn as that key point's mutation point, so that key points and mutation points correspond one to one. As shown in fig. 5, key point A corresponds to mutation point a, key point B to mutation point b, key point C to mutation point c, and key point D to mutation point d.
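Step 403 reduces to a nearest-neighbor search, sketched below with NumPy; Euclidean distance is chosen here, though the embodiment equally allows cosine distance.

```python
import numpy as np

def mutation_points(keypoints, bpoints):
    """For each key point, pick the boundary point at minimum Euclidean distance,
    giving the one-to-one key-point/mutation-point pairing described above."""
    k = np.asarray(keypoints, dtype=np.float32)[:, None, :]   # (K, 1, 2)
    b = np.asarray(bpoints, dtype=np.float32)[None, :, :]     # (1, N, 2)
    d = np.linalg.norm(k - b, axis=2)                         # (K, N) distance matrix
    return np.asarray(bpoints)[d.argmin(axis=1)]              # nearest boundary point per key point
```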
It is understood that in image 201, the distortion caused by the glasses (i.e., the first target) shrinks the face outline relative to the normal face shape, so the mutation points lie inside the key points. When the first target produces a magnifying distortion effect instead, the mutation points may lie outside the key points.
Determining the mutation points on the curve in this way keeps the calculation simple: mutation points can be extracted from the curve quickly, which improves calculation efficiency and saves computation and computing resources.
In an exemplary embodiment, the mutation points may also be determined by a trained model. To distinguish it from the distortion detection model above, the model for determining mutation points is referred to as the first model. During training, the first model is trained with sample images annotated with mutation points, so that the trained first model can output the mutation points of a target image. Determining mutation points with a model has the advantage that accuracy is not affected even in complex cases where the shape of the first target is quite irregular; it adapts to more complex distortion and improves the accuracy of the determined mutation points.
In addition, another neural network model, referred to as a second model, may be trained on a training data set in which both the mutation points and the key points are annotated. The second model can then detect the key points and mutation points in the target image simultaneously, which simplifies the processing flow and improves efficiency.
Next, with continued reference to fig. 1, step 300: determining a distortion region in the face region based on the key points and the mutation points.
The distortion region is the region enclosed by key points and mutation points. A target image may contain a plurality of distortion regions. Referring to fig. 5, the distortion regions in image 201 comprise one on the left side of the face and one on the right. Taking the left one as an example, it is the region enclosed by key point A, key point B, mutation point b, and mutation point a. To determine the distortion region, the connecting curve between key point A and key point B must be estimated. Fig. 6 shows a flowchart for determining a distortion region.
As shown in fig. 6, the method includes the following:
step 601: and acquiring a connected key curve in the face region based on the mutation points. Specifically, according to the above embodiment, after the connected region of the face region is determined, the abrupt point on the boundary curve can be obtained according to the boundary curve of the connected region. The key connecting curve in the face area is the part between the mutation points on the boundary curve. Similarly to the distortion region, the face region may also include a plurality of connected key curves. Taking the image 201 in fig. 5 as an example, a portion between the abrupt point a and the abrupt point b on the boundary curve of the human face, i.e., the curve ab, is a connected key curve in the left face distortion region. Similarly, the portion of the boundary curve between the discontinuity c and the discontinuity d is the connected key curve in the distortion region on the right. The connected key curve is determined from the connected region of the face region, so that the connectivity between the connected key curve and the face region can be ensured, the target edge line can be ensured to be connected with the face region when the target edge line is obtained through conversion, and the smoothness of the face region is improved.
Step 602: acquiring a target edge line of the face region based on the key points and the connected key curve. The target edge line is the estimated connecting curve between the key points. When the first target refracts the face region, the refracted rays are generally parallel to one another, so the distorted connected key curve and the target edge line between the key points are similar in shape. Once the transformation mode and transformation amount of the connected key curve are determined, the connected key curve between the mutation points can be transformed into the target edge line between the key points.
Fig. 7 shows a flowchart of determining a target edge line in the present application. As shown in fig. 7, the method includes the following:
step 701: and determining the offset and the scaling quantity of the connected key curve based on the key points and the mutation points. According to the corresponding relation between the key points and the mutation points, target edge lines corresponding to connected key curves among the mutation points can be determined. Illustratively, referring to fig. 5, in the image 201, the connected key curve ab corresponds to a target edge line between the key point a and the key point B. When the target edge line AB is determined, the offset and the scaling of the connected key curve AB need to be calculated.
The offset can be understood as the amount by which the connected key curve needs to be moved. Illustratively, the offset may comprise a horizontal component and a vertical component, denoted Px and Py respectively. For example, taking the center point of the target image (such as image 201) as the origin of the coordinate system, with key point A(x_A, y_A), key point B(x_B, y_B) and the corresponding mutation points a(x_a, y_a), b(x_b, y_b), the horizontal offset of the connected key curve ab can be expressed as Px = [(x_a + x_b) - (x_A + x_B)]/2. That is, the horizontal offset of curve ab is the average difference between the horizontal coordinates of mutation points a, b and those of key points A, B. Similarly, the vertical offset of curve ab can be expressed as Py = [(y_a + y_b) - (y_A + y_B)]/2.
It should be understood that in image 201, with the center as origin, a positive Px = [(x_a + x_b) - (x_A + x_B)]/2 indicates that the connected key curve ab needs to move in the positive horizontal direction. For the connected key curve cd, the horizontal coordinates of mutation points c and d are smaller than those of key points C and D, so its horizontal offset is negative, indicating movement in the negative horizontal direction. In other words, the offset is a signed quantity, and its formula can be adapted to the actual coordinate system of the target image, e.g., Px = [(x_A + x_B) - (x_a + x_b)]/2; this embodiment is not limited in this respect.
The scaling amount is the extent to which the connected key curve needs to be enlarged or reduced. Illustratively, it can be calculated as the ratio of the distance between mutation points a and b to the distance between key points A and B. By the two-point distance formula, the distance between mutation points a and b is d1 = sqrt((x_a - x_b)^2 + (y_a - y_b)^2); similarly, the distance between key points A and B is d2 = sqrt((x_A - x_B)^2 + (y_A - y_B)^2). The scaling amount of the connected key curve ab is then f = d1/d2.
Likewise, per step 701, the offset and scaling amount of the connected key curve between mutation points c and d can be calculated, yielding the offset and scaling amount of the connected key curve of every distortion region in image 201.
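The formulas above translate directly into code; a short sketch (the point layout and helper name are assumptions):

```python
import math

def offset_and_scale(A, B, a, b):
    """Px, Py, d1, d2 and f exactly as written above; A, B are key points and
    a, b their mutation points, each an (x, y) pair in the image-centered frame."""
    Px = ((a[0] + b[0]) - (A[0] + B[0])) / 2.0
    Py = ((a[1] + b[1]) - (A[1] + B[1])) / 2.0
    d1 = math.dist(a, b)    # distance between the mutation points
    d2 = math.dist(A, B)    # distance between the key points
    return Px, Py, d1 / d2  # f = d1/d2, the scaling amount of curve ab
```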
Step 702: transforming the connected key curve based on the offset and the scaling amount to obtain the target edge line of the face region. Specifically, the connected key curve is first translated by the offset (Px, Py): mutation point a(x_a, y_a) on curve ab moves to (x_a + Px, y_a + Py), and mutation point b(x_b, y_b) moves to (x_b + Px, y_b + Py). The translated curve is then scaled by the scaling amount, i.e., multiplied by f: the translated point a at (x_a + Px, y_a + Py) becomes (f*(x_a + Px), f*(y_a + Py)). In summary, after the connected key curve ab is translated and scaled, the resulting curve is the target edge line between key point A and key point B in the face region. Fig. 8 shows the curves of image 201 after the connected key curves are transformed. With reference to figs. 5 and 8, transforming the connected key curve ab of image 201 yields the target edge line AB; likewise, translating and scaling the connected key curve cd by its offset and scaling amount yields the target edge line CD, thereby determining the target edge line at each distorted part of the face in image 201.
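A sketch of the translate-then-scale transformation of step 702, applied to every point of the connected key curve (helper name hypothetical):

```python
import numpy as np

def transform_curve(curve, Px, Py, f):
    """Translate each point of the connected key curve by (Px, Py), then scale
    by f, matching the (f*(x + Px), f*(y + Py)) mapping above."""
    pts = np.asarray(curve, dtype=np.float32)
    return f * (pts + np.array([Px, Py], dtype=np.float32))
```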
Furthermore, it is determined whether the two ends of the target edge line obtained by transforming the connected key curve extend beyond the key points, and any portions beyond the key points are trimmed, so that the target edge line terminates at the key points; this ensures the smoothness of the target edge line within the face region.
In general, the pre-refraction position of an object can be calculated from the refractive index of the transparent object and the spatial refraction process. In this embodiment, however, the two points on the connected key curve (mutation points a and b) and the two points on the target edge line (key points A and B) suffice to determine how far the connected key curve must be moved and how much it must be scaled, restoring the shape in the normal case. Compared with modeling the full physical refraction process, this calculation is simple and efficient.
With continuing reference to fig. 6, step 603: acquiring the distortion region based on the connected key curve and the target edge line. Once the target edge line is obtained, the region enclosed by the connected key curve and the target edge line constitutes the distortion region. Taking image 201 as an example, after the target edge line between key point A and key point B is determined, the region bounded by key point A, key point B, mutation point a, and mutation point b is the distortion region on the left side of the face in image 201. Similarly, the region formed by the target edge line between key points C and D together with the connected key curve cd between mutation points c and d is the distortion region on the right side of the face.
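For illustration, the enclosed region of step 603 can be rasterized into a binary mask; the point ordering assumed here (concatenating one curve with the reverse of the other walks the boundary once) is an assumption of the sketch.

```python
import cv2
import numpy as np

def distortion_region_mask(shape_hw, target_edge_line, connected_key_curve):
    """Binary mask of the region enclosed by the target edge line and the
    connected key curve, both given as ordered (x, y) point arrays."""
    polygon = np.vstack([np.asarray(target_edge_line),
                         np.asarray(connected_key_curve)[::-1]]).astype(np.int32)
    mask = np.zeros(shape_hw, dtype=np.uint8)
    cv2.fillPoly(mask, [polygon], 255)   # fill the closed boundary
    return mask
```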
In this embodiment, when the pre-distortion shape of the face region needs to be restored, the target edge line of the face region's normal shape can be recovered automatically and the distortion region identified, without manual involvement, which is efficient. Moreover, because the distorted connected key curve is transformed, a target edge line closer to the normal shape is obtained, improving the accuracy of the distortion region.
Step 400: and adjusting the pixel value of the distortion area to the target pixel value.
In this embodiment, the face region is sampled to obtain its pixel values. The target pixel value may be the average of the pixel values of all pixels in the face region; it may be any value within the range between the maximum and minimum pixel values in the face region; or it may be a pixel value whose frequency of occurrence in the face region exceeds a predetermined value, e.g., 0.9. The pixel value of each pixel in the distortion region is adjusted according to the target pixel value so that it takes the same or a similar color as the face region. For example, a pixel value range may be determined from the pixel values of the face region, and the original pixel values in the distortion region adjusted into that range. Methods such as bilinear interpolation or nearest-neighbor interpolation may be used to determine each pixel of the distortion region; this embodiment imposes no special limitation. After the color of the face region is supplemented into the distortion region, its pixels present a color similar to the face region, so the distortion of the face region is eliminated and distortion correction is achieved.
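A sketch of step 400 using the mean face color as the target pixel value (one of the options above; the helper names are hypothetical):

```python
import cv2
import numpy as np

def fill_distortion_region(image_bgr, region_mask, face_mask):
    """Set every pixel of the distortion region to a target pixel value,
    here the mean color of the face region."""
    mean_bgr = cv2.mean(image_bgr, mask=face_mask)[:3]   # mean B, G, R over the face
    out = image_bgr.copy()
    out[region_mask > 0] = np.array(mean_bgr, dtype=np.uint8)
    return out
```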
Continuing with the face in image 201 as an example, fig. 9 shows the face of image 201 after distortion correction. Referring to fig. 9, after the pixel values in the distortion region of the face are adjusted, the normal shape of the face is restored. In this embodiment, a normal face shape is obtained even when the user is photographed wearing glasses, meeting the user's needs and improving the usability of the image.
Fig. 10 shows a flowchart of the adjustment method of the present application. Taking a face image as an example, referring to fig. 10, the method may include the following steps. Step 1001: training the distortion detection model. Step 1002: inputting the face image into the distortion detection model to obtain the label and key points of the face image; in this embodiment, the target image is the face image. Step 1003: judging whether the face image is distorted; if distortion exists, performing step 1004. In general, in an image with glasses, the face is distorted by the refraction of the glasses. Step 1004: determining the target edge lines in the face image, i.e., the target edge lines of the distorted face region. Step 1005: determining the distortion regions. Step 1006: adjusting the color of the distortion regions to obtain a normal face picture; for example, since the distorted regions belong to a face, they may be color-filled with skin color. If step 1003 determines that the face image has no distortion, the image needs no processing and the next target image can be handled.
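Purely as an illustrative thread tying the steps of fig. 10 to the hypothetical helpers sketched in the preceding sections; the pairing of key points into consecutive (A, B) pairs is an assumption, and curve_between, which would slice the boundary contour between two mutation points, is a further hypothetical helper left undefined.

```python
def adjust(image_bgr, model, image_tensor):
    """End-to-end sketch of the fig. 10 flow under the assumptions stated above."""
    distorted, kpts = detect(model, image_tensor)              # steps 1002-1003
    if not distorted:
        return image_bgr                                       # no processing needed
    face_mask = face_connected_region(image_bgr)
    bpts = boundary_points(face_mask)
    mpts = mutation_points(kpts.numpy(), bpts)
    out = image_bgr
    for i in range(0, len(mpts), 2):                           # one (A, B) pair per region
        A, B = kpts[i].tolist(), kpts[i + 1].tolist()
        a, b = mpts[i].tolist(), mpts[i + 1].tolist()
        Px, Py, f = offset_and_scale(A, B, a, b)
        curve = curve_between(face_mask, a, b)                 # hypothetical helper
        edge = transform_curve(curve, Px, Py, f)               # step 1004: target edge line
        mask = distortion_region_mask(out.shape[:2], edge, curve)   # step 1005
        out = fill_distortion_region(out, mask, face_mask)     # step 1006
    return out
```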
It is understood that, in the present embodiment, the process of correcting the face distortion caused by the glasses is described by taking the face image worn by the glasses as an example. However, the method in the present embodiment may also be applied to a scene for correcting image distortion caused by other transparent objects such as a water cup and glass, for example, distortion caused by the water cup to the background, and the present application is not limited thereto. In addition, each step shown in fig. 10 has been specifically described in the above embodiments, and is not described again here.
The execution subject of the adjustment method provided by the embodiments of the application may be an adjusting apparatus. In the embodiments of the present application, an adjusting apparatus executing the adjustment method is taken as an example to describe the adjusting apparatus corresponding to the adjustment method.
Fig. 11 shows a block diagram of an adjusting apparatus provided in an embodiment of the present application. As shown in fig. 11, the adjusting apparatus 1100 provided in this embodiment may include a first determining module 1101, a second determining module 1102, a third determining module 1103, and a first adjusting module 1104. Specifically, the first determining module 1101 may be configured to, when a target image is distorted, obtain a key point, where the key point is a frame point of a first target in the target image. The second determining module 1102 may be configured to obtain mutation points corresponding to the key points, where a mutation point is a point with a minimum distance from a key point in boundary points of the face region in the target image. The third determining module 1103 may be configured to determine a distorted region of the face region where distortion occurs based on the key points and the mutation points. The first adjusting module 1104 can be used for adjusting the pixel value of the distortion region to the target pixel value.
In an exemplary embodiment, the third determining module 1103 may specifically include: a first acquisition unit for acquiring a connected key curve in the face region based on the mutation points; a second acquisition unit for acquiring a target edge line of the face region based on the key points and the connected key curve; and a third acquisition unit for acquiring the distortion region based on the connected key curve and the target edge line.
In an exemplary embodiment, the first acquisition unit may specifically include: a first determining unit for determining a connected region of the face region in the target image; and a first acquisition subunit for acquiring, based on the boundary curve of the connected region, the connected key curve between mutation points on that boundary curve.
In an exemplary embodiment, the second acquisition unit specifically includes: a second determining unit for determining the offset and scaling amount of the connected key curve based on the key points and the mutation points; and a first transformation unit for transforming the connected key curve based on the offset and the scaling amount to obtain the target edge line of the face region.
In an exemplary embodiment, the second determining module 1102 specifically includes: a third determining unit for determining a connected region of the face region in the target image; a fourth acquisition unit for acquiring a plurality of boundary points on the boundary curve of the connected region; and a fourth determining unit for calculating the distance between each boundary point and each key point and determining, from the boundary points according to these distances, the mutation point corresponding to each key point, the key points corresponding to the mutation points one to one.
The adjusting apparatus 1100 of this embodiment determines the distortion region of the face region from the key points and mutation points of the distorted face region in the target image, and then adjusts the pixel values of that region so that the distortion is eliminated, automatically correcting the refraction distortion. Compared with manual correction by the user, the distortion region that needs correction can be determined accurately, avoiding influence on other targets in the image; the correction accuracy is higher, and the appearance of the image is preserved. Moreover, no manual involvement is required, saving labor time and cost.
The adjusting apparatus 1100 in the embodiment of the present application may be an electronic device, or may be a component in an electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. The electronic Device may be, for example, a Mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic Device, a Mobile Internet Device (MID), an Augmented Reality (AR)/Virtual Reality (VR) Device, a robot, a wearable Device, an ultra-Mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and may also be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine, a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The adjusting apparatus 1100 in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android (Android) operating system, an ios operating system, or other possible operating systems, and embodiments of the present application are not limited specifically.
The adjusting apparatus 1100 provided in this embodiment of the present application can implement each process implemented in the method embodiments in fig. 1 to fig. 10, and is not described here again to avoid repetition.
Optionally, as shown in fig. 12, an embodiment of the present application further provides an electronic device 1200, which includes a processor 1201 and a memory 1202. The memory 1202 stores a program or an instruction that can be executed on the processor 1201, and when the program or the instruction is executed by the processor 1201, the steps of the above-described embodiment of the adjustment method are implemented, and the same technical effect can be achieved, and in order to avoid repetition, the details are not described here again.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic device and the non-mobile electronic device described above.
Fig. 13 is a schematic hardware structure diagram of an electronic device implementing an embodiment of the present application.
The electronic device 1300 includes, but is not limited to: a radio frequency unit 1301, a network module 1302, an audio output unit 1303, an input unit 1304, a sensor 1305, a display unit 1306, a user input unit 1307, an interface unit 1308, a memory 1309, a processor 1310, and the like.
Those skilled in the art will appreciate that the electronic device 1300 may further comprise a power source (e.g., a battery) for supplying power to the various components, and the power source may be logically connected to the processor 1310 via a power management system, so as to manage charging, discharging, and power consumption management functions via the power management system. The electronic device structure shown in fig. 13 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is omitted here.
The processor 1310 is configured to: acquire key points in a case that a target image is distorted, the key points being frame points of a first target in the target image; acquire a mutation point corresponding to each key point, a mutation point being the point, among boundary points of the face region in the target image, at the minimum distance from the corresponding key point; determine a distortion region in the face region based on the key points and the mutation points; and adjust the pixel values of the distortion region to a target pixel value.
In some embodiments, the processor 1310 is further configured to obtain a connected key curve in the face region based on the mutation point; acquiring a target edge line of the face area based on the key points and the connected key curve; and acquiring a distortion area based on the connected key curve and the target edge line.
In some embodiments, the processor 1310 is further configured to determine a connected region of the face region in the target image; and acquiring a key connection curve between the mutation points on the boundary curve based on the boundary curve of the connection region.
In some embodiments, the processor 1310 is further configured to determine the offset and scaling amount of the connected key curve based on the key points and the mutation points, and to transform the connected key curve based on the offset and the scaling amount to obtain the target edge line of the face region.
In some embodiments, the processor 1310 is further configured to determine a connected region of the face region in the target image; acquiring a plurality of boundary points on a boundary curve based on the boundary curve of the connected region; and respectively calculating the distance between each boundary point and the key point, and determining the mutation points corresponding to the key points from the plurality of boundary points according to the distances, wherein the key points correspond to the mutation points one to one.
It should be noted that, in this embodiment, the electronic device may implement each process in the method embodiment in this embodiment and achieve the same beneficial effect, and for avoiding repetition, details are not described here.
It should be understood that in the embodiment of the present application, the input Unit 1304 may include a Graphics Processing Unit (GPU) 13041 and a microphone 13042, and the Graphics processor 13041 processes image data of still pictures or videos obtained by an image capturing apparatus (such as a camera) in a video capturing mode or an image capturing mode. The display unit 1306 may include a display panel 13061, and the display panel 13061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 1307 includes a touch panel 13071 and at least one of other input devices 13072. A touch panel 13071, also referred to as a touch screen. The touch panel 13071 may include two parts, a touch detection device and a touch controller. Other input devices 13072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
The memory 1309 may be used to store software programs as well as various data. The memory 1309 may mainly include a first storage area storing programs or instructions and a second storage area storing data, wherein the first storage area may store an operating system, application programs or instructions required for at least one function (such as a sound playing function, an image playing function, etc.), and the like. Further, memory 1309 can comprise volatile memory or nonvolatile memory, or memory 1309 can comprise both volatile and nonvolatile memory. The non-volatile Memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash Memory. The volatile Memory may be a Random Access Memory (RAM), a Static Random Access Memory (Static RAM, SRAM), a Dynamic Random Access Memory (Dynamic RAM, DRAM), a Synchronous Dynamic Random Access Memory (Synchronous DRAM, SDRAM), a Double Data Rate Synchronous Dynamic Random Access Memory (Double Data Rate SDRAM, ddr SDRAM), an Enhanced Synchronous SDRAM (ESDRAM), a Synchronous Link DRAM (SLDRAM), and a Direct Memory bus RAM (DRRAM). Memory 1309 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
Processor 1310 may include one or more processing units; optionally, the processor 1310 integrates an application processor, which mainly handles operations related to the operating system, user interface, application programs, etc., and a modem processor, which mainly handles wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into processor 1310.
The embodiments of the present application further provide a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the foregoing adjusting method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a computer read only memory ROM, a random access memory RAM, a magnetic or optical disk, and the like.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the foregoing adjustment method embodiment, and can achieve the same technical effect, and for avoiding repetition, details are not repeated here.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
Embodiments of the present application provide a computer program product, where the program product is stored in a storage medium, and the program product is executed by at least one processor to implement the processes of the foregoing adjustment method embodiments, and achieve the same technical effects, and in order to avoid repetition, details are not described here again.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, and certainly may also be implemented by hardware, although in many cases the former is the better implementation. Based on such an understanding, the technical solutions of the present application may be embodied in the form of a computer software product that is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and includes instructions for causing a terminal (such as a mobile phone, a computer, a server, or a network device) to perform the methods described in the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (12)
1. An adjustment method, comprising:
acquiring key points under the condition that a target image is distorted, wherein the key points are frame points of a first target in the target image;
respectively acquiring mutation points corresponding to the key points, wherein the mutation points are, among boundary points of a face region in the target image, the points at a minimum distance from the key points;
determining a distorted area in which distortion occurs in the face region based on the key points and the mutation points;
and adjusting pixel values of the distorted area to a target pixel value.
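By way of illustration only, the following is a minimal Python sketch of the four steps of claim 1, assuming OpenCV and NumPy. The inputs `key_points`, `face_mask`, and `target_value` stand in for results of upstream detection steps described elsewhere in the specification; their names and the convex-hull approximation of the distorted area are illustrative choices, not part of the claim.

```python
import numpy as np
import cv2

def adjust_distortion(image, key_points, face_mask, target_value):
    # Sketch of claim 1. `key_points` (N x 2, x/y pixel coordinates) are
    # frame points of the first target (e.g. a glasses frame) and
    # `face_mask` is a uint8 binary mask of the face region; both are
    # assumed to come from upstream detectors (illustrative names).
    kp = np.asarray(key_points, dtype=np.float32)
    # Boundary points of the face region.
    contours, _ = cv2.findContours(face_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    boundary = max(contours, key=cv2.contourArea).reshape(-1, 2).astype(np.float32)
    # Mutation points: the boundary points at minimum distance from the key points.
    dists = np.linalg.norm(boundary[None, :, :] - kp[:, None, :], axis=2)
    mutation_points = boundary[np.argmin(dists, axis=1)]
    # Distorted area, approximated here by the hull spanned by the key
    # points and their mutation points (claims 2 to 4 refine this step).
    hull = cv2.convexHull(np.vstack([kp, mutation_points]).astype(np.int32))
    region = np.zeros(face_mask.shape, dtype=np.uint8)
    cv2.fillConvexPoly(region, hull, 1)
    # Adjust the pixel values of the distorted area to the target value.
    adjusted = image.copy()
    adjusted[region.astype(bool)] = target_value
    return adjusted
```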
2. The adjustment method according to claim 1, wherein the determining a distorted area in which distortion occurs in the face region based on the key points and the mutation points comprises:
acquiring a connected key curve in the face region based on the mutation points;
acquiring a target edge line of the face region based on the key points and the connected key curve;
and acquiring the distorted area based on the connected key curve and the target edge line.
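A minimal sketch of the last step of claim 2, assuming the connected key curve and the target edge line are already available as ordered point sequences in pixel coordinates and span the same stretch of the face boundary (the function and argument names are illustrative):

```python
import numpy as np
import cv2

def distorted_region(image_shape, connected_key_curve, target_edge_line):
    # Sketch of the last step of claim 2: the distorted area is the
    # region enclosed between the connected key curve and the target
    # edge line, both assumed to be ordered N x 2 point sequences.
    # Walk along one curve and back along the other to close the polygon.
    polygon = np.vstack([np.asarray(connected_key_curve),
                         np.asarray(target_edge_line)[::-1]]).astype(np.int32)
    mask = np.zeros(image_shape[:2], dtype=np.uint8)
    cv2.fillPoly(mask, [polygon], 1)  # 1 marks pixels inside the distorted area
    return mask.astype(bool)
```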
3. The adjustment method according to claim 2, wherein the acquiring a connected key curve in the face region based on the mutation points comprises:
determining a connected region of the face region in the target image;
and acquiring the connected key curve between the mutation points on the boundary curve based on the boundary curve of the connected region.
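One conventional way to realize claim 3 is to take the largest connected component of a binary face mask as the connected region and walk its contour between the two mutation points. A sketch under those assumptions (`face_mask` is a hypothetical upstream input):

```python
import numpy as np
import cv2

def connected_key_curve(face_mask, mutation_a, mutation_b):
    # Sketch of claim 3. `face_mask` is a uint8 binary mask of the face
    # region; mutation_a/b are (x, y) mutation points lying on or near
    # the region boundary (both assumed inputs).
    # Connected region: the largest connected component of the mask.
    _, labels = cv2.connectedComponents(face_mask)
    counts = np.bincount(labels.ravel())
    counts[0] = 0  # ignore the background label
    component = (labels == int(np.argmax(counts))).astype(np.uint8)
    # Boundary curve of the connected region, as an ordered point list.
    contours, _ = cv2.findContours(component, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    curve = max(contours, key=cv2.contourArea).reshape(-1, 2)
    # Contour indices closest to the two mutation points.
    ia = int(np.argmin(np.linalg.norm(curve - np.asarray(mutation_a), axis=1)))
    ib = int(np.argmin(np.linalg.norm(curve - np.asarray(mutation_b), axis=1)))
    lo, hi = sorted((ia, ib))
    # The contour is closed, so two arcs join the mutation points;
    # take the shorter one as the connected key curve.
    inner = curve[lo:hi + 1]
    outer = np.vstack([curve[hi:], curve[:lo + 1]])
    return inner if len(inner) <= len(outer) else outer
```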
4. The adjustment method according to claim 2, wherein the acquiring a target edge line of the face region based on the key points and the connected key curve comprises:
determining an offset and a scaling amount of the connected key curve based on the key points and the mutation points;
and transforming the connected key curve based on the offset and the scaling amount to obtain the target edge line of the face region.
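Claim 4 leaves open how the offset and the scaling amount are derived from the key points and the mutation points. A minimal sketch under two assumed definitions, the offset as the mean displacement from the mutation points to the key points and the scaling amount as the ratio of the two point sets' spreads about their centroids:

```python
import numpy as np

def target_edge_line(connected_key_curve, key_points, mutation_points):
    # Sketch of claim 4 under two modeling assumptions not specified by
    # the claim: offset = mean mutation-to-key displacement, and
    # scaling amount = ratio of point-set spreads about their centroids.
    curve = np.asarray(connected_key_curve, dtype=np.float32)
    kp = np.asarray(key_points, dtype=np.float32)
    mp = np.asarray(mutation_points, dtype=np.float32)
    offset = (kp - mp).mean(axis=0)
    spread_kp = np.linalg.norm(kp - kp.mean(axis=0), axis=1).mean()
    spread_mp = np.linalg.norm(mp - mp.mean(axis=0), axis=1).mean()
    scale = spread_kp / max(spread_mp, 1e-6)  # guard against degenerate spread
    # Scale the curve about its centroid, then translate by the offset.
    centroid = curve.mean(axis=0)
    return (curve - centroid) * scale + centroid + offset
```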
5. The adjustment method according to claim 1, wherein the respectively acquiring mutation points corresponding to the key points comprises:
determining a connected region of the face region in the target image;
acquiring a plurality of boundary points on the boundary curve based on the boundary curve of the connected region;
and respectively calculating a distance between each boundary point and each key point, and determining, from the plurality of boundary points according to the distances, the mutation point corresponding to each key point, wherein the key points are in one-to-one correspondence with the mutation points.
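A plain nearest-neighbour search does not guarantee the one-to-one correspondence required by claim 5, since two key points may share a nearest boundary point. A minimal sketch that enforces the bijection with a minimum-cost assignment over the distance matrix (the use of SciPy here is an implementation choice, not mandated by the claim):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_mutation_points(boundary_points, key_points):
    # Sketch of claim 5: pick, for each key point, a distinct boundary
    # point of the connected face region so that the total distance is
    # minimal and the correspondence is one-to-one.
    bp = np.asarray(boundary_points, dtype=np.float32)  # M x 2
    kp = np.asarray(key_points, dtype=np.float32)       # N x 2, with N <= M
    # Distance from every key point to every boundary point.
    cost = np.linalg.norm(kp[:, None, :] - bp[None, :, :], axis=2)
    row_ind, col_ind = linear_sum_assignment(cost)      # optimal 1-to-1 match
    return bp[col_ind]  # row_ind is 0..N-1 in order, so this aligns with kp
```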
6. An adjustment device, comprising:
a first determining module, configured to acquire key points under the condition that a target image is distorted, wherein the key points are frame points of a first target in the target image;
a second determining module, configured to respectively acquire mutation points corresponding to the key points, wherein the mutation points are, among boundary points of a face region in the target image, the points at a minimum distance from the key points;
a third determining module, configured to determine a distorted area in which distortion occurs in the face region based on the key points and the mutation points;
and a first adjusting module, configured to adjust pixel values of the distorted area to a target pixel value.
7. The adjustment apparatus according to claim 6, wherein the third determination module comprises:
a first acquiring unit, configured to acquire a connected key curve in the face region based on the mutation points;
a second acquiring unit, configured to acquire a target edge line of the face region based on the key points and the connected key curve;
and a third acquiring unit, configured to acquire the distorted area based on the connected key curve and the target edge line.
8. The adjustment device according to claim 7, wherein the first acquiring unit comprises:
a first determining unit, configured to determine a connected region of the face region in the target image;
and a first acquiring subunit, configured to acquire the connected key curve between the mutation points on the boundary curve based on the boundary curve of the connected region.
9. The adjustment device according to claim 7, wherein the second acquiring unit comprises:
a second determining unit, configured to determine an offset and a scaling amount of the connected key curve based on the key points and the mutation points;
and a first transforming unit, configured to transform the connected key curve based on the offset and the scaling amount to obtain the target edge line of the face region.
10. The adjustment apparatus according to claim 6, wherein the second determination module comprises:
a third determining unit, configured to determine a connected region of the face region in the target image;
a fourth acquiring unit, configured to acquire a plurality of boundary points on the boundary curve based on the boundary curve of the connected region;
and a fourth determining unit, configured to respectively calculate a distance between each boundary point and each key point, and determine, from the plurality of boundary points according to the distances, the mutation point corresponding to each key point, wherein the key points are in one-to-one correspondence with the mutation points.
11. An electronic device comprising a processor and a memory, the memory storing a program or instructions executable on the processor, the program or instructions, when executed by the processor, implementing the adjustment method according to any one of claims 1 to 5.
12. A readable storage medium, on which a program or instructions are stored, the program or instructions, when executed by a processor, implementing the adjustment method according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210425536.2A CN114972069A (en) | 2022-04-21 | 2022-04-21 | Adjusting method, adjusting device and electronic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210425536.2A CN114972069A (en) | 2022-04-21 | 2022-04-21 | Adjusting method, adjusting device and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114972069A (en) | 2022-08-30 |
Family
ID=82979782
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210425536.2A Pending CN114972069A (en) | 2022-04-21 | 2022-04-21 | Adjusting method, adjusting device and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114972069A (en) |
Similar Documents
Publication | Title |
---|---|
CN108230383B (en) | Hand three-dimensional data determination method and device and electronic equipment | |
CN110443205B (en) | Hand image segmentation method and device | |
JP6011102B2 (en) | Object posture estimation method | |
CN112001859B (en) | Face image restoration method and system | |
CN113689578B (en) | Human body data set generation method and device | |
CN111695554B (en) | Text correction method and device, electronic equipment and storage medium | |
CN112561973A (en) | Method and device for training image registration model and electronic equipment | |
CN111738092B (en) | Method for recovering occluded human body posture sequence based on deep learning | |
CN115115552B (en) | Image correction model training method, image correction device and computer equipment | |
CN111723707A (en) | A gaze point estimation method and device based on visual saliency | |
CN114494347A (en) | Single-camera multi-mode sight tracking method and device and electronic equipment | |
CN116580169B (en) | Digital man driving method and device, electronic equipment and storage medium | |
CN112652020B (en) | Visual SLAM method based on AdaLAM algorithm | |
EP4093015A1 (en) | Photographing method and apparatus, storage medium, and electronic device | |
US9323981B2 (en) | Face component extraction apparatus, face component extraction method and recording medium in which program for face component extraction method is stored | |
CN111080754B (en) | Character animation production method and device for connecting characteristic points of head and limbs | |
CN115713794A (en) | Image-based sight line drop point estimation method and device | |
CN110781712A (en) | Human head space positioning method based on human face detection and recognition | |
CN114821048A (en) | Object segmentation method and related device | |
CN114283448A (en) | A child sitting posture reminder method and system based on head posture estimation | |
JP5051671B2 (en) | Information processing apparatus, information processing method, and program | |
CN114972069A (en) | Adjusting method, adjusting device and electronic equipment | |
CN112307799A (en) | Gesture recognition method, device, system, storage medium and device | |
CN117008761A (en) | Display method, display device, electronic equipment and storage medium | |
CN115908809A (en) | Target detection method and system based on scale division |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||