
CN114693514A - Image processing method, image processing device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN114693514A
CN114693514A (application CN202210313378.1A)
Authority
CN
China
Prior art keywords
target
image
face
processed
offset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210313378.1A
Other languages
Chinese (zh)
Inventor
马润欣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202210313378.1A priority Critical patent/CN114693514A/en
Publication of CN114693514A publication Critical patent/CN114693514A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T17/205Re-meshing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/02Affine transformations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/18Image warping, e.g. rearranging pixels individually
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The present disclosure relates to an image processing method, an apparatus, an electronic device, and a storage medium, the method including: acquiring a face image to be processed and a target processing item, wherein the target processing item is used for representing a processing mode of the face image to be processed; acquiring a target reference offset map corresponding to the target processing item, wherein a color value in the target reference offset map is used for representing an offset vector of each pixel point after the target processing item is processed on a reference image; transforming the target reference offset map according to the attitude angle of the face image to be processed to obtain a target offset map corresponding to the face image to be processed; and processing the face image to be processed according to the target offset map to obtain a target image. The method improves processing efficiency while preserving the effect of liquefaction deformation, uses a simple processing mode, reduces performance occupation, and lowers cost.

Description

Image processing method, image processing device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, an electronic device, and a storage medium.
Background
With the development of video communication technology and mobile devices, face image processing on the client, such as face beautification, has become a basic requirement of users. Face beautification technology can be applied in camera applications or video applications.
In the related art, the core of face beautification technology is to adjust facial organs and the outer contour through image deformation based on face key points. Image deformation methods fall mainly into two categories: liquefaction deformation and triangulation deformation. The liquefaction deformation mode requires many control parameters, so the adjustment process is cumbersome, the performance occupation is high, and the processing efficiency is low. The triangulation deformation mode depends on the fine design of the triangular mesh, and developing each function incurs a large development cost.
Disclosure of Invention
The present disclosure provides an image processing method, an image processing apparatus, an electronic device, and a storage medium, to at least solve the problems of low processing efficiency and high cost in the related art. The technical scheme of the disclosure is as follows:
according to a first aspect of an embodiment of the present disclosure, there is provided an image processing method including:
acquiring a face image to be processed and a target processing item, wherein the target processing item is used for representing a processing mode of the face image to be processed;
acquiring a target reference offset map corresponding to the target processing item, wherein a color value in the target reference offset map is used for representing an offset vector of each pixel point after the target processing item is processed on a reference image;
transforming the target reference offset image according to the attitude angle of the face image to be processed to obtain a target offset image corresponding to the face image to be processed;
and processing the face image to be processed according to the target offset image to obtain a target image.
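The final warping step of the method above — using an offset map whose color values encode per-pixel offset vectors — can be illustrated with a minimal, hypothetical NumPy sketch. This is not the patent's implementation: the function name and the convention that 0.5 in a channel means "no displacement" are assumptions made so that signed offsets can be stored as color values.

```python
import numpy as np

def apply_offset_map(image, offset_map):
    """Warp `image` by the per-pixel displacements decoded from `offset_map`.

    `offset_map` is a float array (H, W, 2) whose first/second channels store
    the offset vector of each pixel; 0.5 is assumed to encode "no displacement".
    """
    h, w = image.shape[:2]
    # Decode colors back into signed pixel offsets (assumed encoding).
    dx = np.round((offset_map[..., 0] - 0.5) * w).astype(int)
    dy = np.round((offset_map[..., 1] - 0.5) * h).astype(int)
    ys, xs = np.mgrid[0:h, 0:w]
    # Clamp the sampling coordinates to the image bounds.
    src_x = np.clip(xs + dx, 0, w - 1)
    src_y = np.clip(ys + dy, 0, h - 1)
    return image[src_y, src_x]
```

With an all-0.5 (identity) offset map the image is returned unchanged; each processing item then reduces to supplying a different offset map.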
Optionally, transforming the target reference offset map according to the attitude angle of the face image to be processed to obtain a target offset map corresponding to the face image to be processed, including:
performing triangular mesh division on the face image to be processed according to the face key points of the face image to be processed to obtain a target face mesh;
mapping the target reference offset map onto the target face grid according to the reference face grid corresponding to the reference image and the target face grid to obtain an initial offset map corresponding to the face image to be processed, wherein the reference face grid is obtained by triangulating the reference image based on a face key point of the reference image;
and adjusting the color value in the initial offset map according to the attitude angle of the face image to be processed to obtain a target offset map corresponding to the face image to be processed.
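The mapping in the second step — carrying the target reference offset map onto the target face grid through corresponding triangles of the reference and target meshes — can be sketched with barycentric coordinates. This is a minimal, hypothetical sketch; the helper names are inventions for illustration.

```python
import numpy as np

def barycentric(p, tri):
    """Barycentric coordinates of point p inside triangle tri (3 vertices)."""
    a, b, c = (np.asarray(v, dtype=float) for v in tri)
    m = np.column_stack([b - a, c - a])
    u, v = np.linalg.solve(m, np.asarray(p, dtype=float) - a)
    return np.array([1.0 - u - v, u, v])

def map_point(p, src_tri, dst_tri):
    """Carry point p from a triangle of one mesh to the matching triangle
    of the other mesh, keeping its barycentric coordinates fixed."""
    return barycentric(p, src_tri) @ np.asarray(dst_tri, dtype=float)
```

Looking up a reference offset map texel for a point of the target face mesh amounts to mapping the point into the corresponding reference triangle this way.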
Optionally, adjusting the color value in the initial offset map according to the pose angle of the face image to be processed to obtain a target offset map corresponding to the face image to be processed, including:
determining a transformation matrix according to the roll angle in the attitude angle and the proportion of the image size of the face image to be processed relative to the image size of the reference image, wherein the transformation matrix is used for representing the position transformation relation of the face image to be processed relative to the reference image;
determining the difference between the integer 1 and the yaw angle in the attitude angle as an adjustment coefficient of the color value in the initial offset map;
determining a triangular grid area average value of a target mapping point in the target face grid as a first area average value, determining a reference mapping point corresponding to the target mapping point in the reference face grid, determining a triangular grid area average value of the reference mapping point as a second area average value, and determining a ratio of the first area average value to the second area average value as a vertex scale of the target mapping point in the target face grid;
and adjusting the color value in the initial offset map according to the transformation matrix, the adjustment coefficient and the vertex scale to obtain a target offset map.
Optionally, adjusting the color value in the initial offset map according to the transformation matrix, the adjustment coefficient, and the vertex scale to obtain a target offset map, including:
according to the transformation matrix, the adjustment coefficient and the vertex scale, adjusting the r channel color value and the g channel color value in the initial offset map according to the following formula to obtain a target offset map:
[formula image: Figure BDA0003569178820000021]
wherein [Figure BDA0003569178820000031] is a binary vector representing the adjusted r channel color value and the adjusted g channel color value, [Figure BDA0003569178820000032] is a binary vector representing the r channel color value and the g channel color value in the initial offset map, rotMat represents the transformation matrix, degree represents the adjustment coefficient, and scale represents the vertex scale.
Optionally, the triangular mesh division is performed on the face image to be processed according to the face key points of the face image to be processed to obtain a target face mesh, including:
determining a target extension point corresponding to a face key point of the face image to be processed, wherein the distance between the target extension point and the face key point in the face image to be processed is greater than the distance between a reference extension point in the reference image and the face key point in the reference image, the reference extension point is an extension point corresponding to the face key point in the reference image, and the reference face grid is obtained by performing triangular grid division on the reference image based on the face key point in the reference image and the reference extension point;
and carrying out triangular mesh division on the face image to be processed according to the face key points of the face image to be processed and the target outward expansion points to obtain a target face mesh.
Optionally, processing the face image to be processed according to the target offset map to obtain a target image, including:
determining an offset vector corresponding to the target processing item of the face image to be processed according to the target offset image;
and carrying out offset processing on the offset vector on the pixel points in the face image to be processed to obtain the target image.
Optionally, determining, according to the target offset map, an offset vector corresponding to the target processing item for the to-be-processed face image, includes:
determining a transverse offset vector in the offset vectors according to the r channel color value in the target offset map;
and determining a longitudinal offset vector in the offset vectors according to the g channel color value in the target offset map.
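The two channel-to-axis rules above can be written as a small decoding helper. This is a hypothetical sketch: the 0.5-centred encoding and the normalization by image size are assumptions, not stated by the text.

```python
import numpy as np

def decode_offset_vectors(target_offset_map, width, height):
    """Split a target offset map into transverse (x) and longitudinal (y)
    offset vectors: r channel -> transverse, g channel -> longitudinal."""
    dx = (target_offset_map[..., 0] - 0.5) * width   # from the r channel
    dy = (target_offset_map[..., 1] - 0.5) * height  # from the g channel
    return dx, dy
```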
Optionally, before the obtaining of the target reference offset map corresponding to the target processing item, the method further includes:
carrying out liquefaction deformation processing on the target processing item on the reference image to obtain an offset vector of a pixel point in the reference image;
and generating the target reference offset map according to the offset vectors of the pixel points in the reference image.
Optionally, generating the target reference offset map according to the offset vector of the pixel point in the reference image includes:
determining the color value of the pixel point in the target reference offset image according to the offset vector of the pixel point in the reference image and the following formula:
[formula image: Figure BDA0003569178820000041]
wherein rgbColor is the color value of a pixel point in the target reference offset map, xyOffset is the offset vector of a pixel point in the reference image, [Figure BDA0003569178820000042] is a binary vector representing the preset r channel color adjustment value and the preset g channel color adjustment value of pixel points in the target reference offset map, m is the preset b channel color value of pixel points in the target reference offset map, and the a channel color value of pixel points in the target reference offset map is likewise a preset value.
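A minimal sketch of generating such a reference offset map, under the assumption that the formula adds a preset bias to the r/g channels and fills the b and a channels with preset constants. The names rg_bias, b_value and a_value are inventions for illustration, not the patent's notation:

```python
import numpy as np

def encode_reference_offset_map(xy_offsets, rg_bias=0.5, b_value=0.0, a_value=1.0):
    """Pack per-pixel liquefy offsets into an RGBA reference offset map.

    xy_offsets: float array (H, W, 2) of normalized offset vectors.
    """
    h, w = xy_offsets.shape[:2]
    rgba = np.empty((h, w, 4))
    rgba[..., 0:2] = xy_offsets + rg_bias  # r, g: offset vector plus preset adjustment
    rgba[..., 2] = b_value                 # b: preset color value
    rgba[..., 3] = a_value                 # a: preset color value
    return rgba
```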
Optionally, the method further includes:
splicing the target reference offset map and reference offset maps corresponding to other processing items except the target processing item into a texture image;
the acquiring a target reference offset map corresponding to the target processing item includes:
and acquiring a target reference offset map corresponding to the target processing item from the texture image.
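The splicing of per-item reference offset maps into one texture, and the later lookup of the target one, can be sketched as follows. The helper names and the simple side-by-side layout are assumptions for illustration:

```python
import numpy as np

def splice_offset_maps(maps):
    """Splice per-item reference offset maps side by side into one texture."""
    return np.concatenate(maps, axis=1)

def fetch_offset_map(texture, item_index, map_width):
    """Fetch the reference offset map of one processing item from the texture."""
    return texture[:, item_index * map_width:(item_index + 1) * map_width]
```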
According to a second aspect of the embodiments of the present disclosure, there is provided an image processing apparatus including:
the image acquisition module is configured to acquire a face image to be processed and a target processing item, wherein the target processing item is used for representing a processing mode of the face image to be processed;
a reference offset map obtaining module configured to perform obtaining of a target reference offset map corresponding to the target processing item, where a color value in the target reference offset map is used to represent an offset vector of each pixel after the target processing item is processed on a reference image;
the offset map transformation module is configured to transform the target reference offset map according to the attitude angle of the face image to be processed to obtain a target offset map corresponding to the face image to be processed;
and the image processing module is configured to process the face image to be processed according to the target offset image to obtain a target image.
Optionally, the offset map transforming module includes:
the mesh division unit is configured to execute triangular mesh division on the face image to be processed according to the face key points of the face image to be processed to obtain a target face mesh;
the offset map mapping unit is configured to map the target reference offset map onto the target face mesh according to a reference face mesh corresponding to the reference image and the target face mesh to obtain an initial offset map corresponding to the to-be-processed face image, wherein the reference face mesh is obtained by triangulating the reference image based on a face key point of the reference image;
and the offset map adjusting unit is configured to adjust the color value in the initial offset map according to the attitude angle of the face image to be processed to obtain a target offset map corresponding to the face image to be processed.
Optionally, the offset map adjusting unit includes:
a transformation matrix determining subunit configured to perform determining a transformation matrix according to a rolling angle in the attitude angle and a ratio of an image size of the face image to be processed with respect to an image size of the reference image, the transformation matrix being used for characterizing a position transformation relationship of the face image to be processed with respect to the reference image;
an adjustment coefficient determination subunit configured to perform determination of a difference between an integer 1 and a yaw angle in the attitude angle as an adjustment coefficient of a color value in the initial offset map;
a vertex scale determination subunit configured to perform determining a triangular mesh area average value of a target mapping point in the target face mesh as a first area average value, and determining a reference mapping point corresponding to the target mapping point in the reference face mesh, determining a triangular mesh area average value of the reference mapping point as a second area average value, and determining a ratio of the first area average value to the second area average value as a vertex scale of the target mapping point in the target face mesh;
and the offset map adjusting subunit is configured to perform adjustment on the color values in the initial offset map according to the transformation matrix, the adjustment coefficient and the vertex scale to obtain a target offset map.
Optionally, the offset map adjusting subunit is configured to perform:
according to the transformation matrix, the adjustment coefficient and the vertex scale, adjusting the r channel color value and the g channel color value in the initial offset map according to the following formula to obtain a target offset map:
[formula image: Figure BDA0003569178820000051]
wherein [Figure BDA0003569178820000061] is a binary vector representing the adjusted r channel color value and the adjusted g channel color value, [Figure BDA0003569178820000062] is a binary vector representing the r channel color value and the g channel color value in the initial offset map, rotMat represents the transformation matrix, degree represents the adjustment coefficient, and scale represents the vertex scale.
Optionally, the mesh dividing unit includes:
a target extension point determining subunit, configured to perform determining a target extension point corresponding to a face key point of the to-be-processed face image, where a distance between the target extension point and the face key point in the to-be-processed face image is greater than a distance between a reference extension point in the reference image and the face key point in the reference image, the reference extension point is an extension point corresponding to the face key point in the reference image, and the reference face mesh is obtained by triangulating the reference image based on the face key point in the reference image and the reference extension point;
and the mesh division subunit is configured to perform triangular mesh division on the face image to be processed according to the face key points of the face image to be processed and the target outward expansion points to obtain a target face mesh.
Optionally, the image processing module includes:
the offset vector determining unit is configured to determine an offset vector corresponding to the target processing item of the face image to be processed according to the target offset image;
and the offset processing unit is configured to perform offset processing of the offset vector on the pixel points in the face image to be processed to obtain the target image.
Optionally, the offset vector determining unit is configured to perform:
determining a transverse offset vector in the offset vectors according to the r channel color value in the target offset map;
and determining a longitudinal offset vector in the offset vectors according to the g channel color value in the target offset map.
Optionally, the apparatus further comprises:
a reference image liquefaction deformation module configured to perform liquefaction deformation processing on the reference image on the target processing item to obtain an offset vector of a pixel point in the reference image;
a reference offset map generation module configured to perform generation of the target reference offset map according to offset vectors of pixel points in the reference image.
Optionally, the reference offset map generating module is configured to perform:
determining the color value of the pixel point in the target reference offset image according to the offset vector of the pixel point in the reference image and the following formula:
[formula image: Figure BDA0003569178820000071]
wherein rgbColor is the color value of a pixel point in the target reference offset map, xyOffset is the offset vector of a pixel point in the reference image, [Figure BDA0003569178820000072] is a binary vector representing the preset r channel color adjustment value and the preset g channel color adjustment value of pixel points in the target reference offset map, m is the preset b channel color value of pixel points in the target reference offset map, and the a channel color value of pixel points in the target reference offset map is likewise a preset value.
Optionally, the apparatus further comprises:
the offset map splicing module is configured to splice the target reference offset map and reference offset maps corresponding to other processing items except the target processing item into a texture image;
the reference offset map acquisition module is configured to perform:
and acquiring a target reference offset map corresponding to the target processing item from the texture image.
According to a third aspect of an embodiment of the present disclosure, there is provided an electronic apparatus including:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the image processing method according to the first aspect.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium, wherein instructions, when executed by a processor of an electronic device, enable the electronic device to perform the image processing method according to the first aspect.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the image processing method of the first aspect.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
the embodiment of the disclosure obtains the target reference offset map corresponding to the target processing item after obtaining the face image to be processed and the target processing item, transforms the target reference offset map according to the attitude angle of the face image to be processed to obtain the target offset map corresponding to the face image to be processed, processes the face image to be processed according to the target offset map to obtain the target image, because the target reference offset map is the offset vector which is generated in advance and used for representing each pixel point after the target processing item is performed on the reference image, when processing the image, only the target reference offset map needs to be mapped into the face image to be processed, and the face image to be processed is processed by using the obtained target offset map, without controlling more parameters, thereby reducing the calculation amount and reducing the performance occupation on the basis of reserving the liquefaction deformation processing effect, the treatment efficiency is improved, the treatment mode is simple, and the cost is reduced.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
FIG. 1 is a flow diagram illustrating an image processing method according to an exemplary embodiment.
FIG. 2 is a flow diagram of determining a target offset map corresponding to a face image to be processed in an exemplary embodiment;
fig. 3 is a block diagram illustrating an image processing apparatus according to an exemplary embodiment.
FIG. 4 is a block diagram illustrating an electronic device in accordance with an example embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The liquefaction deformation mode takes the pixel as its processing unit and deforms only the points inside the liquefaction region (a circle centered on the control point). The closer a point is to the circle center, the greater its deformation; the closer it is to the circle edge, the smaller its deformation, falling to zero at the edge. This principle keeps the relative positions of the original vertices fixed, and the gradual change of displacement magnitude keeps the deformation smooth, but the radial gradient outward from the center makes the effect insufficiently flexible to control. In practice, various beautification effects are realized by superimposing basic deformation units, including pushing, rotating, enlarging and shrinking: each pixel point is traversed, the deformation vector of the point under each deformation unit is calculated, and these vectors are superimposed to obtain the deformation vector of the point for final rendering. By adjusting parameters such as the liquefaction radius and liquefaction strength at each control point, the liquefaction deformation algorithm can realize a variety of complex deformation effects; however, because many parameters must be controlled, the adjustment process is cumbersome, and the deformation in regions where the liquefaction circles of multiple control points overlap is difficult to control.
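The radial falloff of a single liquefy unit described above can be sketched as follows. This is a minimal "push" unit with a linear falloff; the real algorithm may use a smoother falloff curve, and the names are assumptions:

```python
import numpy as np

def liquefy_push(points, center, direction, radius, strength):
    """One basic 'push' liquefy unit: points inside the circle of `radius`
    around `center` move along `direction`, with full strength at the center
    falling smoothly to zero at the circle's edge."""
    points = np.asarray(points, dtype=float)
    d = np.linalg.norm(points - np.asarray(center, dtype=float), axis=-1)
    falloff = np.clip(1.0 - d / radius, 0.0, 1.0)  # 1 at center, 0 at edge
    return points + falloff[..., None] * strength * np.asarray(direction, dtype=float)
```

Superimposing several such units (push, rotate, enlarge, shrink) per pixel yields the composite deformation vector described above.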
The triangulation deformation mode divides the face region to be deformed into a number of triangles, calculates the deformed vertex coordinates according to the deformation rule defined by each processing item, applies an affine transformation to each triangle, and maps the triangular region formed by the original texture coordinates onto the moved triangular region for rendering, thereby achieving the deformation. Specifically, for each triangular mesh, an affine transformation matrix is calculated from the three vertex coordinates before and after deformation, and the transformation is then applied to all pixels in the mesh to obtain the final deformation result of that mesh. The method is simple in principle, fast, and easily achieves fine local control; because adjacent triangles share an edge and the texture on the shared edge is fixed, it guarantees a smooth picture as long as the relative positions (up, down, left, right) of the triangle vertices do not change. However, the method depends on the fine design of the triangular mesh, developing each function incurs a large development cost, and the effect cannot be controlled directly by a designer. Moreover, improperly designed triangular meshes and broken-line problems caused by relative position deviations of points on different faces require extra control logic during development, further reducing development efficiency. To keep vertex relative positions fixed and avoid crease-like problems, the method relies heavily on the design of the motion vectors.
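The per-triangle affine step described above can be sketched as follows (hypothetical helper names; the affine matrix is solved from the three vertex pairs before and after deformation, exactly as the paragraph describes):

```python
import numpy as np

def triangle_affine(src_tri, dst_tri):
    """Affine matrix mapping the three vertices of src_tri onto dst_tri,
    used to warp all pixels of one triangular mesh cell."""
    src = np.hstack([np.asarray(src_tri, dtype=float), np.ones((3, 1))])  # homogeneous 3x3
    dst = np.asarray(dst_tri, dtype=float)                                # 3x2
    # Solve src @ M = dst for the 3x2 affine matrix M.
    return np.linalg.solve(src, dst)

def apply_affine(matrix, points):
    """Apply the affine matrix to an array of 2-D points."""
    pts = np.hstack([np.asarray(points, dtype=float), np.ones((len(points), 1))])
    return pts @ matrix
```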
Table 1 is the advantages and disadvantages of the liquefaction deformation and the triangulation deformation:
TABLE 1 advantages and disadvantages of the liquefaction deformation and triangulation deformation
[Table 1 image: Figure BDA0003569178820000091 and Figure BDA0003569178820000101]
Analyzing the problems and deficiencies of the related technical solutions, the image processing method of the embodiment of the disclosure can simultaneously satisfy the following characteristics: 1) the deformation influence range is controllable; 2) the deformation is smooth and natural; 3) the development cost is low; 4) the performance is controllable. In addition, large dragging of the background caused by the movement and deformation of the face severely affects user experience; the embodiment of the disclosure also solves this problem. The specific scheme of the embodiment of the disclosure is as follows:
fig. 1 is a flowchart illustrating an image processing method according to an exemplary embodiment, which may be used in an electronic device such as a mobile phone or a tablet computer, as shown in fig. 1, and includes the following steps.
In step S11, a to-be-processed face image and a target processing item are obtained, where the target processing item is used to represent a processing manner of the to-be-processed face image.
The target processing item may be, for example, a processing method such as face thinning and a small face.
The face image to be processed can be obtained by photographing, or an album can be displayed and the face image selected by the user can be obtained as the face image to be processed. Meanwhile, one processing item selected by the user may be acquired from the displayed plurality of processing items as a target processing item.
In step S12, a target reference offset map corresponding to the target processing item is obtained, and a color value in the target reference offset map is used to represent an offset vector of each pixel after the target processing item is processed on the reference image.
Wherein the reference image is an image including a model face, and the model face is generally a front face in the reference image.
And respectively processing various processing items on the reference image in advance to obtain the offset vector of each pixel point of the reference image under various processing items, so that reference offset maps corresponding to various processing items are generated based on the offset vector of each pixel point under various processing items, and the reference offset maps corresponding to various processing items are stored.
And after the face image to be processed and the target processing item are obtained, obtaining a target reference offset image corresponding to the target processing item from the storage position of the reference offset image.
In step S13, the target reference offset map is transformed according to the pose angle of the to-be-processed face image, so as to obtain a target offset map corresponding to the to-be-processed face image.
The target reference offset map records the offset vectors produced by applying the target processing item to the reference image, which contains a frontal model face. To process the face image to be processed, the target reference offset map therefore needs to be mapped onto the face image to be processed: it is transformed based on the pose angle of the face image to be processed into the target offset map required for applying the target processing item to that image. The color values in the target offset map represent the offset vector of each pixel point required when the target processing item is applied to the face image to be processed.
Fig. 2 is a flowchart for determining a target offset map corresponding to a face image to be processed in an exemplary embodiment, and as shown in fig. 2, transforming the target reference offset map according to a pose angle of the face image to be processed to obtain the target offset map corresponding to the face image to be processed, including steps S131 to S133:
in step S131, triangular mesh division is performed on the face image to be processed according to the face key points of the face image to be processed, so as to obtain a target face mesh.
The face image to be processed is processed with a key point detection model to obtain the face key points and the pose angle of the face image to be processed. The face key points include key points of the outer contour of the face.
Triangular mesh division is performed on the face image to be processed based on its face key points, so that the vertices of the triangular meshes include the face key points, in particular the key points of the face outer contour, yielding the target face mesh corresponding to the face image to be processed. Each point in the target face mesh corresponds to a pixel point of the face image to be processed, and the mesh has the same size as the face image to be processed.
In step S132, the target reference offset map is mapped onto the target face mesh according to the reference face mesh corresponding to the reference image and the target face mesh, so as to obtain an initial offset map corresponding to the to-be-processed face image, where the reference face mesh is obtained by triangulating the reference image based on the face key points of the reference image.
The reference image is triangulated in advance based on its face key points, so that the vertices of the triangular meshes include the face key points, in particular the key points of the face outer contour, yielding the reference face mesh corresponding to the reference image. The reference image is meshed in the same way as the face image to be processed, so the triangular meshes in the reference face mesh correspond one-to-one to the triangular meshes in the target face mesh. The finer the triangulation, the more accurately the deformation matches each region; a coarse triangulation can misplace the deformation. For example, if the whole reference image were divided into only two triangles, the deformation would likely not act on the face at all. The reference face mesh covers the entire non-gray area of the target reference offset map, i.e., the mesh covers the deformed region of the target reference offset map. Each point in the reference face mesh corresponds to a pixel point of the reference image and to a pixel point of the target reference offset map, and the mesh has the same size as the reference image and the target reference offset map.
To map the target reference offset map according to the reference face mesh and the target face mesh, the reference face mesh can be overlaid on the target reference offset map, and the target reference offset map transformed triangle by triangle based on the triangular meshes in the reference face mesh and in the target face mesh: the color values inside each triangular mesh of the reference face mesh are mapped into the corresponding triangular mesh of the target face mesh, yielding the initial offset map corresponding to the face image to be processed.
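The triangle-by-triangle mapping described above can be sketched with barycentric coordinates: a point of the target face mesh is expressed in the barycentric coordinates of its target triangle, and the same coordinates locate the position in the corresponding reference triangle whose color should be sampled. The following is a minimal pure-Python sketch; the patent does not prescribe an implementation (in practice a GPU performs this via texture mapping), and the function names are illustrative.

```python
def barycentric(p, a, b, c):
    """Barycentric coordinates (u, v, w) of point p in triangle (a, b, c)."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    d = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    u = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / d
    v = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / d
    return u, v, 1.0 - u - v

def map_point(p_target, tri_target, tri_ref):
    """Find the reference-mesh position corresponding to p_target, so its
    color in the target reference offset map can be sampled there."""
    u, v, w = barycentric(p_target, *tri_target)
    (ax, ay), (bx, by), (cx, cy) = tri_ref
    return (u * ax + v * bx + w * cx, u * ay + v * by + w * cy)
```

For each pixel of the initial offset map, the corresponding reference position is computed this way and the color of the target reference offset map is sampled at that position.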
In an exemplary embodiment, the triangulating the face image to be processed according to the face key points of the face image to be processed to obtain a target face mesh, includes: determining a target extension point corresponding to a face key point of the face image to be processed, wherein the distance between the target extension point and the face key point in the face image to be processed is greater than the distance between a reference extension point in the reference image and the face key point in the reference image, the reference extension point is an extension point corresponding to the face key point in the reference image, and the reference face grid is obtained by performing triangular grid division on the reference image based on the face key point in the reference image and the reference extension point; and carrying out triangular mesh division on the face image to be processed according to the face key points of the face image to be processed and the target outward expansion points to obtain a target face mesh.
The condition that the distance between the target extension points and the face key points in the face image to be processed is greater than the distance between the reference extension points and the face key points in the reference image assumes that the face area in the face image to be processed and the face area in the reference image have the same size; if they differ in size, the face area in the face image to be processed can first be scaled to the same size for processing.
Target extension points corresponding to the face key points are determined based on the required distance between the target extension points and the face key points in the face image to be processed. The face image to be processed is then triangulated based on its face key points and the target extension points, i.e., the face key points and target extension points form the vertices of a set of triangular meshes, yielding the target face mesh corresponding to the face image to be processed.
After the target face mesh is determined in the above manner, its coverage area is larger than that of the reference face mesh. The coverage area of the target face mesh comprises the face area in the face image to be processed plus a partial area outside it; the coverage area of the reference face mesh comprises the face area in the reference image plus a partial area outside it. Because the area outside the face in the coverage of the target face mesh is larger than the corresponding area in the coverage of the reference face mesh, the deformation outside the face area is distributed over a larger region: while the deformation inside the face remains essentially unchanged, the colors of the target offset map outside the face are reduced and diffused after the mesh mapping, so the deformation outside the face region is reduced and the dragging of the background is mitigated. In other words, the meshing separates the inner face region from the outer region: the corresponding positions of the reference face mesh and the target face mesh are kept the same for the inner region, while for the outer region the mesh area of the target reference offset map containing the offset vectors is mapped onto a larger area of the target face mesh, apportioning the deformation.
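One simple way to realize the idea of farther extension points is to push each key point outward from the face centroid by a radial factor, using a larger factor for the target image than for the reference image. The sketch below assumes this radial scheme, which the text does not spell out; the function name and factors are illustrative.

```python
def expand_points(keypoints, factor):
    """Push key points outward from their centroid by `factor`.
    A larger factor spreads off-face deformation over a wider ring."""
    n = len(keypoints)
    cx = sum(x for x, _ in keypoints) / n
    cy = sum(y for _, y in keypoints) / n
    return [(cx + factor * (x - cx), cy + factor * (y - cy))
            for x, y in keypoints]

# hypothetical example: reference mesh uses a small ring,
# the target mesh a larger one, so the outer region is stretched
contour = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0)]
ref_ring = expand_points(contour, 1.5)
target_ring = expand_points(contour, 2.5)
```

Mapping the same offset-map region onto the larger outer ring of the target mesh dilutes the per-pixel offsets there, which is exactly the apportioning effect described above.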
It should be noted that, besides making the coverage area of the target face mesh larger than that of the reference face mesh to solve the problem of background distortion, other purposes may also be achieved through other mapping relationships, for example protecting certain areas by controlling the position of the deformed meshes in the target offset map.
In step S133, the color value in the initial offset map is adjusted according to the pose angle of the face image to be processed, so as to obtain a target offset map corresponding to the face image to be processed.
The color values in the initial offset map are the offset vectors that would apply if the face in the face image to be processed were a frontal face; since the face is not necessarily frontal, the color values in the initial offset map must be adjusted based on the pose angle of the face image to be processed. The result is the target offset map corresponding to the face image to be processed, whose color values match the pose angle of the face image to be processed, which facilitates the subsequent processing.
In summary, triangular mesh division is performed on the face image to be processed, the target reference offset map is mapped onto the target face mesh based on the reference face mesh and the target face mesh to obtain the initial offset map, and the color values in the initial offset map are adjusted based on the pose angle of the face image to be processed to obtain the target offset map.
In an exemplary embodiment, adjusting the color value in the initial offset map according to the pose angle of the facial image to be processed to obtain a target offset map corresponding to the facial image to be processed includes: determining a transformation matrix according to the roll angle in the attitude angle and the proportion of the image size of the face image to be processed relative to the image size of the reference image, wherein the transformation matrix is used for representing the position transformation relation of the face image to be processed relative to the reference image; determining the difference between the integer 1 and the yaw angle in the attitude angle as an adjustment coefficient of the color value in the initial offset map; determining a triangular grid area average value of a target mapping point in the target face grid as a first area average value, determining a reference mapping point corresponding to the target mapping point in the reference face grid, determining a triangular grid area average value of the reference mapping point as a second area average value, and determining a ratio of the first area average value to the second area average value as a vertex scale of the target mapping point in the target face grid; and adjusting the color value in the initial offset map according to the transformation matrix, the adjustment coefficient and the vertex scale to obtain a target offset map.
The transformation matrix represents the influence of the roll angle and the image size on the deformation; it is a coefficient adjusted based on the proportion of the face in the face image to be processed. For example, when the face proportion is larger, the deformation range is larger and the coefficients in the transformation matrix are greater than 1; when the face proportion is smaller, the deformation range is smaller and the coefficients in the transformation matrix are less than 1.
A deformation range is determined based on the roll angle in the attitude angle and the ratio of the image size of the face image to be processed to the image size of the reference image, giving the position transformation relation of the face image to be processed relative to the reference image, i.e., the transformation matrix. The yaw angle in the attitude angle represents how far the face is turned left or right, i.e., whether it is a side face; the colors of the initial offset map are adjusted according to the yaw angle so that the colors on the side turned away from the camera are reduced. The adjustment coefficient of the color values in the initial offset map can thus be determined as: degree = 1 - yaw, where degree is the adjustment coefficient and yaw is the yaw angle. The average area of the triangular meshes incident to a target mapping point in the target face mesh gives the first area average value; the reference mapping point corresponding to the target mapping point is found in the reference face mesh, and the average area of its incident triangular meshes gives the second area average value; the ratio of the first area average value to the second area average value is the vertex scale of the target mapping point in the target face mesh. The color values in the initial offset map are then adjusted using the transformation matrix, the adjustment coefficient, and the vertex scale to obtain the target offset map. In this way, the color values in the initial offset map are adjusted based on the pose angle of the face image to be processed, a more accurate target offset map is obtained, and the image processing effect is improved.
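The three adjustment quantities can be sketched as follows. The text only names the inputs of the transformation matrix (roll angle and size ratio), so the rotation-times-scale form below is an assumption; degree = 1 - yaw and the vertex scale follow the definitions above directly.

```python
import math

def transform_matrix(roll, size_ratio):
    """2x2 matrix: rotation by the roll angle scaled by the ratio of the
    processed image size to the reference image size (assumed form)."""
    c, s = math.cos(roll), math.sin(roll)
    return [[size_ratio * c, -size_ratio * s],
            [size_ratio * s,  size_ratio * c]]

def adjustment_coefficient(yaw):
    """degree = 1 - yaw: shrink offsets as the face turns sideways."""
    return 1.0 - yaw

def vertex_scale(target_tri_areas, ref_tri_areas):
    """Ratio of the mean incident-triangle area at a target mapping point
    to the mean incident-triangle area at its reference mapping point."""
    first = sum(target_tri_areas) / len(target_tri_areas)
    second = sum(ref_tri_areas) / len(ref_tri_areas)
    return first / second
```

A vertex whose incident triangles are twice as large in the target mesh as in the reference mesh gets a vertex scale of 2, for example.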
In an exemplary embodiment, adjusting the color value in the initial offset map according to the transformation matrix, the adjustment coefficient, and the vertex scale to obtain a target offset map includes:
according to the transformation matrix, the adjustment coefficient and the vertex scale, adjusting the r channel color value and the g channel color value in the initial offset map according to the following formula to obtain a target offset map:
(r', g') = rotMat * degree * scale * ((r, g) - 0.5) + 0.5

wherein (r', g') is a binary vector representing the adjusted r channel color value and the adjusted g channel color value, (r, g) is a binary vector representing the r channel color value and the g channel color value in the initial offset map, rotMat represents the transformation matrix, degree represents the adjustment coefficient, and scale represents the vertex scale.
The r channel color value and the g channel color value in the initial offset map are adjusted by this formula; the results are the r channel and g channel color values of the target offset map. Adjusting the color values in this manner yields a more accurate target offset map.
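A sketch of the adjustment, assuming it takes the form (r', g') = rotMat * degree * scale * ((r, g) - 0.5) + 0.5, i.e., the gray-centred offset encoded in the colors is rotated and scaled and then re-centred (the exact formula is rendered as an image in the original publication, so the centring is an assumption consistent with the encode/decode formulas elsewhere in the text):

```python
def adjust_offset_color(rg, rot_mat, degree, scale):
    """Adjust the (r, g) color pair of one pixel of the initial offset map.
    Colors encode offsets around 0.5, so we centre, transform, re-centre."""
    x, y = rg[0] - 0.5, rg[1] - 0.5
    k = degree * scale
    xx = rot_mat[0][0] * x + rot_mat[0][1] * y
    yy = rot_mat[1][0] * x + rot_mat[1][1] * y
    return (k * xx + 0.5, k * yy + 0.5)
```

Note that a gray pixel (0.5, 0.5), i.e., zero offset, stays gray under any transformation, as it should.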
In step S14, the face image to be processed is processed according to the target offset map, so as to obtain a target image.
The offset vector of each pixel point is obtained from the target offset map, and each corresponding pixel point in the face image to be processed is shifted by its offset vector, giving the target image corresponding to the face image to be processed and the target processing item, i.e., the target image after the target processing item has been applied to the face image to be processed.
In an exemplary embodiment, processing the face image to be processed according to the target offset map to obtain a target image includes: determining an offset vector corresponding to the target processing item of the face image to be processed according to the target offset image; and carrying out offset processing on the offset vector on the pixel points in the face image to be processed to obtain the target image.
The color values in the target offset map represent the offset vectors for applying the target processing item to the face image to be processed; parsing the color values in the target offset map therefore yields these offset vectors. Each pixel point in the face image to be processed is shifted by its corresponding offset vector, giving the target image after the target processing item has been applied. Determining the offset vectors from the target offset map and shifting each pixel point accordingly improves the processing effect of the obtained target image.
In an exemplary embodiment, determining, according to the target offset map, an offset vector corresponding to the target processing item for the face image to be processed includes: determining a transverse offset vector in the offset vectors according to the r channel color value in the target offset map; and determining a longitudinal offset vector in the offset vectors according to the g channel color value in the target offset map.
The offset vector corresponding to the target processing item is determined from the color values in the target offset map by inverting the operation used to compute the target reference offset map from the reference image: the r channel color values in the target offset map are converted into the transverse components of the offset vectors, and the g channel color values into the longitudinal components. That is, the transverse offset vector and the longitudinal offset vector are determined according to the following formula:
xyOffset=rgbaColor.xy-0.5
the method comprises the steps that the xyz offset is an offset vector of a current pixel point in a plane coordinate system, the xyz offset is a binary vector and comprises a transverse offset vector and a longitudinal offset vector, the range is a normalized range [ -0.5,0.5], and rgba color.
The r channel color value in the target offset map represents the transverse offset vector and the g channel color value represents the longitudinal offset vector, so accurate offset vectors are obtained and the image processing effect is improved.
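Decoding and applying the offsets can be sketched as follows. The patent states only that pixels are shifted by their offset vectors; the backward-sampling convention and the scaling of normalized offsets to pixels below are assumptions, and nearest-neighbour sampling is used for brevity.

```python
def decode_offset(rgba):
    """xyOffset = rgbaColor.xy - 0.5: normalized offsets in [-0.5, 0.5]."""
    return (rgba[0] - 0.5, rgba[1] - 0.5)

def warp(image, offset_map, width, height):
    """Shift pixels by the decoded offsets (backward sampling convention).
    `image` and `offset_map` are row-major lists of pixels."""
    out = []
    for y in range(height):
        for x in range(width):
            dx, dy = decode_offset(offset_map[y * width + x])
            # scale normalized offsets to pixel units, clamp to the image
            sx = min(max(int(round(x - dx * width)), 0), width - 1)
            sy = min(max(int(round(y - dy * height)), 0), height - 1)
            out.append(image[sy * width + sx])
    return out
```

With an all-gray offset map (every color 0.5), the decoded offsets are zero and the image passes through unchanged, matching the gray-means-no-deformation convention.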
In the image processing method provided by this exemplary embodiment, after the face image to be processed and the target processing item are obtained, the target reference offset map corresponding to the target processing item is acquired, the target reference offset map is transformed according to the pose angle of the face image to be processed to obtain the target offset map corresponding to the face image to be processed, and the face image to be processed is processed according to the target offset map to obtain the target image. Because the target reference offset map, which represents the offset vector of each pixel point of the reference image after the target processing item is applied, is generated in advance, processing an image only requires mapping the target reference offset map onto the face image to be processed and applying the resulting target offset map; there is no need to control many parameters. The amount of computation is reduced while the processing effect of liquefaction deformation is retained, the performance occupation is reduced, the processing efficiency is improved, the processing manner is simple, and the cost is reduced.
On the basis of the above technical solution, before the obtaining of the target reference offset map corresponding to the target processing item, the method further includes: carrying out liquefaction deformation processing on the target processing item on the reference image to obtain an offset vector of a pixel point in the reference image; and generating the target reference offset map according to the offset vectors of the pixel points in the reference image.
Each pixel point in the reference image is processed by liquefaction deformation using the liquefaction formula corresponding to the target processing item, giving the offset vector of each pixel point in the reference image; the offset vector of each pixel point is then expressed as a color value, giving the target reference offset map.
When the liquefaction deformation of the target processing item is applied to the reference image, the pixel point is the processing unit, and only points inside the liquefaction region (a circle centered at the control point) are deformed: positions near the center of the circle deform the most, the deformation decreases toward the edge of the circle, and at the edge it falls to zero. This principle preserves the relative positions of the original vertices, and the gradual change in displacement keeps the deformation smooth. However, a radial gradient outward from a single center point is not flexible enough for precise control of the effect, so in practice several basic deformation units, including pushing, rotating, enlarging, shrinking, and the like, can be superposed to realize various beautification effects: each pixel point is traversed, the deformation vector of each point under each deformation unit is computed, and the deformation vectors are superposed to obtain the total deformation vector of each point for final rendering. By adjusting parameters such as the liquefaction radius and liquefaction strength at each control point, the liquefaction deformation algorithm can realize a variety of complex deformation effects.
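A minimal sketch of one "push" deformation unit and the superposition of units follows. The quadratic falloff is an assumption; the patent only requires full strength at the center fading smoothly to zero at the circle edge, and all names are illustrative.

```python
import math

def push_offset(px, py, cx, cy, tx, ty, radius, strength):
    """Offset of pixel (px, py) under a push unit centred at (cx, cy)
    pushing toward (tx, ty): strongest at the centre, zero at the edge."""
    d = math.hypot(px - cx, py - cy)
    if d >= radius:
        return (0.0, 0.0)                 # outside the liquefaction region
    w = (1.0 - d / radius) ** 2           # smooth radial falloff (assumed)
    return (strength * w * (tx - cx), strength * w * (ty - cy))

def total_offset(px, py, units):
    """Superpose the offsets of all deformation units at one pixel."""
    ox = sum(push_offset(px, py, *u)[0] for u in units)
    oy = sum(push_offset(px, py, *u)[1] for u in units)
    return (ox, oy)
```

Running `total_offset` for every pixel of the reference image and encoding the results as colors produces exactly the kind of reference offset map the text describes.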
The embodiment of the disclosure determines the target reference offset map of the reference image corresponding to the target processing item in advance, does not occupy the performance of real-time processing, reduces the calculated amount in the image processing process, and ensures the smoothness and naturalness of deformation by using the liquefaction deformation processing.
On the basis of the above technical solution, generating the target reference offset map according to the offset vectors of the pixel points in the reference image includes:
according to the offset vector of the pixel point in the reference image, determining the color value of the pixel point in the target reference offset image according to the following formula:
rgbaColor = (xyOffset + 0.5, m, n)

wherein rgbaColor is the color value of a pixel point in the target reference offset map; xyOffset is the offset vector of the pixel point in the reference image, a binary vector; xyOffset + 0.5 is a binary vector giving the r channel color value and the g channel color value of the pixel point in the target reference offset map; m is the preset b channel color value of the pixel point in the target reference offset map; and n is the preset a channel color value of the pixel point in the target reference offset map. For example, the preset b channel color value may be 0.5, which ensures that the non-deformed area is gray, or another value, and the preset a channel color value may be 1.0.
For any pixel point in the reference image, rgbaColor is the color value of the pixel point in the target reference offset map, and xyOffset is the offset vector of the pixel point in the plane coordinate system, normalized to the range [-0.5, 0.5].
Determining the color values of the pixel points in the offset map by this formula ensures that deformed regions have color while non-deformed regions are gray (gray represents zero offset), which facilitates the deformation processing.
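The encoding of offsets into colors is small enough to show directly; this is a sketch using 0.5 and 1.0 as the preset b and a channel values, which the text gives as examples.

```python
def encode_offset(xy_offset, m=0.5, n=1.0):
    """rgbaColor = (xyOffset + 0.5, m, n): a zero offset encodes as gray
    (0.5); b carries the preset value m and a the preset value n."""
    return (xy_offset[0] + 0.5, xy_offset[1] + 0.5, m, n)
```

Note this is the exact inverse of the decoding formula xyOffset = rgbaColor.xy - 0.5 used when the map is applied.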
On the basis of the technical scheme, the method further comprises the following steps: splicing the target reference offset map and reference offset maps corresponding to other processing items except the target processing item into a texture image;
the acquiring a target reference offset map corresponding to the target processing item includes: and acquiring a target reference offset map corresponding to the target processing item from the texture image.
In addition to applying the liquefaction deformation of the target processing item to the reference image in advance, the liquefaction deformation of the other processing items is also applied to the reference image in advance; that is, the reference offset maps of the reference image under the various processing items are all generated beforehand. Splicing the target reference offset map and the reference offset maps corresponding to the other processing items into one texture image avoids the limitation that the number of textures on low-end devices cannot exceed 8. The reference offset maps are simply tiled onto one texture image rather than superposed, because the strength selected by the user differs for each target processing item; when the face image to be processed is processed, different parameters can be adjusted for each target processing item after different target processing items are superposed.
According to the splicing position of the target reference offset map in the texture image, the region at that position is cropped from the texture image, giving the target reference offset map corresponding to the target processing item. One texture image thus contains the reference offset maps of multiple processing items, and the target reference offset map is cropped out based on the target processing item, avoiding the limitation that the number of textures on low-end devices cannot exceed 8.
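Cropping one reference offset map out of the spliced texture can be sketched as below. A horizontal-strip layout with equally sized tiles is assumed; the patent only describes the splicing idea, not the layout.

```python
def crop_reference_map(atlas, atlas_width, tile_width, tile_height, item_index):
    """Cut the reference offset map of one processing item out of a
    texture atlas laid out as a horizontal strip of equal tiles.
    `atlas` is a row-major list of pixels."""
    x0 = item_index * tile_width
    tile = []
    for y in range(tile_height):
        row_start = y * atlas_width + x0
        tile.extend(atlas[row_start : row_start + tile_width])
    return tile
```

On a GPU the same effect is usually achieved without copying, by offsetting the texture coordinates into the atlas, but the indexing is the same.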
An important step in liquefaction deformation is computing the deformation offset vectors: each pixel point and each deformation configuration unit in the configuration list must be traversed, and the deformation offset vector of each pixel point computed by the liquefaction formula. In the embodiments of the present disclosure, this computation is moved into the production of the reference offset map, which is shipped directly in the project as a material; the reference offset map is mapped to each pixel of the face image to be processed through texture mapping, so no liquefaction formula needs to be evaluated during image processing, and the amount of computation is greatly reduced. At the same time, the texture coordinate mapping relation and the mapping area can be flexibly controlled in texture mapping, which controls the influence range.
The embodiment of the disclosure combines the advantages of the liquefaction deformation and triangulation deformation technologies, has the flexibility of the liquefaction deformation mode, and also has the controllability and high efficiency of the triangulation deformation mode. On this basis, in order to solve the problem that the background in the real-time image processing is greatly dragged along with the movement of the human face, the embodiment of the disclosure distributes partial deformation to the background by controlling the deformation area, and greatly weakens the dragging problem of the background in the real-time image processing.
Fig. 3 is a block diagram illustrating an image processing apparatus according to an exemplary embodiment. Referring to fig. 3, the apparatus includes an image acquisition module 31, a reference offset map acquisition module 32, an offset map transformation module 33, and an image processing module 34.
The image acquisition module 31 is configured to perform acquisition of a face image to be processed and a target processing item, where the target processing item is used to represent a processing mode of the face image to be processed;
the reference offset map obtaining module 32 is configured to perform obtaining of a target reference offset map corresponding to the target processing item, where a color value in the target reference offset map is used to represent an offset vector of each pixel point after the target processing item is processed on a reference image;
the offset map transformation module 33 is configured to perform transformation on the target reference offset map according to the pose angle of the face image to be processed, so as to obtain a target offset map corresponding to the face image to be processed;
the image processing module 34 is configured to perform processing on the face image to be processed according to the target offset map, so as to obtain a target image.
Optionally, the offset map transforming module includes:
the mesh division unit is configured to perform triangular mesh division on the face image to be processed according to the face key points of the face image to be processed to obtain a target face mesh;
the offset map mapping unit is configured to map the target reference offset map onto the target face mesh according to a reference face mesh corresponding to the reference image and the target face mesh to obtain an initial offset map corresponding to the to-be-processed face image, wherein the reference face mesh is obtained by triangulating the reference image based on a face key point of the reference image;
and the offset map adjusting unit is configured to adjust the color value in the initial offset map according to the attitude angle of the face image to be processed to obtain a target offset map corresponding to the face image to be processed.
Optionally, the offset map adjusting unit includes:
a transformation matrix determination subunit configured to determine a transformation matrix according to the roll angle in the attitude angle and the ratio of the image size of the face image to be processed to the image size of the reference image, wherein the transformation matrix is used for representing the position transformation relation of the face image to be processed relative to the reference image;
an adjustment coefficient determination subunit configured to perform determination of the difference between the integer 1 and the yaw angle in the attitude angle as the adjustment coefficient of the color values in the initial offset map;
a vertex scale determination subunit configured to perform determining a triangular mesh area average value of a target mapping point in the target face mesh as a first area average value, and determining a reference mapping point corresponding to the target mapping point in the reference face mesh, determining a triangular mesh area average value of the reference mapping point as a second area average value, and determining a ratio of the first area average value to the second area average value as a vertex scale of the target mapping point in the target face mesh;
and the offset map adjusting subunit is configured to perform adjustment on the color values in the initial offset map according to the transformation matrix, the adjustment coefficient and the vertex scale to obtain a target offset map.
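The three quantities produced by these subunits — a transformation matrix from the roll angle and the image size ratio, an adjustment coefficient of 1 minus the yaw angle, and a vertex scale from mean triangle areas — can be sketched as follows (angles in radians and the exact normalization are assumptions; the patent does not fix them):

```python
import numpy as np

def transformation_matrix(roll: float, size_ratio: float) -> np.ndarray:
    """2x2 rotation by the roll angle, scaled by target/reference size ratio."""
    c, s = np.cos(roll), np.sin(roll)
    return size_ratio * np.array([[c, -s], [s, c]])

def adjustment_coefficient(yaw: float) -> float:
    """The difference between 1 and the (normalized) yaw angle."""
    return 1.0 - yaw

def vertex_scale(target_areas, reference_areas) -> float:
    """Ratio of mean triangle area around a vertex: target mesh vs. reference."""
    return float(np.mean(target_areas) / np.mean(reference_areas))

print(adjustment_coefficient(0.25))          # 0.75
print(vertex_scale([2.0, 4.0], [1.0, 2.0]))  # 2.0
```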
Optionally, the offset map adjusting subunit is configured to perform:
according to the transformation matrix, the adjustment coefficient and the vertex scale, adjusting the r channel color value and the g channel color value in the initial offset map according to the following formula to obtain a target offset map:
(r', g')ᵀ = degree × scale × rotMat × (r, g)ᵀ

wherein (r', g')ᵀ is a two-dimensional vector representing the adjusted r channel color value and the adjusted g channel color value, (r, g)ᵀ is a two-dimensional vector representing the r channel color value in the initial offset map and the g channel color value in the initial offset map, rotMat represents the transformation matrix, degree represents the adjustment coefficient, and scale represents the vertex scale.
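Read as plain code, this adjustment multiplies the (r, g) pair of each pixel by the transformation matrix and scales it by the adjustment coefficient and the vertex scale. The sketch below assumes exactly that composition; the function name and the multiplication order are illustrative, not fixed by the patent:

```python
import numpy as np

def adjust_offset_colors(rg, rot_mat, degree, scale):
    """Apply transformation matrix, adjustment coefficient and vertex scale
    to the (r, g) channel pair of one pixel in the initial offset map."""
    return degree * scale * (rot_mat @ np.asarray(rg, float))

# With an identity matrix, degree=0.5 and scale=2.0 cancel out exactly.
print(adjust_offset_colors([0.2, 0.4], np.eye(2), 0.5, 2.0))  # -> [0.2 0.4]
```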
Optionally, the mesh dividing unit includes:
a target extension point determining subunit, configured to perform determining a target extension point corresponding to a face key point of the face image to be processed, wherein a distance between the target extension point and the face key point in the face image to be processed is greater than a distance between a reference extension point in the reference image and the face key point in the reference image, the reference extension point is an extension point corresponding to the face key point in the reference image, and the reference face mesh is obtained by performing triangular mesh division on the reference image based on the face key point in the reference image and the reference extension point;
and the mesh division subunit is configured to perform triangular mesh division on the face image to be processed according to the face key points of the face image to be processed and the target outward expansion points to obtain a target face mesh.
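One way to realize such extension points is to push each key point outward from the face centroid by a factor, with the target image using a larger factor than the reference image. This is an illustrative assumption — the patent does not specify the expansion rule:

```python
import numpy as np

def expand_points(keypoints: np.ndarray, factor: float) -> np.ndarray:
    """Push key points outward from their centroid by `factor` (>1 expands),
    producing extension points for triangular mesh division."""
    center = keypoints.mean(axis=0)
    return center + factor * (keypoints - center)

pts = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0]])
print(expand_points(pts, 1.5)[0])  # -> [-0.5 -0.5]
```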
Optionally, the image processing module includes:
the offset vector determining unit is configured to determine an offset vector corresponding to the target processing item of the face image to be processed according to the target offset image;
and the offset processing unit is configured to perform offset processing of the offset vector on the pixel points in the face image to be processed to obtain the target image.
Optionally, the offset vector determining unit is configured to perform:
determining a transverse offset vector in the offset vectors according to the r channel color value in the target offset map;
and determining a longitudinal offset vector in the offset vectors according to the g channel color value in the target offset map.
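Decoding this convention — r channel as lateral offset, g channel as longitudinal offset — might look like the sketch below; the 0.5 re-centering bias is an assumption, since color channels are typically stored as non-negative values:

```python
import numpy as np

def decode_offsets(offset_map: np.ndarray, bias: float = 0.5) -> np.ndarray:
    """r channel -> horizontal offset, g channel -> vertical offset.
    `bias` re-centers colors stored in [0, 1] around zero (an assumed
    convention; the patent only says r is lateral and g is longitudinal)."""
    dx = offset_map[..., 0] - bias
    dy = offset_map[..., 1] - bias
    return np.stack([dx, dy], axis=-1)

om = np.full((2, 2, 2), 0.5)       # neutral map: no displacement anywhere
om[0, 0] = (0.7, 0.4)              # one pixel shifted right and up
print(decode_offsets(om)[0, 0])    # -> [ 0.2 -0.1]
```

Each pixel of the face image is then resampled at its position plus this vector to obtain the target image.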
Optionally, the apparatus further comprises:
a reference image liquefaction deformation module configured to perform liquefaction deformation processing on the reference image on the target processing item to obtain an offset vector of a pixel point in the reference image;
a reference offset map generation module configured to perform generation of the target reference offset map according to offset vectors of pixel points in the reference image.
Optionally, the reference offset map generating module is configured to perform:
according to the offset vector of the pixel point in the reference image, determining the color value of the pixel point in the target reference offset image according to the following formula:
rgbaColor = (xyOffset + c, m, m)

wherein rgbaColor is the color value of the pixel point in the target reference offset map, xyOffset is the offset vector of the pixel point in the reference image, c is a two-dimensional vector representing the preset color adjustment value of the r channel and the preset color adjustment value of the g channel of the pixel point in the target reference offset map, m is the preset color value of the b channel of the pixel point in the target reference offset map, and m is the preset color value of the a channel of the pixel point in the target reference offset map.
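The encoding direction — packing liquefaction offsets into colors — can be sketched symmetrically: the r and g channels carry the offset plus preset adjustment values, and the b and a channels take a preset constant. All concrete values below are illustrative assumptions:

```python
def encode_offset(xy_offset, adjust=(0.5, 0.5), m=1.0):
    """Pack one pixel's offset vector into an RGBA color: r/g carry the
    offset plus preset adjustment values (keeping them non-negative),
    b and a take the preset constant m. Values are illustrative."""
    r = xy_offset[0] + adjust[0]
    g = xy_offset[1] + adjust[1]
    return (r, g, m, m)

print(encode_offset((0.2, -0.1)))  # -> (0.7, 0.4, 1.0, 1.0)
```

Decoding with the matching bias recovers the original offset vector, which is what the offset vector determining unit above relies on.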
Optionally, the apparatus further comprises:
an offset map stitching module configured to perform stitching of the target reference offset map and reference offset maps corresponding to processing items other than the target processing item into a texture image;
the reference offset map acquisition module is configured to perform:
and acquiring a target reference offset map corresponding to the target processing item from the texture image.
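Stitching several per-item reference offset maps into one texture image, and cropping a target item back out, can be as simple as horizontal tiling (an assumed layout; the patent does not specify how the maps are arranged in the texture):

```python
import numpy as np

def stitch_offset_maps(maps):
    """Tile same-sized reference offset maps side by side into one texture;
    item i occupies columns [i*w, (i+1)*w)."""
    return np.concatenate(maps, axis=1)

def crop_item(texture, index, width):
    """Recover the reference offset map of one processing item."""
    return texture[:, index * width : (index + 1) * width]

a = np.zeros((2, 3, 2))            # offset map of one processing item
b = np.ones((2, 3, 2))             # offset map of another item
tex = stitch_offset_maps([a, b])
print(tex.shape)                   # (2, 6, 2)
print(crop_item(tex, 1, 3)[0, 0, 0])  # 1.0
```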
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
FIG. 4 is a block diagram illustrating an electronic device in accordance with an example embodiment. For example, the electronic device 400 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 4, electronic device 400 may include one or more of the following components: processing components 402, memory 404, power components 406, multimedia components 408, audio components 410, input/output (I/O) interfaces 412, sensor components 414, and communication components 416.
The processing component 402 generally controls overall operation of the electronic device 400, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 402 may include one or more processors 420 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 402 can include one or more modules that facilitate interaction between the processing component 402 and other components. For example, the processing component 402 can include a multimedia module to facilitate interaction between the multimedia component 408 and the processing component 402.
The memory 404 is configured to store various types of data to support operations at the electronic device 400. Examples of such data include instructions for any application or method operating on the electronic device 400, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 404 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 406 provides power to the various components of the electronic device 400. Power components 406 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for electronic device 400.
The multimedia component 408 comprises a screen providing an output interface between the electronic device 400 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 408 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 400 is in an operation mode, such as a photographing mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 410 is configured to output and/or input audio signals. For example, the audio component 410 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 400 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 404 or transmitted via the communication component 416. In some embodiments, audio component 410 also includes a speaker for outputting audio signals.
The I/O interface 412 provides an interface between the processing component 402 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 414 includes one or more sensors for providing status assessments of various aspects of the electronic device 400. For example, the sensor component 414 can detect an open/closed state of the electronic device 400 and the relative positioning of components, such as the display and keypad of the electronic device 400. The sensor component 414 can also detect a change in position of the electronic device 400 or of a component of the electronic device 400, the presence or absence of user contact with the electronic device 400, the orientation or acceleration/deceleration of the electronic device 400, and a change in the temperature of the electronic device 400. The sensor component 414 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 414 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 414 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 416 is configured to facilitate wired or wireless communication between the electronic device 400 and other devices. The electronic device 400 may access a wireless network based on a communication standard, such as WiFi, a carrier network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In an exemplary embodiment, the communication component 416 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 416 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 400 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described image processing methods.
In an exemplary embodiment, a computer-readable storage medium comprising instructions, such as the memory 404 comprising instructions, executable by the processor 420 of the electronic device 400 to perform the image processing method described above is also provided. Alternatively, the computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer program product is also provided, comprising a computer program which, when executed by a processor, implements the image processing method described above.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (10)

1. An image processing method, comprising:
acquiring a face image to be processed and a target processing item, wherein the target processing item is used for representing a processing mode of the face image to be processed;
acquiring a target reference offset map corresponding to the target processing item, wherein a color value in the target reference offset map is used for representing an offset vector of each pixel point after the target processing item is processed on a reference image;
transforming the target reference offset image according to the attitude angle of the face image to be processed to obtain a target offset image corresponding to the face image to be processed;
and processing the face image to be processed according to the target offset image to obtain a target image.
2. The method according to claim 1, wherein transforming the target reference offset map according to the pose angle of the face image to be processed to obtain a target offset map corresponding to the face image to be processed, comprises:
performing triangular mesh division on the face image to be processed according to the face key points of the face image to be processed to obtain a target face mesh;
mapping the target reference offset map onto the target face mesh according to the reference face mesh corresponding to the reference image and the target face mesh to obtain an initial offset map corresponding to the face image to be processed, wherein the reference face mesh is obtained by triangulating the reference image based on face key points of the reference image;
and adjusting the color value in the initial offset map according to the attitude angle of the face image to be processed to obtain a target offset map corresponding to the face image to be processed.
3. The method of claim 2, wherein adjusting the color value in the initial offset map according to the pose angle of the face image to be processed to obtain a target offset map corresponding to the face image to be processed comprises:
determining a transformation matrix according to the roll angle in the attitude angle and the proportion of the image size of the face image to be processed relative to the image size of the reference image, wherein the transformation matrix is used for representing the position transformation relation of the face image to be processed relative to the reference image;
determining the difference between the integer 1 and the yaw angle in the attitude angle as an adjustment coefficient of the color value in the initial offset map;
determining a triangular grid area average value of a target mapping point in the target face grid as a first area average value, determining a reference mapping point corresponding to the target mapping point in the reference face grid, determining a triangular grid area average value of the reference mapping point as a second area average value, and determining a ratio of the first area average value to the second area average value as a vertex scale of the target mapping point in the target face grid;
and adjusting the color value in the initial offset map according to the transformation matrix, the adjustment coefficient and the vertex scale to obtain a target offset map.
4. The method of claim 3, wherein adjusting the color values in the initial offset map according to the transformation matrix, the adjustment coefficients and the vertex scale to obtain a target offset map comprises:
according to the transformation matrix, the adjustment coefficient and the vertex scale, adjusting the r channel color value and the g channel color value in the initial offset map according to the following formula to obtain a target offset map:
(r', g')ᵀ = degree × scale × rotMat × (r, g)ᵀ

wherein (r', g')ᵀ is a two-dimensional vector representing the adjusted r channel color value and the adjusted g channel color value, (r, g)ᵀ is a two-dimensional vector representing the r channel color value in the initial offset map and the g channel color value in the initial offset map, rotMat represents the transformation matrix, degree represents the adjustment coefficient, and scale represents the vertex scale.
5. The method according to any one of claims 2 to 4, wherein the triangular mesh division is performed on the face image to be processed according to the face key points of the face image to be processed to obtain a target face mesh, and the method comprises the following steps:
determining a target extension point corresponding to a face key point of the face image to be processed, wherein the distance between the target extension point and the face key point in the face image to be processed is greater than the distance between a reference extension point in the reference image and the face key point in the reference image, the reference extension point is an extension point corresponding to the face key point in the reference image, and the reference face grid is obtained by performing triangular grid division on the reference image based on the face key point in the reference image and the reference extension point;
and carrying out triangular mesh division on the face image to be processed according to the face key points of the face image to be processed and the target outward expansion points to obtain a target face mesh.
6. The method according to any one of claims 1 to 4, wherein processing the face image to be processed according to the target offset map to obtain a target image comprises:
determining an offset vector corresponding to the target processing item of the face image to be processed according to the target offset image;
and carrying out offset processing on the offset vector on the pixel points in the face image to be processed to obtain the target image.
7. An image processing apparatus characterized by comprising:
the image acquisition module is configured to acquire a face image to be processed and a target processing item, wherein the target processing item is used for representing a processing mode of the face image to be processed;
a reference offset map obtaining module configured to perform obtaining of a target reference offset map corresponding to the target processing item, where a color value in the target reference offset map is used to represent an offset vector of each pixel after the target processing item is processed on a reference image;
the offset map transformation module is configured to transform the target reference offset map according to the attitude angle of the face image to be processed to obtain a target offset map corresponding to the face image to be processed;
and the image processing module is configured to process the face image to be processed according to the target offset image to obtain a target image.
8. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the image processing method of any one of claims 1 to 6.
9. A computer-readable storage medium in which instructions, when executed by a processor of an electronic device, enable the electronic device to perform the image processing method of any of claims 1 to 6.
10. A computer program product comprising a computer program, characterized in that the computer program realizes the image processing method of any one of claims 1 to 6 when executed by a processor.
CN202210313378.1A 2022-03-28 2022-03-28 Image processing method, image processing device, electronic equipment and storage medium Pending CN114693514A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210313378.1A CN114693514A (en) 2022-03-28 2022-03-28 Image processing method, image processing device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210313378.1A CN114693514A (en) 2022-03-28 2022-03-28 Image processing method, image processing device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114693514A true CN114693514A (en) 2022-07-01

Family

ID=82141738

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210313378.1A Pending CN114693514A (en) 2022-03-28 2022-03-28 Image processing method, image processing device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114693514A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117726499A (en) * 2023-05-29 2024-03-19 荣耀终端有限公司 Image deformation processing method, electronic device, and computer-readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120163732A1 (en) * 2009-09-18 2012-06-28 Panasonic Corporation Image processing apparatus and image processing method
CN112200716A (en) * 2020-10-15 2021-01-08 广州博冠信息科技有限公司 Image processing method, device, electronic equipment and nonvolatile storage medium
CN112241933A (en) * 2020-07-15 2021-01-19 北京沃东天骏信息技术有限公司 Face image processing method and device, storage medium and electronic equipment
CN113887507A (en) * 2021-10-25 2022-01-04 厦门美图之家科技有限公司 Face image processing method and device, electronic equipment and storage medium
CN114082184A (en) * 2021-01-19 2022-02-25 北京沃东天骏信息技术有限公司 Method and device for creating plane grid, computer storage medium and electronic equipment

Similar Documents

Publication Publication Date Title
CN110675310B (en) Video processing method and device, electronic equipment and storage medium
US11410284B2 (en) Face beautification method and apparatus, computer device, and storage medium
CN107680033B (en) Picture processing method and device
US11308692B2 (en) Method and device for processing image, and storage medium
US20220351346A1 (en) Method for processing images and electronic device
CN107818543B (en) Image processing method and device
CN107977934B (en) Image processing method and device
EP3125158A2 (en) Method and device for displaying images
CN107958439B (en) Image processing method and device
CN110400266B (en) Image correction method and device and storage medium
US11030733B2 (en) Method, electronic device and storage medium for processing image
CN114170302A (en) Camera external parameter calibration method and device, electronic equipment and storage medium
CN105512605A (en) Face image processing method and device
CN107798654B (en) Image buffing method and device and storage medium
CN112034984B (en) Virtual model processing method and device, electronic equipment and storage medium
WO2023029379A1 (en) Image special effect generation method and apparatus
CN107341777B (en) Picture processing method and device
CN112614228B (en) Method, device, electronic equipment and storage medium for simplifying three-dimensional grid
CN112508773B (en) Image processing method and device, electronic equipment and storage medium
CN112767288A (en) Image processing method and device, electronic equipment and storage medium
US20210118148A1 (en) Method and electronic device for changing faces of facial image
CN110378847A (en) Face image processing process, device, medium and electronic equipment
CN109784327A (en) Bounding box determines method, apparatus, electronic equipment and storage medium
CN114693514A (en) Image processing method, image processing device, electronic equipment and storage medium
CN113902869A (en) Three-dimensional head grid generation method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination