
WO2018137454A1 - Image shape adjustment method and adjustment device - Google Patents

Image shape adjustment method and adjustment device (一种图像形状调节方法和调节装置)

Info

Publication number
WO2018137454A1
WO2018137454A1 (PCT/CN2017/118807)
Authority
WO
WIPO (PCT)
Prior art keywords
mesh
module
grid
local
image
Prior art date
Application number
PCT/CN2017/118807
Other languages
English (en)
French (fr)
Inventor
施扬
雷宇
付一洲
金宇林
伏英娜
Original Assignee
迈吉客科技(北京)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 迈吉客科技(北京)有限公司
Publication of WO2018137454A1

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 — 2D [Two Dimensional] image generation

Definitions

  • The present invention relates to the field of computer graphics processing, and in particular to an image shape adjustment method and an adjustment apparatus.
  • In the prior art, changes to three-dimensional graphics are processed by a computer mainly through a shader (a renderer used for rendering colors, textures and patterns).
  • A shader processes three-dimensional graphic objects efficiently, but deforming a planar image directly in a shader introduces uncertainty.
  • For example, when an anthropomorphic image is deformed the rendered vertices carry a degree of ambiguity, so processing done directly in a shader is difficult to debug; even setting breakpoints or printing logs does not resolve the problem well.
  • Moreover, shaders implemented on different types of GPU suffer from significant compatibility problems: simple shaders can produce consistent results, but complex shaders behave somewhat unpredictably across GPUs, which leads to rendering errors.
  • In addition, GPU processing emphasizes real-time performance and lacks effective power-management means; on a mobile terminal the resulting power consumption is high and shortens battery life.
  • An embodiment of the present invention therefore provides an image shape adjustment method and an adjustment apparatus to solve the technical problem that image shape processing on a mobile device is complex and energy-intensive.
  • Deformation data of the local object is formed by changing the attributes of the local mesh.
  • The continuous mesh attributes are then used as rendering parameters, and the GPU completes the processing of the complete object.
  • Determining the complete object contour and object attributes in the image includes: determining the contour of the face, determining the contours of the facial features, and determining the key feature positions of the facial features.
  • Building a continuous mesh from regular shapes based on the complete object contour and object attributes includes: setting key feature points at the contour positions of the face, setting key feature points at the key feature positions of the facial features, and forming a continuous mesh between the key feature points.
  • Establishing the mapping between local objects and local meshes includes: associating each mesh cell with a unique corresponding local face region, and associating each mesh cell with the texture pattern of the corresponding local face region.
  • Forming deformation data of the local object by changing the attributes of the local mesh includes: recording the actively pushed or pulled mesh cells together with their displaced positions and vertex positions, recording the adjacent passive mesh cells together with their displaced positions and vertex positions, and updating the shape and position of the cells in the continuous mesh to form the rendering parameters.
  • Using the continuous mesh attributes as rendering parameters and processing the complete object with the GPU includes: passing the initial continuous mesh, the texture patterns of the local face regions corresponding to the mesh, and the rendering parameters to the GPU through a graphics processing interface.
  • Object determination module: used to determine the complete object contour and object attributes in the image.
  • Mesh building module: used to build a continuous mesh from regular shapes based on the complete object contour and object attributes.
  • Mapping module: used to establish the mapping between local objects and local meshes.
  • Mesh adjustment module: used to form deformation data of the local object by changing the attributes of the local mesh.
  • Output module: used to take the continuous mesh attributes as rendering parameters and process the complete object with the GPU.
  • The object determination module includes:
  • Main contour determination sub-module: used to determine the contour of the face.
  • Secondary contour determination sub-module: used to determine the contours of the facial features.
  • Specific contour determination sub-module: used to determine the key feature positions of the facial features.
  • The mesh building module includes:
  • Contour feature positioning sub-module: used to set key feature points at the contour positions of the face.
  • Key feature positioning sub-module: used to set key feature points at the key feature positions of the facial features.
  • Mesh connection sub-module: used to form a continuous mesh between the key feature points.
  • The mapping module includes:
  • Image area mapping sub-module: used to associate each mesh cell with a unique corresponding local face region.
  • Image attribute mapping sub-module: used to associate each mesh cell with the texture pattern of the corresponding local face region.
  • The mesh adjustment module includes:
  • Active adjustment sub-module: used to record the actively pushed or pulled mesh cells, together with their displaced positions and vertex positions.
  • Passive adjustment sub-module: used to record the passive mesh cells adjacent to the pushed or pulled cells, together with their displaced positions and vertex positions.
  • Update sub-module: used to update the shape and position of the cells in the continuous mesh to form the rendering parameters.
  • The output module includes:
  • Interface adaptation sub-module: used to pass the initial continuous mesh, the texture patterns of the local face regions corresponding to the mesh, and the rendering parameters to the GPU through a graphics processing interface.
  • An image shape adjustment apparatus includes a processor and a memory,
  • the memory being configured to store program code for performing the image shape adjustment method described above,
  • and the processor being configured to execute that program code.
  • The image shape adjustment method and adjustment apparatus of the present invention can make full use of the feature attributes of a specific object to determine the exact extent of that object in an image, and at the same time determine the key feature positions within the object.
  • These key feature positions form the basic coordinate system of the object and are used to establish an accurate positional association between key features, i.e. a continuous mesh.
  • By actively adjusting the positions of some local key features, the changed position coordinates of those features (the local mesh attributes) are obtained, and, through the influence of the actively adjusted key features, the changed position coordinates of the other, passively adjusted local key features (the other mesh attributes) are obtained as well.
  • In this way a general-purpose recognition algorithm running on a general-purpose processor can perform the rendering-parameter transformation for a specific image, and the resulting parameters can be applied directly to basic stretching, warping or rendering passes that render consistently, guaranteeing the rendering result on different types of GPU. This fully satisfies portability across the GPUs of different mobile terminals and avoids the high energy consumption of advanced GPU data processing.
  • FIG. 1 is a processing flowchart of an embodiment of the image shape adjustment method of the present invention.
  • FIG. 2 is a processing flowchart of another embodiment of the image shape adjustment method of the present invention.
  • FIG. 3 is a processing flowchart of an embodiment of the image shape adjustment apparatus of the present invention.
  • FIG. 4 is a processing flowchart of another embodiment of the image shape adjustment apparatus of the present invention.
  • An image shape adjustment method according to an embodiment of the present invention includes:
  • Step 01: determine the complete object contour and object attributes in the image;
  • Step 02: build a continuous mesh from regular shapes based on the complete object contour and object attributes;
  • Step 03: establish a mapping between local objects and local meshes;
  • Step 04: form deformation data of the local object by changing the attributes of the local mesh;
  • Step 05: take the continuous mesh attributes as rendering parameters and process the complete object with the GPU.
  • The image shape adjustment method can make full use of the feature attributes of a specific object to determine the exact extent of that object in an image, and at the same time determine the key feature positions within the object.
  • These key feature positions form the basic coordinate system of the object and are used to establish an accurate positional association between key features, i.e. a continuous mesh.
  • By actively adjusting the positions of some local key features, the changed position coordinates of those features (the local mesh attributes) are obtained, and, through the influence of the actively adjusted key features, the changed position coordinates of the other, passively adjusted local key features (the other mesh attributes) are obtained as well.
  • In this way a general-purpose recognition algorithm running on a general-purpose processor can perform the rendering-parameter transformation for a specific image, and the resulting parameters can be applied directly to basic stretching, warping or rendering passes that render consistently, guaranteeing the rendering result on different types of GPU. This fully satisfies portability across the GPUs of different mobile terminals and avoids the high energy consumption of advanced GPU data processing.
  • In the image shape adjustment method of an embodiment of the present invention, the complete object contour in step 01 above is the contour of a face or of the front of a head.
  • The object attributes are attributes of the facial features, including but not limited to specific key feature positions such as the highest point of the cheekbone, the highest point of the brow bone, the inner and outer corners of the eyes, the corners of the mouth, the inner and outer tips of the eyebrows, the nose tip, the nostrils and the pupils.
  • In step 02 above, the largest object that covers the other objects within the complete object contour can be used as the reference boundary of the continuous mesh.
  • When the largest object is the face, the contour edge of the face can be used as the reference boundary of the continuous mesh, and this reference boundary can be extended outward to form a follower extension mesh that adapts to changes between the face and the surrounding environment.
  • In step 04 above, the attributes of the local mesh include a shape attribute and the vertex coordinates of that shape.
  • In step 05 above, the rendering parameters include the changed mesh vertex coordinates.
  • The GPU uses the vertex shader supported by both OpenGL and Direct3D to change the mesh vertex coordinates in real time, and uses the texture mapping supported by both OpenGL and Direct3D to change the result rendered to the screen.
  • Step 01 includes:
  • Step 11: determine the contour of the face. The color image can be converted to a grayscale image and then clustered to obtain the contour of the face.
  • Step 12: determine the contours of the facial features. The color image can be converted to a grayscale image and then clustered to obtain the contours of the facial features.
  • Step 13: determine the key feature positions of the facial features.
  • The key feature positions can be identified from data on the physiological characteristics of the facial features.
  • Step 02 includes:
  • Step 21: set key feature points at the contour positions of the face. The key feature points of the contour include, but are not limited to, the endpoints of the face's axis of symmetry, the maximum-offset points and endpoints of continuous virtual contour lines, discrete symmetric points, and the like.
  • Step 22: set key feature points at the key feature positions of the facial features. The key feature points of the facial features include, but are not limited to, the physiological feature points mentioned above.
  • Step 23: form a continuous mesh between the key feature points. The cells of the continuous mesh are triangles, or mainly triangles; the continuous mesh is a two-dimensional mesh, and the vertices of each triangle are adjacent key feature points.
  • In an embodiment of the present invention, the perimeter of the continuous mesh (i.e. the face contour) extends outward to form a follower extension mesh; the extension mesh serves as the boundary of the continuous mesh and is likewise built from triangles.
  • In an embodiment of the present invention, the continuous mesh is extended outward, centered on the image pixels covered by the continuous mesh, to form a larger passive follower mesh. The correlation between the edge cells of the passive follower mesh and the central mesh weakens as the distance increases.
  • Step 03 includes:
  • Step 31: associate each mesh cell with a unique corresponding local face region.
  • The correspondence includes both the correspondence between each cell and the area of the local face region it covers, and the correspondence with the vertex positions of that region.
  • In an embodiment, the one-to-one correspondence between each mesh cell and a local face region may be replaced by a one-to-one correspondence between a face shape or facial-feature object and a corresponding group of mesh cells.
  • Step 32: associate each mesh cell with the texture pattern of the corresponding local face region.
  • This covers the image attributes within the area of each cell and of the local face region it covers, including but not limited to texture, color and brightness.
  • Step 04 includes:
  • Step 41: record the actively pushed or pulled mesh cells, together with their displaced positions and vertex positions.
  • The active mesh comprises cells whose shape is changed under control and/or which are displaced to a new position.
  • Step 42: record the passive mesh cells adjacent to the pushed or pulled cells, together with their displaced positions and vertex positions.
  • The passive mesh comprises cells that propagate the changes of the active mesh, changing shape and/or being displaced to a new position.
  • Step 43: update the shape and position of the cells in the continuous mesh to form the rendering parameters.
  • Placing the resulting modified mesh vertex coordinates into the vertex shader controls the shape of the mesh.
  • Step 05 includes:
  • Step 51: pass the initial continuous mesh, the texture patterns of the local face regions corresponding to the mesh, and the rendering parameters to the GPU through a graphics processing interface.
  • Graphics processing interfaces include, but are not limited to, standard interfaces or interface subsets such as DirectX and OpenGL ES.
  • The GPU completes the deformation and rendering of the original face image according to the rendering parameters, thereby implementing the face-shape adjustment.
  • The image shape adjustment method of an embodiment of the present invention makes full use of the CPU to obtain the main deformation parameters of the finely meshed face object, which greatly shortens the GPU's high-load processing time and reduces energy consumption.
  • Having obtained the parameters, the GPU only needs to complete a simple rendering pass, which preserves compatibility with image-processing software and hardware and keeps the output consistent.
  • The processing performed by the GPU includes: using the texture-mapping mechanism to change the positions of the face pixels on the screen, first drawing the original full-screen texture for the follower mesh, and then drawing the deformed full-screen texture for the extended face mesh.
  • In this way the change of face shape is confined to the extent of the extension mesh and does not deform distant parts of the screen that should not be deformed along with it.
  • FIG. 3 is a processing flowchart of an embodiment of the image shape adjustment apparatus of the present invention.
  • An image shape adjustment apparatus according to an embodiment of the present invention includes:
  • Object determination module 100: used to determine the complete object contour and object attributes in the image.
  • Mesh building module 200: used to build a continuous mesh from regular shapes based on the complete object contour and object attributes.
  • Mapping module 300: used to establish the mapping between local objects and local meshes.
  • Mesh adjustment module 400: used to form deformation data of the local object by changing the attributes of the local mesh.
  • Output module 500: used to take the continuous mesh attributes as rendering parameters and process the complete object with the GPU.
  • FIG. 4 is a processing flowchart of another embodiment of the image shape adjustment apparatus of the present invention.
  • The object determination module 100 includes:
  • Main contour determination sub-module 110: used to determine the contour of the face. The color image can be converted to a grayscale image and then clustered to obtain the contour of the face.
  • Secondary contour determination sub-module 120: used to determine the contours of the facial features. The color image can be converted to a grayscale image and then clustered to obtain the contours of the facial features.
  • Specific contour determination sub-module 130: used to determine the key feature positions of the facial features.
  • The key feature positions can be identified from data on the physiological characteristics of the facial features.
  • The mesh building module 200 includes:
  • Contour feature positioning sub-module 210: used to set key feature points at the contour positions of the face.
  • The key feature points of the contour include, but are not limited to, the endpoints of the face's axis of symmetry, the maximum-offset points and endpoints of continuous virtual contour lines, discrete symmetric points, and the like.
  • Key feature positioning sub-module 220: used to set key feature points at the key feature positions of the facial features. The key feature points of the facial features include, but are not limited to, the physiological feature points mentioned above.
  • Mesh connection sub-module 230: used to form a continuous mesh between the key feature points.
  • The cells of the continuous mesh are triangles, or mainly triangles; the continuous mesh is a two-dimensional mesh, and the vertices of each triangle are adjacent key feature points.
  • The perimeter of the continuous mesh, i.e. the face contour, extends outward to form a follower extension mesh; the extension mesh serves as the boundary of the continuous mesh and is likewise built from triangles.
  • The mapping module 300 includes:
  • Image area mapping sub-module 310: used to associate each mesh cell with a unique corresponding local face region.
  • The correspondence includes both the correspondence between each cell and the area of the local face region it covers, and the correspondence with the vertex positions of that region.
  • Image attribute mapping sub-module 320: used to associate each mesh cell with the texture pattern of the corresponding local face region.
  • This covers the image attributes within the area of each cell and of the local face region it covers, including but not limited to texture, color and brightness.
  • The mesh adjustment module 400 includes:
  • Active adjustment sub-module 410: used to record the actively pushed or pulled mesh cells, together with their displaced positions and vertex positions.
  • The active mesh comprises cells whose shape is changed under control and/or which are displaced to a new position.
  • Passive adjustment sub-module 420: used to record the passive mesh cells adjacent to the pushed or pulled cells, together with their displaced positions and vertex positions.
  • The passive mesh comprises cells that propagate the changes of the active mesh, changing shape and/or being displaced to a new position.
  • Update sub-module 430: used to update the shape and position of the cells in the continuous mesh to form the rendering parameters.
  • The output module 500 includes:
  • Interface adaptation sub-module 510: used to pass the initial continuous mesh, the texture patterns of the local face regions corresponding to the mesh, and the rendering parameters to the GPU through a graphics processing interface.
  • Graphics processing interfaces include, but are not limited to, standard interfaces or interface subsets such as DirectX and OpenGL ES.
  • An image shape adjustment apparatus includes a memory and a processor, wherein:
  • the memory is used to store program code implementing the processing steps of the image shape adjustment method of the above embodiments;
  • the processor is used to run the program code implementing the processing steps of the image shape adjustment method of the above embodiments.
  • The disclosed systems, devices and methods may be implemented in other ways.
  • The device embodiments described above are merely illustrative.
  • The division into units is only a division by logical function; in actual implementation there may be other ways of division. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • The mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices or units, and may be electrical, mechanical or in other forms.
  • The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • The functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit.
  • The image shape adjustment method and adjustment apparatus of the present invention establish an accurate positional association between key features, forming associated mesh attributes. A general-purpose recognition algorithm running on a general-purpose processor can perform the rendering-parameter transformation for a specific image, and the resulting parameters can be applied directly to basic stretching, warping or rendering passes that render consistently, guaranteeing the rendering result on different types of GPU. This fully satisfies portability across the GPUs of different mobile terminals and avoids the high energy consumption of advanced GPU data processing.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)

Abstract

An image shape adjustment method and adjustment device, intended to address the technical problem that image shape processing on a mobile device is complex and energy-intensive. The method includes: determining the complete object contour and object attributes in an image (01); building a continuous mesh from regular shapes based on the complete object contour and object attributes (02); establishing a mapping between local objects and local meshes (03); forming deformation data for a local object by changing the attributes of its local mesh (04); and taking the continuous mesh attributes as rendering parameters and using the GPU to complete the processing of the complete object (05).

Description

Image shape adjustment method and adjustment device
This application claims priority to the application filed by the applicant on January 25, 2017 under application number CN201710060917.4 and entitled "一种图像形状调节方法和调节装置" (Image shape adjustment method and adjustment device), the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to the field of computer graphics processing, and in particular to an image shape adjustment method and an adjustment device.
Background of the Invention
In the prior art, changes to three-dimensional graphics are processed by a computer mainly through a shader (a renderer used for rendering colors, textures and patterns). A shader processes three-dimensional graphic objects efficiently, but deforming a planar image directly in a shader introduces uncertainty. For example, when an anthropomorphic image is deformed the rendered vertices carry a degree of ambiguity, so processing done directly in a shader is difficult to debug; even setting breakpoints or printing logs does not resolve the problem well. Moreover, shaders implemented on different types of GPU (graphics processing unit) suffer from significant compatibility problems: simple shaders can produce consistent results, but complex shaders behave somewhat unpredictably across GPUs, which leads to rendering errors.
In addition, GPU processing emphasizes real-time performance and lacks effective power-management means; for a mobile terminal the resulting power consumption is high and shortens battery life.
Summary of the Invention
In view of this, embodiments of the present invention provide an image shape adjustment method and an adjustment device to solve the technical problem that image shape processing on a mobile device is complex and energy-intensive.
The image shape adjustment method of an embodiment of the present invention includes:
determining the complete object contour and object attributes in an image;
building a continuous mesh from regular shapes based on the complete object contour and object attributes;
establishing a mapping between local objects and local meshes;
forming deformation data for a local object by changing the attributes of its local mesh;
taking the continuous mesh attributes as rendering parameters and using the GPU to complete the processing of the complete object.
Determining the complete object contour and object attributes in the image includes:
determining the contour of the face;
determining the contours of the facial features;
determining the key feature positions of the facial features.
Building a continuous mesh from regular shapes based on the complete object contour and object attributes includes:
setting key feature points at the contour positions of the face;
setting key feature points at the key feature positions of the facial features;
forming a continuous mesh between the key feature points.
Establishing a mapping between local objects and local meshes includes:
associating each mesh cell with a unique corresponding local face region;
associating each mesh cell with the texture pattern of the corresponding local face region.
Forming deformation data for the local object by changing the attributes of the local mesh includes:
recording the actively pushed or pulled mesh cells, together with their displaced positions and vertex positions;
recording the passive mesh cells adjacent to the pushed or pulled cells, together with their displaced positions and vertex positions;
updating the shape and position of the cells in the continuous mesh to form the rendering parameters.
Taking the continuous mesh attributes as rendering parameters and using the GPU to complete the processing of the complete object includes:
passing the initial continuous mesh, the texture patterns of the local face regions corresponding to the mesh, and the rendering parameters to the GPU through a graphics processing interface.
The image shape adjustment device of an embodiment of the present invention includes:
an object determination module, used to determine the complete object contour and object attributes in an image;
a mesh building module, used to build a continuous mesh from regular shapes based on the complete object contour and object attributes;
a mapping module, used to establish a mapping between local objects and local meshes;
a mesh adjustment module, used to form deformation data for a local object by changing the attributes of its local mesh;
an output module, used to take the continuous mesh attributes as rendering parameters and use the GPU to complete the processing of the complete object.
The object determination module includes:
a main contour determination sub-module, used to determine the contour of the face;
a secondary contour determination sub-module, used to determine the contours of the facial features;
a specific contour determination sub-module, used to determine the key feature positions of the facial features.
The mesh building module includes:
a contour feature positioning sub-module, used to set key feature points at the contour positions of the face;
a key feature positioning sub-module, used to set key feature points at the key feature positions of the facial features;
a mesh connection sub-module, used to form a continuous mesh between the key feature points.
The mapping module includes:
an image area mapping sub-module, used to associate each mesh cell with a unique corresponding local face region;
an image attribute mapping sub-module, used to associate each mesh cell with the texture pattern of the corresponding local face region.
The mesh adjustment module includes:
an active adjustment sub-module, used to record the actively pushed or pulled mesh cells, together with their displaced positions and vertex positions;
a passive adjustment sub-module, used to record the passive mesh cells adjacent to the pushed or pulled cells, together with their displaced positions and vertex positions;
an update sub-module, used to update the shape and position of the cells in the continuous mesh to form the rendering parameters.
The output module includes:
an interface adaptation sub-module, used to pass the initial continuous mesh, the texture patterns of the local face regions corresponding to the mesh, and the rendering parameters to the GPU through a graphics processing interface.
The image shape adjustment device of an embodiment of the present invention includes a processor and a memory,
the memory being used to store program code for performing the image shape adjustment method described above,
and the processor being used to run the program code.
The image shape adjustment method and adjustment device of the present invention can make full use of the feature attributes of a specific object to determine the exact extent of that object in an image, and at the same time determine the key feature positions within the object. These key feature positions form the basic coordinate system of the object and are used to establish an accurate positional association between the key features, i.e. a continuous mesh. By actively adjusting the positions of some local key features, the changed position coordinates of those features (the local mesh attributes) are obtained, and, through the influence of the actively adjusted key features, the changed position coordinates of the other, passively adjusted local key features (the other mesh attributes) are obtained as well. In this way a general-purpose recognition algorithm running on a general-purpose processor can perform the rendering-parameter transformation for a specific image, and the resulting parameters can be applied directly to basic stretching, warping or rendering passes that render consistently, guaranteeing the rendering result on different types of GPU. This fully satisfies portability across the GPUs of different mobile terminals and avoids the high energy consumption of advanced GPU data processing.
Brief Description of the Drawings
FIG. 1 is a processing flowchart of an embodiment of the image shape adjustment method of the present invention.
FIG. 2 is a processing flowchart of another embodiment of the image shape adjustment method of the present invention.
FIG. 3 is a processing flowchart of an embodiment of the image shape adjustment device of the present invention.
FIG. 4 is a processing flowchart of another embodiment of the image shape adjustment device of the present invention.
Modes for Carrying Out the Invention
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
The step numbers in the drawings are used only as reference signs for the steps and do not indicate an order of execution.
FIG. 1 is a processing flowchart of an embodiment of the image shape adjustment method of the present invention. As shown in FIG. 1, the image shape adjustment method of an embodiment of the present invention includes:
Step 01: determine the complete object contour and object attributes in the image;
Step 02: build a continuous mesh from regular shapes based on the complete object contour and object attributes;
Step 03: establish a mapping between local objects and local meshes;
Step 04: form deformation data for the local object by changing the attributes of the local mesh;
Step 05: take the continuous mesh attributes as rendering parameters and use the GPU to complete the processing of the complete object.
The image shape adjustment method of an embodiment of the present invention can make full use of the feature attributes of a specific object to determine the exact extent of that object in an image, and at the same time determine the key feature positions within the object. These key feature positions form the basic coordinate system of the object and are used to establish an accurate positional association between the key features, i.e. a continuous mesh. By actively adjusting the positions of some local key features, the changed position coordinates of those features (the local mesh attributes) are obtained, and, through the influence of the actively adjusted key features, the changed position coordinates of the other, passively adjusted local key features (the other mesh attributes) are obtained as well. In this way a general-purpose recognition algorithm running on a general-purpose processor can perform the rendering-parameter transformation for a specific image, and the resulting parameters can be applied directly to basic stretching, warping or rendering passes that render consistently, guaranteeing the rendering result on different types of GPU. This fully satisfies portability across the GPUs of different mobile terminals and avoids the high energy consumption of advanced GPU data processing.
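To keep the later step descriptions concrete, here is a minimal Python skeleton of how steps 01-05 might hand data to one another on the CPU before the GPU pass; every function name in it (detect_object, build_mesh and so on) is a hypothetical stub rather than anything named in the patent, and possible implementations of the stubs are sketched after the corresponding steps below.

```python
import numpy as np

# Stub declarations so the file runs as-is; candidate implementations of each
# stub are sketched after the corresponding steps further below.
def detect_object(image):                        raise NotImplementedError
def build_mesh(contour, key_points):             raise NotImplementedError
def map_mesh_to_texture(vertices, image_shape):  raise NotImplementedError
def deform_mesh(vertices, triangles, idx, drag): raise NotImplementedError
def render_on_gpu(image, verts, uvs, triangles): raise NotImplementedError

def adjust_image_shape(image: np.ndarray, drag_index: int,
                       drag_vector: np.ndarray) -> np.ndarray:
    # Step 01: complete object contour + object attributes (key feature positions).
    contour, key_points = detect_object(image)
    # Step 02: continuous, mainly triangular mesh over contour and key points.
    vertices, triangles = build_mesh(contour, key_points)
    # Step 03: map each mesh cell to its local face region / texture coordinates.
    uvs = map_mesh_to_texture(vertices, image.shape)
    # Step 04: CPU-side deformation -- the active cell plus its passive neighbours.
    deformed = deform_mesh(vertices, triangles, drag_index, drag_vector)
    # Step 05: hand mesh, texture and rendering parameters to the GPU to render.
    return render_on_gpu(image, deformed, uvs, triangles)
```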
In the image shape adjustment method of an embodiment of the present invention, the complete object contour in step 01 above is the contour of a face or of the front of a head, and the object attributes are attributes of the facial features, including but not limited to specific key feature positions such as the highest point of the cheekbone, the highest point of the brow bone, the inner and outer corners of the eyes, the corners of the mouth, the inner and outer tips of the eyebrows, the nose tip, the nostrils and the pupils.
In step 02 above, the largest object that covers the other objects within the complete object contour can be used as the reference boundary of the continuous mesh. When the largest object is the face, the contour edge of the face can be used as the reference boundary of the continuous mesh, and this reference boundary can be extended outward to form a follower extension mesh for adapting to changes between the face and the surrounding environment.
In step 04 above, the attributes of the local mesh include a shape attribute and the vertex coordinates of that shape.
In step 05 above, the rendering parameters include the changed mesh vertex coordinates. The GPU uses the vertex shader supported by both OpenGL and Direct3D to change the mesh vertex coordinates in real time, and uses the texture mapping supported by both OpenGL and Direct3D to change the result rendered to the screen.
FIG. 2 is a processing flowchart of another embodiment of the image shape adjustment method of the present invention. As shown in FIG. 2, in the image shape adjustment method of an embodiment of the present invention, step 01 above includes:
Step 11: determine the contour of the face. The color image can be converted to a grayscale image and then clustered to obtain the contour of the face.
Step 12: determine the contours of the facial features. The color image can be converted to a grayscale image and then clustered to obtain the contours of the facial features.
Step 13: determine the key feature positions of the facial features. The key feature positions can be identified from data on the physiological characteristics of the facial features.
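As a rough illustration of steps 11 and 12, the sketch below converts the color image to grayscale and clusters pixel intensities with k-means (OpenCV 4 API) to extract a face contour; the two-cluster assumption, the "brighter cluster is skin" heuristic and the example landmark dictionary for step 13 are all assumptions, and a practical system would normally use a trained facial landmark detector.

```python
import cv2
import numpy as np

def face_contour(image_bgr: np.ndarray) -> np.ndarray:
    """Steps 11-12: grayscale conversion + clustering to obtain a face contour."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    samples = gray.reshape(-1, 1).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    # Cluster pixel intensities into two groups (assumed: face/skin vs. background).
    _, labels, centers = cv2.kmeans(samples, 2, None, criteria, 3,
                                    cv2.KMEANS_RANDOM_CENTERS)
    face_label = int(np.argmax(centers))          # assume the brighter cluster is skin
    mask = (labels.reshape(gray.shape) == face_label).astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea).squeeze(1)   # (N, 2) contour points

# Step 13 (assumed representation): key feature positions keyed by the names used
# in the text -- eye corners, mouth corners, nose tip and so on; coordinates are
# purely illustrative.
key_points_example = {
    "left_eye_outer": (120, 180), "left_eye_inner": (160, 182),
    "nose_tip": (200, 240), "mouth_left": (160, 300), "mouth_right": (240, 300),
}
```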
In the image shape adjustment method of an embodiment of the present invention, step 02 above includes:
Step 21: set key feature points at the contour positions of the face. The key feature points of the contour include, but are not limited to, the endpoints of the face's axis of symmetry, the maximum-offset points and endpoints of continuous virtual contour lines, discrete symmetric points, and the like.
Step 22: set key feature points at the key feature positions of the facial features. The key feature points of the facial features include, but are not limited to, the physiological feature points mentioned above.
Step 23: form a continuous mesh between the key feature points. The cells of the continuous mesh are triangles, or mainly triangles; the continuous mesh is a two-dimensional mesh, and the vertices of each triangle are adjacent key feature points.
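The text does not fix how the key feature points are connected; the sketch below uses a Delaunay triangulation from SciPy as one plausible way to obtain the two-dimensional, mainly triangular continuous mesh of step 23.

```python
import numpy as np
from scipy.spatial import Delaunay

def build_mesh(contour_points: np.ndarray, key_points: np.ndarray):
    """Step 23: connect contour points and key feature points into a triangle mesh.

    Returns (vertices, triangles): vertices is (N, 2) pixel coordinates,
    triangles is (M, 3) indices into vertices.
    """
    vertices = np.vstack([contour_points, key_points]).astype(np.float32)
    triangles = Delaunay(vertices).simplices   # adjacent points become triangle vertices
    return vertices, triangles

# Example: a toy face outline plus a few inner landmarks.
outline = np.array([[0, 0], [100, 0], [100, 140], [0, 140]], dtype=np.float32)
landmarks = np.array([[30, 50], [70, 50], [50, 90]], dtype=np.float32)
verts, tris = build_mesh(outline, landmarks)
print(verts.shape, tris.shape)   # e.g. (7, 2) and (M, 3); the triangle count may vary
```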
In an embodiment of the present invention, the perimeter of the continuous mesh (i.e. the face contour) extends outward to form a follower extension mesh; the extension mesh serves as the boundary of the continuous mesh and is likewise built from triangles.
In an embodiment of the present invention, the continuous mesh is extended outward, centered on the image pixels covered by the continuous mesh, to form a larger passive follower mesh. The correlation between the edge cells of the passive follower mesh and the central mesh weakens as the distance increases.
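The distance-weakened correlation of the passive follower mesh can be modelled, for example, as a per-vertex weight that decays with distance from the face; the linear falloff and the radius in the sketch below are assumptions, since the text only says that the correlation weakens as the distance increases.

```python
import numpy as np

def follower_weights(vertices: np.ndarray, center: np.ndarray, radius: float) -> np.ndarray:
    """Weight 1.0 at the centre of the continuous mesh, falling to 0.0 at `radius`.

    Vertices of the outer follower mesh therefore move less and less the further
    they are from the face, which keeps the deformation local.
    """
    dist = np.linalg.norm(vertices - center, axis=1)
    return np.clip(1.0 - dist / radius, 0.0, 1.0)

verts = np.array([[0, 0], [50, 0], [120, 0], [300, 0]], dtype=np.float32)
print(follower_weights(verts, center=np.array([0.0, 0.0]), radius=200.0))
# -> [1.0, 0.75, 0.4, 0.0]: the farthest follower vertex no longer follows at all.
```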
In the image shape adjustment method of an embodiment of the present invention, step 03 above includes:
Step 31: associate each mesh cell with a unique corresponding local face region. The correspondence includes both the correspondence between each cell and the area of the local face region it covers, and the correspondence with the vertex positions of that region. In an embodiment of the present invention, the one-to-one correspondence between each mesh cell and a local face region may be replaced by a one-to-one correspondence between a face shape or facial-feature object and a corresponding group of mesh cells.
Step 32: associate each mesh cell with the texture pattern of the corresponding local face region. This covers the image attributes within the area of each cell and of the local face region it covers, including but not limited to texture, color and brightness.
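A small sketch of the texture association in step 32: each mesh vertex gets a texture coordinate by normalizing its pixel position, so every triangle stays tied to the patch of the original face image it covers. The [0, 1] UV convention is an assumption (it matches OpenGL-style texture coordinates).

```python
import numpy as np

def map_mesh_to_texture(vertices: np.ndarray, image_shape: tuple) -> np.ndarray:
    """Step 32: associate each mesh vertex with the face-image texture it covers.

    vertices: (N, 2) pixel coordinates (x, y); image_shape: (height, width, ...).
    Returns (N, 2) UV coordinates in [0, 1]; a triangle's three UVs pin its patch.
    """
    height, width = image_shape[:2]
    uv = np.empty_like(vertices, dtype=np.float32)
    uv[:, 0] = vertices[:, 0] / float(width - 1)    # u follows x
    uv[:, 1] = vertices[:, 1] / float(height - 1)   # v follows y
    return uv

verts = np.array([[0, 0], [639, 0], [320, 479]], dtype=np.float32)
print(map_mesh_to_texture(verts, (480, 640, 3)))
# -> approximately [[0. 0.], [1. 0.], [0.5008 1.]]
```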
In the image shape adjustment method of an embodiment of the present invention, step 04 above includes:
Step 41: record the actively pushed or pulled mesh cells, together with their displaced positions and vertex positions. The active mesh comprises cells whose shape is changed under control and/or which are displaced to a new position.
Step 42: record the passive mesh cells adjacent to the pushed or pulled cells, together with their displaced positions and vertex positions. The passive mesh comprises cells that propagate the changes of the active mesh, changing shape and/or being displaced to a new position.
Step 43: update the shape and position of the cells in the continuous mesh to form the rendering parameters. Placing the resulting modified mesh vertex coordinates into the vertex shader controls the shape of the mesh.
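Steps 41-43 can be illustrated with a single dragged vertex whose displacement is passed on, attenuated, to the passive vertices that share a triangle with it; the fixed attenuation factor and the triangle-sharing definition of adjacency are assumptions, as the patent does not prescribe how the passive displacement is derived.

```python
import numpy as np

def deform_mesh(vertices: np.ndarray, triangles: np.ndarray,
                active_idx: int, drag: np.ndarray, attenuation: float = 0.5) -> np.ndarray:
    """Steps 41-43: move the active vertex, propagate a damped displacement to the
    passive vertices that share a triangle with it, and return the updated vertex
    positions (the rendering parameters later handed to the vertex shader)."""
    new_vertices = vertices.copy()

    # Step 41: record and apply the active (pushed/pulled) displacement.
    new_vertices[active_idx] += drag

    # Step 42: passive cells are those adjacent to the active one; here adjacency
    # means "shares a triangle with the active vertex".
    adjacent = np.unique(triangles[np.any(triangles == active_idx, axis=1)])
    passive = adjacent[adjacent != active_idx]
    new_vertices[passive] += attenuation * drag

    # Step 43: the updated shapes/positions are the new rendering parameters.
    return new_vertices

verts = np.array([[0, 0], [10, 0], [0, 10], [10, 10]], dtype=np.float32)
tris = np.array([[0, 1, 2], [1, 3, 2]])
print(deform_mesh(verts, tris, active_idx=1, drag=np.array([2.0, 0.0])))
# vertex 1 moves by (2, 0); vertices 0, 2 and 3 (its triangle neighbours) move by (1, 0).
```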
In the image shape adjustment method of an embodiment of the present invention, step 05 above includes:
Step 51: pass the initial continuous mesh, the texture patterns of the local face regions corresponding to the mesh, and the rendering parameters to the GPU through a graphics processing interface. Graphics processing interfaces include, but are not limited to, standard interfaces or interface subsets such as DirectX and OpenGL ES.
After receiving the above data, the GPU completes the deformation and rendering of the original face image according to the rendering parameters, thereby implementing the face-shape adjustment. The image shape adjustment method of an embodiment of the present invention makes full use of the CPU to obtain the main deformation parameters of the finely meshed face object, which greatly shortens the GPU's high-load processing time and reduces energy consumption. Having obtained the parameters, the GPU only needs to complete a simple rendering pass, which preserves compatibility with image-processing software and hardware and keeps the output consistent.
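Step 51 only requires a standard graphics interface; as an illustration, the sketch below uses the Python moderngl bindings (an assumption — the text itself names DirectX and OpenGL ES) to upload the mesh, UV coordinates and face texture and to draw the deformed mesh with a simple pass-through vertex shader, which is exactly the lightweight GPU work the method aims for. Drawing the follower mesh first with undeformed positions and then the face mesh with the deformed positions gives the two-pass behaviour described next.

```python
import numpy as np
import moderngl

VERT_SHADER = """
#version 330
in vec2 in_pos;        // deformed vertex position in normalized device coordinates
in vec2 in_uv;         // texture coordinate into the original face image
out vec2 v_uv;
void main() { v_uv = in_uv; gl_Position = vec4(in_pos, 0.0, 1.0); }
"""
FRAG_SHADER = """
#version 330
uniform sampler2D face_tex;
in vec2 v_uv;
out vec4 frag_color;
void main() { frag_color = texture(face_tex, v_uv); }
"""

def render_on_gpu(image_rgb: np.ndarray, deformed_ndc: np.ndarray,
                  uvs: np.ndarray, triangles: np.ndarray) -> np.ndarray:
    """Step 51: pass mesh, texture pattern and rendering parameters to the GPU."""
    h, w = image_rgb.shape[:2]
    ctx = moderngl.create_standalone_context()
    prog = ctx.program(vertex_shader=VERT_SHADER, fragment_shader=FRAG_SHADER)
    prog["face_tex"].value = 0

    # Texture pattern of the face regions covered by the mesh.
    tex = ctx.texture((w, h), 3, np.ascontiguousarray(image_rgb).tobytes())
    tex.use(location=0)

    # Rendering parameters: deformed vertex positions interleaved with their UVs.
    vbo = ctx.buffer(np.hstack([deformed_ndc, uvs]).astype("f4").tobytes())
    ibo = ctx.buffer(triangles.astype("i4").tobytes())
    vao = ctx.vertex_array(prog, [(vbo, "2f 2f", "in_pos", "in_uv")], ibo)

    fbo = ctx.simple_framebuffer((w, h))
    fbo.use()
    fbo.clear(0.0, 0.0, 0.0, 1.0)
    vao.render()                      # a simple draw call: the GPU only stretches/warps
    # Note: OpenGL's texture origin is bottom-left, so the result may be vertically
    # flipped relative to the input image.
    out = np.frombuffer(fbo.read(components=3), dtype=np.uint8).reshape(h, w, 3)
    ctx.release()
    return out
```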
The processing performed by the GPU includes:
using the texture-mapping mechanism to change the positions of the face pixels on the screen;
first drawing the original full-screen texture for the follower mesh;
and then drawing the deformed full-screen texture for the extended face mesh.
This approach ensures that the change of face shape is confined to the extent of the extension mesh and does not deform distant parts of the screen that should not be deformed along with it.
FIG. 3 is a processing flowchart of an embodiment of the image shape adjustment device of the present invention. As shown in FIG. 3 and corresponding to the image shape adjustment method of an embodiment of the present invention, the image shape adjustment device of an embodiment of the present invention includes:
Object determination module 100: used to determine the complete object contour and object attributes in the image;
Mesh building module 200: used to build a continuous mesh from regular shapes based on the complete object contour and object attributes;
Mapping module 300: used to establish a mapping between local objects and local meshes;
Mesh adjustment module 400: used to form deformation data for the local object by changing the attributes of the local mesh;
Output module 500: used to take the continuous mesh attributes as rendering parameters and use the GPU to complete the processing of the complete object.
FIG. 4 is a processing flowchart of another embodiment of the image shape adjustment device of the present invention. As shown in FIG. 4, in the image shape adjustment device of an embodiment of the present invention the object determination module 100 includes:
Main contour determination sub-module 110: used to determine the contour of the face. The color image can be converted to a grayscale image and then clustered to obtain the contour of the face.
Secondary contour determination sub-module 120: used to determine the contours of the facial features. The color image can be converted to a grayscale image and then clustered to obtain the contours of the facial features.
Specific contour determination sub-module 130: used to determine the key feature positions of the facial features. The key feature positions can be identified from data on the physiological characteristics of the facial features.
In the image shape adjustment device of an embodiment of the present invention, the mesh building module 200 includes:
Contour feature positioning sub-module 210: used to set key feature points at the contour positions of the face. The key feature points of the contour include, but are not limited to, the endpoints of the face's axis of symmetry, the maximum-offset points and endpoints of continuous virtual contour lines, discrete symmetric points, and the like.
Key feature positioning sub-module 220: used to set key feature points at the key feature positions of the facial features. The key feature points of the facial features include, but are not limited to, the physiological feature points mentioned above.
Mesh connection sub-module 230: used to form a continuous mesh between the key feature points. The cells of the continuous mesh are triangles, or mainly triangles; the continuous mesh is a two-dimensional mesh, and the vertices of each triangle are adjacent key feature points.
Further, the module is also used to extend the perimeter of the continuous mesh (i.e. the face contour) outward to form a follower extension mesh; the extension mesh serves as the boundary of the continuous mesh and is likewise built from triangles.
Further, the module is also used to extend the continuous mesh outward, centered on the image pixels covered by the continuous mesh, to form a larger passive follower mesh. The correlation between the edge cells of the passive follower mesh and the central mesh weakens as the distance increases.
In the image shape adjustment device of an embodiment of the present invention, the mapping module 300 includes:
Image area mapping sub-module 310: used to associate each mesh cell with a unique corresponding local face region. The correspondence includes both the correspondence between each cell and the area of the local face region it covers, and the correspondence with the vertex positions of that region.
Image attribute mapping sub-module 320: used to associate each mesh cell with the texture pattern of the corresponding local face region. This covers the image attributes within the area of each cell and of the local face region it covers, including but not limited to texture, color and brightness.
In the image shape adjustment device of an embodiment of the present invention, the mesh adjustment module 400 includes:
Active adjustment sub-module 410: used to record the actively pushed or pulled mesh cells, together with their displaced positions and vertex positions. The active mesh comprises cells whose shape is changed under control and/or which are displaced to a new position.
Passive adjustment sub-module 420: used to record the passive mesh cells adjacent to the pushed or pulled cells, together with their displaced positions and vertex positions. The passive mesh comprises cells that propagate the changes of the active mesh, changing shape and/or being displaced to a new position.
Update sub-module 430: used to update the shape and position of the cells in the continuous mesh to form the rendering parameters.
In the image shape adjustment device of an embodiment of the present invention, the output module 500 includes:
Interface adaptation sub-module 510: used to pass the initial continuous mesh, the texture patterns of the local face regions corresponding to the mesh, and the rendering parameters to the GPU through a graphics processing interface. Graphics processing interfaces include, but are not limited to, standard interfaces or interface subsets such as DirectX and OpenGL ES.
The above are only preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement and the like made within the spirit and principles of the present invention shall fall within its scope of protection.
The image shape adjustment device of an embodiment of the present invention includes a memory and a processor, wherein:
the memory is used to store program code implementing the processing steps of the image shape adjustment method of the above embodiments;
the processor is used to run the program code implementing the processing steps of the image shape adjustment method of the above embodiments.
A person of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or in software depends on the specific application and the design constraints of the technical solution. A skilled person may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present invention.
A person skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the systems, devices and units described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed systems, devices and methods may be implemented in other ways. For example, the device embodiments described above are merely illustrative; the division into units is only a division by logical function, and in actual implementation there may be other ways of division; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention in essence, or the part that contributes to the prior art, or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
The above are only specific embodiments of the present invention, but the scope of protection of the present invention is not limited thereto. Any person skilled in the art could readily conceive of changes or substitutions within the technical scope disclosed by the present invention, and these shall all be covered by the scope of protection of the present invention. Therefore, the scope of protection of the present invention shall be subject to the scope of protection of the claims.
Industrial Applicability
The image shape adjustment method and adjustment device of the present invention establish an accurate positional association between key features, forming associated mesh attributes. A general-purpose recognition algorithm running on a general-purpose processor can perform the rendering-parameter transformation for a specific image, and the resulting parameters can be applied directly to basic stretching, warping or rendering passes that render consistently, guaranteeing the rendering result on different types of GPU. This fully satisfies portability across the GPUs of different mobile terminals and avoids the high energy consumption of advanced GPU data processing.

Claims (13)

  1. An image shape adjustment method, comprising:
    determining the complete object contour and object attributes in an image;
    building a continuous mesh from regular shapes based on the complete object contour and object attributes;
    establishing a mapping between local objects and local meshes;
    forming deformation data for a local object by changing the attributes of its local mesh;
    taking the continuous mesh attributes as rendering parameters and using the GPU to complete the processing of the complete object.
  2. The image shape adjustment method according to claim 1, wherein determining the complete object contour and object attributes in the image comprises:
    determining the contour of the face;
    determining the contours of the facial features;
    determining the key feature positions of the facial features.
  3. The image shape adjustment method according to claim 1, wherein building a continuous mesh from regular shapes based on the complete object contour and object attributes comprises:
    setting key feature points at the contour positions of the face;
    setting key feature points at the key feature positions of the facial features;
    forming a continuous mesh between the key feature points.
  4. The image shape adjustment method according to claim 1, wherein establishing a mapping between local objects and local meshes comprises:
    associating each mesh cell with a unique corresponding local face region;
    associating each mesh cell with the texture pattern of the corresponding local face region.
  5. The image shape adjustment method according to claim 1, wherein forming deformation data for the local object by changing the attributes of the local mesh comprises:
    recording the actively pushed or pulled mesh cells, together with their displaced positions and vertex positions;
    recording the passive mesh cells adjacent to the pushed or pulled cells, together with their displaced positions and vertex positions;
    updating the shape and position of the cells in the continuous mesh to form the rendering parameters.
  6. The image shape adjustment method according to claim 1, wherein taking the continuous mesh attributes as rendering parameters and using the GPU to complete the processing of the complete object comprises:
    passing the initial continuous mesh, the texture patterns of the local face regions corresponding to the mesh, and the rendering parameters to the GPU through a graphics processing interface.
  7. An image shape adjustment device, comprising:
    an object determination module, used to determine the complete object contour and object attributes in an image;
    a mesh building module, used to build a continuous mesh from regular shapes based on the complete object contour and object attributes;
    a mapping module, used to establish a mapping between local objects and local meshes;
    a mesh adjustment module, used to form deformation data for a local object by changing the attributes of its local mesh;
    an output module, used to take the continuous mesh attributes as rendering parameters and use the GPU to complete the processing of the complete object.
  8. The image shape adjustment device according to claim 7, wherein the object determination module comprises:
    a main contour determination sub-module, used to determine the contour of the face;
    a secondary contour determination sub-module, used to determine the contours of the facial features;
    a specific contour determination sub-module, used to determine the key feature positions of the facial features.
  9. The image shape adjustment device according to claim 7, wherein the mesh building module comprises:
    a contour feature positioning sub-module, used to set key feature points at the contour positions of the face;
    a key feature positioning sub-module, used to set key feature points at the key feature positions of the facial features;
    a mesh connection sub-module, used to form a continuous mesh between the key feature points.
  10. The image shape adjustment device according to claim 7, wherein the mapping module comprises:
    an image area mapping sub-module, used to associate each mesh cell with a unique corresponding local face region;
    an image attribute mapping sub-module, used to associate each mesh cell with the texture pattern of the corresponding local face region.
  11. The image shape adjustment device according to claim 7, wherein the mesh adjustment module comprises:
    an active adjustment sub-module, used to record the actively pushed or pulled mesh cells, together with their displaced positions and vertex positions;
    a passive adjustment sub-module, used to record the passive mesh cells adjacent to the pushed or pulled cells, together with their displaced positions and vertex positions;
    an update sub-module, used to update the shape and position of the cells in the continuous mesh to form the rendering parameters.
  12. The image shape adjustment device according to claim 7, wherein the output module comprises:
    an interface adaptation sub-module, used to pass the initial continuous mesh, the texture patterns of the local face regions corresponding to the mesh, and the rendering parameters to the GPU through a graphics processing interface.
  13. An image shape adjustment device, comprising a processor and a memory, wherein
    the memory is used to store program code for performing the image shape adjustment method according to any one of claims 1 to 6;
    the processor is used to run the program code.
PCT/CN2017/118807 2017-01-25 2017-12-27 一种图像形状调节方法和调节装置 WO2018137454A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710060917.4 2017-01-25
CN201710060917.4A CN106846428A (zh) 2017-01-25 2017-01-25 一种图像形状调节方法和调节装置

Publications (1)

Publication Number Publication Date
WO2018137454A1 true WO2018137454A1 (zh) 2018-08-02

Family

ID=59122955

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/118807 WO2018137454A1 (zh) 2017-01-25 2017-12-27 一种图像形状调节方法和调节装置

Country Status (2)

Country Link
CN (1) CN106846428A (zh)
WO (1) WO2018137454A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106846428A (zh) * 2017-01-25 2017-06-13 迈吉客科技(北京)有限公司 一种图像形状调节方法和调节装置
CN112488909B (zh) * 2019-09-11 2024-09-24 广州虎牙科技有限公司 多人脸的图像处理方法、装置、设备及存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080174795A1 (en) * 2007-01-23 2008-07-24 Accenture Global Services Gmbh Reshaping an image to thin or fatten a face
CN103824253A (zh) * 2014-02-19 2014-05-28 中山大学 一种基于图像局部精确变形的人物五官变形方法
CN104063890A (zh) * 2013-03-22 2014-09-24 中国移动通信集团福建有限公司 一种人脸卡通动漫形象化方法及系统
CN105184735A (zh) * 2014-06-19 2015-12-23 腾讯科技(深圳)有限公司 一种人像变形方法及装置
CN106296571A (zh) * 2016-07-29 2017-01-04 厦门美图之家科技有限公司 一种基于人脸网格的缩小鼻翼方法、装置和计算设备
CN106846428A (zh) * 2017-01-25 2017-06-13 迈吉客科技(北京)有限公司 一种图像形状调节方法和调节装置


Also Published As

Publication number Publication date
CN106846428A (zh) 2017-06-13

Similar Documents

Publication Publication Date Title
CN110807836B (zh) 三维人脸模型的生成方法、装置、设备及介质
AU2017235889B2 (en) Digitizing physical sculptures with a desired control mesh in 3d
EP3992919B1 (en) Three-dimensional facial model generation method and apparatus, device, and medium
CN111714885B (zh) 游戏角色模型生成、角色调整方法、装置、设备及介质
WO2018137455A1 (zh) 一种图像互动方法及互动装置
CN107452049B (zh) 一种三维头部建模方法及装置
CN107564080B (zh) 一种人脸图像的替换系统
CN109978984A (zh) 人脸三维重建方法及终端设备
CN104899563A (zh) 一种二维人脸关键特征点定位方法及系统
CN107507216A (zh) 图像中局部区域的替换方法、装置及存储介质
JP2019536162A (ja) シーンのポイントクラウドを表現するシステム及び方法
JP7244810B2 (ja) 単色画像及び深度情報を使用した顔テクスチャマップ生成
US10185875B2 (en) Image processing device, image display device, image processing method, and medium
CN105144243A (zh) 数据可视化
CN111382618B (zh) 一种人脸图像的光照检测方法、装置、设备和存储介质
TWI684163B (zh) 虛擬實境裝置、影像處理方法以及非暫態電腦可讀取記錄媒體
WO2018137454A1 (zh) 一种图像形状调节方法和调节装置
WO2023179091A1 (zh) 三维模型渲染方法、装置、设备、存储介质及程序产品
CN114241119A (zh) 一种游戏模型生成方法、装置、系统及计算机存储介质
US20220277586A1 (en) Modeling method, device, and system for three-dimensional head model, and storage medium
CN117745915B (zh) 一种模型渲染方法、装置、设备及存储介质
KR20220126063A (ko) 재구성된 이미지를 생성하는 이미지 처리 방법 및 장치
JP2020532022A (ja) 全視角方向の球体ライトフィールドレンダリング方法
US10403038B2 (en) 3D geometry enhancement method and apparatus therefor
WO2022262201A1 (zh) 面部三维模型可视化方法、装置、电子设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17894171

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 22.11.2019)

122 Ep: pct application non-entry in european phase

Ref document number: 17894171

Country of ref document: EP

Kind code of ref document: A1