CN105279788A - Method for generating object space swept volume - Google Patents
Method for generating object space swept volume
- Publication number: CN105279788A
- Application number: CN201510705103.2A
- Authority
- CN
- China
- Prior art keywords
- model
- voxel
- dimensional
- sampling
- swept
- Legal status: Granted
Landscapes
- Processing Or Creating Images (AREA)
- Image Generation (AREA)
Abstract
The invention discloses a method for generating the space swept volume of an object. It relates to the fields of virtual simulation, computer-aided manufacturing (CAM) and product lifecycle management (PLM), and in particular to a method for generating the swept volume of a moving virtual model. The method uses a three-dimensional voxel (Voxel) approach to obtain the swept volume formed by an object undergoing motion transformations in three-dimensional space, also called the envelope volume, envelope space, sweep space or sweep trajectory. Compared with existing swept-volume computation methods, the proposed method applies to rigid and non-rigid motion transformations of arbitrarily complex three-dimensional shapes, and can perform real-time three-dimensional display and surface reconstruction of the swept volume at every sampling instant of the motion transformation. In addition, the invention accelerates the swept-volume computation with large-scale parallel computing, and outperforms existing methods in computational efficiency, accuracy and robustness.
Description
Technical Field
The invention belongs to the fields of virtual simulation, computer-aided manufacturing (CAM) and product lifecycle management (PLM), and in particular relates to a method for generating the swept volume of a moving virtual model.
Background Art
The space swept volume, also called the space envelope volume, envelope space, sweep space or sweep trajectory space (in English, Swept Volume), is the union of all the space that an object passes through when it moves along an arbitrary trajectory curve (or even a surface) under some kind of transformation (translation, rotation, scaling, deformation) in three-dimensional space. The swept-volume problem comprises a series of sub-problems such as envelope modeling and computation, the delineation of envelope curves and surfaces, and three-dimensional display; the most challenging of these are the generation and visualization of the swept volume. Swept-volume technology plays a very important role in the design of geometrically complex models, assembly feasibility analysis of equipment, large-scale design, and robot motion path planning, and it is of extraordinary value and significance in aerospace, shipbuilding, automotive and other manufacturing industries.
Research on space swept volumes began in the 1950s and 1960s. Although more than half a century of development has produced a series of methods for generating swept volumes, the complexity of the algorithms and the enormous amount of computation have remained major obstacles to their application in real production practice, and the problem has therefore long been regarded as one of the recognized hard problems in computer graphics, visualization and virtual reality.
In the early stage of swept-volume research, the implementation methods were mainly based on analytic geometry, such as algorithms based on the motion form and on the geometric form. Methods based on the motion form start from the envelope-surface equations of a family of curves and solve them as differential equations using envelope theory. For example, Kim and Moon proposed an algebraic algorithm, specialized to algebraic curves and surfaces, that generates purely rotational swept volumes; the swept-volume boundary it produces is related to the boundary of the revolved planar object. Parida and Mudur went further and added a series of sweeping rules for a more general swept-volume method. Later, other researchers extended this approach and proposed sweep-envelope methods that generate the three-dimensional swept volumes of polygonal objects. Because such swept-volume generation involves solving a large number of equations, its computational efficiency is low, and its algorithmic complexity depends on the number of vertices of the polygonal model. The core foundation of swept-volume computation in this period was implicit-function theory: stratification and the sweep-envelope partial differential equations. Most of the work characterized the boundary of the swept volume under certain assumptions on the sweep path, and a large body of research addressed how to compute the intersections of surfaces in geometric solid modeling. Although these geometry-based methods can generate swept volumes to some extent, their common defect is that they apply only to simple polygons or simple sweep trajectories and generalize poorly; more importantly, their computation involves solving many partial differential equations, so obtaining an accurate swept volume requires very high algebraic complexity in the underlying calculations. In addition, all of these surface-intersection algorithms suffer from problems of accuracy and robustness. As a result, geometry-based methods cannot accurately compute the swept volume of an arbitrary polygon along a given smooth path.
In recent years, with the continuous improvement of computer hardware and computing power, swept-volume technology has been revisited, and the emergence of a novel graphics representation and operation method, the voxel (Voxel) method, has made it possible to compute the swept volume of arbitrarily complex shapes along arbitrarily complex motion trajectories. At present, only three companies in the world, Boeing (USA), Dassault (France) and Siemens (Germany), offer actual products for generating the space swept volume of arbitrarily complex objects: Boeing's Voxmap PointShell™ (VPS), Dassault's CATIA and Siemens' Kineo Wrapping. Boeing's Voxmap PointShell™ (VPS) is itself a function library that uses voxels to solve spatial geometry problems. In VPS, the Swept Volume generation module can generate the spatial solid swept by an object along a user-defined motion trajectory and perform surface compensation of the swept volume; in addition, the meshing function of VPS uses the local voxel-space topology to generate polygonal models quickly. Dassault's CATIA is an important component of its PLM collaborative solution; its swept-volume module is implemented differently from Boeing's VPS, using geometric-topology calculus and level-set methods, which gives it low computational complexity while preserving accuracy. The Siemens PLM software Kineo™ Wrapping solution is a general-purpose software library for computing swept volumes and wrappings. Kineo Wrapping can create the space swept volume of a moving part during a simulation by using externally defined motion-space reservations, and it also supports swept-volume generation for the motion of human models, robots and articulated systems. Through a series of algorithmic optimizations, Kineo Wrapping lets users obtain simplified mesh models and envelope surfaces directly from the swept volume without storing other intermediate conversions.
Although the swept-volume generation technologies above are also based on the voxel method and each has its own characteristics, analyzing their workflows shows that they all share the following shortcomings:
1. The object model used to generate the swept volume can only undergo rigid-body motion, i.e. translation and rotation; nothing can be done about swept volumes produced by non-rigid transformations such as bending, scaling, squeezing of the object itself or arbitrary deformation of a free surface;
2. Only a single final static space swept volume can be generated; the space swept volume at every instant of the object's motion cannot be computed and displayed dynamically in real time;
3. The existing space swept-volume algorithms have high complexity and low computational efficiency, and all of them use offline computation rather than real-time dynamic computation.
Summary of the Invention
The purpose of the present invention is to address the shortcomings of existing space swept-volume computation by proposing a novel fast method for computing the space swept volume of an object based on three-dimensional voxels. The method converts the object model undergoing motion transformations in three-dimensional space into a voxel model in real time with an accelerated polygon-facet voxelization algorithm, and then merges the voxel models of different instants with a fast voxel operation algorithm, thereby generating the space swept-volume voxel model at every instant and displaying it in three dimensions in real time.
The method of the present invention is realized through the following specific steps:
Step 1. Represent the virtual object model M to be swept in the form of polygonal facets and vertices, and record the coordinate information D of each vertex and the connectivity information P needed to assemble the vertices into polygonal facets;
Step 2. Assign to the virtual object model M a continuous motion transformation H of time length l in three-dimensional space;
Step 3. Discretize the continuous motion transformation H of the virtual object model M in three-dimensional space by sampling it along the time axis, each sampling point representing the motion transformation H(t) at some instant t, where t is an integer and 0 ≤ t ≤ l;
Step 4. At motion-transformation sampling instant 0, convert the virtual object model M represented by polygonal facets and vertices into a three-dimensional voxel model V_0 in its own local coordinate system F_local, and copy this voxel model V_0 into the world coordinate system F_world of the object motion-transformation space as the swept-volume voxel model at the initial instant 0 of the motion, SV(0) = V_0;
Step 5. At motion-transformation sampling instant t, where 1 ≤ t ≤ l, recompute, according to the current motion transformation H(t) of the virtual object model M, the polygonal-facet vertex coordinate information D of M and the connectivity information P needed to assemble the vertices into polygonal facets;
Step 6. Convert the virtual object model M transformed by H(t) in Step 5 back into a three-dimensional voxel model V_t in its own local coordinate system F_local, and copy this voxel model V_t into the world coordinate system F_world of the object motion-transformation space;
Step 7. Merge, by a set-union operation, the three-dimensional voxel model V_t of the sampling instant t from Step 6 with the union of the three-dimensional voxel models of all previous sampling instants 0 to t-1, generating the swept-volume voxel model up to instant t, SV(t) = V_t ∪ V_0 ∪ V_1 ∪ … ∪ V_(t-1), where 1 ≤ t ≤ l. Since the union of the voxel models of all previous sampling instants 0 to t-1 is in fact the swept-volume voxel model at instant t-1, i.e. SV(t-1) = V_0 ∪ V_1 ∪ … ∪ V_(t-1), the three-dimensional swept-volume voxel model at instant t can also be written as SV(t) = V_t ∪ SV(t-1);
Step 8. Display in three dimensions and save the swept-volume voxel model SV(t) at instant t generated in Step 7;
Step 9. Extract the surface voxels of the swept-volume voxel model SV(t) at instant t generated in Step 7 to produce the surface-voxel point subset SS(t) of the swept volume at instant t, display it in three dimensions and save it;
Step 10. Perform three-dimensional surface reconstruction on the swept-volume voxel model SV(t) at instant t generated in Step 7 to produce the polygonal-facet model SP(t) of the swept-volume surface at instant t, display it in three dimensions and save it;
Step 11. For every motion-transformation sampling instant t, 1 ≤ t ≤ l, repeat Steps 5 to 10, generating the swept-volume voxel model SV(t), the surface-voxel point subset SS(t) and the surface polygonal-facet model SP(t) of the object model M at each sampling instant t, and display them dynamically in three dimensions in real time (a code sketch of this loop is given after the notes below).
The virtual object model M in Step 1 may be a single object model or a combination of several object models;
In the combination of several object models described in Step 1, the object models may be peers of one another or may have a hierarchically connected parent-child inheritance relationship;
The virtual object model M in Step 1 may be expressed in any three-dimensional model format, such as, but not limited to, implicit surfaces, mesh surfaces, polygonal facets, NURBS surfaces, spline patches, solid geometry, etc.;
The continuous motion transformation H in Step 2 includes rigid transformations of the model M, namely translation and rotation, as well as non-rigid deformation modifications applied to the vertex coordinate information D and the facet connectivity information P, such as, but not limited to, scaling, bending, twisting, squeezing, shearing, free vertex-displacement transformations, skinning transformations, etc.;
In the discretized sampling of the continuous motion transformation H along the time axis described in Step 3, the sampling is not limited to uniform sampling at equal intervals but may be non-uniform sampling with variable intervals, and the sampling interval and sampling precision can be set arbitrarily;
The conversion of the virtual object model M into a three-dimensional voxel model in its own local coordinate system F_local described in Steps 4 and 6 is accelerated with parallel computing;
The computation of the three-dimensional swept-volume voxel model SV(t) at instant t described in Step 7 is accelerated with parallel computing;
The surface-voxel extraction of Step 9 and the surface reconstruction of Step 10 are accelerated with parallel computing, and these two steps are optional rather than mandatory.
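To make the step sequence above concrete, the following is a minimal C++ sketch of the per-sample loop of Steps 5 to 11. The types `Mesh`, `Transform` and `VoxelGrid` and the helpers `applyTransform`, `voxelize`, `unionInPlace` and `display` are illustrative placeholders, not names used by the invention, and the parallel acceleration of Steps 4 to 7 is omitted here.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

struct Mesh      { std::vector<float> D; std::vector<int> P; };              // vertex coordinates and facet connectivity
struct Transform { /* rotation, translation, scaling, non-rigid modifiers */ };
struct VoxelGrid { int nx, ny, nz; float voxelSize; std::vector<std::uint8_t> occ; };

// Illustrative helpers corresponding to the steps of the method (bodies omitted in this sketch).
Mesh      applyTransform(const Mesh& m, const Transform& h);   // Step 5: apply H(t) to D and P
VoxelGrid voxelize(const Mesh& m, float voxelSize);            // Steps 4 and 6: facet model to voxel model
void      unionInPlace(VoxelGrid& sv, const VoxelGrid& vt);    // Step 7: SV(t) = V_t ∪ SV(t-1)
void      display(const VoxelGrid& sv);                        // Step 8: real-time 3D display

// Accumulate the swept volume over the sampled motion H(0), H(1), ..., H(l).
VoxelGrid sweptVolume(const Mesh& M, const std::vector<Transform>& H, float voxelSize)
{
    VoxelGrid SV = voxelize(applyTransform(M, H[0]), voxelSize);   // SV(0) = V_0 (Step 4)
    display(SV);
    for (std::size_t t = 1; t < H.size(); ++t) {                   // sampling instants 1..l (Step 11)
        VoxelGrid Vt = voxelize(applyTransform(M, H[t]), voxelSize);
        unionInPlace(SV, Vt);                                      // SV(t) = V_t ∪ SV(t-1)
        display(SV);                                               // swept volume visible at every instant
    }
    return SV;
}
```

The optional surface-voxel extraction (Step 9) and surface reconstruction (Step 10) would be invoked inside the loop when needed; they are left out of this sketch.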
Advantages and beneficial effects of the present invention:
Compared with traditional swept-volume computation methods based on computational geometry, the proposed method requires little computation, has low complexity and is highly robust. Compared with existing voxel-based swept-volume methods, it applies not only to rigid motions of rotation and translation but also to swept-volume computation for bending, scaling, squeezing and arbitrary non-rigid transformations of free surfaces. Moreover, unlike all current swept-volume generation methods, it can compute and display in real time the space swept volume at any instant of the object's motion transformation, rather than only a static swept volume at the final instant. In addition, the proposed method uses parallel computing to accelerate the polygon voxelization and the voxel operations. The invention has important practical value and good application prospects in manufacturing, virtual reality, military simulation, ergonomics and other fields.
Description of the Drawings
Figure 1 shows the model object used for the space swept-volume computation;
Figure 2 shows the model object used for the space swept-volume computation being assigned a continuous motion transformation H of time length l;
Figure 3 shows the discretized sampling of the continuous motion transformation H of time length l;
Figure 4 shows the voxelization of the model used for the space swept-volume computation of the embodiment;
Figure 5 shows the merging, by a set-union operation, of the three-dimensional voxel model of the sampling instant t with the union of the three-dimensional voxel models of all previous sampling instants 0 to t-1;
Figure 6 shows the space swept-volume voxel model computed at sampling instant t;
Figure 7 shows the final space swept-volume voxel model obtained after the motion transformation of time length l;
Figure 8 shows a space swept-volume computation example for a robotic arm;
Figure 9 shows a space swept-volume computation example for a moving human body.
In the figures, 101 is the original sphere model; 102 is the original cylinder model; 103 is the sphere polygon/vertex model; 104 is the cylinder polygon/vertex model; 105 is the local coordinate system of the model object itself; 106 is the world coordinate system of the swept-volume space; 107 is the voxel model of the model object; 108 is a non-empty voxel at position (i, j, k); 109 is an empty voxel; 110 is the polygon/vertex model at sampling instant t; 111 is the voxel model of the swept object at sampling instant t; 112 is the space swept-volume voxel model at instant t-1; 113 is the space swept-volume voxel model at instant t; 114 is the final space swept-volume voxel model.
Detailed Description
In order to make the purpose, technical solution and advantages of the present invention clearer, the specific embodiments provided below in conjunction with the accompanying drawings are intended as a description of examples of the invention and are not intended to represent the only forms in which examples of the invention can be constructed or used. It should be understood that the specific embodiments described here only explain the invention, illustrate the functions of its examples and the sequence of steps for constructing and operating them, and do not limit the invention; the same or equivalent functions and sequences can be realized by different examples.
The fast method for computing the space swept volume of an object proposed by the present invention is characterized in that it computes the swept volume, also called the envelope volume, envelope space, sweep space or sweep trajectory, formed by an object undergoing motion transformations in three-dimensional space. The core idea of the method is to convert the object model undergoing motion transformations in three-dimensional space into a voxel model in real time with an accelerated polygon-facet voxelization algorithm, and then to merge the voxel models of different instants with a fast voxel operation algorithm, thereby generating the space swept-volume voxel model at every instant and displaying it in three dimensions in real time.
In a specific embodiment, the object used for the swept-volume computation consists of a sphere 101 and a cylinder 102, as shown in Figure 1. In other embodiments, the object used to compute the swept volume may be any complex three-dimensional object. The sphere 101 and the cylinder 102 have a hierarchically connected parent-child inheritance relationship, in which the sphere 101 is the child object and the cylinder 102 is the parent object of the sphere 101. That is, except for motion transformations defined in advance to apply exclusively to the cylinder 102, any motion transformation of the cylinder 102 is propagated downward to the sphere 101; conversely, a motion transformation of the sphere 101 is not propagated to the cylinder 102.
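As a minimal sketch of how such a hierarchical parent-child relationship is usually evaluated, assuming 4x4 homogeneous matrices; the `Node` type and the `worldMatrix` helper are illustrative, not part of the patent:

```cpp
#include <array>

using Mat4 = std::array<float, 16>;   // 4x4 homogeneous transform, row-major

Mat4 multiply(const Mat4& a, const Mat4& b)
{
    Mat4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                r[i * 4 + j] += a[i * 4 + k] * b[k * 4 + j];
    return r;
}

struct Node {
    Mat4  local;     // transform relative to the parent (e.g. sphere 101 relative to cylinder 102)
    Node* parent;    // nullptr for the root object (the cylinder 102 in this embodiment)
};

// A child's world transform composes its parent's world transform with its own local one,
// so transforming the parent (cylinder) also moves the child (sphere), but not vice versa.
Mat4 worldMatrix(const Node& n)
{
    return n.parent ? multiply(worldMatrix(*n.parent), n.local) : n.local;
}
```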
The virtual object models of the sphere 101 and the cylinder 102 can be produced by any three-dimensional design tool, and the model format may be any format capable of expressing the geometric feature information of the object, as long as it can be further converted into a polygon/vertex model. Typical three-dimensional model formats include, but are not limited to, triangle meshes, quadrilateral meshes, parametric surfaces, solid models, implicit surfaces, etc.
The polygon/vertex representations 103 and 104 of the virtual object models of the sphere 101 and the cylinder 102 can be produced by computer-aided design or by any method capable of converting model formats, as long as the resulting polygon/vertex model can be further converted into a voxel model.
After the polygon/vertex model conversion is completed, the vertex coordinate information of the sphere 103 and the cylinder 104 and the connectivity information of the vertices forming the polygonal facets are stored in the arrays D and P respectively. In one specific embodiment, D and P store the data in the following form:
D = (d_0, d_1, d_2, d_3, d_4, d_5, …, d_i, …, d_(n-1), d_n),
P = (p_0, p_1, p_2, p_3, p_4, p_5, …, p_j, …, p_(m-1), p_m),
Each d_i is a three-dimensional vector in three-dimensional space, with 0 ≤ i ≤ n and n the number of vertices; each p_j is an index value pointing to an element of the array D. For example, p_j = i means that p_j points to d_i. The order of the elements in the array P effectively constitutes the polygonal representation of the model. In one embodiment, assuming all polygons are triangles, every three consecutive elements of P, starting from the first element, form one triangle, and m should then be an integer multiple of 3.
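For illustration only, the arrays D and P could be held in contiguous buffers as sketched below for a triangle-only model; the type and member names are assumptions, not part of the patent:

```cpp
#include <array>
#include <cstddef>
#include <vector>

struct PolygonModel {
    std::vector<std::array<float, 3>> D;   // vertex positions d_i in the local coordinate system 105
    std::vector<std::size_t>          P;   // indices p_j into D; every 3 consecutive entries form one triangle

    std::size_t triangleCount() const { return P.size() / 3; }

    // The three corner positions of triangle k, 0 <= k < triangleCount().
    std::array<std::array<float, 3>, 3> triangle(std::size_t k) const {
        return { D[P[3 * k]], D[P[3 * k + 1]], D[P[3 * k + 2]] };
    }
};
```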
The vertex coordinate values stored in the array D are defined as displacements relative to the origin of a local coordinate system 105. The local coordinate system 105 is a Cartesian coordinate system consisting of an origin and three orthogonal coordinate axes X, Y, Z. It can be inherited directly from the original input model or redefined during the conversion to the polygon-facet and vertex model, and both the origin and the three orthogonal axes X, Y, Z can be set arbitrarily.
A continuous motion transformation H of time length l is assigned in the three-dimensional space R_d in which the sphere 103 and the cylinder 104 are located. The coordinate system of the three-dimensional space R_d is called the world coordinate system 106; its origin and its three orthogonal coordinate axes U, V, W are set by the user, as shown in Figure 2.
In one embodiment, the continuous motion transformation H is a combination of translation and rotation applied to the sphere 103 and the cylinder 104. In other embodiments, the motion transformations include, but are not limited to, translation and rotation, and the non-rigid deformation modifications include, but are not limited to, scaling, bending, twisting, squeezing, shearing, free vertex-displacement transformations, skinning transformations, etc. applied to the vertex coordinate information D and the facet connectivity information P.
The general form of the continuous motion transformation H is a composition of an affine transformation with a non-rigid part, where R is a three-dimensional orthogonal rotation matrix, T is a three-dimensional translation vector, S is a three-dimensional scaling vector, and FM denotes the non-rigid deformation modification transformation.
Define M_H as the model M transformed by the transformation H, i.e. M_H = H(M). As the object M moves, its motion trajectory can be expressed as a time-varying transformation matrix Q(t); at each time point between t_0 and t_l, Q produces a new position, orientation and size and a transformation of the model's vertices.
Furthermore, the swept volume formed by the object swept along an arbitrary trajectory Q is defined as the union, over the whole trajectory, of all instances of the transformed model: SV(Q, M) = ∪_{t ∈ [t_0, t_l]} M_H(t).
The motion transformation matrix Q(t) varies continuously with time, because in the real world the motion of an object cannot be discrete. In computer computation, however, the processed objects are generally discrete events, so the continuous motion transformation H of the virtual object model M in three-dimensional space must be discretized by sampling along the time axis, each sampling point representing the motion transformation H(t) at some instant t, where t is an integer and 0 ≤ t ≤ l.
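Collecting the definitions above, the following is a reconstruction of the elided formulas that is consistent with the surrounding text; the notation M_H(t), V_t and SV(t) is the one used in the steps of the Summary:

```latex
M_{H(t)} = H(t)\,(M), \qquad
SV(Q, M) = \bigcup_{t \in [t_0,\; t_l]} M_{H(t)},
\qquad\text{and after time sampling}\qquad
SV(t) = \bigcup_{i=0}^{t} V_i = V_t \cup SV(t-1).
```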
In the embodiment shown in Figure 2, the discretized sampling of the continuous motion transformation H of the sphere 103 and the cylinder 104 along the time axis is uniform sampling at equal intervals, with a sampling interval of 0.025 s, i.e. a sampling frequency of 40 Hz, meaning that the motion-transformation values are sampled 40 times per second. In other embodiments, the discretized sampling of the continuous motion transformation H along the time axis may be non-uniform sampling with variable intervals, i.e. the interval between two consecutive sampling time points need not be the same.
After the motion transformation and its discretized sampling are completed, at the initial motion-transformation sampling instant 0 the sphere 103 and the cylinder 104 represented by polygonal facets and vertices are converted into a three-dimensional voxel model 107 in their own local coordinate system 105, as shown in Figure 3.
A voxel (Voxel) represents a value on a regular grid in the three-dimensional space R_d. The coordinates of each voxel represent its displacement in three-dimensional space relative to the origin of the space coordinate system. For example, if voxel 108 has coordinates (i, j, k), its position relative to the origin of coordinate system 105 is (i, j, k). The value of each voxel represents the occupancy of the space at the voxel's position. For example, a voxel 108 of the voxel model 107 takes the value 1, indicating that the space at voxel 108 is occupied, while all voxels other than those forming the sphere 103 and the cylinder 104, such as voxel 109, take the value 0, indicating that the space at voxel 109 is not occupied.
Voxels represent a way of storing volumetric data: the space is subdivided uniformly along mutually orthogonal coordinate axes, forming a uniformly partitioned discrete grid space. An object in this voxel space is represented by the set of voxels it occupies.
The spacing between adjacent voxels determines the accuracy of the conversion from the polygon-facet model to the voxel model and can be set arbitrarily.
Note that in alternative examples the voxel model can be expressed in ways other than the regular grid described above. For example, data structures such as octrees can be used to further reduce storage and accelerate computation.
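As an illustration of the regular-grid occupancy representation just described, a dense grid could look as follows; the names are illustrative, and bit packing or the octree variant mentioned above would reduce memory further:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Dense occupancy grid: one byte per voxel, 1 = occupied (e.g. voxel 108), 0 = empty (e.g. voxel 109).
struct VoxelGrid {
    int nx, ny, nz;                   // grid resolution along X, Y, Z
    float voxelSize;                  // spacing between adjacent voxel centres (sets the conversion accuracy)
    std::vector<std::uint8_t> occ;    // nx * ny * nz occupancy values

    VoxelGrid(int x, int y, int z, float s)
        : nx(x), ny(y), nz(z), voxelSize(s), occ(std::size_t(x) * y * z, 0) {}

    // Linear index of voxel (i, j, k), i.e. its offset from the grid origin.
    std::size_t index(int i, int j, int k) const { return (std::size_t(k) * ny + j) * nx + i; }

    bool occupied(int i, int j, int k) const { return occ[index(i, j, k)] != 0; }
    void set(int i, int j, int k)            { occ[index(i, j, k)] = 1; }
};
```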
The conversion from polygonal facets to voxels can use any method capable of converting a polygon-facet model into a voxel model and is not limited to a particular method. In one embodiment, a polygon scan-conversion algorithm is used to voxelize the polygonal mesh model. This method only needs to apply a three-step test to every triangle of the polygonal model to obtain a mathematically correct voxelized model, and the voxelization accuracy can be controlled by setting the conversion conditions.
The conversion from the polygon-facet model to the voxel model can be accelerated in any highly parallelizable way, such as, but not limited to, multi-processor, multi-thread or GPU acceleration. In the parallelization, the polygonal facets P and the vertex information D of the sphere 103 and the cylinder 104 are distributed to parallel processing pipelines for voxelization. After the voxelization in all parallel pipelines has finished, the voxel data obtained in each pipeline are merged to form the final voxel model.
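A hedged sketch of distributing the triangles across threads with OpenMP, using the illustrative `PolygonModel` and `VoxelGrid` structures above; `triangleBoxOverlap` stands in for whatever exact triangle-voxel test the implementation uses (for example the three-step scan-conversion test mentioned earlier) and is assumed, not defined here:

```cpp
#include <algorithm>
#include <array>
#include <cmath>

// Assumed helper: exact overlap test between a triangle and an axis-aligned voxel cube.
bool triangleBoxOverlap(const std::array<std::array<float, 3>, 3>& tri,
                        const std::array<float, 3>& boxCentre, float halfSize);

void voxelizeParallel(const PolygonModel& model, VoxelGrid& grid,
                      const std::array<float, 3>& gridOrigin)
{
    #pragma omp parallel for schedule(dynamic)
    for (long t = 0; t < (long)model.triangleCount(); ++t) {
        const auto tri = model.triangle((std::size_t)t);

        // Voxel-space bounding box of the triangle, clamped to the grid.
        int lo[3], hi[3];
        for (int a = 0; a < 3; ++a) {
            float mn = std::min({tri[0][a], tri[1][a], tri[2][a]});
            float mx = std::max({tri[0][a], tri[1][a], tri[2][a]});
            lo[a] = std::max(0, (int)std::floor((mn - gridOrigin[a]) / grid.voxelSize));
            hi[a] = (int)std::floor((mx - gridOrigin[a]) / grid.voxelSize);
        }
        hi[0] = std::min(hi[0], grid.nx - 1);
        hi[1] = std::min(hi[1], grid.ny - 1);
        hi[2] = std::min(hi[2], grid.nz - 1);

        // Test only the voxels inside the bounding box and mark the overlapped ones.
        for (int k = lo[2]; k <= hi[2]; ++k)
            for (int j = lo[1]; j <= hi[1]; ++j)
                for (int i = lo[0]; i <= hi[0]; ++i) {
                    std::array<float, 3> c = { gridOrigin[0] + (i + 0.5f) * grid.voxelSize,
                                               gridOrigin[1] + (j + 0.5f) * grid.voxelSize,
                                               gridOrigin[2] + (k + 0.5f) * grid.voxelSize };
                    if (triangleBoxOverlap(tri, c, 0.5f * grid.voxelSize))
                        grid.set(i, j, k);   // a real implementation would merge overlapping writes per thread
                }
    }
}
```

Each thread only reads the shared mesh and raises occupancy flags, which is why the triangles can be distributed freely; a GPU implementation would distribute them over thread blocks in the same way.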
When computing the swept volume of the voxel model, the voxel model 107 is first copied into the world coordinate system 106 of the object motion-transformation space R_d as the swept-volume voxel model at the initial instant 0 of the object's motion transformation. During copying, the coordinate information of every voxel in the voxel model is transformed into the world coordinate system 106.
At motion-transformation sampling instant t, where 1 ≤ t ≤ l, the polygon-facet vertex coordinate information D of the sphere 103 and the cylinder 104 and the facets P formed by the vertices are recomputed according to their current motion transformation H(t), yielding the transformed model 110 at sampling instant t.
The sphere 103 and the cylinder 104 transformed by H(t) at instant t are converted again into a three-dimensional voxel model 111 in their own local coordinate system space 105, and the voxel model 111 is copied into the world coordinate system 106 of the object motion-transformation space, as shown in Figure 4.
The conversion of the sphere 103 and the cylinder 104 into a three-dimensional voxel model in their own local coordinate system 105 is accelerated with parallel computing, including but not limited to multi-processor, multi-thread and GPU acceleration.
The three-dimensional voxel model 111 of the sampling instant t is merged, by a set-union operation, with the union 112 of the three-dimensional voxel models of all previous sampling instants 0 to t-1, generating the swept-volume voxel model 113 up to instant t, where 1 ≤ t ≤ l, as shown in Figure 6.
In another embodiment, the set-union operation between voxel models can also be realized with the dilation operation of mathematical morphology. Mathematical morphology is a mathematical tool for analyzing images on the basis of shape; its mathematical foundation and language are set theory.
The computation of the three-dimensional swept-volume voxel model is accelerated with parallel computing, including but not limited to multi-processor, multi-thread and GPU acceleration.
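A minimal sketch of the parallelized set union of Step 7 over two occupancy grids of the same resolution and origin, using the illustrative `VoxelGrid` above:

```cpp
// SV(t) = V_t ∪ SV(t-1): element-wise logical OR of the two occupancy buffers.
void unionInPlace(VoxelGrid& sv, const VoxelGrid& vt)
{
    #pragma omp parallel for
    for (long i = 0; i < (long)sv.occ.size(); ++i)
        sv.occ[i] |= vt.occ[i];
}
```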
The swept-volume voxel model 113 at instant t is displayed in three dimensions and saved. In one embodiment, the methods for displaying voxels in three dimensions include, but are not limited to, direct volume rendering and voxel-cube rendering. Volume rendering does not draw intermediate primitives such as points, faces and boundaries; it projects the three-dimensional voxel data directly onto the computer screen. Depending on the implementation, volume rendering can be based on ray casting or on 3D textures. The principle of ray casting is to emit a ray from the viewpoint through every point of the screen; as the ray passes through the volume data field in the scene, the RGB colors and opacity (alpha) values of the voxels lying on the ray are composited from front to back (or from back to front), finally producing a two-dimensional image. The 3D-texture-based method represents the three-dimensional voxel data as a series of textured slices and blends the textures in a specific direction to obtain the final three-dimensional image.
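The front-to-back compositing at the heart of the ray-casting variant can be sketched as follows; the `RGBA` samples are assumed to come from applying a transfer function to the voxels encountered along one ray, which is not shown here:

```cpp
#include <vector>

struct RGBA { float r, g, b, a; };

// Composite the samples met along one ray, ordered front to back.
RGBA compositeFrontToBack(const std::vector<RGBA>& samples)
{
    RGBA out{0.f, 0.f, 0.f, 0.f};
    for (const RGBA& s : samples) {
        float w = (1.f - out.a) * s.a;   // remaining transparency times sample opacity
        out.r += w * s.r;
        out.g += w * s.g;
        out.b += w * s.b;
        out.a += w;
        if (out.a > 0.99f) break;        // early ray termination once nearly opaque
    }
    return out;
}
```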
When displaying voxel data, level-of-detail (LOD) techniques can be added to improve display speed. Multi-resolution display allows very large voxel data to be handled interactively, i.e. a balance is sought between display efficiency and display quality. The basic idea of multi-resolution display is to adjust the level of detail of an object dynamically according to the distance between the current viewpoint and the observed object: a low-resolution model is chosen when the distance is large, and a high-resolution model when the distance is small, so that more detail can be observed.
The voxel model data can be saved as a data stream in any standard or custom voxel data format, for example vox, raw, kv6, binvox, HIPS, MIRA, VTK, etc. The binvox file is taken as an example to illustrate the general data format of an output voxel file.
The binvox format uses run-length encoding to reduce the file size. Each binvox file consists of an ASCII header followed by binary data.
Header:
#binvox 1
dim 128 128 128
translate -0.120158 -0.481158 -0.863158
scale 7.24632
data
The first line, #binvox 1, gives the version number. The next line, dim 128 128 128, gives the width, height and depth of the voxel data. The following two lines give the normalized translation and scaling information. The last line, data, marks the end of the header.
Each voxel in the binvox voxel data can be addressed by its (i, j, k) coordinates relative to the origin (0, 0, 0). Binvox updates the index with the y axis running fastest, then the z axis, and the x axis slowest.
Binary data:
The binary data following the ASCII header consist of a sequence of byte pairs. Each pair describes voxels with two bytes: the first byte indicates whether the voxel is empty or non-empty, and the second byte indicates how many of the following voxels share the same value as the current one, i.e. are likewise empty or non-empty.
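A hedged sketch of decoding that run-length-encoded payload into a flat occupancy array; the ASCII header is assumed to have been parsed already, and error handling is omitted:

```cpp
#include <cstddef>
#include <cstdint>
#include <istream>
#include <vector>

// Decode the binvox byte-pair stream into nx*ny*nz occupancy values.
// In this array the y index runs fastest, then z, then x, as in the binvox convention.
std::vector<std::uint8_t> readBinvoxData(std::istream& in, int nx, int ny, int nz)
{
    const std::size_t total = std::size_t(nx) * ny * nz;
    std::vector<std::uint8_t> occ;
    occ.reserve(total);
    char value, count;
    while (occ.size() < total && in.get(value) && in.get(count))
        occ.insert(occ.end(), (unsigned char)count, (std::uint8_t)value);   // a run of identical voxels
    return occ;
}
```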
Surface-voxel extraction can be performed on the swept-volume voxel model to produce the subset of surface points of the swept volume. The surface voxels can be extracted with any boundary-extraction or surface-contour-extraction algorithm, as long as the voxels enclosed in the interior of the voxel model are discarded and the voxels on the surface of the voxel model that are adjacent to empty voxels are retained. The surface voxels can be displayed with point rendering. Point rendering takes points as the basic drawing primitive and abandons the traditional triangle representation, recording only point information; unlike the input pixels of image-based techniques, these points carry additional geometric information, and the point sampling is independent of the viewpoint. In one embodiment, Stanford University's QSplat algorithm is used to render the surface voxels of the swept volume. QSplat stores the data in a hierarchical bounding-sphere tree; each node of the tree contains the position and radius of its sphere, the normal vector at that point, the width of the normal cone, and a color value. During rendering, the hierarchy is traversed recursively in depth-first order. For each intermediate node it is first determined, for visibility culling, whether the sphere is completely off-screen or completely back-facing. If at least some of the node's children are visible, the size of the node's projection on the screen is compared with a threshold: if it is larger than the threshold, the recursion continues downward; if it is smaller than the threshold, or a leaf node has been reached, a small area is drawn at the screen position and size determined by the node's sphere position and radius. The threshold can be determined dynamically from the time taken to draw the previous frame, so as to meet the user-specified frames-per-second requirement.
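A minimal sketch of the surface-voxel test described above: a voxel belongs to the surface subset if it is occupied and at least one of its six face neighbours is empty or outside the grid (uses the illustrative `VoxelGrid` above):

```cpp
#include <array>
#include <vector>

// Collect the (i, j, k) coordinates of the surface voxels of the swept-volume grid.
std::vector<std::array<int, 3>> extractSurfaceVoxels(const VoxelGrid& g)
{
    static const int N[6][3] = { {1,0,0}, {-1,0,0}, {0,1,0}, {0,-1,0}, {0,0,1}, {0,0,-1} };
    std::vector<std::array<int, 3>> surface;
    for (int k = 0; k < g.nz; ++k)
        for (int j = 0; j < g.ny; ++j)
            for (int i = 0; i < g.nx; ++i) {
                if (!g.occupied(i, j, k)) continue;           // only occupied voxels can be on the surface
                for (const auto& d : N) {
                    int ni = i + d[0], nj = j + d[1], nk = k + d[2];
                    bool outside = ni < 0 || nj < 0 || nk < 0 ||
                                   ni >= g.nx || nj >= g.ny || nk >= g.nz;
                    if (outside || !g.occupied(ni, nj, nk)) {  // an empty neighbour makes it a boundary voxel
                        surface.push_back({i, j, k});
                        break;
                    }
                }
            }
    return surface;
}
```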
The surface-voxel extraction of the swept-volume model is accelerated with parallel computing, including but not limited to multi-processor, multi-thread and GPU acceleration.
Three-dimensional surface reconstruction can be performed on the swept-volume voxel model 113 to produce a polygonal-facet model of the swept-volume surface, which is displayed in three dimensions and saved. The surface reconstruction can use any algorithm capable of generating polygonal facets from discrete point data, including but not limited to the marching cubes algorithm, the marching tetrahedra algorithm, the Bloomenthal Polygonizer, or other algorithms suitable for polygonization.
The three-dimensional reconstruction of the swept-volume surface is accelerated with parallel computing, including but not limited to multi-processor, multi-thread and GPU acceleration.
For every motion-transformation sampling instant t, 1 ≤ t ≤ l, the swept-volume voxel model 114 of the sphere 103 and the cylinder 104 at each sampling instant t, the subset of surface points of the swept volume and the polygonal-facet model of the swept-volume surface are generated and displayed dynamically in three dimensions in real time. The computation, display and saving of the surface-point subset and of the surface polygonal-facet model are not mandatory.
Experimental Results
The experiments use Visual C++ on the Microsoft Windows platform as the development platform for the swept-volume system. Visual C++ is one of the development tools for Windows systems; it not only provides the MFC class library but also makes compilation convenient and can effectively trace program execution and find defects. In addition, Visual C++ provides various graphics interfaces that facilitate secondary development, and code compiled with Visual C++ also has the advantage of high execution efficiency.
Autodesk's 3DS Max software was used to build the three-dimensional object models to be swept.
Coin3D was used for the three-dimensional visualization of the swept volume and its envelope surface.
The first swept-volume example demonstrates the space swept volume formed by the complex motion of a multi-link robotic arm, as shown in Figure 8.
The second swept-volume example simulates the space swept volume produced by a human body moving up a flight of stairs, as shown in Figure 9. Compared with other swept-volume computation methods, this embodiment can generate the corresponding swept volume for a more accurate skinned-skeleton human model rather than only for a simple puppet-like rigid linkage human model, so the computed swept volume is more accurate.
The descriptions of the embodiments here are given only as examples, and those skilled in the art can make various modifications. The above description, examples and data provide a complete account of the structure and use of exemplary embodiments of the invention. Although the embodiments of the invention have been described above with a certain degree of detail, or with reference to one or more individual embodiments, those skilled in the art can make many modifications to the disclosed embodiments without departing from the spirit or scope of the invention.
Claims (8)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510705103.2A CN105279788B (en) | 2015-10-27 | 2015-10-27 | Method for generating object space swept volume |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105279788A true CN105279788A (en) | 2016-01-27 |
CN105279788B CN105279788B (en) | 2017-10-17 |
Family
ID=55148738
Family Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510705103.2A Active CN105279788B (en) | 2015-10-27 | 2015-10-27 | Method for generating object space swept volume |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105279788B (en) |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5542036A (en) * | 1994-07-05 | 1996-07-30 | General Electric Company | Implicit modeling of swept volumes and swept surfaces |
CN104658012A (en) * | 2015-03-05 | 2015-05-27 | 第二炮兵工程设计研究院 | Motion capture method based on inertia and optical measurement fusion |
Non-Patent Citations (3)
Title |
---|
Andreas von Dziegielewski et al.: "High Precision Conservative Surface Mesh Generation for Swept Volumes", Journal of LaTeX Class Files *
J. Rossignac et al.: "Boundary of the volume swept by a free-form solid in screw motion", Computer-Aided Design *
Wu Fan: "Research on virtual carving tool tracking and swept-volume modeling based on machine vision", China Master's Theses Full-text Database, Engineering Science and Technology I *
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107092741A (en) * | 2016-02-02 | 2017-08-25 | 达索系统公司 | Designed using the B REP of face track |
CN107092741B (en) * | 2016-02-02 | 2023-07-18 | 达索系统公司 | B-REP design using face trajectories |
CN107358650A (en) * | 2017-07-17 | 2017-11-17 | 湖北理工学院 | A kind of mechanical structure basic body Morphogenesis method of restructural |
CN108022275A (en) * | 2017-12-12 | 2018-05-11 | 苏州蜗牛数字科技股份有限公司 | A kind of bone instantiation method and device in illusory engine |
CN110262389A (en) * | 2019-07-02 | 2019-09-20 | 广东三维家信息科技有限公司 | Simulate the method and device of gate process |
Also Published As
Publication number | Publication date |
---|---|
CN105279788B (en) | 2017-10-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109147048B (en) | Three-dimensional mesh reconstruction method by utilizing single-sheet colorful image | |
CN110288695B (en) | Surface reconstruction method of single-frame image 3D model based on deep learning | |
Jones et al. | 3D distance fields: A survey of techniques and applications | |
Wang et al. | Feature based 3D garment design through 2D sketches | |
CN110458939A (en) | Indoor scene modeling method based on perspective generation | |
CN106803267A (en) | Indoor scene three-dimensional rebuilding method based on Kinect | |
Dietrich et al. | Edge transformations for improving mesh quality of marching cubes | |
CN101404091A (en) | Three-dimensional human face reconstruction method and system based on two-step shape modeling | |
CN116071278A (en) | UAV aerial image synthesis method, system, computer equipment and storage medium | |
Jin et al. | General constrained deformations based on generalized metaballs | |
CN105279788B (en) | A kind of method for generating object floodlight scanning body | |
CN116543117A (en) | High-precision large-scene three-dimensional modeling method for unmanned aerial vehicle images | |
Hilton et al. | From 3d shape capture to animated models | |
Sun et al. | Human 3d avatar modeling with implicit neural representation: A brief survey | |
Miao et al. | SymmSketch: Creating symmetric 3D free-form shapes from 2D sketches | |
Vyatkin | Method of binary search for image elements of functionally defined objects using graphics processing units | |
CN116797696A (en) | Skeleton driving method and device for character animation | |
CN111275806A (en) | Parallelization real-time rendering system and method based on points | |
Garcia et al. | Interactive applications for sketch-based editable polycube map | |
CN118015192A (en) | A fast 3D model reconstruction method based on 2D images | |
US20230215094A1 (en) | Computer Graphics Interface Using Visual Indicator Representing Object Global Volume and/or Global Volume Changes and Method Therefore | |
Wang et al. | VoxNeRF: Bridging Voxel Representation and Neural Radiance Fields for Enhanced Indoor View Synthesis | |
Vyatkin et al. | Optimized finite element method using free-form volume patches for deformation of three-dimensional objects | |
CN106408644B (en) | Three-dimensional control cage construction method | |
CN107292942B (en) | A Linear Blend Shape Editing Method with Continuous Weight C2 |
Legal Events
Code | Title |
---|---|
C06 | Publication |
PB01 | Publication |
C10 | Entry into substantive examination |
SE01 | Entry into force of request for substantive examination |
GR01 | Patent grant |