
CN108352082B - Techniques for extruding 3D objects onto a plane - Google Patents

Info

Publication number
CN108352082B
Authority
CN
China
Prior art keywords: common plane, polygons, polygon, image data, support structure
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Application number: CN201680065062.8A
Other languages: Chinese (zh)
Other versions: CN108352082A (en)
Inventor
K·N·艾弗森
E·拉利希
G·M·格奥尔盖斯库
J·加库伯维克
M·库斯尼尔
V·西索勒克
T·萨西
Current Assignee: Microsoft Technology Licensing LLC (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Original Assignee: Microsoft Technology Licensing LLC
Application filed by Microsoft Technology Licensing LLC
Publication of CN108352082A
Application granted
Publication of CN108352082B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/10: Geometric effects
    • G06T15/40: Hidden part removal
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/10: Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • G06T17/20: Finite element generation, e.g. wire-frame surface description, tessellation
    • G06T2219/00: Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20: Indexing scheme for editing of 3D models
    • G06T2219/2004: Aligning objects, relative positioning of parts

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Generation (AREA)

Abstract

Techniques for generating three-dimensional (3D) objects from full or partial 3D data are described. Image data defining or partially defining a 3D object may be obtained. Using this data, a surface of the 3D object facing a common plane (e.g., a ground plane) and substantially parallel to it may be defined. One or more edges of the common plane-facing surface may be determined and extended to the common plane. A bottom surface, bounded by the one or more extended edges and parallel to the common plane, may be generated based on the common plane-facing surface. In some aspects, defining the common plane-facing surface may include segmenting the image data into a plurality of polygons, orienting at least one of the polygons to face the common plane, and discarding occluding polygons.

Description

Techniques for extruding 3D objects onto a plane
Technical Field
The present disclosure relates generally to three-dimensional (3D) modeling, and more particularly to extruding or extending 3D objects to a plane, for example, to provide support for the 3D objects and to enable 3D printing.
Background
Creating 3D image data, such as 3D objects, presents particular challenges, both in the complexity of modeling 3D objects and in the complexity of generating 3D objects that accurately depict real-life objects. In addition to these challenges, 3D data has recently been applied to 3D printing, which typically requires a full 3D object definition to produce a complete object or product. Current techniques for creating 3D objects or 3D image data include CAD/CAM software products, 3D scanning sensors, and the like. However, these and other 3D modeling techniques often require specific and comprehensive technical expertise, expensive software tools or chains of such tools, or even specialized hardware (such as sensors). These requirements present obstacles to the widespread use of 3D modeling.
Techniques currently exist for taking 3D data and repairing it so that the data represents a real body, with a distinct shell and interior, that can be 3D printed. However, these techniques may not output well-finished 3D objects. For example, a 3D export of map data may present a visually appealing surface, yet the underlying mesh may be uneven or incomplete. If that 3D export is printed, it may not stand on a platform, or it may tilt in a manner that does not represent the orientation of the original terrain. In another example, a shell or mask may be generated from 3D scan data of a human face. The mask may be made printable, but it may not be refined in a way that is attractive to the user.
Accordingly, there is a need for better and more intuitive techniques for modifying 3D data, for example, for printing and other applications.
Disclosure of Invention
Illustrative examples of the present disclosure include, but are not limited to, methods, systems, and various devices. In one aspect, techniques are provided for generating three-dimensional (3D) objects from full or partial 3D data. Image data defining or partially defining the 3D object may be obtained. Using this data, a surface of the 3D object facing a common plane (e.g., a ground plane) and substantially parallel to it may be defined. One or more edges of the common plane-facing surface may be determined and extended to the common plane. A bottom surface, bounded by the one or more extended edges and parallel to the common plane, may be generated based on the common plane-facing surface.
Other features of the system and method are described below. The features, functions, and advantages can be achieved independently in various examples or may be combined in yet other examples, further details of which can be seen with reference to the following description and drawings.
Drawings
Embodiments of the present disclosure will now be described more fully hereinafter with reference to the accompanying drawings, in which:
Fig. 1 depicts an example of extruding a partially defined, or non-manifold, 3D object representing map data onto a ground plane.
Fig. 2 depicts an example of extruding a fully defined, or manifold, 3D object representing a space shuttle onto the ground plane.
Fig. 3 depicts an example operational procedure for extruding image data associated with an object onto a ground plane.
Fig. 4 depicts another example of extruding a plurality of manifold 3D objects onto a ground plane.
Figs. 5A-5C depict an example of extruding a complex 3D object representing a portion of a city onto a ground plane, and a user interface for interacting with the complex 3D object.
Fig. 6 depicts an example of a more detailed procedure for defining a surface of an object to be extruded onto a plane.
Fig. 7 illustrates an example diagram and process for determining which polygons of image data to use when extruding an object onto a plane.
Fig. 8 depicts an example of a more detailed procedure for building a support for image data representing an object.
Fig. 9 is an example general-purpose computing environment in which the techniques described herein may be implemented.
Detailed Description
Systems and techniques are described herein for extruding or extending a 3D object toward a plane in 3D space, for example, to build a base or support for the 3D object. In one aspect, the described techniques may be used to extrude a fully defined and closed 3D object, referred to herein as a manifold object or manifold mesh, for example, using a 3D modeling application or tool. In another aspect, the described techniques may be used to enclose and extrude partially defined 3D objects, referred to herein as non-manifold objects or non-manifold meshes. The described techniques may provide an easy-to-use, general-purpose tool for manipulating 3D image data and objects, and for generating 3D objects that may be printed and displayed on a flat surface, for example. The described techniques may be implemented as a standalone application or program, as a cloud service, or as part of an existing or separate 3D modeling application, program, platform, or the like.
In one example, image data may be obtained that includes, for example, map data, object data, one or more bitmaps, or other image data from various sources. The image data may comprise full 3D image data that defines one or more objects via enclosed volumes. The image data may additionally or alternatively include partially defined 3D image data, such that only a portion of one or more 3D objects is defined (e.g., map data retrieved from a mapping or routing application).
At least one surface, or part of a surface, defined in the image data may be used and/or manipulated to define a surface facing a common plane, such as a ground plane. That surface may then be extended or extruded to the common plane. The extrusion may include defining one or more edges of the common plane-facing surface and extending those edges to the common plane. A bottom or cap may then be generated, e.g., based on the common plane-facing surface, and connected to the extended edges. Any manipulation of the original surface may then be reversed, e.g., so that the volume representing the object originally defined in the image data is enclosed in combination with an extruded volume connecting at least one surface of the object to the common plane. In some aspects, the extruded volume may be defined separately, e.g., to enable further independent manipulation of the object and of the extruded portion or support.
In one example, the obtained image data may be manipulated or modified (e.g., flipped) to orient one or more objects contained in the image data toward a common plane. In some aspects, this may include segmenting the image data into a plurality of polygons or shapes, and flipping one or more of the polygons so that they face the common plane. In some aspects, such as where the image data defines a manifold 3D object, this manipulation may be omitted; the object may nonetheless still be divided into polygons. In both cases, polygons facing the common plane may be identified, and edges around those polygons may be extruded to partially define the extruded portion of the object toward the common plane. A bottom or flat surface may then be added to the extruded edges opposite the object to completely enclose the volume of the extruded portion along the common plane. Any previously manipulated (flipped) polygons may then be restored to their original orientation to recover the objects contained within the image data. The resulting image data, defining at least one extruded 3D object, may then be output, for example, displayed in a user interface of a 3D modeling or builder application, or prepared and/or sent to a 3D printer for 3D printing.
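For illustration, the classification of common plane-facing polygons can be sketched in a few lines of Python. This is a minimal sketch only: the numpy-based mesh layout (a vertex array plus an index array of triangles) and the function names are assumptions for illustration, not part of the patent.

```python
import numpy as np

def triangle_normals(vertices, triangles):
    """Unit normal for each triangle of an indexed mesh."""
    a, b, c = (vertices[triangles[:, i]] for i in range(3))
    n = np.cross(b - a, c - a)
    return n / np.linalg.norm(n, axis=1, keepdims=True)

def common_plane_facing(vertices, triangles, eps=1e-9):
    """Indices of triangles whose normals point toward a ground plane at z = 0;
    upward-facing and strictly vertical polygons are excluded."""
    return np.where(triangle_normals(vertices, triangles)[:, 2] < -eps)[0]

# Example: a tetrahedron sitting above the plane; only its base faces down.
V = np.array([[0, 0, 1], [1, 0, 1], [0, 1, 1], [0, 0, 2]], dtype=float)
T = np.array([[0, 2, 1], [0, 1, 3], [1, 2, 3], [2, 0, 3]])
print(common_plane_facing(V, T))  # -> [0]
```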
In one aspect, the described techniques may be used to generate separate objects or meshes for the extruded regions, to which different properties (e.g., colors, textures, materials) may be assigned, such as support material for 3D printing. The mesh of the extruded region may be modified in the 3D modeling application and saved, stored, or shared independently of the 3D object. In this way, the support structure mesh may be defined before the 3D object is sent to the 3D printer. Current techniques, by contrast, may provide the user with few, if any, customization options when designing and printing the support structure. This implementation may be particularly useful where the 3D object or content creator and the content consumer are different actors, or where tasks are distributed among different machines, programs, applications, and so on.
In another example, image data representing a human face may be obtained from an image file or via a camera or image sensor that scans the face. The described techniques may be used to complete or fill in the image data representing the face, extrude the back side of the face image data toward a plane, and also extrude the image data downward toward another plane to form a bust. In one example, the face surface may be mirrored about the plane with which the face is aligned and used to create the back surface of the bust. The face surface may then be extruded to meet the back surface. A portion or all of the resulting downward-facing surface (e.g., orthogonal to the face surface) may then be defined (e.g., as a circle or ellipse representing the neck, whose dimensions may be set relative to the size of the face surface) and extruded down to a flat surface to provide a stand or support for the bust. In this way, a high-quality, 3D-printable bust can be created from the image data with minimal manual input or manipulation.
In another example, the image data may include one or more charts, graphs, or other graphical representations of data. The described techniques may be used to generate a 3D-printable model of a chart, for example, by extruding the visual elements of the chart (e.g., columns, tubes, lines with thickness, etc.) to one or more common planes. In some aspects, the extrusion process may include extruding the image data toward a rear surface while creating the bottom surface via the flat or planar edges used to create the backward extrusion.
In another example, the described techniques may be applied to a 3D model to extrude the 3D object between two or more identified portions of the 3D object in order to connect those portions. This example may include, or be similar to, applying a fill function to the 3D image data.
It should be appreciated that the described techniques may be applied to a variety of 3D image modeling and manipulation tasks other than extruding 3D objects or defining supports for them.
In some aspects, the described techniques may provide various advantages, including reducing the number of steps a user needs to take (e.g., via a user interface) to define and configure a support structure for a 3D object. The described techniques may also provide support structures that are compatible with multiple different 3D printers, drivers/slicers, etc. by defining support structures that are separate from the 3D object. In some aspects, the described techniques may provide a more efficient (e.g., processor load may be reduced and/or memory resources may be saved) process for defining, configuring, and printing a 3D support structure.
Fig. 1 illustrates an example diagram and process 100 for extruding 3D image data 105, representing topographical features, onto a ground plane 120. In the illustrated example, the 3D image data 105 may be non-manifold, such that the data includes at least one non-manifold surface or object (e.g., the image data does not completely enclose a spatial volume). The image data 105 may represent data taken from a map or navigation application, such as Bing Maps. The image data 105 may define various features, such as hills 110, valleys 115, or any of a variety of other terrain features, objects, buildings, people, and so forth. The terrain image data 105 may be extruded onto the ground plane 120 according to techniques described in more detail below. The extrusion process may include manipulating the surfaces defining the terrain in image data 105 to face the ground plane 120, defining edges of the downward-facing surface, and extruding the edges to the ground plane (indicated via arrows 125 and 160). The downward-facing surface may then be translated to the ground plane 120 to define a cap or bottom surface and connected to the extruded edges, thereby enclosing the volume between the surface 105 and the ground plane 120.
In one aspect, the 3D image data 105 may be combined with the extruded portion to form a single object enclosing a single volume. In this scenario, the extruded 3D image data 105 may be 3D printed, for example, so that it may stand securely on a flat horizontal surface. In another aspect, the extruded portion may be separately defined as an object or image data separate from the image data 105 that fits or aligns with the bottom surface of the 3D image data 105. In such a scenario, the extruded portion may be manipulated, modified, configured, etc., independently of the 3D image data 105. This may enable greater customization of the 3D image data 105 and the extruded portion, for example, for different 3D printers, applications, or even different actors.
Fig. 2 illustrates an example user interface 200 displaying an extrusion of 3D image data 210 representing a space shuttle in accordance with the described technology. The 3D image data representing the space shuttle 210 may include manifold image data comprising a fully defined top or visible surface 235 and a fully defined bottom surface 215, which together enclose the volume of the space shuttle 210. A first screen or view 205 of the user interface 200 may display the space shuttle 210 above a plane or ground plane 220. In accordance with the described techniques, the space shuttle 210 may be extruded via operation 225 to produce extruded 3D image data 230. The extruded 3D image data 230 may include the top surface 235 of the space shuttle 210 and an extruded portion 240 spanning from the bottom surface 215 of the space shuttle 210 to a bottom surface 245 positioned on the ground plane 220. In some aspects, the bottom surface may be positioned at any height parallel to the ground plane 220.
In some aspects, the extruded portion 240 may be formed or modeled continuously with the space shuttle 210. In other aspects, the extruded portion 240 may be modeled and defined as a separate volume relative to the space shuttle 210. Forming the extruded portion 240 as a separate volume may enable greater customization and configuration of the extruded portion 240 relative to the space shuttle 210. In some cases, the extruded portion 240 may be associated with different materials, colors, textures, or densities, for example, to conserve resources for 3D printing (e.g., using less material for the extruded portion 240 if it is to be discarded). In another example, the extruded portion 240 may be associated with different materials or colors to form an aesthetically pleasing support or base for 3D printing the space shuttle 210.
Fig. 3 illustrates an example process 300 for extruding a 3D object to a plane (such as a ground plane) in 3D space. In the following description of process 300, it should be appreciated that directions such as top, bottom, etc. are used for ease of reference. The described techniques may be implemented in any plane and in any orientation to achieve a similar effect. Additionally, process 300 may be implemented for both manifold and non-manifold surfaces or both manifold and non-manifold 3D objects. Differences in the processes of these different implementations will be noted in the following description. In one aspect, a non-manifold case may be considered a manifold case where some additional pre-processing and post-processing steps are performed to convert the non-manifold object into a manifold object.
The process 300 may begin at operation 302, where image data corresponding to at least one 3D or partial 3D object may be obtained. The image data may include manifold and/or non-manifold objects or surfaces, such as image data 105 and 210, respectively. The image data may be obtained from a camera in communication with a device executing the 3D modeling application, from one or more files or data stores local or remote to the executing device, from an executing application or program (e.g., a map or route-finding application) that provides visual or graphical components (e.g., including various forms of media), and so forth. Next, at operation 304, the image data may be segmented into polygons or other segments, such as other shapes of various sizes, for further processing and manipulation. The image data may be segmented or otherwise divided into a plurality of polygons or shapes according to any of a variety of techniques, such as based on one or more features of the image data, or on changes or variations within the image data (such as changes in color, texture, or shape). In one example, the image data may include map data that may be segmented based on identified features, such as buildings, streets (e.g., into city blocks), water, land, rivers, hills, trees, vegetation, and so forth.
In the non-manifold case, at operation 306, the orientation of one or more of the polygons or segments defining the image data may be flipped, modified, rotated, etc., such that each polygon faces a common plane, such as down toward the ground plane. To enable the original surface to be restored later in process 300, each polygon that is flipped may be identified and labeled or marked at operation 308.
Next, at operation 310, the polygons forming the common plane-facing surface of the image data/object may be determined. In one example, the common plane-facing surface may be a bottom surface or an invisible surface of the object. In some aspects, operation 310 may further include operation 312, in which it is determined which polygons facing the common plane are not occluded (even partially) by another polygon from the perspective of the common plane. Next, at operation 314, polygons not identified as common plane-facing polygons may be associated with the final 3D object to be printed and output (e.g., to a user interface of a 3D modeling application for display). A more detailed example of operations 310-314 is described below with reference to Figs. 6 and 7.
At operation 316, a closed contour or edge of the common plane-facing surface from operation 310 may be determined. The contour or edge may take any of a variety of shapes with different dimensions. In one example, this surface may correspond to a bottom surface of the 3D object. In some aspects, this surface may be incomplete, or may comprise a plurality of different surfaces. The closed contour/edge may then be extruded to the common plane at operation 318. In some aspects, operation 318 may further include, at operation 320, topologically splitting vertices shared at one end in order to keep the output mesh or object manifold. Operation 318 is described in more detail below with reference to Fig. 4. Operation 318 may further include outputting vertical walls from the edges at operation 322. Operation 322 may include defining each vertical edge wall with two triangles (e.g., defining a rectangle for the edge wall).
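A minimal sketch of operations 316-322 follows, under the same assumed numpy mesh layout as above (helper names are illustrative). Edges used by exactly one triangle of the common plane-facing patch form its closed contour, and each contour edge is extruded into a vertical wall of two triangles:

```python
import numpy as np
from collections import Counter

def boundary_edges(triangles):
    """Closed contour of a surface patch: edges used by exactly one triangle,
    returned with their original winding (operation 316)."""
    count = Counter(tuple(sorted((t[i], t[(i + 1) % 3])))
                    for t in triangles for i in range(3))
    return [(t[i], t[(i + 1) % 3]) for t in triangles for i in range(3)
            if count[tuple(sorted((t[i], t[(i + 1) % 3])))] == 1]

def extrude_edges(vertices, edges, plane_z=0.0):
    """Extend each contour edge (a, b) to the plane z = plane_z as a vertical
    rectangular wall built from two triangles (operations 318 and 322)."""
    verts = [tuple(v) for v in vertices]
    dropped = {}   # original vertex index -> index of its copy on the plane
    walls = []
    for a, b in edges:
        for v in (a, b):
            if v not in dropped:
                x, y, _ = verts[v]
                dropped[v] = len(verts)
                verts.append((x, y, plane_z))
        a2, b2 = dropped[a], dropped[b]
        walls += [(a, b, b2), (a, b2, a2)]   # two triangles per edge wall
    return np.array(verts), np.array(walls)
```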
Next, a flat polygonal cap parallel to the common plane may be formed from the polygons determined at operation 310 and output at operation 324. Operation 324 may topologically close the output mesh from the bottom, or common plane, side. In some aspects, operation 324 may further include, at operation 326, retaining the original tessellation or geometric pattern of the polygons that form the original or top surface when forming the flat polygonal cap. Additionally, operation 324 may further include, at operation 328, flattening the coordinates of the common plane-facing surface (such as vertex coordinates) to the level of the common plane to form the polygonal cap or bottom surface. In some aspects, edges and vertices that have been extruded may be prioritized over original (unmodified) edges and vertices.
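Operations 324-328 can be sketched as follows, again under the assumed numpy layout (names are illustrative). The cap reuses the tessellation of the common plane-facing surface and simply flattens the copied vertices to the plane level:

```python
import numpy as np

def make_cap(vertices, down_triangles, plane_z=0.0):
    """Form the flat polygonal cap: copy the common plane-facing triangles,
    keep their tessellation and winding (operation 326), and flatten the
    copied vertices to the plane level (operation 328)."""
    verts = [tuple(v) for v in vertices]
    remap = {}
    cap = []
    for tri in down_triangles:
        for v in tri:
            if v not in remap:
                x, y, _ = verts[v]
                remap[v] = len(verts)
                verts.append((x, y, plane_z))
        cap.append(tuple(remap[v] for v in tri))
    return np.array(verts), np.array(cap)
```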
In non-manifold applications, the orientation of the polygons marked at operation 310 as facing the bottom or common plane may be flipped and output at operation 330, e.g., without any flattening. Also in the non-manifold case, the orientation of polygons that were flipped at operation 306 and not marked as facing the bottom or common plane may be restored at operation 332. In some cases, polygons that have already been extruded may be excluded from operation 332. At the end of operation 332, the fully defined 3D object (with at least one surface extruded to the common plane such that the extruded portion encloses a volume) may be output or rendered, e.g., in a user interface of a 3D modeling application, sent to a 3D printer, and so on.
Fig. 4 illustrates an example user interface, such as user interface 200 of Fig. 2, displaying an extrusion of 3D image data representing two different objects 410 and 415 in accordance with the described techniques. Objects 410 and 415 may each comprise a rectangular surface (an upward-facing surface in the illustrated orientation), sharing a corner vertex 470 with each other. Objects 410 and 415 may each be defined by a rectangular pyramidal base portion or surface 420 and 425, respectively. Objects 410 and 415 may be displayed in a first view 405 of the user interface, such as provided by a 3D modeling application.
Objects 410 and 415 may be transformed via the process 300 described above to produce objects 440 and 445, displayed in a second view 435. Objects 440 and 445 may each define a rectangular prism (e.g., a cube) extending from the top surfaces of objects 410 and 415 to surfaces 460 and 465 located on the ground plane 430. In some aspects, the bottom surfaces 460, 465 may be located above or below the ground plane 430, parallel to it. According to the process 300, each edge of the objects 410, 415 may be extruded to the ground plane 430, for example, by forming a vertically oriented rectangular wall via the generation of two triangular portions 450, 455. In extruding objects 410 and 415 to the ground plane 430, the shared vertex 470 between the two objects may be topologically split, e.g., to enable a separate, manifold definition for each extruded object 440, 445. By splitting the vertex 470 such that each object 410, 415 is separate and defined by its own corner at point 470, each extruded object 440, 445 may enclose a separate volume, defined in the illustrated example by a rectangular prism.
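The topological split of vertex 470 can be sketched as below (illustrative only; assumes the numpy index-array conventions used earlier). Any vertex index shared by both patches is duplicated so that each extruded object closes its own volume:

```python
import numpy as np

def split_shared_vertices(vertices, patch_a, patch_b):
    """Topological split (operation 320): give patch_b its own copies of any
    vertices it shares with patch_a, so extruding the two patches produces
    two separate manifold volumes rather than one non-manifold mesh."""
    verts = [tuple(v) for v in vertices]
    shared = sorted(set(patch_a.ravel()) & set(patch_b.ravel()))
    remap = {v: len(verts) + i for i, v in enumerate(shared)}
    verts.extend(verts[v] for v in shared)
    patch_b = np.array([[remap.get(v, v) for v in tri] for tri in patch_b])
    return np.array(verts), patch_b
```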
Fig. 5A illustrates an example 500a of more complex image data 505 extruded onto a common or ground plane 510. The complex image data 505 may represent a portion of a city and may contain a large number of individual objects (manifold, non-manifold, or a combination thereof). By extruding the image data 505 down to the ground plane 510 according to the process 300, the orientation, scaling, etc. of the image data 505 may be maintained and/or enhanced to enable 3D printing of the extruded image data 505. In some aspects, a separate support structure 515 comprising the extruded portions may be separately defined and independently configurable for 3D printing. In the illustrated example, 3D or partial 3D map data may be transformed into a full 3D model, which may be 3D printed or rendered in a 3D modeling application for various purposes. Also in the illustrated example, user interface controls 520 may provide controls for changing the perspective of the image data 505 (pan, rotate, zoom, etc.), for modifying the image data 505 itself (via any of various image editing and 3D modeling tools), and for other tools or selections.
Other variations of the user interface controls 520 are illustrated in user interface 500b of Figs. 5B and 5C, which provides controls for viewing, modifying, and previewing 3D objects. The user interface 500b may include various tools 530 for editing 3D objects, as well as other selectable menu items. Tools 530 may include selectable options for embossing 532, splitting 534, simplifying 536, smoothing 538, extruding down 540 (as described above), merging two or more objects 542, intersecting two or more objects 544, and subtracting portions from one or more objects. In some aspects, a preview tool 550 may also be included in the user interface 500b to provide a preview of the extruded portion or support structure of the 3D object. The preview tool 550 may include various options for previewing the extrusion or support structure of the 3D object, including an item selection 552 and a material selection 554, which may list selectable outlines or visual indications 556 of preview elements, as well as other options not illustrated.
In the example illustrated in Fig. 5B, an extrusion 560 of a 3D object 565 (which comprises a rectangular prism with a hole through one side) may be previewed, as indicated by the structure drawn in dashed lines. The preview tool 550 may enable a designer or other user of the user interface 500b to better visualize and/or modify the extrusion or support structure 560 of the 3D object 565 while maintaining a clearer visualization of the 3D object 565 itself.
Fig. 5C illustrates the same 3D object 565 with an extruded or support portion 570 corresponding to the previewed support structure 560, but drawn with solid lines.
Fig. 6 illustrates an example process 600 for determining the bottom-facing or common plane-facing surface of an object, e.g., for extruding the object to a common plane. In the illustrated example, the process 600 may correspond to one or more of the operations 310, 312, and 314 of the process 300 described above with reference to Fig. 3. Accordingly, the process 600 may begin on the assumption that image data has been obtained and segmented into polygons (e.g., operations 302 and 304 of process 300). In a non-manifold example, the process 600 may begin after one or more polygons have been flipped to orient them toward the common plane and marked (e.g., operations 306 and 308 of process 300).
The process 600 may begin at operation 602, where each input polygon may be identified or marked to indicate that the polygon is not discarded. Next, at operation 604, for each polygon, a set of candidate polygons that may occlude it may be determined, e.g., using a bounding volume. For example, operation 604 may include creating a bounding volume corresponding to the region of a first test polygon and extending that region toward the common plane. A subset of surrounding polygons may be selected based on proximity to the test polygon (e.g., polygons within a certain distance of the test polygon, such as a distance chosen as a percentage of at least one dimension of the polygon, polygons in contact with or in close proximity to the test polygon, or polygons selected based on other parameters, including their angle relative to the test polygon). Using any of a variety of techniques, the candidate set of possibly occluding polygons may be determined based on which polygons are likely to intersect the bounding volume. At operation 606, it may be determined whether a polygon faces away from the common plane (up), or whether the polygon has a strictly vertical orientation such that it is orthogonal to the common plane. If so, the polygon may be marked for discard at operation 608. If the determination at operation 606 is no, then at operation 610, for each remaining polygon, a clipping volume may be created comprising a plurality of planes (e.g., three planes in the case of a triangular polygon), each extending to infinity or at least beyond the common plane, together with at least one cap or surface orthogonal to the extended planes. The edge planes are created so that they are perpendicular or orthogonal to the common plane (or, in some cases, orthogonal with respect to the test polygon) and contain, or extend from, each edge of the polygon. An additional plane may be created from the test polygon itself and may represent the bottom "cap" of the clipping volume. In some aspects, the polygons for which operation 610 is performed may include non-vertical polygons previously marked as occluded, since such polygons may still have unoccluded portions that occlude other polygons.
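Operation 604's candidate filtering can be sketched with axis-aligned bounding boxes. The choice of bounding volume is an assumption here (the patent leaves it open); because each polygon's test volume extends vertically, only the x/y extents need to overlap:

```python
import numpy as np

def candidate_occluders(vertices, triangles, test_idx):
    """Cheap candidate filtering for operation 604 using per-triangle
    axis-aligned bounding boxes; returns indices of triangles whose x/y
    extents overlap those of the test triangle."""
    corners = vertices[triangles]            # shape (T, 3, 3)
    lo = corners.min(axis=1)[:, :2]          # per-triangle xy minima
    hi = corners.max(axis=1)[:, :2]          # per-triangle xy maxima
    overlap = (np.all(lo <= hi[test_idx], axis=1)
               & np.all(hi >= lo[test_idx], axis=1))
    overlap[test_idx] = False                # a polygon cannot occlude itself
    return np.where(overlap)[0]
```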
Then, at operation 612, each set of candidate polygons determined at operations 604 and 606 may be compared against the clipping volumes created at operation 610. In some aspects, the number of comparisons may be reduced, for example, by comparing each clipping volume only against the respective set of candidate polygons selected as possibly occluding the test polygon corresponding to that clipping volume. In one example, the set of candidate occluding polygons selected at operation 604 may be kept small, such as by tightening the criteria under which a polygon is identified as possibly occluding the test polygon. If it is determined at operation 614 that a polygon is at least partially within the clipping volume of the test polygon, that polygon may be discarded at operation 608. In some aspects, polygons determined to be partially within the clipping volume may be split such that the newly defined edges of the polygons follow, or align with, the clipping volume. In such a scenario, the portion of the split polygon outside the clipping volume may be retained and the interior portion discarded. At operation 616, each polygon that lies at least partially outside the clipping volumes, or that does not intersect them, may be identified and output, for example, to define the bottom surface or common plane-facing surface of the object/image data. This surface may then be used, for example via process 300, to extrude the object/image data to the common plane, and may be used to define the cap that closes the extruded portion of the image data at the common plane.
Fig. 7 illustrates an example diagram 700 of a comparison between a polygon 720 and a clipping volume 705 generated for a test polygon 715, e.g., corresponding to operation 612 described above with reference to Fig. 6. As illustrated, the clipping volume 705 may include three vertical walls extending from the edges of the test polygon 715 (illustrated as a triangle), with a cap 710 having the same area as the test polygon 715 at the opposite end of the vertical walls. In one example, the clipping volume 705 may be generated via operation 610. Polygon 720 may be identified, e.g., via operations 604 and 606 described above, as part or all of a set of candidate polygons that may occlude polygon 715. In one aspect, the polygon 720 (or, in one example, its projection or extension) may be compared to the clipping volume 705 and determined to intersect, or at least partially overlap, the clipping volume 705. As illustrated, a portion 725 of the polygon 720 may be located within the clipping volume 705, as represented by block 730, which shows the intersection of the rear vertical wall of the clipping volume 705 with the portion 725. In one aspect, the cap 710 may be reserved as a candidate for, for example, the bottom or common plane cap used in operations 324 or 328 of the process 300 described above with reference to Fig. 3.
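One plausible reading of operations 610-614 and diagram 700 can be sketched in the same assumed mesh conventions. Note the hedging: the translated text leaves the prism's exact orientation ambiguous, so this sketch takes the interpretation that the prism rises from the test triangle away from the common plane, and that any candidate reaching inside it is hidden from the common plane by the test triangle; it is an interpretation, not the patent's definitive construction.

```python
import numpy as np

UP = np.array([0.0, 0.0, 1.0])

def clipping_volume(tri):
    """Half-spaces (n, p0) for a test triangle's prism (operation 610): one
    vertical plane through each edge, plus the triangle's own plane as the
    cap. A point x is inside when dot(n, x - p0) <= 0 for all half-spaces."""
    planes = []
    for i in range(3):
        p, q, r = tri[i], tri[(i + 1) % 3], tri[(i + 2) % 3]
        n = np.cross(q - p, UP)        # vertical wall through edge p-q
        if np.dot(n, r - p) > 0:       # orient the wall normal outward
            n = -n
        planes.append((n, p))
    cap = np.cross(tri[1] - tri[0], tri[2] - tri[0])
    if cap[2] > 0:                     # cap normal points out of the prism,
        cap = -cap                     # so the volume extends upward from it
    planes.append((cap, tri[0]))
    return planes

def occluded_by_test(candidate, planes, eps=1e-9):
    """Simplified vertex-sampling version of operations 612-614; an exact
    version would clip the candidate against the prism and keep the part
    lying outside it, as described for partially occluded polygons."""
    return any(all(np.dot(n, x - p0) < -eps for n, p0 in planes)
               for x in candidate)

# A small triangle directly above the test triangle is flagged: it is hidden
# from the common plane below and is excluded from the plane-facing surface.
test = np.array([[0, 0, 1], [2, 0, 1], [0, 2, 1]], dtype=float)
above = np.array([[0.5, 0.5, 2], [1.0, 0.5, 2], [0.5, 1.0, 2]], dtype=float)
print(occluded_by_test(above, clipping_volume(test)))  # -> True
```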
It should be appreciated that the example diagram 700, the clipping volume 705, the test polygon 715, and the candidate polygon 720 are given as examples only. The described techniques contemplate various shapes for the polygons 715 and 720, various different volumes and processes for constructing the clipping volume 705, and so on.
Fig. 8 illustrates an example process 800 for creating an extruded support structure for a 3D object, separate from the 3D object, in accordance with the described techniques. Process 800 may be used to generate a separate mesh that fits, or directly aligns with, at least one surface of a 3D object and that may be customized separately from the 3D object, e.g., by assigning one or more different attributes to the extruded portion, such as color, texture, material, or density, e.g., for 3D printing. The process 800 may be particularly useful, for example, when different applications, devices, or users are to perform the 3D object creation and 3D printing tasks.
In one aspect, process 800 may be applied to non-manifold image data or objects. In another aspect, process 800 may be implemented when the image data or object is manifold, i.e., fully defines a volume. In the manifold scenario, operations 806 and 808 may be omitted from process 800. The output of process 800 may be a fully defined support structure that matches or corresponds to the common plane-facing or bottom surface of the 3D object/image data and is separate from the underlying 3D object or image data. Process 800 may share one or more operations with process 300 described above with reference to Fig. 3.
The process 800 may begin at operation 802, where image data (such as 3D or partial 3D image data) corresponding to at least one object may be obtained. At operation 804, the image data may be segmented or otherwise divided into a plurality of polygons or shapes according to any of a variety of techniques, for example, based on one or more features of the image data, such as color, texture, or shapes identified within the image data.
In the non-manifold case, at operation 806, the orientation of one or more of the polygons or segments defining the image data may be flipped, modified, rotated, etc., such that each polygon faces a common plane, such as down toward the ground plane. To enable the original surface to be restored later in process 800, each polygon that is flipped may be identified and labeled or marked at operation 808.
Next, at operation 810, the polygons that originally faced downward or toward the common plane (e.g., those not flipped at operation 806) may be mirrored, for example, such that the normals of each or all of these polygons are inverted. The result of operation 810 may be used to define the top surface of the extruded portion output by process 800, i.e., the surface that interfaces with the 3D object, so that the extruded portion directly aligns with the downward or common plane-facing surface of the 3D object.
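In an indexed triangle mesh, this mirroring amounts to reversing each triangle's winding so its normal flips (a sketch under the same assumed conventions as the earlier fragments):

```python
import numpy as np

def mirror_orientation(triangles):
    """Operation 810 as winding reversal: flipping each triangle's vertex
    order inverts its normal, so the object's common plane-facing surface
    becomes the upward-facing top of the separate support mesh."""
    return np.asarray(triangles)[:, ::-1].copy()

# Example: a triangle wound (0, 1, 2) becomes (2, 1, 0).
print(mirror_orientation(np.array([[0, 1, 2]])))  # -> [[2 1 0]]
```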
Next, at operation 812, the polygons forming the common plane-facing surface of the image data/object may be determined. In one example, the common plane-facing surface may be a bottom surface or an invisible surface of the object. In some aspects, operation 812 may further include determining, at operation 814, which polygons facing the common plane are not occluded (even partially) by another polygon from the perspective of the common plane, as described in more detail above with reference to Figs. 6 and 7. In some manifold cases, operation 814 may be eliminated because the manifold object may already define a bottom surface, or surface facing the common plane.
At operation 816, a closed contour or edge of the common plane-facing surface from operation 812 may be determined. The contour or edge may take any of a variety of shapes with different dimensions. In one example, the surface defined by the closed contour may correspond to a bottom surface of the 3D object. In some aspects, this surface may be incomplete, or may comprise a plurality of different surfaces. The closed contour/edge may then be extruded to the common plane at operation 818. In some aspects, operation 818 may further include topologically splitting vertices shared at one end, at operation 820, to keep the output mesh or object manifold, and outputting vertical walls from the edges at operation 822.
Next, a flat polygonal cap parallel to the common plane may be formed from the polygons determined at operation 812 and output at operation 824. In one aspect, operation 824 may be performed on the mirrored polygons produced at operation 810. Operation 824 may topologically close the output mesh from the bottom, or common plane, side. In some aspects, operation 824 may further include preserving, at operation 826, the original tessellation or geometric pattern of the polygons that form the original surface when forming the flat polygonal cap, and may also include flattening, at operation 828, the vertex coordinates of the common plane-facing surface to the level of the common plane to form the polygonal cap or bottom surface. In some aspects, edges and vertices that have been extruded may be prioritized over (used in place of) the original, unmodified edges and vertices.
At operation 830, the polygons mirrored at operation 810 may be output to form the top surface, or 3D object-facing surface, of the extruded support structure. In a non-manifold example, the orientation of the common plane-facing polygons marked at operation 808 may be set opposite the common plane surface to at least partially define the top surface of the support structure, or the surface facing the 3D object. The extruded support structure, defining a volume separate from the 3D object, may then be output at operation 832.
In some non-manifold examples, a bottom surface may be generated via process 800 for an otherwise undefined surface of the object, for example, to enable a supporting interface between the 3D object and the support structure. In some cases, this surface may be defined as a mirror image of the top surface of the 3D object, as a flat surface parallel to the common plane, or as a predominantly flat surface with grooves, recesses, pegs, or other structures that provide a more secure mating surface between the support structure and the 3D object.
In one example, and in contrast to known practices, the support structure mesh generated by process 800 may be edited (e.g., via a number of operations including intersecting, subtracting, moving, and resizing) before being sent to a 3D printer. Process 800 may enable a user to share 3D objects and extruded support structures with other users who may implement or use different 3D printer hardware and/or different 3D editing and modeling software.
In one example, depending on the characteristics of the material in use and the hardware capabilities, a secondary component (e.g., a slicer/driver) may convert a support structure (also referred to herein as a support envelope) into a printable support mesh. Traditional or existing 3D printing software typically generates support struts automatically, such as after a user has specified a print target, or according to an automatic (non-editable) configuration. In some cases, existing printing software may allow only a limited amount of configuration, such as letting a user delete or add struts after the support structure has been generated by a setup process. In contrast, process 800 enables a content creator to define a 3D support material envelope and to fine-tune and configure that envelope for a particular 3D model and application, without relying on assumptions about the hardware and without the constraints imposed by a slicer/driver through automatic, non-editable generation of support structures.
In some aspects, for single-extruder FDM-based printers (e.g., where the support material is the same as the object material), the top triangles/polygons defining the top surface of the support structure, or the surface interfacing with the 3D object, may be shifted downward by a z-axis (vertical) gap or offset. This may allow the material (e.g., plastic) used to print the support structure to solidify with reduced adhesion to the 3D object, making the support structure easy to remove. For 3D printers that use dissolvable materials to print the support structure, it is important to maintain a zero z-axis gap, to reduce deformation of the supported top of the object and to enhance adhesion to the object. For printers that do not require a support structure to print the 3D object (e.g., binder jetting), the support structure may simply be discarded by the driver without being printed. In many cases, the support structure or mesh may be transformed from solid to a zig-zag or other partially solid pattern (reducing material density) to reduce the material and energy required to print the support mesh of the 3D object.
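The z-gap adjustment can be sketched as follows; the default of 0.2 is an illustrative, layer-height-scale value in model units, an assumption rather than a figure from the patent:

```python
import numpy as np

def offset_support_top(vertices, top_vertex_ids, gap=0.2):
    """Lower the support structure's top vertices by a z gap so the support
    detaches cleanly on single-extruder FDM printers; use gap=0.0 for
    dissolvable support material, where adhesion to the object is desired."""
    out = np.array(vertices, dtype=float)      # copy; leave the input intact
    out[np.asarray(top_vertex_ids), 2] -= gap  # shift only the z coordinates
    return out
```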
As described below, the above-described 3D modeling or builder application, including the 3D object extrusion techniques and/or user interface 200, may be implemented on one or more computing devices or environments. Fig. 9 depicts an example general-purpose computing environment in which some of the techniques described herein may be embodied. The computing system environment 902 is only one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality of the presently disclosed subject matter. Neither should the computing environment 902 be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in the exemplary operating environment 902. In some embodiments, the various depicted computing elements may include circuitry configured to instantiate specific aspects of the present disclosure. For example, the term circuitry used in this disclosure may include dedicated hardware components configured to perform function(s) through firmware or switches. In other example embodiments, the term circuitry may include a general-purpose processing unit, memory, etc., configured with software instructions that implement logic usable to perform function(s). In example embodiments where circuitry includes a combination of hardware and software, an implementer may write source code embodying logic, and the source code can be compiled into machine-readable code that can be processed by the general-purpose processing unit. Because those skilled in the art will appreciate that the state of the art has evolved to a point where there is little difference between hardware, software, or a combination of hardware/software, the selection of hardware versus software to effectuate a particular function is a design choice left to the implementer. More specifically, those skilled in the art will appreciate that a software process can be transformed into an equivalent hardware structure, and a hardware structure can itself be transformed into an equivalent software process. Thus, the selection of a hardware implementation versus a software implementation is one of design choice and left to the implementer.
Computer 902, which may include any of a mobile device or smartphone, tablet, laptop, desktop computer, cloud computing resource, and the like, typically includes a variety of computer-readable media. Computer-readable media can be any available media that can be accessed by computer 902 and includes both volatile and nonvolatile media, removable and non-removable media. The system memory 922 includes computer-readable storage media in the form of volatile and/or nonvolatile memory such as Read Only Memory (ROM) 923 and Random Access Memory (RAM) 960. A basic input/output system 924 (BIOS), containing the basic routines that help to transfer information between elements within computer 902, such as during start-up, is typically stored in ROM 923. RAM 960 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 959. By way of example, and not limitation, Fig. 9 illustrates operating system 925, application programs 926, other program modules 927 including a 3D modeling application 965 having an extrusion tool 970, and program data 929.
The computer 902 may also include other removable/non-removable, volatile/nonvolatile computer storage media. By way of example only, FIG. 9 illustrates a hard disk drive 938 that reads from or writes to non-removable, nonvolatile magnetic media, a magnetic disk drive 939 that reads from or writes to a removable, nonvolatile magnetic disk 954, and an optical disk drive 904 that reads from or writes to a removable, nonvolatile optical disk 953 such as a CD ROM or other optical media. Other removable/non-removable, volatile/nonvolatile computer storage media that can be used in the exemplary operating environment include, but are not limited to, magnetic tape cassettes, flash memory cards, digital versatile disks, digital video tape, solid state RAM, solid state ROM, and the like. The hard disk drive 938 is typically connected to the system bus 921 through a non-removable memory interface such as interface 934, and magnetic disk drive 939 and optical disk drive 904 are typically connected to the system bus 921 by a removable memory interface, such as interface 935 or 936.
The drives and their associated computer storage media discussed above and illustrated in FIG. 9, provide storage of computer readable instructions, data structures, program modules and other data for the computer 902. In fig. 9, for example, hard disk drive 938 is illustrated as storing operating system 958, application programs 957, other program modules 956, and program data 955. Note that these components can either be the same as or different from operating system 925, application programs 926, other program modules 927, and program data 929. Operating system 958, application programs 957, other program modules 956, and program data 955 are given different numbers here to illustrate that, at a minimum, they are different copies. A user can enter commands and information into the computer 902 through input devices such as a keyboard 951 and pointing device 952 (commonly referred to as a mouse, trackball or touch pad). Other input devices (not shown) may include a microphone, joystick, game pad, satellite dish, scanner, or the like. These and other input devices are often connected to the processing unit 959 through a user input interface 936 that is coupled to the system bus 921, but may be connected by other interface and bus structures, such as a parallel port, game port or a Universal Serial Bus (USB). A monitor 942 or other type of display device is also connected to the system bus 921 via an interface, such as a video interface 932. In addition to the monitor, computers may also include other peripheral output devices such as speakers 944 and printer 943 (such as a 3D printer), which may be connected through an output peripheral interface 933.
The computer 902 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 946. The remote computer 946 may be a personal computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 902, although only a memory storage device 947 has been illustrated in fig. 9. The logical connections depicted in fig. 9 include a Local Area Network (LAN)945 and a Wide Area Network (WAN)949, but may also include other networks. Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets and the Internet.
When used in a LAN networking environment, the computer 902 is connected to the LAN 945 through a network interface or adapter 937. When used in a WAN networking environment, the computer 902 typically includes a modem 905 or other means for establishing communications over the WAN 949, such as the Internet. The modem 905, which may be internal or external, may be connected to the system bus 921 via the user input interface 936, or other appropriate mechanism. In a networked environment, program modules depicted relative to the computer 902, or portions thereof, may be stored in the remote memory storage device. By way of example, and not limitation, fig. 9 illustrates remote application programs 948 as residing on memory device 947. It will be appreciated that the network connections shown are examples only, and other means of establishing a communications link between the computers may be used.
In some aspects, other programs 927 may include a 3D modeling or builder application 965 that provides the functionality described above, such as in an extrusion tool 970. In some cases, the 3D modeling application 965/extrusion tool 970 may perform the processes 300, 600, and/or 800 and provide the user interface 200 through the graphics interface 931, the video interface 932, the output peripheral interface 933, and/or one or more monitor or touch screen devices 942, as described above. In some aspects, the 3D modeling application 965/extrusion tool 970 may communicate with the 3D printer 943 to generate a physical 3D model of the 3D image data and a corresponding support structure or mesh, as described above. In some aspects, other programs 927 may include one or more 3D virtualization applications that may acquire and display images of 3D models generated by the 3D modeling application 965.
Each of the processes, methods, and algorithms described in the preceding sections may be embodied in, and fully or partially automated via, code modules executed by one or more computers or computer processors. The code modules may be stored on any type of non-transitory computer-readable medium or computer storage device, such as a hard disk, solid state memory, and/or optical disk. The processes and algorithms may be implemented in part or in whole in application-specific circuitry. The results of the disclosed processes and process steps may be persisted or otherwise stored in any type of non-transitory computer storage, such as, for example, volatile or non-volatile storage. The various features and processes described above may be used independently of one another or combined in various ways. All possible combinations and sub-combinations are contemplated to be within the scope of the present disclosure. In addition, in some implementations, certain method or process blocks may be omitted. The methods and processes described herein are also not limited to any particular order, and the blocks and states associated therewith may be performed in other orders as appropriate. For example, the blocks or states described may be performed in an order different from that specifically disclosed, or multiple blocks or states may be combined into a single block or state. The example blocks or states may be performed sequentially, in parallel, or in some other manner. Blocks or states may be added to or removed from the disclosed example embodiments. The example systems and components described herein may be configured differently than described. For example, elements may be added, removed, or rearranged relative to the disclosed example embodiments.
It will also be appreciated that, while items are illustrated as being stored in memory or storage while being used, these items or portions thereof may be transferred between memory and other storage devices for purposes of memory management and data integrity. Alternatively, in other embodiments, some or all of the software modules and/or systems may execute in memory on another device and communicate with the illustrated computing system via inter-computer communication. Further, in some embodiments, some or all of the systems and/or modules may be implemented or provided in other ways, such as at least partially in firmware and/or hardware, including but not limited to Application Specific Integrated Circuits (ASICs), standard integrated circuits, controllers (e.g., by executing appropriate instructions, and including microcontrollers and/or embedded controllers), Field Programmable Gate Arrays (FPGAs), Complex Programmable Logic Devices (CPLDs), and so forth. Some or all of the modules, systems, and data structures may also be stored (e.g., as software instructions or structured data) on a computer-readable medium, such as a hard disk, a memory, a network, or a portable media product to be read by an appropriate drive or via an appropriate connection. The systems, modules, and data structures may also be transmitted as generated data signals (e.g., as part of a carrier wave or other analog or digital propagated signal) on a variety of computer-readable transmission media, including wireless-based and wired/cable-based media, and may take a variety of forms (e.g., as part of a single or multiplexed analog signal, or as multiple discrete digital packets or frames). In other embodiments, such computer program products may take other forms. Accordingly, the present disclosure may be practiced with other computer system configurations.
Conditional language used herein, such as "can," "could," "might," "may," "e.g.," and the like, unless specifically stated otherwise or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements, and/or steps. Thus, such conditional language is not generally intended to imply that features, elements, and/or steps are in any way required for one or more embodiments, or that one or more embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements, and/or steps are included or are to be performed in any particular embodiment. The terms "comprising," "including," "having," and the like are synonymous and are used inclusively, in an open-ended fashion, and do not exclude additional elements, features, acts, operations, and so forth. Also, the term "or" is used in its inclusive sense (and not in its exclusive sense) so that when used, for example, to connect a list of elements, the term "or" means one, some, or all of the elements in the list.
While certain exemplary embodiments have been described, these embodiments have been presented by way of example only and are not intended to limit the scope of the inventions disclosed herein. Thus, nothing in the foregoing description is intended to imply that any particular feature, characteristic, step, module, or block is necessary or indispensable. Indeed, the novel methods and systems described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions, and changes in the form of the methods and systems described herein may be made without departing from the spirit of the inventions disclosed herein. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of certain of the inventions disclosed herein.

Claims (14)

1. A system for integrating a 3D support structure with a 3D object, the system comprising a processor and a memory, the system programmed to perform the operations of:
receiving image data of a 3D object;
processing the image data and defining a common plane-facing surface having one or more edges, comprising:
segmenting the image data into a plurality of polygons;
orienting at least one polygon of the plurality of polygons to face a common plane to define the common plane-facing surface;
selecting a first polygon of the plurality of polygons;
determining a subset of polygons neighboring the first polygon;
determining whether one or more polygons of the subset occlude the first polygon;
discarding the one or more polygons determined to occlude the first polygon from being used to define the common plane-facing surface, wherein the common plane-facing surface is parallel to the common plane;
extending the one or more edges to the common plane;
generating a bottom surface based on the common plane-facing surface to define a 3D support structure associated with the 3D object, wherein the bottom surface is parallel to the common plane, and wherein the bottom surface is defined by one or more extended edges;
integrating the 3D support structure with the 3D object; and
generating a 3D model comprising the integrated 3D support structure and the 3D object.
2. The system of claim 1, further comprising a 3D printer configured to fabricate the 3D model.
3. A method for generating a 3D support structure, the method comprising:
receiving image data of a 3D object;
processing the image data and defining a common plane-facing surface having one or more edges, comprising:
segmenting the image data into a plurality of polygons;
orienting at least one polygon of the plurality of polygons to face a common plane;
selecting a first polygon of the plurality of polygons;
determining a subset of polygons neighboring the first polygon;
determining whether one or more polygons of the subset occlude the first polygon;
discarding the one or more polygons determined to occlude the first polygon from being used to define the common plane-facing surface, wherein the common plane-facing surface is parallel to the common plane;
extending the one or more edges to the common plane;
generating a bottom surface based on the common plane-facing surface to define a 3D support structure associated with the 3D object, wherein the bottom surface is parallel to the common plane, and wherein the bottom surface is defined by one or more extended edges; and
generating a 3D model of the 3D support structure.
4. The method of claim 3, wherein the 3D support structure is integrated with the 3D object.
5. The method of claim 3, further comprising:
reorienting the oriented at least one polygon to an original orientation specified in the image data; and
defining the support structure separate from the 3D object based on the reorientation.
6. The method of claim 5, further comprising:
assigning at least one first attribute to the 3D object; and
assigning at least one second attribute to the support structure, wherein the first attribute is different from the second attribute.
7. The method of claim 3, wherein defining the common plane-facing surface having the one or more edges comprises connecting two or more polygons of the plurality of polygons to form the one or more edges.
8. The method of claim 3, wherein generating the bottom surface comprises splitting one or more vertices shared by at least two polygons of the plurality of polygons.
9. The method of claim 3, wherein extending the one or more edges to the common plane comprises defining at least two triangles for each of the one or more edges to form vertical walls of the 3D object.
10. The method of claim 3, wherein generating the bottom surface based on the common plane-facing surface comprises modifying one or more coordinates of the common plane-facing surface to lie on the common plane.
11. The method of claim 3, wherein the image data of the 3D object only partially defines the 3D object.
12. A computer-readable storage medium having instructions stored thereon, which, when executed by at least one processor, cause the at least one processor to perform operations for generating a 3D support structure, the operations comprising:
receiving image data of a 3D object;
defining a common plane-facing surface having one or more edges based on the image data, wherein the common plane-facing surface is parallel to a common plane, comprising:
segmenting the image data into a plurality of polygons;
orienting at least one polygon of the plurality of polygons to face the common plane;
selecting a first polygon of the plurality of polygons;
determining a subset of polygons neighboring the first polygon;
determining whether one or more polygons of the subset occlude the first polygon;
discarding the one or more polygons determined to occlude the first polygon from being used to define the common plane-facing surface;
extending the one or more edges to the common plane;
generating a bottom surface based on the common plane-facing surface to define a 3D support structure associated with the 3D object, wherein the bottom surface is parallel to the common plane, and wherein the bottom surface is defined by one or more extended edges; and
generating a 3D model of the 3D support structure.
13. The computer-readable storage medium of claim 12, wherein the 3D support structure is integrated with the 3D object.
14. The computer-readable storage medium of claim 12, wherein the operations further comprise:
reorienting the oriented at least one polygon to an original orientation specified in the image data; and
defining the support structure separate from the 3D object based on the reorientation.
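To make the claimed pipeline concrete, the following Python sketch walks through the recited steps on a small triangle mesh. It is purely illustrative: the patent publishes no reference code, so every name here (flatten_to_plane, orient_toward_plane, occludes) is invented, triangles stand in for the claimed polygons, the common plane is assumed to be z = 0 with the object above it, and the occlusion test is reduced to a bounding-box check.

# Illustrative sketch of the claimed pipeline -- not the patented implementation.
# Assumes the "image data" has already been segmented into triangles (the
# claimed "plurality of polygons") and that the common plane is z = 0.
import numpy as np

def face_normal(tri):
    # Unnormalized normal of a 3x3 triangle (one vertex per row).
    a, b, c = tri
    return np.cross(b - a, c - a)

def orient_toward_plane(tri):
    # "Orienting at least one polygon ... to face the common plane": flip the
    # winding of any triangle whose normal points away from z = 0.
    return tri[::-1].copy() if face_normal(tri)[2] > 0 else tri

def occludes(other, tri):
    # Crude stand-in for the claimed occlusion test: `other` occludes `tri` if
    # their footprints on the plane overlap (bounding boxes here; a real
    # implementation would intersect the exact 2D projections) and `other`
    # lies between `tri` and the plane.
    lo1, hi1 = tri[:, :2].min(axis=0), tri[:, :2].max(axis=0)
    lo2, hi2 = other[:, :2].min(axis=0), other[:, :2].max(axis=0)
    overlap = bool(np.all(hi1 > lo2) and np.all(hi2 > lo1))
    return overlap and other[:, 2].max() < tri[:, 2].min()

def flatten_to_plane(triangles):
    # Define the common plane-facing surface, then extrude it to the plane.
    oriented = [orient_toward_plane(np.asarray(t, float)) for t in triangles]
    # Per the claim language, polygons determined to occlude a selected
    # polygon are discarded from defining the common plane-facing surface.
    discarded = set()
    for i, t in enumerate(oriented):
        for j, o in enumerate(oriented):
            if i != j and occludes(o, t):
                discarded.add(j)
    surface = [t for j, t in enumerate(oriented) if j not in discarded]
    walls, bottom = [], []
    for t in surface:
        flat = t.copy()
        flat[:, 2] = 0.0          # claim 10: coordinates moved onto the plane
        bottom.append(flat)       # bottom face, parallel to the common plane
        # Claim 9: each edge is extended to the plane by two triangles forming
        # a vertical wall. For brevity every edge is extruded; a real
        # implementation would extrude only the boundary edges of the surface.
        for k in range(3):
            a, b = t[k], t[(k + 1) % 3]
            a0, b0 = a.copy(), b.copy()
            a0[2] = b0[2] = 0.0
            walls.append(np.array([a, b, b0]))
            walls.append(np.array([a, b0, a0]))
    return surface, walls, bottom

# One raised triangle becomes a closed wedge: 1 top, 6 wall, 1 bottom triangle.
tris = [[[0.0, 0.0, 1.0], [1.0, 0.0, 1.0], [0.0, 1.0, 2.0]]]
top, walls, bottom = flatten_to_plane(tris)
print(len(top), len(walls), len(bottom))  # -> 1 6 1

The sketch does not enforce a consistent outward orientation for the resulting solid; a production implementation would also weld coincident wall vertices and, where the support is to be kept separate from the object, restore the surface polygons to their original orientation as recited in claim 5.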
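Claim 8 recites splitting vertices shared by at least two polygons when the bottom surface is generated, so that moving one polygon's copy of a vertex onto the plane cannot drag its neighbours along. One minimal way to picture such a split (again a hypothetical sketch, not the patented method) is to unweld an indexed mesh into per-face vertices:

import numpy as np

def split_shared_vertices(vertices, faces):
    # Give every face private copies of its corner vertices, so a later
    # per-face edit (e.g., projecting one copy onto the common plane) cannot
    # perturb adjacent faces that used to share those vertices.
    faces = np.asarray(faces)
    new_vertices = vertices[faces.ravel()]                  # one copy per corner
    new_faces = np.arange(len(new_vertices)).reshape(faces.shape)
    return new_vertices, new_faces

# Two triangles sharing the edge between vertices 1 and 2:
v = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0], [1.0, 1.0, 2.0]])
f = [[0, 1, 2], [2, 1, 3]]
sv, sf = split_shared_vertices(v, f)
print(sv.shape, sf.tolist())  # -> (6, 3) [[0, 1, 2], [3, 4, 5]]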
CN201680065062.8A 2015-11-06 2016-10-24 Techniques to extrude 3D objects into a plane Active CN108352082B (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US201562252346P 2015-11-06 2015-11-06
US62/252,346 2015-11-06
US15/052,027 US10210668B2 (en) 2015-11-06 2016-02-24 Technique for extruding a 3D object into a plane
US15/052,027 2016-02-24
PCT/US2016/058411 WO2017078955A1 (en) 2015-11-06 2016-10-24 Technique for extruding a 3d object into a plane

Publications (2)

Publication Number Publication Date
CN108352082A (en) 2018-07-31
CN108352082B (en) 2022-04-01

Family

ID=57389507

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201680065062.8A Active CN108352082B (en) Techniques to extrude 3D objects into a plane

Country Status (4)

Country Link
US (1) US10210668B2 (en)
EP (1) EP3371782B1 (en)
CN (1) CN108352082B (en)
WO (1) WO2017078955A1 (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10834413B2 (en) * 2018-08-24 2020-11-10 Disney Enterprises, Inc. Fast and accurate block matching for computer generated content
US11554540B2 (en) * 2018-08-24 2023-01-17 Zrapid Technologies Co., Ltd. Conformal manufacture method for 3D printing with high-viscosity material
EP3667623A1 (en) * 2018-12-12 2020-06-17 Twikit NV A system for optimizing a 3d mesh
USD959447S1 (en) 2019-12-20 2022-08-02 Sap Se Display system or portion thereof with a virtual three-dimensional animated graphical user interface
USD959476S1 (en) 2019-12-20 2022-08-02 Sap Se Display system or portion thereof with a virtual three-dimensional animated graphical user interface
USD959477S1 (en) 2019-12-20 2022-08-02 Sap Se Display system or portion thereof with a virtual three-dimensional animated graphical user interface
US11205296B2 (en) * 2019-12-20 2021-12-21 Sap Se 3D data exploration using interactive cuboids
US11507056B1 (en) * 2020-04-06 2022-11-22 Lockheed Martin Corporation System and method for three-dimensional (3D) computer-aided manufacturing (CAM) of an ensemble of pilot equipment and garments
US20220410002A1 (en) * 2021-06-29 2022-12-29 Bidstack Group PLC Mesh processing for viewability testing

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101388133B1 (en) 2007-02-16 2014-04-23 삼성전자주식회사 Method and apparatus for creating a 3D model from 2D photograph image
US8659599B2 (en) 2010-08-27 2014-02-25 Adobe Systems Incorporated System and method for generating a manifold surface for a 3D model of an object using 3D curves of the object
US9384591B2 (en) 2010-09-17 2016-07-05 Enventive Engineering, Inc. 3D design and modeling system and methods
US20150187130A1 (en) 2011-02-10 2015-07-02 Google Inc. Automatic Generation of 2.5D Extruded Polygons from Full 3D Models
US20140363532A1 (en) 2013-06-10 2014-12-11 Kirk W. Wolfgram Multiple color extrusion type three dimensional printer
US9821517B2 (en) 2013-06-26 2017-11-21 Microsoft Technology Licensing, Llc 3D manufacturing platform
US9688024B2 (en) 2013-08-30 2017-06-27 Adobe Systems Incorporated Adaptive supports for 3D printing
US10226895B2 (en) 2013-12-03 2019-03-12 Autodesk, Inc. Generating support material for three-dimensional printing
GB2521386A (en) 2013-12-18 2015-06-24 Ibm Improvements in 3D printing
US20150277811A1 (en) 2014-03-27 2015-10-01 Electronics And Telecommunications Research Institute Method for configuring information related to a 3d printer and method and apparatus for recommending a 3d printer

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090110327A1 (en) * 2007-10-30 2009-04-30 Microsoft Corporation Semi-automatic plane extrusion for 3D modeling
CN103384881A * 2010-12-23 2013-11-06 Google Inc. View dependent techniques to determine user interest in a feature in a 3D application
US20150269289A1 (en) * 2014-03-18 2015-09-24 Palo Alto Research Center Incorporated System for visualizing a three dimensional (3d) model as printed from a 3d printer

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Strano, G., et al., "A new approach to the design and optimisation of support structures in additive manufacturing," The International Journal of Advanced Manufacturing Technology, Aug. 2, 2012, pp. 1247-1254. *
Dumas, Jérémie, et al., "Bridging the Gap: Automated Steady Scaffoldings for 3D Printing," ACM Transactions on Graphics, Jul. 2014, pp. 1-10. *
von Wyss, Martin, et al., "3D-Printed Landform Models," Cartographic Perspectives, 2014, pp. 61-67. *

Also Published As

Publication number Publication date
US20170132846A1 (en) 2017-05-11
US10210668B2 (en) 2019-02-19
WO2017078955A1 (en) 2017-05-11
EP3371782A1 (en) 2018-09-12
EP3371782B1 (en) 2019-12-25
CN108352082A (en) 2018-07-31

Similar Documents

Publication Publication Date Title
CN108352082B (en) Techniques to extrude 3D objects into a plane
US10861232B2 (en) Generating a customized three-dimensional mesh from a scanned object
EP2833326B1 (en) Lossless compression of a 3D mesh including transforming of the mesh to an image
US8731876B2 (en) Creating editable feature curves for a multi-dimensional model
CN106713923B (en) Compression of three-dimensional modeled objects
US8698809B2 (en) Creation and rendering of hierarchical digital multimedia data
US9692965B2 (en) Omnidirectional image editing program and omnidirectional image editing apparatus
US9485497B2 (en) Systems and methods for converting two-dimensional images into three-dimensional images
US20170091993A1 (en) 3D Model Generation From Map Data and User Interface
CN111161411A (en) Three-dimensional building model LOD method based on octree
CN109003333B (en) Interactive grid model cutting method and device based on texture and modeling equipment
US11450078B2 (en) Generating height maps from normal maps based on virtual boundaries
CN106463003A (en) Fabricating three-dimensional objects with embossing
KR20080018404A (en) Computer-readable recording medium having a background-making program for making a game
CN109685095B (en) Classifying 2D images according to 3D arrangement type
Olsen et al. Image-assisted modeling from sketches.
CN112200906A (en) Entity extraction method and system for inclined three-dimensional model
KR100848304B1 (en) Apparatus for NURBS surface rendering of sculpting deformation effects using multi-resolution curved-surface trimming
KR102641060B1 (en) Image processing method and apparatus for facilitating 3d object editing by a user
Phothong et al. Generation and quality improvement of 3D models from silhouettes of 2D images
Bae et al. User‐guided volumetric approximation using swept sphere volumes for physically based animation
Liu et al. Shape from silhouette outlines using an adaptive dandelion model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant