
Point Based Modelling and Rendering of Virtual Worlds

A paper covering part of our work on point representations has been published in "Rendering Techniques 2001", Springer (Proceedings EGWR 2001). The final version of the paper is available here.

Idea

Objects are represented by a point set. Each point consists of its 3D coordinates, its normal, and material properties. By adapting point densities, very fine level-of-detail control is possible. The left image below shows a small forest of 30 trees. For each tree, we generated a random set of sample points, where the number of samples depends on the tree's image size. The image shown was generated from only 200,000 points, rendered with the standard OpenGL GL_POINTS primitive at 28 frames per second. The right figure shows a side view of the sample set generated for the displayed viewing frustum.
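The core idea, choosing a per-object sample count from the object's image size, can be sketched as follows. This is a minimal illustration, not our actual sampling code; all names and the perspective estimate are assumptions for the example.

```python
import math

def sample_count(object_radius, distance, fov_y, image_height, samples_per_pixel=1.0):
    """Estimate how many point samples an object needs so that, on average,
    each sample covers about one pixel of its projected image area.
    Hypothetical helper; a crude perspective estimate, not the original system."""
    # Pixels per world unit at the object's distance (simple pinhole model).
    pixels_per_unit = image_height / (2.0 * distance * math.tan(fov_y / 2.0))
    projected_radius = object_radius * pixels_per_unit
    projected_area = math.pi * projected_radius ** 2
    return max(1, int(projected_area * samples_per_pixel))

# A distant tree needs far fewer samples than a nearby one.
near = sample_count(object_radius=2.0, distance=10.0,
                    fov_y=math.radians(60), image_height=768)
far = sample_count(object_radius=2.0, distance=100.0,
                   fov_y=math.radians(60), image_height=768)
```

Scaling the sample count with projected area is what keeps the total point budget roughly proportional to the number of covered pixels, independent of scene complexity.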
 
 

The powerful level-of-detail control of the point sets can also be used for dynamic objects. For the following objects, we simulated the movement of the trees in a turbulent breeze by moving individual points according to the local wind and the point's distance to the stem. We still obtain 20 frames per second.

There is also a video clip showing the point rendered forest: 

This simple example shows the power of point representations: visually complex objects can be rendered and modified interactively, with little precomputation and low memory requirements. By adapting the number of points (and the point size accordingly), frame rate constraints can easily be met. Such constraints are paramount for all kinds of virtual reality and game-like applications, where a decrease in image quality is more acceptable than a frame rate degradation.

The advantages for rendering also apply to many other problems related to interactive applications and computer games, like dynamics computations, collision detection, visibility determination etc., where the required accuracy mainly depends on the distance to the user(s). Furthermore, as we will show in the following, points work very efficiently with procedural models. All changes to a virtual world that cannot be precomputed because they result from user input (impacts or bullet holes on a wall, deformed objects, interactively modelled objects, ripples in a lake created by the user; see the application scenario sections) are usually described procedurally, with user interactions as parameters. Transforming this procedural description into an appropriate polygonal representation, e.g. for rendering, can be difficult. We believe that points are often a better choice in these cases.

Of course, points are not always the best solution. Big rectangles should remain rectangles, and it will take a long time before home computers are fast enough for all image textures to be replaced by "real" texture geometry. However, points are clearly a good choice whenever triangles become smaller than a pixel: it does not really make sense to render triangles of subpixel size, and certainly not at 50 Hz.

In this context, the analogy to ray tracing is interesting: even an oversampling ray tracer can only sample a few triangles per pixel. That is why ray tracing becomes faster than scanline rendering for very complex scenes. It is very fast to ray trace a scene with 100,000,000 triangles if 99,999,990 of them fall within a single pixel, whereas any scanline renderer will spend all its time on this one particular pixel. For the remaining pixels, however, scanline rendering is far faster. Point-based rendering lies in between: it uses sample sets whose density matches the image resolution, like ray tracing, but it projects object points from world to image space like a scanline renderer, so it does not require costly ray casting operations.
 

Point Based Rendering for VR

Virtual reality applications can benefit greatly from the capabilities of point-based rendering. We applied the technique to an active stereo viewing system. This makes interactive stereo viewing of scenes initially described by more than a million small polygons possible on standard PC hardware. Note that standard VR installations such as a CAVE can usually display no more than 50,000 triangles at sufficient rates (on multiple screens, though). In our subjective experiments, the point representations did not lead to decreased stereo perception.
 
video clip

Sqrt5-Sampling

Point representations are particularly well suited for procedural objects. First, they offer very simple level-of-detail control, so the costly generation of surface samples can be restricted to a minimum. Second, the lack of topology allows easy adaptive refinement. In the example shown in the following image, we started with a uniform point set representing a rectangle and applied a procedural displacement. Due to the displacement and perspective distortion, holes appear (left half of the image). Simple local considerations allow us to predict holes in the image neighborhood of a sample, which are then filled by recursively inserting additional samples (right half of the image).

For the adaptive sample insertion described above, we use the sqrt5-sampling scheme. The principle of this scheme is explained in the figure below: on the left, we see a rectangle sampled by four points. In order to sample the neighborhood of each point, we generate samples at the relative positions (2/5,1/5), (-1/5,2/5), (-2/5,-1/5), (1/5,-2/5) (center left). Note that the original four points plus the 16 new ones again form a regular grid. The new grid distance is 1/sqrt(5) of the original, and the grid is rotated by about 26.6 degrees. We can apply the same refinement rule to the new grid to further increase the resolution (center right). In practice, the refinement is local, i.e. new nearby points are generated only for critical points, resulting in an adaptive, deterministic sampling pattern (right).
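The refinement step above can be written out in a few lines. The sketch below uses the four offsets from the text (for an original grid spacing of 1) and checks the two stated properties: the new grid spacing is 1/sqrt(5), and the rotation angle is atan(1/2) ≈ 26.6 degrees. The function name is illustrative.

```python
import math

# The four child offsets of the sqrt(5)-sampling scheme, as given in the text,
# for a parent grid of spacing 1.
OFFSETS = [(2/5, 1/5), (-1/5, 2/5), (-2/5, -1/5), (1/5, -2/5)]

def refine(points):
    """One sqrt(5)-refinement step: keep each sample and insert four
    children at the fixed relative offsets (illustrative helper)."""
    out = list(points)
    for (x, y) in points:
        for (dx, dy) in OFFSETS:
            out.append((x + dx, y + dy))
    return out

# Each offset has length 1/sqrt(5), so the refined grid spacing shrinks
# by that factor; the offset direction gives the rotation angle atan(1/2).
spacing = math.hypot(*OFFSETS[0])
angle = math.degrees(math.atan2(1/5, 2/5))
```

Because every refinement step multiplies the point count by 5 and only reuses existing samples, the scheme can be applied recursively and locally without any topology bookkeeping.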


 

You can download a video clip showing the ideas of sqrt(5) sampling: 
 

Procedural Geometry Modifiers

The lack of topology information in point sets makes interesting geometric modifications possible. We can add a "point filter" that takes points as input and displaces them, discards them, or even generates several new points. Consider the examples below: the rock on the left is a sphere with a displacement modifier applied. The object in the center is obtained from a sphere with a holes modifier that discards points inside holes defined by a noise function and displaces points near the holes' boundaries. The basket on the right is obtained from a truncated cone: depending on the position within the wickerwork, zero, one, or two displaced points are generated per input point. For all objects, sqrt5-sampling was used to adapt the point densities to the modifier's output. Compare this with the generation of similar objects using triangles...
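The point-filter idea can be sketched as plain functions that map one input point to zero or more output points. This is a minimal illustration under assumed names; the real modifiers use noise functions, and the simple predicates here merely stand in for them.

```python
def displace(point, amount):
    """Displacement modifier: push a surface point along its normal.
    point = ((x, y, z), (nx, ny, nz)); always emits exactly one point."""
    (x, y, z), (nx, ny, nz) = point
    return [((x + amount * nx, y + amount * ny, z + amount * nz), (nx, ny, nz))]

def holes(point, keep):
    """Holes modifier: discard points where a predicate (standing in for
    the noise function of the original) says 'hole'; emits zero or one point."""
    return [point] if keep(point) else []

def apply_modifier(points, modifier):
    """Run a point filter over a sample set; a modifier may emit
    zero, one, or several points per input point."""
    out = []
    for p in points:
        out.extend(modifier(p))
    return out
```

Since each point is processed independently, the filters compose freely and never need to repair a mesh, which is exactly why topology changes such as holes come for free.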
 
 

Procedural modifiers: displaced sphere, "holey" sphere, wickerwork basket. Note the topology changes from the original sphere (center) and the truncated cone (right).

As discussed above, procedural object descriptions are essential for all applications that allow user interaction. One can think of many examples, from users modelling objects in a virtual reality environment to players shooting holes into walls.

The sqrt(5) video clip also shows procedural modifiers: 

Terrains

Point sampling can also be applied to infinite objects like terrains. We consider terrains defined by a procedural displacement of the infinite base plane z=0. We define a parameterization of the visible sector of the base plane that is dense close to the viewer and becomes coarser towards the horizon. An initial point set on the base plane is created according to this parameterization, and the points are displaced. Using local considerations (which require the terrain gradient at a point), undersampling in the region around a point can be predicted. We avoid holes by adaptively inserting additional samples in undersampled regions using the sqrt5-scheme. Additionally, our approach makes occlusion culling simple: by rendering from front to back, we can predict base plane points that will be hidden, so their elevation does not have to be computed. The following figure shows an example terrain rendered with only 30,000 points. This allows rendering the terrain at about 8 frames per second, where by far most of the time is spent on the terrain elevation computation. Taking the elevation values from a precomputed height field texture makes the rendering significantly faster. Since all elevations are recomputed per frame, the procedural terrain parameters can be changed on the fly; no computation time or memory is spent on precomputation. The image on the right shows the point set used for the left image from another perspective. Note the decreasing sampling density towards the horizon and the large unsampled regions detected by occlusion culling.
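A distance-dependent base-plane parameterization of this kind can be sketched as follows. The ring layout, the geometric growth factor, and all names are assumptions for the example; the actual system parameterizes the visible sector of the view frustum rather than full rings.

```python
import math

def terrain_samples(n_rings, points_per_ring, near, growth=1.3):
    """Base-plane (z=0) sampling whose density falls off toward the horizon:
    rings of points around the viewer with geometrically growing radii.
    Illustrative stand-in for the view-sector parameterization of the text."""
    samples = []
    r = near
    for _ in range(n_rings):
        for k in range(points_per_ring):
            a = 2 * math.pi * k / points_per_ring
            samples.append((r * math.cos(a), r * math.sin(a)))
        r *= growth  # ring spacing grows with distance -> coarser sampling far away
    return samples
```

Because the spacing grows with distance, the projected point density stays roughly constant in image space, which is the property the adaptive sqrt5-insertion then only has to patch up locally.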
 
 

A videoclip shows the terrain rendering: 
 

Volumetric Objects

Point-based representations can also be applied to volumetric objects. Consider, as an example, clouds represented by a procedural density distribution defined by a 3D turbulent noise function. We generate random sample points in the volume and compute the density at each point. We then render the points with a transparency according to the point's density. With a z-buffer renderer like OpenGL, this requires sorting the points from back to front. Since this would be too expensive per frame, we sort the points once according to their x, y, and z values and take the order that best fits the current viewing direction.
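The precomputed-sort trick can be sketched in a few lines: sort once along each coordinate axis, then per frame pick the axis most aligned with the view direction and possibly flip the order. Function names are illustrative.

```python
def presort(points):
    """Precompute one ascending order of point indices per coordinate axis.
    points: list of (x, y, z) tuples."""
    return [sorted(range(len(points)), key=lambda i: points[i][axis])
            for axis in range(3)]

def best_order(orders, view_dir):
    """Per frame: pick the precomputed order whose axis best matches the
    view direction. Back-to-front means the point farthest along the view
    axis is drawn first, so flip the ascending order when looking along +axis."""
    axis = max(range(3), key=lambda a: abs(view_dir[a]))
    order = orders[axis]
    return order[::-1] if view_dir[axis] > 0 else order
```

This gives only an approximate depth order, but for semi-transparent, low-frequency media like clouds the approximation error is visually negligible, while the per-frame cost drops from a full sort to a constant-time lookup.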
 

We apply a purely heuristic lighting model to the clouds by computing a virtual shading normal that points in the opposite direction of the density gradient, which can be computed cheaply from our cloud model. Although this model has no physical basis, the increase in realism is enormous. The cloud in the right image above is rendered with only 65,000 points (consider that the latest off-the-shelf graphics cards can render 10,000,000 points per second).
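The virtual shading normal can be illustrated with a generic scalar field. The central-difference gradient below is an assumption for the sketch; our actual cloud model allows a cheaper analytic evaluation.

```python
import math

def shading_normal(density, p, eps=1e-3):
    """Virtual shading normal for a volumetric point: the negated, normalized
    density gradient, estimated here by central differences. `density` is any
    scalar field f(x, y, z); illustrative helper, not the original code."""
    x, y, z = p
    g = [
        (density(x + eps, y, z) - density(x - eps, y, z)) / (2 * eps),
        (density(x, y + eps, z) - density(x, y - eps, z)) / (2 * eps),
        (density(x, y, z + eps) - density(x, y, z - eps)) / (2 * eps),
    ]
    n = math.sqrt(sum(c * c for c in g)) or 1.0  # avoid division by zero
    return tuple(-c / n for c in g)
```

Since a cloud's density is highest in its interior, the negated gradient points outward, so the virtual normal behaves like a surface normal on the cloud's "skin" and can be fed into any standard diffuse lighting term.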

We also have a video sequence about cloud rendering: 
 

Application scenario: Virtual Reality / Games


Videoclip: 
 

    Two snapshots of an interactive session in a dynamic procedural virtual world. The user navigates at about 8 fps. The trees are moving in the wind, and the user "throws rocks" into the lakes. The terrain is precomputed and stored in a texture.


Application Scenario: Indoor Design


Videoclip: 
 

Interactive design of an interior environment. To a radiosity solution of an office rendered with polygons, we added a complex tree, a wickerwork basket and a paperweight, all displayed with 75,000 points. After turning on the fan, the tree moves in the wind (center, 13 fps at 400x400). The images on the right show the interactive change of parameters of procedural objects. Top row: changes at 4 fps; bottom row: 8 fps, the last one at 1.5 fps.


Application Scenario: Landscape Modelling

Videoclip: 
 

  Interactive design of an outdoor scene (resolution 400x400). We start with a simple terrain (left: 23,000 points, 6 fps), add 1000 chestnut trees made of 150,000 triangles each, and add two clouds (280,000 points, 5 fps). If we increase the accuracy, we get the right image using 3,300,000 points after 2 seconds.