Advanced Graphics Programming Techniques Using OpenGL P2
Figure 11. "Greedy" Triangle Strip Generation

3.4.1 Greedy Tri-stripping

A fairly simple method of converting a model into triangle strips is sometimes known as greedy tri-stripping. One of the early greedy algorithms, developed for IRIS GL, allowed swapping of vertices to create direction changes toward the facet with the fewest neighbors. With OpenGL, however, the only way to get the equivalent of a vertex swap is to repeat a vertex, creating a degenerate triangle, which is much more expensive than the original swap operation.

For OpenGL a better algorithm is to choose a polygon, convert it to triangles, then continue onto a neighboring polygon from the last edge of the previous polygon. For a given starting polygon and starting edge there is no choice about which polygon to visit next: only one neighbor continues the strip. The strip is continued until it runs off the edge of the model or runs into a polygon that is already part of another strip (see Figure 11). For best results, pick a polygon and extend in both directions as far as possible, then start the triangle strip from one end.

A triangle strip should not cross a hard edge unless the vertices on that edge are repeated redundantly, since the triangles on either side of that edge need different normals. Once one strip is complete, the best polygon to choose for the next strip is often a neighbor of the polygon at one end or the other of the previous strip. More advanced triangulation methods do not try to keep all triangles of a polygon together; for more information on such a method refer to [17].

3.5 Capping Clipped Solids with the Stencil Buffer

When dealing with solid objects it is often useful to clip the object against a plane and observe the cross section. OpenGL's user-defined clipping planes allow an application to clip the scene by a plane.
The stencil buffer provides an easy method for adding a "cap" to objects that are intersected by the clipping plane. A capping polygon is embedded in the clipping plane and the stencil buffer is used to trim the polygon to the interior of the solid.

15 Programming with OpenGL: Advanced Rendering
For more information on techniques using the stencil buffer, see Section 14. If some care is taken when constructing the object, solids that have a depth complexity greater than 2 (concave or shelled objects) and less than the maximum value of the stencil buffer can be rendered. Object surface polygons must have their vertices ordered so that they face away from the interior for face culling purposes.

The stencil buffer, color buffer, and depth buffer are cleared, and color buffer writes are disabled. The capping polygon is rendered into the depth buffer, then depth buffer writes are disabled. The stencil operation is set to increment the stencil value where the depth test passes, and the model is drawn with glCullFace(GL_BACK). The stencil operation is then set to decrement the stencil value where the depth test passes, and the model is drawn with glCullFace(GL_FRONT).

At this point, the stencil buffer is 1 wherever the clipping plane is enclosed by the front-facing and back-facing surfaces of the object. The depth buffer is cleared, color buffer writes are enabled, and the polygon representing the clipping plane is drawn using whatever material properties are desired, with the stencil function set to GL_EQUAL and the reference value set to 1. This draws the color and depth values of the cap into the framebuffer only where the stencil values equal 1. Finally, stenciling is disabled, the OpenGL clipping plane is applied, and the clipped object is drawn with color and depth enabled.

3.6 Constructive Solid Geometry with the Stencil Buffer

Before continuing, it may help for the reader to be familiar with the concepts of stencil buffer usage presented in Section 14.

Constructive solid geometry (CSG) models are constructed through the intersection (∩), union (∪), and subtraction (-) of solid objects, some of which may be CSG objects themselves [23]. The tree formed by the binary CSG operators and their operands is known as the CSG tree.
Figure 12 shows an example of a CSG tree and the resulting model. The representation used in CSG for solid objects varies, but we will consider a solid to be a collection of polygons forming a closed volume. "Solid", "primitive", and "object" are used here to mean the same thing.

CSG objects have traditionally been rendered through the use of ray-casting, which is slow, or through the construction of a boundary representation (B-rep). B-reps vary in construction, but are generally defined as a set of polygons that form the surface of the result of the CSG tree. One method of generating a B-rep is to take the polygons forming the surface of each primitive and trim away the polygons (or portions thereof) that don't satisfy the CSG operations. B-rep models are typically generated once and then manipulated as a static model because they are slow to generate.

Drawing a CSG model using stencil usually means drawing more polygons than a B-rep would contain for the same model. Enabling stencil also may reduce performance. Nonetheless, some portions
of a CSG tree may be interactively manipulated using stencil if the remainder of the tree is cached as a B-rep.

Figure 12. An Example of Constructive Solid Geometry

The algorithm presented here is from a paper by Tim F. Wiegand describing a GL-independent method for using stencil in a CSG modeling system for fast interactive updates. The technique can also process concave solids, the complexity of which is limited by the number of stencil planes available. A reprint of Wiegand's paper is included in the Appendix.

The algorithm presented here assumes that the CSG tree is in "normal" form. A tree is in normal form when all intersection and subtraction operators have a left subtree which contains no union operators and a right subtree which is simply a primitive (a set of polygons representing a single solid object). All union operators are pushed towards the root, and all intersection and subtraction operators are pushed towards the leaves. For example, (((A ∩ B) - C) ∪ (((D ∩ E) ∩ G) - F)) ∪ H is in normal form; Figure 13 illustrates the structure of that tree and the characteristics of a tree in normal form.

A CSG tree can be converted to normal form by repeatedly applying the following set of production rules to the tree and then its subtrees:

1. X - (Y ∪ Z) → (X - Y) - Z
2. X ∩ (Y ∪ Z) → (X ∩ Y) ∪ (X ∩ Z)
3. X - (Y ∩ Z) → (X - Y) ∪ (X - Z)
4. X ∩ (Y ∩ Z) → (X ∩ Y) ∩ Z
5. X - (Y - Z) → (X - Y) ∪ (X ∩ Z)
6. X ∩ (Y - Z) → (X ∩ Y) - Z
7. (X - Y) ∩ Z → (X ∩ Z) - Y
8. (X ∪ Y) - Z → (X - Z) ∪ (Y - Z)
9. (X ∪ Y) ∩ Z → (X ∩ Z) ∪ (Y ∩ Z)

X, Y, and Z here match either primitives or subtrees.

(((A ∩ B) - C) ∪ (((D ∩ E) ∩ G) - F)) ∪ H
Figure 13. A CSG Tree in Normal Form (unions are at the top of the tree; the left child of an intersection or subtraction is never a union; the right child of an intersection or subtraction is always a primitive)

Here's the algorithm used to apply the production rules to the CSG tree:

normalize(tree *t)
{
    if (isPrimitive(t))
        return;
    do {
        while (matchesRule(t)) /* Using rules given above */
            applyFirstMatchingRule(t);
        normalize(t->left);
    } while (!(isUnionOperation(t) ||
               (isPrimitive(t->right) &&
                !isUnionOperation(t->left))));
    normalize(t->right);
}

Normalization may increase the size of the tree and add primitives which do not contribute to the final image. The bounding volume of each CSG subtree can be used to prune the tree as it is normalized. Bounding volumes for the tree may be calculated using the following algorithm:
findBounds(tree *t)
{
    if (isPrimitive(t))
        return;
    findBounds(t->left);
    findBounds(t->right);
    switch (t->operation) {
    case union:
        t->bounds = unionOfBounds(t->left->bounds,
                                  t->right->bounds);
        break;
    case intersection:
        t->bounds = intersectionOfBounds(t->left->bounds,
                                         t->right->bounds);
        break;
    case subtraction:
        t->bounds = t->left->bounds;
        break;
    }
}

CSG subtrees rooted by the intersection or subtraction operators may be pruned at each step in the normalization process using the following two rules:

1. If T is an intersection and not intersects(T->left->bounds, T->right->bounds), delete T.
2. If T is a subtraction and not intersects(T->left->bounds, T->right->bounds), replace T with T->left.

The normalized CSG tree is a binary tree, but it's important to think of the tree rather as a "sum of products" to understand the stencil CSG procedure. Consider all the unions as sums. Next, consider all the intersections and subtractions as products. (Subtraction is equivalent to intersection with the complement of the term to the right; for example, A - B = A ∩ B̄, where B̄ is the complement of B.) Imagine all the unions flattened out into a single union with multiple children; that union is the "sum". The resulting subtrees of that union are all composed of subtractions and intersections, the right branch of those operations is always a single primitive, and the left branch is another operation or a single primitive. Read each child subtree of the imaginary multiple union as a single expression containing all the intersection and subtraction operations concatenated from the bottom up. These expressions are the "products". For example, you should think of A ∩ B - C ∪ D ∩ E ∩ G - F ∪ H as meaning (A ∩ B - C) ∪ ((D ∩ E ∩ G) - F) ∪ H. Figure 14 illustrates this process.

At this time redundant terms can be removed from each product. Where a term subtracts itself (A - A), the entire product can be deleted. Where a term intersects itself (A ∩ A), that intersection operation can be replaced with the term itself.
(((A ∩ B) - C) ∪ (((D ∩ E) ∩ G) - F)) ∪ H read as (A ∩ B - C) ∪ (D ∩ E ∩ G - F) ∪ H
Figure 14. Thinking of a CSG Tree as a Sum of Products

All unions can be rendered simply by finding the visible surfaces of the left and right subtrees and letting the depth test determine the visible surface. All products can be rendered by drawing the visible surfaces of each primitive in the product and trimming those surfaces with the volumes of the other primitives in the product. For example, to render A - B, the visible surfaces of A are trimmed by the complement of the volume of B, and the visible surfaces of B are trimmed by the volume of A.

The visible surfaces of a product are the front-facing surfaces of the operands of intersections and the back-facing surfaces of the right operands of subtractions. For example, in (A - B) ∩ C, the visible surfaces are the front-facing surfaces of A and C, and the back-facing surfaces of B.

Concave solids are processed as sets of front or back facing surfaces. The "convexity" of a solid is defined as the maximum number of pairs of front and back surfaces that can be drawn from the viewing direction. Figure 15 shows some examples of the convexity of objects. The nth front surface of a k-convex primitive is denoted Anf, and the nth back surface is Anb. Because a solid may vary in convexity when viewed from different directions, accurately representing the convexity of a primitive may be difficult and may also involve reevaluating the CSG tree at each new view. Instead, the algorithm must be given the maximum possible convexity of a primitive, and draws the nth visible surface by using a counter in the stencil planes.

The CSG tree must be further reduced to a "sum of partial products" by converting each product to a union of products, each consisting of the product of the visible surfaces of the target primitive with the remaining terms in the product.
Figure 15. Examples of n-convex Solids (1-convex, 2-convex, and 3-convex)

For example, if A, B, and D are 1-convex and C is 2-convex:

A - B ∩ C ∩ D →
    (A0f - B ∩ C ∩ D) ∪
    (B0b ∩ A ∩ C ∩ D) ∪
    (C0f ∩ A - B ∩ D) ∪
    (C1f ∩ A - B ∩ D) ∪
    (D0f ∩ A - B ∩ C)

Because the target term in each product has been reduced to a single front or back facing surface, the bounding volume of that term will be a subset of the bounding volume of the original complete primitive. Once the tree is converted to partial products, the pruning process may be applied again with these subset volumes.

In each resulting child subtree representing a partial product, the leftmost term is called the "target" surface, and the remaining terms on the right branches are called "trimming" primitives.

The resulting sum of partial products reduces the rendering problem to rendering each partial product correctly before drawing the union of the results. Each partial product is rendered by drawing the target surface of the partial product and then "classifying" the pixels generated by that surface with the depth values generated by each of the trimming primitives in the partial product. If pixels drawn by the trimming primitives pass the depth test an even number of times, that pixel in the target primitive is "out", and discarded. If the count is odd, the target primitive pixel is "in", and kept.

Because the algorithm saves depth buffer contents between each object, we optimize for depth saves and restores by drawing as many target and trimming primitives in each pass as will fit in the stencil buffer.
The algorithm uses one stencil bit (Sp) as a toggle for trimming primitive depth test passes (parity), n stencil bits for counting to the nth surface (Scount), where n is the smallest number for which 2^n is larger than the maximum convexity of a current object, and as many bits as are available (Sa) to accumulate whether target pixels have to be discarded. Because Scount will require the GL_INCR operation, it must be stored contiguously in the least-significant bits of the stencil buffer. Sp and Scount are used in two separate steps, and so may share stencil bits.

For example, drawing 2 5-convex primitives would require 1 Sp bit, 3 Scount bits, and 2 Sa bits. Because Sp and Scount can share stencil bits, the total number of stencil bits required would be 5.

Once the tree has been converted to a sum of partial products, the individual products are rendered. Products are grouped together so that as many partial products can be rendered between depth buffer saves and restores as the stencil buffer has capacity. For each group, writes to the color buffer are disabled, the contents of the depth buffer are saved, and the depth buffer is cleared. Then, every target in the group is classified against its trimming primitives. The depth buffer is then restored, and every target in the group is rendered against the trimming mask. The depth buffer save/restore can be optimized by saving and restoring only the region containing the screen-projected bounding volumes of the target surfaces.

for (each group) {
    glReadPixels(...);        /* save the depth buffer */
    <classify the group>
    glStencilMask(0);         /* so DrawPixels won't affect stencil */
    glDrawPixels(...);        /* restore the depth buffer */
    <render the group>
}

Classification consists of drawing each target primitive's depth value and then clearing those depth values where the target primitive is determined to be outside the trimming primitives.
glClearDepth(far);
glClear(GL_DEPTH_BUFFER_BIT);
a = 0;
for (each target surface in the group)
    for (each partial product targeting that surface)
        <render the depth values for the target surface>
        for (each trimming primitive in that partial product)
            <trim the depth values against the trimming primitive>
        <set Sa bit a where the target pixel is still "in">
        a++;

The depth values for the surface are rendered by drawing the primitive containing the target surface with color and stencil writes disabled. Scount is used to mask out all but the target surface. In practice, most CSG primitives are convex, so the algorithm is optimized for that case.

if (the target surface is front facing)
    glCullFace(GL_BACK);
else
    glCullFace(GL_FRONT);

if (the surface is 1-convex) {
    glDepthMask(1);
    glColorMask(0, 0, 0, 0);
    glStencilMask(0);
    <draw the primitive containing the target surface>
} else {
    glDepthMask(1);
    glColorMask(0, 0, 0, 0);
    glStencilMask(Scount);
    glStencilFunc(GL_EQUAL, index of surface, Scount);
    glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);
    <draw the primitive containing the target surface>
    glClearStencil(0);
    glClear(GL_STENCIL_BUFFER_BIT);
}

Then each trimming primitive for that target surface is drawn in turn. Depth testing is enabled and writes to the depth buffer are disabled. Stencil operations are masked to Sp and the Sp bit in the stencil is cleared to 0. The stencil function and operation are set so that Sp is toggled every time the depth test for a fragment from the trimming primitive succeeds. After drawing the trimming primitive, if this bit is 0 for uncomplemented primitives (or 1 for complemented primitives), the target pixel is "out" and must be marked "discard", by enabling writes to the depth buffer and storing the far depth value (Zf) into the depth buffer everywhere the Sp bit indicates "discard".

glDepthMask(0);
glColorMask(0, 0, 0, 0);
glStencilMask(mask for Sp);
glClearStencil(0);
glClear(GL_STENCIL_BUFFER_BIT);
glStencilFunc(GL_ALWAYS, 0, 0);
glStencilOp(GL_KEEP, GL_KEEP, GL_INVERT);
<draw the trimming primitive>
glDepthMask(1);
<write Zf where the Sp bit indicates "discard">

Once all the trimming primitives are rendered, the values in the depth buffer are Zf for all target pixels classified as "out". The Sa bit for that primitive is set to 1 everywhere that the depth value for a pixel is not equal to Zf, and 0 otherwise.

Each target primitive in the group is finally rendered into the framebuffer with depth testing and depth writes enabled, the color buffer enabled, and the stencil function and operation set to write depth and color only where the depth test succeeds and Sa is 1. Only the pixels inside the volumes of all the trimming primitives are drawn.
glDepthMask(1);
glColorMask(1, 1, 1, 1);
a = 0;
for (each target primitive in the group) {
    glStencilMask(0);
    glStencilFunc(GL_EQUAL, 1, Sa);
    glCullFace(GL_BACK);
    <draw the target primitive>
    glStencilMask(Sa);
    glClearStencil(0);
    glClear(GL_STENCIL_BUFFER_BIT);
    a++;
}

Further techniques are available for adding clipping planes (half-spaces), including more normalization rules and pruning opportunities [63]. This is especially important in the case of the near clipping plane in the viewing frustum.

Source code for dynamically loadable Inventor objects implementing this technique is available at the Martin Center at Cambridge web site [64].
4 Geometry and Transformations

OpenGL has a simple and powerful transformation model. Since the transformation machinery in OpenGL is exposed in the form of the modelview and projection matrices, it's possible to develop novel uses for the transformation pipeline. This section describes some useful transformation techniques, and provides some additional insight into the OpenGL graphics pipeline.

4.1 Stereo Viewing

Stereo viewing is a common technique to increase visual realism or enhance user interaction with 3D scenes. Two views of a scene are created, one for the left eye, one for the right. Some sort of viewing hardware is used with the display, so each eye only sees the view created for it. The apparent depth of objects is a function of the difference in their positions between the left and right eye views. When done properly, objects appear to have actual depth, especially with respect to each other. When animating, the left and right back buffers are used, and must be updated each frame.

OpenGL supports stereo viewing, with left and right versions of the front and back buffers. In normal, non-stereo viewing, when not using both buffers, the default buffer is the left one for both front and back buffers. Since OpenGL is window system independent, there are no interfaces in OpenGL for stereo glasses or other stereo viewing devices. This functionality is part of the OpenGL/window system interface library; the style of support varies widely.

In order to render a frame in stereo:

- The display must be configured to run in stereo mode.
- The left eye view for each frame must be generated in the left back buffer.
- The right eye view for each frame must be generated in the right back buffer.
- The back buffers must be displayed properly, according to the needs of the stereo viewing hardware.

Computing the left and right eye views is fairly straightforward. The distance separating the two eyes, called the interocular distance (IOD), must be determined.
Choose this value to give the proper spacing of the viewer's eyes relative to the scene being viewed. Whether the scene is microscopic or galaxy-wide is irrelevant; what matters is the size of the imaginary viewer relative to the objects in the scene. This distance should be correlated with the degree of perspective distortion present in the scene to produce a realistic effect.

4.1.1 Fusion Distance

The other parameter is the distance from the eyes at which the lines of sight for each eye converge. This distance is called the fusion distance. At this distance objects in the scene will appear to be on
the front surface of the display ("in the glass"). Objects farther than the fusion distance from the viewer will appear to be "behind the glass", while objects in front will appear to float in front of the display. The latter illusion is harder to maintain, since real objects visible to the viewer beyond the edge of the display tend to destroy the illusion.

Figure 16. Stereo Viewing Geometry

Although it is possible to create good-looking stereo scenes using dimensionless quantities, the best behavior occurs when everything is measured carefully. This is quite easy to do if the glFrustum call is used rather than the gluPerspective call. Pick a unit of measurement, then use those units for screen size, distance from viewer to screen, interocular distance, and so forth. It is a good idea to keep the code that computes the screen parameters separate from the rest of the application, to make it easier to port the program to different screen sizes or arrangements.

The view direction vector and the vector separating the left and right eye positions are perpendicular to each other. The two view points are located along a line perpendicular to the direction of view and the "up" direction. The fusion distance is measured along the view direction. The position of the viewer can be defined to be at one of the eye points, or halfway between them. In either case, the left and right eye locations are positioned relative to it.

If the viewer is taken to be halfway between the stereo eye positions, and assuming gluLookAt has been called to put the viewer position at the origin in eye space, then the fusion distance is measured along the negative z axis (like the near and far clipping planes), and the two viewpoints are on either side of the origin along the x axis, at (-IOD/2, 0, 0) and (IOD/2, 0, 0).

4.1.2 Computing the Transforms

The transformations needed for correct stereo viewing are simple translations and off-axis projections [13].
Computationally, the stereo viewing transforms happen last, after the viewing transform has been applied to put the viewer at the origin. Since the matrix order is the reverse of the order of operations, the viewing matrices should be applied to the modelview matrix first.
The order of matrix operations should be:

1. Transform from viewer position to left eye view.
2. Apply viewing operation to get to viewer position (gluLookAt or equivalent).
3. Apply modeling operations.
4. Change buffers, repeat for right eye.

Assuming that the identity matrix is on the modelview stack and that we want to look at the origin from a distance of EYE_BACK:

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();        /* the default matrix */
glPushMatrix();
glDrawBuffer(GL_BACK_LEFT);
gluLookAt(-IOD/2.0, 0.0, EYE_BACK,
           0.0, 0.0, 0.0,
           0.0, 1.0, 0.0);
draw();
glPopMatrix();
glPushMatrix();
glDrawBuffer(GL_BACK_RIGHT);
gluLookAt(IOD/2.0, 0.0, EYE_BACK,
          0.0, 0.0, 0.0,
          0.0, 1.0, 0.0);
draw();
glPopMatrix();

This method of implementing stereo transforms changes the viewing transform directly using a separate call to gluLookAt for each eye view. Move fusion distance along the viewing direction from the viewer position, and use that point for the center of interest of both eyes. Translate the eye position to the appropriate eye, then render the stereo view for the corresponding buffer. This method is quite simple when real-world measurements are used.

An alternative, but less correct, method of implementing stereo transforms is to translate the views left and right by half of the interocular distance, then rotate each view toward the centerline by the inverse tangent of the ratio between half of the interocular distance and the fusion distance:

angle = arctan((IOD/2) / fusiondistance)

With this method, each viewpoint is rotated towards the centerline halfway between the two viewpoints.
4.2 Depth of Field

Normal viewing transforms act like a perfect pinhole camera; everything visible is in focus, regardless of how close or how far the objects are from the viewer. To increase realism, a scene can be rendered to vary sharpness as a function of viewer distance, more accurately simulating a camera with a finite depth of field.

Depth-of-field and stereo viewing are similar. In both cases, there is more than one viewpoint, with all view directions converging at a fixed distance along the direction of view. When computing depth of field transforms, however, we only use shear instead of rotation, and sample a number of viewpoints, not just two, along an axis perpendicular to the view direction. The resulting images are blended together.

This process creates images where the objects in front of and behind the fusion distance shift position as a function of viewpoint. In the blended image, these objects appear blurry. The closer an object is to the fusion distance, the less it shifts, and the sharper it appears. The depth of field can be expanded by increasing the ratio between the fusion distance and the viewpoint shift; this way objects have to be farther from the fusion distance to shift significantly. For details on rendering scenes featuring a limited depth of field see Section 9.1.

4.3 The Z Coordinate and Perspective Projection

The z coordinates are treated in the same fashion as the x and y coordinates. After transformation, clipping and perspective division, they occupy the range -1.0 through 1.0. The glDepthRange mapping specifies a transformation for the z coordinate similar to the viewport transformation used to map x and y to window coordinates. The glDepthRange mapping is somewhat different from the viewport mapping in that the hardware resolution of the depth buffer is hidden from the application. The parameters to the glDepthRange call are in the range [0.0, 1.0].
The z or depth associated with a fragment represents the distance to the eye. By default the fragments nearest the eye (those at the near clip plane) are mapped to 0.0 and the fragments farthest from the eye (those at the far clip plane) are mapped to 1.0. Fragments can be mapped to a subset of the depth buffer range by using smaller values in the glDepthRange call. The mapping may be reversed so that fragments farthest from the eye are at 0.0 and fragments closest to the eye are at 1.0 simply by calling glDepthRange(1.0, 0.0). While this reversal is possible, it may not be practical for the implementation. Parts of the underlying architecture may have been tuned for the forward mapping and may not produce results of the same quality when the mapping is reversed.

To understand why there might be this disparity in the rendering quality, it's important to understand the characteristics of the window z coordinate. The z value specifies the distance from the fragment to the plane of the eye. The relationship between distance and z is linear in an orthographic projection, but not in a perspective projection. In the case of a perspective projection, the amount of the non-linearity is proportional to the ratio of far to near in the glFrustum call (or zFar to zNear in the gluPerspective call). Figure 17 plots the window coordinate z value as a function of the eye-to-pixel distance for several ratios of far to near. The non-linearity increases the resolution of the
z-values when they are close to the near clipping plane, increasing the resolving power of the depth buffer, but decreasing the precision throughout the rest of the viewing frustum, thus decreasing the accuracy of the depth buffer in the back part of the viewing volume.

Figure 17: Window z to Eye z Relationship for near/far Ratios

For objects a given distance from the eye, however, the depth precision is not as bad as it looks in Figure 17. No matter how far back the far clip plane is, at least half of the available depth range is present in the first "unit" of distance. In other words, if the distance from the eye to the near clip plane is one unit, at least half of the z range is used up in the first unit from the near clip plane towards the far clip plane. Figure 18 plots the z range for the first unit distance for various ranges. With a million to one ratio, the z value is approximately 0.5 at one unit of distance. As long as the data is mostly drawn close to the near plane, the z precision is good. The far plane could be set to infinity without significantly changing the accuracy of the depth buffer near the viewer.

To achieve the greatest depth buffer precision, the near plane should be moved as far from the eye as possible without touching the object, which would cause part or all of it to be clipped away. The position of the near clipping plane has no effect on the projection of the x and y coordinates and therefore has minimal effect on the image. Putting the near clip plane closer to the eye than to the object results in loss of depth buffer precision.

In addition to depth buffering, the z coordinate is also used for fog computations. Some implementations may perform the fog computation on a per-vertex basis using eye z and then interpolate the resulting colors, whereas other implementations may perform the computation for each fragment.
In the per-fragment case, the implementation may use the window z to perform the fog computation. Implementations may also choose to convert the computation into a cheaper table lookup operation, which can also cause difficulties with the non-linear nature of window z under perspective projections. If the
implementation uses a linearly indexed table, large far to near ratios will leave few table entries for the large eye z values. This can cause noticeable Mach bands in fogged scenes.

Figure 18: Available Window z Depth Values for Various near/far Ratios

4.3.1 Depth Buffering

We have discussed some of the caveats of using depth buffering, but there are several other aspects of OpenGL rasterization and depth buffering that are worth mentioning [2]. One big problem is that the rasterization process uses inexact arithmetic, so it is exceedingly difficult to handle primitives that are coplanar unless they share the same plane equation. This problem is exacerbated by the finite precision of depth buffer implementations. Many solutions have been proposed to handle this class of problems, which involve coplanar primitives:

1. Decaling
2. Hidden line elimination
3. Outlined polygons
4. Shadows

Many of these problems have elegant solutions involving the stencil buffer, but it is still worth describing alternative methods to get more insight into the uses of the depth buffer.

The problem of decaling one coplanar polygon into another can be solved rather simply by using the painter's algorithm (i.e., drawing from back to front) combined with color buffer and depth buffer masking, assuming the decal is contained entirely within the underlying polygon. The steps are:
1. Draw the underlying polygon with depth testing enabled but depth buffer updates disabled.

2. Draw the top layer polygon (decal), also with depth testing enabled and depth buffer updates still disabled.

3. Draw the underlying polygon one more time with depth testing and depth buffer updates enabled, but color buffer updates disabled.

4. Enable color buffer updates and continue on.

Outlining a polygon and drawing hidden lines are similar problems. If we have an algorithm to outline polygons, hidden lines can be removed by outlining polygons with one color and drawing the filled polygons with the background color. Ideally a polygon could be outlined by simply connecting the vertices together with line primitives. This seems similar to the decaling problem, except that edges of the polygon being outlined may be shared with other polygons, and those polygons may not be coplanar with the outlined polygon, so the decaling algorithm cannot be used, since it relies on the coplanar decal being fully contained within the base polygon.

The solution most frequently suggested for this problem is to draw the outline as a series of lines and translate the outline a small amount towards the eye. Alternately, the polygon could be translated away from the eye instead. Besides not being a particularly elegant solution, there is a problem in determining the amount to translate the polygon (or outline). In fact, in the general case there is no constant amount, expressed as a simple translation of the z object coordinate, that will work for all polygons in a scene.

Figure 19: Polygon and Outline Slopes (two polygons with outlines in the screen-space y-z plane; a base offset suffices for the shallow slope, while more offset is needed with more slope)

Figure 19 shows two polygons (solid) with outlines (dashed) in the screen-space y-z plane. One of the primitive pairs has a 45-degree slope in the y-z plane and the other has a very steep slope.
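The four decal steps described above can be sketched in OpenGL as follows. This is a minimal sketch; draw_base() and draw_decal() are hypothetical functions standing in for whatever issues the geometry of the underlying polygon and the decal.

```c
#include <GL/gl.h>

void draw_base(void);   /* hypothetical: draws the underlying polygon */
void draw_decal(void);  /* hypothetical: draws the coplanar decal */

void draw_decaled_polygon(void)
{
    glEnable(GL_DEPTH_TEST);

    /* Step 1: base polygon with depth testing on, depth writes off. */
    glDepthMask(GL_FALSE);
    draw_base();

    /* Step 2: decal polygon; depth state is unchanged, so it lands
     * on top of the base without z-fighting. */
    draw_decal();

    /* Step 3: base polygon again to restore its depth values, with
     * color buffer updates disabled so the decal stays visible. */
    glDepthMask(GL_TRUE);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    draw_base();

    /* Step 4: re-enable color buffer updates and continue on. */
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
}
```

Note that this relies on the painter's algorithm: the decaled polygon must be drawn in back-to-front order relative to the rest of the scene, since depth writes are off during steps 1 and 2.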
During the rasterization process the depth value for a given fragment may be derived from a sample point nearly an entire pixel away from the edge of the polygon. Therefore the translation must be as large as the maximum absolute change in depth for any single pixel step on the face of the polygon. The figure shows that the steeper the depth slope, the larger the required translation. If an unduly large
constant value is used to deal with steep depth slopes, then for polygons which have a shallower slope there is an increased likelihood that another neighboring polygon might end up interposed between the outline and the polygon. So it seems that a translation proportional to the depth slope is necessary. However, a translation proportional to slope is not sufficient for a polygon that has constant depth (zero slope), since it would not be translated at all. Therefore a bias is also needed. Many vendors have implemented the EXT_polygon_offset extension, which provides a scaled slope plus bias capability for solving outline problems such as these and for other applications. A modified version of this polygon offset extension has been added to the core of OpenGL 1.1 as well.

4.4 Image Tiling

When rendering a scene in OpenGL, the resolution of the image is normally limited to the workstation screen size. For interactive applications this is usually sufficient, but there may be times when a higher resolution image is needed. Examples include color printing applications and computer graphics recorded for film. In these cases, higher resolution images can be divided into tiles that fit on the workstation's framebuffer. The image is rendered tile by tile, with the results saved into off-screen memory, or perhaps a file. The image can then be sent to a printer or film recorder, or undergo further processing, such as downsampling to produce an antialiased image.

One very straightforward way to tile an image is to manipulate the glFrustum call's arguments. The scene can be rendered repeatedly, one tile at a time, by changing the left, right, bottom and top arguments of glFrustum for each tile. Computing the argument values is straightforward: divide the original width and height range by the number of tiles horizontally and vertically, and use those values to parametrically find the left, right, top, and bottom values for each tile. For tile(i, j), with i: 0 -> nTiles_horiz - 1 and j: 0 -> nTiles_vert - 1:

    left_tiled(i)   = left_orig + (right_orig - left_orig) * i / nTiles_horiz
    right_tiled(i)  = left_orig + (right_orig - left_orig) * (i + 1) / nTiles_horiz
    bottom_tiled(j) = bottom_orig + (top_orig - bottom_orig) * j / nTiles_vert
    top_tiled(j)    = bottom_orig + (top_orig - bottom_orig) * (j + 1) / nTiles_vert

In the equations above, each value of i and j corresponds to a tile in the scene. If the original scene is divided into nTiles_horiz by nTiles_vert tiles, then iterating through the combinations of i and j generates the left, right, top, and bottom values for glFrustum to create each tile. Since glFrustum has a shearing component in the matrix, the tiles stitch together seamlessly to form the scene. Unfortunately, this technique would have to be modified for use with
gluPerspective or glOrtho. There is a better approach, however. Instead of modifying the perspective transform call directly, apply transforms to its results. The area of normalized device coordinate (NDC) space corresponding to the tile of interest is translated and scaled so it fills the NDC cube. Working in NDC space instead of eye space makes finding the tiling transforms easier, and is independent of the type of projective transform.

Even though it's easy to visualize the operations happening in NDC space, conceptually you can "push" the transforms back into eye space, and the technique maps into the glFrustum approach described above. For the transform operations to happen after the projection transform, the OpenGL calls must happen before it. Here is the sequence of operations:

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glScalef(xScale, yScale, 1.f);
glTranslatef(xOffset, yOffset, 0.f);
setProjection();

The scale factors xScale and yScale scale the tile of interest to fill the entire scene:

    xScale = sceneWidth / tileWidth
    yScale = sceneHeight / tileHeight

The offsets xOffset and yOffset are used to center the tile about the z axis. In this example, the tiles are specified by their lower left corner relative to their position in the scene, but the translation needs to move the center of the tile into the origin of the x-y plane in NDC space:

    xOffset = (-2 * left) / sceneWidth + (1 - 1 / nTiles_horiz)
    yOffset = (-2 * bottom) / sceneHeight + (1 - 1 / nTiles_vert)

As before, nTiles_horiz is the number of tiles that span the scene horizontally, while nTiles_vert is the number of tiles that span the scene vertically.

Some care should be taken when computing the left, bottom, tileWidth and tileHeight values. It's important that each tile abuts properly with its neighbors. Ensure this by guarding against round-off errors. Some code that properly computes these values is given below:
/* tileWidth and tileHeight are GLfloats */
GLfloat bottom, top;
GLfloat left, right;
GLfloat width, height;
GLint i, j;

for(j = 0; j < num_vertical_tiles; j++) {
    for(i = 0; i < num_horizontal_tiles; i++) {
        left = i * tileWidth;
        right = (i + 1) * tileWidth;
        bottom = j * tileHeight;
        top = (j + 1) * tileHeight;
        width = right - left;
        height = top - bottom;
        /* compute xScale, yScale, xOffset, yOffset */
    }
}

Note that the parameter values are computed so that left + tileWidth is guaranteed to be equal to right and equal to left of the next tile over, even if tileWidth has a fractional component (for this reason the values must be kept in floating point rather than truncated to integers). If the frustum technique is used, similar precautions should be taken with the left, right, bottom, and top parameters to glFrustum.

4.5 Moving the Current Raster Position

When the raster position is specified with the glRasterPos command, it is marked invalid if the specified position was culled. Since glDrawPixels and glCopyPixels operations applied when the raster position is invalid do not draw anything, it may seem that the lower left corner of a pixel rectangle must be inside the clip rectangle. This limitation can be overcome with the glBitmap command, which takes arguments xoff and yoff specifying an increment to be added to the current raster position. Assuming the current raster position is valid, it may be moved outside the clipping rectangle by a glBitmap command; glBitmap is often used with a zero-size rectangle in exactly this way, purely to move the raster position.

4.6 Preventing Clipping of Wide Lines and Points

It's important to note that OpenGL points are clipped if their projected position is beyond the viewport. If a point size other than 1 is specified with glPointSize, the point will appear to "pop" out of view when the center of the wide point exits the viewport. This is because the point itself has no area, and as such is clipped based solely on its position. An example scenario is shown in Figure 20. Wide lines have the same problem.
The line is clipped to the viewport, and thus some pixels contributed by the original line are no longer drawn, as shown in Figure 20. This problem is more significant in a multiple-display setting, such as a three-monitor flight simulator, or in a multiple-viewport setting such as a cylindrical projection.
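Returning to the raster position technique of Section 4.5, a minimal sketch follows; the starting position and the name move_raster_position are arbitrary illustrative choices.

```c
#include <GL/gl.h>
#include <stddef.h>

/* Move the current raster position by (xoff, yoff) window-space
 * pixels, even to a location outside the clip rectangle.  The
 * initial glRasterPos position must lie inside the view volume so
 * that the raster position is valid before the offset is applied. */
void move_raster_position(GLfloat xoff, GLfloat yoff)
{
    glRasterPos2f(0.f, 0.f);  /* any unclipped position */

    /* A zero-size bitmap draws nothing; only the xmove/ymove
     * increments (the last two arguments) take effect. */
    glBitmap(0, 0, 0.f, 0.f, xoff, yoff, NULL);

    /* A subsequent glDrawPixels or glCopyPixels now draws at the
     * offset position. */
}
```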