
Volume Clipping via Per-Fragment Operations in Texture-Based Volume Visualization

Daniel Weiskopf, Klaus Engel, Thomas Ertl

Visualization and Interactive Systems Group, University of Stuttgart, Germany

ABSTRACT

We propose new clipping methods that are capable of using complex geometries for volume clipping. The clipping tests exploit per-fragment operations on the graphics hardware to achieve high frame rates. In combination with texture-based volume rendering, these techniques enable the user to interactively select and explore regions of the data set. We present depth-based clipping techniques that analyze the depth structure of the boundary representation of the clip geometry to decide which parts of the volume have to be clipped. In another approach, a voxelized clip object is used to identify the clipped regions.

CR Categories: I.3.3 [Computer Graphics]: Picture/Image Generation; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism—Color, shading, shadowing, and texture

Keywords: volume rendering, clipping, hardware acceleration

1 INTRODUCTION

Volume clipping plays a decisive role in understanding 3D volumetric data sets because it allows cutting away selected parts of the volume based on information about the position of voxels in the data set. Very often, clipping is the only way to uncover important, otherwise hidden details of a data set. We think that this geometric approach can be regarded as complementary to the specification of transfer functions, which are based on the data values and their derivatives only. Therefore, we suggest using a combination of data-driven transfer functions and geometry-guided clipping to achieve very effective volume visualization.

The design of useful transfer functions has recently attracted some attention [13]. One important issue is how to generate a transfer function automatically [10]. Current work is focused on other aspects and mainly deals with the question of how multidimensional transfer functions [11] are better suited to extract information than a classification that is based only on the scalar values of the data set.

In most applications of interactive volume clipping, however, only a straightforward approach is adopted, with one or multiple clip planes serving as clip geometry. Typical applications are medical imaging of radiological data or volume slicing of huge seismic 3D data sets in the oil and gas industry [8, 16]. Although some clip geometries can be approximated by multiple clip planes, many useful and important geometries are not supported. A prominent example is shown in Figure 1: cutting a cube-shaped opening into a volume is not facilitated by clip planes. Therefore, it is quite surprising that volume clipping has rather been neglected as a research topic in scientific visualization and that clipping approaches beyond standard clip planes have rarely been considered.

Figure 1: Volume clipping in an MR data set.


In this paper, we present various ways to efficiently implement complex clip geometries. All methods are tailored to texture-based volume rendering on graphics hardware. All hardware-based volume visualization approaches reduce the full 3D problem to a collection of 2D slices. Since these slices are displayed by the graphics hardware, we can make use of its features, such as advanced fragment operations. Our clipping approaches support both 3D textures with slices perpendicular to the viewing direction [1, 2] and stacks of 2D textures. For each fragment in a slice, fragment operations and tests on the dedicated graphics hardware decide whether the fragment should be rendered or not. Therefore, operations on the slower CPU are avoided. The proposed clipping techniques make it possible to change the position, orientation, and size of clip objects in real time. This is an important advantage because interactivity provides the user with immediate visual feedback.

In our first approach, a clip object is represented by a tessellated boundary surface. The basic idea is to store the depth structure of the clip geometry in 2D textures whose texels have a one-to-one correspondence to pixels on the viewing plane. The fragment operations provided by NVidia's GeForce 3/4 make it possible to manipulate depth values; in this way, the depth structure of the clip object can be used to clip away unwanted parts of the volume. This depth-based approach is particularly well suited for producing high-quality images because clipping is performed with per-pixel accuracy.

In the second approach, a clip object is voxelized and represented by an additional volume data set. Clip regions are specified by marking corresponding voxels in this volume. Graphics hardware can map multiple textures onto a single slice polygon by means of multiple parallel texture stages. Therefore, the volume for the actual data set and the clipping volume are combined by fragment operations to perform clipping.

We think that complex clip geometries could improve visualization in many fields of application. For example, current implementations of 3D LIC (line integral convolution) [14] already benefit from the use of volume clipping. Without clipping, the dense representation of streamlines in 3D LIC would cover important features of the flow further inside the volume. Complex clip geometries can be adapted to curved boundaries of the problem domain in flow simulations. For example, the clip object could be aligned to the surface of a car body in automotive applications of CFD (computational fluid dynamics). Many interesting features of a flow are close to such boundaries and could easily be explored after clipping.

Medical imaging is another field that would profit from sophisticated clipping approaches. For example, segmentation information, which is often available, could serve as a basis for defining curved clip geometries. In this way, specific features of the data set could be uncovered. In a sense, volume clipping is able to map outside semantic information into the volume data set. Finally, volume clipping could be used for interactively modeling simple CSG (constructive solid geometry) objects.

The paper is organized as follows. In the next section, previous work is briefly described. In Sections 3–5, different clipping techniques based on the evaluation of depth information are presented. Section 6 contains a description of clipping by means of a voxelized volumetric object. It is followed by a presentation of results and a discussion. The paper ends with a short conclusion and an outlook on future work.

2 PREVIOUS WORK

Van Gelder and Kim [15] use clip planes to specify the boundaries of the data set in 3D texture-based volume rendering. Westermann and Ertl [17] propose volume clipping based on stencil tests. In this approach, the geometry of the clip object has to be rendered for each slice to set the stencil buffer correctly. A clip plane that is co-planar with the current slice discards the clip geometry in front of this slice. In this way, stencil buffer entries are set at those positions where the clip plane is covered by an inside part of the clip geometry. Finally, the slice that is textured by the actual data set is rendered into the frame buffer. By using the stencil test, fragments are correctly clipped away.

In [4], an approach similar to one of the depth-based methods of this paper is used for depth-sorting semi-transparent surfaces. This technique is related to depth peeling by Everitt [6], building on the ideas of virtual pixel maps [12] and of dual depth buffers [3]. A related field of research deals with issues of spatial sorting in more generality, without considering specific issues of volume rendering. For a survey of this well-established topic we refer, for example, to Foley et al. [7] or Durand [5]. These references also cover object-space algorithms, which are not discussed in this paper.

3 BASICS OF DEPTH-BASED CLIPPING

In this section, we focus on algorithms for volume clipping that exploit the depth structure of the clip geometry. In all of our depth-based approaches (Sections 3–5), we assume that volumetric clip objects are defined by boundary surface representations. Graphics hardware expects the surface to be tessellated, usually in the form of triangular meshes.

Figure 2 illustrates the depth structure of a typical clip object. In what follows, we will no longer consider a complete 3D scenario, but reduce the problem to a 1D geometry along a single light ray that originates from the eye point and reaches into the scene.

Figure 2: Depth structure of a clip geometry for a single pixel in the frame buffer. (Diagram: an eye ray from the eye point through a pixel on the image plane intersects the object; the z values of these intersections form the depth structure, with inside intervals marked.)

From this point of view, just the corresponding single pixel on the image plane is taken into account. Therefore, the following descriptions can be mapped to operations on those fragments that correspond to the respective position of the pixel in the frame buffer. In the complete volume rendering process, slices through the volume are partitioned into a collection of these fragments during scan conversion. In this sense, the fragment-based point of view directly leads to volume clipping in full 3D space.

Let us start with a basic algorithm for clip objects of arbitrary topology and geometry. First, the depth structure of the clip geometry is built for the current pixel. This structure stores the depth values for each boundary between object and background space. In this way, the 1D depth values are partitioned into intervals whose bounds are given by boundaries of the objects. In addition, the intervals are classified either as inside or as outside of the clip object. The classification is determined by the clip geometry and, for example, can be based on the orientation of the tessellated surface or on the direction of normal vectors associated with the clip geometry. Note that, for a closed, non-selfpenetrating clip surface, the classification alternates repeatedly between outside and inside. The user can choose to clip away either the inside or the outside of the clip object. In this way, the intervals are uniquely specified as visible or invisible. Secondly, the rendering of each fragment of a volume slice has to check which class of interval the fragment belongs to. Based on this visibility property, the fragment is blended into the frame buffer or clipped away.
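To make the depth-structure concept concrete, the following minimal C++ sketch (our own illustration, not code from the paper) models the per-pixel depth structure as a list of boundary depths, sorted by increasing z, with an alternating inside/outside classification:

    #include <vector>

    struct DepthBoundary {
        float z;        // depth of the boundary in [0,1] (window coordinates)
        bool  entering; // true if the ray enters the clip object at this boundary
    };

    // Evaluate the visibility property of a fragment at depth zFragment against
    // the per-pixel depth structure (boundaries sorted by increasing z).
    bool insideClipObject(const std::vector<DepthBoundary>& structure,
                          float zFragment)
    {
        bool inside = false; // the ray starts outside the clip object
        for (const DepthBoundary& b : structure) {
            if (b.z > zFragment)
                break;           // boundaries behind the fragment are irrelevant
            inside = b.entering; // the last boundary crossed determines the state
        }
        return inside;
    }

On the graphics hardware, of course, no such explicit list exists; the remainder of this section discusses how the same decision is realized with buffers, textures, and per-fragment tests.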

The following issues have to be considered for an actual implementation of the above algorithm: What kind of data structure is appropriate to store the depth structure? How is the depth structure generated from the clip geometry? How is the visibility property of a fragment evaluated?

To be more specific, the question is how the above algorithm can be efficiently implemented on current graphics hardware. Since the depth structure is stored on a per-pixel basis, a pixel-oriented mechanism has to be considered, such as a buffer or texture on the graphics board. However, the need for rather high accuracy of depth values rules out color-based “storage media” on the GPU such as the frame buffer or a texture with low accuracy (for example, a standard RGB or RGBα texture).

A straightforward approach exploits the depth buffer as storage medium for interval boundaries. A major disadvantage of this approach is the fact that only one depth value can be stored per pixel. Therefore, just a single boundary can be implemented, which greatly reduces the fields of application. The implementation of this approach begins with rendering a simple clip geometry to the depth buffer, i.e., the depth structure is generated from the clip geometry by standard scan conversion on the graphics hardware. Then, writing to the depth buffer is disabled and depth testing is enabled. Finally, the slices of the volume data set are rendered and blended into the frame buffer. Here, the depth test implements the evaluation of the visibility property of a fragment.


Figure 3: Illustration of depth-based volume probing. (Diagram: along an eye ray through a convex clip object, fragments in front of z_front are removed by the clipping test, fragments behind z_back are removed by the depth test, and fragments in between remain visible.)

The user can choose between two variants by setting the logical operator for the depth test either to “less” or “greater”. In the first case, the volume is clipped away behind the geometry from the camera's point of view. The latter makes it possible to clip away the volume in front of the clip geometry. This simple approach can easily be implemented with standard OpenGL, as the following sketch shows.
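A minimal OpenGL sketch of this single-boundary variant (our reconstruction; renderClipGeometry and renderVolumeSlices are hypothetical placeholders for the application's own drawing routines):

    #include <GL/gl.h>

    void renderClipGeometry();  // hypothetical: scan-converts the clip surface
    void renderVolumeSlices();  // hypothetical: draws textured slices back to front

    void drawSingleBoundaryClippedVolume()
    {
        // Pass 1: write the single boundary of the depth structure.
        glClear(GL_DEPTH_BUFFER_BIT);
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); // depth only
        glEnable(GL_DEPTH_TEST);
        glDepthMask(GL_TRUE);
        renderClipGeometry();

        // Pass 2: render the volume slices against this boundary.
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
        glDepthMask(GL_FALSE);   // keep the stored boundary intact
        glDepthFunc(GL_LESS);    // "less": clip the volume behind the geometry;
                                 // GL_GREATER clips the volume in front instead
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        renderVolumeSlices();
    }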

4 DEPTH-BASED VOLUME PROBING

In this section, we demonstrate how a maximum number of two boundaries along the depth structure can be efficiently implemented on graphics hardware by exploiting depth tests, depth clipping, and depth computations in the fragment operations unit. Let us assume that the volume is clipped away outside this interval. This volume probing approach leaves visible only the volume inside the clip object. A convex and closed clip geometry always obeys the restriction to a maximum number of two boundaries because, for any viewing parameter, an eye ray cannot intersect the object more than twice. Concave clip geometries can benefit from this approach as well: very often a concave clip geometry can be approximated by two boundaries without introducing inappropriate artifacts, cf. the results and discussion in Section 7.

The basic algorithm for depth-based volume probing is as follows. First, the depth values, $z_{\text{front}}$, for the first boundary are determined by rendering the frontfaces of the clip geometry into the depth buffer. Secondly, the contents of the depth buffer are stored in a texture and the fragment shader is configured to shift the depth values of all fragments in the following rendering passes by $-z_{\text{front}}$. Thirdly, the depth buffer is cleared and the backside of the clip geometry is rendered into the depth buffer (with depth shift enabled) to build the second boundary. Finally, slices through the volume data set are rendered and blended into the frame buffer. Here, depth shift and depth testing are enabled, but the depth buffer is not modified.
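The four steps might be organized as in the following schematic sketch (our reading of the algorithm; the helper routines are hypothetical, and the depth-shift shader setup is detailed in the rest of this section):

    #include <GL/gl.h>

    void renderClipGeometry();            // hypothetical
    void copyDepthBufferToHiloTexture();  // hypothetical, see the sketch below
    void enableDepthShiftShader();        // hypothetical: z := z - z_front
    void renderVolumeSlicesBlended();     // hypothetical: blended, textured slices

    void drawDepthProbedVolume()
    {
        glEnable(GL_DEPTH_TEST);
        glEnable(GL_CULL_FACE);

        // 1. First boundary: depth values z_front of the frontfaces.
        glClear(GL_DEPTH_BUFFER_BIT);
        glCullFace(GL_BACK);              // keep frontfaces only
        renderClipGeometry();

        // 2. Store the depth buffer in a texture and enable the depth shift.
        copyDepthBufferToHiloTexture();
        enableDepthShiftShader();

        // 3. Second boundary: backfaces with depth shift enabled, so the
        //    depth buffer holds z_back - z_front.
        glClear(GL_DEPTH_BUFFER_BIT);
        glCullFace(GL_FRONT);             // keep backfaces only
        renderClipGeometry();

        // 4. Volume slices: depth shift and depth test on, depth buffer frozen.
        glDepthMask(GL_FALSE);
        glDepthFunc(GL_LEQUAL);           // the "<=" depth test derived below
        renderVolumeSlicesBlended();
        glDepthMask(GL_TRUE);
    }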

Let us examine in more detail how the depth of a fragment determines its visibility. The visibility depends on both depth clipping and depth testing. The decision whether a fragment passes depth clipping and depth testing can be expressed by a boolean function $d_f(z_f)$, where $z_f$ is the depth of the fragment. Depth values are assumed to be described with respect to normalized window coordinates, i.e., valid depth values lie in $[0,1]$. In general, the value of $d_f(z_f)$ can formally be written as a logical expression,

$$d_f(z_f) = d_{\text{clip}}(z_f) \land (z_f \;\text{op}\; z_b), \tag{1}$$

with

$$d_{\text{clip}}(z_f) = (z_f \ge 0) \land (z_f \le 1).$$

In this notation, $\land$ symbolizes a logical “and”. Depth clipping against the bounds 0 and 1 of the view frustum is described by $d_{\text{clip}}(z_f)$. The binary logical operator op is used for depth testing against $z_b$, the current entry in the depth buffer.

Figure 4: Texture shader for DotProductDepthReplace. (Diagram: stage 0 performs a 2D lookup with coordinates $(s_0, t_0, q_0)$ into a HILO texture; stage 1 computes the dot product $Z = (s_1, t_1, r_1) \cdot (H, L, 1)$; stage 2 computes $W = (s_2, t_2, r_2) \cdot (H, L, 1)$ and replaces the current fragment's depth by $Z/W$.)

Figure 3 illustrates how our clipping algorithm operates on the depth structure. A shift by $-z_{\text{front}}$ is applied to all depth values. By rendering the backfaces of the clip object, entries in the depth buffer are initialized to $z_{\text{back}} - z_{\text{front}}$, where $z_{\text{back}}$ is the unmodified depth of the backface's fragment. During the final rendering of the volume slices, the logical operator for the depth test is set to “$\le$”. Therefore, Eq. (1) can be written as

$$
\begin{aligned}
d_f(z_f) &= \big( (z_f - z_{\text{front}} \ge 0) \land (z_f - z_{\text{front}} \le 1) \big) \land \big( z_f - z_{\text{front}} \le z_{\text{back}} - z_{\text{front}} \big) \\
         &= d_{\text{probe}}(z_f) \land (z_f \le 1 + z_{\text{front}}),
\end{aligned}
$$

with

$$d_{\text{probe}}(z_f) = (z_f \ge z_{\text{front}}) \land (z_f \le z_{\text{back}}). \tag{2}$$

The term $d_{\text{probe}}(z_f)$ exactly represents the required logical operation for displaying the volume only in the interval $[z_{\text{front}}, z_{\text{back}}]$. The last term, $(z_f \le 1 + z_{\text{front}})$, implies just a clipping against a modified far clipping plane. By shifting depth values by $-z_{\text{front}}$ before depth clipping, only the fragments with $z \ge z_{\text{front}}$ stay in the valid range of depth values, and thus the volume in front of the clip geometry is cut away. Conversely, the test against the far boundary of the clip object makes use of depth testing, analogously to the simple clipping algorithm in Section 3.

In the remainder of this section, we focus on how a shift of depth values by $-z_{\text{front}}$ can be realized on graphics hardware. An analogous implementation is used in [4] for depth-sorting semi-transparent surfaces. The following description is based on vertex and texture shader programs available on NVidia's GeForce 3/4 in the form of OpenGL extensions [9].

After the frontfaces of the clip geometry are rendered to the depth buffer, the contents of the depth buffer are transferred to main memory as 32 bit unsigned integers per depth value $z_{\text{front}}$. Subsequently, a so-called HILO texture object containing these depth values is defined. A HILO texture is a high-resolution 2D texture consisting of two 16 bit unsigned integers per texel. An original 32 bit depth component can be regarded as a HILO pair of two 16 bit short integers. In this way, a remapping of depth values can be avoided: the unaltered contents of the depth buffer are transferred from main memory to texture memory.
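In outline, the readback and re-upload could look as follows (a sketch; the tokens GL_HILO_NV and GL_HILO16_NV are taken from the NV_texture_shader extension [9], and the exact pixel-transfer parameters should be checked against that specification):

    #include <GL/gl.h>
    #include <GL/glext.h>   // NV_texture_shader tokens
    #include <vector>

    // Transfer the depth buffer to main memory and re-upload it unchanged:
    // each 32 bit depth value becomes one HILO texel of two 16 bit integers.
    void copyDepthBufferToHiloTexture(GLuint hiloTex, int width, int height)
    {
        std::vector<GLuint> depth(width * height);
        glReadPixels(0, 0, width, height,
                     GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, depth.data());

        glBindTexture(GL_TEXTURE_2D, hiloTex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_HILO16_NV, width, height, 0,
                     GL_HILO_NV, GL_UNSIGNED_SHORT, depth.data());
    }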

Then, a texture shader program is enabled which replaces the $z$ value of a fragment by $z - z_{\text{front}}$, where $z_{\text{front}}$ represents the $z$ value stored in the HILO texture. The texture shader program is based on the DotProductDepthReplace fragment operation. Figure 4 illustrates the structure of the depth replace program, which always consists of three texture stages. Stage zero performs a standard 2D texture lookup in a HILO texture. Stages one and two compute two dot products, $Z$ and $W$, between the respective texture coordinates and the previously fetched values from the HILO texture. Still in stage two, the current fragment's depth is replaced by $Z/W$.

The texture coordinates for each texture stage have to be set properly in order to achieve the required shift of depth values by $-z_{\text{front}}$.


Figure 5: Image (a) shows the valid values $x$ for $f(x) = x$; image (b) shows the valid values $x$ for $g(x) = 1/x$. (Diagram: graphs of $f$ and $g$ with horizontal lines at $\pm 1$; the valid $x$ ranges are marked on the $x$ axis.)

Texture coordinates are issued on a per-vertex basis within the transform and lighting part of the rendering pipeline and can be computed by a vertex program. The following three coordinate systems have to be considered: homogeneous clip coordinates $(x_c, y_c, z_c, w_c)$, normalized device coordinates $(x_n, y_n, z_n) = (x_c/w_c, y_c/w_c, z_c/w_c)$, and window coordinates $(x_w, y_w, z_w)$. Valid normalized device coordinates lie in $[-1,1]^3$; valid window coordinates are assumed to lie in $[0,1]^3$.

The standard transform and lighting part of the vertex program computes homogeneous clip coordinates from the original object coordinates by taking into account the modeling, viewing, and perspective transformations. Clip coordinates are linearly interpolated between vertices during the scan conversion of triangles and cause a hyperbolic interpolation with respect to normalized device coordinates and window coordinates. The division by $w_c$ is performed on a per-fragment basis. For perspectively correct texture mapping, texture coordinates analogous to the above clip coordinates have to be used. Consequently, the vertex program may only assign clip, but not device or window, coordinates.

The homogeneous texture coordinates for stage zero are set to

$$(s_0, t_0, q_0) = \left( \frac{x_c + w_c}{2},\; \frac{y_c + w_c}{2},\; w_c \right).$$

Therefore, the lookup in the HILO texture is addressed by

$$\left( \frac{s_0}{q_0},\; \frac{t_0}{q_0} \right) = \left( \frac{x_n + 1}{2},\; \frac{y_n + 1}{2} \right) = (x_w, y_w),$$

allowing for a mapping to the range of texture coordinates, $[0,1]^2$, after the division by $q_0$. Therefore, a one-to-one correspondence between $x$–$y$ coordinates of pixels in the frame buffer and texels in the HILO texture is established.

The texture coordinates for stages one and two are set to

$$(s_1, t_1, r_1) = \left( \frac{-w_c}{2^{16}},\; -w_c,\; \frac{z_c + w_c}{2} \right), \qquad (s_2, t_2, r_2) = (0,\; 0,\; w_c),$$

yielding the intermediate homogeneous coordinate $Z$ from the dot product in texture stage one,

$$Z = \left( \frac{-w_c}{2^{16}},\; -w_c,\; \frac{z_c + w_c}{2} \right) \cdot (HI, LO, 1) = w_c \left( -z_{w,\text{shift}} + \frac{z_n + 1}{2} \right) = w_c \left( -z_{w,\text{shift}} + z_w \right),$$

where $2^{-16} HI + LO$ is the high-resolution depth $z_{w,\text{shift}}$ due to the splitting of the original 32 bit depth value into two 16 bit values in the HILO texture. Similarly, $W$ is computed by texture stage two as

$$W = (0,\; 0,\; w_c) \cdot (HI, LO, 1) = w_c.$$

Still in stage two, the depth value of the fragment in window coordinates is set by the texture shader to the final, shifted value

$$z_{w,\text{final}} = \frac{Z}{W} = z_w - z_{w,\text{shift}}.$$

Although the basic ideas of Everitt's [6] depth peeling and our approach are closely related, the two implementations differ significantly. In his approach, the depth structure is stored in an SGIX_depth_texture, and the visibility property is checked by a shadow mapping comparison (SGIX_shadow) and coded into the α attribute of the fragment. Depth peeling also makes use of the DotProductDepthReplace operation, but only during the generation of the depth textures. In our approach, the depth structure is stored in a HILO texture; the visibility property is checked by a clipping test against the $z$ values of the viewing frustum. Since the fragment's color attribute is not needed to code the result of the clipping test, clipped fragments can be distinguished from fragments that are transparent because of their α value assigned by the transfer function. On the other hand, our approach needs three texture stages to implement the depth replace operation, whereas a shadow-mapping approach needs only one lookup for the depth texture.

5 DEPTH-BASED VOLUME CUTTING

In this section, we adapt the previous clipping mechanism to invert the role of the visibility property. Consequently, a volume cutting approach replaces the above volume probing. Here, only the volume outside the clip object remains visible. Once again, we restrict ourselves to a maximum number of two boundaries along the depth structure.

By inverting Eq. (2) for volume probing, one obtains

$$d_{\text{cutting}}(z_f) = (z_f \le z_{\text{front}}) \lor (z_f \ge z_{\text{back}}) \tag{3}$$

for volume cutting. Logical “or” is symbolized by $\lor$. The cutting function $d_{\text{cutting}}(z_f)$ is almost identical to the negation of the probing function, $\lnot d_{\text{probe}}(z_f)$, except for the fact that $z_{\text{front}}$ and $z_{\text{back}}$ are valid values in both volume probing and volume cutting. Unfortunately, logical operations for depth clipping and depth testing have to be of the form of Eq. (1), which is not compatible with Eq. (3).

Our solution to this problem is based on the following considerations. Let us first have a look at a simplified scenario with the expression $(x \ge -1) \land (x \le 1)$ for values $x$.


Figure 6: Schematic illustration of the depth structure for volume cutting via $1/z$ mapping. (Diagram: graph of $g(z)$ over $z$, with $z_{\text{front}}$, $z_{\text{mid}}$, $z_{\text{back}}$, $z_{\text{scale}}$, and the valid $z$ values marked against the window-depth bounds 0 and 1.)

This expression can be rewritten as $(f(x) \ge -1) \land (f(x) \le 1)$ by introducing an identity function $f(x) = x$. Figure 5 (a) illustrates this scenario graphically. The valid values $x$ are determined by the intersection of $f(x)$ with the horizontal lines at $\pm 1$. Now consider a similar situation in which the function $f(x)$ is replaced by another function $g(x) = 1/x$, as shown in Figure 5 (b). Here, the logical expression $(g(x) \ge -1) \land (g(x) \le 1)$ leads to valid values $x$ according to $(x \le -1) \lor (x \ge 1)$. Therefore, a mapping between the form of Eq. (3) and the required form of Eq. (1) is achieved. Please note that the DotProductDepthReplace fragment operation is able to compute a reciprocal value with high accuracy.

The adaptation of this idea to the graphics hardware has to take into account that window coordinates lie in $[0,1]$ and not in $[-1,1]$. Moreover, the vertical axis ($y$ axis) in Figure 5 must not meet the origin, $z = 0$, but has to be located at the midpoint of the clip geometry. The midpoint is obtained by

$$z_{\text{mid}} = \frac{z_{\text{front}} + z_{\text{back}}}{2}.$$

In addition, the clip object will not have a thickness of 2 as implied in Figure 5. The actual distance between the front and back boundaries of the clip geometry is taken into account by the scaling factor

$$z_{\text{scale}} = \frac{z_{\text{back}} - z_{\text{front}}}{2}.$$

Figure 6 illustrates how these quantities of the clip object can be used for volume cutting via $1/z$ mapping. The correct function has to be

$$g(z) = \frac{1}{2}\,\frac{z_{\text{scale}}}{z - z_{\text{mid}}} + \frac{1}{2}$$

in order to produce the graph in Figure 6.
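As a quick sanity check (our addition; it follows directly from the definitions above), evaluating $g$ at the two boundaries gives

$$g(z_{\text{front}}) = \frac{1}{2}\,\frac{z_{\text{scale}}}{-z_{\text{scale}}} + \frac{1}{2} = 0, \qquad g(z_{\text{back}}) = \frac{1}{2}\,\frac{z_{\text{scale}}}{z_{\text{scale}}} + \frac{1}{2} = 1.$$

For $z$ strictly inside the clip object, $|z - z_{\text{mid}}| < z_{\text{scale}}$, so $g(z)$ falls outside $[0,1]$ and the fragment is removed by depth clipping; outside the object, $g(z)$ lies in $[0,1]$ and the fragment survives. This is exactly the behavior demanded by Eq. (3).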

The algorithm for depth-based volume cutting can now be formulated as follows. The implementation is again based on vertex and texture shader programs of the GeForce 3/4. First, the depth values of the frontfaces of the clip geometry are rendered into the depth buffer. Then, these values $z_{\text{front}}$ are transferred to main memory. Analogously, the depth values of the backfaces, $z_{\text{back}}$, are determined and copied to main memory. Based on $z_{\text{front}}$ and $z_{\text{back}}$, the CPU computes $z_{\text{mid}}$ and $z_{\text{scale}}$ and stores them as the first and second components of an unsigned HILO texture. This step reduces the accuracy of depth values from originally 24 bit (as on the graphics board) to 16 bit because both components of the HILO texture have 16 bit accuracy. However, this reduced accuracy is still appropriate for almost all volume clipping applications, cf. the results in Section 7.

Then, the same texture shader program is used as for volume probing in Section 4. The depth value of each fragment is modified by DotProductDepthReplace, based on the entries in the above HILO texture. The different requirements for volume cutting are met by issuing other texture coordinates for texture stages one and two. These two stages compute the two dot products $Z$ and $W$, which lead to the fragment's final depth value $Z/W$. The generation of these texture coordinates is described shortly.

Finally, the depth buffer is cleared and slices through the volume data set are rendered and blended into the frame buffer. In contrast to volume probing in Section 4, the depth buffer does not contain any information on the clip geometry, and depth testing is not needed. Volume cutting is only based on the data from the HILO texture and on depth clipping in the graphics hardware.

Again, we use vertex programs to set homogeneous texture coordinates. The computation of coordinates for stage zero is identical to that in Section 4.

The texture coordinates for stages one and two are chosen as

$$(s_1, t_1, r_1) = \left( -w_c,\; w_c,\; \frac{z_c + w_c}{2} \right), \qquad (s_2, t_2, r_2) = (-2 w_c,\; 0,\; z_c + w_c).$$

In this way, the intermediate homogeneous coordinate $Z$ is computed in texture stage one by a dot product with the entries in the HILO texture:

$$Z = \left( -w_c,\; w_c,\; \frac{z_c + w_c}{2} \right) \cdot \left( z_{w,\text{mid}},\; z_{w,\text{scale}},\; 1 \right) = w_c \left( -z_{w,\text{mid}} + z_{w,\text{scale}} + z_w \right).$$

The first component of the HILO texture contains $z_{w,\text{mid}}$, the midpoint value in window coordinates; the second component is $z_{w,\text{scale}}$, the scaling factor with respect to window coordinates. Accordingly, $W$ is computed by texture stage two as

$$W = (-2 w_c,\; 0,\; z_c + w_c) \cdot \left( z_{w,\text{mid}},\; z_{w,\text{scale}},\; 1 \right) = 2 w_c \left( -z_{w,\text{mid}} + z_w \right).$$

Still in stage two, the fragment's depth in window coordinates is set by the texture shader to the final value

$$z_{w,\text{final}} = \frac{Z}{W} = \frac{z_w - z_{w,\text{mid}} + z_{w,\text{scale}}}{2\,(z_w - z_{w,\text{mid}})} = \frac{1}{2}\,\frac{z_{w,\text{scale}}}{z_w - z_{w,\text{mid}}} + \frac{1}{2}.$$

Similarly to volume probing in the previous section, the depth-peeling approach could be used to implement a comparable version of volume cutting. Essentially, the logical operations for the depth test and the shadow test have to be inverted.

6 CLIPPING BASED ON VOLUMETRIC TEXTURES

In contrast to the depth-based clipping algorithms from the previous sections, texture-based volume clipping approaches model the visibility information by means of a second volumetric texture whose voxels provide the information for clipping voxels of the “real” volume. Here, the clip geometry is voxelized and stored as a binary volume. This clipping texture is either a 3D texture or a stack of 2D textures. It is initialized with one for all voxels inside the clip geometry and with zero for all voxels outside. Thus, arbitrary voxelized clip objects are possible, i.e., convex and concave objects, like cubes, cylinders, spheres, tori, or even irregular objects.
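As an illustration (our own sketch, not the authors' voxelizer), a spherical clip object could be voxelized into such a binary clipping texture as follows:

    #include <GL/gl.h>
    #include <vector>

    // Voxelize a sphere (center (cx,cy,cz), radius r, all in texture
    // coordinates [0,1]^3) into an N^3 binary clipping texture:
    // 255 inside the clip object, 0 outside.
    GLuint makeSphereClipTexture(int N, float cx, float cy, float cz, float r)
    {
        std::vector<unsigned char> vol(N * N * N);
        for (int z = 0; z < N; ++z)
            for (int y = 0; y < N; ++y)
                for (int x = 0; x < N; ++x) {
                    float dx = (x + 0.5f) / N - cx;
                    float dy = (y + 0.5f) / N - cy;
                    float dz = (z + 0.5f) / N - cz;
                    bool inside = dx * dx + dy * dy + dz * dz < r * r;
                    vol[(z * N + y) * N + x] = inside ? 255 : 0;
                }
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_3D, tex);
        // Binary clip texture: nearest neighbor sampling (see the discussion below).
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexImage3D(GL_TEXTURE_3D, 0, GL_INTENSITY8, N, N, N, 0,
                     GL_LUMINANCE, GL_UNSIGNED_BYTE, vol.data());
        return tex;
    }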


Figure 7: Comparison of volume clipping techniques.

During rendering, a texture slice of the data set and a slice of the clip texture are simultaneously mapped onto the same slice polygon. These two textures are combined using a per-component multiplication. By means of the above texture setup and the per-component multiplication, all the voxels to be clipped are multiplied by zero. Therefore, their color and α values do not contribute to the final image, which corresponds to a clipping of these voxels. The fragment can be completely discarded by a subsequent α test. The required functionality is completely covered by standard OpenGL 1.3 and supported by a wide range of today's GPUs.
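A possible OpenGL 1.3 state setup for this combination (a sketch; the assignment of texture units and the texture handles volumeTex and clipTex are our assumptions):

    #include <GL/gl.h>

    void setupVoxelClipState(GLuint volumeTex, GLuint clipTex)
    {
        // Unit 0: the volume data set (RGBA after pre-classification).
        glActiveTexture(GL_TEXTURE0);
        glEnable(GL_TEXTURE_3D);
        glBindTexture(GL_TEXTURE_3D, volumeTex);
        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);

        // Unit 1: the binary clipping texture; the per-component
        // multiplication zeroes out color and alpha of clipped fragments.
        glActiveTexture(GL_TEXTURE1);
        glEnable(GL_TEXTURE_3D);
        glBindTexture(GL_TEXTURE_3D, clipTex);
        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

        // Discard clipped fragments entirely via the alpha test.
        glEnable(GL_ALPHA_TEST);
        glAlphaFunc(GL_GREATER, 0.0f);
    }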

Adapting the texture coordinates of the clipping texture allows scale and translation operations on the clip object. Additionally, when using 3D textures, a rotation of the clip object is possible. To be more precise, any affine transformation of the 3D texture-based clip geometry can be represented by a corresponding transformation of texture coordinates. A change between volume probing and volume cutting is easily achieved by applying a per-fragment invert mapping to the clipping texture. In this way, the original texture value is replaced by $(1 - \text{value})$. Since all these operations only require the change of texture coordinates or a re-configuration of per-fragment operations, all of the above transformations of the clip geometry are performed very efficiently. Only a complete change of the shape of the clip geometry requires a revoxelization and a reload of the clipping texture.
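For a 3D clipping texture, such transformations reduce to texture-matrix updates on the clipping texture's unit, for example (sketch; the clip object is moved by applying the inverse of its transformation to the texture coordinates):

    #include <GL/gl.h>

    void setClipObjectTransform(float angleDeg, float sx, float sy, float sz)
    {
        glActiveTexture(GL_TEXTURE1);       // unit holding the clipping texture
        glMatrixMode(GL_TEXTURE);
        glLoadIdentity();
        glTranslatef(0.5f, 0.5f, 0.5f);     // transform about the texture center
        glRotatef(-angleDeg, 0.0f, 1.0f, 0.0f);     // rotation: 3D textures only
        glScalef(1.0f / sx, 1.0f / sy, 1.0f / sz);  // inverse of the object's scale
        glTranslatef(-0.5f, -0.5f, -0.5f);
        glMatrixMode(GL_MODELVIEW);
        glActiveTexture(GL_TEXTURE0);
    }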

The above technique is based on a binary representation and therefore requires nearest neighbor sampling of the clipping texture. If a bilinear (for a stack of 2D textures) or a trilinear (for a 3D texture) interpolation were applied, intermediate values between zero and one would result from a texture fetch in the clipping texture. Therefore, a clearly defined surface of the clip geometry would be replaced by a gradual and rather diffuse transition between visible and clipped parts of the volume, which is inappropriate for most applications. However, a missing interpolation within the clipping texture introduces quite visible, jaggy artifacts similar to those in aliased line drawings.

To overcome this problem, we propose the following modification of the original approach. The voxelized clipping texture is no longer used as a binary data set, but interpreted as a distance volume: each voxel stores the (signed) Euclidean distance to the closest point on the clip object. Since all common 3D texture formats are limited to values from the interval $[0,1]$, the distance is mapped to this range. In this way, the isosurface for the isovalue 0.5 represents the surface of the clip object. During rendering, a trilinear interpolation in the 3D clipping texture is applied (here, a stack of 2D textures is not considered because of the missing interpolation between slices). Unlike the original method, a simple multiplication of the values from the data set and the clipping texture is not computed. Instead, the value from the clipping texture is compared to 0.5. Based on this comparison, either the RGBα value from the data set (i.e., the fragment is visible) or zero (i.e., the fragment is invisible) is used as the final color of the fragment. In this way, a clearly defined surface is introduced at the isovalue 0.5; nevertheless, jaggy artifacts are still avoided by trilinear interpolation.
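A distance-volume variant of the earlier sphere example might look like this (our sketch; maxDist is an assumed normalization distance that maps the signed distance into [0,1], and the convention that values below 0.5 lie inside is our choice for illustration):

    #include <GL/gl.h>
    #include <algorithm>
    #include <cmath>
    #include <vector>

    // Each voxel stores the signed distance to the clip surface, mapped to
    // [0,1]: 0.5 on the surface, < 0.5 inside, > 0.5 outside (assumed convention).
    GLuint makeSphereDistanceTexture(int N, float cx, float cy, float cz,
                                     float r, float maxDist)
    {
        std::vector<unsigned char> vol(N * N * N);
        for (int z = 0; z < N; ++z)
            for (int y = 0; y < N; ++y)
                for (int x = 0; x < N; ++x) {
                    float dx = (x + 0.5f) / N - cx;
                    float dy = (y + 0.5f) / N - cy;
                    float dz = (z + 0.5f) / N - cz;
                    float d = std::sqrt(dx * dx + dy * dy + dz * dz) - r;
                    float v = 0.5f + 0.5f * std::clamp(d / maxDist, -1.0f, 1.0f);
                    vol[(z * N + y) * N + x] =
                        static_cast<unsigned char>(v * 255.0f + 0.5f);
                }
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_3D, tex);
        // Trilinear interpolation is now safe: the 0.5 isosurface stays sharp.
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage3D(GL_TEXTURE_3D, 0, GL_INTENSITY8, N, N, N, 0,
                     GL_LUMINANCE, GL_UNSIGNED_BYTE, vol.data());
        return tex;
    }

The per-fragment comparison against 0.5 then selects either the data set's RGBα value or zero, using the register combiner or pixel shader functionality discussed next.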

The required per-fragment operations are available on current PC graphics hardware via Pixel Shader 1.0 programs (DirectX) or OpenGL extensions for NVidia's register combiners (for GeForce) and ATI's fragment shaders (for Radeon 8500). Due to the availability of multiple parallel rasterization units, usually no performance penalty is introduced for the actual rasterization; performance might be decreased by additional memory accesses for the clipping texture. An important advantage is the fact that any kind of clip geometry can be specified, without any limitation with respect to topology or geometry. The main disadvantages of the approach are the additional texture memory requirements for the clipping volume, the need for a voxelization of the clip geometry, and possible artifacts due to the limited resolution of the clipping texture.



Figure 8: Comparison of trilinear interpolation (a) and nearest neighbor sampling (b) for a voxelized cubic clip object.

7 RESULTS AND DISCUSSION

Figure 7 illustrates the different volume clipping methods of this paper. The underlying test data set is of size 128³ and shows a piecewise linear change of scalar values along the vertical axis, which are mapped to yellow and blue colors. The boundary of the cubic volume has a different scalar value that is mapped to a brownish-orange color. For all images of this paper, we use texture-based rendering with pre-classification based on paletted textures. Lighting is disabled.

The first three images in the top row of Figure 7 are produced by volume probing; the images in the second row are produced by volume cutting. Images (a), (b), (e), and (f) show clipping based on a volumetric texture, as described in Section 6. The clipping texture has the same resolution as the data set. In (a) and (e), nearest neighbor sampling is applied; in (b) and (f), a trilinear interpolation within the distance field is used. Artifacts are clearly visible for nearest neighbor sampling, but not for trilinear interpolation. Images (c) and (g) demonstrate depth-based clipping as presented in Sections 4 and 5. Depth-based clipping produces visualizations of high quality because clipping is performed with per-pixel accuracy. For volume probing in (c), slight artifacts occur at the silhouette of the clip geometry. For image (h), only a single clip surface is implemented, according to the simple depth-based approach of Section 3. This technique erroneously removes parts of the volume data set around the closest vertical edge. In an interactive application, the introduced errors are quite disturbing; the spatial structure is very hard to grasp because clipping properties become view-dependent in this simple approach. Finally, image (d) shows the clip geometry in the form of an opaque sphere.

In Figure 8, a cube-shaped voxelized clip object is used for volume cutting within the data set of an engine block. Trilinear interpolation is applied in image (a), nearest neighbor sampling in (b). Due to trilinear interpolation, the corners and edges of the cubic clip object are smoothed out in (a), i.e., the original clip geometry cannot be reproduced. An example of a significant difference between both methods is marked by red arrows in Figure 8. For this specific example of an axis-aligned cube, nearest neighbor sampling gives good results. However, a generic clip object that contains both smooth regions and sharp features can only be handled appropriately by depth-based clipping.

Figure 9 shows clipping in a medical MR data set. Image (a) uses a voxelized spherical clip object and image (b) a voxelized cylindrical clip object. The clipped part of the volume is moved away from the remaining parts of the volume. Figure 9 (c) demonstrates that volumetric clipping is capable of processing rather complex clip geometries and topologies. Figure 9 (d) shows that depth-based volume clipping allows for geometries with a large number of triangles, such as the Stanford bunny.

Table 1: Performance measurements in frames per second.

Window size              512²    1024²

No clipping              24.1      8.4
Voxelized clip object    15.8      5.1
Depth-based probing       8.2      2.5
Depth-based cutting       7.1      2.1


The implementation of our volume renderer is based on C++, OpenGL, and GLUT. It is platform-independent and runs on Linux and Windows PCs. The performance measurements in Table 1 were conducted on a Windows XP PC with an Athlon XP 1800+ CPU and an NVidia GeForce 4 Ti 4600 graphics board. The size of the data set was 256³ and the window sizes were 512² or 1024². A slicing approach for a 3D texture was chosen. The performance numbers show that clipping based on voxelized geometry causes rather modest additional rendering costs. Depth-based clipping, however, is slower by a factor of 3–4 (compared to rendering without clipping), which is caused by two factors. First, a transfer between graphics board and main memory is needed to build the HILO textures once per frame. Secondly, and more importantly, all four texture stages are needed.

From our experiments with the different clipping techniques, we have learned that each method has specific advantages and disadvantages. The best-suited technique has to be chosen according to the type of application and the requirements stated by the user.

The main advantage of texture-based volume clipping is that it provides explicit control over each voxel of a volume. Therefore, arbitrarily complex objects can be used as clip geometry. Furthermore, only modest additional rendering costs are introduced, due to the availability of multiple parallel rasterization units, as long as the memory bandwidth is not the limiting factor. Another advantage is the potential extension to volume tagging applications: the voxelized “clip” object would not only contain information on the visibility property, but more detailed information such as an object identification number. This ID could help to apply different transfer functions to different regions of the data set. As another variation, the “clip” volume could contain continuously varying α values to allow for a smooth fading of volumetric regions.

The main disadvantage of the volumetric clipping approach is a possibly inaccurate reproduction of the clip geometry. Nearest neighbor sampling of the clipping texture results in visible, jaggy artifacts and a poor image quality. Trilinear interpolation, on the other hand, does not allow for sharp geometric features of the clip object. Moreover, the voxelization approach prevents the user from changing the shape of a clip object in real time. Affine transformations of the clip object, however, can be performed at interactive frame rates. Another problem is the additional texture memory requirement for the clipping volume. However, the size of the clipping texture can be chosen independently of the size of the volume. Certainly, if one wants to have control over each single voxel, the clipping volume has to have the same size as the volume. A way to save texture memory for the clipping texture is to re-use the same texture slice of the clip object multiple times. For example, a cylindrical clipping volume can be generated by a single texture containing a zero-filled circle; translated versions of this texture define a cylinder.

An advantage of depth-based volume clipping is the high quality of the resulting images, since clipping is performed with per-pixel accuracy. Aliasing problems related to texture mapping cannot occur because there is a one-to-one mapping between pixels on the image plane and the texels representing the clip geometry. Arbitrary changes of the clip geometry cause (almost) no additional costs because no voxelization is required.



Figure 9: Clipping for an MR data set. In images (a)–(c), voxelized clip objects are used; image (d) illustrates depth-based clipping.

Although the performance does not reach the levels of the voxel approach, interactive applications are still feasible. Our depth-based volume clipping methods benefit from the fact that the clip geometry has to be processed only once per frame. This is a very important difference from the method by Westermann and Ertl [17]. Their technique needs to update the values in the stencil buffer for each slice by rendering the whole clip geometry. This approach does not scale well for complex clip geometries or large volume data sets with many slices.

The main problem of depth-based volume clipping in its present form is the restriction to convex clip geometries. From our experience, however, artifacts that might occur for concave objects are negligible in many applications. Once a high enough opacity is accumulated along a light ray, voxels further away from the viewer do not contribute to the final image. Often, the artifacts caused by concave objects are located in these “hidden” regions. Furthermore, the issue of concave geometries could be solved by introducing a multi-pass rendering approach. The clip object could be split into “convex” parts; for each part, the presented algorithms provide correct results, which are then combined into the final image. However, multi-pass rendering would further reduce the overall performance.

8 CONCLUSION AND FUTURE WORK

We have introduced new clipping methods that are capable of using complex clip geometries. All methods are suitable for texture-based volume rendering and exploit per-fragment operations on the graphics hardware to implement clipping. The techniques achieve high overall frame rates and therefore support the user in interactive explorations of volume data. We have presented depth-based volume probing and volume cutting. Both analyze the depth structure of the clip geometry to decide which regions of the volume have to be clipped. Depth-based volume clipping is particularly well suited to produce high-quality images. In another approach, a clip object is voxelized and represented by a volumetric texture. Information in this volume is used to identify clipped regions. This approach makes it possible to specify arbitrarily structured clip objects and is a very fast technique for volume clipping with complex clip geometries.

In future work, we will investigate how visualization applications can optimally benefit from sophisticated volume clipping. In particular, we plan to enhance medical imaging by including clip geometries extracted from segmentation information. In this context, volume tagging will be of specific interest. Furthermore, we will try to find appropriate methods to visualize dense representations of 3D vector fields by means of volume clipping. Finally, we will investigate specific interaction techniques and user interfaces that allow clip geometries to be modeled effectively.

ACKNOWLEDGMENTS

We would like to thank the anonymous reviewers for helpful comments to improve the paper. Special thanks to Bettina A. Salzer for proofreading.

REFERENCES

[1] K. Akeley. RealityEngine graphics. In SIGGRAPH 1993 Conference Proceedings, pages 109–116, 1993.

[2] B. Cabral, N. Cam, and J. Foran. Accelerated volume rendering and tomographic reconstruction using texture mapping hardware. In 1994 Symposium on Volume Visualization, pages 91–98, 1994.

[3] P. J. Diefenbach. Pipeline Rendering: Interaction and Realism through Hardware-Based Multi-Pass Rendering. PhD thesis, University of Pennsylvania, 1996.

[4] J. Diepstraten, D. Weiskopf, and T. Ertl. Transparency in interactive technical illustrations. In Eurographics 2002, 2002.

[5] F. Durand. 3D Visibility: Analytical Study and Applications. PhD thesis, Université Joseph Fourier, Grenoble I, July 1999.

[6] C. Everitt. Interactive order-independent transparency. White paper, NVidia, 2001.

[7] J. D. Foley, A. van Dam, S. K. Feiner, and J. F. Hughes. Computer Graphics: Principles and Practice. Addison-Wesley, Reading, Massachusetts, 1990.

[8] B. Fröhlich, S. Barrass, B. Zehner, J. Plate, and M. Göbel. Exploring geo-scientific data in virtual environments. In IEEE Visualization Proceedings 1999, pages 169–174, 1999.

[9] M. J. Kilgard, editor. NVIDIA OpenGL Extension Specifications. NVIDIA Corporation, 2001.

[10] G. Kindlmann and J. Durkin. Semi-automatic generation of transfer functions for direct volume rendering. In 1998 Symposium on Volume Visualization, pages 79–86, 1998.

[11] J. Kniss, G. Kindlmann, and C. Hansen. Interactive volume rendering using multi-dimensional transfer functions and direct manipulation widgets. In IEEE Visualization Proceedings 2001, pages 255–262, 2001.

[12] A. Mammen. Transparency and antialiasing algorithms implemented with the virtual pixel maps technique. IEEE Computer Graphics and Applications, 9(4):43–55, July 1989.

[13] H. Pfister, B. Lorensen, C. Bajaj, G. Kindlmann, W. Schroeder, L. S. Avila, K. Martin, R. Machiraju, and J. Lee. Visualization viewpoints: The transfer function bake-off. IEEE Computer Graphics and Applications, 21(3):16–23, May/June 2001.

[14] C. Rezk-Salama, P. Hastreiter, C. Teitzel, and T. Ertl. Interactive exploration of volume line integral convolution based on 3D-texture mapping. In IEEE Visualization Proceedings 1999, pages 233–240, 1999.

[15] A. Van Gelder and K. Kim. Direct volume rendering with shading via three-dimensional textures. In 1996 Symposium on Volume Visualization, pages 23–30, 1996.

[16] W. R. Volz. Gigabyte volume viewing using split software/hardware interpolation. In 2000 Symposium on Volume Visualization, pages 15–22, 2000.

[17] R. Westermann and T. Ertl. Efficiently using graphics hardware in volume rendering applications. In SIGGRAPH 1998 Conference Proceedings, pages 169–179, 1998.