
Page 1:

Interactive Rendering of Large Volume Data Sets

Written by: Stefan Guthe, Michael Wand, Julius Gonser, Wolfgang Straßer

University of Tübingen, DE

IEEE Visualization 2002, Boston, MA

Presented by James R. McGirr – CS526 UIC

Page 2:

Overview of Paper

An algorithm for rendering very large volume data sets at interactive frame rates on standard PC hardware.

The algorithm takes scalar data sampled on a regular grid as input.

The data is compressed into a hierarchical wavelet representation in a preprocessing step.

During rendering, the data is decompressed on the fly and rendered using hardware texture mapping.

Page 3:

Motivation

Many areas in medicine, computational physics, biology, etc., deal with large volumetric data sets that demand adequate visualization.

Direct volume rendering: each point in space is assigned a density for the emission and absorption of light, and the volume renderer computes the light reaching the eye along viewing rays.

This can be done efficiently using texture-mapping hardware: the volume is discretized into textured slices that are blended over each other using alpha blending (a minimal compositing sketch follows below).

This can now be done in real time on standard off-the-shelf PCs. The main drawback is that it is only feasible for small data sets; a 256³ voxel data set is currently infeasible.

What are we going to do?
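To make the slice-blending idea concrete, here is a minimal sketch (not from the paper) of the back-to-front alpha compositing that the texture hardware effectively performs per pixel; the Rgba type and the assumption that slices arrive sorted from far to near are mine.

```cpp
// Minimal sketch: back-to-front "over" compositing of view-aligned slices.
#include <vector>

struct Rgba { float r, g, b, a; };

// Composite one pixel's samples from the farthest slice to the nearest one.
Rgba compositeBackToFront(const std::vector<Rgba>& slices)
{
    Rgba out{0.0f, 0.0f, 0.0f, 0.0f};
    for (const Rgba& s : slices) {                    // far slice first, near slice last
        out.r = s.a * s.r + (1.0f - s.a) * out.r;     // C = a*C_slice + (1-a)*C_behind
        out.g = s.a * s.g + (1.0f - s.a) * out.g;
        out.b = s.a * s.b + (1.0f - s.a) * out.b;
        out.a = s.a       + (1.0f - s.a) * out.a;
    }
    return out;
}
```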

Page 4:

Idea

► Hierarchical wavelet representation – Data Compression

► Projective classification with a view-dependent priority schedule

► Caching

Page 5:

Overview of Wavelet Hierarchy

The data volume is stored as a hierarchy of wavelet coefficients (in an octree).

Only the levels of detail necessary for display are decompressed and sent to the texture hardware.

Typical compression is 30:1 without noticeable artifacts in the image.

The Visible Human data set can be stored in 222 MB instead of 6.5 GB.

During rendering, the local frequency spectrum can be analyzed and the appropriate rendering resolution determined.

Page 6:

Octree Construction Algorithm

► The data is divided into cubic blocks of (2k)³ voxels; k is typically 16.

► A wavelet filter is applied to each block, producing a lowpass-filtered block of k³ voxels and (2k)³ − k³ wavelet coefficients.

► A cube of 8 adjacent lowpass-filtered blocks is grouped together and the filtering algorithm is applied to this new block.

► Continue recursively until only a single block remains (see the sketch below).
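A minimal sketch of this bottom-up construction, assuming the volume is already cut into leaf blocks. The Block, WaveletCoeffs and waveletSplit names are placeholders of mine, and the averaging filter merely stands in for the paper's spline wavelet while producing the right output sizes.

```cpp
#include <array>
#include <cstddef>
#include <memory>
#include <utility>
#include <vector>

struct Block { std::vector<float> voxels; };            // k^3 or (2k)^3 samples
struct WaveletCoeffs { std::vector<float> detail; };     // (2k)^3 - k^3 values

// Placeholder per-block filter: averages each group of 8 voxels, so the sizes
// match ((2k)^3 -> k^3 lowpass + 7*k^3 details), but it ignores the 3D layout;
// the paper uses a linearly interpolating spline wavelet instead.
std::pair<Block, WaveletCoeffs> waveletSplit(const Block& in)
{
    Block low;
    WaveletCoeffs det;
    for (std::size_t i = 0; i + 7 < in.voxels.size(); i += 8) {
        float mean = 0.0f;
        for (std::size_t j = 0; j < 8; ++j) mean += in.voxels[i + j];
        mean /= 8.0f;
        low.voxels.push_back(mean);                      // one lowpass sample
        for (std::size_t j = 1; j < 8; ++j)              // seven detail values
            det.detail.push_back(in.voxels[i + j] - mean);
    }
    return {low, det};
}

struct OctreeNode {
    Block lowpass;                                       // k^3 voxels at this level
    WaveletCoeffs coeffs;                                // reconstructs the children
    std::array<std::unique_ptr<OctreeNode>, 8> child;
};

// Merge eight adjacent child nodes: gather their k^3 lowpass blocks into one
// (2k)^3 block (real code interleaves them spatially; concatenation here is a
// simplification), filter it, and keep the coefficients needed to undo the step.
// Repeating this level by level leaves a single root block.
std::unique_ptr<OctreeNode> buildParent(std::array<std::unique_ptr<OctreeNode>, 8> kids)
{
    Block merged;
    for (auto& c : kids)
        merged.voxels.insert(merged.voxels.end(),
                             c->lowpass.voxels.begin(), c->lowpass.voxels.end());

    auto node = std::make_unique<OctreeNode>();
    auto [low, coeffs] = waveletSplit(merged);
    node->lowpass = std::move(low);
    node->coeffs  = std::move(coeffs);
    node->child   = std::move(kids);
    return node;
}
```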

Page 7:

Octree Construction...

Diagram: the wavelet filter turns a (2k)³ block of voxels into a k³ lowpass-filtered block plus (2k)³ − k³ wavelet coefficients.

Note: the filter they used was a linearly interpolating spline wavelet (a 1D sketch follows).
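As an illustration of what a linearly interpolating wavelet does, here is one level of a 1D "lazy + predict" transform (my reconstruction of the general idea, not the authors' filter code): even samples form the lowpass signal, and each odd sample is replaced by its deviation from the linear interpolation of its two even neighbours.

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// One level of a linearly interpolating wavelet in 1D.
std::pair<std::vector<float>, std::vector<float>>
linearWaveletForward(const std::vector<float>& x)          // assumes x.size() is even
{
    std::vector<float> low, detail;
    for (std::size_t i = 0; i + 1 < x.size(); i += 2) {
        float left  = x[i];
        float right = (i + 2 < x.size()) ? x[i + 2] : x[i]; // clamp at the border
        low.push_back(left);                                 // lowpass sample
        detail.push_back(x[i + 1] - 0.5f * (left + right));  // prediction error
    }
    return {low, detail};
}
// In 3D the transform is applied along x, y and z in turn; small detail values
// are then thresholded and entropy-coded (see the later slides).
```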

Page 8:

Final Octree

Diagram: the root holds a k³ lowpass block plus coefficients C; each node on every level below again holds a k³ lowpass block with its own coefficients C.

Resolution increases by a factor of 2 each level you go down.

C contains the wavelet coefficients needed to reconstruct the child nodes (a reconstruction sketch follows).
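For illustration, this sketch inverts the placeholder filter from the construction sketch above, recovering a (2k)³ block of voxels from a node's k³ lowpass block and its stored coefficients C; the real renderer inverts the 3D spline wavelet instead.

```cpp
#include <cstddef>
#include <vector>

struct Block { std::vector<float> voxels; };            // as in the construction sketch
struct WaveletCoeffs { std::vector<float> detail; };

// Inverse of the placeholder split: each stored mean plus its seven residuals
// gives back seven voxels directly; the eighth follows because all eight
// residuals of a group sum to zero.
Block reconstructBlock(const Block& lowpass, const WaveletCoeffs& c)
{
    Block out;
    std::size_t d = 0;
    for (std::size_t i = 0; i < lowpass.voxels.size(); ++i) {
        float mean = lowpass.voxels[i];
        float sum = 0.0f;
        for (std::size_t j = 0; j < 7; ++j) sum += c.detail[d + j];
        out.voxels.push_back(mean - sum);                // first voxel of the group
        for (std::size_t j = 0; j < 7; ++j, ++d)
            out.voxels.push_back(mean + c.detail[d]);    // the remaining seven voxels
    }
    return out;
}
```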

Page 9:

Comments on the Coefficients

► For k = 16, we have ~29,000 coefficients per node.

► Wavelet coefficients of low importance are discarded: a threshold is chosen, and all values below it are mapped to zero. Setting the threshold to zero leads to lossless compression (4:1 for typical data sets).

► The coefficients are encoded in a compact bit stream (see the sketch below): run-length encoding combined with Huffman encoding. Arithmetic encoding increases the compression ratio by ~15% compared to run-length Huffman encoding.
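A minimal sketch of the thresholding plus zero run-length step; the Symbol layout is my assumption, and in the paper the resulting stream is additionally entropy-coded with Huffman or arithmetic coding.

```cpp
#include <cstdint>
#include <vector>

struct Symbol { std::uint32_t zeroRun; float value; };   // "run of zeros, then one value"

std::vector<Symbol> rleEncode(const std::vector<float>& coeffs, float threshold)
{
    std::vector<Symbol> out;
    std::uint32_t run = 0;
    for (float c : coeffs) {
        if (c > -threshold && c < threshold) {            // low-importance coefficient
            ++run;                                        // extend the current zero run
        } else {
            out.push_back({run, c});                      // emit run length + coefficient
            run = 0;
        }
    }
    if (run > 0) out.push_back({run, 0.0f});              // trailing zeros
    return out;
}
```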

Page 10:

Rendering

► We now have the data represented in a multi-resolution octree.

► Use projective classification to pick which nodes to render using hardware texture mapping.

Page 11:

Projective Classification

► Extract nodes from the tree that have the same resolution as the display resolution.

► Exclude all nodes outside of the view frustum.

Page 12:

Projective Classification Algorithm

► Start at the root and traverse the tree recursively.

► For each node, test whether it is located outside the view frustum. If so, ignore it and stop.

► Compare the voxel grid spacing with the screen resolution. If it is equal or smaller, send the node to the renderer; otherwise subdivide the node and repeat (see the sketch below).

► This results in O(log n) rendering time.

► The problem is the very large constant hidden in the big-O notation.

► E.g., for a close-up of a volume with a depth of 2048 voxels, the algorithm obtains more than 230 million voxels after projective classification.

► This is about 4x more than the texture memory of a typical graphics board (64 MB).
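A sketch of this recursive traversal under assumed helper functions; insideFrustum, projectedVoxelSpacing and childrenOf are hypothetical names of mine, not the paper's API.

```cpp
#include <vector>

struct Node;                      // octree node: bounding box, children, voxel data
struct Camera;                    // view frustum and projection parameters

// Hypothetical helpers assumed to exist elsewhere.
bool  insideFrustum(const Node& n, const Camera& cam);
float projectedVoxelSpacing(const Node& n, const Camera& cam);   // in pixels
const std::vector<Node*>& childrenOf(const Node& n);

void classify(Node& n, const Camera& cam, float pixelSize, std::vector<Node*>& render)
{
    if (!insideFrustum(n, cam))
        return;                                    // node is invisible: stop here
    if (projectedVoxelSpacing(n, cam) <= pixelSize || childrenOf(n).empty()) {
        render.push_back(&n);                      // resolution matches the screen
        return;
    }
    for (Node* child : childrenOf(n))              // otherwise refine further
        classify(*child, cam, pixelSize, render);
}
```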

Page 13:

What do we do now?

We need a refined classification criterion: a view-dependent priority schedule.

Nodes that are near the viewer have a higher priority.

We know the maximum number of voxels that can be processed in the rendering stage, and stop refining once this is reached.

Use a priority queue based on distance from the viewer, weighted by the amount of detail in the region obtained from the coefficient data (see the sketch below).

Diagram: view-aligned slices in front of the viewer and the viewing plane.
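A simplified sketch of the idea: order the candidate nodes by a priority that grows with detail and shrinks with distance, and stop selecting once the per-frame voxel budget is used up. The weighting formula and the helper names are my assumptions, not the paper's exact scheme.

```cpp
#include <cstddef>
#include <queue>
#include <vector>

struct Node;                                       // octree node, as in earlier sketches

// Hypothetical helpers, not the paper's API.
float distanceToViewer(const Node& n);
float detailMeasure(const Node& n);                // e.g. from coefficient magnitudes
std::size_t voxelCount(const Node& n);

struct Entry { float priority; Node* node; };
bool operator<(const Entry& a, const Entry& b) { return a.priority < b.priority; }

std::vector<Node*> schedule(const std::vector<Node*>& candidates, std::size_t voxelBudget)
{
    std::priority_queue<Entry> queue;              // highest priority on top
    for (Node* n : candidates)                     // near and detailed nodes come first
        queue.push({detailMeasure(*n) / (1.0f + distanceToViewer(*n)), n});

    std::vector<Node*> selected;
    std::size_t used = 0;
    while (!queue.empty()) {
        Entry e = queue.top();
        queue.pop();
        if (used + voxelCount(*e.node) > voxelBudget)
            break;                                 // stop once the budget is reached
        selected.push_back(e.node);
        used += voxelCount(*e.node);
    }
    return selected;
}
```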

Page 14:

Caching

► So far so good, but we would still not be able to perform an interactive walkthrough if we decompressed the wavelet representation from scratch for each frame.

► Store decompressed volume blocks from the octree; the coefficients need not be stored. The user defines a fixed amount of cache memory (see the sketch below).

► 3D textures must be created from the cache (OpenGL texture objects).

► Texture objects must be uploaded to texture memory; this is done automatically by the OpenGL driver.
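One plausible way to realize such a fixed-size cache is an LRU map from octree node to decompressed block, sketched below. decompressNode stands in for the wavelet reconstruction step, and the LRU eviction policy is my assumption; the paper only specifies a user-defined cache size.

```cpp
#include <cstddef>
#include <cstdint>
#include <list>
#include <unordered_map>
#include <utility>
#include <vector>

struct Block { std::vector<float> voxels; };
Block decompressNode(std::uint64_t nodeId);              // placeholder for reconstruction

class BlockCache {
public:
    explicit BlockCache(std::size_t maxBlocks) : maxBlocks_(maxBlocks) {}  // maxBlocks >= 1

    const Block& get(std::uint64_t nodeId) {
        auto it = index_.find(nodeId);
        if (it != index_.end()) {                        // cache hit: mark as recently used
            lru_.splice(lru_.begin(), lru_, it->second);
            return it->second->second;
        }
        if (lru_.size() == maxBlocks_) {                 // full: evict least recently used
            index_.erase(lru_.back().first);
            lru_.pop_back();
        }
        lru_.emplace_front(nodeId, decompressNode(nodeId));   // miss: decompress and insert
        index_[nodeId] = lru_.begin();
        return lru_.front().second;
    }

private:
    std::size_t maxBlocks_;
    std::list<std::pair<std::uint64_t, Block>> lru_;     // front = most recently used
    std::unordered_map<std::uint64_t,
                       std::list<std::pair<std::uint64_t, Block>>::iterator> index_;
};
```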

Page 15:

Results

► The algorithm was implemented in C++ using OpenGL with nVidia extensions for rendering.

► 2 GHz Pentium 4 PC with 1 GB RAM.

► nVidia GeForce4 Ti 4600 graphics board with 128 MB local video memory.

Page 16:

Data Sets Examined

► CT scan of a Christmas tree: 512 x 512 x 999 voxels with 12 bits per voxel.

► Visible Human Male: 2048 x 1216 x 1877 voxels, RGB.

► Visible Human Woman: 2048 x 1216 x 1734 voxels, RGB.

► Renderings were made using gradient-based lighting and a classification function with several semi-transparent iso-surfaces.

Page 17:

Statistics

► Run-length Huffman encoding (the one used): decompression speed of 50 MB/s.

► Arithmetic encoding: decompression speed of 4.5 MB/s; 10-15% higher compression than Huffman.

► The resolution of the output images is 256² pixels.

► The Christmas tree was compressed using lossless compression (3.4:1).

► The Visible Human data sets were compressed using lossy compression (man 30:1, woman 40:1).

► Preprocessing time: Christmas tree 1 hour, Visible Human 5 hours each. These times are dominated by hard disk access (CPU utilization is only 6-7% during compression).

Page 18:

More Results

► The more blocks used, the higher the image quality:
2048 blocks -> 3-4 fps
1024 blocks -> 7 fps
512 blocks -> 10 fps

► Cache efficiency is very high: for the high-quality rendering, only 40-60 blocks have to be decompressed per frame, and 20-30 textures have to be constructed on average.

► If caching is deactivated, the average frame rate falls to 0.3 fps for all test scenes.

Page 19:

And Some More Results

► For the walkthrough animation that you will see in a second:
6% of the time is spent on decompression
5% on gradient calculations
1% on uploading textures to the graphics board
88% on rendering

So the processor spends most of its time waiting for the graphics hardware.

Page 20:

Conclusion

► The algorithm presented in the paper uses a hierarchical wavelet representation to store very large data sets in main memory.

► The algorithm extracts the levels of detail necessary for the current viewpoint on the fly.

► By using intelligent caching, interactive walkthroughs of large data sets are feasible.

Page 21:

References

► Interactive Rendering of Large Volume Data Sets. Stefan Guthe, Michael Wand, Julius Gonser, Wolfgang Straßer. IEEE Visualization 2002.

► Real-time Decompression and Visualization of Animated Volume Data. Stefan Guthe, Wolfgang Straßer. IEEE Visualization 2001.

► Interactive Lighting Models and Pre-Integration for Volume Rendering on PC Graphics Accelerators. Michael Meissner, Stefan Guthe, Wolfgang Straßer.

Page 22:

¿ Questions ?