Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering


DESCRIPTION

This paper was accepted at IEEE VIS 2014. This presentation is based on the pre-print copy of the paper.

TRANSCRIPT

Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering

Authors:

Ronell Sicat, KAUST

Jens Krüger, University of Duisburg-Essen

Torsten Möller, University of Vienna

Markus Hadwiger, KAUST

Presented by:

Subhashis Hazarika, Ohio State University


Challenges

• Downsampling substitutes each voxel with a weighted average of its neighbourhood, which changes the distribution of values in the volume. Standard approaches use a single value (the mean) to represent the voxel footprint distribution.

• Applying the transfer function to these averaged values then produces inconsistent images across resolution levels (inconsistency artifacts); see the sketch after this list.

• Ideally, an accurate representation of each voxel footprint distribution would provide consistent multi-resolution volume rendering, but a straightforward representation such as per-voxel histograms has drawbacks:

– Histogram storage overhead.

– Applying the transfer function becomes expensive.
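The core of the inconsistency can be illustrated with a few lines of Python (my own toy example, not code from the paper): applying a non-linear transfer function to the mean of a footprint is not the same as averaging the classified fine-resolution values.

```python
import numpy as np

# Hypothetical 1D transfer function: opacity ramps up sharply above 0.5.
def transfer_function(r):
    return np.clip((r - 0.5) * 10.0, 0.0, 1.0)

# A voxel footprint covering 8 fine-resolution samples (e.g., a 2x2x2 block).
footprint = np.array([0.1, 0.2, 0.1, 0.15, 0.9, 0.95, 0.85, 0.9])

# Standard downsampling: classify the single mean value.
opacity_from_mean = transfer_function(footprint.mean())

# Consistent classification: average the classified fine-resolution values,
# i.e., integrate the transfer function against the footprint distribution.
opacity_from_pdf = transfer_function(footprint).mean()

print(opacity_from_mean, opacity_from_pdf)  # ~0.19 vs. 0.5 -> inconsistency
```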


Proposed Idea

• A compact sparse pdf representation for voxels in 4D (the joint space × range domain of the volume).

• A sparse pdf volume data structure optimized for parallel rendering on the GPU.

• A novel approach for computing a sparse 4D pdf approximation via a greedy pursuit algorithm.

• An out-of-core framework for efficient parallel computation of sparse pdf volumes for large scalar volume data.


Process Overview


Basic Model

• Xp: random variable for the voxel at position p, across different resolution levels.

• fp(r): pdf at position p, where r is the intensity range of the volume data.

• t(r): transfer function defined over the intensity range r of the volume.

• Goal of the paper:
– Store fp(r) effectively and apply t(r) to it (Eq. 1; see the reconstruction below).
– Challenges:
• Storage overhead.
• How to evaluate Eq. 1 efficiently.
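Eq. 1 itself is not reproduced in this transcript; assuming it is the expected transfer-function value under the footprint pdf (consistent with the definitions above), it would read:

```latex
% Assumed form of Eq. 1: the classified value at position p is the
% expectation of the transfer function t under the footprint pdf f_p.
\tilde{t}(p) = \int t(r)\, f_p(r)\, \mathrm{d}r
```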


Joint 4D Space × Range Domain


Hierarchy of 4D Gaussian Mixtures

• All the Gaussians at level m have the same standard deviation:
– Ease of computing convolutions.
– No need to store the standard deviation for each Gaussian individually (see the sketch after this list).

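As a hedged sketch of the representation these bullets imply, each level m stores a mixture of km weighted Gaussians in the joint 4D space × range domain with a shared standard deviation (the notation is mine and may differ from the paper):

```latex
% Assumed form of the level-m mixture: k_m weighted Gaussians in the
% joint space x range domain, all sharing the level's standard deviation.
v_m(x, r) = \sum_{i=1}^{k_m} c_i \, G_{\sigma_m}\!\big( (x, r) - (p_i, r_i) \big)
```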


Hierarchy Computation

• Initial Gaussian mixture:
– Start at level l0 with the Gaussian mixture v0.
– Standard deviation:
– Weight:

• Subsequent computation:
– Compute the level-(m+1) mixture from the preceding level m.
– Low-pass filter vm to avoid aliasing artifacts, by updating the spatial standard deviation and the coefficients ci (see the sketch after this list).
– The goal is to represent the filtered mixture with fewer Gaussians than vm, i.e. km+1 < km.
– This is done by a sparse approximation of the filtered mixture.
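A minimal Python sketch (my illustration, not the paper's code) of why the low-pass filtering step only needs per-component parameter updates: convolving a Gaussian of width sigma with a Gaussian filter of width sigma_f yields a Gaussian of width sqrt(sigma^2 + sigma_f^2). The paper additionally updates the coefficients ci, which this sketch omits.

```python
import numpy as np

def lowpass_filter_mixture(positions, coeffs, sigma_spatial, sigma_filter):
    """Low-pass filter a Gaussian mixture with a Gaussian kernel (assumed).

    Convolving G_sigma with G_sigma_filter gives G_sqrt(sigma^2 + sigma_f^2),
    so filtering the whole mixture amounts to widening every component.
    Positions are unchanged; coefficient updates are omitted in this sketch.
    """
    new_sigma = np.sqrt(sigma_spatial**2 + sigma_filter**2)
    return positions, coeffs, new_sigma

# Example: widen a three-component mixture before downsampling.
positions = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
coeffs = np.array([0.5, 0.3, 0.2])
_, _, sigma_after = lowpass_filter_mixture(positions, coeffs, 0.5, 0.5)
print(sigma_after)  # ~0.707
```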


Sparse PDF Volume Computation

• Sparse approximation theory:

– H: dictionary of atoms (basis functions).

– c: coefficient vector determining the linear combination that best approximates v, given H.

– In our case, H consists of translates of Gaussians.

– The target signal v to approximate is a chosen vm after low-pass filtering.

– To obtain a sparse representation, c should have as few non-zero elements as possible (see the formulation after this list).

• Finding such a c is an NP-hard problem.

• Pursuit algorithm: a greedy iterative method for finding a sparse c.
– In each iteration, the atom from H that best approximates the target function g(x) is picked by projecting g(x) onto the dictionary.
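For reference, the sparse approximation problem the bullets describe is usually stated as follows (standard formulation; the paper's exact constraint may differ):

```latex
% Standard sparse-approximation objective: approximate v by H c with
% as few non-zero coefficients as possible (NP-hard in general).
\min_{c} \; \lVert v - H c \rVert_2 \quad \text{subject to} \quad \lVert c \rVert_0 \le k
```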


Dictionary Projection as Convolution

• Consider a 1D function g(x) that we want to approximate.

• h(x − u): the dictionary of atoms, where the translation u selects the atom.

• We project g(x) onto h(x − u), i.e., compute the inner product of the two functions.

• All dictionary atoms are translates of the same kernel h(x), and h is symmetric around zero, so the projection can be written in terms of the single kernel h(x).

• This converts Eq. 9 into convolution form (see the reconstruction after this list).
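A hedged reconstruction of the identity the slide refers to as Eq. 9 (the equation number is taken from the slide; the exact form is inferred from the definitions above):

```latex
% Projection of g onto the atom translated to u; because h is symmetric
% around zero, evaluating it for all u is a convolution.
\langle g, h(\cdot - u) \rangle = \int g(x)\, h(x - u)\, \mathrm{d}x = (g * h)(u)
```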


Dictionary Projection as Convolution

• To determine the atom that best approximates g(x), we have to find which atom yields the largest inner product.

• In terms of convolution (see below):

• Observation: to find the dictionary element that best approximates g(x), we simply have to find the maximum of the convolution (g ∗ h)(u).
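In symbols (my reconstruction, consistent with the identity above):

```latex
% The best-matching atom is the translation u maximizing the convolution.
u^* = \arg\max_{u} \, (g * h)(u)
```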


Gaussian Dictionaries & Mixtures

• Gaussian Dictionaries:

• Gaussian mixture: the function g(x) that we approximate is given by k Gaussians with identical standard deviation (see the sketch after this list).
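This combination is convenient because the convolution of a Gaussian mixture with a Gaussian dictionary kernel is again a Gaussian mixture, so the projection has a closed form (a sketch with assumed notation):

```latex
% With g a mixture of k Gaussians of width sigma_g and h a Gaussian kernel
% of width sigma_h, the projection is again a Gaussian mixture:
(g * h)(u) = \sum_{i=1}^{k} c_i \, G_{\sqrt{\sigma_g^2 + \sigma_h^2}}(u - u_i)
```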


Pursuit Algorithm
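The original slide shows the pursuit algorithm as a figure. Below is a minimal 1D matching-pursuit sketch under the assumptions above (unit-norm Gaussian atoms on a regular grid of translations); it is an illustration only, not the paper's implementation, which computes the projections via convolution in 4D.

```python
import numpy as np

def gaussian(x, mu, sigma):
    """Unnormalized Gaussian bump."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def matching_pursuit(target, xs, centers, sigma, n_atoms):
    """Greedy pursuit: repeatedly pick the atom with the largest projection
    onto the current residual and subtract its contribution."""
    # Discretized dictionary: one unit-norm Gaussian atom per translation u.
    atoms = np.stack([gaussian(xs, u, sigma) for u in centers])
    atoms /= np.linalg.norm(atoms, axis=1, keepdims=True)

    residual = target.astype(float).copy()
    coeffs = np.zeros(len(centers))
    for _ in range(n_atoms):
        projections = atoms @ residual            # <residual, h_u> for all u
        best = int(np.argmax(np.abs(projections)))
        coeffs[best] += projections[best]
        residual -= projections[best] * atoms[best]
    return coeffs, residual

# Target: a mixture of two Gaussians, approximated with two atoms.
xs = np.linspace(0.0, 1.0, 256)
target = 0.7 * gaussian(xs, 0.3, 0.05) + 0.3 * gaussian(xs, 0.8, 0.05)
centers = np.linspace(0.0, 1.0, 64)
coeffs, residual = matching_pursuit(target, xs, centers, sigma=0.05, n_atoms=2)
print(np.count_nonzero(coeffs), np.linalg.norm(residual))
```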


Projection in 4D Using Mode Finding


Sparse PDF Volume Data Structure

• The original volume is subdivided into bricks.

• At l0, data is stored in the usual way, with one scalar per voxel.

• For the other levels lm, m > 0:
– First, sort the set of mixture component tuples based on spatial position p.

– For each voxel with position p, count how many tuples have the same position (pi = p).

– This count is stored in a coefficient count block.

– The pi value is dropped from each tuple, and the r and c values are stored in a coefficient info array (see the sketch after this list).
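A small Python sketch of how such a per-brick layout could be built (array and field names are mine; the paper's GPU layout will differ in detail):

```python
import numpy as np

def build_brick_arrays(components, brick_voxel_count):
    """Pack mixture components (voxel_index, r, c) into the two arrays
    described on the slide: a per-voxel coefficient count block and a
    flat coefficient info array holding only (r, c)."""
    # Sort components by spatial position (here: a linear voxel index).
    components = sorted(components, key=lambda t: t[0])

    counts = np.zeros(brick_voxel_count, dtype=np.uint32)
    coeff_info = []                      # (r, c) pairs, position dropped
    for voxel_index, r, c in components:
        counts[voxel_index] += 1
        coeff_info.append((r, c))
    return counts, np.array(coeff_info, dtype=np.float32)

# Example: 4 components distributed over a brick of 8 voxels.
components = [(3, 0.2, 0.5), (3, 0.8, 0.5), (0, 0.4, 1.0), (6, 0.9, 1.0)]
counts, coeff_info = build_brick_arrays(components, brick_voxel_count=8)
print(counts)       # [1 0 0 2 0 0 1 0]
print(coeff_info)   # (r, c) pairs in sorted voxel order
```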




Run Time Classification

• Applying the transfer function to the Gaussian mixture at render time (see the reconstruction below):
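A hedged reconstruction of what this amounts to under the model above: plugging the Gaussian-mixture pdf into Eq. 1 reduces classification to a weighted sum of the transfer function pre-convolved with a range Gaussian (notation assumed):

```latex
% Plugging the Gaussian-mixture pdf into Eq. 1: classification becomes a
% weighted sum of the transfer function convolved with a range Gaussian.
\tilde{t}(p) = \int t(r)\, f_p(r)\, \mathrm{d}r = \sum_i c_i \, (t * G_{\sigma_r})(r_i)
```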


Performance & Scalability


Results


Thank You

