TRANSCRIPT
Advanced Computational Geometry Spring 2004
A Survey of Visibility for Walkthrough Applications
Mohit Gupta
1 Abstract
Visibility algorithms for walkthrough and related applications have grown into a significant
area, spurred by the growth in the complexity of models and the need for highly interactive
ways of navigating them. Visibility determination, the process of deciding what surfaces can
be seen from a certain point, is one of the fundamental problems in computer graphics. It
is required not only for the correct display of images but also for such diverse applications
as shadow determination, global illumination, culling and interactive walkthrough. The
importance of visibility has long been recognized, and much research has been done in this
area in the last three decades. This survey reviews the fundamental issues in visibility and
related data organisation techniques, with a focus on work performed in recent years. The
taxonomy used distinguishes among algorithms along several axes: point-based versus
region-based, image precision versus object precision, conservative versus approximate,
tightness of approximation, dimension of the input scene, etc.
2 Introduction
Visibility determination is a fundamental problem in the field of computer graphics, and
the algorithm of choice to solve the problem these days is usually the Z-buffer. In the modern
context, visibility algorithms have regained attention because the ever-increasing size of 3D
data-sets, coupled with the interactivity required by walkthrough applications, renders the
classical approaches unfit for their purpose. This survey reviews recent visibility algorithms
for the acceleration of walkthrough applications.
Visibility culling aims at quickly rejecting invisible geometry before actual hidden-surface
removal is performed. This can be accomplished by only sending the visible set to the
graphics hardware pipeline, that is, the subset of primitives which may contribute to at
least one pixel of the screen.

Figure 1: Three types of visibility culling techniques: (i) view frustum culling, (ii) back-face
culling, and (iii) occlusion culling.

Much of visibility-culling research focuses on algorithms for
computing (hopefully tight) estimations of the visible set. A Z-buffer algorithm is then
usually used to obtain correct images. Visibility culling starts with two classical strategies:
back-face and view-frustum culling. Back-face culling algorithms avoid rendering geometry
that faces away from the viewer, while viewing-frustum culling algorithms avoid rendering
geometry that is outside the viewing frustum.
These two techniques, however, may still leave a lot to be desired, i.e. they still may send a
large number of invisible polygons to the rendering pipeline, thus slowing down the appli-
cation. Occlusion culling techniques aim at avoiding rendering primitives that are occluded
by some other part of the scene. This technique is global, as it involves interrelationships
among polygons, and is thus far more complex than back-face and view-frustum culling. The
three kinds of culling can be seen in Fig. 1. In this survey, I will focus on occlusion culling
and its recent developments.
A very important concept is conservative visibility. The conservative visibility set is a set
that includes at least all of the visible set, plus possibly some additional invisible objects.
That is, an occluded object may be classified as visible, but a visible object may never be
classified as occluded. Such an estimate should be as close to the exact visible set as possible
while still being easy to compute. By constructing conservative estimates of the visible set,
one can define a potentially visible set (PVS), which includes all the visible objects plus a
(hopefully small) number of occluded objects. The PVS can be defined with respect to a
single viewpoint or with respect to a whole region of space.
It is important to point out the differences between occlusion culling and hidden-surface
removal (HSR). Unlike
occlusion culling methods, HSR algorithms invest computational efforts in identifying the
exact portions of a visible polygon. Occlusion culling algorithms need to merely identify
which polygons are not visible, without the need to deal with the expensive subpolygon
level. Moreover, occlusion culling can (hopefully conservatively) overestimate the set of
visible objects since classical HSR will eventually discard the invisible primitives.
What makes visibility an interesting problem is that, for large scenes, the number of visible
fragments is often much smaller than the total size of the input. For example, in a typical
urban scene or a 'forest-like' scene, one can see only a very small portion of the entire
model, assuming the viewpoint is situated at or near the ground. Such scenes are said to be
densely occluded in the sense that, from any given viewpoint, only a small fraction of the
scene is visible. Indoor scenes are also densely occluded: the walls of a room occlude most
of the scene, so from any viewpoint inside the room one may see only the details of that
room or those visible through its portals (see Figure 2). The goal of visibility culling is to bring the
cost of rendering a large scene down to the complexity of the visible portion of the scene
and mostly independent of the overall size. Ideally, visibility techniques should be output
sensitive; the running time should be proportional to the size of the visible set.
3 Taxonomy
The organisation of the present survey is based on the following taxonomy:
• From-Point-Visibility versus From-Region-Visibility : The distinction between
these two approaches is whether the algorithm performs computations with respect
to the location of the current viewpoint only or performs bulk computations that are
valid anywhere in a given region of space. One strength of the from-region visibility
set is that it is valid for a number of frames and, thus, its cost is amortized over
time. More importantly, from-region visibility also has predictive capabilities, which are
crucial for network applications and for disk-to-memory prefetching. Using the visibility
information from adjacent cells, geometry can be prefetched just as it is about to
be visible. However, from-region algorithms usually require a long preprocessing,
significant storage cost, and do not handle moving objects as well as point-based
methods.

Figure 2: With indoor scenes, often only a very small part of the geometry is visible from
any given viewpoint.
• Accuracy : According to the estimate of the visible set that they generate, visibility
algorithms can be classified into the following three categories:
– Exact
– Conservative
– Approximate
An exact algorithm, as the name suggests, classifies as visible exactly those primitives that
actually are visible from the point (from-point visibility) or from somewhere in the region
(from-region visibility). A conservative algorithm overestimates visibility, i.e. it never misses
a visible object, surface, or point. However, it may classify some hidden primitives as visible
and send them to the rendering pipeline; the Z-buffer eventually determines that they are
invisible and does not render them. An approximate algorithm, on the other hand, provides
only an approximation of the result, i.e. it can both overestimate and underestimate visibility.
A common class of approximate algorithms is based on sampling: random or structured
sampling (ray casting or sample views) is used to estimate the visible set, in the hope that
no visible objects are missed. Such algorithms trade conservativeness for speed and simplicity
of implementation.
• Tightness of approximation : For the conservative class of algorithms, it would
be interesting to study the degree of overestimation. Unfortunately, very few papers
reviewed discuss the ratio between the size of their potentially visible set and the size
of the exact visible set.
• 2D versus 3D : Some methods are restricted to 2D floorplans or to 2.5D (height
fields), while others handle general 3D scenes.
• Individual versus fused occluders : Given three primitives A, B, and C, it
might happen that neither A nor B occludes C on its own, but together they do occlude C.
Some occlusion-culling algorithms are able to perform such an occluder-fusion, while
others are only able to exploit single primitive occlusion. Occluder fusion used to
be mostly restricted to image-precision point-based methods. However, from-region
visibility methods performing occluder fusion in object space as well have recently
been developed, and will be reviewed in this survey.
• Need for precomputation: Most from-region methods precompute and store the
visibility information, but some point-based techniques also require a preprocessing
of the data (e.g., for occluder selection).
4 Various Algorithms
Following is a review of some of the algorithms that have been proposed as solutions to
the problem of occlusion culling. Each algorithm is classified according to one or more of
the criteria provided in the above taxonomy.
4.1 Cells and Portals Techniques
These algorithms exploit the regular nature of scene geometry, as offered by architectural
and indoor scenes. Cell-and-portal techniques date back to the work of Teller and
Séquin [11] and Airey et al. [12] on indoor visibility. The work of Teller and Séquin is
mostly 2D, since it deals with computing potentially visible sets for cells (from-region
visibility) in an architectural environment. Their algorithm first subdivides space into cells
using a 2D BSP tree. Then it uses the connectivity between the cells and computes whether
straight lines can hit a set of portals (mostly doors) in the model. They elegantly model the
stabbing problem as a linear programming problem, and in each cell save the collection of
potentially visible cells.
Another technique that exploits cells and portals in models is described in Luebke and
Georges [4]. Instead of precomputing the visibility, Luebke and Georges perform an on-the-
fly recursive depth-first traversal of the cells using screen-space projections of the portals
to overestimate the portal sequences. In their technique they use a cull box for each portal,
which is the axial 2D bounding box of the projected vertices of the portal. The idea is then
to clip the portals' cull boxes against one another as the cells are traversed, and to continue
the traversal only into cells whose portal sequence has a non-empty intersection.
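The traversal can be sketched as follows. This is a minimal sketch with hypothetical names, using axis-aligned boxes to stand in for the screen-space portal projections; it is not Luebke and Georges' actual code:

```python
# Sketch of a cell-and-portal traversal in the style of Luebke and
# Georges (names are illustrative). Each portal carries an axis-aligned
# "cull box"; the traversal keeps the running intersection of cull boxes
# along the portal sequence and recurses only while it is non-empty.

def intersect(box_a, box_b):
    """Intersect two boxes (xmin, ymin, xmax, ymax); None if empty."""
    xmin, ymin = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    xmax, ymax = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    if xmin >= xmax or ymin >= ymax:
        return None
    return (xmin, ymin, xmax, ymax)

def traverse(cell, clip_box, cells, visible, visited=None):
    """Depth-first traversal from `cell`, clipping through portal boxes."""
    if visited is None:
        visited = set()
    if cell in visited:          # avoid cycles in the cell adjacency graph
        return
    visited.add(cell)
    visible.add(cell)
    for neighbor, cull_box in cells[cell]:      # (adjacent cell, portal box)
        clipped = intersect(clip_box, cull_box)
        if clipped is not None:                 # portal sequence still open
            traverse(neighbor, clipped, cells, visible, visited)

# Three rooms in a row; the second portal's box does not overlap the
# first portal's box, so room "C" is culled.
cells = {
    "A": [("B", (0.2, 0.2, 0.5, 0.5))],
    "B": [("C", (0.6, 0.6, 0.9, 0.9))],
    "C": [],
}
visible = set()
traverse("A", (0.0, 0.0, 1.0, 1.0), cells, visible)
print(sorted(visible))   # ['A', 'B']
```

Note that the `visited` set is a simplification: a real walkthrough system may need to revisit a cell through a different portal sequence, so production implementations typically track portal sequences rather than cells.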
• Criticism: These techniques, as efficient as they are in architectural environments and
indoor scenes, rely heavily on the regular nature of the geometry. In an 'irregular' setting
like a forest, they will not be applicable.
4.2 Visibility Relationships Using Supporting and Separating Planes
Coorg and Teller have proposed object-space techniques for occlusion culling, which explore
the visibility relationships between two convex objects as shown in Figure 3. For example,
while an observer is between the two supporting planes to the left of A, it is never possible for
it to see B. In [10], Coorg and Teller give sufficiency conditions for computing the visibility
of two objects (that is, whether one occludes the other), based on tracking relationships
among the silhouette edges and the supporting and separating planes of the different objects
(see Figure 3).
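In 2D the from-point version of this test reduces to a few orientation checks. The sketch below is hypothetical illustrative code, not Coorg and Teller's implementation: a point is hidden by a segment occluder if it lies on the far side of the occluder's line and inside the wedge formed by the rays from the viewpoint through the occluder's endpoints:

```python
# 2D occlusion test using the lines through a segment occluder's
# endpoints (illustrative sketch).

def cross(o, a, b):
    """2D cross product of the vectors o->a and o->b."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def occluded_from_point(v, a0, a1, p):
    """True if point p is hidden from viewpoint v by segment (a0, a1)."""
    # p must be on the opposite side of the occluder's line from v ...
    if cross(a0, a1, v) * cross(a0, a1, p) >= 0:
        return False
    # ... and inside the wedge spanned by the rays v->a0 and v->a1.
    return cross(v, a0, p) * cross(v, a1, p) < 0

# A vertical wall from (1, -1) to (1, 1) hides (2, 0) from the origin:
print(occluded_from_point((0, 0), (1, -1), (1, 1), (2, 0)))   # True
```

For a viewcell rather than a point, the supporting and separating planes play the role of this wedge: B stays hidden from everywhere in the cell only while the whole cell remains on the appropriate side of the supporting planes.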
• Criticism: This technique relies on the identification of many big convex occluders in
the scene to carry out the visibility calculations. In real scenes, objects being convex
can be a severe restriction. Also, if all the objects in the scene are of comparable size,
then this algorithm will have to perform O(n^2) visibility tests each time the viewpoint
changes.
Figure 3: The figure highlights the visibility properties exploited by the algorithm of Coorg
and Teller [10]. While an observer is between the two supporting planes to the left of A, it
is never possible to see B.

4.3 Image-Based Occlusion Culling

As the name suggests, image-space algorithms perform the culling in viewing coordinates.
The key feature of these algorithms is that, during rendering of the scene, the image gets
filled up, and subsequent objects can be culled away quickly by the already-filled parts of
the images. Since they operate on a discrete array of finite resolution they also tend to
be simpler to implement and more robust than the object-space ones, which tend to have
numerical precision problems. Following is a brief description of the Hierarchical Occlusion
Map method, which is a representative of the class.
4.3.1 Hierarchical Occlusion Maps
Zhang et al [5] use an object-space bounding volume hierarchy and a hierarchy of image-
space occlusion maps. Occlusion maps represent the aggregate of the projections of the occlud-
ers onto the image plane. For each frame, the algorithm selects a small set of objects from
the model as occluders and renders them to form an initial occlusion map, from which a
hierarchy of occlusion maps is built. The occlusion maps are used to conservatively cull
away a portion of the model not visible from the current view-point. The algorithm is appli-
cable to all models and makes no assumptions about the size, shape or type of the occluders.
An object is tested for occlusion by first projecting its bounding box onto the screen and
finding the level in the hierarchy where the pixels have approximately the same size as the
extent of the projected box. If the box overlaps pixels of the HOM which are not opaque,
the box cannot be culled. If the pixels are opaque (or have opacity above the specified
threshold when approximate visibility is enabled), then the object is projected onto a region
of the image that is covered. In this case a depth test is needed to determine whether the
object is behind the occluders. See Figure 4 for a hierarchy of occlusion maps.

Figure 4: A hierarchy of occlusion maps created by recursively averaging blocks of pixels.
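The map hierarchy itself is simple to construct: each coarser level averages 2x2 blocks of opacity values from the level below. The following is a hedged sketch with illustrative names (the original implementation builds the pyramid with texture-filtering hardware):

```python
# Build a hierarchy of occlusion maps by recursively averaging 2x2
# pixel blocks (illustrative sketch).

def build_hom(opacity):
    """opacity: square 2D list of floats in [0, 1], side a power of two.
    Returns the pyramid of maps from finest to coarsest level."""
    levels = [opacity]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        n = len(prev) // 2
        coarse = [[(prev[2*i][2*j] + prev[2*i][2*j+1] +
                    prev[2*i+1][2*j] + prev[2*i+1][2*j+1]) / 4.0
                   for j in range(n)] for i in range(n)]
        levels.append(coarse)
    return levels

def pixel_opaque(levels, level, i, j, threshold=1.0):
    """Is pixel (i, j) at the given level opaque (or above the opacity
    threshold, for approximate visibility)?"""
    return levels[level][i][j] >= threshold

hom = build_hom([[1.0, 1.0], [1.0, 0.0]])
print(hom[1][0][0])   # 0.75 -- one quarter of the finest map is empty
```

A high average at a coarse level means the corresponding screen region is (nearly) covered, so a projected bounding box overlapping only such pixels can be rejected with a few lookups instead of a per-pixel test.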
• Criticism: As a pre-processing step, this algorithm identifies big objects as occluders,
thereby failing to exploit the occlusion that a group of small objects close to each other
can induce. Also, this algorithm belongs to the approximate occlusion-culling genre, and
accurate results can depend on the fine-tuning of a voodoo parameter, the opacity
threshold. The technique also relies on being able to read information back from the
graphics hardware. Unfortunately, on most current architectures, using any sort of
feedback from the graphics hardware is quite slow and places a limit on the achievable
frame rate. Preselection of occluders is also a problem.
4.4 From Region Techniques
From-region occlusion-culling techniques entail dividing the viewing space into cells and
computing the potentially visible set (PVS) for each cell. The computational cost of the
PVS from a viewcell would then be amortized over all the frames generated from the given
viewcell. Following is a review of some of the from-region methods:
4.4.1 Conservative volumetric visibility using occluder fusion
Schaufler et al [19] introduce a conservative technique for the computation of viewcell vis-
ibility. The method operates on a discrete representation of space and uses the opaque
interior of objects as occluders. It detects and represents the regions of space hidden by
occluders and is the first method to use the property that occluders can also be extended
into empty space, provided this space is itself occluded from the viewcell. This proves
effective for computing the occlusion due to a set of occluders, successfully realizing
occluder fusion in object space.
• Criticism: Objects are represented as a collection of voxels. Hence, their represen-
tation is highly memory-intensive. A large scene can consist of a prohibitively large
number of voxels, and using this approach, one might have to record data for every
voxel in the space. Also, if an object occupies a small part of a large voxel, then
it might be classified as not occupying that voxel (for efficiency reasons), producing
inaccurate results.
4.4.2 Conservative Visibility using extended projections
Durand et al [2] present an extension of point-based image-space methods such as the Hi-
erarchical Occlusion Maps [5] to volumetric visibility from a viewcell, in the context of
preprocessing PVS computation. Occluders and occludees are projected onto a plane, and
an occludee is declared hidden if its projection is completely covered by the cumulative
projection of occluders (and if it lies behind). The projection is however more involved in
the case of volumetric visibility: to ensure conservativeness, the Extended Projection of an
occluder underestimates its projection from any point in the view-cell, while the Extended
Projection of an occludee is an overestimation (see Figure 5).
The position of the projection plane is, however, crucial for the effectiveness of Extended
Projections, which is why a reprojection operator was developed for hard-to-treat cases: a
group of occluders is projected onto one plane, where their projections aggregate, and this
aggregate representation is then reprojected onto a new projection plane (see Figure 5).
This reprojection is used to define an occlusion sweep, where the scene is swept by parallel
planes leaving the cell. The cumulative occlusion obtained on the current plane is reprojected
onto the next plane, along with new occluders. This allows the handling of very different
cases, such as the occlusion caused by leaves in a forest.

Figure 5: (a) Principle of Extended Projections. The Extended Projection of the occluder is
the intersection of its projections from all the points in the viewing cell, while the Extended
Projection of the occludee is the union of its projections. (b) If plane 2 is used for projection,
the occlusion of group 1 is not taken into account. The shadow cone of the cube shows that
its Extended Projection would be void, since it vanishes in front of the plane. The same
constraint applies for group 2 and plane 1. Thus group 1 is projected onto plane 1, then
this aggregate projection is reprojected onto plane 2.
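The core construction can be illustrated in a "flatland" setting where the viewcell, occluders, and occludees are all segments and the projection plane is the line x = d. For this convex, parallel configuration, projecting from the two endpoints of the cell suffices; the general method of Durand et al is considerably more involved, and all names below are illustrative:

```python
# Flatland (2D) sketch of Extended Projections. The occluder's extended
# projection UNDERestimates (intersection over the viewcell), while the
# occludee's OVERestimates (union over the viewcell), so a positive
# occlusion answer is conservative for the whole cell.

def project(v, p, d):
    """Project point p onto the plane x = d as seen from viewpoint v."""
    t = (d - v[0]) / (p[0] - v[0])
    return v[1] + (p[1] - v[1]) * t

def segment_projection(v, seg, d):
    ys = [project(v, p, d) for p in seg]
    return (min(ys), max(ys))

def extended_projection_occluder(cell, seg, d):
    """Underestimate: intersection of projections over the viewcell."""
    (a0, a1), (b0, b1) = (segment_projection(v, seg, d) for v in cell)
    lo, hi = max(a0, b0), min(a1, b1)
    return (lo, hi) if lo <= hi else None

def extended_projection_occludee(cell, seg, d):
    """Overestimate: union (hull) of projections over the viewcell."""
    (a0, a1), (b0, b1) = (segment_projection(v, seg, d) for v in cell)
    return (min(a0, b0), max(a1, b1))

cell = [(0.0, -0.5), (0.0, 0.5)]
occluder = [(2.0, -2.0), (2.0, 2.0)]     # big wall in front of the plane
occludee = [(6.0, -0.5), (6.0, 0.5)]     # small object behind the plane
d = 4.0
ep_r = extended_projection_occluder(cell, occluder, d)
ep_e = extended_projection_occludee(cell, occludee, d)
hidden = ep_r is not None and ep_r[0] <= ep_e[0] and ep_e[1] <= ep_r[1]
print(ep_r, ep_e, hidden)   # (-3.5, 3.5) (-0.5, 0.5) True
```

Swapping intersection for union is exactly what makes the test conservative: the occluder's footprint can only shrink and the occludee's can only grow as the viewpoint moves within the cell.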
• Criticism: Choosing the occluders and the projection planes is a hard problem, which
makes the implementation of this algorithm quite complex.
4.4.3 Aspect Graph
The aspect graph (Plantinga et al., 1990) partitions the view space into cells that group
view points from which the projection of the scene is qualitatively equivalent. The aspect
graph is a graph describing the view of the scene (aspect) for each cell of the partitioning.
• Criticism: The major drawback of this approach is that, for polygonal scenes with n
polygons, there can be Θ(n^9) cells in the partitioning for an unrestricted view space,
rendering this method impractical for scenes consisting of many primitives.
4.4.4 Visibility Complex
Pocchiola and Vegter (1993) introduced the visibility complex, which describes global
visibility in 2D scenes. Rivière (1997) discussed the visibility complex for dynamic
polygonal scenes and applied it to maintaining a view around a moving point.
The visibility complex was generalized to three dimensions by Durand et al. (1996).
• Criticism: Again, the 3D visibility complex is known to have worst-case size Θ(n^4) and
is quite complex to implement. No implementation of the 3D visibility complex is known.
4.4.5 Approximate Methods : Random Sampling
Gotsman et al [3] proposed an approximate visibility algorithm that uses a 5D subdivision
of ray space and maintains a PVS for each cell of the subdivision. The visibility of each
of the objects is determined using a ray-casting procedure. Object visibility from a cell
in 5D space may be determined by a statistical sampling process. A given cell’s visibility
list is constructed by casting rays at each object. The rays originate at random points
distributed uniformly throughout the cell’s 3D extent, and are cast at points chosen on the
target object’s surface at random with a uniform distribution, as long as they are within
the 2D angular extent of the 5D cell. The objective is to determine whether a given object
is significantly visible from any point in the cell.
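A flattened 2D sketch of the idea follows (Gotsman et al work with a 5D ray-space subdivision; the names and the 2D setting here are illustrative): rays are cast from random points in the cell to random points on the object, and the object enters the cell's PVS as soon as one unblocked ray is found:

```python
import random

# Sampling-based visibility estimate (illustrative 2D sketch).

def segments_intersect(p1, p2, p3, p4):
    """Strictly crossing segment intersection via orientation tests."""
    def orient(a, b, c):
        return (b[0]-a[0]) * (c[1]-a[1]) - (b[1]-a[1]) * (c[0]-a[0])
    d1, d2 = orient(p3, p4, p1), orient(p3, p4, p2)
    d3, d4 = orient(p1, p2, p3), orient(p1, p2, p4)
    return d1 * d2 < 0 and d3 * d4 < 0

def maybe_visible(cell, obj, blockers, samples=256, rng=random):
    """cell, obj: axis-aligned boxes (xmin, ymin, xmax, ymax);
    blockers: list of segments ((x, y), (x, y))."""
    def sample_box(b):
        return (rng.uniform(b[0], b[2]), rng.uniform(b[1], b[3]))
    for _ in range(samples):
        src, dst = sample_box(cell), sample_box(obj)
        if not any(segments_intersect(src, dst, s, t) for s, t in blockers):
            return True          # found an unblocked ray
    return False                 # probably occluded -- may be wrong!

cell = (0.0, 0.0, 1.0, 1.0)
obj = (4.0, 0.0, 5.0, 1.0)
wall = [((2.0, -10.0), (2.0, 10.0))]      # blocks every possible ray
print(maybe_visible(cell, obj, wall))     # False
print(maybe_visible(cell, obj, []))       # True
```

The `False` branch is where the approximation lives: a small but visible sliver of the object can be missed by every sample, which is exactly the source of the "holes" this class of methods can produce.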
• Criticism: Besides providing only an approximate solution, which can lead to 'holes'
in the image, this method may require a large amount of memory to hold the 5D k-d
tree, with a PVS associated with each of its cells.
4.4.6 Occluder Shrinking
Wonka [6] presents an approach based on the following observations:
• From-region visibility is a much harder problem than from-point visibility.
• It is possible to compute a conservative approximation of the umbra for a viewcell from
a set of discrete point samples placed on the view-cell's boundary, thereby reducing a
from-region visibility problem to a set of from-point visibility problems.

Figure 6: When performing point sampling for occlusion after occluders have been shrunk
(as indicated by small triangles), all four occluders can be considered simultaneously, coop-
eratively blocking the view (indicated by the shaded area) from all sample points.
He observes that for a view cell and an occluder, shrinking an occluder by ε provides a
smaller umbra with a unique property: an object classified as occluded by the shrunk oc-
cluder will remain occluded with respect to the original larger occluder when moving the
viewpoint no more than ε from its original position in the view cell. Consequently, a point
sample used together with a shrunk occluder is a conservative approximation for a small
view cell with radius ε centered at the sample point. If the original view cell is covered with
sample points so that every point on the boundary is contained in an ε-neighborhood of at
least one sample point, then an object lying in the intersection of the umbrae from all sample
points is occluded for the original viewcell. See Figures 6 and 7 for an illustration of the idea.
Using this idea, multiple occluders can be considered simultaneously. If the object is oc-
cluded by the joint umbra of the shrunk occluders for every sample point of the viewcell, it
is occluded for the whole view cell. In that way, occluder fusion for an arbitrary number of
occluders is implicitly performed.
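The construction can be sketched in 2D (illustrative code, not Wonka's implementation): each occluder segment is shrunk by eps at both ends, the viewcell is covered by sample points spaced so that every cell point lies within eps of some sample, and an object is declared hidden only if the shrunk occluders hide it from every sample:

```python
import math

# 2D sketch of occluder shrinking with a point-sampled viewcell.

def shrink(a, b, eps):
    """Shrink segment (a, b) by eps at each end."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    l = math.hypot(dx, dy)
    ux, uy = dx / l, dy / l
    return (a[0] + ux*eps, a[1] + uy*eps), (b[0] - ux*eps, b[1] - uy*eps)

def cross(o, a, b):
    return (a[0]-o[0]) * (b[1]-o[1]) - (a[1]-o[1]) * (b[0]-o[0])

def hidden_from_point(v, a, b, p):
    """p hidden from v by segment (a, b): behind its line, in its wedge."""
    if cross(a, b, v) * cross(a, b, p) >= 0:
        return False
    return cross(v, a, p) * cross(v, b, p) < 0

def hidden_from_cell(samples, occluders, p, eps):
    """Conservative from-region test; note the `any`: fusion of the
    shrunk occluders happens implicitly, per sample point."""
    shrunk = [shrink(a, b, eps) for a, b in occluders]
    return all(any(hidden_from_point(v, a, b, p) for a, b in shrunk)
               for v in samples)

samples = [(0.0, -0.5), (0.0, 0.0), (0.0, 0.5)]   # eps-cover of a small cell
occluders = [((2.0, -3.0), (2.0, 3.0))]
print(hidden_from_cell(samples, occluders, (5.0, 0.0), eps=0.5))   # True
```

Note that the shrink amount depends only on eps, not on the occluder's shape: a segment shorter than 2*eps is shrunk away to nothing (the sketch does not guard against this degenerate case), which is precisely the weakness discussed in the criticism below.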
• Criticism: The shrinking of the occluder is a function of the radius of the view cell and
does not take into account the geometry of the occluder. Thus long or thin view-cells
cause over-shrinking. Furthermore, shrinking is performed uniformly in all directions
in 3D, requiring the occluders to have large interiors. As a result, planar occluders will
disappear when shrinking is performed, as they do not have any interior. Also, since
small occluders will shrink to zero, and since occluder fusion is performed only implicitly,
a group of small objects collectively occluding a large part of the scene will not be
considered as a single occluder (there is a need for explicit object-space fusion). These
issues make the method overly conservative in certain settings.

Figure 7: The fused umbra from the 5 point samples shown in Figure 6 is the intersection
of the individual umbrae (shaded light). It is compared here to the union of umbrae from
the original view cell (shaded dark). As can be seen, point sampling computes superior
occlusion.
4.4.7 Varying Level of Detail
Subodh Kumar et al [8] present visibility computation algorithms based on vLOD (Varying
Level of Details). The algorithm performs work proportional only to the required detail
in visible geometry at the rendering time. To accomplish this, they use a pre-computation
phase that efficiently generates per-cell vLOD: the geometry visible from a view-region at
the right level of detail. They encode the changes between neighboring cells' vLODs (using
Huffman-like coding), which are not required to be memory resident.
For visibility calculations, they use a variation of occluder shrinking. For each occluder, they
first construct the supporting planes between the view-cell and the occluder. This is followed
by identifying a projection point contained within the supporting planes. Planes passing
through this point, parallel to the supporting planes would clip the occluder into a shrunk
occluder. The observation that any object hidden from this projection point with the shrunk
occluder would be hidden from the original view cell with the original occluder leads to a
conservative estimate of the PVS from the view cell. This process is illustrated in Figure 8.

Figure 8: From-region visibility reduced to from-point visibility using occluder shrinking.
Occluders are partitioned into clusters based on their direction from the projection point v
to simplify rendering. To compute visibility with respect to each cluster, they first construct
the shadow frustum for each occluder (see Fig. 9). They next find a projection point v
contained in all frusta of the cluster. That is followed by shrinking each occluder in the
cluster by its reduced-shadow planes: planes parallel to shadow planes and passing through
v. For multiple occluders Oi, v must lie in the shadow frusta of all occluders. The best
location for v within the intersection of the frusta is found by maximizing the sum of the
volumes of all shrunk frusta, formulated as a convex quadratic optimisation problem (see
Figure 9).
They also provide a way to customise the disk layout by object reordering, to help reduce
the time taken to load the data needed in any given frame. Also, to reduce the disk space
needed, they use the technique of visibility-mask compression.
• Criticism: The biggest problem with the vLOD technique is the prohibitively large disk
space required to store all the levels of detail for each cell and each object. Also, their
occluder-shrinking method, like Wonka's, suffers from the disadvantage that a group of
small occluders may be shrunk to nothing, even though, as a group, they could produce
a significant amount of occlusion; occluder fusion is not performed explicitly. Also, their
method assumes occluders to be much bigger than the view-cell, an assumption which
might be too restrictive in certain settings.
Figure 9: Implicit occluder fusion by clustering a group of occluders together and finding
a common projection point lying in the shadow frusta of all occluders. The best location
for this point is found by maximizing the sum of the volumes of all shrunk frusta,
formulated as a convex quadratic optimisation problem.
4.4.8 Ray Space Factorization: An almost exact solution

Leyvand et al [16] present a conservative occlusion culling method based on factorizing the 4D
visibility problem into horizontal and vertical components. The visibility of the two com-
ponents is solved asymmetrically: the horizontal component is based on a parameterization
of the ray space, and the visibility of the vertical component is solved by incrementally
merging umbrae. Similar to image-based from-point methods, they use an occlusion map
to encode visibility; however, the image-space occlusion map is in the ray space rather than
in the primal space. Their technique utilizes the capabilities of the latest graphics hardware
to accelerate the performance for from-region visibility.
Their algorithm is one of the many algorithms which deal with visibility in the dual space,
i.e. the ray space. Visibility problems and in particular occlusion problems are often ex-
pressed as problems in line spaces. For example, the following is a basic occlusion problem.
Given a segment C, an object A is occluded from any point on C by a set of objects Bi, if
all the lines intersecting C and A also intersect the union of Bi. Using the line space, lines
in the primal space are mapped to points. The mapping between 2D lines and points is
commonly defined by the coefficients of the lines in the primal space. The line y = ax + b is
mapped to the point (a, −b) in the line parameter space. All the lines that intersect segment
A in the primal space are mapped to a double wedge in the parameter space, called the
footprint of A. All the lines intersecting the union of segments Bi are mapped to the union
of their footprints. All the lines passing through two given segments A and C are mapped
to the intersection of their footprints.

Figure 10: Boolean operations in dual space can be used to determine visibility between
two line segments. In (a), the orange segments are mutually occluded by the blue segments.
In (b), Boolean set operations are applied to the double-wedge footprints to test visibility.
A ∩ C (in dark orange) represents all lines passing through both A and C; thus, A and C
are mutually hidden if and only if A ∩ C is a subset of the union of occluder footprints (in
blue).
The above occlusion problem can be expressed as a simple Boolean set operation on the
footprints (see Figure 10). In 2D these footprints can be discretized and drawn as polygons,
and their intersection can be applied in image space, using fast per-pixel Boolean operations.
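This footprint test is easy to prototype by discretizing the (a, b) parameter space directly. The grid bounds and resolution below are ad hoc, vertical lines (which y = ax + b cannot represent) are ignored, and the sign convention on b is immaterial for the test:

```python
# Discretized dual-space occlusion test (illustrative sketch): the line
# y = a*x + b is treated as the point (a, b) in parameter space; a
# segment's footprint is the set of parameter points whose line crosses it.

def crosses(a, b, seg):
    """Does the line y = a*x + b cross segment seg?"""
    (x0, y0), (x1, y1) = seg
    s0 = y0 - (a * x0 + b)
    s1 = y1 - (a * x1 + b)
    return s0 * s1 <= 0

def occluded(seg_a, seg_c, blockers, n=100, arange=2.0, brange=4.0):
    """True if every sampled line stabbing both seg_a and seg_c also
    crosses some blocker, i.e. footprint(A) & footprint(C) lies inside
    the union of the blocker footprints (up to discretization)."""
    for i in range(n):
        a = -arange + 2 * arange * i / (n - 1)
        for j in range(n):
            b = -brange + 2 * brange * j / (n - 1)
            if crosses(a, b, seg_a) and crosses(a, b, seg_c):
                if not any(crosses(a, b, blk) for blk in blockers):
                    return False
    return True

A = ((0.0, -0.5), (0.0, 0.5))             # source segment (the "cell")
C = ((4.0, -0.5), (4.0, 0.5))             # candidate occludee
wall = [((2.0, -3.0), (2.0, 3.0))]        # occluder between them
print(occluded(A, C, wall))    # True: every stabbing line hits the wall
print(occluded(A, C, []))      # False
```

In practice the footprints are rasterized as polygons and combined with fast per-pixel Boolean operations on graphics hardware, as described above.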
This approach, and other similar approaches by Wonka, Bittner, et al [15], [14], [9], are
notable in that they are almost exact: they compute the visible set from a cell exactly,
up to the discretization of the polygonal footprints.
• Criticism : A notable property of this solution is that it is asymmetric, since it favors
the horizontal component over the vertical one. This might be helpful in urban scenes,
where there is a preferred orientation, since the vertical direction is more complex. In
scenes which do not exhibit such structure, this asymmetry cannot be exploited; the
vertical umbra-merging step then has to consider almost all the occluders, which can
make the method extremely slow.
5 Occluder Fusion
When visibility from a region is concerned, the occlusion caused by an individual occluder
in a general setting may be insignificant compared to that effected by a group of spatially
close occluders. Thus, it is essential to take advantage of the aggregate occlusion caused by
groups of nearby objects. Many dense 'forest-like' scenes have clusters of numerous small
'leaf-like' objects which do not occlude any object individually but, as a cluster, induce
significant occlusion. The performance of an occlusion-culling algorithm depends on the
extent of occluder fusion that it can manage. Many of the algorithms discussed above
perform implicit occluder fusion in image space (Wonka, Bittner, Durand's extended
projections, Ray Space Factorization, etc.). Some of them fuse occluders in object space
only if they are convex or share edges (faces). Schaufler et al. use blocker extension, but
that requires the representation of objects as volumetric voxels, which in itself has large
memory requirements.
One can distinguish three types of umbra fusion for from-point visibility algorithms. For
from-region visibility there are four additional types that express fusion of penumbrae
(see Figure 11). For a visibility algorithm to make full use of the spatial coherence of
occluders, it should perform this complete fusion efficiently.
6 Conclusion
In summary, this survey is an attempt to cover the visibility literature as it relates to
walkthrough applications. The focus has been on the latest papers and algorithms in the
field, such as Wonka's occluder shrinking, Ray Space Factorization by Leyvand et al, and
the vLOD system by Subodh Kumar et al. For a general and more comprehensive survey,
the reader is referred to [1] and [17], which provide comparisons (in the form of charts)
among different methods based on the various criteria mentioned in this survey.
Although a considerable amount of knowledge has been assembled in the last decade and
the number of papers in the area has increased substantially in the last couple of years,
much interesting work remains:
• Quantitative comparison of existing approaches: At this point in time, very
little work has been done on direct comparisons of different techniques.
None of the conservative approaches provides a bound on the ratio of its solution to
the exact solution.
Figure 11: Illustration of occluder fusion. Occluders are shown as black lines and occludees
as circles. An occludee marked white is classified visible due to the lack of occluder
fusion.
• Preprocessing time; PVS storage: Most from-region techniques perform a con-
siderable amount of preprocessing, which generates substantial storage overhead.
Reducing this overhead is an important area of research. Moreover, further research
is necessary into techniques that lower the amount of preprocessing required (and
not only for from-region techniques, but for visibility culling algorithms in general).
Memory is also a major issue for large scenes, especially in the context of from-region
techniques.
• Hardware-assisted culling: As mentioned before, hardware manufacturers inte-
grate more and more occlusion-culling capabilities into graphics cards. Interaction
between the hardware and the CPU for efficient high-level culling is an important
issue, especially because of latency problems. Algorithms should be designed with
the functionality provided by the graphics card in mind, so that its capabilities can
be exploited to the fullest to achieve improved efficiency.
• Occluder Fusion: The author would also like to highlight the importance of efficient and
explicit occluder fusion to exploit scene geometry. This is critical in 'forest'-like
scenes, where the spatial coherence of the scene geometry plays an important role.
• Dynamic objects: Another largely underexplored area is the handling of dynamic
objects.
7 Acknowledgements
The author would like to thank Prof. Joseph Mitchell for introducing the problem to him
and for valuable guidance and references, and Olaf Hall-Holt for useful discussions on the
topic.
References
[1] Daniel Cohen-Or, Yiorgos Chrysanthou, Claudio T. Silva. "A Survey of Visibility for
Walkthrough Applications", IEEE Transactions on Visualization and Computer
Graphics, Vol. 9, No. 3, July-September 2003.
[2] F. Durand, G. Drettakis, and C. Puech. "The Visibility Skeleton: A Powerful and
Efficient Multi-Purpose Global Visibility Tool", In Turner Whitted, editor, SIGGRAPH
97 Conference Proceedings, Annual Conference Series, pages 89-100. ACM SIGGRAPH,
Addison Wesley, August 1997.
[3] Craig Gotsman, Oded Sudarsky, Jeffrey A. Fayman. ”Optimized occlusion culling us-
ing five-dimensional subdivision”, Computers and Graphics 23 (1999), Visibility - tech-
niques and applications.
[4] David P. Luebke and Chris George. "Portals and Mirrors: Simple, Fast Evaluation of
Potentially Visible Sets".
[5] Hansong Zhang, Dinesh Manocha, Tom Hudson, Kenny Hoff. ”Visibility Culling using
Hierarchical Occlusion Maps”.
[6] Peter Wonka. "Occlusion Culling for Real-Time Rendering of Urban Environments",
Dissertation.
[7] Xavier Decoret, Gilles Debunne and Francois Sillion. "Erosion Based Visibility Prepro-
cessing", Eurographics Symposium on Rendering 2003.
[8] Jatin Chhugani, Budirijanto Purnomo, Shankar Krishnan, Jonathan Cohen, Suresh
Venkatasubramanian, David Johnson and Subodh Kumar. "vLOD: High-Fidelity Walk-
through of Large Virtual Environments", IEEE Transactions on Visualization and
Computer Graphics.
[9] Jiri Bittner, Peter Wonka, Michael Wimmer. ”Fast Exact From-Region Visibility in
Urban Scenes”, Eurographics Symposium on Rendering 2003, pp. 1-11.
[10] Satyan Coorg, Seth Teller. "Real-Time Occlusion Culling for Models with Large Oc-
cluders", Proc. 1997 ACM Symposium on Interactive 3D Graphics, pp. 83-90 and 189.
[11] S. J. Teller and C. H. Sequin. "Visibility Preprocessing for Interactive Walkthroughs",
Computer Graphics (Proceedings of SIGGRAPH 91), 25(4):61-69, July 1991.
[12] J. M. Airey, J. H. Rohlf, and F. P. Brooks, Jr. "Towards Image Realism with Interac-
tive Update Rates in Complex Virtual Building Environments", Computer Graphics (1990
Symposium on Interactive 3D Graphics), 24(2):41-50, March 1990.
[13] Vladlen Koltun, Yiorgos Chrysanthou, Daniel Cohen-Or. "Virtual Occluders: An
Efficient Intermediate PVS Representation".
[14] Jiri Bittner, Jan Prikryl, Pavel Slavik. "Exact Regional Visibility using Line Space
Partitioning".
[15] Jiri Bittner. ”Hierarchical Techniques for Visibility Computations”, Dissertation.
[16] Tommer Leyvand, Olga Sorkine, Daniel Cohen-Or. "Ray Space Factorization for From-
Region Visibility".
[17] Jiri Bittner, Peter Wonka. ”Visibility in Computer Graphics”.
[18] Vladlen Koltun and Daniel Cohen-Or. ”Selecting Effective Occluders for Visibility
Culling”, EUROGRAPHICS 2000.
[19] Gernot Schaufler, Julie Dorsey, Xavier Decoret, Francois X. Sillion. "Conservative Vol-
umetric Visibility with Occluder Fusion", SIGGRAPH 2000 Conference Proceedings.