18.1Si31_2001
SI31 Advanced Computer Graphics (AGR)
Lecture 18
– Image-based Rendering
– Light Maps
– What We Did Not Cover
– Learning More...
Model-based Rendering
Conventional approach is to:
– create a 3D model of a virtual world
– project each object to 2D and render it into the frame buffer
Scene complexity is a major factor:
– real-time walkthroughs of complex scenes need powerful processing
– this affects major application areas such as computer games and VR
Image-based Rendering
Goal:
– make rendering time independent of scene complexity
Approach:
– make use of pre-calculated imagery
– many variations exist - we look at just two
Question:
– where have we met pre-calculated imagery before?
Image Caching - Impostors
Basic idea:
– cache the image of an object rendered in one frame for re-use in subsequent frames
Technique:
– project the bounding box of the object onto the image plane to get a rectangular extent for that view
– capture the image and put it in texture memory
– for the next view, render an ‘impostor’: a quadrilateral in a plane parallel to the initial view plane, texture-mapped with the original image
– texture mapping uses the current view, so the image is warped appropriately
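The first step - finding the rectangular screen extent of the object's bounding box - can be sketched as below. This is a minimal NumPy illustration, not the course's implementation: the pinhole `project` function and the `focal` parameter are simplifying assumptions.

```python
import numpy as np

def project(points, view, focal=1.0):
    """Project 3D world-space points through a simple pinhole camera.
    `view` is a 4x4 world-to-camera matrix; returns Nx2 image coordinates."""
    homo = np.hstack([points, np.ones((len(points), 1))])
    cam = (view @ homo.T).T[:, :3]
    return focal * cam[:, :2] / cam[:, 2:3]   # perspective divide

def impostor_extent(bbox_min, bbox_max, view):
    """Screen-space rectangle covering the object's bounding box: the
    region to capture into texture memory for re-use as an impostor quad."""
    corners = np.array([[x, y, z] for x in (bbox_min[0], bbox_max[0])
                                  for y in (bbox_min[1], bbox_max[1])
                                  for z in (bbox_min[2], bbox_max[2])])
    pts = project(corners, view)
    return pts.min(axis=0), pts.max(axis=0)
```

The returned min/max corners define both the texture capture region and the impostor quadrilateral.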
Image Caching
Validity of impostors:
– once the view direction changes substantially, the impostor is no longer valid
– the object is then re-rendered
Hierarchical image caching:
– use BSP trees to cluster objects in a hierarchy
– distant objects can be clustered, and a single image used to render the whole cluster
Environment Mapping - Revision
Pre-computation:
– from an object at the centre of the scene, we rendered 6 views and stored the resulting images as the walls of a surrounding box
– this caches the light arriving at the object from different directions
Rendering time:
– the specular reflection calculation then bounced a viewing ray onto a point on the interior of the box and used its colour as the specular colour of the object
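The core of the look-up is reflecting the viewing ray about the surface normal and selecting which of the six box walls it hits. A minimal sketch (the dominant-axis face selection is the standard cube-map convention, assumed here rather than taken from the slides):

```python
import numpy as np

def reflect(view_dir, normal):
    """Mirror the viewing ray about the surface normal (unit vectors)."""
    return view_dir - 2.0 * np.dot(view_dir, normal) * normal

def cube_face(direction):
    """Which wall of the surrounding box the reflected ray hits:
    the face is chosen by the dominant component of the direction."""
    axis = int(np.argmax(np.abs(direction)))
    sign = '+' if direction[axis] > 0 else '-'
    return sign + 'xyz'[axis]
```

The colour fetched from that face (at the point the ray pierces it) becomes the specular colour of the object.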
Light Fields
Concept:
– for every point, cache the light or radiance emanating from that point in each direction
– rendering involves looking up a (very large) table
– five dimensions: (x,y,z) to give position and (θ,φ) to give direction
– in ‘free’ space, radiance is constant along a line, so we have a 4D light field - we pre-compute the radiance along all lines in the space
Indexing the Lines
Use two parallel planes - think of these as lying between the viewer and the scene
[Figure: lines pass from points on the (u,v) plane, near the viewer, to points on the (s,t) plane, near the scene; each line is labelled L(u,v,s,t)]
For each point on the (u,v) grid, we have a line to every point on the (s,t) grid - ie a 4D set of lines, known as a light slab
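The two-plane parameterisation can be sketched as follows: intersect a ray with both planes and read off its slab coordinates. The plane placement (z = 0 and z = 1) is an illustrative assumption; any two parallel planes work.

```python
def ray_to_slab_coords(origin, direction, z_uv=0.0, z_st=1.0):
    """Map a ray to light-slab coordinates: intersect it with the (u,v)
    plane at z=z_uv and the (s,t) plane at z=z_st."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    t1 = (z_uv - oz) / dz          # parameter where ray meets (u,v) plane
    t2 = (z_st - oz) / dz          # parameter where ray meets (s,t) plane
    u, v = ox + t1 * dx, oy + t1 * dy
    s, t = ox + t2 * dx, oy + t2 * dy
    return (u, v, s, t)
```

Every ray not parallel to the planes gets a unique (u,v,s,t), which is exactly the 4D index into the light slab.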
Constructing a Light Field
Place a camera on the (u,v) plane:
– for each point on the grid (ui, vj), render the scene and store the image as Imageij(sk, tl)
– giving a 2D array of images!
Do this from all six surrounding directions of the scene - ie six light slabs
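The construction loop can be sketched as below. The `render` callback is a hypothetical stand-in for a real renderer: called at grid point (ui, vj), it must return the (s,t) image seen from there.

```python
import numpy as np

def build_light_slab(render, nu, nv, ns, nt):
    """Pre-compute one light slab: place a camera at each (u_i, v_j) grid
    point and store the rendered (s,t) image - a 2D array of images.
    `render(i, j)` must return an (ns, nt, 3) RGB image."""
    slab = np.empty((nu, nv, ns, nt, 3), dtype=np.float32)
    for i in range(nu):
        for j in range(nv):
            slab[i, j] = render(i, j)
    return slab
```

Repeating this for all six surrounding directions gives the six light slabs covering the scene.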
Rendering
The rendering operation is now a linear look-up operation on our 2D array of images
For example, any ray in a ray tracing approach will correspond to a particular 4D point (u,v,s,t) - we look up its value in the light field (using interpolation if it is not exactly on a grid point)
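The look-up can be sketched as below. For brevity this rounds to the nearest grid point; the full method interpolates quadrilinearly over the 16 surrounding samples. Coordinates are assumed normalised to [0,1].

```python
import numpy as np

def lookup(slab, u, v, s, t):
    """Radiance along the ray (u,v,s,t): a table look-up into the slab
    built earlier, here with nearest-grid-point rounding."""
    nu, nv, ns, nt, _ = slab.shape
    i = int(round(u * (nu - 1)))
    j = int(round(v * (nv - 1)))
    k = int(round(s * (ns - 1)))
    m = int(round(t * (nt - 1)))
    return slab[i, j, k, m]
```

Note that no geometry is touched at render time - cost is independent of scene complexity, which was the goal.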
Compression
The technique is only feasible because there is coherence between successive images
Hence the 2D array of images can be compressed by factors of over 100
Model-based versus Image-based Rendering
[Diagram: two pipelines leading to a real-time interactive flythrough. Model-based: virtual world → model construction → model → real-time rendering. Image-based: real world → image acquisition → images → image-based rendering. The pipelines are linked by off-line rendering (model to images) and image analysis (images to model).]
The Problem with Gouraud….
Gouraud shading is the established technique for rendering, but it has well-known limitations:
– vertex lighting only works well for small polygons…
– … but we don’t want lots of polygons!
Pre-Compute the Lighting
The solution is to pre-compute some canonical light effects as texture maps
For example…
Rendering using Light Maps
Suppose we want to show the effect of a wall light:
– create the wall as a single polygon
– apply vertex lighting
– apply the texture map
– in a second rendering pass, apply a light map to the wall
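The second pass amounts to a per-texel multiply of the wall's texture by the pre-computed light map - a minimal NumPy sketch (graphics hardware typically does the equivalent with modulate-mode texture combining):

```python
import numpy as np

def apply_light_map(base_texture, light_map):
    """Second rendering pass as a per-texel modulate: the textured wall is
    multiplied by a pre-computed light map, darkening unlit regions and
    brightening the pool of light. Both arrays are HxWx3 in [0, 1]."""
    return np.clip(base_texture * light_map, 0.0, 1.0)
```

Because the light map is just another texture, the lighting detail is independent of the polygon count - the wall stays a single polygon.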
Light Maps
Widely used in the games industry
The latest graphics cards allow multiple texture maps per pixel
Parametric Surface Representation
Rather than requiring the user to represent curved surfaces as an ‘IndexedFaceSet’ of flat polygons, some modelling systems allow representation as Bezier or spline surfaces
See Hearn & Baker, Chap 10
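As a taste of what such a representation looks like, here is a sketch evaluating a bicubic Bezier patch at a parameter point (u, v) - a surface defined by a 4x4 grid of control points rather than many flat polygons. This is a generic illustration, not material from the course.

```python
from math import comb

def bernstein(k, t):
    """Cubic Bernstein basis polynomial B_{k,3}(t)."""
    return comb(3, k) * t**k * (1 - t)**(3 - k)

def bezier_point(ctrl, u, v):
    """Evaluate a bicubic Bezier patch at (u, v).
    `ctrl[i][j]` is an (x, y, z) control point."""
    x = y = z = 0.0
    for i in range(4):
        for j in range(4):
            w = bernstein(i, u) * bernstein(j, v)
            x += w * ctrl[i][j][0]
            y += w * ctrl[i][j][1]
            z += w * ctrl[i][j][2]
    return (x, y, z)
```

Tessellating the patch then just means evaluating it over a grid of (u, v) values, at whatever resolution the view demands.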
Constructive Solid Geometry
Rather than modelling objects as surfaces, some systems work in terms of solid objects - a field known as Constructive Solid Geometry (CSG)
Hearn & Baker, Chap 10
Primitive objects (sphere, cylinder, torus, ..) combined by operators (union, intersection, difference)
Result is always a solid
Rendering is typically via ray tracing
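The operator algebra can be sketched with solids as point-membership tests combined by Boolean operators - an illustrative Python sketch, not a full CSG renderer:

```python
def sphere(center, radius):
    """Primitive solid as a membership test: point -> inside?"""
    cx, cy, cz = center
    return lambda p: (p[0]-cx)**2 + (p[1]-cy)**2 + (p[2]-cz)**2 <= radius**2

def union(a, b):        return lambda p: a(p) or b(p)
def intersection(a, b): return lambda p: a(p) and b(p)
def difference(a, b):   return lambda p: a(p) and not b(p)

# A ball with a bite taken out of it - the result is always a solid.
solid = difference(sphere((0, 0, 0), 1.0), sphere((1, 0, 0), 0.8))
```

A ray tracer does the analogous operations on the intervals where a ray is inside each primitive, which is why ray tracing is the natural rendering method for CSG.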
Volume Graphics
A very new approach is to model using volumes - with varying transparency
OpenGL Volumizer adds this capability to OpenGL
See: www.sgi.com/software/volumizer
Mitsubishi Volume Pro 500 board:
– www.rtviz.com
Procedural Modelling
Objects can be defined procedurally - ie by mathematical functions
See Hearn & Baker, Chap 10
Fractals are well-known example
See The Fractory: library.advanced.org/3288/
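A classic procedural example is fractal terrain by midpoint displacement: recursively displace the midpoint of each segment by a random offset that halves in scale at every level. A minimal 1D sketch (the fixed seed is an assumption so the sketch is repeatable):

```python
import random

def midpoint_displacement(left, right, depth, roughness=0.5, rng=None):
    """Fractal terrain profile: returns a list of 2**depth + 1 heights
    generated procedurally from just two endpoint values."""
    rng = rng or random.Random(0)   # fixed seed: repeatable sketch
    if depth == 0:
        return [left, right]
    mid = (left + right) / 2 + rng.uniform(-roughness, roughness)
    left_half = midpoint_displacement(left, mid, depth - 1, roughness / 2, rng)
    right_half = midpoint_displacement(mid, right, depth - 1, roughness / 2, rng)
    return left_half[:-1] + right_half   # drop duplicated midpoint
```

The point of the procedural approach shows clearly here: arbitrary detail from a tiny definition, generated on demand.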
Other Important Topics
– Colour
– Anti-aliasing
– Animation
– … and much more!
Learning More
Journals:
– IEEE Computer Graphics and Applications
– Computer Graphics Forum
– Computers and Graphics
Conferences:
– ACM SIGGRAPH (Proceedings as ACM Computer Graphics)
– Eurographics
– Eurographics UK