Texturing
Christopher Dyken, Martin Reimers, Atgeirr F. Rasmussen
1.10.2008
Page 1
Real Time Rendering:
- Chapter 5: Visual Appearance
  - Aliasing and Antialiasing
- Chapter 6: Texturing
  - The Texturing Pipeline
  - Image Texturing
  - Procedural Texturing
The Red Book:
I Chapter 9: Texture Mapping
Page 2
Texturing
Page 3
What is texturing?
Texturing in computer graphics
Texturing means modifying a surface's appearance using an image, function or other data source.

Texturing may for example modify:
- Color (image mapping).
  - The most common case.
- Normals (bump mapping).
- Geometry (displacement mapping).
Page 4
Motivation
- Better appearance at lower cost.
  - May use lower quality geometry, i.e. fewer vertices.
  - May use less advanced lighting algorithms.
- Easy to add decals, text, images.
Page 5
Example textures
Example textures from www.arroway.de.
Diffuse
Bump
Specular
Page 6
Example rendering
Page 7
Tradeoffs
Basic rule: fine details go in textures, coarser features need actual geometry.
The optimal boundary between the two may depend on a multitude of factors.
Typical case
1. Far away: image texturing is fine.
2. Closer: the illusion breaks down; bump mapping is the optimal solution.
3. View approaches tangential: the illusion breaks down again; displacement mapping or proper geometry is necessary.
Page 8
Defining textures in OpenGL
Textures are managed through texture names:

GLuint texture_names[n];
glGenTextures(n, texture_names);
glDeleteTextures(n, texture_names);

OpenGL supports several texture targets: GL_TEXTURE_1D, GL_TEXTURE_2D, GL_TEXTURE_3D, . . .

To say we want to use a texture, we bind to it:

glBindTexture(target, texture_names[i]);

We specify texture data by binding to the target and calling:

GLubyte image[512][512][3];
glTexImage2D(GL_TEXTURE_2D, level, iformat,
             width, height, border,
             format, type, &image[0][0][0]);
Page 9
Texture coordinates
Texture coordinates t1, t2, t3 are parameter space positions that assign a position in the texture to each vertex p1, p2, p3 (red arrows).

[Figure: triangle with vertices p1, p2, p3 and the corresponding texture coordinates t1, t2, t3 in the texture domain]

- Specified arbitrarily at each vertex.
- Interpolated over each triangle.
⇒ a texture coordinate for each fragment (blue arrow).
Notice that the images of texels and fragments rarely match.
Page 10
Parametric surfaces usually have a natural set of texcoords
Parameterization
Objects without a parameterization need to be parameterized.

We must find a mapping φ from the domain T to the image S.

Parameterization is not trivial and is an open research question.

[Figure: mapping φ from the parameter domain T to the surface S]

Sometimes we can obtain texcoords by projecting the vertex positions onto an intermediate geometry (projector functions):

- Parallel projection onto a plane
- Along the surface normal onto a sphere
- Along the surface normal onto a cube
Page 11
In OpenGL we use glTexCoord to specify the current texcoord; glVertex creates a vertex with the current texcoord associated.
glTexCoord before glVertex
Corresponder functions
Corresponder functions specify the mapping between texture coordinates and texels.
Vanilla OpenGL corresponder function
glTexCoord( 0.0, 0.0 ) ⇐⇒ my_texels[0][0]
glTexCoord( 1.0, 0.0 ) ⇐⇒ my_texels[511][0]
glTexCoord( 0.0, 1.0 ) ⇐⇒ my_texels[0][511]
glTexCoord( 1.0, 1.0 ) ⇐⇒ my_texels[511][511]
Page 12
Generalized Texturing
Texturing works by modifying surface attributes over the triangle.

1. The fragment location in space is the starting point.
2. A projector function gives parameter space values.
3. The corresponder function transforms parameter space values to texture image space values.
4. Sampling the texture at the image space position gives a value.
5. The value transform function transforms the value (e.g. gamma correction).
6. The resulting value is used to modify a surface property (e.g. surface color).
Page 13
The projector function
takes an object space position and gives a parameter space position.

We can project onto a simple intermediate object and use that object's parameterization to determine the parameter position:

- Parallel projection onto a plane
- Along the surface normal onto a sphere
- Along the surface normal onto a cube

Parametric curved surfaces usually have an intrinsic parameter space, and we can use this directly.
OpenGL’s glTexGen provides some different projector functions.
Page 14
Ideally, the texture is glued to the object in a way that:
- minimizes distortion
- gives good correspondence between object and parameter space.

[Figure: mapping φ from the parameter domain T to the surface S]

The problem of fitting a parameter space onto an object is known as parameterization and is an open research question.
Parameterization is a topic in INF 4360.
Page 15
The corresponder function
maps from texture domains to image locations.
- Texture domains are usually 2D, with (u, v) in [0, 1] × [0, 1].
- Sometimes 3D, where the third coordinate can represent depth ⇒ volumetric texture.
- And in some cases 4D (homogeneous coordinates) ⇒ e.g. spotlight effects.
Often, the corresponder function simply scales the coordinates
[0, 1]× [0, 1] → [0, 256]× [0, 256]
for a texture of size 256× 256.
Page 16
The corresponder function also determines behavior outside [0, 1] × [0, 1]:

[Figure: repeat, mirror, clamp, clamp to border]

This is called the wrap mode and is specified with glTexParameter.
We can make the texture repeat itself:

[Figure: quads textured with coordinates outside [0, 1] × [0, 1] — e.g. corners at [4, 4], [2, 2] or [3/4, 3/4] — showing repeated, partial and magnified copies of the texture]
Page 17
Texture magnification
In this case, a texel covers several pixels.
OpenGL has two strategies to deal with this:
- Nearest neighbour.
- Bilinear interpolation.

The texture magnification filter is specified using one of:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
Page 18
Texture minification
If a pixel covers several texels, the texture is under-sampled and we get aliasing artifacts.

Mipmaps are pre-filtered textures, halving the size for each level:

[Figure: mipmap levels 0 to 6]

Then, use the mipmap level such that pixel and texel size match, and either:
- Choose the nearest mipmap level.
- Interpolate between two adjacent mipmap levels.

The texture minification filter is specified using
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, filter);
Page 19
Texture combine function
The texture combine function specifies how the texture color is combined with the color of the fragment.
OpenGL fixed function texture combine functions
Replace: replaces the fragment color with the texture color.
Decal: uses the texture α to blend between the fragment color and the texture color.
Blend: uses the texture color to blend the fragment color with a pre-specified constant color.
Modulate: multiplies the fragment color with the texture color.
Lighted textures
Render white shapes with lighting enabled and use the modulate texture combine.

In OpenGL, glTexEnvf specifies the texture combine function.
Page 20
Texturing Methods
Page 21
Environment maps

Shiny surfaces, such as mirrors, produce specular reflections that mirror the whole environment (not just the light sources).

E.g., if a shiny metal ball is placed in a room, we see the contents of the room, albeit distorted, in the ball.

Environment mapping is a simple yet powerful method of generating approximations of reflections in curved surfaces.

We assume that reflected objects and lights are far away and that the reflector will not reflect itself ⇒ only the reflected view direction matters, not the position on the surface.
We will look at two environment mapping techniques:
I sphere mapping
I cube mapping
Page 22
Sphere mapping
(first environment mapping technique supported in hardware)
[Figure: view direction rays from the image plane hitting a tiny reflective sphere; surface normals and reflected rays]

A sphere map is an orthographic image of a tiny reflective sphere.

If the sphere is small compared to the distance to the environment, each pixel captures the environment in a particular direction.
Page 23
For a point p with surface normal n, seen from the eye position e, the incident view direction vector is

v = p − e.

The reflected view direction vector is

r = v − 2(n · v)n.

The sphere normal m corresponding to this reflected direction is

m = (r + [0, 0, 1]ᵀ) / ‖r + [0, 0, 1]ᵀ‖,

and the sphere normal at texture coordinate t′ ∈ [−1, 1] × [−1, 1] is

m = [t′x, t′y, √(1 − t′ · t′)]ᵀ.

Combining the two, the texture position encoding the reflection direction is

t′ = rxy / ‖r + [0, 0, 1]ᵀ‖.

And finally, we find t ∈ [0, 1] × [0, 1]:

t = (t′ + 1) / 2.
(OpenGL can do these calculations for us using glTexGen.)
Page 24
Cube mapping
Each of the six images of a cube map captures the environment seen from the center of a cube through one of the cube's faces.

[Figure: box around the view point, with a face frustum and face image for each face]

Since regular perspective is used, these images can be rendered at runtime with render-to-texture (dynamic environment maps).
Page 25
Bump and normal maps
[Figure: the same model rendered as geometry, with Phong shading, with a normal map, and with a parallax occlusion map]

Idea: use textures to modulate the interpolated surface normal ⇒ the surface looks more geometrically detailed!
Page 26
Normal mapping
Phong: interpolate normal vector and normalize

varying vec3 normal;
// l and h (light and half vectors) assumed available
void main() {
    vec3 n = normalize( normal );
    gl_FragColor = max( dot(l,n), 0.0 ) * gl_Color
                 + pow( max( dot(h,n), 0.0 ), 40.0 ) * vec4(1.0);
}

Normal map: fetch normal vector from texture

uniform sampler2D nm;
uniform mat3 nmat; // normal transform, usually done by the vertex shader
void main() {
    vec3 n = nmat * ( 2.0 * texture2D( nm, gl_TexCoord[0].xy ).xyz - vec3(1.0) );
    gl_FragColor = max( dot(l,n), 0.0 ) * gl_Color
                 + pow( max( dot(h,n), 0.0 ), 40.0 ) * vec4(1.0);
}
Excellent results, but difficult to animate (must update the texture)!
Page 27
Tangent space bump-mapping
Tangent space
For each vertex i, specify:
- a tangent vector ti
- a bi-tangent vector bi (orthogonal to ti)
- a normal vector ni (orthogonal to ti and bi).
Interpolate these three vectors over triangles.
[Figure: per-vertex frames (ti, bi, ni) at p1, p2, p3, interpolated to a frame (t, b, n) at a point p inside the triangle]
Page 28
Tangent space bump mapping code sketch
varying vec3 vt;       // interpolated tangent
varying vec3 vb;       // interpolated bi-tangent
varying vec3 vn;       // interpolated normal
uniform sampler2D bm;  // bump map (height in the alpha channel)
uniform float tw, th;  // bump map width and height

void main() {
    // Estimate slopes along u and v using forward differences
    float f   = texture2D( bm, gl_TexCoord[0].xy ).a;
    float dfu = texture2D( bm, gl_TexCoord[0].xy + vec2(1.0/tw, 0.0) ).a - f;
    float dfv = texture2D( bm, gl_TexCoord[0].xy + vec2(0.0, 1.0/th) ).a - f;
    // Perturb tangent space vectors accordingly
    vec3 t = normalize( vt ) + dfu * normalize( vn );
    vec3 b = normalize( vb ) + dfv * normalize( vn );
    vec3 n = normalize( cross( t, b ) );
    // And shade (l and h assumed available as in the previous examples)
    gl_FragColor = max( dot(l,n), 0.0 ) * gl_Color
                 + pow( max( dot(h,n), 0.0 ), 40.0 ) * vec4(1.0);
}
Page 29
1D textures and NPR rendering
Photorealistic images are not always the best way of conveying a visual message. Non-photorealistic rendering (NPR) is the process of making "stylistic images", images that don't look realistic:

- technical drawings
- hand-drawn look
- cartoon look
- oil-paint look
- ...

Cow shading:
texture1D( four-color-ramp, max( dot(l,n), 0.0 ) );
+ silhouette edge detection and rendering.
Page 30