
Page 1:

CG Algorithms and Implementation:
“From Vertices to Fragments”

Angel: Chapter 6
OpenGL Programming and Reference Guides, other sources; ppt from Angel, AW, van Dam, etc.

CSCI 6360/4360

Page 2:

Introduction: Implementation -- Algorithms

• Angel’s chapter title: “From Vertices to Fragments”
  – … and the chapter covers various algorithms
• From one perspective of the pipeline, the next steps in the viewing pipeline are:
  – Clipping
    • Eliminates objects that lie outside the view volume, and so are not visible in the image
  – Rasterization
    • Produces fragments from the remaining objects
  – Hidden surface removal (visible surface determination)
    • Determines which object fragments are visible

Page 3:

Introduction: Implementation -- Algorithms

• Need to consider another perspective as well
• Next steps in the viewing pipeline:
  – Clipping
  – Rasterization
    • Produces fragments from remaining objects
  – Hidden surface removal (visible surface determination)
    • Determines which object fragments are visible
    • Show objects (surfaces) not blocked by objects closer to the camera
• And next week …

Page 4:

Introduction: Implementation -- Algorithms

• Next steps in the viewing pipeline:
  – Clipping
  – Rasterization
  – Hidden surface removal (visible surface determination)
• Will consider the above in some detail, to give a feel for the computational cost of these elements
  – “In some detail” = algorithms for implementing them
  – … algorithms that are efficient
  – Same algorithms for any standard API
  – Will see different algorithms for the same basic tasks

Page 5:

About Implementation Strategies

• Angel: at the most abstract level …
  – Start with vertices generated by the application program
  – Do stuff like transformation, clipping, …
  – End up with pixels in a frame buffer
• Can consider two basic strategies (will see them again in hidden surface removal)
• Object-oriented -- an object at a time …
  – For each object, render the object
  – Each object goes through a series of steps
• Image-oriented -- a pixel at a time …
  – For each pixel, assign a frame buffer value
  – Such scanline-based algorithms exploit the fact that in images, values from one pixel to another often don’t change much
    • Coherence
  – So, can use the value of a pixel in calculating the value of the next pixel
    • Incremental algorithm

Page 6:

Tasks to Render a Geometric Entity, 1
Review and Angel Explication

• Angel introduces terms and ideas more general than just the OpenGL pipeline …
  – Recall the chapter title, “From Vertices to Fragments” … and even pixels
  – From definition in the user program to (possible) display on an output device
  – Modeling, geometry processing, rasterization, fragment processing
• Modeling
  – Performed by the application program, e.g., create sphere polygons (vertices)
  – Angel example of spheres and creating a data structure for OpenGL use
  – Product is vertices (and their connections)
  – Application might even reduce “load”, e.g., by omitting back-facing polygons

Page 7:

Tasks to Render a Geometric Entity, 2
Review and Angel Explication

• Geometry Processing
  – Works with vertices
  – Determines which geometric objects appear on the display
  – 1. Perform clipping to the view volume
    • Changes object coordinates to eye coordinates
    • Transforms vertices to the normalized view volume using the projection transformation
  – 2. Primitive assembly
    • Clipping an object (and its surfaces) can result in new surfaces (e.g., a shorter line, a polygon of different shape)
    • Working with these “new” elements to “re-form” (clipped) objects is primitive assembly
    • Necessary for, e.g., shading
  – 3. Assignment of color to vertex
• Modeling and geometry processing are called “front-end processing”
  – All involve 3-d calculations and require floating-point arithmetic

Page 8:

Tasks to Render a Geometric Entity, 3
Review and Angel Explication

• Rasterization
  – Only x, y values are needed for the (2-d) frame buffer
    • … as the frame buffer is what is displayed
  – Rasterization, or scan conversion, determines which fragments are displayed (put in the frame buffer)
    • For polygons, rasterization determines which pixels lie inside the 2-d polygon determined by the projected vertices
  – Colors
    • Most simply, fragments (and their pixels) are determined by interpolation of vertex shades and put in the frame buffer
  – Output of the rasterizer is in units of the display (window coordinates)

Page 9:

Tasks to Render a Geometric Entity, 4
Review and Angel Explication

• Fragment Processing
  – Colors
    • OpenGL can merge color (and lighting) results of the rasterization stage with the geometric pipeline
    • E.g., a shaded, texture-mapped polygon (next chapter)
      – Lighting/shading values of the vertex merged with the texture map
    • For translucence, must allow light to pass through the fragment
    • Blending combines fragment colors with colors already in the frame buffer
      – e.g., multiple translucent objects
  – Hidden surface removal performed fragment by fragment using depth information
  – Anti-aliasing also dealt with here

Page 10:

Efficiency and Algorithms

• For cg illumination/shading, saw how efficiency drove the algorithms
  – Phong shading is “good enough” to be perceived as “close enough” to the real world
  – Close attention to algorithmic efficiency
• Similarly, for frequently calculated geometric processing, efficiency is a prime consideration
• Will consider efficient algorithms for:
  – Clipping
  – Line drawing
  – Visible surface determination

Page 11:

Recall, Clipping …

• The scene’s objects are clipped against the clip-space bounding box
  – Eliminates objects (and pieces of objects) not visible in the image
  – Efficient clipping algorithms exist for homogeneous clip space
• Perspective division divides all components by the homogeneous coordinate, w
• Clip space becomes Normalized Device Coordinate (NDC) space after perspective division
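As a minimal sketch of that divide-by-w step (the struct names and layout are illustrative, not from Angel’s text):

    /* Sketch: homogeneous clip coordinates to normalized device coordinates. */
    typedef struct { float x, y, z, w; } Vec4;   /* clip-space point */
    typedef struct { float x, y, z; } Vec3;      /* NDC point        */

    Vec3 perspective_divide(Vec4 clip) {
        /* After the divide, points inside the view volume satisfy
           -1 <= x, y, z <= 1 (the OpenGL NDC convention). */
        Vec3 ndc = { clip.x / clip.w, clip.y / clip.w, clip.z / clip.w };
        return ndc;
    }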

Page 12:

Clipping: Efficiency Matters

Page 13:

Clipping: Efficiency Matters

• Clipping is performed many times in the cg pipeline
  – Depending on the algorithms, number of lines, polygons
• Different kinds of clipping
  – 2D against the clipping window, 3D against the clipping volume
• Easy for line segments and polygons
  – Polygons can be handled in other ways, too
    • E.g., bounding boxes
• Hard for curves and text
  – Convert to lines and polygons first
• Will see the first example of a cg algorithm designed for very efficient execution
  – Efficiency considerations include:
    • multiplication vs. addition
    • Boolean vs. arithmetic operations
    • integer vs. real
    • space complexity
    • even the “constant” in time complexities

Page 14:

Clipping 2D Line Segments

• Could clip using brute force
  – Compute intersections with all sides of the clipping window
• Computing intersections is expensive
  – To explicitly find an intersection, essentially solve y = mx + b
    • Use the line’s endpoints to find slope and intercept
    • See if the intersection lies within the window
  – Requires multiplication/division
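To make that cost concrete, here is a small illustrative helper (not from the slides) that intersects a segment with one vertical window edge by solving y = mx + b; note the division and multiplication the brute-force approach pays per edge test:

    #include <stdbool.h>

    /* Hypothetical helper: intersect segment (x0,y0)-(x1,y1) with the
       vertical edge x = xmin by solving y = mx + b. */
    bool intersect_vertical_edge(float x0, float y0, float x1, float y1,
                                 float xmin, float *y_out) {
        if (x1 == x0) return false;           /* vertical segment: no slope  */
        if ((xmin - x0) * (xmin - x1) > 0)    /* edge not between endpoints  */
            return false;
        float m = (y1 - y0) / (x1 - x0);      /* one division ...            */
        *y_out = y0 + m * (xmin - x0);        /* ... and one multiplication  */
        return true;
    }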

Page 15:

Cohen-Sutherland Algorithm: An Example Clipping Algorithm

• The Cohen-Sutherland clipping algorithm considers all the different cases for where a line may be wrt the clipping region
• E.g., will first eliminate as many cases as possible without computing intersections
  – E.g., both ends of the line outside (line C-D), or inside (line A-B)
  – Again, computing intersections is expensive
• Start with the four lines that determine the sides of the clipping window
  – As if extending the sides, top, and bottom of the window out
  – Will use xmin, xmax, ymin, ymax in the algorithm

[Figure: clipping window bounded by the lines x = xmin, x = xmax, y = ymin, y = ymax]

Page 16:

Consider Cases: Where Endpoints Are

• Based on the relationship of the endpoints to the clipping region (xmin, xmax, ymin, ymax), define cases
• Case 1:
  – Both endpoints of the line segment inside all four lines
  – Draw (accept) the line segment as is
• Case 2:
  – Both endpoints outside all lines and on the same side of a line
  – Discard (reject) the line segment
  – “Trivially reject”
• Case 3:
  – One endpoint inside, one outside
  – Must do at least one intersection
• Case 4:
  – Both endpoints outside, not on the same side
  – May have a part inside
  – Must do at least one intersection

Page 17:

Defining Outcodes: A Representation for Efficiency

• For each line endpoint, define an outcode:
  – Endpoints are (x1, y1) and (x2, y2)
  – 4 bits for each endpoint: b0b1b2b3
    • b0 = 1 if y > ymax, 0 otherwise
    • b1 = 1 if y < ymin, 0 otherwise
    • b2 = 1 if x > xmax, 0 otherwise
    • b3 = 1 if x < xmin, 0 otherwise
• Examples in red with blue ends at right:
  – Tedious, but automatic
  – E.g., left line outcodes: 0000, 0000
  – E.g., right line outcodes: 0110, 0010
• Outcodes divide space into 9 regions
• Computation of an outcode requires at most 4 subtractions
  – E.g., y1 - ymax
• Testing of outcodes can be done with bitwise comparisons
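A sketch of the outcode computation in C (the particular bit-to-mask assignment is an implementation choice; the slide only fixes the order b0b1b2b3):

    /* Outcode bits, following the slide's b0..b3 meaning. */
    #define OUT_ABOVE 8   /* b0: y > ymax */
    #define OUT_BELOW 4   /* b1: y < ymin */
    #define OUT_RIGHT 2   /* b2: x > xmax */
    #define OUT_LEFT  1   /* b3: x < xmin */

    int outcode(float x, float y,
                float xmin, float xmax, float ymin, float ymax) {
        int code = 0;
        if      (y > ymax) code |= OUT_ABOVE;
        else if (y < ymin) code |= OUT_BELOW;
        if      (x > xmax) code |= OUT_RIGHT;
        else if (x < xmin) code |= OUT_LEFT;
        return code;
    }

Trivial accept is then (code1 | code2) == 0 and trivial reject is (code1 & code2) != 0: one bitwise operation each, no arithmetic.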

Page 18:

Now will consider each case using outcodes …

• Based on the relationship of the endpoints to the clipping region (xmin, xmax, ymin, ymax), define cases
• Case 1:
  – Both endpoints of the line segment inside all four lines
  – Draw (accept) the line segment as is
• Case 2:
  – Both endpoints outside all lines and on the same side of a line
  – Discard (reject) the line segment -- “trivially reject”
• Case 3:
  – One endpoint inside, one outside
  – Must do at least one intersection
• Case 4:
  – Both endpoints outside, not on the same side
  – May have a part inside
  – Must do at least one intersection

Page 19:

Using Outcodes, Case 1 (Example from Angel)

• Based on the relationship of the endpoints to the clipping region (xmin, xmax, ymin, ymax), define cases
• Case 1:
  – Both endpoints of the line segment inside all four lines
  – Draw (accept) the line segment as is
• AB: outcode(A) = outcode(B) = 0
  – A = 0000, B = 0000
  – Accept the line segment

Page 20:

Using Outcodes, Case 2 (Example from Angel)

• Case 2:
  – Both endpoints outside all lines and on the same side of a line
  – Discard (reject) the line segment -- “trivially reject”
• EF: outcode(E) && outcode(F) ≠ 0
  – && here is bitwise logical AND (the bitwise AND of the outcodes is nonzero)
  – E = 0010, F = 0010
  – Both outcodes have a 1 bit in the same place
  – Line segment is outside the corresponding side of the clipping window
  – Reject -- typically the most frequent case

Page 21:

Using Outcodes, Case 3 (Example from Angel)

• Case 3:
  – One endpoint inside, one outside
  – Must do at least one intersection
• CD: outcode(C) = 0, outcode(D) ≠ 0
  – C = 0000, D = anything else; here, D = 0010
  – Do need to compute the intersection
  – The location of the 1 in outcode(D) determines which edge to intersect with
  – So, the “shortened” line C-D’ is what is displayed
  – Note: if there were a segment from A to a point in a region with two 1s in its outcode, might have to do two intersections

Page 22:

Using Outcodes, Case 4 (Example from Angel)

• Case 4:
  – Both endpoints outside, not on the same side
  – May have a part inside
  – Must do at least one intersection
• GH, IJ (same outcodes), neither zero, but && of the endpoints = zero
  – G (and I) = 0001, H (and J) = 1000
  – Test for intersection
  – If found, shorten the line segment by intersecting with one of the sides of the window (giving I’, J’)
  – Compute the outcode of the intersection (the new endpoint of the shortened segment)
  – (Recursively) re-execute the algorithm

Page 23:

Efficiency and Extension to 3D

• Very efficient in many applications
  – Clipping window small relative to the size of the entire database
  – Most line segments are outside one or more sides of the window and can be eliminated based on their outcodes
• Inefficient when the code has to be re-executed for line segments that must be shortened in more than one step
• For 3 dimensions
  – Use 6-bit outcodes
  – When needed, clip the line segment against planes
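Putting the 2D cases together, here is a sketch of the full Cohen-Sutherland loop (the standard textbook formulation, not Angel’s code verbatim), reusing outcode() from the earlier sketch:

    #include <stdbool.h>

    /* Clips the segment in place; returns false if it is rejected. */
    bool cs_clip(float *x0, float *y0, float *x1, float *y1,
                 float xmin, float xmax, float ymin, float ymax) {
        int c0 = outcode(*x0, *y0, xmin, xmax, ymin, ymax);
        int c1 = outcode(*x1, *y1, xmin, xmax, ymin, ymax);
        for (;;) {
            if (!(c0 | c1)) return true;    /* Case 1: trivial accept */
            if (c0 & c1)    return false;   /* Case 2: trivial reject */
            /* Cases 3 and 4: shorten at one window edge and re-test */
            int out = c0 ? c0 : c1;         /* an endpoint that is outside */
            float x, y;
            if (out & OUT_ABOVE)      { x = *x0 + (*x1 - *x0) * (ymax - *y0) / (*y1 - *y0); y = ymax; }
            else if (out & OUT_BELOW) { x = *x0 + (*x1 - *x0) * (ymin - *y0) / (*y1 - *y0); y = ymin; }
            else if (out & OUT_RIGHT) { y = *y0 + (*y1 - *y0) * (xmax - *x0) / (*x1 - *x0); x = xmax; }
            else                      { y = *y0 + (*y1 - *y0) * (xmin - *x0) / (*x1 - *x0); x = xmin; }
            if (out == c0) { *x0 = x; *y0 = y; c0 = outcode(x, y, xmin, xmax, ymin, ymax); }
            else           { *x1 = x; *y1 = y; c1 = outcode(x, y, xmin, xmax, ymin, ymax); }
        }
    }

The “shortened in more than one step” inefficiency mentioned above is visible here: each pass through the loop may clip against only one edge before re-testing.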

Page 24:

Rasterization

Page 25:

Rasterization

• End of the geometry pipeline (processing)
  – Putting values in the frame buffer (or raster)
  – As in, write_pixel(x, y, color)
• At this stage, fragments -- clipped, colored, etc., at the level of vertices -- are turned into values to be displayed
  – (deferring for a moment the question of hidden surfaces and colors)
• The essential question is “how to go from vertices to display elements?”
  – E.g., lines
• Algorithmic efficiency is a continuing theme

Page 26:

Drawing Algorithms

• As noted, implemented in the graphics processor
  – Used bazillions of times per second
  – Line, curve, … algorithms
• The line is the paradigm example
  – The most common 2D primitive -- done 100s, 1000s, or 10s of 1000s of times each frame
  – Even 3D wireframes are eventually 2D lines
  – Optimized algorithms contain numerous tricks/techniques that help in designing more advanced algorithms
• Will develop a series of strategies, working toward efficiency

Page 27:

Drawing Lines: Overview

• Recall the fundamental “challenge” of computer graphics:
  – Representing the analog (physical) world on a discrete (digital) device
  – Consider a very low resolution display:
• Sampling a continuous line on a discrete grid introduces sampling errors: the “jaggies”
  – For horizontal, vertical, and diagonal lines, all pixels lie on the ideal line: special cases
• For lines at arbitrary angles, pick the pixels closest to the ideal line
• Will consider several approaches
  – But “fast” will be best

Page 28:

Strategy 1: Really Basic Algorithm

• First, the (really) basic algorithm:
  – Find the equation of the line that connects the 2 points P(x,y) and Q(x,y)
  – y = mx + b
  – m = Δy / Δx, where Δx = xend - xstart, Δy = yend - ystart
• Starting with the leftmost point P:
  – increment x by 1 and calculate y = mx + b at each x value
  – where m = slope, b = y intercept

    for x = Px to Qx
        y = round(m * x + b)   // compute y
        write_pixel(x, y)

• This works, but uses a computationally expensive operation (multiply) at each step
  – Perhaps worked for your homework
  – And note that turning on a pixel does, in fact, approximate the ideal line

Page 29:

Strategy 2: Incremental Algorithm, 1

• So, the (really) basic algorithm:

    for x = Px to Qx
        y = round(m * x + b)   // compute y
        write_pixel(x, y)

• Can modify the basic algorithm to be an incremental algorithm
  – Use the current state of the computation in finding the next state
    • i.e., incrementally going toward the solution
  – Not “recomputing” the entire solution, as above
    • Not the same computation regardless of where we are
    • Use the partial solution -- here, the last y value -- to find the next value
• Modify the (really) basic algorithm to just add the slope, vs. multiply:

    m = Δy / Δx        // compute slope (to be added)
    y = m * Px + b     // still multiply to get first y value
    for x = Px to Qx
        write_pixel(x, round(y))
        y = y + m      // increment y for next value, just by adding

• Make incremental calculations based on the preceding step to find the next y value
  – Works here because we go one unit to the right each step: increment x by 1 and y by the slope m

Page 30:

Strategy 2: Incremental Algorithm, 2

• Incremental algorithm:

    m = Δy / Δx        // slope
    y = m * Px + b     // first y value
    for x = Px to Qx
        write_pixel(x, round(y))
        y = y + m      // increment y for next

• Definite improvement over the basic algorithm
• Still problems -- still too slow:
  a. Rounding to integers takes time
  b. Variables y and m must be real or fractional binary, because the slope is a fraction
• Ideally, want just integers and addition
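As a concrete C rendering of the incremental algorithm for a shallow positive slope (a sketch; write_pixel is assumed to exist):

    #include <math.h>

    void write_pixel(int x, int y);   /* assumed frame-buffer write */

    /* Incremental (DDA) line for 0 < m <= 1. */
    void dda_line(int x0, int y0, int x1, int y1) {
        float m = (float)(y1 - y0) / (float)(x1 - x0);  /* slope: computed once */
        float y = (float)y0;
        for (int x = x0; x <= x1; x++) {
            write_pixel(x, (int)lroundf(y));  /* rounding: the remaining cost */
            y += m;                           /* one addition per pixel       */
        }
    }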

Page 31:

Strategy 3: Midpoint Line Algorithm

• The midpoint line algorithm (MLA) considers that the “ideal” line is in fact approximated on a raster (pixel-based) display
• Hence, there will be “error” between where the ideal line is and how it is represented by turning on pixels
• Will use the amount of “possible error” to decide which pixel to turn on at each successive step
• … and will do this by only adding and comparing (=, >, <)

Page 32:

Strategy 3: MLA, 1

• Assume that the (ideal) line’s slope is shallow and positive (0 < m < 1)
  – Other slopes can be handled by suitable reflections about the principal axes
• Note: we are calculating the “ideal line” and turning on pixels as an approximation
• Assume that we have just selected the pixel P at (xp, yp)
• Next, must choose between:
  – the pixel to the right (pixel E), or
  – the pixel one right and one up (pixel NE)
• Let Q be the intersection point of the line being scan-converted with the grid line at x = xp + 1
• Note that the pixel turned on is not exactly on the ideal line -- hence, “error”

Page 33:

Strategy 3: MLA, 2

• Observe on which side of the (ideal) line the midpoint M lies:
  – The E pixel is closer to the line if the midpoint lies above the line
  – The NE pixel is closer to the line if the midpoint lies below the line
• The (ideal) line passes between E and NE
  – The point closer to the intersection point Q must be chosen: either E or NE
• Error:
  – The vertical distance between the chosen pixel and the actual line -- always ≤ ½
• Here, the algorithm chooses NE as the next pixel for the line shown
• Now, find a way to calculate on which side of the line the midpoint lies

Page 34:

MLA -- Use Equation of Line for Selection

• How to choose which pixel, based on M and the distance of M from the ideal line
• Line equation as a function, f(x):
  – y = m * x + B
  – y = (dy/dx) * x + B
• And the line equation as an implicit function:
  – f(x, y) = a*x + b*y + c = 0
  – From the above, algebraically (multiply by dx):
      y * dx = dy * x + B * dx
  – So, algebraically (with B the y-intercept, to avoid clashing with coefficient b):
      a = dy, b = -dx, c = B * dx; a > 0 for y0 < y1
• Properties (proof by case analysis):
  – f(xm, ym) = 0 when any point M is on the line
  – f(xm, ym) < 0 when any point M is above the line
  – f(xm, ym) > 0 when any point M is below the line (the case here)
• The decision (to choose E or NE) will be based on the value of the function at the midpoint
  – M at (xp + 1, yp + 1/2) -- the midpoint

Page 35:

MLA, Decision Variable

• So, find a way to (efficiently) calculate on which side of the line the midpoint lies
• And that’s what we just saw
  – E.g., f(xm, ym) < 0 when any point M is above the line
• Decision variable d:
  – Only need the sign (fast) of f(xp + 1, yp + 1/2) to see where the line lies, and then pick the nearest pixel
  – d = f(xp + 1, yp + 1/2)
    • if d > 0, choose pixel NE
    • if d < 0, choose pixel E
    • if d = 0, choose either one consistently
• Next, how to update d:
  – On the basis of picking E or NE, figure out the location of M for that pixel, and the corresponding value of d for the next grid line

Page 36:

How to update d, if E was chosen

• M is incremented by one step in the x direction
• Recall f(x, y) = a*x + b*y + c = 0 and rewrite:
  – To get the incremental difference ΔE, subtract d_old from d_new:
      d_new = f(xp + 2, yp + 1/2) = a(xp + 2) + b(yp + 1/2) + c
      d_old = a(xp + 1) + b(yp + 1/2) + c
• Derive the value of the decision variable at the next step incrementally, without computing f(M) directly (recall “incremental algorithm”):
  – d_new = d_old + ΔE = d_old + dy
  – ΔE = a = dy
• ΔE can be thought of as the correction, or update factor, to take d_old to d_new
  – (and this is the insight: “carrying along the error, vs. recalculating”)
  – d_new = d_old + a
• Called a “forward difference”

Page 37:

How to update d, if NE was chosen

• M is incremented by one step in each of the x and y directions
  – d_new = f(xp + 2, yp + 3/2)
  – d_new = a(xp + 2) + b(yp + 3/2) + c
• Subtract d_old from d_new to get the incremental difference:
  – d_new = d_old + a + b
  – ΔNE = a + b = dy - dx
• So, incrementally:
  – d_new = d_old + ΔNE = d_old + dy - dx

Page 38:

MLA Summary

• At each step, the algorithm chooses between 2 pixels based on the sign of the decision variable d calculated in the previous iteration
• Update the decision variable d by adding either ΔE or ΔNE to the old value, depending on the choice of pixel
• Note: the first pixel is the first endpoint (x0, y0), so we can directly calculate the initial value of d for choosing between E and NE
• The first midpoint is at (x0 + 1, y0 + 1/2):
      F(x0 + 1, y0 + 1/2) = a(x0 + 1) + b(y0 + 1/2) + c
                          = a*x0 + b*y0 + c + a + b/2
                          = F(x0, y0) + a + b/2
• But (x0, y0) is a point on the line, so F(x0, y0) = 0
  – Therefore d_start = a + b/2 = dy - dx/2; use d_start to choose the second pixel, etc.
• To eliminate the fraction in d_start: redefine F by multiplying it by 2, F(x, y) = 2(ax + by + c)
  – This multiplies each constant and the decision variable by 2, but does not change the sign
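The whole derivation collapses to a few integer additions per pixel. Here is a sketch for the first octant (0 ≤ dy ≤ dx), following the doubled form above (d starts at 2*dy - dx, updated by 2*dy for E and 2*(dy - dx) for NE); write_pixel is assumed:

    void write_pixel(int x, int y);   /* assumed frame-buffer write */

    /* Integer midpoint line algorithm, first octant only. */
    void midpoint_line(int x0, int y0, int x1, int y1) {
        int dy = y1 - y0, dx = x1 - x0;
        int d      = 2 * dy - dx;      /* decision variable, fraction removed */
        int incrE  = 2 * dy;           /* update when E is chosen             */
        int incrNE = 2 * (dy - dx);    /* update when NE is chosen            */
        int x = x0, y = y0;
        write_pixel(x, y);
        while (x < x1) {
            if (d <= 0) { d += incrE;       x++; }   /* midpoint above line: E  */
            else        { d += incrNE; x++; y++; }   /* midpoint below line: NE */
            write_pixel(x, y);
        }
    }

Note there is no multiplication, division, or rounding in the loop: just the integer adds and sign tests the slides promised.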

Page 39:

Hidden Surface Removal

• Or, Visible Surface Determination (VSD)

Page 40:

Recall, Projection …

• Projectors
• View plane (or film plane)
• Direction of projection
• Center of projection
  – Eye, projection reference point

Page 41:

About Visible Surface Determination, 1

• Have been considering models, and how to create images from models
  – E.g., when the viewpoint/eye/COP changes, transform the locations of the vertices (polygon edges) of the model to form the image
• In fact, projectors extend from the front and back of all polygons
  – Though we are only concerned with “front” polygons

[Figure: projectors from the front (visible) surface only]

Page 42:

About Visible Surface Determination, 2

• To form the image, must determine which objects in the scene are obscured by other objects
  – Occlusion
• Definition of visible surface determination (VSD):
  – Given a set of 3-D objects and a view specification (camera), determine which lines or surfaces of the objects are visible
  – Also called Hidden Surface Removal (HSR)

Page 43:

Visible Surface Determination: Historical Notes

• Problem first posed for wireframe rendering
  – Doesn’t look too “real” (and in fact is ambiguous)
• Solution called “hidden-line (or surface) removal”
  – Lines themselves don’t hide lines
    • Lines must be edges of opaque surfaces that hide other lines
  – Some techniques show hidden lines as dotted or dashed lines for more information
• Hidden surface removal appears as one stage of the pipeline

Page 44:

Classes of VSD Algorithms

• Different VSD algorithms have advantages and disadvantages:
• 0. “Conservative” visibility testing:
  – Only trivial rejection -- does not give the final answer
  – E.g., back-face culling, canonical view volume clipping
  – Results must be fed to the algorithms mentioned below
• 1. Image precision
  – Resolve visibility at discrete points in the image
  – Z-buffer, scan-line (both in hardware), ray tracing
• 2. Object precision
  – Resolve for all possible view directions from a given eye point

Page 45:

Image Precision

• Resolve visibility at discrete points in the image
• Sample the model, then resolve visibility
  – Ray tracing, Z-buffer, scan-line
  – Operate on display primitives, e.g., pixels, scan-lines
  – Visibility resolved to the precision of the display
• (Very) high level algorithm:

    for (each pixel in image, i.e., from COP to model) {
        1. determine the object closest to the viewer pierced by the projector through the pixel
        2. draw the pixel in the appropriate color
    }

• Complexity: O(n * p), where n = number of objects, p = number of pixels
  – From the above loop: at each pixel, consider all objects and find the closest point

Page 46:

Object Precision, 1

• Resolve for all possible view directions from a given eye point
• Each polygon is clipped by the projections of all other polygons in front of it
• Irrespective of view direction or sampling density
• Resolve visibility exactly, then sample the results
• Invisible surfaces are eliminated and visible sub-polygons are created
  – e.g., variations on the painter’s algorithm, polygons clipping polygons, 3-D depth sort, BSP (binary-space partition) trees

Page 47:

Object Precision, 2

• (Very) high level algorithm:

    for (each object in the world) {
        1. determine the parts of the object whose view is unobstructed
           by other parts of it or any other object
        2. draw those parts in the appropriate color
    }

• Complexity: O(n^2), where n = number of objects
  – From the above loop: must consider all objects (visibility) interacting with all others
  – (but even when n << p, the “steps” are longer, as a constant factor)

Page 48:

“Ray Casting” for VSD

• Recall ray tracing
  – For each pixel, follow the path of “light” back to the source, considering surface properties (smoothness, color)
  – The “color” of the pixel is the result
• For “ray casting”:
  – Just follow a ray from the COP to the first polygon encountered
  – That polygon will be visible, and the others won’t be
  – Conceptually simple, but not used
• An image precision algorithm
  – Time proportional to the number of pixels

Page 49:

Painter’s Algorithm

• Another simple algorithm …
  – A way to resolve visibility
  – Create a drawing order, each polygon overwriting the previous ones; guarantees correct visibility at any pixel resolution
• Strategy is to work back to front
  – Find a way to sort polygons by depth (z), then draw them in that order
    • Sort polygons by the smallest (farthest) z-coordinate in each polygon
    • Draw the most distant polygon first, then work forward toward the viewpoint (the “painter’s algorithm”)
• An object precision algorithm
  – Time proportional to the number of objects/polygons
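A sketch of the ordering step, assuming a Polygon record that caches its farthest z; scan_convert is a hypothetical rasterizer called in back-to-front order:

    #include <stdlib.h>

    typedef struct { float zmin; /* farthest (smallest) z in polygon */ } Polygon;

    void scan_convert(const Polygon *p);   /* hypothetical rasterizer */

    /* qsort comparator: ascending zmin, so the most distant polygon
       is drawn first and nearer ones overwrite it. */
    static int by_depth(const void *a, const void *b) {
        float za = ((const Polygon *)a)->zmin;
        float zb = ((const Polygon *)b)->zmin;
        return (za > zb) - (za < zb);
    }

    void painters_algorithm(Polygon *polys, size_t n) {
        qsort(polys, n, sizeof polys[0], by_depth);
        for (size_t i = 0; i < n; i++)
            scan_convert(&polys[i]);   /* later polygons paint over earlier */
    }

The sort is where the simple version breaks down, as the next slide shows: a single zmin per polygon cannot order intersecting polygons or cyclic overlaps.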

Page 50:

Painter’s Algorithm Problems

• Intersecting polygons present a problem

• Even non-intersecting polygons can form a cycle with no valid visibility order:

Page 51:

Back-Face Culling: Overview

• Back-face culling directly eliminates (culls) polygons not facing the viewer
• Makes sense given the constraint of convex (no “inward” face) objects
• Computationally, can eliminate back faces by:
  – Line-of-sight calculations
  – Plane half-spaces
• In practice:
  – Surface (and vertex) normals are often stored with vertex-list representations
  – Normals are used both in back-face culling and in illumination/shading models

Page 52:

Back-Face Culling: Line of Sight Interpretation

• Use the outward normal (ON) of the polygon to test for rejection
• LOS = Line of Sight:
  – The projector from the center of projection (COP) to any point P on the polygon
  – (note: for parallel projections, LOS = DOP = direction of projection)
• If the normal faces in the same direction as the LOS, it’s a back face:
  – Use the dot product
  – if LOS · ON >= 0, then the polygon is invisible -- discard
  – if LOS · ON < 0, then the polygon may be visible
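In code, the test is one dot product per polygon (a sketch; the vector type and function names are illustrative):

    typedef struct { float x, y, z; } V3;

    static float dot(V3 a, V3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    /* Returns 1 if the polygon is a back face (cull it). */
    int is_back_face(V3 cop, V3 point_on_poly, V3 outward_normal) {
        V3 los = { point_on_poly.x - cop.x,    /* LOS: COP to the polygon */
                   point_on_poly.y - cop.y,
                   point_on_poly.z - cop.z };
        return dot(los, outward_normal) >= 0;  /* same direction: back face */
    }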

Page 53:

Back-Face Culling: OpenGL

• OpenGL automatically computes an outward normal from the cross product of two consecutive screen-space edges and culls back-facing polygons
  – It just checks the sign of the resulting z component

Page 54:

Z-Buffer Algorithm

Page 55:

Z-Buffer Algorithm: About

• Recall the frame/refresh buffer:
  – The screen is refreshed one scan line at a time, from pixel information held in a refresh or frame buffer
• Additional buffers can be used to store other pixel information
  – E.g., double buffering for animation
    • A 2nd frame buffer to which to draw an image (which takes a while)
    • Then, when drawn, switch to this 2nd frame/refresh buffer and start drawing again in the 1st
• Also, a z-buffer, in which z-values (depths of points on a polygon) are stored for VSD

Page 56:

Z-Buffer Algorithm: Overview

• Initialize the Z-buffer to the background value
  – The furthest plane of the view volume, e.g., 255 for 8 bits
• Polygons are scan-converted in arbitrary order
  – When pixels overlap, use the Z-buffer to decide which polygon “gets” that pixel
• If a new point has a z-value less than the previous one (i.e., closer to the eye), its z-value is placed in the z-buffer and its color is placed in the frame buffer at the same (x, y)
• Otherwise, the previous z-value and frame buffer color are unchanged
  – Below, numeric z-values and colors represent the buffer contents
• Just draw every polygon
  – If we find a piece (one or more pixels) of a polygon that is closer to the front than what is already there, draw over it

Page 57:

Z-Buffer Algorithm: Example

• Polygons are scan-converted in arbitrary order
• After the 1st polygon is scan-converted, it is at depth 127
• After the 2nd polygon, at depth 63 -- in front of part of the 1st polygon

Page 58:

Z-Buffer Algorithm: Pseudocode

• Algorithm again:
  – Draw every polygon that we can’t reject trivially
  – “If we find a piece of a polygon closer to the front, paint over whatever was behind it”

    void zBuffer() {
        // Initialize frame buffer to background, z-buffer to "far"
        for (y = 0; y < YMAX; y++)
            for (x = 0; x < XMAX; x++) {
                WritePixel(x, y, BACKGROUND_VALUE);
                WriteZ(x, y, ZMAX);   // far-plane depth (a "less than" test
            }                         // below requires a large initial value)
        // Go through polygons
        for (each polygon)
            for (each pixel in polygon's projection) {
                // pz = polygon's z-value at pixel (x, y)
                if (pz < ReadZ(x, y)) {
                    // New point is closer to front of view
                    WritePixel(x, y, polygon's color at pixel (x, y));
                    WriteZ(x, y, pz);
                }
            }
    }

• The frame buffer holds the polygons’ color values; the z-buffer holds their z-values

Page 59:

FYI -- Z-Buffer Algorithm: Scan-Line Computation

• How to compute this efficiently? Incrementally
• As in polygon filling
  – As we move along the y-axis, we track the x position where each edge intersects the current scan-line
• Can do the same thing for the z coordinate, using simple “remainder” calculations with the y-z slope
• Once we have za and zb for each edge, can incrementally calculate zp as we scan across
• Do something similar when calculating color per pixel … (Gouraud shading)
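A sketch of the per-span step (names are assumed; za and zb would come from interpolating z down the polygon’s edges):

    void set_depth(int x, int y, float z);   /* hypothetical z-buffer write */

    /* Incremental z interpolation along one scan line. */
    void interpolate_span_z(int y, int xa, int xb, float za, float zb) {
        float z  = za;
        float dz = (xb > xa) ? (zb - za) / (float)(xb - xa) : 0.0f;
        for (int x = xa; x <= xb; x++) {
            set_depth(x, y, z);   /* candidate depth for the z-buffer test */
            z += dz;              /* one addition per pixel, no division   */
        }
    }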

Page 60:

Z-Buffer Pros

• Simplicity lends itself well to hardware implementations -- fast
  – Ubiquitous
• Polygons do not have to be compared in any particular order
  – No presorting in z is necessary … just throw them out! (maybe)
• Only consider one polygon at a time
  – … even though occlusion is a global problem!
  – Brute force, but fast!
• The Z-buffer can be stored with an image
  – Allows you to correctly composite multiple images (easy!) without having to merge the models (hard!)
  – Great for incremental addition to a complex scene
  – All VSD algorithms could produce a Z-buffer for this
• Easily handles polygon interpenetration
• Enables deferred shading
  – Rasterize shading parameters (e.g., surface normal) and shade only the final visible fragments

Page 61:

Z-Buffer Problems, 1

• Requires lots of memory (sort of)
  – E.g., 1280 x 1024 x 32 bits
• Requires fast memory
  – Read-modify-write in the inner loop
• Hard to simulate translucent polygons
  – We throw away the colors of polygons behind the closest one
  – Works if polygons are ordered back-to-front
    • But the extra work throws away much of the speed advantage

Page 62:

Z-Buffer Problems, 2

• Can’t do anti-aliasing
  – Requires knowing all the polygons involved in a given pixel
• Perspective foreshortening
  – Compression in the z axis caused in post-perspective space
  – Objects originally far from the camera end up having z-values that are very close to each other
• Depth information loses precision rapidly, which gives z-ordering bugs (artifacts) for distant objects
  – Co-planar polygons exhibit “z-fighting” -- offset the back polygon
  – Floating-point values won’t completely cure this problem

Page 63:

Z-Fighting, 1

• Because of limited z-buffer precision (e.g., 16, 24 bits), z-values must be rounded
  – Due to rounding errors, z-values end up in different “bins”, or equivalence classes
• Z-fighting occurs when two primitives have similar values in the z-buffer
  – Coplanar polygons (two polygons occupy the same space)
  – One is arbitrarily chosen over the other
  – Behavior is deterministic: the same camera position gives the same z-fighting pattern

[Figure: two intersecting cubes]

Page 64:

Z-Fighting, 2

• Lack of precision in the z-buffer leads to artifacts

(van Dam, 2010)

Page 65:

Aliasing

Page 66:

The Aliasing Problem

• Aliasing is caused by the finite addressability of the display
• Approximation of lines and circles with discrete points often gives a staircase appearance, or “jaggies”
• Recall the ideal line, and turning on pixels to approximate it
• Fundamental “challenge” of computer graphics:
  – Representing the analog (physical) world on a discrete (digital) device

Page 67:

Aliasing

• An ideal rasterized line should be 1 pixel wide
  – But, of course, that is not possible on a discrete display
• Color multiple pixels for each x depending on, e.g., the percentage of coverage by the ideal line
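A minimal sketch of that coverage-weighted (area-sampling) idea; the coverage fraction in [0, 1] is assumed to be computed elsewhere:

    /* Blend the line color into the background in proportion to the
       fraction of the pixel the ideal one-pixel-wide line covers. */
    float shade(float line_intensity, float bg_intensity, float coverage) {
        return coverage * line_intensity + (1.0f - coverage) * bg_intensity;
    }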

Page 68:

Aliasing / Antialiasing Examples

[Figures: aliased vs. antialiased renderings]
(C) Doug Bowman, Virginia Tech, 2002

Page 69:

Antialiasing -- Solutions

• Aliasing can be smoothed out by using higher addressability
• If addressability is fixed but intensity is variable, can use intensity to control the position of a “virtual pixel”
  – 2 adjacent pixels can be used to give the impression of a point partway between them
  – The perceived location of the point depends on the ratio of the intensities used at each
  – The impression of a pixel located halfway between two addressable points can be given by having the two adjacent pixels at half intensity
• An antialiased line has “virtual pixels”, each located at its proper address

Page 70:

Aliasing and Sampling

• Maybe

Page 71:

Aliasing and Sampling, 1

• The term “aliasing” comes from sampling theory
• Consider trying to discover the nature of a sinusoidal wave (solid line) of some frequency
• Measure at a number of different times
• Use the measurements to infer the shape/frequency of the waveform
• If we sample very densely (many times in each period of what we are measuring), we can determine frequency and amplitude

http://www.daqarta.com/dw_0haa.htm

Page 72:

Aliasing and Sampling, 2

• But if samples are taken at too low a rate, we cannot capture the attributes of the sine wave (the underlying function)
• The examples at right illustrate:
  – The actual function is the solid line
  – Squares indicate when it was sampled
  – Dotted lines are waveforms inferred from the samples
• The dotted-line waveforms are aliases of the solid line (the underlying function)
  – Though the samples creating the dotted lines could have come from true waveforms of those amplitudes and frequencies, they don’t
  – E.g., a 14.1 Hz signal sampled 14 times/sec
    • The result looks the same as if a 0.1 Hz signal were sampled 14 times per second
    • So, 0.1 Hz is said to be an “alias” of 14.1 Hz
• Nyquist sampling theorem
  – Samples of a continuous function contain all the information in the original function iff the continuous function is sampled at a frequency greater than twice the highest frequency in the function
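In symbols (a standard sampling-theory statement, added here for reference; f_s is the sampling rate, f_max the highest frequency present):

    f_s > 2\, f_{\max} \qquad \text{(Nyquist condition)}, \qquad
    f_{\text{alias}} = \lvert f - k f_s \rvert,\quad k = \mathrm{round}(f / f_s)

E.g., f = 14.1 Hz and f_s = 14 Hz give f_alias = |14.1 - 14| = 0.1 Hz, matching the example above.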

Page 73:

Aliasing and Sampling in CG

• Aliasing in computer graphics arises from sampling effects
• For cg, samples (of visual elements) are taken spatially, i.e., at different points
  – Rather than temporally, at different times
  – E.g., for a line, a pixel is set based on the relation of the line to one point of the pixel: the pixel’s center
• In effect, the line is sampled at this one point
  – How “well”, or densely, it can be sampled depends here on the spatial resolution (dpi)
  – No information is obtained about the line’s presence in other portions of the pixel
    • As in, “not looking densely enough” within the pixel region
    • So, should test more points there to see which ones the line is covering

[Figure: density of spatial sampling]

Page 74:

Antialiasing in CG

• Yet higher resolution does not eliminate the problem of insufficient spatial sampling
  – We are still “representing the analog world on a discrete device”
• Antialiasing techniques involve one form or another of “blurring” to “smooth” the image
  – E.g., jitter of the scene
• Can differentially color/shade pixels according to how they differ from the “ideal” line
  – Makes the discontinuity of a “jaggie” less noticeable
  – E.g., gray pixels near the border (in graded amounts)
  – Decreases the illumination discontinuity, to which the eye is sensitive

[Figure: antialiasing]

Page 75:

Temporal Aliasing

• What is a temporal “jaggie”?
  – When animation appears “jerky”
  – E.g., frame rate too low (< ~15 frames/sec)
• How might the problem be solved? Sample more frequently
  – Motion is continuous; a single frame is discrete
  – To increase the frame rate is to increase the temporal sampling density
• However, there is more to temporal aliasing
  – http://www.michaelbach.de/ot/mot_wagonWheel/index.html
  – Frequency (rate of spin) varies 0-120, and is sampled at ~24 fps
    • Depending on the system

Page 76:

Temporal Aliasing

• Recall the spatial alias example:
  – E.g., a 14.1 Hz signal sampled 14 times/sec
    • The result looks the same as if a 0.1 Hz signal were sampled 14 times per second
    • So, 0.1 Hz is said to be an “alias” of 14.1 Hz
• The analog for temporal sampling:
  – In the example, 24 fps is the sampling rate
  – “Stroboscopic effect” at multiples of the sampling rate
  – The wheel has 0 apparent rotational rate when sampled at the frequency of the display

Page 77:

Temporal Aliasing

• Why does the wheel appear to go backwards?
• Again, sampling rate …
  – Say, each sample falls just a bit “earlier” in the original rotation each time
    • Sampling frequency a bit less than the rotational frequency
  – “Rotates a bit less than a full revolution per sample”
• The human perceptual system combines/integrates the images and perceives motion
• In this case, resulting in a “wrong” perception
  – Except that we now know it is the “right” perception, given an understanding of the sampling rate
  – The nature of aliases
    • Here, a sequence of images in time
  – And the perceptual system is just integrating a series of images …

Page 78:

FYI -- A Second Factor, 1

• To accomplish animation (with the observer perceiving smooth motion), need to:
  – Display an element at point a
  – Display the element at point a’
  – Repeat (at a pretty fast frame rate)
• The sequence of static pictures is then perceived as a smoothly moving object
• But there is a limitation on “throughput” (information)
  – How much data can be displayed to the user per unit time?
  – Here, the amount that an object can be moved before it becomes confused with another object in the next frame
  – The correspondence problem
• Let Δ = the distance between pattern elements
  – The distance at which a subsequent display of an element is “right on top of” the next

Page 79:

FYI -- A Second Factor, 2

• Let Δ = the distance between pattern elements
  – The distance at which a subsequent display of an element is “right on top of” the next
• Δ/2 (in practice a bit less; Δ/3 empirically) is the maximum displacement/inter-frame movement for an element before the pattern is more likely to be seen as moving in the reverse direction from what was intended
• When elements are identical, the brain constructs correspondences based on object proximity in successive frames
  – Sometimes called the “wagon-wheel” effect; recall old western films
  – With Δ/3 and frame rate = 60 fps, have an upper bound of 20 messages per second
• Can increase this by, e.g., using different colors (b) and shapes (c)

Page 80:

Color and Displays

Page 81:

Display Considerations

• Color Systems

• Color cube and tristimulus theory

• Gamuts

• “XYZ” system – CIE

• Hue-lightness-saturation

Page 82:

Environment: Visible Light

• Generally, the body’s sensory system is the way it is because it had survival value
  – Led to success (survival and reproduction)
• Focus on human vision
  – But all senses share basic notions
• Humans have receptors for a (small) part of the electromagnetic spectrum
  – Receptors are sensitive to (fire when excited by) energy of 400-700 nm wavelength
  – Note: snakes “see” infrared, some insects ultraviolet
    • i.e., they have receptors that fire at those wavelengths
• Perceived color of visible light is, to some extent, a subjective experience

Page 83:

Why is CG Color Difficult?

• Many theories, measurement techniques, and standards for color
  – Yet no one theory of human color perception is universally accepted
• Color of an object depends on:
  – Color of the object’s material
  – Light illuminating the object
  – Color of the surrounding area
  – And … the human visual system, the role of which is all too infrequently considered
• As detailed last week, some objects
  – reflect light (wall, desk, paper),
  – others transmit light (cellophane, glass),
  – others emit light (hot wires)
• And again, examples of the interaction of surface color with light color:
  – 1. A surface that reflects only pure blue light, illuminated with pure red light, appears black
  – 2. Pure green light, viewed through glass that transmits only pure red, also appears black

Page 84:

Achromatic and Chromatic Light

• The distinction between “black & white (grays)” and “color” is useful
  – Highlights some distinctions
• “Grays” … achromatic light: intensity (quantity of light) only
  – Gray levels
  – Seen on black-and-white TVs or display monitors
  – Quantity of light is the only attribute affecting perception
  – Generally need 64 to 256 gray levels for continuous-tone images without contouring
• “Color” … chromatic light
  – Visual color sensations:
    • brightness/intensity
    • chromatic/color
      – hue / position in spectrum
      – saturation / vividness
• Will be examining distinctions between:
  – Physics -- how to describe light waves and intensities
  – Human perception of light and color
    • How the user (human) experiences light and color
    • Psychology, psychophysics

Page 85:

Intensity vs. Brightness

• Intensity: a term of physics describing energy, etc.
• Brightness: a term of psychology describing the perception of “light intensity”
• In the figure below, consider physical intensities 0 … 1 (foot-candles, lumens, etc.)
  – with 0.1 intervals of intensity: 0.1, 0.2, 0.3, …
  – though these are equal increases in the energy/intensity of light,
  – they are not perceived as equal increases in intensity
• The eye/human is sensitive to ratios of intensity levels, vs. absolute levels
  – e.g., the perceived difference in brightness 0.1 -> 0.11 is the same as 0.5 -> 0.55
  – i.e., a 0.01 increase in absolute intensity at the lower level is perceived as the same increase in brightness as a 0.05 increase at the higher level
• Brightness is perceived as a log function of light intensity
  – So is loudness for sound. Why?

[Figure: perceived brightness vs. log(intensity); x-axis “intensity - physical” (0.1 … 1.0), y-axis “brightness - perceived”]
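In symbols, this is the Weber-Fechner relation (a standard psychophysics statement, added for reference; k is a sensitivity constant and I_0 a threshold intensity):

    B = k \,\log\!\left(\frac{I}{I_0}\right)

Equal intensity ratios I_2 / I_1 then give equal brightness steps B_2 - B_1, matching the 0.1 -> 0.11 vs. 0.5 -> 0.55 example above.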

Page 86:

Chromatic Color: Introduction

• Hue: distinguishes among colors such as red, green, purple, and yellow
• Saturation: refers to how pure the color is -- how much white/gray is mixed with it
  – Red is highly saturated; pink is relatively unsaturated
  – Royal blue is highly saturated; sky blue is relatively unsaturated
  – Pastels are less vivid, less intense
• Lightness: embodies the achromatic notion of perceived intensity of a reflecting object
• Brightness: used instead of lightness to refer to the perceived intensity of a self-luminous object (i.e., emitting rather than reflecting light), such as a light bulb, the sun, or a CRT
• Humans can distinguish ~7 million colors
  – When samples are placed side-by-side (JNDs)
  – When differences are only in hue, the wavelength (λ) difference of JND colors is 2 nm in the central part of the visible spectrum, 10 nm at the extremes -- non-uniformity!
  – About 128 fully saturated hues are distinct
  – The eye is less discriminating for less saturated light (from 16 to 23 saturation steps for fixed hue and lightness), and less sensitive for less bright light

Page 87:

Trichromacy Theory of Color Perception

• “Sensation of light wavelengths entering the eye leads to perception of color”
  – Sensation: “firing” of photosensitive receptors
  – Perception: subjective “experience” of color
• Trichromacy theory is one account of human color perception
  – Follows naturally from human physiology
• 2 types of retinal receptors:
  – Rods: low light, monochrome
    • So overstimulated at all but low levels that they contribute little
  – Cones: high light, color
  – Not evenly distributed on the retina

[Figure: distribution of receptors across the retina, left eye shown; the cones are concentrated in the fovea, which is ringed by a dense concentration of rods. http://www.handprint.com/HP/WCL/color1.html#oppmodel; Wandell, Foundations of Vision]

Page 88:

Trichromacy Theory of Color Perception

• Cones are responsible for sensation at all but the lowest light levels
• 3 types of cones
  – Differentially sensitive to (fire in response to) different wavelengths
  – Hence, “trichromacy”
  – No accident there are 3 colors in a monitor
    • Red, green, blue
    • Printer: cyan, magenta, yellow
  – Colors perceived can be matched with 3 colors
  – Cone receptors are least sensitive to (produce least output for) blue
• Will return to the human after some computer graphics fleshing out

[Figure: cone sensitivity functions]

Page 89:

RGB Color Cube

• Again, can specify a color with 3 values
  – Will see other ways
• RGB color cube
  – Neutral gradient line
  – Edge of saturated hues

http://graphics.csail.mit.edu/classes/6.837/F01/Lecture02/
http://www.photo.net/photo/edscott/vis00020.htm

Page 90: CG Algorithms and Implementation: “From Vertices to Fragments” Angel: Chapter 6 OpenGL Programming and Reference Guides, other sources. ppt from Angel,

Color Gamut

• Gamut is the set of colors that a device can display, or a receptor can sense

• Figure at right:
  – CIE standard describing human perception
  – E.g., a color printer cannot reproduce all the colors visible on a color monitor

• From the figure at right, neither film, monitor, nor printer reproduces all the colors humans can see! (a point-in-gamut test is sketched below)
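A device gamut on the chromaticity diagram is just the triangle of its three primaries, so “is this color displayable?” reduces to a point-in-triangle test. A sketch using signed areas; the primaries are rough sRGB-like chromaticities and the test color is a near-spectral cyan, both chosen only for illustration:

```c
/* Sketch: point-in-gamut test — a chromaticity is inside the
   device triangle iff the three signed areas share one sign. */
#include <stdio.h>

typedef struct { double x, y; } Pt;

static double side(Pt a, Pt b, Pt p) {    /* 2D cross product (b-a) x (p-a) */
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

static int in_gamut(Pt r, Pt g, Pt b, Pt p) {
    double d1 = side(r, g, p), d2 = side(g, b, p), d3 = side(b, r, p);
    int has_neg = (d1 < 0) || (d2 < 0) || (d3 < 0);
    int has_pos = (d1 > 0) || (d2 > 0) || (d3 > 0);
    return !(has_neg && has_pos);          /* no sign mix => inside or on edge */
}

int main(void) {
    Pt r = {0.64, 0.33}, g = {0.30, 0.60}, b = {0.15, 0.06}; /* ~sRGB primaries */
    Pt cyan = {0.08, 0.35};  /* near the spectrum locus, outside the triangle */
    printf("in gamut? %s\n", in_gamut(r, g, b, cyan) ? "yes" : "no");
    return 0;
}
```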


Color Blindness: Cone Sensitivity Functions

• Again, cone receptors least sensitive to (least output for) blue

[Figure: relative sensitivity curves for the three types of cones, log vertical scale; cone spectral curves from Vos & Walraven, 1974]

[Figure: relative sensitivity curves for the three types of cones, the Vos & Walraven curves on a normal (linear) vertical scale]


Color Blindness

• ~10% of males, ~1% of females have some form of color vision deficiency

• Most common:
  – Lack of long-wavelength-sensitive receptors (red: protanopia) – at right
  – Lack of mid-wavelength receptors (green: deuteranopia)
    • Results in inability to distinguish red and green
    • E.g., cherries in 1st figure hard to see

• Trichromatic vs. dichromatic vision
  – See figures (a crude simulation is sketched below)

[Figure: cone response space, defined by the response of each of the three cone types. Becomes 2D with a color deficiency]
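That 3D-to-2D collapse can be caricatured in code: replace R and G by their average, so any two colors that differ only along the red-green axis become indistinguishable. This illustrates the idea only; it is not a faithful protanopia model (real simulations, e.g. Brettel et al., work in LMS cone space):

```c
/* Sketch: crude red-green confusion — collapses the R-G axis,
   reducing the 3D color space to 2D. Illustration only. */
#include <stdio.h>

typedef struct { double r, g, b; } RGB;

static RGB confuse_red_green(RGB c) {
    double avg = (c.r + c.g) / 2.0;   /* R and G become one shared channel */
    RGB out = { avg, avg, c.b };
    return out;
}

int main(void) {
    RGB cherry = {0.8, 0.1, 0.1};     /* reddish  (made-up sample) */
    RGB leaf   = {0.1, 0.8, 0.1};     /* greenish (made-up sample) */
    RGB c1 = confuse_red_green(cherry), c2 = confuse_red_green(leaf);
    printf("cherry -> (%.2f, %.2f, %.2f)\n", c1.r, c1.g, c1.b);
    printf("leaf   -> (%.2f, %.2f, %.2f)\n", c2.r, c2.g, c2.b); /* identical */
    return 0;
}
```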


Color Blindness Examples

• Normal:

• No red, green, blue:


CIE System of Color Standards

• CIE color standards
  – Commission Internationale de l’Éclairage (CIE)
  – Standard observer
  – Lights vs. surfaces
  – Often used for calibration

• Uses “abstract” primaries
  – Do not correspond to the eye’s receptors, etc.
  – Y axis is luminance

• Gamuts
  – Perceivable colors: gray cone
  – Produced by monitor: RGB axes


CIE Standard and Colorimetric Properties

• Chromaticity coordinates:
  – x, y coordinates
  – Correspond to wavelength

1. If 2 colored lights are represented by 2 points, the color of their mixture lies on the line between the points (illustrated in the sketch below)

2. Any set of 3 lights specifies a triangle, and within it is the realizable gamut

3. Spectrum locus
  – Chromaticity coordinates of pure monochromatic (single-wavelength) lights
  – E.g., ~0.1, ~0.1 ≈ 480 nm, “blue”

4. Purple boundary
  – Line connecting chromaticity coordinates of longest visible red (~700 nm) and blue (~400 nm)
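Properties 1–2 follow directly from how chromaticity coordinates are formed: x = X/(X+Y+Z), y = Y/(X+Y+Z). A sketch with two arbitrary example lights, showing that their additive mixture’s chromaticity lands on the segment between them:

```c
/* Sketch: chromaticity coordinates and the mixture-on-a-line property. */
#include <stdio.h>

typedef struct { double X, Y, Z; } XYZ;

static void chroma(XYZ c, double *x, double *y) {
    double s = c.X + c.Y + c.Z;
    *x = c.X / s;  *y = c.Y / s;
}

int main(void) {
    XYZ a = {3.0, 1.0, 1.0}, b = {1.0, 1.0, 4.0};  /* arbitrary example lights */
    XYZ mix = {a.X + b.X, a.Y + b.Y, a.Z + b.Z};   /* additive mixture */
    double xa, ya, xb, yb, xm, ym;
    chroma(a, &xa, &ya); chroma(b, &xb, &yb); chroma(mix, &xm, &ym);
    printf("a: (%.3f, %.3f)  b: (%.3f, %.3f)  mix: (%.3f, %.3f)\n",
           xa, ya, xb, yb, xm, ym);
    /* mix falls on the segment a-b, weighted by each light's X+Y+Z */
    return 0;
}
```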


CIE Standard and Colorimetric Properties

5. White light
  – Has equal mixture of all wavelengths
  – ~0.33, ~0.33
  – Incandescent tungsten source: ~0.45, ~0.47
    • More yellow than daylight

6. Excitation purity
  – Distance along a line between a pure spectral wavelength and the white point
  – (distance from point to white point) / (distance from white point to spectrum locus)
  – Vividness or saturation of color (see the sketch below)

7. Complementary wavelength of a color
  – Draw line from that color to white and extrapolate to the opposite spectrum locus
  – Adding a color to its complement produces white
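Excitation purity (property 6) is a simple distance ratio once the three points are known. In the sketch below, the white point is the ~0.33 equal-energy white from the slide; the sample color and its spectrum-locus intersection are assumed values chosen to be collinear:

```c
/* Sketch: excitation purity = |color - white| / |spectral - white|,
   measured along the line from white through the color to the locus. */
#include <stdio.h>
#include <math.h>

typedef struct { double x, y; } Chroma;

static double dist(Chroma a, Chroma b) {
    return hypot(a.x - b.x, a.y - b.y);
}

int main(void) {
    Chroma white    = {0.33, 0.33};  /* equal-energy white (from slide)    */
    Chroma color    = {0.45, 0.41};  /* an unsaturated orange (made up)    */
    Chroma spectral = {0.57, 0.49};  /* where the white->color line meets
                                        the spectrum locus (assumed)       */
    double purity = dist(color, white) / dist(spectral, white);
    printf("excitation purity: %.2f\n", purity);  /* 0.50; 1.0 = fully saturated */
    return 0;
}
```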


Gamuts for Color Cube and CIE

• Again, monitor gamut lies within gamut of human perception
  – In figure below, the color cube’s gamut shown within the CIE diagram
  – In figure at right, other device gamuts

http://graphics.csail.mit.edu/classes/6.837/F01/Lecture02/


HSV: Hue-Saturation-Value

• HSV: Hue-Saturation-Value

• Simple transformation (see the sketch below):
  – From hue, saturation, value
  – To red, green, blue

• Hue – color

• Saturation – vividness

• Value – Black -> White
  – Luminance separation makes sense

• Hue and saturation on 2 axes
  – Not perceptually equal
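The “simple transformation” from HSV to RGB is the standard sector-based conversion; a sketch (h in degrees [0, 360), s and v in [0, 1]):

```c
/* Sketch: HSV -> RGB via the standard six hue sectors. */
#include <stdio.h>
#include <math.h>

void hsv_to_rgb(double h, double s, double v,
                double *r, double *g, double *b) {
    double c  = v * s;                             /* chroma                  */
    double hp = h / 60.0;                          /* hue sector, 0..6        */
    double x  = c * (1.0 - fabs(fmod(hp, 2.0) - 1.0));
    double m  = v - c;                             /* lift to match value     */
    double rp = 0, gp = 0, bp = 0;
    if      (hp < 1) { rp = c; gp = x; }
    else if (hp < 2) { rp = x; gp = c; }
    else if (hp < 3) { gp = c; bp = x; }
    else if (hp < 4) { gp = x; bp = c; }
    else if (hp < 5) { rp = x; bp = c; }
    else             { rp = c; bp = x; }
    *r = rp + m; *g = gp + m; *b = bp + m;
}

int main(void) {
    double r, g, b;
    hsv_to_rgb(120.0, 1.0, 1.0, &r, &g, &b);       /* pure green              */
    printf("(%.1f, %.1f, %.1f)\n", r, g, b);       /* -> (0.0, 1.0, 0.0)      */
    return 0;
}
```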


FYI - Afterimage

• Occurs due to bleaching of photopigments
  – (big demo next)

• Implications for misperceiving (especially contiguous colors – and black and white)
  – “I thought I saw …”

• To illustrate:
  – Stare at + sign on left
    • May see colors around circle
  – Move gaze to right
  – See yellow and desaturated red


Afterimage Example

[Image-only slide: the afterimage demonstration]

Moving Dots

• Another illusion …

• Follow the moving dot - see pink dot moving in circle

• Look at the center cross and the dots appear green

• Concentrate on center cross and pink dots slowly disappear and only a green dot will be seen rotating

• Follow the “space” and all seems to be rotating (maybe)

• “Blink” (redirect eyes) at any time to stop

[Image-only slides: the moving-dots illusion animation]

Afterimage + attentional shift + …

• “moving dots”

• Particularly compelling
  – Again, an illusion is an extreme case – somewhat “surprising” because it leads to error

• Explained by afterimage, attentional shift, lateral inhibition (last time), …

• For computer graphics, illusions illustrate:
  – “Things are not always as they appear to be”
    • What is perceived by the user is not necessarily what is displayed
  – Human visual system is complex, but not unknowable
    • Should at least “be sensitive to” challenges and be able to explain
  – Both sensory and perceptual mechanisms are at work
    • Sensory – here, afterimage: just bleached pigments
    • Perceptual – here, shifting attention and inhibition


End
