TRANSCRIPT
Week 2 - Friday
Source: faculty.otterbein.edu/wittman1/comp4290/slides/comp4290 - week 2 - 2.pdf
What did we talk about last time? The graphics rendering pipeline and its Geometry Stage.
We're going to start by drawing a 3D model. Eventually, we'll go back and create our own primitives.
Like other MonoGame content, the easiest way to manage a model is to add it to your Content folder and load it through the Content Pipeline. MonoGame can load (some) .fbx, .x, and .obj files. Note that getting just the right kind of files (with textures or not) is sometimes challenging.
First, we declare a member variable to hold the model:

Model model;

Then we load the model in the LoadContent() method:

model = Content.Load<Model>("Ship");
To draw anything in 3D, we need a world matrix, a view matrix, and a projection matrix. Since you'll need these repeatedly, you could store them as members.

Matrix world = Matrix.CreateTranslation(new Vector3(0, 0, 0));
Matrix view = Matrix.CreateLookAt(new Vector3(0, 0, 7), new Vector3(0, 0, 0), Vector3.UnitY);
Matrix projection = Matrix.CreatePerspectiveFieldOfView(0.9f, (float)GraphicsDevice.Viewport.Width / GraphicsDevice.Viewport.Height, 0.1f, 100.0f);
The world matrix controls how the model is translated, scaled, and rotated with respect to the global coordinate system
This code makes a matrix that moves the model 0 units in x, 0 units in y, and 0 units in z. In other words, it does nothing.
Matrix world = Matrix.CreateTranslation(new Vector3(0, 0, 0));
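To see what a translation matrix actually does, here is a plain-Python sketch (not MonoGame code) of the idea behind CreateTranslation: a 4x4 matrix whose last row adds an offset to a homogeneous point. The function names here are made up for illustration.

```python
# Sketch of the translation matrix CreateTranslation builds, using the
# row-vector convention (translation components in the last row).

def create_translation(tx, ty, tz):
    return [
        [1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 1, 0],
        [tx, ty, tz, 1],
    ]

def transform(point, m):
    """Multiply the homogeneous row vector (x, y, z, 1) by a 4x4 matrix."""
    x, y, z = point
    v = (x, y, z, 1)
    return tuple(sum(v[i] * m[i][j] for i in range(4)) for j in range(3))

world = create_translation(0, 0, 0)   # the identity translation from the slide
print(transform((1, 2, 3), world))    # -> (1, 2, 3): the point is unchanged

world = create_translation(5, 0, 0)
print(transform((1, 2, 3), world))    # -> (6, 2, 3): moved 5 units in x
```

With all-zero offsets the matrix is the identity, which is why the slide's world matrix "does nothing."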
The view matrix sets up the orientation of the camera. The easiest way to do so is to give: the camera location, what the camera is pointed at, and which way is up.
This camera is at (0, 0, 7), looking at the origin, with positive y as up
Matrix view = Matrix.CreateLookAt(new Vector3(0, 0, 7), new Vector3(0, 0, 0), Vector3.UnitY);
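Under the hood, a look-at function turns those three vectors into an orthonormal camera basis. This is a hedged plain-Python sketch of that computation, not MonoGame's implementation; the helper names are invented for the example.

```python
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

def look_at_basis(eye, target, up):
    forward = normalize(sub(target, eye))   # where the camera points
    right = normalize(cross(forward, up))   # perpendicular to forward and up
    true_up = cross(right, forward)         # re-derived so the basis is orthonormal
    return forward, right, true_up

# Camera at (0, 0, 7) looking at the origin with +y up, as on the slide:
f, r, u = look_at_basis((0, 0, 7), (0, 0, 0), (0, 1, 0))
print(f)  # -> (0.0, 0.0, -1.0): the camera looks down the negative z axis
```

Note that the "up" you pass in only needs to be roughly up; the true up vector is recomputed from the cross products.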
The projection matrix determines how the scene is projected into 2D
It can be specified with: the field of view in radians, the aspect ratio of the screen (width / height), the near plane location, and the far plane location.
Matrix projection = Matrix.CreatePerspectiveFieldOfView(0.9f, (float)GraphicsDevice.Viewport.Width / GraphicsDevice.Viewport.Height, 0.1f, 100.0f);
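To make those four arguments concrete, here is a tiny Python check of what they work out to, assuming an 800x480 back buffer (a common MonoGame default; your viewport may differ).

```python
import math

fov = 0.9              # field of view in radians
aspect = 800 / 480     # viewport width / height
near, far = 0.1, 100.0 # clipping plane distances along the view direction

print(round(math.degrees(fov), 1))  # -> 51.6 degrees of vertical field of view
print(round(aspect, 3))             # -> 1.667
```

Anything closer than the near plane or farther than the far plane is clipped away, so keep the model between those distances.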
Drawing the model is done by drawing all the individual meshes that make it up. Each mesh has a series of effects. Effects are used for texture mapping, visual appearance, and other things. They need to know the world, view, and projection matrices.
foreach (ModelMesh mesh in model.Meshes)
{
    foreach (BasicEffect effect in mesh.Effects)
    {
        effect.World = world;
        effect.View = view;
        effect.Projection = projection;
    }
    mesh.Draw();
}
I did not properly describe an important optimization done in the Geometry Stage: backface culling
Backface culling removes all polygons that are not facing toward the screen. A simple dot product is all that is needed. This step is done in hardware in MonoGame and OpenGL; you just have to turn it on. Beware: if you screw up your normals, polygons could vanish.
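The dot-product test can be sketched in a few lines of Python (the GPU does this in hardware; this just shows the idea, with invented function names).

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def is_backface(normal, view_dir):
    """True when the face normal points away from the viewer.

    view_dir is the direction from the camera toward the polygon; a face
    is visible when its normal points back toward the camera, which makes
    the dot product negative.
    """
    return dot(normal, view_dir) >= 0

view = (0, 0, -1)                     # camera looking down -z
print(is_backface((0, 0, 1), view))   # -> False: faces the camera, keep it
print(is_backface((0, 0, -1), view))  # -> True: faces away, cull it
```

This is also why flipped normals make polygons vanish: a visible face with a backwards normal fails the test and gets culled.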
For API design, practical top-down problem solving, hardware design, and efficiency, rendering is described as a pipeline.
This pipeline contains three conceptual stages:
Application - produces material to be rendered
Geometry - decides what, how, and where to render
Rasterizer - renders the final image
The goal of the Rasterizer Stage is to take all the transformed geometric data and set colors for all the pixels in screen space. Doing so is called rasterization or scan conversion. Note that the word pixel is actually a portmanteau of "picture element."
As you should expect, the Rasterizer Stage is also divided into a pipeline of several functional stages:
Triangle Setup
Triangle Traversal
Pixel Shading
Merging
In triangle setup, data for each triangle is computed. This could include normals. This is boring anyway because fixed-operation (non-customizable) hardware does all the work.
In triangle traversal, each pixel whose center is overlapped by a triangle must have a fragment generated for the part of the triangle that overlaps the pixel. The properties of this fragment are created by interpolating data from the vertices. Again, boring, fixed-operation hardware does this.
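That interpolation is typically done with barycentric weights. Here is a hedged plain-Python sketch (not what the hardware literally runs) of interpolating a vertex attribute across a 2D triangle:

```python
def barycentric_weights(p, a, b, c):
    """Weights (wa, wb, wc) of point p with respect to triangle abc in 2D."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    area = (bx - ax) * (cy - ay) - (cx - ax) * (by - ay)   # 2 * signed area
    wa = ((bx - px) * (cy - py) - (cx - px) * (by - py)) / area
    wb = ((cx - px) * (ay - py) - (ax - px) * (cy - py)) / area
    return wa, wb, 1 - wa - wb

def interpolate(weights, va, vb, vc):
    """Blend one per-vertex value (a color channel, a normal component, ...)."""
    wa, wb, wc = weights
    return wa * va + wb * vb + wc * vc

# A value of 0, 1, and 0.5 at the three corners, sampled at the centroid,
# where each weight is 1/3:
w = barycentric_weights((1, 1), (0, 0), (3, 0), (0, 3))
print(interpolate(w, 0.0, 1.0, 0.5))  # -> 0.5
```

The same weights blend every vertex attribute (colors, normals, texture coordinates) for each fragment.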
Pixel shading is where the magic happens. Given the data from the other stages, per-pixel shading (coloring) happens here. This stage is programmable, allowing for many different shading effects to be applied. Perhaps the most important effect is texturing or texture mapping.
Texturing is gluing a (usually) 2D image onto a polygon. To do so, we map texture coordinates onto polygon coordinates. Pixels in a texture are called texels. This is fully supported in hardware. Multiple textures can be applied in some cases.
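A minimal Python sketch of the mapping, assuming texture coordinates (u, v) in [0, 1] and nearest-texel sampling (real hardware also filters between texels):

```python
# A tiny 2x2 "texture"; texture[row][col] holds one texel.
texture = [["red",  "green"],
           ["blue", "white"]]

def sample_nearest(tex, u, v):
    """Look up the texel nearest to texture coordinates (u, v)."""
    h, w = len(tex), len(tex[0])
    col = min(int(u * w), w - 1)   # clamp so u == 1.0 stays in range
    row = min(int(v * h), h - 1)
    return tex[row][col]

print(sample_nearest(texture, 0.0, 0.0))  # -> 'red'
print(sample_nearest(texture, 0.9, 0.9))  # -> 'white'
```

Each fragment's interpolated (u, v) pair picks out its texel this way, which is how an image gets "glued" across a whole triangle.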
The final screen data containing the colors for each pixel is stored in the color buffer
The merging stage is responsible for merging the colors from each of the fragments from the pixel shading stage into a final color for a pixel
Deeply linked with merging is visibility: The final color of the pixel should be the one corresponding to a visible polygon (and not one behind it)
To deal with the question of visibility, most modern systems use a Z-buffer or depth buffer
The Z-buffer keeps track of the z-values for each pixel on the screen
As a fragment is rendered, its color is put into the color buffer only if its z value is closer than the current value in the z-buffer (which is then updated)
This is called a depth test
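The depth test can be sketched in a few lines of Python (a conceptual sketch with invented names, not what the hardware runs):

```python
def depth_test(color_buffer, z_buffer, pixel, frag_color, frag_z):
    """Accept the fragment only if it is closer than the stored depth."""
    if frag_z < z_buffer[pixel]:       # closer than the current occupant?
        z_buffer[pixel] = frag_z       # record the new nearest depth
        color_buffer[pixel] = frag_color

colors = {(0, 0): "background"}
depths = {(0, 0): float("inf")}        # start infinitely far away

depth_test(colors, depths, (0, 0), "far polygon", 10.0)
depth_test(colors, depths, (0, 0), "near polygon", 2.0)
depth_test(colors, depths, (0, 0), "hidden polygon", 5.0)  # fails the test
print(colors[(0, 0)])  # -> 'near polygon'
```

Notice that the nearest fragment wins no matter what order the three polygons arrive in, which is exactly the order-independence advantage listed below.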
Pros:
Polygons can usually be rendered in any order
Universal hardware support is available

Cons:
Partially transparent objects must be rendered in back-to-front order (painter's algorithm)
Completely transparent values can mess up the z-buffer unless they are checked
Z-fighting can occur when two polygons have the same (or nearly the same) z values
A stencil buffer can be used to record a rendered polygon. This stores the part of the screen covered by the polygon and can be used for special effects. Frame buffer is a general term for the set of all buffers. Different images can be rendered to an accumulation buffer and then averaged together to achieve special effects like blurring or antialiasing. A back buffer allows us to render off screen to avoid popping and tearing.
This pipeline is focused on interactive graphics. Micropolygon pipelines are usually used for film production. Predictive rendering applications usually use ray tracing renderers.
The old model was the fixed-function pipeline, which gave little control over the application of shading functions.
The book focuses on programmable GPUs which allow all kinds of tricks to be done in hardware
Next time: GPU architecture and programmable shading.
Read Chapter 3
Start on Assignment 1, due next Friday, September 13 by 11:59
Keep working on Project 1, due Friday, September 27 by 11:59
Amazon Alexa Developer meetup, Thursday, September 12 at 6 p.m., here at The Point: hear about new technology, and there might be pizza…
Want a Williams-Sonoma internship? Visit http://wsisupplychain.weebly.com/
Interested in coaching 7-18 year old kids in programming? Consider working at theCoderSchool. For more information:
▪ Visit https://www.thecoderschool.com/locations/westerville/
▪ Contact Kevin Choo at [email protected]
▪ Ask me!