TRANSCRIPT
CITS3003 Graphics & Animation
Introduction and Admin Matters
Admin
• Unit coordinator & Lecturer
• Dr. Naveed Akhtar
• Consultation Time: 4:00-5:00pm Fridays (Room 1.12, First Floor, Computer Science building)
• Lab facilitators
• Asim Jalwana
• Muhammad Ibrahim
Admin
• Recorded lectures will be on LMS
• Everything else is on the unit website http://teaching.csse.uwa.edu.au/units/CITS3003/index.php
• 2019 Lecture slides are available for those who wish to look ahead
• Slides will be updated on a weekly basis and marked with 2020
• Use cshelp (help3003) for discussions
Assessment
• The assessment will consist of:
• 10%: Mid-semester test during lecture time (week 06, Tuesday 31 March)
• 40%: Programming project (due in week 12)
• 50%: Final exam
• Mid-semester test marks will be released on csmarks
• Project will need to be uploaded on cssubmit
Breakdown of Lectures
1. Introduction & Image Formation
2. Programming with OpenGL
3. OpenGL: Pipeline Architecture
4. OpenGL: An Example Program
5. Vertex and Fragment Shaders 1
6. Vertex and Fragment Shaders 2
7. Representation and Coordinate Systems
8. Coordinate Frame Transformations
9. Transformations and Homogeneous Coordinates
10. Input, Interaction and Callbacks
11. More on Callbacks
12. Mid-semester Test
13. 3D Hidden Surface Removal
14. Mid-semester test solution and project discussion
Study Break
15. Computer Viewing
16. Shading
17. Shading Models
18. Shading in OpenGL
19. Texture Mapping
20. Texture Mapping in OpenGL
21. Hierarchical Modelling
22. 3D Modelling: Subdivision Surfaces
23. Animation Fundamentals and Quaternions
24. Skinning
Project & Labs
• There will be labs in weeks 2-6. Lab sheets are already online along with solutions.
• Labs are not assessed, but it is important to complete them in order to complete the project.
• Project will be released in week 06 (tentative), but discussed in week 07.
Mini Lectures
• Mini lectures from last year are available on unit webpage
• The covered technical material will be very similar
• They provide a good, quick refresher
• Credit: Prof. Ajmal Mian
CITS3003 Graphics & Animation
Lecture 1: Introduction to Image Formation
Objectives
• Fundamental imaging notions
• Physical basis for image formation
o Light
o Color
o Perception
• Synthetic camera model
• Ray tracing
• Introduction to OpenGL
Image Formation
• In computer graphics, we form images, which are generally two-dimensional, using a process analogous to how images are formed by physical imaging systems:
o Cameras
o Microscopes
o Telescopes
o Human visual system
Elements of Image Formation
1. Objects
2. Viewer
3. Light source(s)
• Attributes that govern how light interacts with the materials in the scene
• Note the independence of the objects, the viewer, and the light source(s)
Elements of Image Formation (cont.)
• Advantages:
• Separation of objects, viewer, light sources (can model them separately).
• Two-dimensional graphics becomes a special case of three-dimensional graphics
• Leads to simple software API
• Can specify objects, lights, camera, attributes separately
• Let implementation determine image
• Leads to fast hardware implementation
Objects
• Objects in space are independent of any image formation process and of any viewer
• A set of locations (vertices) in space is sufficient to define or approximate most objects
Viewer
• To form an image, we must have someone or something that is viewing our objects, be it a human, a camera, or a digitizer. It is the viewer that forms the image of our objects.
Light
• Light is the part of the electromagnetic spectrum that causes a reaction in our visual system
• Generally these are wavelengths in the range of about 350-750 nm (nanometers)
• Long wavelengths appear as reds and short wavelengths as blues
Light (cont.)
• Light sources can emit light either as a set of discrete frequencies or continuously.
• Geometric optics models light sources as emitters of light energy, each of which has a fixed intensity.
• Light travels in straight lines, from the sources to those objects with which it interacts.
• An ideal point source emits energy from a single location at one or more frequencies equally in all directions.
• A particular source is characterized by:
– the intensity of light that it emits at each frequency
– that light's directionality
Ray Tracing and Geometric Optics
One way to form an image is to follow rays of light from a source, finding which rays enter the camera lens.
However, rays of light may have multiple interactions with objects, get absorbed, or go to infinity.
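The ray-following idea can be sketched with a ray-sphere intersection test, the basic building block of a ray tracer. This is an illustrative Python sketch, not course code; the function name and the quadratic-root approach are assumptions for the example:

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance t along the ray to the nearest sphere hit,
    or None if the ray misses (or the hit is behind the origin)."""
    # Solve |origin + t*direction - center|^2 = radius^2 for t,
    # a quadratic a*t^2 + b*t + c = 0 in t.
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None                        # ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2.0 * a) # nearer of the two roots
    return t if t > 0 else None

# A ray down the z-axis hits a unit sphere centred 5 units away at t = 4.
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # -> 4.0
```

A full ray tracer repeats this test against every object in the scene and recurses on reflected and refracted rays, which is exactly why it is slow for interactive use.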
Luminance and Colour Images
• Luminance Image
o Monochromatic
o Values are gray levels
o Analogous to working with black and white film or television
• Colour Image
o Has perceptual attributes of hue, saturation, and lightness
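The reduction from a colour image to a luminance image can be sketched as a weighted sum of the RGB components. The weights below are the ITU-R BT.601 luma convention, one common choice among several; the function name is illustrative:

```python
def luminance(r, g, b):
    """Grey level from RGB using the ITU-R BT.601 luma weights
    (one common convention; others, such as BT.709, differ slightly)."""
    return 0.299 * r + 0.587 * g + 0.114 * b

print(luminance(1.0, 1.0, 1.0))  # white -> close to 1.0
print(luminance(0.0, 1.0, 0.0))  # pure green -> 0.587
```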
IMAGING SYSTEMS
Pinhole Camera
• Use trigonometry (similar triangles) to find the projection of a point at (x, y, z)
• x_p/z_p = x/z,  y_p/z_p = y/z,  z_p = −d, hence x_p = −d·x/z and y_p = −d·y/z
• These are the equations of simple perspective
• The point (x_p, y_p, −d) is called the projection of the point (x, y, z).
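These equations of simple perspective can be checked numerically. A minimal Python sketch, with the film plane at z = −d and the pinhole at the origin (the function name `project` is illustrative):

```python
def project(x, y, z, d):
    """Simple perspective: project (x, y, z) through a pinhole at the
    origin onto the film plane z = -d (assumes z != 0)."""
    return (-d * x / z, -d * y / z, -d)

# A point twice as far away projects to coordinates half the size.
print(project(2.0, 4.0, 8.0, 2.0))  # -> (-0.5, -1.0, -2.0)
```

Note the division by z: more distant points produce smaller images, which is exactly the foreshortening effect of perspective.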
Pinhole Camera (cont.)
• The field, or angle, of view of our camera is the angle made by the largest object that our camera can image on its film plane.
• The ideal pinhole camera has an infinite depth of field
• The pinhole camera has two disadvantages:
– It admits only a single ray from a point source, so almost no light enters the camera.
– The camera cannot be adjusted to have a different angle of view.
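For a film plane of height h at distance d from the pinhole, the half-angle of view satisfies tan(θ/2) = (h/2)/d, so θ = 2·atan(h/(2d)). A small Python sketch (names are illustrative):

```python
import math

def angle_of_view(h, d):
    """Angle of view (in degrees) of a pinhole camera with a film plane
    of height h at distance d behind the pinhole: theta = 2*atan(h/(2d))."""
    return math.degrees(2.0 * math.atan(h / (2.0 * d)))

print(angle_of_view(2.0, 1.0))  # h/2 = d gives roughly a 90-degree view
```

Because h and d are fixed for a real pinhole camera, θ cannot be adjusted, which is the second disadvantage above.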
The Three-Color Theory
• The human visual system has two types of sensors
• Rods: monochromatic, night vision
• Cones
o Color sensitive
o Three types of cones
o Only three values (the tristimulus values) are sent to the brain
• That is, we need only match these three values
– Need only three primary colors
Additive and Subtractive Color
• Additive coloro Form a color by adding amounts of
three primaries• CRTs, projection systems, positive film
o Primaries are Red (R), Green (G), Blue (B)
• Subtractive coloro Form a color by filtering white light
with cyan (C), Magenta (M), and Yellow (Y) filters• Light-material interactions• Printing• Negative film
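The subtractive primaries are the complements of the additive ones, so the two representations convert by subtraction from white. A Python sketch (illustrative; assumes colour components in the range [0, 1]):

```python
def rgb_to_cmy(r, g, b):
    """Convert additive RGB to subtractive CMY:
    C = 1 - R, M = 1 - G, Y = 1 - B (components in [0, 1])."""
    return (1.0 - r, 1.0 - g, 1.0 - b)

# Pure red needs no cyan filter, but full magenta and yellow filters.
print(rgb_to_cmy(1.0, 0.0, 0.0))  # -> (0.0, 1.0, 1.0)
```

Applying the same subtraction again recovers the RGB values, since the conversion is its own inverse.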
Synthetic Camera Model
• OpenGL uses the synthetic pinhole camera model
• Since the image of the object is flipped relative to the object on the back of the camera, we draw another plane in front of the lens.
• With this synthetic camera model, the object is the right way up.
[Figure: the synthetic camera model. A projector through a point p meets the image plane; on the plane behind the center of projection the image of p is upside down, while on the plane in front it is the right way up.]
Lecture 1 – OpenGL: Introduction
Introduction to OpenGL
• Most of the concepts related to OpenGL covered in week 01 are for introductory purposes.
• Many of these concepts will be repeated in more detail in the weeks to follow.
Why use OpenGL and Not Ray Tracing?
• Ray tracing seems more physically based so why don’t we use it to design a graphics system?
• Possible and is actually simple for simple objects such as polygons and quadrics with simple point sources
• In principle, can produce global lighting effects such as shadows and multiple reflections but ray tracing is slow and not well-suited for interactive applications
• Having said that, practical ray tracing with GPUs is getting close to real time
What is OpenGL?
OpenGL is a platform-independent Application Programmers’ Interface (API) that
• Is close enough to the hardware to get excellent performance
• Focuses on rendering
OpenGL Architecture
OpenGL is a platform-independent Application Programmers’ Interface (API) that
• Is easy to use
• Provides a link between the low-level graphics hardware and the high-level application program that you write
Versions of OpenGL
• Latest versions are completely shader-based:
o No default shaders
o Each application must provide both a vertex and a fragment shader, i.e. you must additionally write vertex and fragment shader programs
• OpenGL ES
o Is suitable for embedded systems
o Version 1.0 is a simplified version of OpenGL 2.1
o Version 2.0 is a simplified version of OpenGL 3.1
• OpenGL 4.1 and 4.2
o Add geometry shaders and the tessellator
o Version 4.6 (released on 30/07/2017) is still the latest version
Versions of OpenGL (cont.)
• WebGL
o Is a derivative of OpenGL ES version 2.0
o Provides JavaScript bindings for OpenGL functions, allowing an HTML page to render images using any GPU resources available on the computer where the web browser is running
o WebGL is not included in the curriculum
OpenGL Libraries
• OpenGL core library
o OpenGL32 on Windows
o GL on most unix/linux systems (libGL.a)
• OpenGL Utility Library (GLU)
o Provides functionality in OpenGL core but avoids having to rewrite code
o Will only work with legacy code
• Links with window system
o GLX for X window systems
o WGL for Windows
o AGL for Macintosh
GLUT
• OpenGL Utility Toolkit (GLUT)
o Provides functionality common to all window systems
• Open a window
• Get input from mouse and keyboard
• Menu items
• Event-driven
o Code is portable but GLUT lacks the functionality of a good toolkit for a specific platform
• No slider bars
freeglut
• GLUT was created long ago and has remained unchanged
o Amazing that it still works
o Some functionality can’t work since it requires deprecated functions
• freeglut updates GLUT
o Added capabilities
o Context checking
GLEW
• GLEW is the OpenGL Extension Wrangler Library
• GLEW makes it easy to access OpenGL extensions available on a particular system
• An application only needs to include glew.h and call glewInit()
Which function is in which library?
• You don’t need to memorize the functionalities of different OpenGL libraries
• Instead, you decide on your objects, lights and camera, then work out which OpenGL functions are required.
• Include libraries that contain your functions.
• For practical issues, the OpenGL documentation will help.
Further Reading
• “Interactive Computer Graphics – A Top-Down Approach with Shader-Based OpenGL” by Edward Angel and Dave Shreiner, 6th Ed., 2012
• Sec. 1.2 A graphics system
• Sec. 1.3 Images: Physical and Synthetic
• Sec. 1.4 Imaging Systems
• Sec. 1.5 The Synthetic Camera Model
• Sec. 1.6 The Programmer’s Interface