cse-vi-computer graphics and visualization [10cs65]-solution

Upload: shinu-gopal-m

Post on 02-Jun-2018


  • 8/11/2019 Cse-Vi-computer Graphics and Visualization [10cs65]-Solution

    1/77

    Computer Graphics And Visualization 10CS65

    Dept of CSE, SJBIT Page 1

    VTU QUESTION BANK SOLUTION

    UNIT-1

    INTRODUCTION

    1. Explain the concept of pinhole cameras which is an example of an imaging

    system.(10 marks)(Dec/Jan 12)(Jun/Jul 13)

The geometry related to the mapping of a pinhole camera is illustrated in the figure. The figure contains the following basic objects:

    A 3D orthogonal coordinate system with its origin at O. This is also where the camera aperture is

    located. The three axes of the coordinate system are referred to as X1, X2, X3. Axis X3 is pointing in

    the viewing direction of the camera and is referred to as the optical axis, principal axis, or principal

    ray. The 3D plane which intersects with axes X1 and X2 is the front side of the camera, or principal

    plane.

    An image plane where the 3D world is projected through the aperture of the camera. The image plane

    is parallel to axes X1 and X2 and is located at distance f from the origin O in the negative direction of

    the X3 axis. A practical implementation of a pinhole camera implies that the image plane is located

such that it intersects the X3 axis at coordinate -f where f > 0. f is also referred to as the focal length of the pinhole camera.

    A point R at the intersection of the optical axis and the image plane. This point is referred to as the

    principal point or image center.

    A point P somewhere in the world at coordinate (x_1, x_2, x_3) relative to the axes X1,X2,X3.

    The projection line of point P into the camera. This is the green line which passes through point P and

    the point O.

    The projection of point P onto the image plane, denoted Q. This point is given by the intersection of

    the projection line (green) and the image plane. In any practical situation we can assume that x_3 > 0

    which means that the intersection point is well defined.

    There is also a 2D coordinate system in the image plane, with origin at R and with axes Y1 and Y2


which are parallel to X1 and X2, respectively. The coordinates of point Q relative to this coordinate system are (y_1, y_2).

    The pinhole aperture of the camera, through which all projection lines must pass, is assumed to be

    infinitely small, a point. In the literature this point in 3D space is referred to as the optical (or lens or

    camera) center.

    Next we want to understand how the coordinates (y_1, y_2) of point Q depend on the coordinates

    (x_1, x_2, x_3) of point P. This can be done with the help of the following figure which shows the

    same scene as the previous figure but now from above, looking down in the negative direction of the

    X2 axis.

    2. Derive the expression for angle of view. Also indicate the advantages and

    disadvantages. (10 marks) (Dec/Jan 12)

In this figure we see two similar triangles, both having parts of the projection line (green) as their hypotenuses. The catheti of the left triangle are -y_1 and f and the catheti of the right triangle are x_1 and x_3. Since the two triangles are similar it follows that

    \frac{-y_1}{f} = \frac{x_1}{x_3} or y_1 = -\frac{f \, x_1}{x_3}

    A similar investigation, looking in the negative direction of the X1 axis gives

    \frac{-y_2}{f} = \frac{x_2}{x_3} or y_2 = -\frac{f \, x_2}{x_3}

    This can be summarized as

    \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = -\frac{f}{x_3} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}

    which is an expression that describes the relation between the 3D coordinates (x_1,x_2,x_3) of point

    P and its image coordinates (y_1,y_2) given by point Q in the image plane

    3. With an aid of a functional schematic, describe the graphics pipeline with major steps in the

    imaging process (10 marks)(Jun/Jul 12) (Dec/Jan 13) (Jun/Jul 13)

    Pipeline Architecture


E.g. an arithmetic pipeline.

Terminologies:
Latency: time taken from the first stage until the end result is produced.
Throughput: number of outputs per given time.

    Graphics Pipeline :

Objects are processed one at a time in the order they are generated by the application. All steps can be implemented in hardware on the graphics card.

Vertex Processor: Much of the work in the pipeline is in converting object representations from one coordinate system to another: object coordinates, camera (eye) coordinates, screen coordinates. Every change of coordinates is equivalent to a matrix transformation. The vertex processor also computes vertex colors.

Primitive Assembly: Vertices must be collected into geometric objects (line segments, polygons, curves and surfaces) before clipping and rasterization can take place.

Clipping: Just as a real camera cannot see the whole world, the virtual camera can only see part of the world or object space. Objects that are not within this volume are said to be clipped out of the scene.

Rasterization: If an object is not clipped out, the appropriate pixels in the frame buffer must be assigned colors. The rasterizer produces a set of fragments for each object. Fragments are potential pixels: they have a location in the frame buffer and color and depth attributes.


    Vertex attributes are interpolated over objects by the rasterizer

Fragment Processor: Fragments are processed to determine the color of the corresponding pixel in the frame buffer. Colors can be determined by texture mapping or interpolation of vertex colors. Fragments may be blocked by other fragments closer to the camera.

Hidden-surface removal.

4. Describe the working of an output device with an example. (5 marks) (Jun/Jul 12)

    Graphics

    Graphical output displayed on a screen.

A digital image is a numeric representation of an image stored on a computer. Digital images don't have any physical size until they are displayed on a screen or printed on paper. Until that point, they are just a collection of numbers on the computer's hard drive that describe the individual elements of a picture and how they are arranged. Some computers come with built-in graphics capability. Others need a device, called a graphics card or graphics adapter board, that has to be added. Unless a computer has graphics capability built into the motherboard, that translation takes place on the graphics card. Depending on whether the image resolution is fixed, it may be of vector or raster type. Without qualification, the term "digital image" usually refers to raster images, also called bitmap images. Raster images are composed of pixels and are suited for photo-realistic images. Vector images are composed of lines and coordinates rather than dots and are more suited to line art, graphs, or fonts. To make a 3-D image, the graphics card first creates a wire frame out of straight lines. Then it rasterizes the image (fills in the remaining pixels). It also adds lighting, texture and color.

    Tactile

Haptic technology, or haptics, is a tactile feedback technology which takes advantage of the sense of touch by applying forces, vibrations, or motions to the user. Several printers and wax jet printers have the capability of producing raised line drawings. There are also handheld devices that use an array of vibrating pins to present a tactile outline of the characters or text under the viewing window of the device.

    Audio


    Speech output systems can be used to read screen text to computer users. Special software programs

    called screen readers attempt to identify and interpret what is being displayed on the screen and speech

    synthesizers convert data to vocalized sounds or text.

    5. Name the different elements of graphics system and explain in detail.(8 marks)(Dec/Jan

    13)

    A user interacts with the graphics system with self-contained packages and input devices. E.g. A paint

    editor.

This package or interface enables the user to create or modify images without having to write programs. The interface consists of a set of functions (API) that reside in a graphics library. The application programmer uses the API functions and is shielded from the details of their implementation.

The device driver is responsible for interpreting the output of the API and converting it into a form understood by the particular hardware.

Pen-plotter model:

    This is a 2-D system which moves a pen to draw images in 2 orthogonal directions.

    E.g. : LOGO language implements this system.

    moveto(x,y) moves pen to (x,y) without tracing a line. lineto(x,y) moves pen to (x,y) by tracing a

    line.

Alternate raster-based 2-D model: writes pixels directly to the frame buffer.

E.g.: write_pixel(x, y, color)

In order to obtain images of objects close to the real world, we need a 3-D object model.

    6. Describe the working of an output device with an example.( 5 marks) (Jun/Jul 12)


    The cathode ray tube or CRT was invented by Karl Ferdinand Braun. It was the most common type of

    display for many years. It was used in almost all computer monitors and televisions until LCD and

    plasma screens started being used.

    A cathode ray tube is an electron gun. The cathode is an electrode (a metal that can send out electrons

    when heated). The cathode is inside a glass tube. Also inside the glass tube is an anode that attracts

electrons. This is used to pull the electrons toward the front of the glass tube, so the electrons shoot out in one direction, like a ray gun. To better control the direction of the electrons, the air is taken out of the tube, making a vacuum.

    The electrons hit the front of the tube, where a phosphor screen is. The electrons make the phosphor

    light up. The electrons can be aimed by creating a magnetic field. By carefully controlling which bits

    of phosphor light up, a bright picture can be made on the front of the vacuum tube. Changing this

picture 30 times every second will make it look like the picture is moving. Because there is a vacuum inside the tube (which has to be strong enough to hold out the air), and the tube must be glass for the phosphor to be visible, the tube must be made of thick glass. For a large television, this vacuum tube can be quite heavy.

    6. Define computer graphics. Explain the application of computer graphics.(10 marks)

    (Jun/Jul 13)(Dec/Jan 14)

    Computer graphics is a sub-field of computer science which studies methods for digitally synthesizing

    and manipulating visual content. Although the term often refers to the study of three-dimensional

    computer graphics, it also encompasses two-dimensional graphics and image processing.

    The following are also considered graphics applications

    Paint programs: Allow you to create rough freehand drawings. The images are stored as bit maps and

    can easily be edited. It is a graphics program that enables you to draw pictures on the display screen

    which is represented as bit maps (bit-mapped graphics). In contrast, draw programs use vector graphics

    (object-oriented images), which scale better.

    Most paint programs provide the tools shown below in the form of icons. By selecting an icon, you can


perform functions associated with the tool. In addition to these tools, paint programs also provide easy ways to draw common shapes such as straight lines, rectangles, circles, and ovals.

    Sophisticated paint applications are often called image editing programs. These applications support

    many of the features of draw programs, such as the ability to work with objects. Each object, however,

    is represented as a bit map rather than as a vector image.

    Illustration/design programs: Supports more advanced features than paint programs, particularly for

drawing curved lines. The images are usually stored in vector-based formats. Illustration/design programs are often called draw programs.

Presentation graphics software: Lets you create bar charts, pie charts, graphics, and other types of images for slide shows and reports. The charts can be based on data imported from spreadsheet applications.

    A type of business software that enables users to create highly stylized images for slide shows and

    reports. The software includes functions for creating various types of charts and graphs and for

    inserting text in a variety of fonts. Most systems enable you to import data from a spreadsheet

    application to create the charts and graphs. Presentation graphics is often called business graphics.

    Animation software: Enables you to chain and sequence a series of images to simulate movement.

    Each image is like a frame in a movie. It can be defined as a simulation of movement created by

    displaying a series of pictures, or frames. A cartoon on television is one example of animation.

    Animation on computers is one of the chief ingredients of multimedia presentations. There are many

    software applications that enable you to create animations that you can display on a computer monitor.

There is a difference between animation and video. Whereas video takes continuous motion and breaks it up into discrete frames, animation starts with independent pictures and puts them together to form the illusion of continuous motion.

    CAD software: Enables architects and engineers to draft designs. It is the acronym for computer-aided

    design. A CAD system is a combination of hardware and software that enables engineers and architects

    to design everything from furniture to airplanes. In addition to the software, CAD systems require a

    high-quality graphics monitor; a mouse, light pen, or digitizing tablet for drawing; and a special printer

    or plotter for printing design specifications.

    CAD systems allow an engineer to view a design from any angle with the push of a button and to zoom

    in or out for close-ups and long-distance views. In addition, the computer keeps track of design

    dependencies so that when the engineer changes one value, all other values that depend on it are

    automatically changed accordingly. Until the mid 1980s, all CAD systems were specially constructed

    computers. Now, you can buy CAD software that runs on general-purpose workstations and personal


    computers.

    Desktop publishing: Provides a full set of word-processing features as well as fine control over

    placement of text and graphics, so that you can create newsletters, advertisements, books, and other

    types of documents. It means by using a personal computer or workstation high-quality printed

    documents can be produced. A desktop publishing system allows you to use different typefaces,

    specify various margins and justifications, and embed illustrations and graphs directly into the text.

    The most powerful desktop publishing systems enable you to create illustrations; while less powerful

    systems let you insert illustrations created by other programs.

    7. Explain the different graphics architectures in detail, with the aid of functional

    schematics.(10 Marks)(Dec/Jan 14)

    A Graphics system has 5 main elements

Pixels and the Frame Buffer

A picture is produced as an array (raster) of picture elements (pixels).

Properties of the frame buffer:

Resolution: the number of pixels in the frame buffer.

Depth or precision: the number of bits used for each pixel.

E.g. a 1-bit-deep frame buffer allows 2 colors; an 8-bit-deep frame buffer allows 256 colors.

    A Frame buffer is implemented either with special types of memory chips or it can be a part of system

    memory.

    In simple systems the CPU does both normal and graphical processing.

Graphics processing takes specifications of graphical primitives from the application program and assigns values to the pixels in the frame buffer. This is also known as rasterization or scan conversion.


    UNIT-2

    THE OPENGL

1. Write an OpenGL program for a 2D Sierpinski gasket using the mid-point of each triangle. (10 marks) (Dec/Jan 12) (Dec/Jan 13)

#include <GL/glut.h>

/* a point data type */
typedef GLfloat point2[2];

/* initial triangle global variables */
point2 v[]={{-1.0, -0.58}, {1.0, -0.58}, {0.0, 1.15}};

int n; /* number of recursive steps */

void display(void);
void myinit(void);

int main(int argc, char **argv)
{
    n=4;
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_SINGLE|GLUT_RGB);
    glutInitWindowSize(500, 500);
    glutCreateWindow("2D Gasket");
    glutDisplayFunc(display);
    myinit();
    glutMainLoop();
}


void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    divide_triangle(v[0], v[1], v[2], n);
    glFlush();
}

void myinit()
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(-2.0, 2.0, -2.0, 2.0);
    glMatrixMode(GL_MODELVIEW);
    glClearColor(1.0, 1.0, 1.0, 1.0);
    glColor3f(0.0, 0.0, 0.0);
}

2. Briefly explain orthographic viewing with OpenGL functions for 2D and 3D viewing. Indicate the significance of the projection plane and viewing point in this. (10 marks) (Dec/Jan 12) (Jun/Jul 12)

gluOrtho2D defines a 2D orthographic projection matrix.

C Specification:

void gluOrtho2D(GLdouble left, GLdouble right, GLdouble bottom, GLdouble top);

    Parameters

    left, right

    Specify the coordinates for the left and right vertical clipping planes.

    bottom, top

    Specify the coordinates for the bottom and top horizontal clipping planes.

    Description

gluOrtho2D sets up a two-dimensional orthographic viewing region. This is equivalent to calling glOrtho with near = -1 and far = 1.


It is no coincidence that those numbers are exact powers of two: 2^2 = 4, 2^4 = 16 and 2^8 = 256. While 256 values can be fit into a single 8-bit byte (and then a single indexed color pixel also

    occupies a single byte), pixel indices with 16 (4-bit, a nibble) or fewer colors can be packed together

    into a single byte (two nibbles per byte, if 16 colors are employed, or four 2-bit pixels per byte if using

    4 colors). Sometimes, 1-bit (2-color) values can be used, and then up to eight pixels can be packed into

    a single byte; such images are considered binary images (sometimes referred as a bitmap or bilevel

    image) and not an indexed color image.

    If simple video overlay is intended through a transparent color, one palette entry is specifically

    reserved for this purpose, and it is discounted as an available color. Some machines, such as the MSX

    series, had the transparent color reserved by hardware.

Indexed color images with palette sizes beyond 256 entries are rare. The practical limit is around 12 bits per pixel, 4,096 different indices. Using indexed 16 bpp or more does not provide the benefits of indexed color, due to the color palette size in bytes being greater than the raw image data itself. Also, useful direct RGB Highcolor modes can be used from 15 bpp and up.

    5. List and explain graphics functions.(10 marks) (Jun/Jul 13) (Dec/Jan 14)

Control Functions (interaction with windows):

glutInitDisplayMode: single or double buffering and color model; properties are logically ORed together.

glutInitWindowSize: window size in pixels.

glutInitWindowPosition: position from the top-left corner of the display.

glutCreateWindow: create a window with a particular title.

Aspect ratio and viewports

Aspect ratio is the ratio of width to height of a particular object.

We may obtain undesirable output if the aspect ratio of the viewing rectangle (specified by glOrtho) is not the same as the aspect ratio of the window (specified by glutInitWindowSize).

Viewport: a rectangular area of the display window, whose height and width can be adjusted to match those of the clipping window, to avoid distortion of the images.

void glViewport(GLint x, GLint y, GLsizei w, GLsizei h);

    The main, display and myinit functions

    In our application, once the primitive is rendered onto the display and the application program ends,

    the window may disappear from the display.

    Event processing loop :

    void glutMainLoop();


    Graphics is sent to the screen through a function called display callback.

    void glutDisplayFunc(function name)

    The function myinit() is used to set the OpenGL state variables dealing with viewing and attributes.

    Control Functions

    glutInit(int *argc, char **argv) initializes GLUT and processes any command line arguments (for X,

    this would be options like -display and -geometry). glutInit() should be called before any other GLUT

    routine.

glutInitDisplayMode(unsigned int mode) specifies whether to use an RGBA or color-index color model. You can also specify whether you want a single- or double-buffered window. (If you're working in color-index mode, you'll want to load certain colors into the color map; use glutSetColor() to do this.) If you want a window with double buffering, the RGBA color model, and a depth buffer, you might call

glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH).

    glutInitWindowPosition(int x, int y) specifies the screen location for the upper-left corner of your

    window.

glutInitWindowSize(int width, int height) specifies the size, in pixels, of your window.

int glutCreateWindow(char *string) creates a window with an OpenGL context. It returns a unique identifier for the new window. Be warned: until glutMainLoop() is called, the window is not yet displayed.

6. Write a typical main function that is common to most non-interactive applications and explain each function call in it. (10 Marks) (Dec/Jan 14)

int main(int argc, char **argv)
{
    n=4;                           /* number of recursive steps */
    glutInit(&argc, argv);         /* initialize GLUT, process command-line args */
    glutInitDisplayMode(GLUT_SINGLE|GLUT_RGB); /* single buffer, RGB color model */
    glutInitWindowSize(500, 500);  /* window size in pixels */
    glutCreateWindow("2D Gasket"); /* create window with this title */
    glutDisplayFunc(display);      /* register the display callback */
    myinit();                      /* set viewing and attribute state */
    glutMainLoop();                /* enter the event-processing loop */


    }

    7. Explain Color Cube in brief. (03 Marks) (Dec/Jan 14)

    The colour space for computer based applications is often visualised by a unit cube. Each colour (red,

    green, blue) is assigned to one of the three orthogonal coordinate axes in 3D space. An example of such

    a cube is shown below along with some key colours and their coordinates.

    Along each axis of the colour cube the colours range from no contribution of that component to a

fully saturated colour. The colour cube is solid: any point (colour) within the cube is specified by three numbers, namely an r,g,b triple. The diagonal line of the cube from black (0,0,0) to white (1,1,1) represents all the greys; that is, all the red, green, and blue components are the same. In practice different computer hardware/software combinations will use different ranges for the colours; common ones are 0-255 and 0-65535 for each component. This is simply a linear scaling of the unit colour cube described here. This RGB colour space lies within our perceptual space; that is, the RGB cube is smaller and represents fewer colours than we can see.

    UNIT - 3

    INPUT AND INTERACTION


    1. What are the various classes of logical input devices that are supported by openGL?

    Explain the functionality of each of these classes.(10 marks) (Dec/Jan 12)

    The six input classes and the logical input values they provide are:

    LOCATOR

    Returns a position (an x,y value) in World Coordinates and a Normalization Transformation number

    corresponding to that used to map back from Normalized Device Coordinates to World Coordinates.

    The NT used corresponds to that viewport with the highest Viewport Input Priority (set by calling

    GSVPIP). Warning: If there is no viewport input priority set then NT 0 is used as default, in which case

    the coordinates are returned in NDC. This may not be what is expected!

    CALL GSVPIP(TNR, RTNR, RELPRI)

    TNR

    Transformation Number

    RTNR

    Reference Transformation Number

    RELPRI

One of the values 'GHIGHR' or 'GLOWER' defined in the Include File, ENUM.INC.

    STROKE

Returns a sequence of (x,y) points in World Coordinates and a Normalization Transformation as for the Locator.

VALUATOR

    Returns a real value, for example, to control some sort of analogue device.

    CHOICE

    Returns a non-negative integer which represents a choice from a selection of several possibilities. This

    could be implemented as a menu, for example.

    STRING

    Returns a string of characters from the keyboard.

    PICK

Returns a segment name and a pick identifier of an object pointed at by the user. Thus, the application does not have to use the locator to return a position, and then try to find out to which object the position corresponds.


    2. Enlist the various features that a good interactive program should include. (5 marks)

    (Dec/Jan 12) (Dec/Jan 13)

Interactive programming is the procedure of writing parts of a program while it is already active. This focuses on the program text as the main interface for a running process, rather than an interactive application, where the program is designed in development cycles and used thereafter (usually by a so-called "user", in distinction to the "developer"). Consequently, here, the activity of writing a program becomes part of the program itself.

It thus forms a specific instance of interactive computation, as an extreme opposite to batch processing, where neither writing the program nor its use happens in an interactive way. The principle of rapid feedback in Extreme Programming is radicalized and becomes more explicit.

    3. Suppose that the openGL window is 500 X 50 pixels and the clipping window is a unit

    square with the origin at the lower left corner. Use simple XOR mode to draw erasable

    lines.(10 marks) (Jun/Jul 12)

/* Rubber-band line drawing with XOR; assumes globals
   int X, Y, Xn, Yn, FLAG = 0 and the window height winh. */
void MouseMove(int x, int y)
{
    if(FLAG == 0){
        X = x;
        Y = winh - y;
        Xn = x;
        Yn = winh - y;


        FLAG = 1;
    }
    else if(FLAG == 1){
        glEnable(GL_COLOR_LOGIC_OP);
        glLogicOp(GL_XOR);
        glBegin(GL_LINES);
        glVertex2i(X, Y);
        glVertex2i(Xn, Yn);
        glEnd();
        glFlush();   /* old line erased */
        glBegin(GL_LINES);
        glVertex2i(X, Y);
        glVertex2i(x, winh - y);
        glEnd();
        glFlush();
        Xn = x;
        Yn = winh - y;
    }
}

    4. What is the functionality of display lists in modeling. Explain with an example (5 marks)

    (Dec/Jan 12) (Jun/Jul 13)

    Display lists may improve performance since you can use them to store OpenGL commands for later

    execution. It is often a good idea to cache commands in a display list if you plan to redraw the same

    geometry multiple times, or if you have a set of state changes that need to be applied multiple times.

    Using display lists, you can define the geometry and/or state changes once and execute them multiple

times.

    A display list is a convenient and efficient way to name and organize a set of OpenGL commands. For

    example, suppose you want to draw a torus and view it from different angles. The most efficient way to


    do this would be to store the torus in a display list. Then whenever you want to change the view, you

    would change the modelview matrix and execute the display list to draw the torus. Example illustrates

    this.

    Creating a Display List: torus.c

#include <GL/glut.h>
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

GLuint theTorus;

/* Draw a torus */
static void torus(int numc, int numt)
{
    int i, j, k;
    double s, t, x, y, z, twopi;

    twopi = 2 * (double)M_PI;
    for (i = 0; i < numc; i++) {
        glBegin(GL_QUAD_STRIP);
        for (j = 0; j <= numt; j++) {
            for (k = 1; k >= 0; k--) {
                s = (i + k) % numc + 0.5;
                t = j % numt;
                x = (1 + .1*cos(s*twopi/numc)) * cos(t*twopi/numt);
                y = (1 + .1*cos(s*twopi/numc)) * sin(t*twopi/numt);
                z = .1 * sin(s * twopi / numc);
                glVertex3f(x, y, z);
            }
        }
        glEnd();
    }
}

5. Explain Picking operation in OpenGL with an example. (10 marks) (Jun/Jul 12)

The OpenGL API provides a mechanism for picking objects in a 3D scene. This tutorial will show you

how to detect which objects are below the mouse or in a square region of the OpenGL window. The steps involved in detecting which objects are at the location where the mouse was clicked are:

    1. Get the window coordinates of the mouse

    2. Enter selection mode


    3. Redefine the viewing volume so that only a small area of the window around the cursor is rendered

    4. Render the scene, either using all primitives or only those relevant to the picking operation

    5. Exit selection mode and identify the objects which were rendered on that small part of the screen.

    In order to identify the rendered objects using the OpenGL API you must name all relevant objects in

    your scene. The OpenGL API allows you to give names to primitives, or sets of primitives (objects).

    When in selection mode, a special rendering mode provided by the OpenGL API, no objects are actually

    rendered in the framebuffer. Instead the names of the objects (plus depth information) are collected in an

    array. For unnamed objects, only depth information is collected.

    Using the OpenGL terminology, a hit occurs whenever a primitive is rendered in selection mode. Hit

    records are stored in the selection buffer. Upon exiting the selection mode OpenGL returns the selection

    buffer with the set of hit records. Since OpenGL provides depth information for each hit the application

    can then easily detect which object is closer to the user.

    Introducing the Name Stack

    As the title suggests, the names you assign to objects are stored in a stack. Actually you don't give

    names to objects, as in a text string. Instead you number objects. Nevertheless, since in OpenGL the

    term name is used, the tutorial will also use the term name instead of number.

    When an object is rendered, if it intersects the new viewing volume, a hit record is created. The hit

    record contains the names currently on the name stack plus the minimum and maximum depth for the

    object. Note that a hit record is created even if the name stack is empty, in which case it only contains

    depth information. If more objects are rendered before the name stack is altered or the application leavesthe selection mode, then the depth values stored on the hit record are altered accordingly.

    A hit record is stored on the selection buffer only when the current contents of the name stack are altered

    or when the application leaves the selection mode.

    The rendering function for the selection mode therefore is responsible for the contents of the name stack

    as well as the rendering of primitives.

    OpenGL provides the following functions to manipulate the Name Stack:

    void glInitNames(void);

    This function creates an empty name stack. You are required to call this function to initialize the stack

    prior to pushing names.

    void glPushName(GLuint name);

Adds name to the top of the stack. The stack's maximum dimension is implementation dependent; however, according to the specs it must contain at least 64 names, which should prove to be more than

    enough for the vast majority of applications. Nevertheless if you want to be sure you may query the state

    variable GL_NAME_STACK_DEPTH (use glGetIntegerv(GL_NAME_STACK_DEPTH)). Pushing

    values onto the stack beyond its capacity causes an overflow error GL_STACK_OVERFLOW.

    void glPopName();

Removes the name from the top of the stack. Popping a value from an empty stack causes an underflow error, GL_STACK_UNDERFLOW.

void glLoadName(GLuint name);

    This function replaces the top of the stack with name. It is the same as calling

    glPopName();

    glPushName(name);

This function is basically a shortcut for the above snippet of code. Loading a name on an empty stack

    causes the error GL_INVALID_OPERATION.

    Note: Calls to the above functions are ignored when not in selection mode. This means that you may

    have a single rendering function with all the name stack functions inside it. When in the normal

    rendering mode the functions are ignored and when in selection mode the hit records will be collected.

    6. Explain logical classifications of Input devices with examples. (Dec/Jan 13) (Dec/Jan 14)

    LOCATOR

    Returns a position (an x,y value) in World Coordinates and a Normalization Transformation number

    corresponding to that used to map back from Normalized Device Coordinates to World Coordinates.

The NT used corresponds to the viewport with the highest Viewport Input Priority (set by calling GSVPIP). Warning: if there is no viewport input priority set, then NT 0 is used as default, in which case the coordinates are returned in NDC. This may not be what is expected!

    CALL GSVPIP(TNR, RTNR, RELPRI)

    TNR

    Transformation Number

    RTNR

    Reference Transformation Number

    RELPRI

One of the values 'GHIGHR' or 'GLOWER' defined in the Include File, ENUM.INC, which is listed in the Appendix.

    STROKE

    Returns a sequence of (x,y) points in World Coordinates and a Normalization Transformation as for

    the Locator.

    VALUATOR

    Returns a real value, for example, to control some sort of analogue device.

    CHOICE

    Returns a non-negative integer which represents a choice from a selection of several possibilities. This

    could be implemented as a menu, for example.

    STRING

    Returns a string of characters from the keyboard.

    PICK

    Returns a segment name and a pick identifier of an object pointed at by the user. Thus, the application

    does not have to use the locator to return a position, and then try to find out to which object the

    position corresponds

    7. How are menus and submenus created in openGL? Explain with an example.(Dec/Jan 13)

#include <GL/glut.h>
#include <stdio.h>
#include <stdlib.h>

    /* process menu option 'op' */

    void menu(int op) {

    switch(op) {

    case 'Q':

    case 'q':

    exit(0);

    }

    }

    /* executed when a regular key is pressed */

    void keyboardDown(unsigned char key, int x, int y) {

    switch(key) {

    case 'Q':

    case 'q':

    case 27: // ESC

    exit(0);

    }

    }

    /* executed when a regular key is released */

    void keyboardUp(unsigned char key, int x, int y) {

    }

    /* executed when a special key is pressed */

    void keyboardSpecialDown(int k, int x, int y) {

    }

    /* executed when a special key is released */

    void keyboardSpecialUp(int k, int x, int y) {

    }

    /* reshaped window */

    void reshape(int width, int height) {

    GLfloat fieldOfView = 90.0f;

    glViewport (0, 0, (GLsizei) width, (GLsizei) height);

    glMatrixMode (GL_PROJECTION);

    glLoadIdentity();

    gluPerspective(fieldOfView, (GLfloat) width/(GLfloat) height, 0.1, 500.0);

    glMatrixMode(GL_MODELVIEW);

    glLoadIdentity();

    }

    /* executed when button 'button' is put into state 'state' at screen position ('x', 'y') */

    void mouseClick(int button, int state, int x, int y) {

    }

    /* executed when the mouse moves to position ('x', 'y') */

    void mouseMotion(int x, int y) {

    }

    /* render the scene */

    void draw() {

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glMatrixMode(GL_MODELVIEW);

    glLoadIdentity();

    /* render the scene here */

    glFlush();

    glutSwapBuffers();

    }

    /* executed when program is idle */

    void idle() {

    }

out in the same configuration used by most adding machines and calculators.

    3 Function Keys

The twelve function keys are present on the keyboard. These are arranged in a row along the top of the keyboard. Each function key has a unique meaning and is used for some specific purpose.

    4 Control keys

These keys provide cursor and screen control. They include four directional arrow keys. Control keys also include Home, End, Insert, Delete, Page Up, Page Down, Control (Ctrl), Alternate (Alt) and Escape (Esc).

5 Special Purpose Keys

Keyboard also contains some special purpose keys such as Enter, Shift, Caps Lock, Num Lock, Space bar, Tab, and Print Screen.

    Mouse

Mouse is the most popular pointing device. It is a very famous cursor-control device. It is a small palm-size box with a round ball at its base which senses the movement of the mouse and sends corresponding signals to the CPU when the buttons are pressed.

Generally, it has two buttons, called the left and right buttons, and a scroll wheel between them. A mouse can be used to control the position of the cursor on screen, but it cannot be used to enter text into the computer.

    ADVANTAGES

    Easy to use

    Not very expensive

Moves the cursor faster than the arrow keys of the keyboard.

    Joystick

Joystick is also a pointing device, which is used to move the cursor position on a monitor screen. It is a stick having a spherical ball at both its lower and upper ends. The lower spherical ball moves in a socket. The joystick can be moved in all four directions.

The function of a joystick is similar to that of a mouse. It is mainly used in Computer Aided Design (CAD) and playing computer games.

    Light Pen

Light pen is a pointing device, which is similar to a pen. It is used to select a displayed menu item or draw pictures on the monitor screen. It consists of a photocell and an optical system placed in a small tube.

When the light pen's tip is moved over the monitor screen and the pen button is pressed, its photocell sensing element detects the screen location and sends the corresponding signal to the CPU.

    Track Ball

Track ball is an input device that is mostly used in notebook or laptop computers, instead of a mouse. This is a ball which is half inserted, and by moving fingers on the ball, the pointer can be moved.

Since the whole device is not moved, a track ball requires less space than a mouse. A track ball comes in various shapes like a ball, a button and a square.

Scanner

Scanner is an input device which works more like a photocopy machine. It is used when some information is available on paper and is to be transferred to the hard disc of the computer for further manipulation.

Scanner captures images from the source which are then converted into a digital form that can be stored on the disc. These images can be edited before they are printed.

    Digitizer

Digitizer is an input device which converts analog information into a digital form. A digitizer can convert a signal from a television camera into a series of numbers that can be stored in a computer. These can be used by the computer to create a picture of whatever the camera had been pointed at.

Digitizer is also known as Tablet or Graphics Tablet because it converts graphics and pictorial data into binary inputs. A graphics tablet used as a digitizer is suited for fine work in drawing and image-manipulation applications.

    9. Write a program on rotating a color cube.(10 marks) (Jun/Jul 13)

#include <GL/glut.h>
#include <stdlib.h>

GLfloat vertices[][3] = {{-1.0,-1.0,-1.0},{1.0,-1.0,-1.0},{1.0,1.0,-1.0},{-1.0,1.0,-1.0},{-1.0,-1.0,1.0},{1.0,-1.0,1.0},{1.0,1.0,1.0},{-1.0,1.0,1.0}};

GLfloat colors[][3] = {{0.0,0.0,0.0},{1.0,0.0,0.0},{1.0,1.0,0.0},{0.0,1.0,0.0},

    {0.0,0.0,1.0},{1.0,0.0,1.0},{1.0,1.0,1.0},{0.0,1.0,1.0}};

    static GLfloat theta[]={0.0,0.0,0.0};

    GLint axis =1;

    void polygon(int a, int b,int c,int d)

    {

    //draw a polygon via list of vertices

    glBegin(GL_POLYGON);

    glColor3fv(colors[a]);

    glVertex3fv(vertices[a]);

    glColor3fv(colors[b]);

    glVertex3fv(vertices[b]);

    glColor3fv(colors[c]);

    glVertex3fv(vertices[c]);

    glColor3fv(colors[d]);

    glVertex3fv(vertices[d]);

    glEnd();

    }

    void colorcube(void)

    { //map vertices to faces

polygon(0,3,2,1);
polygon(2,3,7,6);

    polygon(0,4,7,3);

    polygon(1,2,6,5);

    polygon(4,5,6,7);

    polygon(0,1,5,4);

    }

    void display(void)

    {

// display callback: clear frame buffer and Z buffer, rotate cube and draw, swap buffers

    glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);

    glLoadIdentity();

    glRotatef(theta[0],1.0,0.0,0.0);

    glRotatef(theta[1],0.0,1.0,0.0);

    glRotatef(theta[2],0.0,0.0,1.0);

    colorcube();

    glFlush();

    glutSwapBuffers();

    }

    void spinCube()

    {

// idle callback: spin cube 1 degree about the selected axis

    theta[axis] +=1.0;

    if(theta[axis]>360.0)

    theta[axis]-= 360.0;

    glutPostRedisplay();

    }

    void mouse(int btn,int state,int x,int y)

    {

// mouse callback: select an axis about which to rotate

    if(btn== GLUT_LEFT_BUTTON && state ==GLUT_DOWN) axis =0;

if(btn== GLUT_MIDDLE_BUTTON && state ==GLUT_DOWN) axis =1;
if(btn== GLUT_RIGHT_BUTTON && state ==GLUT_DOWN) axis =2;

    }

    void myReshape(int w,int h)

    {

    glViewport(0,0,w,h);

    glMatrixMode(GL_PROJECTION);

    glLoadIdentity();

    glOrtho(-2.0,2.0,-2.0 , 2.0, -10.0,10.0);

    glMatrixMode(GL_MODELVIEW);

    }

    int main(int argc,char** argv)

    {

    glutInit(&argc,argv);

    //need both double buffering and Zbuffer

    glutInitDisplayMode(GLUT_DOUBLE|GLUT_RGB|GLUT_DEPTH);

    glutInitWindowSize(500,500);

    glutCreateWindow("Rotating a color cube ");

    glutReshapeFunc(myReshape);

    glutDisplayFunc(display);

    glutIdleFunc(spinCube);

    glutMouseFunc(mouse);

    glEnable(GL_DEPTH_TEST); //Enable hidden surface removal

    glutMainLoop();

    return 0;

    }

    10. What is double buffering? How does openGL support this? Discuss. (06 Marks) (Dec/Jan

    14)

    Double buffering is one of the most basic methods of updating the display. This is often the first

    buffering technique adopted by new coders to combat flickering.

Double buffering uses a memory bitmap as a buffer to draw onto. The buffer is then drawn onto the screen. If the objects were drawn directly to the screen, the display could be updated mid-draw, leaving some objects out. When the buffer, with all the objects already on it, is drawn onto the screen, the new image will be drawn over the old one (the screen does not need to be cleared). If the display gets updated before the drawing is complete, there may be a noticeable shear in the image, but all objects will be drawn.

    Shearing can be avoided by using vsync.

    UNIT - 4

    GEOMETRIC OBJECTS AND TRANSFORMATIONS 1

    1. Explain the complete procedure of converting a world object frame into camera or eye

    frame, using the model view matrix. (10 marks) (Dec/Jan 12)

    World Frame

The world frame is right-handed, like the body frame

    Origin

    On a ground point

    Vectors

    X - front

    Y - left

    Z - up

    Angles

φ (Phi), Aviation: roll

θ (Theta), Aviation: pitch / nick (helicopters)

ψ (Psi), Aviation: yaw

    Coordination transformation

To transform from the world frame to the body frame, the standard aerospace convention is to apply the rotations in the order yaw (ψ), then pitch (θ), then roll (φ).

    2. With respect to modeling discuss vertex arrays. (10 marks) (Dec/Jan 12)

    The vertex specification commands described in section 2.7 accept data in almost any format, but their

    use requires many command executions to specify even simple geometry. Vertex data may also be

    placed into arrays that are stored in the client's address space. Blocks of data in these arrays may then

    be used to specify multiple geometric primitives through the execution of a single GL command. The

    client may specify up to six arrays: one each to store edge flags, texture coordinates, colors, color

    indices, normals, and vertices. The commands

    void EdgeFlagPointer ( sizei stride, void *pointer ) ;

    void TexCoordPointer ( int size, enum type, sizei stride, void *pointer ) ;

    void ColorPointer ( int size, enum type, sizei stride, void *pointer ) ;

    void IndexPointer ( enum type, sizei stride, void *pointer ) ;

    void NormalPointer ( enum type, sizei stride, void *pointer ) ;

    void VertexPointer ( int size, enum type, sizei stride, void *pointer ) ;

    describe the locations and organizations of these arrays. For each command, type specifies the data

    type of the values stored in the array. Because edge flags are always type boolean, EdgeFlagPointer

    has no type argument. size, when present, indicates the number of values per vertex that are stored in

    the array. Because normals are always specified with three values, NormalPointer has no size

    argument. Likewise, because color indices and edge flags are always specified with a single value,

    IndexPointer and EdgeFlagPointer also have no size argument. Table 2.4 indicates the allowable

    values for size and type (when present). For type the values BYTE, SHORT, INT, FLOAT, and

    DOUBLE indicate types byte, short, int, float, and double, respectively; and the values

    UNSIGNED_BYTE, UNSIGNED_SHORT, and UNSIGNED_INT indicate types ubyte, ushort, and

    uint, respectively. The error INVALID_VALUE is generated if size is specified with a value other than

    that indicated in the table.

    3. Explain modeling a color cube in detail (10 marks) (Jun/Jul 12)

The cube is a simple 3D object. There are a number of ways to model it. CSG systems regard it as a single primitive. Another way is to treat the cube as an object defined by eight vertices. We start modeling the cube by assuming that its vertices are available through an array of vertices, i.e.,

GLfloat vertices[8][3] = {{-1.0,-1.0,-1.0}, {1.0,-1.0,-1.0}, {1.0,1.0,-1.0}, {-1.0,1.0,-1.0}, {-1.0,-1.0,1.0}, {1.0,-1.0,1.0}, {1.0,1.0,1.0}, {-1.0,1.0,1.0}};

We can also use an object-oriented form via a 3D point type as follows

typedef GLfloat point3[3];

    The vertices of the cube can be defined as follows

point3 vertices[8] = {{-1.0,-1.0,-1.0}, {1.0,-1.0,-1.0}, {1.0,1.0,-1.0}, {-1.0,1.0,-1.0}, {-1.0,-1.0,1.0}, {1.0,-1.0,1.0}, {1.0,1.0,1.0}, {-1.0,1.0,1.0}};

    We can use the list of points to specify the faces of the cube. For example one face is

    glBegin (GL_POLYGON);

    glVertex3fv (vertices [0]);

    glVertex3fv (vertices [3]);

    glVertex3fv (vertices [2]);

    glVertex3fv (vertices [1]);

    glEnd ();

    Similarly we can define other five faces of the cube.

3.2 Inward- and outward-pointing faces

    When we are defining the 3D polygon, we have to be careful about the order in which we

    specify the vertices, because each polygon has two sides. Graphics system can display

either or both of them. From the camera's perspective, we need to distinguish between the two faces of a polygon. The order in which the vertices are specified provides this information. In the above example we used the order 0,3,2,1 for the first face. An order such as 3,2,1,0 would be the same, because the final vertex in a polygon definition is always linked back to the first, but the order 0,1,2,3 is different.

We call a face outward-facing if the vertices are traversed in a counter-clockwise order when the face is viewed from the outside.

In our example, the order 0,3,2,1 specifies the outward face of the cube, whereas the order 0,1,2,3 specifies the back face of the same polygon.

    4. Explain affine transformations(10 marks) (Jun/Jul 12) (Dec/Jan 12) (Dec/Jan 14)

The essential power of affine transformations is that we only need to transform the endpoints of a segment, and every point on the segment is transformed, because lines map to lines. Hence, the transformation must be linear. What linear means in this context is the following:

f(αp + βq) = αf(p) + βf(q)

What this equation means is that the mapping f, applied to a linear combination of p and q, is the same as the linear combination of f applied to p and q. Thus, in order to transform every point on a line segment, it is sufficient to transform the endpoints. In particular, to transform a line drawn from A to B, it is sufficient to transform A and B and then draw the line between the transformed endpoints. Fortunately, matrix multiplication has this property of being linear.

    5. List and explain different frame coordinates in openGL.(10 marks) (Jun/Jul 13)

    Cartesian coordinate system

    The prototypical example of a coordinate system is the Cartesian coordinate system. In the plane, two

    perpendicular lines are chosen and the coordinates of a point are taken to be the signed distances to the

    lines.

    Rectangular coordinates

    In three dimensions, three perpendicular planes are chosen and the three coordinates of a point are the

    signed distances to each of the planes. This can be generalized to create n coordinates for any point in

    n-dimensional Euclidean space. Depending on the direction and order of the coordinate axis the system

    may be a right-hand or a left-hand system.

    Polar coordinate system

Another common coordinate system for the plane is the polar coordinate system. A point is chosen as the pole and a ray from this point is taken as the polar axis. For a given angle θ, there is a single line through the pole whose angle with the polar axis is θ (measured counterclockwise from the axis to the line). Then there is a unique point on this line whose signed distance from the origin is r for a given number r. For a given pair of coordinates (r, θ) there is a single point, but any point is represented by many pairs of coordinates. For example, (r, θ), (r, θ + 2π) and (−r, θ + π) are all polar coordinates for the same point. The pole is represented by (0, θ) for any value of θ.

    Cylindrical and spherical coordinate systems

There are two common methods for extending the polar coordinate system to three dimensions. In the cylindrical coordinate system, a z-coordinate with the same meaning as in Cartesian coordinates is added to the r and θ polar coordinates. Spherical coordinates take this a step further by converting the pair of cylindrical coordinates (r, z) to polar coordinates (ρ, φ), giving a triple (ρ, θ, φ).

    Homogeneous coordinates

    A point in the plane may be represented in homogeneous coordinates by a triple (x, y, z) where x/z and

    y/z are the Cartesian coordinates of the point. This introduces an "extra" coordinate since only two are

    needed to specify a point on the plane, but this system is useful in that it represents any point on the

    projective plane without the use of infinity. In general, a homogeneous coordinate system is one where

    only the ratios of the coordinates are significant and not the actual values.

    6. Define and discuss with diagram translation, rotation and scaling.(10 marks) (Jun/Jul 13)

Rotation

For rotation by an angle θ clockwise about the origin (note that this definition of clockwise depends on the x axis pointing right and the y axis pointing up; in SVG, for example, where the y axis points down, the matrices below must be swapped), the functional form is x' = x cos θ + y sin θ and y' = −x sin θ + y cos θ. Written in matrix form, this becomes:

\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}

Similarly, for a rotation counter-clockwise about the origin, the functional form is x' = x cos θ − y sin θ and y' = x sin θ + y cos θ, and the matrix form is:

\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}

Scaling

For scaling (that is, enlarging or shrinking), we have x' = s_x x and y' = s_y y. The matrix form is:

\begin{bmatrix} s_x & 0 \\ 0 & s_y \end{bmatrix}

When s_x s_y = 1, the matrix is a squeeze mapping and preserves areas in the plane.

If s_x or s_y is greater than 1 in absolute value, the transformation stretches the figures in the corresponding direction; if less than 1, it shrinks them in that direction. Negative values of s_x or s_y also flip (mirror) the points in that direction.

Applying this sort of scaling n times is equivalent to applying a single scaling with factors s_x^n and s_y^n. More generally, any symmetric matrix defines a scaling along two perpendicular axes (the eigenvectors of the matrix) by equal or distinct factors (the eigenvalues corresponding to those eigenvectors).

Shearing

For shear mapping (visually similar to slanting), there are two possibilities.

A shear parallel to the x axis has x' = x + ky and y' = y. Written in matrix form, this becomes:

\begin{bmatrix} 1 & k \\ 0 & 1 \end{bmatrix}

A shear parallel to the y axis has x' = x and y' = y + kx, which has matrix form:

\begin{bmatrix} 1 & 0 \\ k & 1 \end{bmatrix}

Reflection

To reflect a vector about a line that goes through the origin, let (l_x, l_y) be a vector in the direction of the line. Then use the transformation matrix:

\frac{1}{l_x^2 + l_y^2} \begin{bmatrix} l_x^2 - l_y^2 & 2 l_x l_y \\ 2 l_x l_y & l_y^2 - l_x^2 \end{bmatrix}

Orthogonal projection

To project a vector orthogonally onto a line that goes through the origin, let (u_x, u_y) be a vector in the direction of the line. Then use the transformation matrix:

\frac{1}{u_x^2 + u_y^2} \begin{bmatrix} u_x^2 & u_x u_y \\ u_x u_y & u_y^2 \end{bmatrix}

As with reflections, the orthogonal projection onto a line that does not pass through the origin is an affine, not linear, transformation.

Parallel projections are also linear transformations and can be represented simply by a matrix. However, perspective projections are not, and to represent these with a matrix, homogeneous coordinates must be used.

    7.

Explain the mathematical entities point, scalar and vector with examples for each. (06 Marks) (Dec/Jan 14)

Scalar

A real number: magnitude only (e.g. a length or an angle); no direction, no position

Vector

    Magnitude

    Direction

    NO position

    Can be added, scaled, rotated

    CG vectors: 2, 3 or 4 dimensions

    Points

    Location in coordinate system

    Cannot add or scale

    Subtract 2 points = vector

    Vector-Point Relationship

    Diff. b/w 2 points = vector

v = Q − P

    Sum of point and vector = point

    v + P = Q

    8. Explain Bilinear interpolation method of assigning colors to points inside a

    quadrilateral.(04 Marks) (Dec/Jan 14)

    Algorithm

Suppose that we want to find the value of the unknown function f at the point P = (x, y). It is assumed that we know the value of f at the four points Q11 = (x1, y1), Q12 = (x1, y2), Q21 = (x2, y1), and Q22 = (x2, y2).

We first do linear interpolation in the x-direction. This yields

f(x, y1) ≈ ((x2 − x)/(x2 − x1)) f(Q11) + ((x − x1)/(x2 − x1)) f(Q21)

f(x, y2) ≈ ((x2 − x)/(x2 − x1)) f(Q12) + ((x − x1)/(x2 − x1)) f(Q22)

We proceed by interpolating in the y-direction:

f(x, y) ≈ ((y2 − y)/(y2 − y1)) f(x, y1) + ((y − y1)/(y2 − y1)) f(x, y2)

This gives us the desired estimate of f(x, y).

    UNIT - 5

    GEOMETRIC OBJECTS AND TRANSFORMATIONS 2

    1. Write the translation matrices 3D translation, rotation and scaling and explain.(6 marks)

    (Dec/Jan 13) (Jun/Jul 12) (Dec/Jan 12) (Dec/Jan 14)

Translation

In homogeneous coordinates, a translation by (t_x, t_y, t_z) is the 4 × 4 matrix

\begin{bmatrix} 1 & 0 & 0 & t_x \\ 0 & 1 & 0 & t_y \\ 0 & 0 & 1 & t_z \\ 0 & 0 & 0 & 1 \end{bmatrix}

and a scaling by factors (s_x, s_y, s_z) along the axes is

\begin{bmatrix} s_x & 0 & 0 & 0 \\ 0 & s_y & 0 & 0 \\ 0 & 0 & s_z & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}.

Rotation

The matrix to rotate by an angle \theta about the axis defined by the unit vector (l, m, n) is

    \begin{bmatrix}ll(1-\cos \theta)+\cos\theta & ml(1-\cos\theta)-n\sin\theta & nl(1-\cos\theta)+m\sin\theta\\

    lm(1-\cos\theta)+n\sin\theta & mm(1-\cos\theta)+\cos\theta & nm(1-\cos\theta)-l\sin\theta \\

    ln(1-\cos\theta)-m\sin\theta & mn(1-\cos\theta)+l\sin\theta & nn(1-\cos\theta)+\cos\theta

    \end{bmatrix}.

    Reflection

    To reflect a point through a plane ax + by + cz = 0 (which goes through the origin), one can use

\mathbf{A} = \mathbf{I}-2\mathbf{N}\mathbf{N}^T, where \mathbf{I} is the 3x3 identity matrix and

    \mathbf{N} is the three-dimensional unit vector for the surface normal of the plane. If the L2 norm of

    a, b, and c is unity, the transformation matrix can be expressed as:

    \mathbf{A} = \begin{bmatrix} 1 - 2 a^2 & - 2 a b & - 2 a c \\ - 2 a b & 1 - 2 b^2 & - 2 b c \\ - 2 a c &

    - 2 b c & 1 - 2c^2 \end{bmatrix}

    Composing and inverting transformations

    One of the main motivations for using matrices to represent linear transformations is that

    transformations can then be easily composed (combined) and inverted.

    Composition is accomplished by matrix multiplication. If A and B are the matrices of two linear

    transformations, then the effect of applying first A and then B to a vector x is given by:

    \mathbf{B}(\mathbf{A} \vec{x} ) = (\mathbf{BA}) \vec{x}

    (This is called the associative property.) In other words, the matrix of the combined transformation A

    followed by B is simply the product of the individual matrices. Note that the multiplication is done in

    the opposite order from the English sentence: the matrix of "A followed by B" is BA, not AB. A

    consequence of the ability to compose transformations by multiplying their matrices is that

transformations can also be inverted by simply inverting their matrices. So, A⁻¹ represents the transformation that "undoes" A.

    2. What are vertex arrays? Explain how vertex arrays can be used to model a color cube. (8

    marks) (Dec/Jan 13) (Jun/Jul 12)

Instead of specifying individual vertex data in immediate mode (between glBegin() and glEnd() pairs), you can store vertex data in a set of arrays including vertex positions, normals, texture coordinates and color information. And you can draw only a selection of geometric primitives by dereferencing the array elements with array indices.

Take a look at the following code to draw a cube in immediate mode.

Each face needs 6 glVertex*() calls to make 2 triangles; for example, the front face has v0-v1-v2 and v2-v3-v0 triangles. A cube has 6 faces, so the total number of glVertex*() calls is 36. If you also specify normals, texture coordinates and colors for the corresponding vertices, it increases the

    number of OpenGL function calls.

    The other thing that you should notice is the vertex "v0" is shared with 3 adjacent faces; front, right

    and top face. In immediate mode, you have to provide this shared vertex 6 times, twice for each side as

    shown in the code.

    glBegin(GL_TRIANGLES); // draw a cube with 12 triangles

    // front face =================

    glVertex3fv(v0); // v0-v1-v2

    glVertex3fv(v1);

    glVertex3fv(v2);


    glVertex3fv(v2); // v2-v3-v0

    glVertex3fv(v3);

    glVertex3fv(v0);

    // right face =================

    glVertex3fv(v0); // v0-v3-v4

    glVertex3fv(v3);

    glVertex3fv(v4);

    glVertex3fv(v4); // v4-v5-v0

    glVertex3fv(v5);

    glVertex3fv(v0);

    // top face ===================

    glVertex3fv(v0); // v0-v5-v6

    glVertex3fv(v5);

    glVertex3fv(v6);

    glVertex3fv(v6); // v6-v1-v0

    glVertex3fv(v1);

    glVertex3fv(v0);

    ... // draw other 3 faces

    glEnd();

Using vertex arrays reduces the number of function calls and the redundant usage of shared vertices, and may therefore increase rendering performance. Three different OpenGL functions are available for drawing with vertex arrays: glDrawArrays(), glDrawElements() and glDrawRangeElements().

    3. Write a short note on current transformation matrix.(8 marks) (Dec/Jan 12) (Jun/Jul 13)

    The Current Transformation Matrix (CTM)

OpenGL maintains a current transformation matrix that is applied to any vertex that is defined subsequent to its


    setting. If we change the CTM, we change the state of the system. The CTM is part of the

    pipeline shown in figure below. The CTM is a 4 X 4 Matrix and it can be altered by a set of functions

    provided by the graphics package.

We can perform the following replacement operations:

Initialization: C ← I

Postmultiplication: C ← CT, C ← CS, C ← CR, C ← CM

Setting: C ← T, C ← S, C ← R, C ← M

    where C is the CTM

    I is an identity matrix

    T is a translation matrix

    S is a scaling matrix

    R is a rotation matrix

    M is an arbitrary matrix
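The postmultiplication behaviour can be simulated with plain matrices. The sketch below is illustrative; the helper functions are assumptions standing in for glTranslate*/glScale*, not OpenGL calls. It shows that a vertex defined after C ← CT and C ← CS is transformed by the accumulated CTM:

```python
import numpy as np

def translate(tx, ty, tz):
    M = np.eye(4)
    M[:3, 3] = [tx, ty, tz]
    return M

def scale(sx, sy, sz):
    return np.diag([sx, sy, sz, 1.0])

C = np.eye(4)                 # Initialization: C <- I
C = C @ translate(1, 0, 0)    # Postmultiplication: C <- CT
C = C @ scale(2, 2, 2)        # Postmultiplication: C <- CS

# A vertex defined after these calls is scaled first, then translated:
v = np.array([1.0, 0.0, 0.0, 1.0])
assert np.allclose(C @ v, [3.0, 0.0, 0.0, 1.0])
```

Because the CTM postmultiplies, the transformation specified last (the scale) is the one applied to the vertex first.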

    4. What is transformation? Explain affine transformation.(12 marks) (Jun/Jul 13)

An affine transformation is any transformation that preserves collinearity (i.e., all points lying on a line initially still lie on a line after transformation) and ratios of distances (e.g., the midpoint of a line segment remains the midpoint after transformation). In this sense, affine indicates a special class of projective transformations that do not move any objects from the affine space to the plane at infinity or conversely. An affine transformation is also called an affinity.


    Geometric contraction, expansion, dilation, reflection, rotation, shear, similarity transformations, spiral

    similarities, and translation are all affine transformations, as are their combinations. In general, an affine

    transformation is a composition of rotations, translations, dilations, and shears.

    While an affine transformation preserves proportions on lines, it does not necessarily preserve angles or

    lengths. Any triangle can be transformed into any other by an affine transformation, so all triangles are

    affine and, in this sense, affine is a generalization of congruent and similar. A particular example

combining rotation and expansion is the rotation-enlargement transformation

x′ = s[(x − x₀) cos α + (y − y₀) sin α]
y′ = s[−(x − x₀) sin α + (y − y₀) cos α]

Separating the equations,

x′ = (s cos α) x + (s sin α) y − s(x₀ cos α + y₀ sin α)
y′ = (−s sin α) x + (s cos α) y + s(x₀ sin α − y₀ cos α)

This can also be written as

x′ = ax − by + c
y′ = bx + ay + d

where a = s cos α and b = −s sin α. The scale factor is then defined by

s = √(a² + b²)

and the rotation angle by

α = tan⁻¹(−b/a).

An affine transformation of ℝⁿ is a map F: ℝⁿ → ℝⁿ of the form

F(p) = Ap + q


for all p ∈ ℝⁿ, where A is a linear transformation of ℝⁿ. If det A > 0, the transformation is orientation-preserving; if det A < 0, it is orientation-reversing.
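The midpoint-preservation property is easy to verify numerically. The sketch below is illustrative; the linear part A, the translation b and the sample points are arbitrary examples. It checks that the image of a midpoint under f(x) = Ax + b is the midpoint of the images:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))   # arbitrary linear part
b = rng.standard_normal(2)        # arbitrary translation part
f = lambda x: A @ x + b           # affine map f(x) = Ax + b

p = rng.standard_normal(2)
q = rng.standard_normal(2)

# The midpoint of a segment maps to the midpoint of the image segment:
mid = (p + q) / 2
assert np.allclose(f(mid), (f(p) + f(q)) / 2)
```

The check works for any A and b because f((p + q)/2) = A(p + q)/2 + b = (Ap + b + Aq + b)/2.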

    5. How does instance transformation help in generating a scene? Explain. (06 Marks)

    (Dec/Jan 14)

Start with a unique object (a symbol). Each appearance of the object in the model is an instance of that symbol. To place an instance in the scene we must scale it to the desired size, orient it, and position it, so the instance transformation is the composite M = T R S (scale first, then rotate, then translate).
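As a sketch (the helper functions here are illustrative assumptions, not a fixed API), an instance transformation built as translate · rotate · scale applied to a symbol vertex:

```python
import numpy as np

def translate(t):
    M = np.eye(4)
    M[:3, 3] = t
    return M

def scale(s):
    return np.diag([s, s, s, 1.0])

def rotate_z(deg):
    a = np.radians(deg)
    c, s = np.cos(a), np.sin(a)
    M = np.eye(4)
    M[0, 0], M[0, 1], M[1, 0], M[1, 1] = c, -s, s, c
    return M

# Instance transformation: scale the symbol, orient it, then position it.
M = translate([5, 0, 0]) @ rotate_z(90) @ scale(2)

v = np.array([1.0, 0.0, 0.0, 1.0])    # a vertex of the symbol
# scale -> (2,0,0); rotate 90 about z -> (0,2,0); translate -> (5,2,0)
assert np.allclose(M @ v, [5.0, 2.0, 0.0, 1.0])
```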

    UNIT - 6

    VIEWING

    1. What is canonical view volume? Explain the mapping of a given view volume to the

    canonical form.(10 marks) (Dec/Jan 12)

    The canonical view volumes are a 2x2x1 prism for parallel projections and the truncated right regular

pyramid for perspective projections. It is much easier to clip to these volumes than to an arbitrary view

    volume, so view volumes are often transformed to one of these before clipping. It is also possible to

    transform a perspective view volume to the canonical parallel view volume, so the same algorithm can

    be used for clipping perspective scenes as for parallel scenes.
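For the parallel case, the mapping to the canonical volume is a translation followed by a scale, as in glOrtho. The sketch below is illustrative (it targets the [-1, 1]³ cube used by OpenGL, and the view-volume bounds are arbitrary examples); it checks that a corner of the given view volume lands on a corner of the canonical volume:

```python
import numpy as np

def ortho(l, r, b, t, n, f):
    """Map the eye-space box [l,r] x [b,t] x [-f,-n] to [-1,1]^3,
    as glOrtho does: translate the centre to the origin, then scale."""
    return np.array([
        [2/(r-l), 0,        0,        -(r+l)/(r-l)],
        [0,       2/(t-b),  0,        -(t+b)/(t-b)],
        [0,       0,       -2/(f-n),  -(f+n)/(f-n)],
        [0,       0,        0,         1.0],
    ])

M = ortho(-4, 4, -2, 2, 1, 10)

# A corner of the view volume lands on a corner of the canonical cube:
corner = np.array([4.0, 2.0, -10.0, 1.0])
assert np.allclose(M @ corner, [1.0, 1.0, 1.0, 1.0])
```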


Unfortunately this is no longer a linear mapping (because of the division by the z term). In order to describe this again by a matrix we consider homogeneous coordinates, i.e. we extend the coordinates of points in ℝ³ by appending a fourth coordinate 1. And on the other hand we associate to each vector in ℝ⁴ (with non-vanishing last coordinate) that point in ℝ³ which we get by dividing the first 3 coordinates by the fourth. More generally we identify two vectors x, y (≠ 0) in ℝ⁴ if they are collinear, i.e. there exists a number λ ≠ 0 such that x = λy. Up to this identification we can write this projection as a 4 × 4 matrix.

Similar formulas hold for arbitrary projection planes (and centres of projection). The projection from the origin onto the plane z = d is given by

(x, y, z, 1) ↦ (x, y, z, z/d)

or, equivalently, by the matrix

| 1  0  0    0 |
| 0  1  0    0 |
| 0  0  1    0 |
| 0  0  1/d  0 |

If we translate the centre of projection to (0, 0, −d) and the plane to z = 0, the projection becomes

| 1  0  0    0 |
| 0  1  0    0 |
| 0  0  0    0 |
| 0  0  1/d  1 |


    3. Explain the following (10 marks) (Jun/Jul 12)

    i. gluLookAt

    void gluLookAt( GLdouble eyeX,

    GLdouble eyeY,

    GLdouble eyeZ,

    GLdouble centerX,

    GLdouble centerY,

    GLdouble centerZ,

    GLdouble upX,

    GLdouble upY,

    GLdouble upZ);

    Parameters

    eyeX, eyeY, eyeZ

    Specifies the position of the eye point.

    centerX, centerY, centerZ

    Specifies the position of the reference point.

    upX, upY, upZ

    Specifies the direction of the up vector.

    Description


    gluLookAt creates a viewing matrix derived from an eye point, a reference point indicating the center

    of the scene, and an UP vector.

    The matrix maps the reference point to the negative z axis and the eye point to the origin. When a

    typical projection matrix is used, the center of the scene therefore maps to the center of the viewport.

    Similarly, the direction described by the UP vector projected onto the viewing plane is mapped to the

    positive y axis so that it points upward in the viewport. The UP vector must not be parallel to the line

    of sight from the eye point to the reference point.
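The matrix gluLookAt builds can be reconstructed from this description. The sketch below is an illustrative reimplementation (not the GLU source); it checks the two stated properties: the eye maps to the origin, and the reference point lands on the negative z axis.

```python
import numpy as np

def look_at(eye, center, up):
    """Sketch of a gluLookAt-style matrix: forward f maps to -z,
    the side vector s to +x, the recomputed up u to +y."""
    eye, center, up = map(np.asarray, (eye, center, up))
    f = center - eye
    f = f / np.linalg.norm(f)
    s = np.cross(f, up)
    s = s / np.linalg.norm(s)
    u = np.cross(s, f)
    M = np.eye(4)
    M[0, :3], M[1, :3], M[2, :3] = s, u, -f
    M[:3, 3] = -M[:3, :3] @ eye       # fold in the translation to the eye
    return M

M = look_at(eye=[0, 0, 5], center=[0, 0, 0], up=[0, 1, 0])

# The eye point maps to the origin...
assert np.allclose(M @ np.array([0, 0, 5, 1.0]), [0, 0, 0, 1])
# ...and the reference point lies on the negative z axis:
assert np.allclose(M @ np.array([0, 0, 0, 1.0]), [0, 0, -5, 1])
```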

    ii. gluPerspective

    void gluPerspective( GLdouble fovy,

    GLdouble aspect,

    GLdouble zNear,

    GLdouble zFar);

    Parameters

    fovy

    Specifies the field of view angle, in degrees, in the y direction.

    aspect

    Specifies the aspect ratio that determines the field of view in the x direction. The aspect ratio is the

    ratio of x (width) to y (height).

    zNear

    Specifies the distance from the viewer to the near clipping plane (always positive).

    zFar

    Specifies the distance from the viewer to the far clipping plane (always positive).

    Description

    gluPerspective specifies a viewing frustum into the world coordinate system. In general, the aspect

    ratio in gluPerspective should match the aspect ratio of the associated viewport. For example, aspect =

    2.0 means the viewer's angle of view is twice as wide in x as it is in y. If the viewport is twice as wide


    as it is tall, it displays the image without distortion.

The matrix generated by gluPerspective is multiplied by the current matrix, just as if glMultMatrix

    were called with the generated matrix. To load the perspective matrix onto the current matrix stack

    instead, precede the call to gluPerspective with a call to glLoadIdentity.

    Given f defined as follows:

f = cot(fovy / 2)

The generated matrix is

| f/aspect  0   0                              0                                 |
| 0         f   0                              0                                 |
| 0         0   (zFar + zNear)/(zNear − zFar)  (2 · zFar · zNear)/(zNear − zFar) |
| 0         0   −1                             0                                 |
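The same matrix can be assembled and sanity-checked in a few lines. In this illustrative sketch (the parameter values are arbitrary examples), points on the near and far planes map to z = −1 and z = +1 after the perspective divide:

```python
import numpy as np

def perspective(fovy_deg, aspect, z_near, z_far):
    f = 1.0 / np.tan(np.radians(fovy_deg) / 2)      # f = cot(fovy/2)
    M = np.zeros((4, 4))
    M[0, 0] = f / aspect
    M[1, 1] = f
    M[2, 2] = (z_far + z_near) / (z_near - z_far)
    M[2, 3] = (2 * z_far * z_near) / (z_near - z_far)
    M[3, 2] = -1.0
    return M

M = perspective(90.0, 1.0, 1.0, 100.0)

# A point on the near plane maps to z = -1 after the perspective divide:
p = M @ np.array([0.0, 0.0, -1.0, 1.0])
assert np.isclose(p[2] / p[3], -1.0)

# A point on the far plane maps to z = +1:
q = M @ np.array([0.0, 0.0, -100.0, 1.0])
assert np.isclose(q[2] / q[3], 1.0)
```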

    4. Write a note on hidden surface removal.(10 marks) (Jun/Jul 12)

    The elimination of parts of solid objects that are obscured by others is called hidden-surface removal.

(Hidden-line removal, which does the same job for objects represented as wireframe skeletons, is a bit trickier.)

    Methods can be categorized as:

    Object Space Methods

These methods examine objects, faces, edges etc. to determine which are visible. The complexity depends upon the number of faces, edges etc. in all the objects.

    Image Space Methods

    These methods examine each pixel in the image to determine which face of which object should be

    displayed at that pixel. The complexity depends upon the number of faces and the number of pixels to

    be considered.

    Z Buffer

    The easiest way to achieve hidden-surface removal is to use the depth buffer (sometimes called a z-

    buffer). A depth buffer works by associating a depth, or distance from the viewpoint, with each pixel

    on the window. Initially, the depth values for all pixels are set to the largest possible distance, and then

    the objects in the scene are drawn in any order.

    Graphical calculations in hardware or software convert each surface that's drawn to a set of pixels on

    the window where the surface will appear if it isn't obscured by something else. In addition, the


    distance from the eye is computed. With depth buffering enabled, before each pixel is drawn, a

    comparison is done with the depth value already stored at the pixel.

    If the new pixel is closer to the eye than what's there, the new pixel's colour and depth values replace

    those that are currently written into the pixel. If the new pixel's depth is greater than what's currently

    there, the new pixel would be obscured, and the colour and depth information for the incoming pixel is

    discarded. Since information is discarded rather than used for drawing, hidden-surface removal can

    increase your performance.
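The per-pixel depth comparison described above can be sketched as follows. This is an illustrative toy rasteriser, not OpenGL; integer surface ids stand in for colours:

```python
import numpy as np

W, H = 4, 4
color = np.zeros((H, W), dtype=int)     # frame buffer ("colours" are ids)
depth = np.full((H, W), np.inf)         # depth buffer, init to farthest

def draw(pixels, surface_id):
    """Per-pixel depth test: keep the fragment only if it is closer."""
    for (x, y, z) in pixels:
        if z < depth[y, x]:             # closer than what's stored?
            depth[y, x] = z
            color[y, x] = surface_id    # replace colour and depth
        # else: the incoming fragment is obscured and discarded

# Draw in arbitrary order: a far surface, then a nearer one on top of half of it.
draw([(x, y, 10.0) for x in range(4) for y in range(4)], surface_id=1)
draw([(x, y, 5.0) for x in range(2) for y in range(4)], surface_id=2)

assert (color[:, :2] == 2).all()    # near surface wins where they overlap
assert (color[:, 2:] == 1).all()    # far surface visible elsewhere
```

Because the test is per pixel, the surfaces can be submitted in any order and the nearest one still wins.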

    5. Derive the perspective projection matrix.(8 marks) (Dec/Jan 13)

Perspective projection depends on the relative position of the eye and the viewplane. In the usual arrangement the eye lies on the z-axis and the viewplane is the xy plane. To determine the projection of a 3D point, connect the point and the eye by a straight line and find where the line intersects the viewplane; this intersection point is the projected point. By similar triangles, a point (x, y, z) with the eye at (0, 0, d) projects to (x·d/(d − z), y·d/(d − z), 0).

    6. Explain glFrustrum API.(8 marks) (Dec/Jan 13)

    void glFrustum (GLdouble left ,

    GLdouble right ,

    GLdouble bottom ,

    GLdouble top ,

    GLdouble nearVal ,

GLdouble farVal);

    Parameters

    left , right


    Specify the coordinates for the left and right vertical clipping planes.

    bottom , top

    Specify the coordinates for the bottom and top horizontal clipping planes.

    nearVal , farVal

    Specify the distances to the near and far depth clipping planes. Both distances must be positive.

    Description

glFrustum describes a perspective matrix that produces a perspective projection. The current matrix (see glMatrixMode) is multiplied by this matrix and the result replaces the current matrix, as if glMultMatrix were called with the following matrix as its argument:

| 2·nearVal/(right − left)   0                          A    0 |
| 0                          2·nearVal/(top − bottom)   B    0 |
| 0                          0                          C    D |
| 0                          0                         −1    0 |

A = (right + left)/(right − left)

B = (top + bottom)/(top − bottom)

C = −(farVal + nearVal)/(farVal − nearVal)

D = −(2 · farVal · nearVal)/(farVal − nearVal)

Typically, the matrix mode is GL_PROJECTION, and (left, bottom, −nearVal) and (right, top, −nearVal) specify the points on the near clipping plane that are mapped to the lower left and upper right corners of the window, assuming that the eye is located at (0, 0, 0). −farVal specifies the location of the far clipping plane. Both nearVal and farVal must be positive. Use glPushMatrix and glPopMatrix to save and restore the current matrix stack.
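The matrix above can be assembled directly. The following illustrative sketch (the frustum parameters are arbitrary examples) checks that the near-plane corner (right, top, −nearVal) maps to the upper-right corner of the window after the perspective divide:

```python
import numpy as np

def frustum(l, r, b, t, n, f):
    A = (r + l) / (r - l)
    B = (t + b) / (t - b)
    C = -(f + n) / (f - n)
    D = -(2 * f * n) / (f - n)
    return np.array([
        [2*n/(r-l), 0,          A,   0],
        [0,         2*n/(t-b),  B,   0],
        [0,         0,          C,   D],
        [0,         0,         -1,   0],
    ])

M = frustum(-1, 1, -1, 1, 1.0, 100.0)

# The near-plane corner (right, top, -nearVal) maps to the upper-right
# corner (1, 1) of the window after the perspective divide:
p = M @ np.array([1.0, 1.0, -1.0, 1.0])
assert np.allclose(p[:2] / p[3], [1.0, 1.0])
```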

    7. Bring out the difference between object space and image space algorithm. (4 marks)

    (Dec/Jan 13)

In 3D computer animation, images have to be stored in the frame buffer, converting three-dimensional data into two-dimensional arrays. This conversion takes place after many calculations such as hidden surface removal, shadow generation and Z-buffering. These calculations can be done in image space or object


    Space. Algorithms used in image space for hidden surface removal are much more efficient than object

    space algorithms. But object space algorithms for hidden surface removal are much more functional than

    image space algorithms for the same. The combination of these two algorithms gives the best output.

    Image Space

    The representation of graphics in the form of Raster or rectangular pixels has now become very popular.

    Raster display is very flexible as they keep on refreshing the screen by taking the values stored in frame

    buffer. Image space algorithms are simple and efficient as their data structure is very similar to that of

    frame buffer. The most commonly used image space algorithm is Z buffer algorithm that is used to

    define the values of z coordinate of the object.

    Object Space

Object space algorithms have the advantage of retaining the relevant data, and because of this ability the

    interaction of algorithm with the object becomes easier. The calculation done for the color is done only

    once. Object space algorithms also allow shadow generation to increase the depth of the 3 dimensional

    objects on the screen. The incorporation of these algorithms is done in software and it is difficult to

    implement them in hardware.

    8. What are the two types of simple projection? List and explain the details. (10 marks)

    (Jun/Jul 13)

    Parallel projection

In parallel projection, the lines of sight from the object to the projection plane are parallel to each other. Within parallel projection there is an ancillary category known as "pictorials". Pictorials show an image of an object as viewed from a skew direction in order to reveal all three directions (axes) of space in one picture. Because pictorial projections innately contain this distortion, some liberties may be taken in instrument drawings of pictorials for economy of effort and best effect.

    Orthographic projection

    The Orthographic projection is derived from the principles of descriptive geometry and is a two-

    dimensional representation of a three-dimensional object. It is a parallel projection (the lines of

    projection are parallel both in reality and in the projection plane). It is the projection type of choice for

    working drawings.

    Pictorials

    Within parallel projection there is a subcategory known as Pictorials. Pictorials show an image of an

    object as viewed from a skew direction in order to reveal all three directions (axes) of space in one


    picture. Parallel projection pictorial instrument drawings are often used to approximate graphical

    perspective projections, but there is attendant distortion in the approximation. Because pictorial

    projections inherently have this distortion, in the instrument drawing of pictorials, great liberties may

    then be taken for economy of effort and best effect. Parallel projection pictorials rely on the technique

    of axonometric projection ("to measure along axes").

    Axonometric projection

    Axonometric projection is a type of parallel projection used to create a pictorial drawing of an object,

    where the object is rotated along one or more of its axes relative to the plane of projection.

    There are three main types of axonometric projection: isometric, dimetric, and trimetric projection.

    The three axonometric views.

    Isometric projection

In isometric pictorials (for protocols see isometric projection), the direction of viewing is such that the three axes of space appear equally foreshortened, and there is a common angle of 120° between them.

    As the distortion caused by foreshortening is uniform the proportionality of all sides and lengths are

    preserved, and the axes share a common scale. This enables measurements to be read or taken directly

    from the drawing.

    Dimetric projection

    In dimetric pictorials (for protocols see dimetric projection), the direction of viewing is such that two

of the three axes of space appear equally foreshortened, of which the attendant scale and angles of presentation are determined according to the angle of viewing; the scale of the third direction (vertical) is determined separately. Approximations are common in dimetric drawings.

    Trimetric projection

    In trimetric pictorials (for protocols see trimetric projection), the direction of viewing is such that all

    of the three axes of space appear unequally foreshortened. The scale along each of the three axes and

    the angles among them are determined separately as dictated by the angle of viewing. Approximations

    in Trimetric drawings are common.

    Oblique projection

    In oblique projections the parallel projection rays are not perpendicular to the viewing plane as with

    orthographic projection, but strike the projection plane at an angle other than ninety degrees. In both

    orthographic and oblique projection, parallel lines in space appear parallel on the projected image.

    Because of its simplicity, oblique projection is used exclusively for pictorial purposes rather than for

    formal, working drawings. In an oblique pictorial drawing, the displayed angles among the axes as


    well as the foreshortening factors (scale) are arbitrary. The distortion created thereby is usually

    attenuated by aligning one plane of the imaged object to be parallel with the plane of projection

thereby creating a true shape, full-size image of the chosen plane. Special types of oblique projections include the cavalier and cabinet projections.

9. Derive matrix representation for perspective projection, with diagram if necessary. (10

    marks) (Jun/Jul 13) (Dec/Jan 14)

    In perspective projection, a 3D point in a truncated pyramid frustum (eye coordinates) is mapped to a

    cube (NDC); the range of x-coordinate from [l, r] to [-1, 1], the y-coordinate from [b, t] to [-1, 1] and the

    z-coordinate from [n, f] to [-1, 1].

    Note that the eye coordinates are defined in the right-handed coordinate system, but NDC uses the left-

    handed coordinate system. That is, the camera at the origin is looking along -Z axis in eye space, but it is

    looking along +Z axis in NDC. Since glFrustum() accepts only positive values

    of near and far distances, we need to negate them during the construction of GL_PROJECTION matrix.


In OpenGL, a 3D point in eye space is projected onto the near plane (projection plane). The following diagrams show how a point (xe, ye, ze) in eye space is projected to (xp, yp, zp) on the near plane.

    Top View of Frustum

    Side View of Frustum

From the top view of the frustum, the x-coordinate of eye space, xe, is mapped to xp, which is calculated by using the ratio of similar triangles:

xp / xe = n / (−ze)   ⇒   xp = n · xe / (−ze)


From the side view of the frustum, yp is calculated in a similar way:

yp = n · ye / (−ze)

Note that both xp and yp depend on ze; they are inversely proportional to −ze. In other words, they are both divided by −ze. This is the first clue to constructing the GL_PROJECTION matrix. After the eye coordinates are transformed by multiplying by the GL_PROJECTION matrix, the clip coordinates are still homogeneous coordinates. They finally become the normalized device coordinates (NDC) by dividing by the w-component of the clip coordinates. (See more details on OpenGL Transformation.)

Therefore, we can set the w-component of the clip coordinates to −ze, and the 4th row of the GL_PROJECTION matrix becomes (0, 0, −1, 0).
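The divide-by-w trick can be checked numerically. In this illustrative sketch (the near-plane distance n and the sample point are arbitrary values), placing −ze in the w-component reproduces the similar-triangle formulas for xp and yp:

```python
import numpy as np

n = 2.0                          # near-plane distance (example value)
xe, ye, ze = 3.0, 1.5, -6.0      # a point in eye space (camera looks down -z)

# Similar triangles: the projected point on the near plane.
xp = n * xe / -ze
yp = n * ye / -ze

# Putting (0, 0, -1, 0) in the 4th row makes w_clip = -ze, so the
# divide by w performs exactly this division (x, y rows scaled by n):
clip = np.array([n * xe, n * ye, 0.0, -ze])
assert np.isclose(clip[0] / clip[3], xp)
assert np.isclose(clip[1] / clip[3], yp)
```

The z row is left as a placeholder here; the full matrix fills it so that ze in [−n, −f] maps to [−1, 1] after the same divide.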

    10. List the differences between perspective projection and parallel projection. (04 Marks)

    (Dec/Jan 14)

    Drawing is a visual art that has been used by man for self-expression throughout history. It uses

    pencils, pens, colored pencils, charcoal, pastels, markers, and ink brushes to mark different types of

    medium such as canvas, wood, plastic, and paper.

    It involves the portrayal of objects on a flat surface such as the case in drawing on a piece of paper or a

    canvas and involves several methods and materials. It is the most common and easiest way of


    recreating objects and scenes on a two-dimen