CHAPTER 4

IMPLEMENTATION OF PARSER AND RENDERER FOR STL, VRML VISUALIZATION

4.1. Parser

Models designed in CAD software can be saved in either native or standard file formats. A native format is the default format of a specific CAD application; its limitation is that it cannot be transferred to other applications. Standard formats such as STEP, XML, IGES, VRML and STL, on the other hand, are supported by a variety of applications. Each of these formats varies in structure and complexity, and their major benefit is readability: the model description is encoded in human-readable form. For this research the VRML and STL formats were considered. These files consist of a list of triangles in 3D space; each triangle definition contains the 3D coordinates of its three vertices along with other details about the CAD model. These formats are well suited to Virtual Reality, since the representation of objects in VR is itself tessellated. The visualization application reads an STL or VRML file consisting of the list of triangles that represents the object to be built in rapid prototyping.

The models are designed in modeling software and, after design, exported to ASCII formats, i.e. VRML and STL. 3D models generated by state-of-the-art modeling software lead to very large file sizes, because the modeler stores not only the geometric information but also the topological information of the object. Viewing the components therefore requires a powerful computer system. Since the file contains data sets that are not required for visualization, manipulating such files solely for stereo visualization results in slow operation on a general-purpose computer. This visualization solution parses the file and considers only the data sets required for visualization in order to obtain the desired displays. As an example, the VRML file Cylinder.wrl generated from I-DEAS is 456 KB in size and contains nearly 150 KB of data that are not needed for visualization.


The modeling software used for drawing the components exports the data into ASCII files. The software suite developed here imports these data sets and processes them for visualization. The key task of this visualization software is to parse the ASCII files generated by the modeling software and to render the corresponding image on screen. Even though these files contain extensive details about the model, the visualization software imports only the data required for visualization.

Translating an object from the STL or VRML format generated by CAD systems to the VR space is facilitated by the wealth of information held in these files. As noted above, they consist of a list of triangles in 3D space that represent the object to be built in rapid prototyping. The parser module receives VRML and STL files as input and creates the display structure after parsing.
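The thesis does not list the exact data structures built by the parser; the following is a minimal sketch of the kind of display structure it could produce. The Vec3, Triangle and Model names and fields are illustrative assumptions, not the author's actual code.

    // Hypothetical display structure filled by the parser (names are assumptions).
    #include <vector>

    struct Vec3 {
        float x, y, z;
    };

    struct Triangle {
        Vec3 normal;      // unit normal of the facet
        Vec3 vertex[3];   // the three corners in counterclockwise order
    };

    struct Model {
        std::vector<Triangle> triangles;              // one entry per facet/face read from the file
        Vec3 emissiveColor{0.7f, 0.7f, 0.0f};         // optional material data, if present in the file
    };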

4.1.1. STL File Analysis

STL uses a facet-based representation, that is, it consists of a list of facet data. The facets define the surface of a three-dimensional object. Each facet is identified by three vertices (the corners of the triangle) and a unit normal; three coordinates are used to specify the normal and each vertex, so each facet contains a total of 12 numbers. Each facet forms part of the boundary between the interior and the exterior of the object. The orientation of a facet is specified redundantly in two ways, which must be consistent: first, the direction of the normal is outward; second, the vertices are listed in counterclockwise order when looking at the object from the outside. When the individual facets in a model are small, the 3D model has a higher resolution. The number of facets can be reduced and their size increased by decimating the model, but this may decrease its accuracy or resolution as well.


An STL file follows the vertex-to-vertex rule: each triangle shares two vertices with each of its adjacent triangles; in other words, a vertex of one triangle cannot lie on the edge of another. The object represented must be located in the all-positive octant, so all vertex coordinates must be positive. The STL file does not contain any scale information; the coordinates are in arbitrary units. An STL file is saved with the extension .stl (case-insensitive). The slicing program may require this extension or may allow a different extension to be specified. The syntax of an ASCII STL file is shown in figure 4.1.

Figure 4.1: Syntax of an ASCII STL File

Bold face indicates a keyword; keywords must appear in lower case. Words in italics are variables to be replaced with user-specified values. The file begins with a solid record and ends with an endsolid record. Each triangle is enclosed within facet and endfacet records. The normal vector is part of the facet and is specified by the normal keyword. The vertices of the triangle are delimited by outer loop and endloop, and each vertex is specified by the keyword vertex. The numerical data in the facet normal and vertex lines are single-precision floats. A facet normal coordinate may have a leading minus sign; a vertex coordinate may not. The notation {...} means that the contents of the braces can be repeated one or more times.


A sample STL file is given below:

solid FLIRIS

facet normal 0.955654E-01 -0.966960E+00 0.236339E+00

outer loop

vertex 0.000000E+00 0.800000E+01 0.000000E+00

vertex -0.675880E+01 0.442610E+01 -0.118893E+02

vertex -0.335710E+01 0.442610E+01 -0.132648E+02

endloop

endfacet

facet normal -0.350878E+00 0.349503E+00 -0.868753E+00

outer loop

vertex -0.675880E+01 0.442610E+01 -0.118893E+02

vertex -0.634850E+01 0.666400E+01 -0.111547E+02

vertex -0.315330E+01 0.666400E+01 -0.124452E+02

endloop

endfacet

facet normal -0.351159E+00 0.350063E+00 -0.868414E+00

outer loop

vertex -0.315330E+01 0.666400E+01 -0.124451E+02

………………………………………………………….

………………………………………………………….

………………………………………………………….

facet normal 0.430198E+00 0.893345E+00 0.129865E+00

outer loop

vertex 0.648000E+01 0.891924E+02 0.000000E+00

vertex 0.542840E+01 0.891924E+02 0.348360E+01

vertex 0.916020E+01 0.870452E+02 0.589210E+01

endloop

endfacet

facet normal 0.430518E+00 0.893218E+00 0.129675E+00

outer loop

vertex 0.916020E+01 0.870451E+02 0.589200E+01


vertex 0.109347E+02 0.870452E+02 0.000000E+00

vertex 0.648000E+01 0.891923E+02 0.000000E+00

endloop

endfacet

facet normal 0.153467E+00 0.987066E+00 0.463569E-01

outer loop

vertex 0.542840E+01 0.891923E+02 0.348350E+01

vertex 0.648000E+01 0.891924E+02 0.000000E+00

vertex 0.000000E+00 0.901999E+02 0.000000E+00

endloop

endfacet

endsolid FLIRIS

4.1.2 VRML 1.0 File Analysis

The first version of VRML was developed between 1994 and 1995. The need for a three-dimensional equivalent of HTML was established at the First International Conference on the World-Wide Web in 1994, where the name VRML (originally Virtual Reality Markup Language) was first coined. The first version of VRML would only describe static worlds. A number of different formats were proposed as the basis of VRML 1.0, but after some debate the members decided that Silicon Graphics' Open Inventor ASCII format best met the agreed requirements. The first draft of the VRML 1.0 specification was presented at the Second WWW Conference in October 1994.

VRML is a scene description language that encloses all the visualization details of the model. It is a 3D interchange format that defines the semantics of object descriptions such as hierarchical transformations, geometry, viewpoints, light sources and material properties. The features of VRML are simplicity, high optimization, support as a file interchange format, composability and scalability. A VRML file is a text file that begins with the header line #VRML V1.0, which specifies that the version of the file is VRML 1.0. This version has some limitations. Following the header is a hierarchical description of the model. It contains the vertices and edges of the 3D polygons, specified along with color, material, light, texture, shininess and so on. The data sets are organized point by point, followed by references to each polygon's vertices. To arrange the vertices into triangles, the coordinates have to be taken according to the point references, which requires rearranging the vertices for display.

The VRML 1.0 specification allowed the following:

• Virtual 3D worlds created from primitive shapes, such as cubes, cones, spheres and text, or with custom-defined shapes.
• Material properties, including texture maps, to be applied to these shapes.
• Initial viewpoints and the ability for the user to examine or move freely through a scene.
• Objects that are clickable hyperlinks to other VRML worlds or documents.
• Multiple light sources.
• Inline objects, where the geometry is defined in a separate VRML file.
• Objects with different levels of detail.
• Object definitions that can be named and reused.

A sample VRML 1.0 file is given below:

#VRML V1.0 ascii

Separator {

DEF STLShape ShapeHints {

vertexOrdering COUNTERCLOCKWISE

faceType CONVEX

shapeType SOLID

creaseAngle 0.0

}

DEF STLModel Separator {

DEF STLColor Material {

emissiveColor 0.700000 0.700000 0.000000

}

DEF STLVertices Coordinate3 {

point [


0.553640 1.107283 0.000000,

0.536024 1.107283 0.000000,

0.536024 1.107283 0.196850,

0.553640 1.107283 0.196850,

0.571259 1.107283 0.196850,

0.571259 1.107283 0.000000,

0.601910 1.043194 0.196850,

0.592020 0.972843 0.196850,

0.592020 0.972843 0.000000,

0.601910 1.043194 0.000000,

0.635765 0.966507 0.196850,

0.678606 0.955620 0.196850,

0.678606 0.955620 0.000000,

0.635765 0.966507 0.000000,

0.749235 1.071882 0.196850,

0.781786 1.058397 0.000000,

0.781786 1.058397 0.196850,

0.785578 0.987457 0.196850,

0.749519 0.926245 0.196850,

----------------- --------------

----------------- --------------

----------------- -------------- ] }

DEF STLTriangles IndexedFaceSet {

coordIndex [

0, 1, 2, -1,

3, 4, 5, -1,

0, 2, 3, -1,

3, 5, 0, -1,

6, 7, 8, -1,

8, 9, 6, -1,

9, 5, 4, -1,

4, 6, 9, -1,

10, 11, 12, -1,


12, 13, 10, -1,

13, 8, 7, -1,

7, 10, 13, -1,

14, 15, 16, -1,

16, 17, 14, -1,

16, 12, 11, -1,

11, 17, 16, -1,

18, 15, 14, -1,

20, 21, 22, -1,

22, 23, 20, -1,

23, 18, 19, -1,

19, 20, 23, -1,

------------------

------------------ } } }

4.1.3 VRML 2.0 File Analysis

The limitations of the static VRML 1.0 worlds soon became apparent and there was demand for new features, especially for more interactivity and behavior. By 1996 the final VRML 2.0 specification was ready to be presented at the SIGGRAPH 96 conference. The new version, VRML 2.0, addresses the limitations of version 1.0 and was created to extend its capabilities: VRML 1.0 specified only static objects and scenes, although it was very successful as a file format for 3D objects and worlds. The overall goal of VRML 2.0 was fairly modest, namely to allow objects in the world to move and to allow the user to interact with them, enabling the creation of more interesting user experiences than those possible with VRML 1.0.

The new features introduced in VRML 2.0 include:

• Enhanced Static Worlds – Sound, movie textures, fog and backgrounds can now be added to a VRML world. There are new ways to define complex geometries such as terrains and extruded shapes. The VRML 2.0 scene graph structure has also been simplified.
• Interaction and Animation – VRML 1.0 worlds were static; VRML 2.0 allows objects within the scene to move and respond to both user-initiated and time-based events. This is achieved through sensors that detect or generate these events, interpolators that describe what should happen when an event occurs, and routes which wire everything together. Complex animations and behaviors can be programmed into a VRML 2.0 world. Collision detection and navigation style information also help to improve the user interaction experience.
• Prototyping New VRML Objects – New VRML properties or objects can be defined and reused using the prototyping mechanism.

VRML is a true modeling language that defines rich geometric modeling primitives and mechanisms. It is a language for describing multi-participant interactive simulations; all aspects of virtual world display, interaction and internetworking can be specified using VRML. At its core, VRML is simply a 3D interchange format. It defines most of the commonly used semantics found in today's applications, such as hierarchical transformations, light sources, viewpoints, geometry, animation and material properties. VRML is a scene description language, not a general-purpose programming language. It is a persistent file format designed to store the state of a virtual world and to be read and written easily by a wide variety of tools.

The language specification of VRML consists of:
• Coordinate system – VRML uses a Cartesian, right-handed, three-dimensional coordinate system.
• Fields – A field type defines the format of the values it holds.
• Nodes – VRML defines the model in terms of shape, property and grouping nodes. Shape nodes define the geometry in the scene, property nodes affect the way shapes are drawn, and grouping nodes gather other nodes together.
• Instancing – A node may be a child of more than one group; using the same instance of a node multiple times is called instancing.
• Extensibility – The self-describing nodes support extensions to VRML.


This visualization solution parses the VRML file and considers only the data sets required for visualization in order to obtain the desired displays. As an example, the VRML file Cylinder.wrl generated from I-DEAS is 456 KB in size and contains nearly 150 KB of data that are not needed for visualization. The file Cylinder.wrl is given below; the data shown in italics indicate details that are not required for visualization. The parser takes care of fetching only the required data set for efficient execution.

Cylinder.wrl - VRML 2.0 File

#VRML V2.0 utf8

WorldInfo{ info Sep-12-2000 09:36:52 title I-DEAS: cylinder.wrl } DEF

C0_H91_VIS_SW Switch{ whichChoice 0 choice[ DEF C0_H91_ANC_SW

Switch{ whichChoice 0 choice[ DEF C0_H91_XF Transform{ children[ DEF

P3_3_z91_XF Transform{ children[ DEF P3_3_z91_TE_VIS_SW Switch{

whichChoice 1 choice[ LOD{ level[ DEF P3_3_z91_L0_GRP Group{ children[

Shape{ geometry IndexedFaceSet{ coord DEF P3_3_z91_L0_CRD Coordinate{

point[0.19039 -0.055365 -0.032746,0.19039 -0.055119 -0.03175, 0.18976 -

0.033599,0.18976 -0.056504 -0.033049,……….. ] } normal DEF

P3_3_z91_L0_NRM Normal{ vector[0.70711 -0.62552 0.32974, 0.70711 -0.70711

1.4865e-014,0.70711 -0.70711 2.586e-014,0.70711 -0.35355 0.61237,0.70711

0.027196 0.70658, ……………………] } } } DEF P3_3_z91_A24_VIS_SW

Switch{ whichChoice 0 choice[ Group{ children[ Group{ children[ Shape{

appearance DEF P3_3_z91_L0_0_APP Appearance{ material DEF

P3_3_z91_L0_0_MAT Material{ ambientIntensity 0.15 shininess 0.75 diffuseColor

0 1 0 emissiveColor 0 0.15 0 specularColor 0.8 0.8 0.8 } } geometry

IndexedFaceSet{ coord USE P3_3_z91_L0_CRD normal USE

P3_3_z91_L0_NRM coordIndex[3,1,22,-1,23,3,22,-1,20,21,23,-1,22,20,23,-

1,18,19,21,-1………….] normalIndex[14,15,13,-1,13,14,13,-1,12,…………. ] } } ]

} Group{ children[ Shape{ appearance USE P3_3_z91_L0_0_APP geometry

IndexedFaceSet{ coord USE P3_3_z91_L0_CRD normal USE

P3_3_z91_L0_NRM coordIndex[27,25,46,-1,47,27,46,-1,44,45,47,-

…………… …………… …………….. ……………..

…………… ……………… …………….. ………………


…………… ……………… ……………. } ] } ] }

DEF P3_3_z1_XF Transform{ rotation -1 0 0 1.570796 translation 0 -0.07476867 -

0.05858135 children[ DEF P3_3_z1_AN_VIS_SW Switch{ whichChoice 0

………………………. } } ] }

DEF UNIT_LNG_SW Switch{ whichChoice 0 choice[ Shape{ appearance

Appearance{ material DEF UNIT_TXT_MT Material{ diffuseColor 0.75 0.75 0.75 }

} geometry Text{ fontStyle DEF HUD_FS_EN FontStyle{ size 0.00125 family SANS

style BOLD language en } string Units: MM... } } Shape{ appearance Appearance{

material USE UNIT_TXT_MT } geometry Text{ fontStyle USE HUD_FS_EN string

Units: mm (milli-newton)... } } Shape{ appearance Appearance{

material………………………… } } ] } ] } ] } ] } DEF LANG_VIS_SW Switch{

choice[ Transform{ translation 0.0128 0.0156 -0.125 rotation 1 0 0 1.5708

children……………… } } ] }DEF LANG_LNG_SW Switch{ whichChoice 0 choice[

Shape{ appearance Appearance{ material DEF LANG_TXT_MT

Material{……………. } } ] } ] } ] } ] } DEF WIRE_LNG_SW Switch{ whichChoice

0 choice[ Shape{ appearance Appearance{ material DEF WIRE_TXT_MT

Material{ diffuseColor 0.75 0.75 0.75 } } geometry Text{

fontStyle USE HUD_FS_EN string Wireframe... } } Shape{ appearance

Appearance{ material USE WIRE_TXT_MT } geometry………………….. geometry

Text{ fontStyle USE HUD_FS_ES string Es3 } } ] } ] } ] } ] }DEF MANU_LNG_SW

Switch{ whichChoice 0 choice[ Shape{ appearance Appearance{ material

DEFMANU_TXT_MT Material{ diffuseColor 0.75 0.75 0.75 } } geometry Text{

fontStyle USE HUD_FS_EN string Manufacturing... } } Shape{ appearance

Appearance

{ material USE MANU_TXT_MT } geometry Text{ fontStyle …………….. fontStyle

USE HUD_FS_ES string Es3 } } Shape{ appearance Appearance{ material USE

MANU_TXT_MT } geometry Text{

fontStyle USE HUD_FS_ES string Es4 } } ] } ] } ] } ] } DEF DIME_LNG_SW

Switch{ whichChoice 0 choice[ Shape{ appearance Appearance{ material

DEF DIME_TXT_MT Material{ diffuseColor 0.75 0.75 0.75 } } geometry Text{

fontStyle USE HUD_FS_EN string Dimensions... } } Shape{ appearance


Appearance{ material USE DIME_TXT_MT } geometry Text{ fontStyle USE

HUD_FS_EN string Ref dims }…………………… USE HUD_FS_ES string Es4 } } ]

} ] } ] } ] }

…………… …………… …………….. ……………..

…………… ……………… …………….. ………………

ROUTE HUD_PROX.orientation_changed TO HUD_XF.set_rotation ROUTE

HUD_PROX.position_changed TO HUD_XF.set_translation } ] } DEF HUD_SC

Script { eventIn SFTime i_tAnnoBtnTouch eventIn SFTime i_tDimeBtnTouch

eventIn SFTime i_tDispBtnTouch eventIn SFTime i_tLangBtnTouch eventIn SFTime

i_tManuBtnTouch eventIn SFTime i_tRefrBtnTouch eventIn SFTime

i_tSpfxBtnTouch eventIn SFTime i_tWireBtnTouch eventIn SFTime

i_tAnnoTxtTouch ……………………………………..

……………………………………………..ROUTE INFO_BTN_TS.touchTime TO

HUD_SC.i_tInfoBtnTouch ROUTE INFO_TXT_TS.touchTime TO

HUD_SC.i_tInfoBtnTouch ROUTE HUD_SC.o_cInfoBtnColor TO

INFO_BTN_MT.set_diffuseColor ROUTE HUD_SC.o_iSmplTxtLang

TO INFO_LNG_SW.set_whichChoice ROUTE HUD_SC.o_iMenuVisSw TO

INFO_VIS_SW.set_whichChoice

4.1.4 Open and Invoke of File

The execution of this visualization software starts with reading the input file. It opens the file, checks the extension and stores the file contents in a buffer. This module presents a file browse dialog box asking the user to select a .stl or .wrl file. The dialog box returns the file name and its extension so that the appropriate file contents can be processed. As soon as the file is opened, its contents are stored in a temporary buffer. After parsing, the initial setting for displaying the model is done, which includes setting up the light, background, etc.

Pseudocode for opening a file:

START
    Check whether the open button has been clicked
    If yes
        extension = get the extension of the file
        If (extension == .wrl or .stl)
            Copy the contents into a temporary buffer
        Else
            Display error message
END

Check version – checks the file version and sets the corresponding flag.

Pseudocode for checking the version of the input file:

START
    Line = get the first line of the file
    If (compare(Line, solid))
        Set Flag = 1
    Else if (compare(Line, V1.0))
        Set Flag = 2
    Else if (compare(Line, V2.0))
        Set Flag = 3
END

Invoke file – according to the value of the flag, the respective parser is called to parse the file.

Pseudocode for calling the respective parser after the selection:

START
    If flag == 1
        Parse STL file
    Else if flag == 2
        Parse VRML 1.0 file
    Else if flag == 3
        Parse VRML 2.0 file
END
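A minimal sketch of the open/check-version/invoke logic described above is given below. The function name, the way the buffer is handled and the extension test are assumptions for illustration, not the original implementation.

    // Illustrative sketch: open the file, buffer it, and detect the format.
    // Returns 1 for STL, 2 for VRML 1.0, 3 for VRML 2.0; the caller would then
    // invoke the corresponding parser on 'buffer'.
    #include <fstream>
    #include <iostream>
    #include <sstream>
    #include <string>

    int OpenAndCheckVersion(const std::string& filename, std::string& buffer) {
        if (filename.size() < 4) return -1;
        std::string ext = filename.substr(filename.size() - 4);
        if (ext != ".stl" && ext != ".wrl") {            // only .stl or .wrl are accepted
            std::cerr << "Unsupported file type\n";      // error message as in the pseudocode
            return -1;
        }
        std::ifstream in(filename);
        if (!in) return -1;

        std::stringstream ss;
        ss << in.rdbuf();                                // copy the whole file into a temporary buffer
        buffer = ss.str();

        // The first line identifies the format: "solid", "#VRML V1.0" or "#VRML V2.0".
        std::string firstLine = buffer.substr(0, buffer.find('\n'));
        if (firstLine.find("solid") != std::string::npos) return 1;   // ASCII STL
        if (firstLine.find("V1.0")  != std::string::npos) return 2;   // VRML 1.0
        if (firstLine.find("V2.0")  != std::string::npos) return 3;   // VRML 2.0
        return 0;
    }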


4.1.5 STL File Parser

This module reads the triangulated information about the CAD model for rendering the object on the screen.

Pseudocode for the STL file parser:

START
    OffsetToString(facet normal)   // move the buffer pointer to the next facet
    Scan a word into the word buffer
    The normal vector's x, y, z coordinates are read from the buffer and stored in the array 'norm', which holds the normal vectors of all the triangles:
        ReadWord() and copy to norm[i][0]
        ReadWord() and copy to norm[i][1]
        ReadWord() and copy to norm[i][2]
    OffsetToString(vertex)
    Scan a word into the word buffer
    The x, y, z coordinates of the triangle's first vertex are read from the buffer and stored in the array 'v1':
        v1[i][0] = x;
        v1[i][1] = y;
        v1[i][2] = z;
    In the same way the x, y, z coordinates of all the vertices are read from the buffer and stored in v1, v2 and v3: v1 contains the coordinates of the first vertex of all the triangles, v2 those of the second vertex, and v3 those of the third vertex.
END
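A minimal sketch of an ASCII STL reader along the lines of this pseudocode is shown below; the use of stream extraction instead of the thesis's OffsetToString/ReadWord helpers, and the Facet container, are assumptions for illustration.

    // Illustrative ASCII STL reader: collects the normal and three vertices of each facet.
    #include <fstream>
    #include <string>
    #include <vector>

    struct Facet {
        float normal[3];
        float vertex[3][3];   // vertex[j][k] = k-th coordinate of the j-th corner
    };

    std::vector<Facet> ReadAsciiStl(const std::string& path) {
        std::vector<Facet> facets;
        std::ifstream in(path);
        std::string token;
        Facet f{};
        int corner = 0;
        while (in >> token) {
            if (token == "normal") {              // "facet normal nx ny nz"
                in >> f.normal[0] >> f.normal[1] >> f.normal[2];
                corner = 0;
            } else if (token == "vertex") {       // three vertex lines per facet
                in >> f.vertex[corner][0] >> f.vertex[corner][1] >> f.vertex[corner][2];
                ++corner;
            } else if (token == "endfacet") {     // facet complete, store it
                facets.push_back(f);
            }
        }
        return facets;
    }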


4.1.6 VRML 1.0 File Parser

This module parses the input VRML file, i.e., reads the details of the model for rendering the object on the screen.

Pseudocode for the VRML 1.0 file parser:

START
    OffsetToString(viewpoint)   // move the buffer pointer to the required position
    Allocate the memory for the data structures
    Scan the material properties and save them into the structure
    OffsetToString(point)
    Each vertex's x, y, z coordinates are read from the buffer and stored in the data structure:
        vertex[i][0] = x;
        vertex[i][1] = y;
        vertex[i][2] = z;
    OffsetToString(IndexedFaceSet)
    Each polygon's coordinate indices are read from the buffer and stored in the data structure:
        face[i][0] = v1;
        face[i][1] = v2;
        face[i][2] = v3;
    If the VRML file contains color, light source, viewpoint, etc. details, these are saved in the respective data structures.
    If the VRML file contains more than one component, the same procedure continues for the remaining components.
END
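The rearrangement step the parser performs, resolving the point list against the coordIndex entries so that explicit triangles are ready for display, could be sketched as follows; the container choices are assumptions, not the author's implementation.

    // Illustrative rearrangement of VRML data: Coordinate3 points plus IndexedFaceSet
    // index triples are resolved into explicit triangles ready for display.
    #include <array>
    #include <vector>

    using Point = std::array<float, 3>;

    struct TriangleIndices {
        int v[3];   // one coordIndex triple (the terminating -1 already stripped)
    };

    std::vector<std::array<Point, 3>> AssembleTriangles(
            const std::vector<Point>& points,
            const std::vector<TriangleIndices>& faces) {
        std::vector<std::array<Point, 3>> triangles;
        triangles.reserve(faces.size());
        for (const TriangleIndices& f : faces) {
            // Look up each vertex of the face in the shared point list.
            triangles.push_back({points[f.v[0]], points[f.v[1]], points[f.v[2]]});
        }
        return triangles;
    }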

4.1.7 VRML 2.0 File Parser

This parser has various modules, which are explained here with pseudocode.


ReadVRML2File

START
    Place the index pointer at the beginning of the specified string
    Allocate the memory for the linked list
    If (OffsetToString(DEF ROOT Group{ children[))
        Check for an assembly or a single component
    If (nextWord == DEF)   // single component
        PART:     OffsetToString(point);
                  Coordinates = CountAndStoreCoordinates();
        NEXTFACE: OffsetToString(Face);
                  OffsetToString(coordIndex);
                  ReadCoordIndexAndStoreCoord();
        If (nextWord == FACE)
            goto NEXTFACE
    Else if (OffsetToString(ConfigInstance{))   // assembly
        goto PART
END

CountAndStoreCoordinates – before storing the triangle information, the number of points is counted and each point's coordinates are stored in an array.

START
    Declare NoVertex as an integer
    (The coordinates lie between '[' and ']', so the number of vertices is counted until ']' is reached; one vertex is read as three floating-point values.)
    NEXT: Scan a word into wordBuffer
          From wordBuffer assign the value to x
          Goto NEXT to scan the values for y and z
          Increment NoVertex by 1 each time after reading x, y and z
          Goto NEXT until ']'
    Allocate an array of size NoVertex
END


ReadCoordIndexAndStoreCoord – stores the coordinates in the data structure according to the coordinate index.

START
    NEXT: Scan a word into wordBuffer
          From wordBuffer assign the value to x
          Goto NEXT to scan the values for y and z
    Allocate the data structure to store the coordinates
    (x, y and z give the indices where the coordinates were stored by CountAndStoreCoordinates(), so the coordinates are fetched from the corresponding array indices and stored into the data structure, which is then ready for display.)
    CalculateNormal(p1, p2, p3, n)
    (Calculate the normal for each triangle)
    Store the normal with the corresponding triangle
    Goto NEXT until ']'
END

ReadLineIndexAndStoreCoord – reads and stores the line information.

START
    NEXT: Scan a word into wordBuffer
          From wordBuffer assign it to x
          Goto NEXT to scan the value for y
    Allocate the data structure to store the line coordinates and store them
    Goto NEXT until ']'
END

ReadColorCoordAndStoreCoord – reads the color details and stores the information according to the coordinate index.

START
    NEXT: Scan a word into wordBuffer
          From wordBuffer assign it to r
          Goto NEXT to scan the values for g and b
    Allocate the data structure to store the color values
    Goto NEXT until ']'
END

4.1.8 Calculation of Normal Vector for VRML Files

Some VRML files do not include the surface normals required for the lighting calculations. If the VRML file does not contain normal values, they are calculated from the coordinates.

Pseudocode for calculating the normal vector:

START
    Form two vectors vct1 and vct2 from the points p1, p2 and p3
    Calculate the cross product of these two vectors and store it as vx, vy and vz
    The new vector is normalized to get its length
    The normal is computed by dividing vx, vy and vz by the length
END
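A minimal sketch of this normal calculation, assuming the three points are given as float triples:

    // Compute the unit normal of a triangle (p1, p2, p3) via the cross product.
    #include <cmath>

    void CalculateNormal(const float p1[3], const float p2[3], const float p3[3],
                         float n[3]) {
        // Two edge vectors of the triangle.
        float vct1[3] = {p2[0] - p1[0], p2[1] - p1[1], p2[2] - p1[2]};
        float vct2[3] = {p3[0] - p1[0], p3[1] - p1[1], p3[2] - p1[2]};

        // Cross product vct1 x vct2 gives a vector perpendicular to the triangle.
        float vx = vct1[1] * vct2[2] - vct1[2] * vct2[1];
        float vy = vct1[2] * vct2[0] - vct1[0] * vct2[2];
        float vz = vct1[0] * vct2[1] - vct1[1] * vct2[0];

        // Normalize to unit length (guarding against degenerate triangles).
        float length = std::sqrt(vx * vx + vy * vy + vz * vz);
        if (length > 0.0f) {
            n[0] = vx / length;
            n[1] = vy / length;
            n[2] = vz / length;
        } else {
            n[0] = n[1] = n[2] = 0.0f;
        }
    }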

An object's normal vectors define the orientation of its surface in space - in

particular, its orientation relative to light sources. A normal vector is a vector that

points in a direction that's perpendicular to a surface. For a flat surface, one

perpendicular direction suffices for every point on the surface, but for a general

curved surface, the normal direction might be different at each point. The normal

vectors are defined for surfaces that are described with polygonal data such that the

surfaces appear smooth rather than faceted. This can be done by calculating the

normal vectors for each of the polygonal facets and then finding the average of the

normals of the neighboring facets; the averaged normal is used for the vertex that the neighboring facets have in common, so that the polygonal approximation of the surface appears smooth.

To find the normal of a flat polygon, any three vertices v1, v2 and v3 of the polygon that do not lie on a straight line are considered. The cross product

[v1 - v2] × [v2 - v3]

is perpendicular to the polygon. The average of the normals of the adjoining facets then has to be calculated. For instance, in the example shown in Figure 4.2, if n1, n2, n3 and n4 are the normals of the four polygons meeting at point P, n1 + n2 + n3 + n4 is calculated and then normalized; the resulting vector can be used as the normal at point P.

Figure 4.2: Normal Calculation at Point P

In OpenGL, the glNormal*() commands are used to assign a normal to vertices and polygons. This vector determines how much light the object receives at its vertices. The normal vector is specified before defining the geometry it applies to:

glNormal3fv(n0);

glBegin (GL_POLYGON);

glVertex3fv(v0);

glVertex3fv(v1);

glVertex3fv(v2);

glVertex3fv(v3);

glEnd();

4.2 Renderer

Rendering is the process of creating and displaying an image on the screen. The triangulated details of the object are present in the VRML file. It is difficult to see the shape of the object clearly when only points are displayed, so the points are connected by lines to form a mesh or wireframe. The wireframe clearly shows a group of polygons. It is important to note that polygons have only one side: a polygon's face is the side away from which its normal points. To produce the appearance of solid objects, shading is applied to the individual polygons.

Initially, shapes are constructed from geometric primitives, creating mathematical descriptions of the objects. Next, the objects are arranged in three-dimensional space and a proper viewpoint for the composed scene is selected. After this, the color, material, light and texture of the object are computed. The final rendered image consists of pixels drawn on the screen. Table 4.1 gives the meanings of the various CAD terms used in this section.

OpenGL is an application programming interface used as a software development tool for graphics applications. It provides direct control over the fundamental operations of graphics and a primitive set of rendering commands that can be used to display the CAD model. Since the .stl and .wrl files represent the model in mesh form, the GL_TRIANGLES primitive is used for rendering.

Table 4.1: CAD Terminologies used in this Thesis

2-D – A concept of displaying real-world objects on a flat surface showing only height and width. This system uses only the X and Y axes.
3-D – A way of displaying real-world objects in a more natural way by adding depth to the height and width. This uses the X, Y and Z axes.
Elevation – The difference between an object being at zero on the Z-axis and the height it is above zero.
Face – The simplest true 3-D surface.
Facet – A three- or four-sided polygon that represents a piece (or section) of a 3-D surface.
Hidden line removal – A way of hiding lines of the actual object that would not be visible.
Isometric drawing – A simple way of achieving a 3D appearance using 2D drawing methods.
Primitive – A basic solid building block. Examples are boxes, cones and cylinders.
Region – A 2D area consisting of lines, arcs, etc.
Rendering – A complex way of adding photo-realistic qualities to a 3-D model that has been created.
Shading – A quick way of adding color to a 3-D object that has been drawn.
Solid model – A 3D model created using solid 'building blocks'. This is the most accurate way of representing real-world objects in CAD.
Surface model – A 3D model defined by surfaces. The surfaces consist of polygons.
Thickness – A property of lines and other objects that gives them a 3D-like appearance.
View – A particular view of the object that has been created.
Viewport – A window in a drawing, showing a particular view. A screen can have several viewports.
Wireframe model – A 3-D shape defined by lines and curves; a skeletal representation. Hidden line removal is not possible with this model.
Z-Axis – The third axis, which defines the depth.

4.2.1 Display of CAD Model

After loading the triangle mesh data, the program displays it. The projection and model-view transformations are set using a perspective projection so that the object is completely visible on the screen, with no parts cut off by the near or far clipping planes. Different models do not have the same scale, so the program accounts for this by setting the camera and projection values correctly and automatically. This can be done by setting the parameters for gluLookAt and the projection transformation; a minimal sketch follows the list below. The three aspects of the viewing process, all of which are implemented in the pipeline, are:
• Positioning the camera – setting the model-view matrix
• Selecting a lens – setting the projection matrix
• Clipping – setting the view volume
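The thesis does not give the exact fitting rule, so the following sketch of an automatic camera and projection setup is an assumption: it places the camera relative to the model's bounding-sphere centre and radius so that the whole model stays inside the near and far planes.

    // Illustrative viewing setup: fit a model with the given bounding-sphere
    // centre and radius into the view using gluPerspective and gluLookAt.
    #include <GL/gl.h>
    #include <GL/glu.h>

    void SetupView(const float centre[3], float radius, float aspect) {
        float fovY = 45.0f;
        float distance = radius * 3.0f;           // assumed margin factor

        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        gluPerspective(fovY, aspect,
                       distance - radius * 1.5,   // near plane in front of the model
                       distance + radius * 1.5);  // far plane behind the model
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        gluLookAt(centre[0], centre[1], centre[2] + distance,   // eye position
                  centre[0], centre[1], centre[2],               // look-at point
                  0.0, 1.0, 0.0);                                // up vector
    }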

Pseudocode for the display function:

START
    Clear the depth buffer and color buffer
    Enable lighting
    Set material properties – SetMaterial( )
    Change the viewpoint of the model – view( )
    If (texture == true)
        Call texture( )
    If (cut section == true)
        Call cutsection( )
    If (fog == true)
        Call fog( )
    If (walkthrough == true)
        Call walkthrough( )
    If (passive stereo == true)
        Call passiveStereo( )
    Draw( )   // draws the .stl or .wrl model
END

The rendered scene obeys the following constraints:
• A realistic effect is given to the rendered scene by making it reflect white light
• The depth and color buffers are cleared to obtain 3D depth
• The required transformation functions are added for better visualization of the model
• Graphics library functions and Windows-related functions are written to send each triangle, its color and its normal to the display device
• A refresh, i.e., swapping of the buffers, is added at the end of the display function to avoid flickering

4.2.2 Drawing STL Model

An STL file is a triangular representation of a 3D object; each triangle is defined by its normal and three vertices. Figure 4.3 shows part of the code for displaying an STL file. The data present in a triangle match the structure expected by GL_TRIANGLES, so displaying the model is a simple task of taking the data directly from the data structures. The parser module fills the arrays v1, v2 and v3 with the coordinates of the first, second and third vertex of all the triangles respectively, and the normal values are applied to each triangle. The combination of all these triangles forms the display of the model. The STL model displayed in this suite is shown in figure 4.4.

Figure 4.3: Sample Code for STL File Display
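Figure 4.3 is reproduced as an image in the original thesis; a minimal sketch of the drawing loop it describes, assuming the parser has filled norm, v1, v2 and v3 arrays and a triangle count nTri, is:

    // Illustrative STL display loop using GL_TRIANGLES
    // (array names assumed to match the parser description).
    #include <GL/gl.h>

    extern float norm[][3], v1[][3], v2[][3], v3[][3];
    extern int nTri;

    void DrawStlModel() {
        glBegin(GL_TRIANGLES);
        for (int i = 0; i < nTri; ++i) {
            glNormal3fv(norm[i]);   // the facet normal applies to all three vertices
            glVertex3fv(v1[i]);
            glVertex3fv(v2[i]);
            glVertex3fv(v3[i]);
        }
        glEnd();
    }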


Figure 4.4: Display of STL Model

4.2.3. Drawing VRML 1.0 Model

VRML is a scene description language that encloses all the visualization details of the model. The data sets are organized point by point, followed by references to each polygon's vertices. To arrange the vertices into triangles, the coordinates have to be taken according to the point references, which requires rearranging the vertices for display. Figure 4.5 shows part of the code for displaying a VRML file. Some VRML files do not contain the polygons' normal vectors; in that case a polygon's normal vector is obtained as the cross product of two of its edge vectors, which gives a vector perpendicular to both. Figure 4.6 shows the display of a VRML 1.0 model in this software.

Figure 4.5: Sample Code for VRML File Display
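Figure 4.5 is likewise an image in the original; a sketch of the indexed drawing it describes, assuming a vertex array, a face (coordIndex) array and per-face normals produced by the parser, is:

    // Illustrative VRML display loop: faces index into the shared vertex list,
    // and a per-face normal is supplied for lighting (array names are assumptions).
    #include <GL/gl.h>

    extern float vertex[][3];       // parsed Coordinate3 points
    extern int   face[][3];         // parsed coordIndex triples
    extern float faceNormal[][3];   // normals computed by CalculateNormal()
    extern int   nFaces;

    void DrawVrmlModel() {
        glBegin(GL_TRIANGLES);
        for (int i = 0; i < nFaces; ++i) {
            glNormal3fv(faceNormal[i]);
            glVertex3fv(vertex[face[i][0]]);
            glVertex3fv(vertex[face[i][1]]);
            glVertex3fv(vertex[face[i][2]]);
        }
        glEnd();
    }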


Figure 4.6: Display of VRML 1.0 Model

4.2.4. Drawing VRML 2.0 Model

Figure 4.7: Display of VRML 2.0 Model – bearing.wrl


Figure 4.8: Display of VRML 2.0 Model – cylinder.wrl

4.2.5 Transformation Options

The transformation options include rotation, panning and zooming. The model can be rotated, translated and scaled using these options.

• Dragging the mouse with the left button down translates the object in the direction in which the mouse is moved.
• Dragging the mouse with the right button down zooms the object in or out, depending on how the user drags.
• Dragging the mouse with both the left and right buttons down rotates the object in the direction in which the mouse is moved; the rotation axis depends on how the mouse moves.

Figure 4.9 shows viewpoint navigation along the axes. A few variables and commands inserted in the view permit mouse interaction.


Figure 4.9: Viewpoint Navigation

Pseudocode for the transformation options:

START
    The model displayed on the screen can be translated by holding the left mouse button down and moving the mouse:
        glTranslated(m_xTranslation, m_yTranslation, m_zTranslation);
    The model can be zoomed by holding the right mouse button down and moving the mouse:
        glScalef(m_xScaling, m_yScaling, m_zScaling);
    The model can be rotated by holding the left and right mouse buttons down and moving the mouse:
        glRotatef(m_xRotation, 1.0, 0.0, 0.0);
        glRotatef(m_yRotation, 0.0, 1.0, 0.0);
        glRotatef(m_zRotation, 0.0, 0.0, 1.0);
END
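The thesis does not show how the mouse drag deltas update these member variables; one possible mapping, with sensitivity factors chosen arbitrarily for illustration, is sketched below.

    // Illustrative mouse handling: the drag delta updates translation, scaling or
    // rotation depending on which buttons are held (factors are assumptions).
    struct ViewState {
        float m_xTranslation = 0, m_yTranslation = 0, m_zTranslation = 0;
        float m_xScaling = 1, m_yScaling = 1, m_zScaling = 1;
        float m_xRotation = 0, m_yRotation = 0, m_zRotation = 0;
    };

    void OnMouseDrag(ViewState& s, int dx, int dy, bool left, bool right) {
        if (left && right) {              // both buttons: rotate
            s.m_xRotation += dy * 0.5f;
            s.m_yRotation += dx * 0.5f;
        } else if (right) {               // right button: zoom in or out
            float factor = 1.0f + dy * 0.01f;
            s.m_xScaling *= factor;
            s.m_yScaling *= factor;
            s.m_zScaling *= factor;
        } else if (left) {                // left button: pan
            s.m_xTranslation += dx * 0.01f;
            s.m_yTranslation -= dy * 0.01f;
        }
    }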

4.2.6 Set Material Properties

This feature allows the required material properties and lighting to be selected. The standard materials are listed and the user can select whichever material he likes; the selected material is then applied to the object. For materials, the color values represent how much of a light component is reflected by a surface rendered with that material. A material whose color components are R = 1.0, G = 1.0, B = 1.0 reflects all the light that comes its way; likewise, a material with R = 0.0, G = 1.0, B = 0.0 reflects all the green light directed at it. Materials have multiple reflectance values to create various types of effects.

One of the more interesting aspects of working in 3-D is visualizing how the design will look. A realistic effect can be achieved by adding lighting and materials to the design; applying materials makes the model look exactly the way it is required. Once the materials are added, getting the lights and shadows to look realistic is another task. Material properties describe a material's diffuse reflection, ambient reflection, light emission and specular highlight characteristics, and they affect the colors the application uses to rasterize the models that use the material. Each of these properties is described as an RGB color that represents how much of the red, green and blue parts of a given type of light it reflects. Figure 4.10 and Figure 4.11 show the change in material properties of the models.

Follow these steps to get a basic, accurate rendering:

1. Draw the object using solids or surfaces

2. Apply the materials

3. Render the scene

Pseudocode for setting material properties:

START
    Material properties such as ambient, diffuse, specular and shininess values are defined for gold, silver, chrome, emerald, pearl, copper, brass, bronze, etc.
    The desired material properties are applied to the model by:
        glMaterialfv(GL_FRONT, GL_AMBIENT, matAmbient);
        glMaterialfv(GL_FRONT, GL_DIFFUSE, matDiffuse);
        glMaterialfv(GL_FRONT, GL_SPECULAR, matSpecular);
        glMaterialf(GL_FRONT, GL_SHININESS, shine * 128.0f);
    Draw( ), draw the model
END
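A sketch of how one of the listed materials could be applied is given below; the emerald coefficients are the commonly published OpenGL example values and are an assumption, not necessarily those used in the thesis.

    // Illustrative material setup: apply emerald-like reflectance values before
    // drawing the model (coefficient values are assumptions).
    #include <GL/gl.h>

    void SetEmeraldMaterial() {
        const GLfloat matAmbient[]  = {0.0215f, 0.1745f, 0.0215f, 1.0f};
        const GLfloat matDiffuse[]  = {0.07568f, 0.61424f, 0.07568f, 1.0f};
        const GLfloat matSpecular[] = {0.633f, 0.727811f, 0.633f, 1.0f};
        const GLfloat shine = 0.6f;

        glMaterialfv(GL_FRONT, GL_AMBIENT,  matAmbient);
        glMaterialfv(GL_FRONT, GL_DIFFUSE,  matDiffuse);
        glMaterialfv(GL_FRONT, GL_SPECULAR, matSpecular);
        glMaterialf (GL_FRONT, GL_SHININESS, shine * 128.0f);
    }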


Figure 4.10: STL Model with Emerald Material

Figure 4.11: VRML Model with Copper Material

4.2.7 Display Mode

This feature provides options for rendering the model in various modes, i.e. solid, wireframe and point. Figure 4.12 and figure 4.13 show the wireframe view of the models. The display mode specifies how to display the scene. There are three modes:
• Solid – The default mode in which the object is rendered. The object is filled in, giving the viewer the impression that it is made of a hard solid.
• Wireframe – Only the wire mesh is displayed, with no part of it filled in as in the solid mode; it shows the triangularly linked vertices.
• Point – Only the vertices are displayed, without connecting them to one another.


Pseudocode for various display modes,

START

The drawing mode is specified before drawing the polygons by

glPolygonMode(GL_FRONT_AND_BACK,m_Mode);

Point – m_Mode = GL_POINT

Line – m_Mode = GL_LINE

Solid – m_Mode = GL_FILL

END

Figure 4.12: STL Model in Wireframe View

Figure 4.13: VRML Model in Wireframe View


4.2.8 Camera Views

Different camera views display the model's front, back, left, right, top, bottom and isometric views. Sometimes the viewer wants to view the rendered model from different angles along different axes.

• Front – Default placement of the 3D scene, i.e., viewed along the XY plane
• Back – Along the YX plane
• Top – Along the XZ plane
• Bottom – Along the ZX plane
• Left – Along the ZY plane
• Right – Along the YZ plane
• Isometric – Along the three axes with a mutual inclination of 45 degrees. The isometric view is the simplest way to give a 3D representation of a 2-D drawing; it was the usual way of doing things before CAD allowed true 3-D work to be done. An isometric drawing is often used to complement a three-view orthographic drawing.

The X-Y coordinate system is shown in figure 4.14. It corresponds to looking at the scene from the plan (top) view.

Figure 4.14: XY Co-ordinate System

When the same picture is viewed from a slightly different angle, a third axis has to be considered. This new axis is called the Z-axis, and the positive Z-axis is imagined as coming out of the monitor. Figure 4.16 shows a line from three different views to illustrate how things can look different in 3D.


Figure 4.15: XYZ Co-ordinate System

The three views in figure 4.16 are: the usual view seen when using CAD in 2D, looking straight down the Z axis; the same line viewed from the front instead of the top, looking straight down the -Y axis, where the elevation along the Z axis can be noticed; and the same line viewed from the left, looking down the -X axis.

Figure 4.16: Different Views of a Line


Figure 4.17 shows the isometric view of the VRML model and Figure 4.18 shows the front view of the STL model.

Pseudocode for the different views of a model:

START
    The model is rotated according to the required viewpoint.
    Front – the model is displayed in its original viewpoint:
        glRotatef(0.0, 0.0, 0.0, 0.0);
    Back – the model is rotated by 180 deg about the Y axis:
        glRotatef(180.0, 0.0, 1.0, 0.0);
    Top – the model is rotated by 90 deg about the X axis:
        glRotatef(90.0, 1.0, 0.0, 0.0);
    Bottom – the model is rotated by 270 deg about the X axis:
        glRotatef(270.0, 1.0, 0.0, 0.0);
    Left – the model is rotated by 90 deg about the Y axis:
        glRotatef(90.0, 0.0, 1.0, 0.0);
    Right – the model is rotated by 270 deg about the Y axis:
        glRotatef(270.0, 0.0, 1.0, 0.0);
    Isometric – the model is rotated by 45 deg about the X, Y and Z axes:
        glRotatef(45.0, 1.0, 1.0, 1.0);
END

Figure 4.17: Isometric View of VRML Model


Figure 4.18: Front View of STL Model

4.2.9 Texture Mapping

This feature lets the user select an image file to wrap around the component. Texture is important for providing the illusion of reality; it is a method of wallpapering the existing polygons. Figure 4.19 and Figure 4.20 show texture mapping on the models. As each pixel is shaded, its corresponding texture coordinates are obtained from the texture map and a lookup is performed in a two-dimensional array of colors containing the texture. The value in this array is used as the color of the polygon at that pixel, thus providing the wallpaper.

This tool adds realism by wrapping a user-defined texture over the object. A dialog box pops up and asks the user to select an image file; after the selection, the tool shows how the object would look if wrapped with that image. This is useful when, for example, the user wishes to see a car model coated with aluminum foil: he can do so simply by supplying the corresponding picture file. Before a model is released with a particular coating, the manufacturer does not know how the product will look, so in such situations he can apply textures and judge whether the object really looks good. Instead of manually developing such an object with an unknown appearance, which may take at least a few days, the developer can view it in stereo mode in a matter of minutes. If the manufacturer is not satisfied with the previous selection, he can change it immediately by applying another texture, which otherwise would require manufacturing another textured product.

Pseudocode for texture mapping:

START
    Browse for the image file to be wrapped onto the model
    Read the texture data
    glGenTextures( ), create the texture
    glTexParameterf( ), indicate how the texture is to be applied to each pixel
    glEnable(GL_TEXTURE_2D), enable texture mapping
    Draw the scene, supplying both texture and geometric coordinates by
        glBegin(GL_TRIANGLES);
        glNormal3d(norm[i][0], norm[i][1], norm[i][2]);
        glTexCoord2d(v1[i][0], v1[i][1]);
        glVertex3d(v1[i][0], v1[i][1], v1[i][2]);
        ------
        glEnd();
END
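A sketch of the texture setup steps named in the pseudocode, assuming the selected image has already been decoded into an RGB pixel buffer:

    // Illustrative texture-mapping setup; 'pixels' is assumed to hold width*height
    // RGB bytes decoded from the user-selected image file.
    #include <GL/gl.h>

    GLuint CreateTexture(const unsigned char* pixels, int width, int height) {
        GLuint tex = 0;
        glGenTextures(1, &tex);                         // create a texture name
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
                     GL_RGB, GL_UNSIGNED_BYTE, pixels); // upload the image data
        glEnable(GL_TEXTURE_2D);                        // enable texture mapping
        return tex;
    }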

Figure 4.19: VRML Model with Texture Mapped


Figure 4.20: STL Model with Texture Mapped

4.2.10 Cut Section

This feature cuts the model along the XY plane, XZ plane, YZ plane or at any angle. After the vertices of the objects in the scene have been transformed, any primitives that lie outside the viewing volume are clipped. This is useful for removing extraneous parts of the scene, for example to display a cut-away view of an object. Normally the object is viewed as a whole; whenever the user only needs to visualize part of it, along any plane or at any specified angle, this tool is used.
• XY plane – Gives the view of the object cut by the XY plane, leading to the far and near sections of the model.
• XZ plane – Gives the view of the object cut by the XZ plane, leading to the bottom and top sections of the model.
• YZ plane – Gives the view of the object cut by the YZ plane, leading to the left and right sections of the model.

Pseudocode to apply a cut section:

START
    Define the equation for the clipping plane
    glClipPlane(GL_CLIP_PLANE1, eqn), specify the clipping plane
    glEnable(GL_CLIP_PLANE1), enable the clipping plane
    Draw( ), draw the model
END
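A sketch with a concrete plane equation follows; the choice of the YZ plane (keeping the half of the model with x >= 0) is purely illustrative.

    // Illustrative cut section: clip away everything with x < 0 (the YZ plane).
    #include <GL/gl.h>

    void ApplyCutSection() {
        // The plane equation a*x + b*y + c*z + d >= 0 defines the half-space that is kept.
        const GLdouble eqn[4] = {1.0, 0.0, 0.0, 0.0};
        glClipPlane(GL_CLIP_PLANE1, eqn);
        glEnable(GL_CLIP_PLANE1);
        // Draw() would follow here; call glDisable(GL_CLIP_PLANE1) afterwards to restore.
    }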


4.2.11 Application of Foggy Environment

This feature gives the user the illusion of a product placed in a foggy environment. The technique is used in flight simulators, where an object fades as it moves further away. Fog allows the object to be visualized where limited visibility also needs to be approximated. The designer can supply various density values as input, so that as the object moves away from the user it fades until it finally disappears from the field of view. Figure 4.21 and Figure 4.22 show the effect of the foggy environment.

Pseudocode to apply the fog effect:

START
    Define the values for the fog color
    glEnable(GL_FOG), enable the fog effect
    Set the fog density and other parameters
    Draw( ), draw the model
END
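A sketch of the fog setup described above; the gray fog color and the exponential mode are illustrative assumptions, while the density is the user-supplied value.

    // Illustrative foggy-environment setup with a designer-adjustable density.
    #include <GL/gl.h>

    void ApplyFog(float density) {
        const GLfloat fogColor[4] = {0.6f, 0.6f, 0.6f, 1.0f};   // assumed gray fog
        glEnable(GL_FOG);
        glFogi(GL_FOG_MODE, GL_EXP2);          // fade exponentially with distance
        glFogfv(GL_FOG_COLOR, fogColor);
        glFogf(GL_FOG_DENSITY, density);       // density value supplied by the designer
        glHint(GL_FOG_HINT, GL_NICEST);
    }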

Figure 4.21: VRML Model in Foggy Environment


Figure 4.22: Model in Foggy Environment with Emerald Material

4.2.12 Walkthrough

Often the viewer may want to go inside the object (virtually) to see its inner details and have a closer look at its minute structure. Here the object stays at rest, but the viewer is allowed to move towards it in any direction, with provision to take a turn at corners, go to its extreme ends or retrace the path. The viewer moves towards the object rather than the object coming closer to the viewer, so this acts as a means of walking through the object and provides a closer look at it. Figures 4.23 and 4.24 give snapshots of models during a walkthrough.

Pseudocode for walkthrough,

START

Construct the grid as floor

Set the viewpoint such that the field of view will change as the keys

are pressed

Draw(), draw the model

END


Figure 4.23: Walkthrough of VRML Model

Figure 4.24: Walkthrough of STL Model

4.3 Cost Effective 3D Visualization of CAD Models

Stereoscopic display is a fundamental part of virtual reality and an effective way to enhance insight in 3D scientific visualization. This visualization suite demonstrates how a low-end, inexpensive viewing technique can be used to produce the same effect as high-end stereo viewing. This low-cost stereo technique is called passive stereo. The model rendered in this tool has depth as a parameter along with height and width, which enables the model to be rotated and viewed from any angle. Even though stereo offers three dimensions of viewing, the image is generally viewed on a 2D plane; the third dimension is visualized by wearing stereo-enabled glasses. Special 3D glasses trick the viewer's eyes into seeing the virtual environment in three dimensions. An anaglyph is a method of viewing stereoscopic images using red-blue colored spectacles: the left view in blue (or green) is superimposed on the same image as the right-eye view in red. When viewed through eyewear of the corresponding colors but reversed, the 3D effect is perceived.

Stereographics primarily requires that two images (one for each eye) be presented independently to the two eyes. If these two views of a 3D model are computed correctly, the human brain fuses them and gives a stronger depth perception than is normally obtained with a single image. There are a number of ways this can be achieved; this work deals with anaglyphs, in which the left- and right-eye images are made up of two independent colors. By wearing glasses with matching filters, the left-eye image is delivered to the left eye and the right-eye image to the right eye. This section discusses computer-based generation of stereo pairs as used to create a perception of depth. The major benefits of this technique in the manufacturing industries are:

• Visualization of CAD/CAM/CAE models and simulations
• Enhanced visualization of product design and analysis data in stereo
• Smooth navigation through large models and simulations
• 3D design reviews

4.3.1 Time Parallelism Technique for 3D Vision

In this research, time parallelism was used to achieve passive stereo vision. Time-parallel methods present both eye views to the viewer simultaneously and use optical techniques to direct each view to the appropriate eye. This method requires the user to wear glasses with red and blue lenses or filters. Both images are presented on the screen simultaneously; hence, it is a time-parallel method.

The real trick is figuring out the best way to present the left and right eye

images to just the left and right eyes, respectively. In passive stereo technique to

view the 3D scene Red-Blue Anaglyph is used. Left and right eye images are

combined into a single image consisting of blues for the left eye portion of the

scene, reds for the right eye portion of the scene, and shades of magenta for portions

of the scene occupied by both images. This has shown in the Figure 4.25. The

viewer wears a pair of glasses with red over the left eye and blue over the right eye.


Each eyepiece causes the line work destined for the other eye to meld into the background and causes the line work destined for its own eye to appear black. The key to stereo viewing is to generate two views of the scene, one from each eye position. This can be achieved by maintaining separate drawing buffers for the left and right eyes. Both images are drawn on the screen; the left eye sees one image and the right eye sees the other. The human brain processes the information received from the two eyes and fuses the pair into a single three-dimensional percept.

Figure 4.25: Passive Stereo Vision – Image drawn in Red & Blue

In this application, 3D stereo vision is implemented by the following sequence:

1. Set the geometry for the view from the left eye

2. Set the left-eye rendering buffer

3. Render the left-eye image

4. Set the geometry for the view from the right eye

5. Set the right-eye rendering buffer

6. Render the right-eye image

7. Swap buffers

Objects that lie in front of the projection plane appear to be in front of the computer screen, while objects behind the projection plane appear to be inside the screen. It is generally easier to view stereo pairs of objects that recede into the screen; to achieve this, the focal point is placed closer to the camera than the objects of interest. The degree of the stereo effect depends on both the distance


of the camera to the projection plane and the separation of the left and right cameras. Too large a separation can be hard to resolve and is known as hyper-stereo. A good ballpark separation of the cameras is 1/20 of the distance to the projection plane; this is generally the maximum separation for comfortable viewing. Another constraint in general practice is to ensure that the negative parallax does not exceed the eye separation.
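As an illustration of how these rules of thumb can be put into code, the sketch below computes a parallel-axis asymmetric (off-axis) frustum for each eye, with the camera separation taken as 1/20 of the focal distance. It is a sketch under stated assumptions, not the thesis implementation; the function name setEyeProjection and its parameters are hypothetical.

/* Off-axis frustum sketch: eye = -1 for the left eye, +1 for the right eye. */
#include <GL/gl.h>
#include <math.h>

void setEyeProjection(int eye, double fovyDeg, double aspect,
                      double nearZ, double farZ, double focalLength)
{
    const double PI = 3.14159265358979;
    double eyeSep = focalLength / 20.0;           /* ~1/20 rule of thumb            */
    double halfH  = nearZ * tan(fovyDeg * PI / 360.0);
    double halfW  = halfH * aspect;
    /* shift the near-plane window so both frustums meet at the projection plane  */
    double shift  = 0.5 * eyeSep * nearZ / focalLength;

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustum(-halfW - eye * shift, halfW - eye * shift,
              -halfH, halfH, nearZ, farZ);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslated(-eye * 0.5 * eyeSep, 0.0, 0.0);  /* offset camera by half eyeSep  */
}

With this convention, points at the focal distance have zero parallax, so keeping the focal point nearer than the objects of interest makes the model recede into the screen, as described above.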

Figure 4.26: Parallel View of Left and Right Eye Images

The ability to see in stereo comes from each of our eyes seeing a slightly different view of the world; the brain then integrates these two images into one three-dimensional picture. The key element in producing the stereoscopic depth effect is parallax. Parallax is the horizontal distance between corresponding left and right image points, as shown in Figure 4.26. The stereoscopic image is composed of two images generated from two related perspective viewpoints, and these viewpoints are responsible for the parallax content of the view.
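As a hedged illustration (a standard similar-triangles relation, not a formula quoted from this implementation), if the two camera positions are separated by a distance e and the projection plane lies at distance d from the viewer, a point at depth z produces a horizontal parallax of approximately

\[ P = e\left(1 - \frac{d}{z}\right) \]

so points behind the projection plane (z > d) have positive parallax that approaches e at infinity, points on the plane have zero parallax, and points in front of it (z < d) have negative parallax, which is why the constraint mentioned earlier is stated relative to the eye separation.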

4.3.2 Algorithm and Implementation

START

Choose and set a stereoscopic pixel format descriptor

Create the OpenGL render context and make it current

Select GL_BACK as the draw buffer and clear both the left and right images

Select the left (back) buffer and set the blue color mask

Set up the projection matrix for the left eye

Set up the modelview matrix (this is the same for both eyes)

Draw the complete image


Select the right (back) buffer and set the red color mask

Set up the projection matrix for the right eye

Set up the modelview matrix

Draw the complete image

END

Rendering anaglyphs in OpenGL is done with the help of the routine glColorMask(). The basic idea is to create the scene with all surfaces colored pure white and to render the scene twice, once for each eye. Before rendering the right eye, glColorMask(GL_TRUE, GL_FALSE, GL_FALSE, GL_FALSE) is called, and glColorMask(GL_FALSE, GL_FALSE, GL_TRUE, GL_FALSE) is called before rendering the left eye. The red filter is then placed over the right eye and the blue filter over the left eye.
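A minimal display-callback sketch of this color-mask approach is given below. It is illustrative only: drawScene() stands for the routine that renders the parsed triangles, setEyeProjection() is the hypothetical helper sketched earlier, and aspect and focalLength are assumed application state; the channel-to-eye assignment follows the description in this paragraph (blue for the left eye, red for the right).

#include <GL/glut.h>

extern double aspect, focalLength;   /* assumed application state                  */
void setEyeProjection(int eye, double fovyDeg, double aspect,
                      double nearZ, double farZ, double focalLength);  /* earlier sketch */
void drawScene(void);                /* hypothetical: draws the model in white     */

void displayAnaglyph(void)
{
    glDrawBuffer(GL_BACK);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    /* left-eye pass: write only the blue channel */
    glColorMask(GL_FALSE, GL_FALSE, GL_TRUE, GL_FALSE);
    setEyeProjection(-1, 60.0, aspect, 0.1, 200.0, focalLength);
    drawScene();

    glClear(GL_DEPTH_BUFFER_BIT);    /* keep the colors, reset depth for pass two  */

    /* right-eye pass: write only the red channel */
    glColorMask(GL_TRUE, GL_FALSE, GL_FALSE, GL_FALSE);
    setEyeProjection(+1, 60.0, aspect, 0.1, 200.0, focalLength);
    drawScene();

    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);   /* restore full color writes */
    glutSwapBuffers();
}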

The implementation steps are as follows (a combined display-callback sketch is given after the list):

1. Initialise the accumulation buffer in glut, for example:

glutInitDisplayMode(GLUT_DOUBLE | GLUT_ACCUM | GLUT_RGB |

GLUT_DEPTH);

2. Set the clear colour for the accumulation buffer (black as used here is the

default), for example:

glClearAccum(0.0,0.0,0.0,0.0);

3. Clear the accumulation buffer when necessary

glClear(GL_ACCUM_BUFFER_BIT);

4. Copy the current drawing buffer to the accumulation buffer. This would

normally be done after the left eye image has been drawn.

glAccum(GL_LOAD,1.0);

5. Add the current drawing buffer to the accumulation buffer. This would normally be done after the right eye image has been drawn.

glAccum(GL_ACCUM,1.0);

6. Copy the accumulation buffer to the current drawing buffer. After this the

current drawing buffer (back buffer say) would be swapped to the front

buffer.

glAccum(GL_RETURN,1.0);


7. The glAccum() functions act upon the current read and write buffers, which might, for example, be set as follows:

glDrawBuffer(GL_BACK);

glReadBuffer(GL_BACK);
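The sketch below strings steps 1 through 7 together in one display callback. It assumes GLUT_ACCUM was requested in glutInitDisplayMode() as in step 1, and it reuses hypothetical per-eye helpers (renderLeftEye()/renderRightEye() stand for the clear, color-mask, projection, and draw calls of each pass); none of these names are taken from the thesis code.

#include <GL/glut.h>

void renderLeftEye(void);    /* hypothetical: clear, blue mask, left projection, draw  */
void renderRightEye(void);   /* hypothetical: clear, red mask, right projection, draw  */

void displayAccumAnaglyph(void)
{
    glDrawBuffer(GL_BACK);           /* buffers acted upon by glAccum() (step 7)        */
    glReadBuffer(GL_BACK);

    renderLeftEye();
    glAccum(GL_LOAD, 1.0);           /* step 4: copy the back buffer into the accum buffer */

    renderRightEye();
    glAccum(GL_ACCUM, 1.0);          /* step 5: add the back buffer to the accum buffer */

    glAccum(GL_RETURN, 1.0);         /* step 6: copy the accumulated anaglyph back      */
    glutSwapBuffers();               /* the back buffer is then swapped to the front    */
}

Because GL_LOAD overwrites the accumulation buffer at the start of each frame, an explicit glClear(GL_ACCUM_BUFFER_BIT) (step 3) is only needed when no left-eye pass has been loaded yet.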

In order to draw the left- and right-eye scenes in the colors appropriate for the filters in the glasses, one color is specified when drawing the left-eye image and another for the right-eye image. To obtain a suitable 3D effect, the two images drawn on the screen overlap, but the second image is not placed exactly on top of the first: after drawing the left image, the viewpoint is translated slightly and then the right image is drawn, as shown in Figure 4.27. The distance between the left and right images is called the 'eye separation factor' or 'interocular distance'. The eye separation value can be changed to perceive the proper 3D effect; the Left and Right arrow buttons are used to increase and decrease the interocular distance between the two rendered scenes.
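A possible handler for this adjustment is sketched below; the variable eyeSep, the step size, and the exact key-to-direction mapping are assumptions, not values taken from the thesis software.

/* Assumes the GLUT/OpenGL headers are already included. */
static double eyeSep = 0.3;                       /* assumed interocular distance  */

static void stereoSpecialKeys(int key, int x, int y)
{
    const double step = 0.02;                     /* assumed adjustment step       */
    if (key == GLUT_KEY_RIGHT) eyeSep += step;                 /* widen separation  */
    if (key == GLUT_KEY_LEFT && eyeSep > step) eyeSep -= step; /* narrow separation */
    glutPostRedisplay();                          /* redraw both eye images        */
}

/* registered with: glutSpecialFunc(stereoSpecialKeys); */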

Figure 4.27: Drawing Two Images for Passive Stereo Vision

Two types of passive stereo vision are achieved in this implementation (a small toggle sketch is given after the list):

• Pop In – The rendered scene appears pushed into the screen, giving the viewer the feeling that it is far away from him. This is implemented by drawing the left-eye image in red and the right-eye image in blue. Figure 4.28 shows the pop-in mode of passive stereo vision.


• Pop Out – The viewer gets the impression that the rendered scene is right in front of his eyes and that he is there to feel its presence. This is implemented by drawing the left-eye image in blue and the right-eye image in red. Figure 4.29 shows the pop-out mode of passive stereo vision.
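One way to switch between the two modes is simply to swap which color mask is applied to which eye pass, as in the sketch below (popOut is an assumed flag; the channel assignment follows the pop-in/pop-out description above):

/* Assumes the OpenGL headers are already included. */
static int popOut = 0;                    /* 0 = pop-in, 1 = pop-out (assumed flag) */

static void setMaskForEye(int eye)        /* eye: -1 = left, +1 = right             */
{
    /* pop-in : left eye drawn in red,  right eye drawn in blue
       pop-out: left eye drawn in blue, right eye drawn in red                     */
    int drawRed = (eye < 0) ? !popOut : popOut;
    if (drawRed)
        glColorMask(GL_TRUE,  GL_FALSE, GL_FALSE, GL_FALSE);   /* red channel only  */
    else
        glColorMask(GL_FALSE, GL_FALSE, GL_TRUE,  GL_FALSE);   /* blue channel only */
}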

Figure 4.28: Pop In Mode of Passive Stereo Vision

Figure 4.29: Pop Out Mode of Passive Stereo Vision


4.3.3 Visualization Constraints

Passive stereo is a low-cost way to view 3D models, but only through red/blue glasses. The drawback of this mode compared to active stereo is that the object is visible only as a composite of red and blue colors, i.e., the model is effectively rendered in magenta, so its original colors are not visible to the viewer.

This technique also compromises the colors in the image, because each eye receives a differently colored view. In practice one can see objects coming out of the screen, but they appear gray. The color filtering can also cause some eyestrain and distortion.

The limitations of passive stereo vision are:

• Blind people cannot see 3D

• One-eyed people cannot see 3D

• People whose two eyes do not work together cannot see 3D

• People using glasses with a large difference between the prescriptions for the left and right eyes cannot see 3D

• People with some difference in vision between the right and left eyes will have problems or cannot see 3D

• If the stereo base is too big, people cannot see 3D

• If there is vertical misalignment, people cannot see 3D

• If there are rotations between the two images, people cannot see 3D

• If there is convergence between the cameras, people cannot see 3D

Figure 4.30 shows snapshots of CAD models in passive stereo vision mode. These are the anaglyph outputs of the visualization tool implemented in this research.


Figure 4.30: Snapshots of CAD Models in Passive Stereo Vision


Figure 4.31 shows snapshots of more than one model in tiling mode, so that all of them can be viewed simultaneously.

Figure 4.31: Tiling More Than One Model to View Simultaneously

4.4 Conclusion

The implementation of a parser and renderer for STL and VRML file visualization has been presented in this chapter. The .stl or .wrl file, which contains the description of the object, is taken as input to the software. The parser code stores these details in data structures, and the information contained in the data structures is displayed as an object on the screen using OpenGL commands. The primary objective of this research work was to provide a fully functional stereo vision system that allows datasets to be explored with inexpensive resources during development. As described here, we have presented the implementation of cost-effective three-dimensional visualization.