
Master of Computer Application (MCA) – Semester 3

Software Engineering

Assignment Set – 1

Que: 1. Explain Iterative Development Model in detail.

Ans:

Iterative Model

The iterative enhancement model counters the third limitation of the waterfall model and tries to combine the benefits of both prototyping and the waterfall model. The basic idea is that the software should be developed in increments, each increment adding some functional capability to the system until the full system is implemented. At each step, extensions and design modifications can be made. An advantage of this approach is that it can result in better testing, because testing each increment is likely to be easier than testing the entire system as in the waterfall model. The incremental model also provides feedback to the client that is useful for determining the final requirements of the system.

In the first step of this model, a simple initial implementation is done for a subset of the overall

problem. This subset is one that contains some of the key aspects of the problem that are easy to

understand and implement and which form a useful and usable system. A project control list is created

that contains, in order, all the tasks that must be performed to obtain the final implementation. This project control list gives an idea of how far the project is, at any given step, from the final system.

Each step consists of removing the next task from the list, designing the implementation for the

selected task, coding and testing the implementation, performing an analysis of the partial system

obtained after this step, and updating the list as a result of the analysis. These three phases are called the design phase, the implementation phase and the analysis phase. The process is iterated until the project control list is empty, at which time the final implementation of the system will be available. The iterative enhancement process model is shown in the diagram below:


The project control list guides the iteration steps and keeps track of all tasks that must be done. Based

on the analysis, one of the tasks in the list can include redesign of defective components or redesign of

the entire system. Redesign of the system will occur only in the initial steps. In the later steps, the

design would have stabilized and there is less chance of redesign. Each entry in the list is a task that

should be performed in one step of the iterative enhancement process and should be completely

understood. Selecting tasks in this manner will minimize the chance of error and reduce the redesign

work. The design and implementation phases of each step can be performed in a top-down manner or

by using some other technique.
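As a rough sketch only (the task names and helper functions below are invented for illustration and are not part of the model itself), the step just described can be expressed as a loop over the project control list:

    # Minimal sketch of the iterative enhancement loop; tasks and helpers are assumed.
    def design(task):
        print("design phase:", task)

    def implement_and_test(task):
        print("implementation phase: code and test", task)

    def analyse(partial_system):
        # Analysis of the partial system; may return new or revised tasks.
        return []

    # Project control list: all tasks, in order, needed to reach the final system.
    control_list = ["core editing", "file handling", "search", "printing"]
    partial_system = []

    while control_list:                      # repeat until the list is empty
        task = control_list.pop(0)           # remove the next task from the list
        design(task)                         # design phase
        implement_and_test(task)             # implementation phase
        partial_system.append(task)
        control_list.extend(analyse(partial_system))   # analysis phase updates the list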

One effective use of this type of model is for product development, in which the developers themselves

provide the specifications and therefore have a lot of control over what specifications go in the system

and what stay out.

In a customized software development, where the client has to essentially provide and approve the

specifications, it is not always clear how this process can be applied. Another practical problem with

this type of development project comes in generating the business contract: how will the cost of additional features be determined and negotiated, particularly because the client organization is likely to be tied to the original vendor who developed the first version. Overall, in these types of projects, this process model can be useful if the "core" of the application to be developed is well understood and the "increments" can be easily defined and negotiated. In a client-oriented project, this process has the major advantage that the client's organization does not have to pay for the entire software at once; it can get the main part of the software developed and perform a cost-benefit analysis on it before enhancing the software with more capabilities.


Que: 2. Describe the Object Interface Design.

Ans:

Object Interface design is concerned with specifying the detail of the object interfaces. This means

defining the types of the object attributes and the signatures and the semantics of the object

operations. If an object-oriented programming language is being used for implementation, it is natural

to use it to express the interface design.

Designers should avoid including representation information in their interface design. Rather, the

representation should be hidden and object operations provided to access and update the data. If the

representation is hidden, it can be changed without affecting the objects that use these attributes. This

leads to a design which is inherently more maintainable. For example, an array representation of a

stack may be changed to a list representation without affecting other objects, which use the stack.
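A minimal Python sketch of this point (the class and operation names are assumed for illustration, not taken from the text): the stack's operations form its interface, while the representation stays hidden, so it can be changed without touching the objects that use the stack.

    class Stack:
        """Only push, pop and is_empty are part of the interface."""
        def __init__(self):
            self._items = []              # hidden representation; could be swapped
                                          # for a linked list without changing callers
        def push(self, item):
            self._items.append(item)

        def pop(self):
            return self._items.pop()

        def is_empty(self):
            return not self._items

    # Client code depends only on the operations, not on how items are stored.
    s = Stack()
    s.push(10)
    s.push(20)
    print(s.pop())                        # prints 20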

Design evolution

An important advantage of an object-oriented approach to design is that it simplifies the problem of

making changes to the design. The reason for this is that object state representation does not influence

the design. Changing the internal details of an object is unlikely to affect any other system objects.

Furthermore, because objects are loosely coupled, it is usually straightforward to introduce new

objects without significant effects on the rest of the system.

To illustrate the robustness of the object-oriented approach, assume that pollution-monitoring

capabilities are to be added to each weather station. This involves adding an air quality meter to

compute the amount of various pollutants in the atmosphere. The pollution readings are transmitted at

the same time as the weather data. To modify the design, the following changes must be made:

Figure below shows weather station and the new objects to the system. The abbreviation NO in Air

quality stands for nitrous oxide.


New objects to support pollution monitoring

(1) An object Air quality should be introduced as part of Weather station at the same level as Weather

data.

(2) An operation Transmit pollution data should be added to Weather station to send the pollution

information to the central computer. The weather station control software must be modified so that

pollution readings are automatically collected when the system is switched on.

(3) Objects representing the types of pollution, which can be monitored, should be added. Levels of

nitrous oxide, smoke and benzene can be measured.

(4) A hardware control object Air quality meter should be added as a sub-object to Air quality. This has

attributes representing each of the types of measurement, which can be made.

The addition of pollution data collection does not affect weather data collection in any way. Data

representations are encapsulated in objects so they are not affected by the additions to the design.
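A hedged Python sketch of the extension described above (class and attribute names are assumptions based on this description, not the original design documents): the new Air quality objects are added alongside Weather data, which is left untouched.

    class WeatherData:
        """Existing object; unchanged by the pollution-monitoring extension."""
        def __init__(self):
            self.temperature = None
            self.wind_speed = None

    class AirQualityMeter:
        """New hardware control object, a sub-object of AirQuality."""
        def read(self):
            return {"NO": 0.0, "smoke": 0.0, "benzene": 0.0}   # assumed reading format

    class AirQuality:
        """New object introduced at the same level as WeatherData."""
        def __init__(self):
            self.meter = AirQualityMeter()
            self.readings = {}

        def collect(self):
            self.readings = self.meter.read()

    class WeatherStation:
        def __init__(self):
            self.weather_data = WeatherData()
            self.air_quality = AirQuality()        # added without modifying WeatherData

        def transmit_pollution_data(self):
            self.air_quality.collect()
            print("sending pollution readings:", self.air_quality.readings)

    WeatherStation().transmit_pollution_data()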


Function-oriented design

A function-oriented design strategy relies on decomposing the system into a set of interacting functions

with a centralized system state shared by these functions as shown in figure below. Functions may also

maintain local state information but only for the duration of their execution.

A function-oriented view of design

Function-oriented design has been practiced informally since programming began. Programs were

decomposed into subroutines, which were functional in nature. In the late 1960s and early 1970s

several books were published which described 'top-down' functional design. They specifically proposed this as a 'structured' design strategy. These led to the development of many design methods

based on functional decomposition.

Function-oriented design conceals the details of an algorithm in a function but system state

information is not hidden. This can cause problems because a function can change the state in a way that other functions do not expect. Changes to a function and the way in which it uses the system

state may cause unanticipated changes in the behavior of other functions.
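A small Python sketch of this pitfall (the state variable and functions are invented for illustration): two functions share centralized state, so a change made through one silently alters the behaviour of the other.

    # Centralized, shared system state, visible to every function that uses it.
    system_state = {"units": "metric"}

    def switch_to_imperial():
        system_state["units"] = "imperial"     # one function changes the shared state

    def format_temperature(celsius):
        # ...and another function's output changes in a way its callers may not expect.
        if system_state["units"] == "metric":
            return f"{celsius:.1f} C"
        return f"{celsius * 9 / 5 + 32:.1f} F"

    print(format_temperature(20))              # 20.0 C
    switch_to_imperial()
    print(format_temperature(20))              # 68.0 F, an unanticipated change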

A functional approach to design is therefore most likely to be successful when the amount of system

state information is minimized and information sharing is explicit. Systems whose responses depend on

a single stimulus or input and which are not affected by input histories are naturally function-oriented.

Many transaction-processing systems and business data-processing systems fall into this class. In


essence, they are concerned with record processing where the processing of one record is not

dependent on any previous processing.


Que: 3. Explain why it is important to describe software designs.

Ans:

Importance of describing Software Design: There are many aspects to software design, and many

varying methodologies. When attempting to produce any piece of software (and particularly a large and

complex one) it is vital to consider several areas. Many people only look at a small component of this

when "designing" their programs. Why? Either they are not aware the other components require

design, or they cannot be bothered to do it.

The main components in software include:

- required functionality

- additional (optional) functionality

- user interface

- internal data structures

- external data structures

- program/object structure

Without applying design principles to these and often other areas, you end up with a lot of common

software problems: scope creep, poor functionality, unwieldy or difficult to use UI, poor performance,

over-sized data files, overly complex processing, impossible to maintain code.

"Design" of required and optional functionality is traditionally the Specification phase of a

project/program. The remaining areas should occur during the Design Phase of a project and be revised

during Implementation (or post-Prototype). The exact timing and processes will

depend on the methodology being used, but all should occur.

In the early stages of a project or program, the "functionality design" or specifications need to occur.

Without specifications, the development is unguided. It is difficult to know when the program is

complete - and impossible to accurately measure progress. It is also difficult for a programmer to be

motivated when the task is not well defined.


It is just as important to perform this stage when writing software for yourself as it is when writing for a

client.

User interface design is one of the most considered aspects of software design. Everybody is aware that

without a usable interface, no software product will be widely used. It is important to plan the user

interface, and discuss or demonstrate it to the end users - something that you may consider

appropriate may not fit well with the way they expect your product to work.

Even if coding for yourself, plan what information needs to be displayed and entered, how it relates to

other information, and how to efficiently present that information.

Without performing UI design (whether graphical or command-line), a program will seem

unprofessional, and most likely will have over-crowded and difficult to use screens.

Program design refers to the structure of the program. The methods here will vary greatly (particularly

for OO vs procedural languages). A well designed program will be easier to implement; easier to

distribute; and easier to maintain.

Internal and external data structures refer to how the data used by your program is held and handled

within the program, or when stored for later use, respectively. Well designed data structures make it

easier to handle the information within your program. More importantly, they make your program

more flexible and easier to add or change functionality. They can also have a huge impact on the

performance of your completed application.

Using design patterns may help solve certain problems, but if used improperly, they add complexity

where not required. However, when used correctly and as part of the complete process, design

patterns can save a lot of work in the program and data structure design areas.

Many tools, techniques and methodologies exist to aid with design - and not all are appropriate for all

situations. However, the design of a product is of vital importance for it to be complete and usable, and

robust.


Que: 4. Write an overview on the Rational Unified Process.

Ans:

The Rational Unified Process or RUP product is a software engineering process. It provides a disciplined

approach to assigning tasks and responsibilities within a development organization. Its goal is to

ensure the production of high-quality software that meets the needs of its end users within a

predictable schedule and budget.

UML has become a widely accepted, standard notation for object-oriented architecture and design. The

widespread acceptance of UML allows developers to perform system design and provide design

documentation in a consistent and familiar manner. The standardization reduces the need for

developers to learn new notational techniques and improves communication among the development

team and stakeholders. The Rational Rose software suite is a GUI or visual modeling tool available from

Rational Software that lets developers model a problem, its design, implementation, and indeed the

entire development process, all the way through testing and configuration management, using the

UML notation. The Rational suite is arguably one of the most important demonstrations of an approach

that reflects Osterweil’s famous aphorism that “software processes are software, too” (Osterweil

1987).

This development product is widely used, especially for e-business applications. Krutchen (2003)

provides an excellent overview of the approach and its software engineering motivations. Jacobson, Booch, and Rumbaugh (1999) give a detailed description of the Rational Rose product and its

associated development process. Tool mentors are provided with the actual product that can be used

as a sort of “electronic coach on software engineering” (Krutchen 2003). For a simple but useful

introductory tutorial about how to use at least a part of the Rational Rose CASE tool, with detailed

discussions and illustrations of the Rose tools and windows for use case diagrams and class diagrams,

see cse.dmu.ac.uk/Modules. Refer to Booch, Jacobson, and Rumbaugh (1998) for UML and Larman

(2001) for UML object-based design and analysis. The Web site, http://www.agilemodeling.com,

provides useful guidelines on UML diagramming tools.

The rational unified process (RUP) is built around visual software support for what its designers believe

are the essential best practices for effective software development, namely:


■ Iterative development. The iterative, Boehm-style, spiral approach is intended to mitigate

development risk by using a combination of early implementation and requirements testing and

modification in order to expose requirements errors early.

■ So-called requirements management. The requirements management objective specifically

addresses evaluating and tracking the effect of changes to the requirements.

■ Use of component-based software architectures. Component-based development allows the use (or

reuse) of commercially available system components and ultimately continuous (re)development, but

involves the complexities of gluing the components together. This is also highly consistent with the

fundamental principle of separation of concerns.

■ Use of tools that support visual design of the system, continuous verification, and change

management. Intelligently designed visual modeling tools help manage and share development

artifacts, allow differing levels of design resolution, and support classic UML artifacts such as use cases and

scenarios. Computer-supported testing tools simplify verification. Automated coordination tools

organize the workflow of system requirements, a coordination that involves a complex network of

development activities and artifacts executed by multiple development teams at possibly many sites,

and coordinate the process iterations and product releases.

The RUP constitutes a complete framework for software development. The elements of the RUP (not of

the problem being modeled) are the workers who implement the development, each working on some

cohesive set of development activities and responsible for creating specific development artifacts. A

worker is like a role that a team member plays, and a member can play many roles (wear many hats) during the

development. For example, a designer is a worker and the artifact that the designer creates may be a

class definition. An artifact supplied to a customer as part of the product is a deliverable. The artifacts

are maintained in the Rational Rose tools, not as separate paper documents. A workflow is defined as a

“meaningful sequence of activities that produce some valuable result” (Krutchen 2003). The

development process has nine core workflows: business modeling; requirements; analysis and design;

implementation; test; deployment; configuration and change management; project management; and

environment. Other RUP elements, such as tool mentors, simplify training in the use of the Rational

Rose system. These core workflows are spread out over the four phases of development:

■ The inception phase defines the vision of the actual user end-product and the scope of the project.


■ The elaboration phase plans activities and specifies the architecture.

■ The construction phase builds the product, modifying the vision and the plan as it proceeds.

■ The transition phase transitions the product to the user (delivery, training, support, maintenance).

In a typical two-year project, the inception and transition might take a total of five months, with a year

required for the construction phase and the rest of the time for elaboration. It is important to

remember that the development process is iterative, so the core workflows are repeatedly executed

during each iterative visitation to a phase. Although particular workflows will predominate during a

particular type of phase (such as the planning and requirements workflows during inception), they will

also be executed during the other phases. For example, the implementation workflow will peak during

construction, but it is also a workflow during elaboration and transition. The goals and activities for

each phase will be examined in some detail.

The purpose of the inception phase is achieving “concurrence among all stakeholders” on the

objectives for the project. This includes the project boundary and its acceptance criteria. Especially

important is identifying the essential use cases of the system, which are defined as the “primary

scenarios of behavior that will drive the system’s functionality.” Based on the usual spiral model

expectation, the developers must also identify a candidate or potential architecture as well as

demonstrate its feasibility on the most important use cases. Finally, cost estimation, planning, and risk

estimation must be done. Artifacts produced during this phase include the vision statement for the

product; the business case for development; a preliminary description of the basic use cases; business

criteria for success such as revenues expected from the product; the plan; and an overall risk

assessment with risks rated by likelihood and impact. A throw-away prototype may be developed for

demonstration purposes but not for architectural purposes.

The following elaboration phase “ensures that the architecture, requirements, and plans are stable

enough, and the risks are sufficiently mitigated, that [one] can reliably determine the costs and

schedule” for the project. The outcomes for this phase include an 80 percent complete use case model,

nonfunctional performance requirements, and an executable architectural prototype. The components

of the architecture must be understood in sufficient detail to allow a decision to make, buy, or reuse

components, and to estimate the schedule and costs with a reasonable degree of confidence. Krutchen

observes that “a robust architecture and an understandable plan are highly correlated, so one of the


critical qualities of the architecture is its ease of construction.” Prototyping entails integrating the

selected architectural components and testing them against the primary use case scenarios.

The construction phase leads to a product that is ready to be deployed to the users. The transition

phase deploys a usable subset of the system at an acceptable quality to the users, including beta

testing of the product, possible parallel operation with a legacy system that is being replaced, and

software staff and user training.

Software architecture is concerned with the major elements of the design, including their structure,

organization, and interfaces. The representation of architecture traditionally uses multiple views – for

example, in the architectural plans for a building: floor plans, electrical layout, plumbing, elevations,

etc. (Krutchen 2003). The same holds for RUP architectural plans, which include the logical view of the

system, an organized view of the system functionality, and concurrency issues.

RUP recommends a so-called 4 + 1 view of architecture. The logical view addresses functional

requirements. The implementation view addresses the software module organization and issues such

as reuse and off-the-shelf components. The process view addresses concurrency, response time,

scalability, etc. The deployment view maps the components to the platforms. The use-case view is

initially used to define and design the architecture, then subsequently to validate the other views.

Finally, the architecture is demonstrated by building it. This prototype is the most important

architectural artifact; the final system evolves from this prototype.

The RUP is driven by use cases that are used to understand a problem in a way accessible to developers

and users. A use case may be defined as a “sequence of actions a system performs that yields an

observable result that is of value to a particular actor” (Krutchen 2003); an actor is any person or

external system that interacts with the proposed system. Put another way, a use case accomplishes an

actor’s goal. The requirement that the action be useful to the end-user establishes an appropriate level

of granularity for the requirements so that they are understandable and meaningful to the users.


Que: 5. Describe the Capability Maturity Model.

Ans:

The Capability Maturity Model

A set of quality assurance standards was originally developed by the International Standards

Organization (ISO) under the designation ISO 9000 in 1987 and revised subsequently in 1994 and 2000.

The purpose of these standards was to define the quality system that a business or industrial

organization would need to follow to ensure the consistency and quality of its products. A procedure

for certifying that a business met these standards was also established so that potential customers

would have some confidence in the organization’s processes and products. The Capability Maturity

Model developed by the Software Engineering Institute (SEI) at Carnegie–Mellon University is a model

for identifying the organizational processes required to ensure software process quality.

The Capability Maturity Model (CMM) is a multistage, process definition model intended to

characterize and guide the engineering excellence or maturity of an organization’s software

development processes. The model prescribes practices for “planning, engineering, and managing

software development and maintenance” and addresses the usual goals of organizational system

engineering processes: namely, “quality improvement, risk reduction, cost reduction, predictable

process, and statistical quality control” (Oshana & Linger 1999).

However, the model is not merely a program for how to develop software in a professional,

engineering-based manner; it prescribes an “evolutionary improvement path from an ad hoc, immature

process to a mature, disciplined process” (Oshana & Linger 1999). Walnau, Hissam, and Seacord (2002)

observe that the ISO and CMM process standards “established the context for improving the practice of

software development” by identifying roles and behaviors that define a software factory.


The CMM identifies five levels of software development maturity in an organization:

· At level 1, the organization’s software development follows no formal development process.

· The process maturity is said to be at level 2 if software management controls have been introduced

and some software process is followed. A decisive feature of this level is that the organization’s process

is supposed to be such that it can repeat the level of performance that it achieved on similar successful

past projects. This is related to a central purpose of the CMM: namely, to improve the predictability of

the development process significantly. The major technical requirement at level 2 is incorporation of

configuration management into the process. Configuration management (or change management, as it

is sometimes called) refers to the processes used to keep track of the changes made to the development product (including all the intermediate deliverables) and the multifarious impacts of these

changes. These impacts range from the recognition of development problems; identification of the

need for changes; alteration of previous work; verification that agreed upon modifications have

corrected the problem and that corrections have not had a negative impact on other parts of the

system; etc.

· An organization is said to be at level 3 if the development process is standard and consistent. The

project management practices of the organization are supposed to have been formally agreed on,

defined, and codified at this stage of process maturity.

· Organizations at level 4 are presumed to have put into place qualitative and quantitative measures of

organizational process. These process metrics are intended to monitor development and to signal

trouble and indicate where and how a development is going wrong when problems occur.

· Organizations at maturity level 5 are assumed to have established mechanisms designed to ensure

continuous process improvement and optimization. The metric feedbacks at this stage are not just

applied to recognize and control problems with the current project as they were in level-4

organizations. They are intended to identify possible root causes in the process that have allowed the

problems to occur and to guide the evolution of the process so as to prevent the recurrence of such

problems in future projects, such as through the introduction of appropriate new technologies and

tools.

The higher the CMM maturity level is, the more disciplined, stable, and well-defined the development

process is expected to be and the environment is assumed to make more use of “automated tools and


the experience gained from many past successes” (Zhiying 2003). The staged character of the model

lets organizations progress up the maturity ladder by setting process targets for the organization. Each

advance reflects a further degree of stabilization of an organization’s development process, with each

level “institutionalizing a different aspect” of the process (Oshana & Linger 1999).

Each CMM level has associated key process areas (KPA) that correspond to activities that must be

formalized to attain that level. For example, the KPAs at level 2 include configuration management,

quality assurance, project planning and tracking, and effective management of subcontracted software.

The KPAs at level 3 include intergroup communication, training, process definition, product

engineering, and integrated software management. Quantitative process management and

development quality define the required KPAs at level 4. Level 5 institutionalizes process and technology change management and optimizes defect prevention.

The CMM model is not without its critics. For example, Hamlet and Maybee (2001) object to its

overemphasis on managerial supervision as opposed to technical focus. They observe that agreement

on the relation between the goodness of a process and the goodness of the product is by no means

universal. They present an interesting critique of CMM from the point of view of the so-called process

versus product controversy. The issue is to what extent software engineers should focus their efforts on

the design of the software product being developed as opposed to the characteristics of the software

process used to develop that product.

The usual engineering approach has been to focus on the product, using relatively straightforward

processes, such as the standard practice embodied in the Waterfall Model, adapted to help organize

the work on developing the product. A key point of dispute is that no one has really demonstrated

whether a good process leads to a good product. Indeed, good products have been developed with

little process used, and poor products have been developed under the guidance of a lot of purportedly

good processes. Furthermore, adopting complex managerial processes to oversee development may

distract from the underlying objective of developing a superior product.


Que: 6. Explain the round-trip problem solving approach.

Ans:

Round-Trip Problem Solving Approach

The software engineering process represents a round-trip framework for problem solving in a business

context in several senses.

· The software engineering process is a problem-solving process entailing that software engineering

should incorporate or utilize the problem-solving literature regardless of its interdisciplinary sources.

· The value of software engineering derives from its success in solving business and human problems.

This entails establishing strong relationships between the software process and the business metrics

used to evaluate business processes in general.

· The software engineering process is a round-trip approach. It has a bidirectional character, which

frequently requires adopting forward and reverse engineering strategies to restructure and reengineer

information systems. It uses feedback control loops to ensure that specifications are accurately

maintained across multiple process phases; reflective quality assurance is a critical metric for the

process in general.

· The non-terminating, continuing character of the software development process is necessary to

respond to ongoing changes in customer requirements and environmental pressures.


Master of Computer Application (MCA) – Semester 3

Software Engineering

Assignment Set – 2

Que: 1. Describe the following with respect to Software Design:

a. The design process

b. Design Methods

c. Design description

d. Design strategies.

Ans:

The design process

A general model of a software design is a directed graph. The target of the design process is the

creation of such a graph without inconsistencies. Nodes in this graph represent design entities such as processes, functions or types. The links represent relations between these design entities, such as 'calls', 'uses' and so on. Software designers do not arrive at a finished design graph immediately

but develop the design iteratively through a number of different versions. The design process involves

adding formality and detail as the design is developed with constant backtracking to correct earlier, less

formal, designs. The starting point is an informal design, which is refined by adding information to

make it consistent and complete as shown in figure below.


The progression from an informal to a detailed design

The figure showing a general model of the design process suggests that the stages of the design process are

sequential. In fact, design process activities proceed in parallel. However, the activities shown are all

part of the design process for large software systems. These design activities are:

(1) Architectural design: the sub-systems making up the system and their relationships are identified and documented.

(2) Abstract specification: for each sub-system, an abstract specification of the services it provides and the constraints under which it must operate is produced.

(3) Interface design: for each sub-system, its interface with other sub-systems is designed and documented. This interface specification must be unambiguous, as it allows the sub-system to be used without knowledge of the sub-system operation.

(4) Component design: services are allocated to different components and the interfaces of these components are designed.

(5) Data structure design: the data structures used in the system implementation are designed in detail and specified.

(6) Algorithm design: the algorithms used to provide services are designed in detail and specified.

A general model of the design process

This process is repeated for each sub-system until the components identified can be mapped directly

into programming language components such as packages, procedures or functions.

[Figure: the design process stages (Requirement Specification, Architectural Design, Abstract Specification, Interface Design, Component Design, Data Structure Design, Algorithm Design) and their outputs (System Architecture, Software Specification, Interface Specification, Component Specification, Structure Specification, Algorithm Specification).]

Design Methods

A more methodical approach to software design is proposed by structured methods, which are sets of

notations and guidelines for software design. Budgen (1993) describes some of the most commonly

used methods such as structured design, structured systems analysis, Jackson System Development

and various approaches to object-oriented design.

The use of structured methods involves producing large amounts of diagrammatic design

documentation. CASE tools have been developed to support particular methods. Structured methods

have been applied successfully in many large projects. They can deliver significant cost reductions

because they use standard notations and ensure that standard design documentation is produced.

A mathematical method (such as the method for long division) is a strategy that will always lead to the

same result irrespective of who applies the method. The term ‘structured methods’ suggests,

therefore, that designers should normally generate similar designs from the same specification. A

structured method includes a set of activities, notations, report formats, rules and design guidelines. So

structured methods often support some of the following models of a system:

(1) A data-flow model, where the system is modeled using the data transformations which take place as it is processed.

(2) An entity-relation model, which is used to describe the logical data structures being used.

(3) A structural model where the system components and their interactions are documented.

(4) If the method is object-oriented it will include an inheritance model of the system, a model of how

objects are composed of other objects and, usually, an object-use model which shows how objects are

used by other objects.

Particular methods supplement these with other system models, such as state transition diagrams, entity

life histories that show how each entity is transformed as it is processed and so on. Most methods

suggest that a centralized repository for system information, or data dictionary, should be used. No one

method is better or worse than other methods: the success or otherwise of methods often depends on

their suitability for an application domain.


Design description

A software design is a model of a system that has many participating entities and relationships. This design

is used in a number of different ways. It acts as a basis for detailed implementation; it serves as a

communication medium between the designers of sub-systems; it provides information to system

maintainers about the original intentions of the system designers, and so on.

Designs are documented in a set of design documents that describes the design for programmers and

other designers. There are three main types of notation used in design documents:

(1) Graphical notations: These are used to display the relationships between the components making

up the design and to relate the design to the real-world system it is modeling. A graphical view of a

design is an abstract view. It is most useful for giving an overall picture of the system.

(2) Program description languages: these languages (PDLs) use control and structuring constructs based

on programming language constructs but also allow explanatory text and (sometimes) additional types

of statement to be used. These allow the intention of the designer to be expressed rather than the

details of how the design is to be implemented.

(3) Informal text: much of the information that is associated with a design cannot be expressed

formally. Information about design rationale or non-functional considerations may be expressed using

natural language text.

All of these different notations may be used in describing a system design.

Design strategies

The most commonly used software design strategy has involved decomposing the design into functional components, with system state information held in a shared data area. Only since the late 1980s has the alternative, object-oriented design, been widely adopted.

Two design strategies are summarized as follows:


(1) Functional design: The system is designed from a functional viewpoint, starting with a high-level

view and progressively refining this into a more detailed design. The System State is centralized and

shared between the functions operating on that state. Methods such as Jackson Structured

Programming and the Warnier-Orr method are techniques of functional decomposition where the

structure of the data is used to determine the functional structure used to process that data.

(2) Object-oriented design: The system is viewed as a collection of objects rather than as functions.

Object-oriented design is based on the idea of information hiding and has been described by Meyer,

Booch, Jacobson and many others. JSD is a design method that falls somewhere between function-

oriented and object-oriented design.

In an object-oriented design, the System State is decentralized and each object manages its own state

information. Objects have a set of attributes defining their state and operations, which act on these

attributes. Objects are usually members of an object class whose definition defines attributes and

operations of class members. These may be inherited from one or more super-classes so that a class

definition need only set out the differences between that class and its super-classes. Objects

communicate by exchanging messages; most object communication is achieved by an object calling a procedure associated with another object.

There is no ‘best’ design strategy, which is suitable for all projects and all types of application.

Functional and object-oriented approaches are complementary rather than opposing techniques.

Software engineers select the most appropriate approach for each stage in the design process. In fact,

large software systems are such complex entities that different approaches might be used in the design of

different parts of the system.

An object-oriented approach to software design seems to be natural at the highest and lowest levels of

system design. Using different approaches to design may require the designer to convert his or her

design from one model to another. Many designers are not trained in multiple approaches so prefer to

use either object-oriented or functional design.


Que: 2. Describe the following with respect to Software Testing:

a. Control Structure Testing

b. Black Box Testing

c. Boundary Value Analysis

d. Testing GUIs

e. Testing Documentation and Help Facilities

Ans:

Control Structure Testing

The basis path testing technique described in Section 17.4 is one of a number of techniques for control

structure testing. Although basis path testing is simple and highly effective, it is not sufficient in itself. In this section, other variations on control structure testing are discussed. These broaden testing coverage and improve the quality of white-box testing.

1. Condition Testing

2. Data Flow Testing

3. Loop Testing
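As an illustration of the third technique, loop testing for a simple loop typically exercises the loop with zero, one, two, a typical number, and around the maximum number of passes. A minimal sketch (the function under test and its limit n = 100 are assumptions):

    # Hypothetical loop under test: sums at most n values from the input list.
    def sum_first(values, n=100):
        total = 0
        for v in values[:n]:                   # the simple loop being exercised
            total += v
        return total

    # Loop-testing cases: skip the loop, one pass, two passes, typical, n-1, n, n+1.
    for count in (0, 1, 2, 50, 99, 100, 101):
        data = [1] * count
        expected = min(count, 100)             # the loop never makes more than n passes
        assert sum_first(data) == expected, f"failed at {count} iterations"
    print("loop tests passed")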

Black-Box Testing

Black-box testing, also called behavioral testing, focuses on the functional requirements of the

software. That is, black-box testing enables the software engineer to derive sets of input conditions

that will fully exercise all functional requirements for a program. Black-box testing is not an alternative

to white-box techniques. Rather, it is a complementary approach that is likely to uncover a different

class of errors than white-box methods.

Black-box testing attempts to find errors in the following categories:

(1) incorrect or missing functions, (2) interface errors, (3) errors in data structures or external data base

access, (4) behavior or performance errors, and (5) initialization and termination errors.


Unlike white-box testing, which is performed early in the testing process, black-box testing tends to be

applied during later stages of testing. Because black-box testing purposely disregards control structure,

attention is focused on the information domain. Tests are designed to answer the following questions:

· How is functional validity tested?

· How is system behavior and performance tested?

· What classes of input will make good test cases?

· Is the system particularly sensitive to certain input values?

· How are the boundaries of a data class isolated?

· What data rates and data volume can the system tolerate?

· What effect will specific combinations of data have on system operation?

By applying black-box techniques, we derive a set of test cases that satisfy the following criteria: (1)

test cases that reduce, by a count that is greater than one, the number of additional test cases that

must be designed to achieve reasonable testing and (2) test cases that tell us something about the

presence or absence of classes of errors, rather than an error associated only with the specific test at

hand.

Boundary Value Analysis

For reasons that are not completely clear, a greater number of errors tends to occur at the boundaries

of the input domain rather than in the "center."

It is for this reason that boundary value analysis (BVA) has been developed as a testing technique.

Boundary value analysis leads to a selection of test cases that exercise bounding values.

Boundary value analysis is a test case design technique that complements equivalence partitioning.

Rather than selecting any element of an equivalence class, BVA leads to the selection of test cases at the "edges" of the class. Rather than focusing solely on input conditions, BVA derives test cases from

the output domain as well.

Guidelines for BVA are similar in many respects to those provided for equivalence partitioning:


1. If an input condition specifies a range, bounded by values a and b, test cases should be designed with

values a and b and just above and just below a and b.

2. If an input condition specifies a number of values, test cases should be developed that exercise the

minimum and maximum numbers. Values just above and below minimum and maximum are also

tested.

3. Apply guidelines 1 and 2 to output conditions. For example, assume that a temperature vs. pressure

table is required as output from an engineering analysis program. Test cases should be designed to

create an output report that produces the maximum (and minimum) allowable number of table entries.

4. If internal program data structures have prescribed boundaries (e.g., an array has a defined limit of

100 entries), be certain to design a test case to exercise the data structure at its boundary.

Most software engineers intuitively perform BVA to some degree. By applying these guidelines,

boundary testing will be more complete, thereby having a higher likelihood for error detection.
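As a concrete sketch of guideline 1 (the function under test and its valid range of a = 1 to b = 100 are assumptions for illustration), BVA selects test cases at, just below, and just above each bound:

    # Hypothetical function under test: accepts an integer quantity in the range 1..100.
    def accept_quantity(q):
        return 1 <= q <= 100

    # Boundary value analysis cases for the range [1, 100] and their expected results.
    boundary_cases = {
        0: False,     # just below a
        1: True,      # a
        2: True,      # just above a
        99: True,     # just below b
        100: True,    # b
        101: False,   # just above b
    }

    for value, expected in boundary_cases.items():
        assert accept_quantity(value) == expected, f"boundary case {value} failed"
    print("all boundary value tests passed")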

Testing GUIs

Graphical user interfaces (GUIs) present interesting challenges for software engineers. Because of

reusable components provided as part of GUI development environments, the creation of the user

interface has become less time consuming and more precise. But, at the same time, the complexity of

GUIs has grown, leading to more difficulty in the design and execution of test cases.

Because many modern GUIs have the same look and feel, a series of standard tests can be derived.

Finite state modeling graphs may be used to derive a series of tests that address specific data and

program objects that are relevant to the GUI.

Due to the large number of permutations associated with GUI operations, testing should be

approached using automated tools. A wide array of GUI testing tools has appeared on the market over

the past few years.
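A minimal sketch of the finite state modeling idea (the states, events and dialog are invented for illustration): the GUI is described as states and transitions, and each automated test drives an event sequence through the model and checks the state reached.

    # Hypothetical finite state model of a small "save file" dialog.
    transitions = {
        ("editing", "open_save_dialog"): "save_dialog",
        ("save_dialog", "enter_filename"): "filename_entered",
        ("filename_entered", "click_save"): "saved",
        ("save_dialog", "click_cancel"): "editing",
    }

    def run_events(start_state, events):
        state = start_state
        for event in events:
            state = transitions[(state, event)]    # an illegal event raises KeyError
        return state

    # Each test case is an event sequence plus the state the GUI should end in.
    assert run_events("editing", ["open_save_dialog", "click_cancel"]) == "editing"
    assert run_events("editing",
                      ["open_save_dialog", "enter_filename", "click_save"]) == "saved"
    print("GUI state-model tests passed")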


Testing Documentation and Help Facilities

The term software testing conjures images of large numbers of test cases prepared to exercise

computer programs and the data that they manipulate. From the definition of software, it is important

to note that testing must also extend to the third element of the software configuration: documentation.

Errors in documentation can be as devastating to the acceptance of the program as errors in data or

source code. Nothing is more frustrating than following a user guide or an on-line help facility exactly

and getting results or behaviors that do not coincide with those predicted by the documentation. It is for this reason that documentation testing should be a meaningful part of every software test

plan.

Documentation testing can be approached in two phases. The first phase, review and inspection,

examines the document for editorial clarity. The second phase, live test, uses the documentation in

conjunction with the use of the actual program.

Surprisingly, a live test for documentation can be approached using techniques that are analogous to

many of the black-box testing methods discussed in Section 17.6. Graph-based testing can be used to

describe the use of the program; equivalence partitioning and boundary value analysis can be used to

define various classes of input and associated interactions. Program usage is then tracked through the

documentation. The following questions should be answered during both phases:

· Does the documentation accurately describe how to accomplish each mode of use?

· Is the description of each interaction sequence accurate?

· Are examples accurate?

· Are terminology, menu descriptions, and system responses consistent with the actual program?

· Is it relatively easy to locate guidance within the documentation?

· Can troubleshooting be accomplished easily with the documentation?

· Are the document table of contents and index accurate and complete?


· Is the design of the document (layout, typefaces, indentation, graphics) conducive to understanding

and quick assimilation of information?

· Are all software error messages displayed for the user described in more detail in the document? Are

actions to be taken as a consequence of an error message clearly delineated?

· If hypertext links are used, are they accurate and complete?

· If hypertext is used, is the navigation design appropriate for the information required?

The only viable way to answer these questions is to have an independent third party (e.g., selected

users) test the documentation in the context of program usage. All discrepancies are noted and areas

of document ambiguity or weakness are defined for potential rewrite.


Que: 3. Draw a possible data flow diagram of the system design for the following application: part of an

electronic mail system which presents a mail form to a user, accepts the completed form and sends it

to the identified destination.

Ans:


Que: 4. What are the main characteristics of successful teams?

Ans:

The Main Characteristics of A Successful Team:

A team can be defined as a group of individuals who have been organized for the purpose of working together to achieve a set of objectives that cannot be effectively achieved by the individuals working alone. The effectiveness of a team may be measured in terms ranging from its outcomes to customer acceptance, team capability, and individual satisfaction. Organizational and individual inputs significantly affect the team’s inputs. The team work process is characterized by the efforts exerted towards the goal; the knowledge and skills utilized; the strategy adopted; and the dynamics of the group. Team construction and management are a critical challenge in software-driven problem solving. They require:

· Goal identification

· Strategy definition

· Task management

· Time management

· Allocation of resources

· Interdisciplinary team composition

· Span of control

· Training

· Team communication

· Team cohesiveness

· Quality assurance and evaluation

The main characteristics of successful teams include:

· Shared goal. There must be a shared awareness of the common team goal among all the team members. This shared goal is the objective that directs, guides, and integrates the individual efforts to achieve the intended results.

· Effective collaboration. A team must work as a team. This entails collaborating, individuals making contributions, exchanging their ideas and knowledge, and building interpersonal relationships and trust. The project environment should facilitate and encourage effective collaboration and interoperation.


· Individual capabilities. Each team member must be trained and guided so as to be able to cooperate with the other team members towards the common goal.

Some other characteristics of well-functioning teams include:

· Sharing the mission and goal

· Disseminating complete information about schedules, activities and priorities

· Developing an understanding of the roles of each team member

· Communicating clear definitions of authority and decision-making lines

· Understanding the inevitability of conflicts and the need to resolve them

· Efficiently utilizing individual capabilities

· Effectively deploying meetings

· Accurately evaluating the performance of each team member

· Continually updating individual skills to meet evolving needs

Additional indicators of effective operation include a high level of project management involvement and participation; a focus on purpose; shared responsibilities; a high degree of communication; strategically oriented thinking; and rapid response to challenges and opportunities. These team performance characteristics require every team member to contribute ideas; operate in an environment that contains a diversity of skills; appreciate the contributions of others; share knowledge; actively inquire to enhance understanding; participate energetically; and exercise flexibility.


Que 5: Describe Classic Invalid assumptions in the context of Process Life Cycle models.

Ans:

Four unspoken assumptions that have played an important role in the history of software development are considered next.1.

First Assumption: Internal or External Drivers

: The first unspoken assumption isthat software problems are primarily driven by internal software factors. Granted thissupposition, the focus of problem solving will necessarily be narrowed to the softwarecontext, thereby reducing the role of people, money, knowledge, etc. in terms of theirpotential to influence the solution of problems.2.

Second Assumption: Software or Business Processes

: A second significant unspoken assumption has been that the software development process is independent of the business processes in organizations. This assumption implied that it waspossible to develop a successful software product independently of the businessenvironment or the business goals of a firm. This led most organizations and businessfirms to separate software development work, people, architecture, and planning frombusiness processes. This separation not only isolated the software-related activities, but also led to different goals, backgrounds, configurations, etc. for software as opposed tobusiness processes.3.

Third Assumption: Processes or Projects

: A third unspoken assumption was that the software project was separate from the software process. Thus, a software processwas understood as reflecting an area of computer science concern, but a softwareproject was understood as a business school interest. If one were a computer sciencespecialist, one would view a quality software product as the outcome of a development process that involved the use of good algorithms, data base design, and code. If onewere an MIS specialist, one would view a successful software system as the result of effective software economics and software management.4.

4. Fourth Assumption: Process Centred or Architecture Centred

There are currently two broad approaches in software engineering; one is process centred and the other is architecture centred. In process-centred software engineering, the quality of the product is seen as emerging from the quality of the process. This approach reflects the concerns and interests of industrial engineering, management, and standardized or systematic quality assurance approaches such as the Capability Maturity Model and ISO. The viewpoint is that obtaining quality in a product requires adopting and implementing a correct problem-solving approach. If a product contains an error, one should be able to attribute and trace it to an error that occurred somewhere during the application of the process by carefully examining each phase or step in the process.
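
To make the process-centred viewpoint concrete, here is a minimal sketch of how a product defect might be attributed back to the process phase in which it was introduced. The phase names and defect records are assumptions invented for illustration; they are not prescribed by CMM, ISO, or any particular process standard.

```python
# Illustrative sketch only: phase names and defect records are invented,
# not taken from any specific quality assurance standard.

PHASES = ["requirements", "design", "implementation", "testing", "deployment"]

# Each defect record notes where it was detected and where it was introduced.
defects = [
    {"id": "D-101", "detected_in": "testing", "introduced_in": "design"},
    {"id": "D-102", "detected_in": "deployment", "introduced_in": "requirements"},
]

def defects_introduced_by_phase(defect_list):
    """Count, per process phase, how many defects originated there.

    This reflects the process-centred view that every product error should be
    traceable to a step in the process."""
    report = {phase: 0 for phase in PHASES}
    for defect in defect_list:
        report[defect["introduced_in"]] += 1
    return report

if __name__ == "__main__":
    for phase, count in defects_introduced_by_phase(defects).items():
        print(f"{phase}: {count} defect(s) introduced")
```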


Que 6: Describe the following:

a. Importance of people in problem solving process.

b. Human driven software engineering.

Ans: 6(a).

Importance of people in problem solving process

It is useful to have a structure to follow to make sure that nothing is overlooked. Nothing here is likely to be brand new to anyone, but it is the explicit acknowledgement of the process, and the reminder to follow it, that helps problems get solved.

1. Problem Definition

The normal process for solving a problem initially involves defining the problem you want to solve. You need to decide what you want to achieve and write it down. Often people keep the problem in their head as a vague idea and can get so lost in what they are trying to solve that no solution seems to fit. Merely writing down the problem forces you to think about what you are actually trying to solve and how much you want to achieve. The first part of the process involves not only writing down the problem, but also checking that you are answering the right problem. It is a check-step to ensure that you do not answer a side issue or solve only the part of the problem that is easiest to solve. People often adopt the most immediate solution to the first problem definition they find, without spending time checking that the problem is the right one to answer.

2. Problem Analysis

The next step in the process is usually to check where we are: what the current situation is and what makes it a problem. For example, what are the benefits of the current product/service/process, and why was it designed that way? Understanding where the problem is coming from, how it fits in with current developments and what the current environment is, is crucial when working out whether a solution will actually work or not. Similarly, you must have a set of criteria by which to evaluate any new solutions, or you will not know whether an idea is workable or not. This section of the problem solving process ensures that time is spent stepping back and assessing the current situation and what actually needs to be changed.

After this investigation, it is often good to go back one step to reconfirm that your problem definition is still valid. Frequently after the investigation people discover that the problem they really want to answer is very different from their original interpretation of it.

3. Generating possible Solutions

When you have discovered the real problem that you want to solve and have investigated the climate into which the solution must fit, the next stage is to generate a number of possible solutions. At this stage you should concentrate on generating many solutions and should not evaluate them at all. Very often an idea that would have been discarded immediately can, when evaluated properly, be developed into a superb solution. At this stage you should not pre-judge any potential solutions, but should treat each idea as a new idea in its own right, worthy of consideration.

4. Analyzing the Solutions

This section of the problem solving process is where you investigate the various factors about each of the potential solutions. You note down the good and bad points and anything else relevant to each solution. Even at this stage you are not evaluating the solutions, because if you did you might decide not to write down the valid good points of an idea that you think, overall, will not work. However, by writing down its advantages you might discover that it has a totally unique advantage; only by discovering this might you choose to put in the effort to develop the idea so that it will work.

5. Selecting the best Solution(s)

This is the section where you look through the various influencing factors for each possible solution and decide which solutions to keep and which to disregard. You look at each solution as a whole and use your judgment to decide whether or not to use it. In Innovation Toolbox, you can vote using either a Yes/No/Interesting process or on a sliding scale, depending on how good the idea is. Sometimes pure facts and figures dictate which ideas will work and which will not; in other situations it will be feelings and intuition that decide. Remember that intuition is really a lifetime's experience and judgment compressed into a single decision.
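
As a rough illustration of this selection step, the sketch below scores each candidate solution on a simple 0–10 sliding scale and keeps the top-rated ones. The solution names, scores and threshold are invented for the example; this is not the actual Innovation Toolbox interface, whose details are not described here.

```python
# Hypothetical sketch of the selection step: solutions, scores and the
# threshold below are invented for illustration only.

candidate_solutions = {
    "Rewrite the reporting module": 8,   # score on a 0-10 sliding scale
    "Buy an off-the-shelf tool": 5,
    "Do nothing and monitor": 2,
}

def select_solutions(scores, threshold=6):
    """Keep solutions whose score meets the threshold, best first."""
    kept = [(name, score) for name, score in scores.items() if score >= threshold]
    return sorted(kept, key=lambda pair: pair[1], reverse=True)

print(select_solutions(candidate_solutions))
# e.g. [('Rewrite the reporting module', 8)]
```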

6. Planning the next course of action (Next Steps)

This section of the process is where you write down what you are going to do next. Now that you have a potential solution or solutions, you need to decide how you will make the solution happen. This will involve people doing various things at various times in the future, and then confirming that they have been carried out as planned. This stage ensures that the valuable thinking that went into solving the problem becomes reality. This series of Next Steps is the bridge from thinking about the problem to physically solving it.
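
One minimal way to record such Next Steps, assuming a simple list of owners and due dates, is sketched below; all names and dates are made up for illustration.

```python
# Hypothetical sketch: action items, owners and dates are invented examples.
from dataclasses import dataclass
from datetime import date

@dataclass
class ActionItem:
    description: str
    owner: str
    due: date
    done: bool = False

plan = [
    ActionItem("Prototype the chosen solution", "Asha", date(2024, 7, 1)),
    ActionItem("Review prototype with stakeholders", "Ravi", date(2024, 7, 15)),
]

def outstanding(plan_items):
    """Return the items that still have to be carried out, earliest first."""
    return sorted((item for item in plan_items if not item.done),
                  key=lambda item: item.due)

for item in outstanding(plan):
    print(f"{item.due}: {item.description} (owner: {item.owner})")
```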

Ans: 6(b).

Human driven software engineering.

Evolvability is essential for adapting to dynamic and changing requirements in response to feedback from context-aware systems. However, most current context models have limited capability for exploring the human intentions that often drive system evolution. To support service requirements analysis of real-world applications in distributed service environments, this approach focuses on human-intention-driven software evolvability. Requirements analysis via an evolution cycle provides the means of speculating about requirement changes, predicting possible new generations of system behaviour, and assessing the corresponding quality impacts. Furthermore, evolvability metrics can be derived by observing intentions from user contexts.

From the 10th RE conference site (RE'02): "Over the last ten years, RE has moved from an immature software engineering phase to a well-recognized practice and research area spanning the whole system lifecycle. RE requires a variety and richness of skills, processes, methods, techniques and tools. In addition, diversity arises from different application domains ranging from business information systems to real-time process control systems, from traditional to web-based systems as well as from the perspective being system families or not." Synonyms: goal-driven, goal-directed, goal-oriented, goal-based. Note that Goal-Directed is a trademark of Cooper Interactive Design, which is why we will not use that term even though it is more popular than goal-driven; we prefer the neutral term goal-driven.

The Key Ideas

Requirements engineering (RE) is concerned with producing a set of specifications for software systems that satisfy their stakeholders and can be implemented, deployed and maintained. Goal-driven requirements engineering takes the view that requirements should initially focus on the why and how questions rather than on the question of what needs to be implemented. "Traditional" analysis and design methods focused on the functionality of the system to be built and its interactions with users. Instead of asking what the system needs to do, goal-driven methods ask why a certain functionality is needed and how it can be implemented. Thus goal-driven methods give a rationale for system functionality by answering why a certain functionality is needed, while also tracking different implementation alternatives and the criteria for selecting among these alternatives.
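
The following is a minimal, hypothetical sketch of that idea: each goal records why it exists, which implementation alternatives could satisfy it, and which one was chosen on what criterion, so the selected requirement stays traceable to its rationale. The class, goal text and alternatives are invented for this example and are not taken from any specific goal-oriented method.

```python
# Illustrative sketch of goal-driven traceability; names and data are invented.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Goal:
    statement: str                                          # why: the intent to satisfy
    alternatives: List[str] = field(default_factory=list)   # how: candidate ways to satisfy it
    chosen: Optional[str] = None                            # what: the alternative selected
    rationale: Optional[str] = None                         # criterion used for the selection

    def choose(self, alternative: str, criterion: str) -> None:
        """Record which alternative was selected and why, keeping traceability."""
        if alternative not in self.alternatives:
            raise ValueError(f"unknown alternative: {alternative}")
        self.chosen = alternative
        self.rationale = f"selected because: {criterion}"

goal = Goal(
    statement="Customers can track the status of their orders",
    alternatives=["email notifications", "self-service tracking page"],
)
goal.choose("self-service tracking page", "lowest ongoing support cost")
print(f"{goal.statement} -> {goal.chosen} ({goal.rationale})")
```

In a real goal model the alternatives would typically decompose further into sub-goals, but the traceability idea sketched here stays the same.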
