
A Quality Model for Design Patterns

Khashayar Khosravi    Yann-Gaël Guéhéneuc

Summer 2004


Abstract

Design patterns are high-level building blocks that are claimed to promote elegance in object-oriented programs by increasing flexibility, scalability, usability, reusability, and robustness. However, there is some evidence that design patterns do not intrinsically promote quality.

We believe that the problem of quality with design patterns comes both from the design patterns themselves and from their misuse. Unfortunately, little work has so far attempted to study the quality characteristics of design patterns rigorously. The objective of this technical report is to introduce a quality model and metrics that help in assessing the quality characteristics of design patterns and in concluding on design patterns’ quality.

We begin with a summary of definitions of quality and related concepts and introduce the most common and standard quality models. Then, we define the characteristics of the models in detail and present the metrics used to measure programs. Some of the most common characteristics of the quality models introduced are used to develop a quality model to assess and measure the quality characteristics that design patterns claim to possess.


Contents

1 Quality Models
  1.1 Quality
  1.2 Quality Evaluation
    1.2.1 Quality Model
    1.2.2 Quality Factor
    1.2.3 Quality Sub-factor
    1.2.4 Quality Criterion
    1.2.5 Quality Metric
    1.2.6 Internal Quality
    1.2.7 External Quality
    1.2.8 Quality in Use

2 Quality Models
  2.1 Hierarchical Models
    2.1.1 McCall’s Model (1976-7)
    2.1.2 Boehm’s Model (1978)
    2.1.3 FURPS Model (1987)
    2.1.4 ISO/IEC 9126 (1991)
    2.1.5 Dromey’s Model (1996)
  2.2 Non-hierarchical Models
    2.2.1 Bayesian Belief Networks
    2.2.2 Star Model
  2.3 Quality Characteristics
    2.3.1 Definitions
    2.3.2 Summary
    2.3.3 Relationships

3 Quality Metrics
  3.1 Metrics
  3.2 Quality Metrics
    3.2.1 Adaptability
    3.2.2 Completeness
    3.2.3 Complexity
    3.2.4 Conciseness
    3.2.5 Correctness
    3.2.6 Efficiency
    3.2.7 Expendability
    3.2.8 Generality
    3.2.9 Hardware independence
    3.2.10 Indicesability
    3.2.11 Learnability
    3.2.12 Modularity
    3.2.13 Maturity Index
    3.2.14 Operability
    3.2.15 Portability
    3.2.16 Readability
    3.2.17 Reliability
    3.2.18 Robustness
    3.2.19 Scalability
    3.2.20 Simplicity
    3.2.21 Software independence
    3.2.22 Structuredness
    3.2.23 Traceability
    3.2.24 Understandability
    3.2.25 Usability
  3.3 Our Model in a Nutshell
  3.4 Enhancing our Model

4 Design Patterns
  4.1 Introduction
  4.2 Why Design Patterns?
  4.3 Quality Characteristics related with Design Patterns
  4.4 Quality evaluation of Design Patterns
    4.4.1 Creational Design Patterns
    4.4.2 Structural Design Patterns
    4.4.3 Behavioral Design Patterns
  4.5 Summary

5 Conclusions


Chapter 1

Quality Models

1.1 Quality

Everyone agrees that software quality is the most important element in software development because high quality can reduce the costs of maintenance, testing, and software reuse. But quality has very different meanings for customers, users, management, marketing, developers, testers, quality engineers, maintainers, and support personnel. Many institutes and organizations have their own definitions of quality and their own quality characteristics.

The software industry continues to grow daily, and “it is rather surprising that more serious and definitive work has not been done to date in the area of evaluating software quality” [9]. Moreover, Kitchenham (1989) notes that “quality is hard to define, impossible to measure, easy to recognize” [39, 54]. Also, Gilles states that quality is “transparent when presented, but easily recognized in its absence” [28, 54]. Furthermore, Kan (2000) explains that “Quality is not a single idea, but rather a multidimensional concept. The dimensions of quality include the entity of interest, the viewpoint on that entity, and quality attributes of that entity” [36].

Some organisations try to develop standard definitions for quality. We now present some definitions from international and standards organisations [53]:

• ISO 9126: “Software quality characteristic is a set of attributes of a software product by which its quality is described and evaluated”.

• German Industry Standard DIN 55350 Part 11: “Quality comprises all characteristics and significant features of a product or an activity which relate to the satisfying of given requirements”.

• ANSI Standard (ANSI/ASQC A3/1978): “Quality is the totality of features and characteristics of a product or a service that bears on its ability to satisfy the given needs”.

• IEEE Standard (IEEE Std 729-1983):


– The totality of features and characteristics of a software product that bear on its ability to satisfy given needs: For example, conformance to specifications.

– The degree to which software possesses a desired combination of attributes.

– The degree to which a customer or a user perceives that a software meets her composite expectations.

– The composite characteristics of a software that determine the degree to which the software in use will meet the expectations of the customer.

Figure 1.1 is a meta-model of the relationships among requirements models and quality models.

Figure 1.1: Relationships among requirements models and quality models

All these definitions give separate views on quality. Thus, we need to organise, clarify, and standardise the large number of quality-related definitions to obtain the best definitions for quality.

1.2 Quality Evaluation

Evaluation of quality requires models to link measures of software artifacts with external, high-level quality characteristics. First, we introduce the concept of a quality model; then, we present the different elements related to quality models.

1.2.1 Quality Model

ISO/IEC 9126-1 defines a quality model as a “framework which explains the relationship between different approaches to quality” [33]. Quality models decompose into hierarchical elements. One approach is to decompose quality into factors, sub-factors, and criteria. Evaluation of a program begins with measuring each quality criterion with a numerical value obtained from metrics. Then, each quality sub-factor is assessed using its criteria. Finally, numerical values are assigned to quality factors from their quality sub-factors. Figure 1.2 presents a meta-model of the relationships among quality model elements.

Figure 1.2: Relationship among quality model elements
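The following listing illustrates this bottom-up evaluation; it is our sketch only, with hypothetical criteria, weights, and scores (a Boehm-style maintainability factor, cf. Figure 2.2), not the model developed in this report:

    # A minimal sketch of hierarchical quality evaluation; all names,
    # scores, and weights below are hypothetical illustrations.

    def aggregate(children, weights):
        """Weighted average of child scores, each assumed scaled to [0, 1]."""
        return sum(c * w for c, w in zip(children, weights)) / sum(weights)

    # Step 1: each quality criterion receives a numerical value from a metric.
    criteria = {"structuredness": 0.7, "conciseness": 0.9, "self_descriptiveness": 0.6}

    # Step 2: each sub-factor is assessed from its criteria.
    understandability = aggregate(
        [criteria["structuredness"], criteria["self_descriptiveness"]], [1, 1])
    modifiability = aggregate(
        [criteria["structuredness"], criteria["conciseness"]], [1, 1])

    # Step 3: the factor is assessed from its sub-factors.
    maintainability = aggregate([understandability, modifiability], [1, 1])
    print(maintainability)  # the aggregated factor score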

1.2.2 Quality Factor

The typical objective of a quality factor is to characterize an aspect of the quality of a work product or a process [24].

1.2.3 Quality Sub-factor

Some factors cannot refer directly to their criteria; they require an extra intermediate level to be computed. Elements of this intermediate level are sub-factors. For example, in Boehm’s model (Figure 2.2), maintainability¹ as a factor refers to three sub-factors: testability, understandability, and modifiability.

The typical objectives of a quality sub-factor are to [24]:

• Characterize a part of a quality factor.

• Further characterize an aspect of the quality of a work product or process.

• Help in defining the term “quality” for an endeavor.

1.2.4 Quality Criterion

A quality criterion is the detailed description of the rationale for the existence of a factor or of a sub-factor. For example, in Boehm’s model, portability as a factor is described with two criteria: device-independence and self-containedness.

¹All the “-ilities” are defined in Section 2.3.


1.2.5 Quality Metric

We need to specify quality metrics to evaluate a given quality criterion. Each quality metric provides a numerical value that can be scaled to measure a quality factor. Metrics must be sufficiently complete and detailed to be the firm foundation of a quality model.

“There is a strange relationship between internal and external quality. External quality is quality as measured by the customer. Internal quality is quality as measured by the programmers [6]” [17].

1.2.6 Internal Quality

N. Bevan defined internal quality as a characteristic “which is measured by the static properties of the code, typically by inspection (such as path length)” [7].

1.2.7 External Quality

External quality is defined as a characteristic “which is measured by the dynamic properties of the code when executed (such as response time)” [7].

1.2.8 Quality in Use

ISO/IEC 9126-1 defines quality in use as “the user’s view of quality. Achieving quality in use is dependent on achieving the necessary external quality, which in turn is dependent on achieving the necessary internal quality” [33], “which is measured by the extent to which the software meets the needs of the user in the working environment (such as productivity)” [7].

Quality in use decomposes into four characteristics [33]:

• Effectiveness

• Productivity

• Safety

• Satisfaction

“Evaluation of software products in order to satisfy software quality needs is one of the processes in the software development life-cycle. Software product quality can be evaluated by measuring internal attributes (typically static measures of intermediate products), or by measuring external attributes (typically by measuring the behavior of the code when executed), or by measuring quality in use attributes. The objective is for the product to have the required effect in a particular context of use” [33]; see also Figure 1.3.

Using these definitions of quality and quality models, we now present the most common quality models defined in the literature.


Figure 1.3: Quality in the life-cycle


Chapter 2

Quality Models

2.1 Hierarchical Models

Several quality models have been defined by different people and organizations. In the following, we summarize briefly some of the most standard and well-known quality models.

2.1.1 McCall’s Model (1976-7)

McCall’s model for software quality (see Figure 2.1) combines eleven criteria around product operations, product revisions, and product transitions. The main idea behind McCall’s model is to assess the relationships among external quality factors and product quality criteria.

“McCall’s Model is used in the United States for very large projects in the military, space, and public domain. It was developed in 1976-7 by the US Air Force Electronic Systems Division (ESD), the Rome Air Development Center (RADC), and General Electric (GE), with the aim of improving the quality of software products” [53].

“One of the major contributions of the McCall model is the relationship created between quality characteristics and metrics, although there has been criticism that not all metrics are objective. One aspect not considered directly by this model was the functionality of the software product” [45].

The layers of the quality model in McCall are defined as [11]:

• Factors;

• Criteria;

• Metrics.


Figure 2.1: McCall’s model [48]

2.1.2 Boehm’s Model (1978)

Boehm added some characteristics to McCall’s model with emphasis on the maintainability of software products. Also, this model includes considerations involved in the evaluation of a software product with respect to the utility of the program (see Figure 2.2).

“The Boehm model is similar to the McCall model in that it represents a hierarchical structure of characteristics, each of which contributes to total quality. Boehm’s notion includes users’ needs, as McCall’s does; however, it also adds the hardware yield characteristics not encountered in the McCall model” [45].

However, Boehm’s model contains only a diagram without any suggestion about measuring the quality characteristics.

The layers of the quality model in Boehm are defined as [11]:


• High-level characteristics;

• Primitive characteristics;

• Metrics.

Figure 2.2: Boehm’s Model [9]

2.1.3 FURPS Model (1987)

The FURPS model, proposed by Robert Grady and Hewlett-Packard Co., decomposes characteristics into two different categories of requirements:

• Functional requirements (F): Defined by input and expected output.

• Non-functional requirements (URPS): Usability, reliability, performance, supportability.

Figure 2.3 is an example of the FURPS model. “One disadvantage of the FURPS model is that it fails to take account of the software product’s portability” [45].


Figure 2.3: FURPS Model

2.1.4 ISO/IEC 9126 (1991)

With the need for the software industry to standardize the evaluation of software products using quality models, the ISO (International Organization for Standardization) proposed a standard which specifies six areas of importance for software evaluation and, for each area, specifications that attempt to make the six areas measurable (see Figure 2.4).

“One of the advantages of the ISO 9126 model is that it identifies the internal characteristics and external quality characteristics of a software product. However, at the same time it has the disadvantage of not showing very clearly how these aspects can be measured” [45].

The layers of the quality model in ISO/IEC 9126 are defined as [11]:

• Characteristics;

• Sub-characteristics;


• Metrics.

Figure 2.4: Software Quality ISO/IEC’s Model

2.1.5 Dromey’s Model (1996)

The main idea behind creating this new model was to obtain a model broad enough to work for different systems (see Figure 2.5). “He [Dromey] recognises that evaluation differs for each product and you need a more dynamic idea for modelling the process” [21].

Dromey identified the following steps to build his model:

• Choose a set of high-level attributes that you need to use for your evaluation.

• Make a list of all the components or modules in the system.

• Identify quality-carrying properties for each component (that is, qualities of the component that have the most impact on the product properties from the list created in the last step).


• Decide on how each property affects the quality attributes.

• Evaluate the model.

• Identify and resolve weaknesses with a feedback loop.

“Dromey’s model seeks to increase understanding of the relationship between the attributes (characteristics) and the sub-attributes (sub-characteristics) of quality. It also attempts to pinpoint the properties of the software product that affect the attributes of quality” [45].

The layers of the quality¹ model in Dromey are defined as [11]:

• High-level attributes;

• Subordinate attributes.

Figure 2.5: Dromey’s Model

¹The layers of quality in IEEE are defined as:

• Factors;

• Sub-factors;

• Metrics.


Figure 2.6 is an example of Dromey’s model:

• Evaluation of two components (variable and expression).

• Definition of quality-carrying properties for variable and expression.

• Definition of the product properties.

• Derivation of the quality attributes for each product property from Dromey’s model.

Figure 2.6: Example of Dromey’s Model

2.2 Non-hierarchical Models

2.2.1 Bayesian Belief Networks

A BBN (Bayesian Belief Network) is a graphical network whose nodes are the uncertain variables and whose edges are the causal or influential links between the variables. Associated with each node is a set of conditional probability functions that model the uncertain relationship between the node and its parents.

It can be explained in two stages: one stage covers the life-cycle processes of specification, design, or coding, and the second stage covers testing [41, 40].
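For illustration (our example, not drawn from [41]; the variables and probabilities are hypothetical), the following sketch queries a two-node network linking latent defects to observed test failures using Bayes’ rule:

    # A minimal two-node belief network: Defects -> TestFailure.
    # All probabilities below are hypothetical.

    p_defects = 0.3                 # prior belief that the module is defective
    p_fail_given_defects = 0.9      # conditional probability table (CPT)
    p_fail_given_no_defects = 0.1

    # Marginal probability of observing a test failure.
    p_fail = (p_fail_given_defects * p_defects
              + p_fail_given_no_defects * (1 - p_defects))

    # Propagate the evidence "a test failed" back to the defect node.
    p_defects_given_fail = p_fail_given_defects * p_defects / p_fail
    print(round(p_defects_given_fail, 2))  # 0.79: belief raised from 0.30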

Using BBNs has some benefits, as follows [41]:

• BBNs enable reasoning under uncertainty and combine the advantages of an intuitive visual representation with a sound mathematical basis in Bayesian probability.



• With BBNs, it is possible to articulate expert beliefs about the dependencies between different variables and to propagate consistently the impact of evidence on the probabilities of uncertain outcomes, such as future system reliability.

• BBNs allow an injection of scientific rigour when the probability distributions associated with individual nodes are simply “expert opinions”.

• A BBN will derive all the implications of the beliefs that are input to it; some of these will be facts that can be checked against the project observations, or simply against the experience of the decision makers themselves.

• The ability to represent and manipulate complex models that might never be implemented using conventional methods.³

2.2.2 Star Model

The Star model is introduced as follows: “The software quality Star is a conceptual model for presenting different perspectives of software quality. The model is based on the acquirer and supplier as defined in ISO/IEC 12207 (1995)” [53].

There are three significant elements in the Star: the procurer (acquirer), the producer (supplier), and the product (see Figure 2.7). The procurer enters into a contract with the producer to create a software product. This contract clearly specifies the quality characteristics of the product. The procurer’s perspective of the producer organization is that it uses the best project management techniques available and engages in first-rate processes to create a quality product. The procurer’s perspective of the product is that it must be acceptable to the user community and that it can be serviced and maintained by their professionals.

The model considers the acquirer to be the lead party in any contractual arrangement because it is the acquirer’s users and technical support professionals who dictate the success or failure of the software product. Also, it is the acquirer who dictates the profile and maturity of the supplier organization.

“The model accommodates the producer’s perspective of software quality and focuses on the maturity of the producer organization as software developers and the development processes that they used to create quality software products” [53].

³BBNs have a rigorous mathematical meaning; there are software tools that can interpret them and perform the complex calculations needed in their use [41].


Figure 2.7: Star Model

2.3 Quality Characteristics

Definitions of quality characteristics have direct relations with the programming language and the environment for which a software product is implemented. For example, Lowell J. Arthur in 1951 defines the flexibility quality characteristic using the question: “Is the program free of spaghetti code?” [3], i.e., does the program source code contain GOTO instructions? Thus, this definition of the flexibility quality characteristic relates directly to pre-procedural structural programming and is no longer practical for object-oriented programs.

2.3.1 Definitions

In the following, we summarize standard or recent definitions for quality characteristics related to object-oriented programs; these definitions are used in other sections of this report to define the quality models. They are sorted alphabetically:

• Accessibility: “Accessibility is the degree to which the user interface of something enables users with common or specified (e.g., auditory, visual, physical, or cognitive) disabilities to perform their specified tasks” [25]. “Does the model facilitate selective use of its parts for other purposes (e.g., for the construction of another model)?” [5]

• Accountability: “Does the model lend itself to measurement of its usage? Can probes be inserted to measure timing, whether specified branches are exercised, etc.?” [5]

• Accuracy: “The capability of the software product to provide the right or agreed results or effects with the needed degree of precision” [62]. Also, “[t]he precision of computations and control” [49], the “[a]ttributes of software that bear on the provision of right or agreed results or effects” [57], “the magnitude of defects (i.e., the deviation of the actual or average measurements from their true value) in quantitative data” [25]. “Are the model’s calculations and outputs sufficiently precise to satisfy their intended use?” [5]

• Adaptability: “The capability of the software product to be adapted for different specified environments without applying actions or means other than those provided for this purpose for [/ by] the software considered” [62]. Also, “[a]ttributes of software that bear on the opportunity for its adaptation to different specified environments without applying other actions or means than those provided for this purpose for the software considered” [57].

Adaptability is mostly considered through the following options:

– Independence of storage: We need to ensure that software modules are independent of storage size to make the software more adaptable.

– Uncommitted memory: The ability to allocate address space without allocating memory to back it up at the same time.

– Uncommitted processing capacity: The percentage of uncommitted processing capacity.

• Adaptivity: “Adaptivity suggests that the system should be designed to the needs of different types of users” [1].

• Ambiguity: Attributes of software related to requirements with potentially multiple meanings [31].

• Analyzability: “The capability of the software product to be diagnosed for deficiencies or causes of failures in the software, or for the parts to be modified to be identified” [62]. Also, the “[a]ttributes of software that bear on the effort needed for diagnosis of deficiencies or causes of failures, or for identification of parts to be modified” [57].

• Attractiveness: “The capability of the software product to be attractive to the user” [62]. “Attractiveness is achieved through layout, graphics, color, and dynamic elements” [1].

• Auditability: “The ease with which conformance to standards can be checked” [49].

• Augmentability: The ability of “the model to accommodate expansion in component computational functions or data storage requirements” [5]. Also, attributes related to supporting the growth of data storage.

• Availability: “Availability is the degree to which a work product is operational and available for use” [25], as a product or to users. Availability has the same definition for malicious and non-malicious users.


• Behavior [62]:

– Time behavior: “The capability of the software product to provide appropriate response and processing times and throughput rates when performing its function”.

– Resource behavior: The attributes of software related to measuring the amount of resources required to perform its function.

• Branding: “Branding is the degree to which a work product (e.g., application, component, or document) successfully incorporates the brand of the customer organization’s business enterprise” [25].

• Capacity: “Capacity is the minimum number of things (e.g., transactions, storage) that can be successfully handled” [25].

• Configurability: “Configurability is the degree to which something can be configured into multiple forms (i.e., configurations)” [25].

• Changeability: “The capability of the software product to enable a specified modification to be implemented” [62]. Also, the “[a]ttributes of software that bear on the effort needed for modification, fault removal or for environmental change” [57]. Changeability is also called “modifiability” [1].

• Co-existence: “The capability of the software product to co-exist with other independent software in a common environment sharing common resources” [62].

• Compatibility: “Compatibility is the degree to which a system or a component can be used and functions correctly under specified conditions of the physical environment(s) in which it is intended to operate” [25].

• Completeness: “The degree to which full implementation of required function has been achieved” [49]. Completeness also relates to requirements, documentation, and comments:

– Explain the program inputs and their presence with comments.

– Do not reference dummy programs.

• Compliance: “Attributes of software that make the software adhere to application-related standards of conventions or regulations in laws and similar prescriptions” [57]. Also, the degree to which the software is found to “[c]omply with relevant standards and practices” [1]. In [62], compliance decomposes into:

– Portability compliance: “The capability of the software product to adhere to standards or conventions relating to portability”.

– Maintainability compliance: “The capability of the software product to adhere to standards or conventions relating to maintainability”.


– Efficiency compliance: “The capability of the software product to adhere to standards or conventions relating to efficiency”.

– Usability compliance: “The capability of the software product to adhere to standards, conventions, style guides or regulations relating to usability”.

– Reliability compliance: “The capability of the software product to adhere to standards, conventions or regulations relating to reliability”.

– Functionality compliance: “The capability of the software product to adhere to standards, conventions or regulations in laws and similar prescriptions relating to functionality”.

• Communication commonality: “The degree to which standard interfaces, protocols and bandwidths are used” [49].

• Communicativeness: “Does the model facilitate the specification of inputs? Does it provide outputs whose form and content are easy to assimilate and useful?” [5]

• Computability: Attributes related to computation safety (such as division by zero or other impossible computations).

• Completeness: “Are all model inputs used within the model? Are there no dummy sub-models referenced?” [5]

• Conformance: “Attributes of software that make the software adhere to standards or conventions relating to portability” [57].

• Conciseness: “The compactness of the program in terms of lines of code” [49]. Also, “[a]ttributes of software that provide the implementation of a function with minimum amount of code” [64]. Conciseness relates to program excess; for example, unused entities (types, objects, parameters) or internal invocations of other functions within the same file decrease the value of conciseness. Conciseness answers the following questions: “Is the model implemented with a minimum amount of code? Is it excessively fragmented into sub-models so that the same sequence of code is not repeated in numerous places?” [5].

• Consistency: “Does the model contain uniform notation, terminology, and symbology within itself? Are all model attributes and variables typed and specified consistently for all uses? Are coding standards homogeneously adhered to?” [5]

• Configurability: “The ability to organize and control elements of the software configuration” [49].

• Consistency: “The use of uniform design and documentation techniques throughout the software development project” [49]. For example:

– The set of global variables is supposed to be used across more than one subprogram.

– The type of variables is supposed to be consistent across all their uses.

• Correctability: “Correctability is the ease with which minor defects can be corrected between major releases while the application or component is in use by its users” [25].

• Correctness: The “[e]xtent to which a program satisfies its specifications and fulfills the user’s mission objectives” [26, 49]. “Correctness is the degree to which a work product and its outputs are free from defects once the work product is delivered” [25]. Correctness answers the following typical question: “Is the application and its data complete, accurate and consistent?” [3].

• Currency: “Currency is the degree to which data remain current (i.e., up to date, not obsolete)” [25].

• Data Commonality: “The use of standard data structures and types throughout the program” [49].

• Dependability: “Dependability is the degree to which various kinds of users can depend on a work product” [25].

• Device independability:

– Factors for independence between computations and the computer configuration.

– Factors for independence between computations and hardware capability, half-word accessing, bit patterns. . .

“Can the model be executed on other computer hardware configurations? Have machine-dependent statements been flagged and documented?” [5]

• Effectiveness: “The capability of the software product to enable users to achieve specified goals with accuracy and completeness in a specified context of use” [62].

• Efficiency: “The capability of the software product to provide appropriate performance, relative to the amount of resources used, under stated conditions” [62]. “Efficiency is the degree to which something effectively uses (i.e., minimizes its consumption of) its resources. These resources may include all types of resources such as computing (hardware, software, and network), machinery, facilities, and personnel” [25]. Also, “[t]he amount of computing resources and code required by a program to perform a function” [26, 49], “[a] set of attributes that bear on the relationship between the level of performance of the software and the amount of resources used under stated conditions” [57]. Efficiency relates to “shed load, end-to-end error detection: Cheap test, Performance defects appear under heavy load, safety first, scaling, throughput, latency, availability” [1]. “Does the model fulfill its objective without waste of resources?” [5]

• Error tolerance: “The damage that occurs when the program encounters an error” [49].

• Expendability: “The degree to which architectural, data or procedural design can be extended” [49].

• Extendibility: The attributes related to the modification of a component or a system in case of an increase of the storage or functional capacity [56].

• Extensibility: “Extensibility is the ease with which an application or component can be enhanced in the future to meet changing requirements or goals” [25]. Also, attributes related to new capabilities or to the modification of existing capabilities upon user needs [56].

• Fault Tolerance: “The capability of the software product to maintain a specified level of performance in cases of software faults or of infringement of its specified interface” [62]. Also, the “[a]ttributes of software that bear on its ability to maintain a specified level of performance in cases of software faults or of infringement of its specified interface” [57]. (“Use [of] robust methods to protect against permanent failure of a limited number of components. Use [of] stabilizing methods to protect against transitory faults” [1].)

• Flexibility: The “[e]ffort required to modify an operational program” [26]. The effort to change or modify a software product to adapt it to environments or applications different from those for which it was designed.

• Functionality: “The capability of the software product to provide functions which meet stated and implied needs when the software is used under specified conditions” [62]. Functionality is “[a] set of attributes that bear on the existence of a set of functions and their specified properties. The functions are those that satisfy stated or implied needs” [57]. Functionality “[i]s assessed by evaluating the feature set and capabilities of the program, the generality of functions that are delivered and the security of the overall system” [49].

• Generality: “The breadth of potential application of program components” [49]. Generality is defined as the degree to which a software product can perform a wide range of functions.

• Hardware independence: “The degree to which the software is decoupled from the hardware on which it operates” [49].

• Independence of storage: The ability to bring new storage where needed at a moment’s notice, more resilience and automatic failure recovery, enhanced performance, and cost savings from efficient storage use [27].


• Indicesability: Attributes related to the degree of correctness of the software product throughout its development cycle [?].

• Initializability: Attributes related to the degree to which a software product can be initialized with the expected values [?].

• Installability: “The capability of the software product to be installed in a specified environment” [62]. “Installability is the ease with which something can be successfully installed in its production environment(s)” [25]. Also, the “[a]ttributes of software that bear on the effort needed to install the software in a specified environment” [57].

• Instrumentation: “The degree to which the program monitors its own operation and identifies errors that do occur” [49].

• Integrity: The “[e]xtent to which access to software or data by unauthorized persons can be controlled” [26, 49]. Also, the attributes related to controlling a software product against illegal accesses to the program and its data [?].

• Interface facility: The degree to which two software products can be connected successfully.

• Internationalization: “Internationalization (also known as globalization and localization) is the degree to which something can be or is appropriately configured for use in a global environment” [25].

• Interoperability: “The capability of the software product to interact with one or more specified systems” [62]. Also, the “[e]ffort required to couple one system with another” [26, 49], the “[a]ttributes of software that bear on its ability to interact with specified systems” [57], “the degree to which a system or one of its components is properly connected to and operates with something else” [25].

• Learnability: “The capability of the software product to enable the user to learn its application” [62]. Also, the “[a]ttributes of software that bear on the users’ effort for learning its application” [57]. “Learnability requires attention to the needs of the novice and uninitiated users. The uninitiated user is one that has no previous experience with the software or similar software. The novice user has either had some experience with similar software or has limited experience with the software” [1].

• Legibility: “Does the model possess the characteristic that its function is easily discerned by reading the code?” [5]

• Maintainability: “The capability of the software product to be modified. Modifications may include corrections, improvements or adaptation of the software to changes in environment, and in requirements and functional specifications” [62]. Also, the “[e]ffort required to locate and fix an error in an operational program” [26, 49]. “Maintainability is the ease with which an application or component can be maintained between major releases” [25]. Also, “[a] set of attributes that bear on the effort needed to make specified modifications” [57], the degree of changing or modifying the components to correct errors, to improve performance, or to adapt to changes in the environment [?].

• Maturity: “The capability of the software product to avoid failure as a result of faults in the software” [62]. Also, the “[a]ttributes of software that bear on the frequency of failure by faults in the software” [57].

• Modularity: “The functional independence of program components” [49]. Modularity is increased when it is possible to divide each component into sub-components [?].

• Operability: “The capability of the software product to enable the user to operate and control it” [62]. Also, “[t]he ease of operation of a program” [49]. “Operability is the degree to which something enables its operators to perform their tasks in accordance with the operations manual” [25]. Also, the “[a]ttributes of software that bear on the users’ effort for operation and operation control” [57]. “Part of the design process for operability is to develop scenarios and use cases for novice, uninitiated, and expert users. Operability is enhanced through navigational efficiency, i.e., users can locate the information they want” [1].

• Performance: “Performance is the degree to which timing characteristics are adequate” [25]. Performance “[i]s measured by evaluating processing speed, response time, resource consumption, throughput, and efficiency” [49].

• Personalization: “Personalization is the degree to which each individual user can be presented with a unique user-specific experience” [25].

• Portability: “The capability of the software product to be transferred from one environment to another” [62]. Also, the “[e]ffort required to transfer a program from one hardware configuration and–or software system environment to another” [26, 49]. “Portability is the ease with which an application or component can be moved from one environment to another” [57, 25].

• Precision: “Precision is the dispersion of quantitative data, regardless of its accuracy” [25].

• Productivity: “The capability of the software product to enable users to expend appropriate amounts of resources in relation to the effectiveness achieved in a specified context of use” [62].

• Readability: “Readability is characterized by clear, concise code that is immediately understandable” [3]. Readability is defined as the set of attributes related to the difficulty in understanding software components’ source and documentation.


• Recoverability: “The capability of the software product to re-establish a specified level of performance and recover the data directly affected in the case of failure” [62, 57]. Recoverability “[u]se[s] recovery oriented methods. System architecture should be designed with components that can be restarted independently of the other components. System architecture should be designed with an undo function to rewind time, untangle problems, and replay the system back to the current time” [1].

• Reliability: “The capability of the software product to maintain a specified level of performance when used under specified conditions” [62]. Reliability is the “[e]xtent to which a program can be expected to perform its intended function with required precision” [26, 49]. It “[i]s evaluated by measuring the frequency and severity of failure, the accuracy of output results, the mean time between failures (MTBF), the ability to recover from failure and the predictability of the program” [49], because “[u]nreliable programs fail frequently, or produce incorrect data” [3]. Also, reliability is “[a] set of attributes that bear on the capability of software to maintain its level of performance under stated conditions for a stated period of time” [57]. “Reliability is the degree to which a work product operates without failure under given conditions during a given time period” [25].

• Replaceability: “The capability of the software product to be used in place of another specified software product for the same purpose in the same environment” [62]. Also, “[a]ttributes of software that bear on the opportunity and effort of using it in place of specified other software in the environment of that software” [57].

• Requirements Risk: Attributes related to the risk of project failure because of requirements (as with poorly written or rapidly changing requirements).

• Responsiveness: “Responsiveness is the ability of a system to meet its objectives for response time or throughput. In end-user systems, responsiveness is typically defined from a user perspective” [15].

• Resource utilization: “The capability of the software product to use appropriate amounts and types of resources when the software performs its function under stated conditions” [62].

• Reusability: “Reusability is the ease with which an existing application or component can be reused” [25]. It is the “[e]xtent to which a program can be used in other applications related to the packaging and scope of the functions that programs perform” [26, 49]. For example, reusability is possible when “[m]any modules contain two or more unique functions which, if separated from the main body of code, could be reused with other programs” [3]. Also, the attributes related to the cost of transferring a module or program to another application [?].


• Robustness: “Robustness is the degree to which an executable work product continues to function properly under abnormal conditions or circumstances” [25]. Also, the attributes related to the correct functioning of a software product in the case of invalid inputs or under stressful environmental conditions. “Does the model continue to execute reasonably when it is run with invalid inputs? Can the model assign default values to non-specified input variables and parameters? Does the model have the capability to check input data for domain errors?” [5]

• Safety: “The capability of the software product to achieve acceptable levels of risk of harm to people, business, software, property or the environment in a specified context of use” [62].

• Satisfaction: “The capability of the software product to satisfy users in a specified context of use” [62].

• Scalability: “Scalability is the ease with which an application or component can be modified to expand its existing capacities” [25]. “[S]calability is crucial for keeping costs down and minimizing interruptions in production” [42]. “Scalability is the ability of a system to continue to meet its response time or throughput objectives as the demand for the software functions increases” [15].

• Scheduleability: “Scheduleability is the degree to which events and behaviors can be scheduled and then occur at their scheduled times” [25].

• Security: “The capability of the software product to protect information and data so that unauthorized persons or systems cannot read or modify them and authorized persons or systems are not denied access to them” [62]. Also, security is “[t]he availability of mechanisms that control or protect programs and data” [49], “[a]ttributes of software that bear on its ability to prevent unauthorized access, whether accidental or deliberate, to programs and data” [57].

• Self containedness: “Self containedness is related to the facility of the software product for initializing core storage prior to use and for proper positioning of input/output devices prior to use” [5].

• Self-descriptiveness: “Does the model contain enough information for a reader to determine or verify its objectives, assumptions, constraints, inputs, outputs, components, and revision status?” [5]

• Self documentation: “The degree to which the source code provides meaningful documentation” [49].

• Simplicity: “The degree to which a program can be understood without difficulty” [49].


• Software system independence: “The degree to which the program is independent of nonstandard programming language features, operating system characteristics, and other environmental constraints” [49].

• Stability: “The capability of the software product to avoid unexpected effects from modifications of the software” [62]. Also, the “[a]ttributes of software that bear on the risk of unexpected effect of modifications” [57].

• Structuredness: Does the model possess a definite pattern of organization of its interdependent parts? [5]

• Subsetability: “Subsetability is the degree to which something can be released in multiple variants, each of which implements a different subset of the functional requirements and associated quality requirements” [25].

• Suitability: “The capability of the software product to provide an appropriate set of functions for specified tasks and user objectives” [62]. Also, the “[a]ttributes of software that bear on the presence and appropriateness of a set of functions for specified tasks” [57].

• Supportability: Supportability “combines the ability to extend the program (extensibility), adaptability and serviceability (these three attributes represent a more common term—maintainability), in addition to testability, computability, configurability, the ease with which a system can be installed and the ease with which problems can be localized” [49].

• Survivability: “Survivability is the degree to which essential, mission-critical services continue to be provided in spite of either accidental or malicious harm” [25].

• Testability: “The capability of the software product to enable modified software to be validated” [62]. Also, the “[e]ffort required to test a program to ensure it performs its intended function” [26, 49]. “Testability is the ease with which an application or component facilitates the creation and execution of successful tests (i.e., tests that would cause failures due to any underlying defects)” [25]. Also, the “[a]ttributes of software that bear on the effort needed for validating the modified software” [57].

• Traceability: “The ability to trace a design representation or actual program component back to requirements” [49]. Traceability is defined as the attributes that increase traceability among implementation, design, architecture, requirements. . .

• Training: “The degree to which the software assists in enabling new users to apply the system” [49].

• Transferability: The attributes related to the cost of transferring a software product from its original hardware or operational environment to another [?].


• Transportability: “Transportability is the ease with which something can be physically moved from one location to another” [25].

• Trustability: “Trustability refers to the system’s ability to provide users with information about service correctness” [1].

• Uncommitted processing capacity: The amount of unattached processing capacity [20].

• Uncommitted memory: “Uncommitted memory enables an application to differentiate between reserving and using (committing) address space” [47].

• Understandability: “The capability of the software product to enable the user to understand whether the software is suitable, and how it can be used for particular tasks and conditions of use” [62]. Also, the “[a]ttributes of software that bear on the users’ effort for recognizing the logical concept and its applicability” [57].

• Usability: “The capability of the software product to be understood, learned, used and attractive to the user, when used under specified conditions” [62]. Usability is related to the “set of attributes that bear on the effort needed for use, and on the individual assessment of such use, by a stated or implied set of users” [57, 7]. Also, usability is the “[e]ffort required to learn, operate, prepare input, and interpret output of a program” [26, 49], “the ease with which members of a specified set of users are able to use something effectively” [25]. Usability “[i]s assessed by considering human factors, overall aesthetics, consistency, and documentation” [49].

• Utility: “Utility is the degree to which something can be accessed and used by its various types of users” [25].

• Variability: “Variability is the degree to which something exists in multiple variants, each having the appropriate capabilities” [25].

• Verifiability: “Verifiability is the ease with which an application or component can be verified to meet its associated requirements and standards” [25].

• Volatility: The attributes related to the requirements documents when the software product changes frequently.

• Withdrawability: “Withdrawability is the ease with which an existing problematic version of the system or one of its components can be successfully withdrawn and replaced by a previously working version” [25].

2.3.2 Summary

In the following table, we relate the software characteristics to the different software products they apply to.


Columns: All Products; Requirements Documentation; Design Documentation; Code; Test Documentation.

Completeness        X [56]  X [56]  X [56]  X [56]  X [56]
Correctness         X [56]  X [56]  X [3]   X [56]  X [3]
Reliability         X [56]  X [3]   X [56]
Generality          X [56]  X [3]   X [56]
Understandability   X [56]  X [3]   X [56]  X [56]
Efficiency          X [3]   X [56]  X [56]
Modularity          X [3]   X [56]
Portability         X [3]   X [56]
Reusability         X [3]
Adaptability        X [56]
Maintainability     X [56]
Flexibility         X [3]
Usability           X [3]
Testability         X [3]
Integrity           X [3]
Interoperability    X [3]

Table 2.1: Interrelationships among software characteristics and software products

2.3.3 Relationships

Some quality characteristics are related to one another; we summarize these interrelationships among software quality characteristics in Table 2.2.


No relationships are recorded for Adaptability, Correctness, Functionality, Modularity, Descriptiveness, Understandability, Efficiency, or Conciseness. For the remaining characteristics, each row lists, in the order of the original lower-triangular matrix, the relationships recorded with previously listed characteristics:

Consistentability: Related [21]
Reliability: Not Related [3]; Related [21]
Security: Related [61]
Integrity: Related [3]
Maturity: Related [61]
Suitability: Related [61]; Related [61]
Accuracy: Related [61]
Usability: Not Related [3]; Related [21]; Related [21]; Not Related [3]; Related [61]; Related [3]; Not Related [3]
Communicativeness: Related [61]
Maintainability: Not Related [3]; Related [21]; Related [21] [61]; Related [21]; Not Related [3]; Related [3]; Not Related [3]; Related [61]
Consistency: Related [61]; Related [61]; Related [61]
Testability: Not Related [3]; Related [61]; Not Related [3]; Related [3]; Not Related [3]; Not Related [3]
Computability: Related [21]; Related [21]
Error-Tolerance: Related [61]
Flexibility: Not Related [3]; Not Related [3]; Related [3]; Related [3]; Not Related [3]; Not Related [3]; Related [3]; Not Related [3]; Not Related [3]; Not Related [3]; Not Related [3]
Portability: Related [33]; Related [21]; Related [61] [61]; Related [21]; Related [3]; Not Related [3]
Completeness: Related [61]; Related [21]; Related [21]; Related [21]; Related [21]
Execution-Efficiency: Related [61]
Reusability: Related [21]; Related [21] [61]; Related [21]; Related [3]; Related [3]; Related [3]; Not Related [3]; Not Related [3]; Not Related [3]; Not Related [3]
Expendability: Related [61]
Generality: Related [61]; Related [61]
Hardware-Independence: Related [61]; Related [61]
Operability: Related [61]
Self-Documentation: Related [61]; Related [61]; Related [61]; Related [61]
Interoperability: Related [61]; Related [61]; Related [61]; Related [3]; Related [3]; Related [61]; Not Related [3]; Related [61]
Simplicity: Related [61]; Related [61]; Related [61]
Software-Independence: Related [61]; Related [61]
Storage-Efficiency: Related [61]
Traceability: Related [61]
Training: Related [61]

Table 2.2: Interrelationships among quality characteristics


Chapter 3

Quality Metrics

3.1 Metrics

Software metrics are used to quantify software, software development resources, and software development processes. Some software characteristics are directly measurable (such as LOC, lines of code), some can be inferred only from indirect measurements (for example, maintainability), and some are mostly related to human perception (for example, understandability depends more on people than on programs).

Software metrics can be classified into three categories [37]:

• Product metrics: Describe the characteristics of the product, such as size, complexity, design features, performance, and quality level.

• Process metrics: Can be used to improve the software development and maintenance process (e.g., effectiveness of defect removal during development, defect arrival, response time to fix defects).

• Project metrics: Describe characteristics of the project and its execution (e.g., number of software developers, staffing pattern over the life cycle of the software, cost, schedule, and productivity).

However, some metrics belong to multiple categories (e.g., the in-process quality metrics of a project are both process metrics and project metrics). Moreover, a healthy metrics program focuses on much more than the measurement of programmer productivity. Consider these areas of software development which can benefit from a well-planned metrics program [63]:

• Project management.

• Product quality.

• Product performance.


• Development process.

• Cost and schedule estimation.

3.2 Quality Metrics

Software quality metrics are a subset of software metrics that focus on the quality characteristics of software products, processes, and projects. In the following, we present some quality characteristics and related quality metrics, which are used in most quality models, as examples: Adaptability, Completeness, Complexity, Conciseness, Correctness, Efficiency, Expendability, Generality, Hardware independence, Indicesability, Learnability, Modularity, Maturity index, Operability, Portability, Readability, Reliability, Robustness, Scalability, Simplicity, Software independence, Structuredness, Traceability, Understandability, and Usability.

3.2.1 Adaptability

• Independence of storage: Metrics used to measure independence of storage are:
MIS: Number of modules whose size constraints are hard-coded.
MTotal: Number of modules with size constraints.
AIS = MIS / MTotal.

• Uncommitted memory: Metrics used to measure adaptability with respect to the percentage of uncommitted memory are:
MUM: Amount of uncommitted memory.
MTotal: Total memory available.
AUM = MUM / MTotal.

• Uncommitted processing capacity: Metrics used to measure the percentage of uncommitted processing capacity are:
MUPC: Amount of uncommitted processing capacity.
MTotal: Total processing capacity available.
AUPC = MUPC / MTotal.
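Since all three metrics are simple ratios, they are straightforward to compute; the following is a minimal C sketch, with hypothetical counts:

#include <stdio.h>

/* Ratio metrics of the form A = part / total, guarded against
 * an empty denominator. */
static double ratio(double part, double total)
{
    return total > 0.0 ? part / total : 0.0;
}

int main(void)
{
    /* Hypothetical measurements for one system. */
    printf("AIS  = %.2f\n", ratio(4.0, 20.0));     /* hard-coded / constrained modules */
    printf("AUM  = %.2f\n", ratio(256.0, 1024.0)); /* uncommitted / total memory */
    printf("AUPC = %.2f\n", ratio(0.3, 1.0));      /* uncommitted / total capacity */
    return 0;
}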

3.2.2 Completeness

Completeness can be measured in different ways:

• First [56]:

We need a model to identify the software requirements that are incomplete or ambiguous. Ambiguities are defined as any cause with no effect, any effect with no cause, and any combination of causes and effects that is inconsistent with the requirements or impossible to achieve.
AExisting: Number of remaining ambiguities.


ATotal: Number of identified ambiguities.
Completeness = 1 − (AExisting / ATotal).
To avoid division by zero, if the total number of ambiguities is zero, the completeness is defined as 1 (or 100%).

• Second [32]:
Di: Total number of unique defects detected during the ith design or code inspection or the ith life-cycle phase.
Wi: Weighting distribution for the ith design or code inspection.
CM = Σi Wi · Di.

Code completeness can be measured by considering [16, 56]:

• Number of ambiguous references: References to inputs, functions, and outputs should be unique. An example of an ambiguous reference is a function being called one name by one module and a different name by another module.

• Number of improper data references: All data references should be properly defined, computed, or obtained from identifiable external sources.

• Percentage of defined functions used: All functions defined within the software should be used.

• Percentage of referenced functions defined: All functions referenced within the software should be defined. There should be no dummy functions present.

• Percentage of conditional processing defined: All conditional logic and alternative processing paths for each decision point should be defined.
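For illustration, a minimal C sketch of the first completeness measure above (the counts are hypothetical):

#include <stdio.h>

/* Completeness = 1 - A_existing / A_total; defined as 1 when no
 * ambiguities were identified at all, to avoid division by zero. */
static double completeness(int a_existing, int a_total)
{
    if (a_total == 0)
        return 1.0;
    return 1.0 - (double)a_existing / (double)a_total;
}

int main(void)
{
    /* Hypothetical: 12 ambiguities identified, 3 still unresolved. */
    printf("Completeness = %.2f\n", completeness(3, 12)); /* prints 0.75 */
    return 0;
}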

3.2.3 Complexity

Complexity is measured by considering the following elements:

• Static complexity measures the complexity of modules viewed as a network, in relation with design transaction analysis. Static complexity is calculated as:
E: Number of edges, indexed by i = 1, …, E.
N: Number of modules, indexed by j = 1, …, N.
C = E − N + 1.

• Generalized complexity measures the complexity of the network of modules and of the resources used. Generalized complexity is calculated as:
K: Number of resources, indexed by k = 1, …, K.
ci: Complexity for program invocation and return along the ith edge.
rki: 1 if the kth resource is required for the ith edge, 0 otherwise.
dk: Complexity for the allocation of resource k.
C = Σi=1..E (ci + Σk=1..K dk · rki).


• Dynamic complexity measures the complexity during program execution. Dynamic complexity is calculated with the static complexity formula at different points of the execution.

• Fan-in / fan-out: This metric is defined as the number of entries / exits per module and is used especially for evaluating data encapsulation.
ei: Number of entry points for the ith module.
xi: Number of exit points for the ith module.
mi = ei + xi.

• Data flow complexity is defined by the set of attributes related to the complexity of data:
lfi: Local flows into a procedure.
lfo: Local flows from a procedure.
datain: Number of data structures accessed by a procedure.
dataout: Number of data structures updated by a procedure.
length: Number of real statements in a procedure's source code (excluding comments).

– Number of live variables:
lvi: Number of live variables in the ith executable statement.
n: Total number of executable statements.
m: Total number of modules.
LV = (Σi=1..n lvi) / n.

– Average number of live variables per module:
LVprogram = (Σi=1..m LVi) / m.

– Variable spans:
spi: Number of statements between two references to the same variable.
n: Total number of statements.
SP = (Σi=1..n spi) / n.

– Average span size of the program:
SPprogram = (Σi=1..m SPi) / m.

• Code complexity:

– Number of decisions: Counting the number of I/O variables, conditional statements, and loop control statements can be an indicator of code complexity.

– Cyclomatic complexity: The computation of the cyclomatic complexity is based on the flow graph of the module (a computational sketch is given at the end of this section):
v: Complexity of the graph.
e: Number of edges (program flows between nodes).
n: Number of nodes (sequential groups of statements).


S: Number of splitting nodes (nodes with more than one emanating edge).
DEi: Number of decisions (conditions) for the ith module.

∗ Connected graph: If a connected graph is built, then:
v = e − n + 1.
Else:
v = e − n + 2.

∗ Splitting nodes:
v = S + 1.

∗ Number of conditions: Another way to compute the cyclomatic complexity consists in counting the number of conditions in the source code.

∗ Number of regions: We can measure the cyclomatic complexity by counting the number of regions in the graph.

∗ Sum of v: The cyclomatic complexity of a multi-module program can be measured by summing the values of v of the individual modules:
vprogram = Σi=1..m vi = Σi=1..m DEi + m.

– Average nesting level:
Ls: Nesting level of statement s, a numerical value for the level of the statement (higher levels are assigned to the main loop, followed by modules and statements containing loops, conditional clauses, …).
St: Total number of statements.
Average nesting level = (Σ Ls) / St.

– Executable lines of code: Complexity can be measured by counting the number of executable lines of code per module.

– Halstead metrics: These metrics measure the properties and the structure of programs. They provide measures of the complexity of existing software, predict the length of a program, and estimate the amount of time an average programmer can be expected to need to implement a given algorithm.
These metrics compute the program length by counting operators and operands. The measure suggests that the difficulty of a given program can be derived from the following counts [32]:
ntr: Number of unique operators.
nnd: Number of unique operands.
Ntr: Total number of operators.
Nnd: Total number of operands.

∗ Program vocabulary: l = ntr + nnd.
∗ Observed program length: N = Ntr + Nnd.
∗ Estimated program length: Υ = ntr · log2(ntr) + nnd · log2(nnd).
∗ Jensen's estimator of program length: NF = log2(ntr!) + log2(nnd!).
∗ Program volume: V = N · log2(l).
∗ Program difficulty: D = (ntr / 2) · (Nnd / nnd).
∗ Program level: L1 = 1 / D.
∗ Effort: E = V / L1.
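To make these counts concrete, here is a small C sketch (all inputs hypothetical) evaluating the cyclomatic-complexity and Halstead formulas above:

#include <math.h>
#include <stdio.h>

/* Cyclomatic complexity from a module's flow graph. */
static int cyclomatic(int edges, int nodes, int connected)
{
    return connected ? edges - nodes + 1 : edges - nodes + 2;
}

int main(void)
{
    /* Hypothetical flow graph: 9 edges, 7 nodes, not connected. */
    printf("v = %d\n", cyclomatic(9, 7, 0)); /* prints 4 */

    /* Hypothetical Halstead counts. */
    double ntr = 12, nnd = 18;  /* unique operators / operands */
    double Ntr = 75, Nnd = 60;  /* total operators / operands  */

    double vocab  = ntr + nnd;                            /* l */
    double length = Ntr + Nnd;                            /* N */
    double est    = ntr * log2(ntr) + nnd * log2(nnd);    /* estimated length */
    double volume = length * log2(vocab);                 /* V */
    double diff   = (ntr / 2.0) * (Nnd / nnd);            /* D */

    /* Effort E = V / L1 = V * D, since L1 = 1 / D. */
    printf("l=%.0f N=%.0f est=%.1f V=%.1f D=%.1f E=%.1f\n",
           vocab, length, est, volume, diff, volume * diff);
    return 0;
}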

3.2.4 Conciseness

Conciseness is defined as [64]:
V(G): McCabe's cyclomatic complexity.
NIN: Number of entry nodes.
NOUT: Number of exit nodes.
Conciseness = 40 · V(G) + 20 · NIN + 20 · NOUT.
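Evaluated directly, with hypothetical counts:

#include <stdio.h>

int main(void)
{
    int vg = 4, nin = 1, nout = 2;             /* hypothetical module counts */
    int conciseness = 40 * vg + 20 * nin + 20 * nout;
    printf("Conciseness = %d\n", conciseness); /* prints 220 */
    return 0;
}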

3.2.5 Correctness

• Correctness is calculated as the number of problem reports per time period (divided per phase, priority, or category).

• Measuring the defect density after each design inspection can be considered as a metric to evaluate correctness:
Di: Total number of defects found during the ith design inspection.
I: Total number of inspections to date.
KSLOD: Number of lines of source code of design statements, in thousands. Design statements are defined as blocks of lines; for example, all the lines between the following delimiters are considered as one design statement [58]:

– if . . . else . . . end-if

– do . . . end-do

– All the comments between “/*” and “*/”


DD = (Σi=1..I Di) / KSLOD.

3.2.6 Efficiency

Efficiency is based on the usage of CPU, RAM, and I/O capacity. Execution efficiency thus depends on:

• Non-loop dependency: Percentage of loop statements with non-loop-dependent statements:
Mnl: Number of modules with non-loop-dependent statements in loops.
Mtotal: Total number of modules.
Enld = Mnl / Mtotal.

• Compound expressions: Repeated compound expressions reduce efficiency:
Mrc: Number of modules with repeated compound expressions.
Mtotal: Total number of modules.
Erc = Mrc / Mtotal.


#include <stdio.h>

char *msg="Hello World \n";

main()

{

while(*msg)putchar(*msg++);

}

Figure 3.1: Program No1

#include <stdio.h>

main()

{

puts("Hello World \n");

}

Figure 3.2: Program No2

• Memory overlays1: Memory overlays create overhead during processing and reduce efficiency. Thus, the number of memory overlays is a factor of software efficiency.

• Nonfunctional executable code: Nonfunctional executable code obscures efficiency. For example, program 3.1 is more obscure than program 3.2.

The calculation metric is defined as:
Mnec: Number of modules with nonfunctional executable code.
Mtotal: Total number of modules.
Enec = Mnec / Mtotal.

• Inefficient coding decisions:
Mds: Number of modules with inefficient coding decisions.
Mtotal: Total number of modules.
Eds = Mds / Mtotal.

1Execution normally requires the entire program and the data of a process to be loaded in physical memory. If the process is larger than memory, a technique called memory overlay can be used: the idea of overlays is to keep in memory only those instructions and data that are needed at any given time [60].


• Data grouping: Efficiency decreases with complicated nesting and indices:
Mdg: Number of modules with inefficient data grouping.
Mtotal: Total number of modules.
Edg = Mdg / Mtotal.

• Variable initialization: Initialization of variables during execution can reduce efficiency:
Mvi: Number of modules with non-initialized variable declarations.
Mtotal: Total number of modules.
Evi = Mvi / Mtotal.

Storage efficiency depends on:

• Duplicate data definitions: Duplicate global data definitions consume space and reduce efficiency. A metric for duplicate global data definitions is defined by:
Mddd: Number of modules with duplicate data definitions.
Mtotal: Total number of modules.
Eddd = Mddd / Mtotal.

• Code duplication:
Mdc: Number of modules with duplicated code.
Mtotal: Total number of modules.
Edc = Mdc / Mtotal.

• Dynamic memory management: This is a boolean metric which considers the use of dynamic memory management. The value "true" indicates that allocated memory is released as needed; the value "false" indicates that efficient memory use is not promoted.

• Requirements allocation: This is a boolean metric which evaluates the level of storage optimization by the compiler or assembler, with the value "true" for acceptable and the value "false" for non-acceptable.
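All of the execution- and storage-efficiency metrics above are module ratios, so they can be aggregated in one pass over per-module inspection records. A minimal C sketch, with hypothetical record fields and counts:

#include <stdio.h>

/* One inspection record per module; each flag marks an inefficiency. */
struct module_report {
    int nonloop_in_loop;     /* non-loop-dependent statements inside loops */
    int repeated_compound;   /* repeated compound expressions */
    int duplicate_data_defs; /* duplicate global data definitions */
};

int main(void)
{
    struct module_report m[] = {
        {1, 0, 0}, {0, 1, 1}, {0, 0, 0}, {1, 1, 0},
    };
    int n = sizeof m / sizeof m[0];
    int nl = 0, rc = 0, ddd = 0;

    for (int i = 0; i < n; i++) {
        nl  += m[i].nonloop_in_loop;
        rc  += m[i].repeated_compound;
        ddd += m[i].duplicate_data_defs;
    }
    printf("Enld=%.2f Erc=%.2f Eddd=%.2f\n",
           (double)nl / n, (double)rc / n, (double)ddd / n);
    return 0;
}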

3.2.7 Expendability

The National Institute of Standards and Technology [56] defines expendability through attributes for assessing the adaptability of the code, as follows:

• Processing independent of storage [16, 56]:
(number of modules whose size constraints are hard-coded) / (total number of modules with such size constraints)

Module size requirements should be:

– independent of storage size, buffer space, array sizes, etc.;

– provided dynamically, e.g., array sizes passed as parameters.

• Percentage of uncommitted memory [16, 56]:
(amount of uncommitted memory) / (total memory available)


• Percentage of uncommitted processing capacity [16, 56]:
(amount of uncommitted processing capacity) / (total processing capacity available)

3.2.8 Generality

Generality brings a large reuse potential: the ability to handle any syntactically correct code collection [29]. Generality can be assessed through the size of the application domain; in reverse engineering, for instance, useful documentation is most important to better identify common designs and architectures. Metrics related to generality are defined as follows [16, 56]:

• Multiple usage metric:
Mmu: Number of modules referenced by more than one module.
Mtotal: Total number of modules.
Gmu = Mmu / Mtotal.

• Mixed function metric:
Mmf: Number of modules that mix functions.
Mtotal: Total number of modules.
Gmf = Mmf / Mtotal.

• Data volume metric:
Mdvm: Number of modules that are limited by the volume of data.
Mtotal: Total number of modules.
Gdvm = Mdvm / Mtotal.

• Data value metric:
Mdvl: Number of modules that are data-value limited.
Mtotal: Total number of modules.
Gdvl = Mdvl / Mtotal.

• Redefinition of constants metric:
Mrc: Number of constants that are redefined.
Mtotal: Total number of modules.
Grc = Mrc / Mtotal.

3.2.9 Hardware independence

The following options relate to hardware independence; by assessing these software attributes, we can derive a numerical value for hardware independence:

• Considering that "products based on open systems standards (particularly the ISO Open Systems Interconnection (OSI) and IEEE POSIX) are beginning to replace reliance on proprietary computing platforms" (Kuhn, Majurski, McCoy, and Schulz, 1994), conformance to open systems standards can increase hardware independence.


• The dependency of the software on programming languages and tools (like compilers, database management systems, and user interface shells) that have implementations available on other machines.

• The degree of use of input/output references or calls increases hardware dependency. The following metric assesses this attribute [56]:
(number of modules making I/O references) / (total number of modules)

• Code that depends on the machine word or character size makes the software more dependent on the machine's hardware. The following metric gives a numerical value for this attribute [56]:
(number of modules not following the convention) / (total number of modules)

3.2.10 Indicesability

This is measured as the sum of the numbers of defects detected in the software product from design to final tests. The following definitions are used throughout the development cycle:
Di: Total number of defects detected during the ith phase of the development cycle.
Si: Number of serious defects found.
Mi: Number of medium defects found.
Ti: Number of trivial defects found.
PS: Size of the software product at the ith phase.
Ws: Weighting factor for serious defects (default is 10).
Wm: Weighting factor for medium defects (default is 3).
Wt: Weighting factor for trivial defects (default is 1).
At each phase, the defect index DI is calculated as:
DI = Σi (i · PIi) / PS.

The phase index PIi at each phase of the development cycle is defined as:
PIi = Ws · (Si / Di) + Wm · (Mi / Di) + Wt · (Ti / Di).
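A compact C sketch of the defect-index computation, assuming Di = Si + Mi + Ti and the default weights (all phase data hypothetical):

#include <stdio.h>

/* Phase index: severity-weighted defect mix for one phase. */
static double phase_index(int s, int m, int t)
{
    int d = s + m + t;                 /* defects found in the phase */
    if (d == 0)
        return 0.0;
    return (10.0 * s + 3.0 * m + 1.0 * t) / d; /* default weights */
}

int main(void)
{
    /* Hypothetical (serious, medium, trivial) counts per phase. */
    int phases[3][3] = { {2, 5, 10}, {1, 4, 8}, {0, 2, 6} };
    double ps = 12.0;                  /* product size, e.g., KSLOC */
    double di = 0.0;

    for (int i = 0; i < 3; i++)
        di += (i + 1) * phase_index(phases[i][0], phases[i][1], phases[i][2]);
    printf("DI = %.2f\n", di / ps);
    return 0;
}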

3.2.11 Learnability

Learnability is mostly the human-related part of usability. ISO/IEC 9126-1 [62] (part 1, page 9) measures learnability by using the suitability metrics as internal metrics for learnability. ISO/IEC 9126-1 defines the suitability metrics in the form [62]:
X = 1 − A / B, with 0 ≤ X ≤ 1,
where A and B are corresponding pairs of counts, such as:

– A = number of functions in which problems are detected in evaluation; B = number of functions checked.

– A = number of missing functions detected in evaluation; B = number of functions described in the requirement specifications.

– A = number of incorrectly implemented or missing functions detected; B = number of functions described in the requirement specifications.

– A = number of functions changed during the development life-cycle phases; B = number of functions described in the requirement specifications.

3.2.12 Modularity

Modularity is measured using the relationships among the elements of a module. Higher-strength modules tend to have lower fault rates and tend to cost less to develop. Also, modularity is greater for higher-strength modules [56, 12, 16, 43].

• Cohesion is evaluated per module using a scale from high, for functional cohesion, to low, for coincidental cohesion:
X: Reciprocal of the number of assignment statements in the module.
Y: Number of unique function outputs divided by the number of unique function inputs.
STRENGTH = √(X² + Y²).

More precisely, module cohesion is defined as "how tightly bound or related its internal elements are to one another" [68]. Cohesion metrics apply to unit design cohesion, or module strength, which refers to the relationships among the elements of a module. Design-level cohesion is defined through six relations between pairs of output components, based on an input-output dependence graph (IODG) representation [38, 8]:

– Coincidental relation (R1): Two module outputs have neither a dependence relationship with each other, nor a dependence on a common input.
R1(o1, o2) = o1 ≠ o2 ∧ ¬(o1 → o2) ∧ ¬(o2 → o1) ∧ ¬∃x [(x → o1) ∧ (x → o2)]

– Conditional relation (R2): Two outputs are c-control dependent on a common input, or one output has c-control dependence on the input and the other has i-control dependence on the input.
R2(o1, o2) = o1 ≠ o2 ∧ ∃x [(x →c o1) ∧ (x →c o2)]

– Iterative relation (R3): Two outputs are i-control dependent on a common input.
R3(o1, o2) = o1 ≠ o2 ∧ ∃x [(x →i o1) ∧ (x →i o2)]

– Communicational relation (R4): Two outputs are dependent on a common input; one has data dependence on the input and the other has either a control or a data dependence.
R4(o1, o2) = o1 ≠ o2 ∧ ∃x [((x →d o1) ∧ (x →d o2)) ∨ ((x →p o1) ∧ (x →q o2))], where p, q ∈ {d, c, i} and p ≠ q

– Sequential relation (R5): One output is dependent on the other output.
R5(o1, o2) = o1 ≠ o2 ∧ [(o1 → o2) ∨ (o2 → o1)]


– Functional relation (R6): There is only one output in the module.
R6(o1, o2) = (o1 = o2)

• Coupling is a measure of the degree to which modules share data; a lower coupling value is better [56].
Mj: Sum of the number of input and output items shared between components i and j.
Zi: Average number of input and output items shared over the m components connected with component i.
n: Number of components in the software product.
Zi = (Σj=1..m Mj) / m.
Coupling = (Σi=1..n Zi) / n.

Modularity in our model

Coupling is defined as the degree of data sharing via common areas, and data coupling is defined as data sharing via parameter lists. Considering that modularity is defined as independence from external factors and coupling as "the degree of interconnectedness of modules" [26], a lower coupling value brings more modularity.
The coupling metric is defined as [26, 56]:
Coupling = (Σi=1..n (Σj=1..m Mj) / m) / n,
where:
Mj = sum of the number of input and output items shared between components i and j;
n = number of components in the software product.
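As an illustration, a C sketch of the coupling computation over a hypothetical sharing matrix (share[i][j] holds the input/output items shared between components i and j; we assume each component is averaged over the m = n − 1 other components):

#include <stdio.h>

#define N 4 /* number of components (hypothetical) */

int main(void)
{
    /* share[i][j]: input/output items shared between components i and j. */
    int share[N][N] = {
        {0, 2, 1, 0},
        {2, 0, 3, 1},
        {1, 3, 0, 0},
        {0, 1, 0, 0},
    };
    double coupling = 0.0;

    for (int i = 0; i < N; i++) {
        double zi = 0.0;               /* total sharing for component i */
        for (int j = 0; j < N; j++)
            if (j != i)
                zi += share[i][j];
        coupling += zi / (N - 1);      /* average over the other components */
    }
    printf("Coupling = %.2f\n", coupling / N);
    return 0;
}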

3.2.13 Maturity Index

When considering the type of available data, the software maturity index is defined as:
Fc: Number of functions or modules that have been changed from the previous delivery.
Fa: Number of functions that have been added.
Fdel: Number of functions that have been deleted.
Mt: Number of functions that make up the baseline.
The software maturity index (SMI) is then defined by:
SMI = (Mt − (Fa + Fc + Fdel)) / Mt.
Or, it can be calculated as:
SMI = (Mt − Fc) / Mt.


3.2.14 Operability

Operability is defined by ISO/IEC 9126-1 [62] (part 1, page 9) as the capability of the software product to enable the user to operate and control it. Operability corresponds to controllability, error tolerance, and conformity with user expectations. The operability metrics take the form:
X = A / B, with 0 ≤ X ≤ 1,
where A and B are corresponding pairs of counts, such as:

– A = number of input items which check for valid data; B = number of input items which could check for valid data.

– A = number of implemented functions which can be cancelled by the user; B = number of functions requiring the pre-cancellation capability.

– A = number of implemented functions which can be undone by the user; B = number of functions.

– A = number of functions which can be customized during operation; B = number of functions requiring the customization capability.

– A = number of functions which can be customized; B = number of functions.

– A = number of functions having status monitoring capability; B = number of functions that are required to have monitoring capability.

– A = number of instances of operations with inconsistent behavior; B = total number of operations.

– A = number of implemented messages with clear explanations; B = number of messages implemented.

– A = number of interface elements which are self-explanatory; B = total number of interface elements.

– A = number of functions implemented with user error tolerance; B = total number of functions requiring the tolerance capability.

3.2.15 Portability

N. E. Fenton defines the portability metric as [23]:
Portability = 1 − (resources needed to move the system to the target environment) / (resources needed to create the system for the resident environment).

3.2.16 Readability

Readability is calculated as the number of misspelled or grammatically incorrect statements. Readability metrics are intended to identify the difficulty of understanding a passage of text. They are often based on features such as the average number of syllables per word and the number of words per sentence. These features ignore concept difficulty and rest on assumptions about writing style that may not hold in all environments [59].
There are many readability metrics; FOG, SMOG, and Flesch-Kincaid are three of the most widely used [59].
The FOG readability metric [52] is defined as:
GradeLevel = 3.0680 + 0.877 · AverageSentenceLength + 0.984 · PercentageOfMonosyllables.
The SMOG readability metric [51] is defined as:
GradeLevel = 3 + √(NumberOfPolysyllableWordsIn30Sentences).
If the document is longer than 30 sentences, the first 10 sentences, the middle 10 sentences, and the last 10 sentences are used. If the document has fewer than 30 sentences, some rules and a conversion table are used to calculate the grade level [51].
The Flesch-Kincaid readability metric [50] is defined as:
GradeLevel = 0.39 · AverageNumberOfWordsPerSentence + 11.80 · AverageNumberOfSyllablesPerWord − 15.59.
The FOG metric is considered suitable for secondary and older primary age groups. The SMOG measure tends to give higher values than other readability metrics [35]. Flesch-Kincaid is used more often than the FOG and SMOG metrics; it is a U.S. Department of Defense standard [50].
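The three formulas transcribe directly into C (the text statistics below are hypothetical):

#include <math.h>
#include <stdio.h>

static double fog(double avg_sentence_len, double pct_monosyllables)
{
    return 3.0680 + 0.877 * avg_sentence_len + 0.984 * pct_monosyllables;
}

static double smog(double polysyllables_in_30_sentences)
{
    return 3.0 + sqrt(polysyllables_in_30_sentences);
}

static double flesch_kincaid(double words_per_sentence, double syllables_per_word)
{
    return 0.39 * words_per_sentence + 11.80 * syllables_per_word - 15.59;
}

int main(void)
{
    /* Hypothetical statistics for one passage of text. */
    printf("FOG  grade = %.1f\n", fog(15.0, 10.0));
    printf("SMOG grade = %.1f\n", smog(25.0));
    printf("F-K  grade = %.1f\n", flesch_kincaid(18.0, 1.5));
    return 0;
}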

3.2.17 Reliability

This measure assesses the system's performance reliability [32]:
U: Utilization.
X: Throughput.
Q: Queue length distribution.
WT: Waiting time distribution.
SE: Mean server efficiency.
RT: Response time distribution.
ST: Service time distribution.
The following measures are applicable to operating systems and networks:
VR: Number of requests per functional job for each server.
W: Waiting time.
S: Server time.
k: Number of servers.
Reliability = Σi=1..k (VR · S)i + Σi=1..k (VR · W)i.

3.2.18 Robustness

Champeaux (1997) defines flexibility as: "Flexibility, also called robustness, of a software system can be defined as the ability to perform even outside an intended application domain, or at least the feature that its functionality degrades gradually outside its domain" [19].

Robustness can be measured along the following options [65]:

• Range of operating conditions (what can be done with it?)

• Amount of invalid behavior with valid input.

• Acceptability of behavior with invalid input.

Donald G. Firesmith measures robustness along the following options [25]:

• Environmental tolerance: Degree to which an executable work product continues to function properly despite existing in an abnormal environment.


• Error tolerance: Degree to which an executable work product continues to function properly despite the presence of erroneous input.

• Failure tolerance: Degree to which an executable work product continues to function properly despite the occurrence of failures, where:

– A failure is the execution of a defect that causes an inconsistency between an executable work product's actual (observed) and expected (specified) behavior.

– A defect may or may not cause a failure, depending on whether the defect is executed and whether exception handling prevents the failure from occurring.

– A fault (also known as a defect or bug) is an underlying flaw in a work product (i.e., a work product that is inconsistent with its requirements, policies, goals, or the reasonable expectations of its customers or users). Defects are typically caused by human errors, and defects have no impact until they cause one or more failures.

Failure tolerance includes the following quality sub-factor:

– Fault tolerance: Degree to which an executable work product continues to function properly despite the presence or execution of defects.

As we saw, there are different takes on robustness, but the definition and computation of robustness closest to the main idea of using design patterns is to compute robustness with the same measurements as reliability in IEEE 982.1 [46, 32].

Before defining the metrics related to robustness, we need some primitive definitions [56]:
F: Total number of unique faults found in a given time interval, resulting in failures of a specified severity level.
KSLOC: Number of source lines of executable code and non-executable data declarations, in thousands.
Di: Total number of unique defects detected during the ith design or code inspection process, or the ith life-cycle phase.
I: Total number of inspections to date.
KSLOD: In the design phase, the number of source lines of design statements, in thousands.
FDi: Fault days for the ith fault.

• Fault density: This measure can be used to perform the following functions [32]:

– "Predict remaining faults by comparison with expected fault density".

– "Determine if sufficient testing has been completed, based on predetermined goals for severity class".

– "Establish standard fault densities for comparison and prediction".

To measure the fault density, we follow these steps:

– Failure types might include input, output (or both), and user. Fault types might result from design, coding, documentation, and initialization.

– Observe and log each failure.

– Determine the program fault(s) that caused the failure. Classify the faults by type.

– Additional faults may be found, resulting in more total faults than observed failures; or one fault may manifest itself through several failures. Thus, fault and failure density may both be measured.

– Determine the total lines of executable code and non-executable data declarations (KSLOC).

– Calculate the fault density for a given severity level as:
Fd = F / KSLOC.

• Defect density: "The defect density measure can be used after design and code inspections of new development or large block modifications. If the defect density is outside the norm after several inspections, it is an indication that the inspection process requires further scrutiny" [32].

Defect density is measured with the following steps:

– Establish a classification scheme for the severity and class of defects.

– For each inspection, record the product size and the total number of unique defects.

– The defect density in the design phase is calculated as 2:
DD = (Σi=1..I Di) / KSLOD.

• Cumulative failure profile: This is a graphical method used to [32]:

– "Predict reliability through the use of failure profiles".

– "Estimate additional testing time to reach an acceptably reliable system".

– "Identify modules and subsystems that require additional testing".

Establish the severity levels for failure designation, and let fi be the total number of failures of a given severity level in a given time interval, i = 1, 2, … Plot the cumulative failures versus a suitable time base. The curve can be derived for the system as a whole, for subsystems, or for modules.

2This measure assumes that a structured design language is used. However, if some other design methodology is used, then some other unit of defect density has to be developed to conform to the methodology in which the design is expressed.


• Fault-days number: This measure represents the number of days that faults stay in a software system, from their creation to their removal, recording for each fault [32]:

– "Phase when the fault was introduced in the system".

– "Date when the fault was introduced in the system".

– "Phase, date, and time when the fault is removed" 3.

The fault-days number is then computed as follows:

– Fault days: For each fault detected and removed, during any phase, the number of days from its creation to its removal is determined.

– The fault days are then summed for all faults detected and removed, to get the fault-days number at system level, including all faults detected/removed up to the delivery date 4.

– A fault introduced during the requirements phase is assumed to have been created at the middle of the requirements phase, because the exact time when the corresponding piece of requirements was specified is not known.

– The measure is calculated as: FD = Σi FDi.
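A minimal C sketch evaluating the three measures above on hypothetical data:

#include <stdio.h>

int main(void)
{
    /* Hypothetical project data. */
    double faults = 18.0;                 /* unique faults of a given severity */
    double ksloc  = 12.5;                 /* thousands of source lines         */
    double defects[] = {7, 5, 3};         /* unique defects per inspection     */
    double kslod  = 4.0;                  /* thousands of design statements    */
    double fault_days[] = {14, 3, 40, 7}; /* days each fault survived          */

    double dd = 0.0, fd_total = 0.0;
    for (int i = 0; i < 3; i++) dd += defects[i];
    for (int i = 0; i < 4; i++) fd_total += fault_days[i];

    printf("Fault density  Fd = %.2f faults/KSLOC\n", faults / ksloc);
    printf("Defect density DD = %.2f defects/KSLOD\n", dd / kslod);
    printf("Fault-days     FD = %.0f\n", fd_total);
    return 0;
}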

3.2.19 Scalability

To better define scalability, we first need some primitive definitions [34]:

• Speedup S: Measures the ratio of the work increase when changing the number of processors from 1 to k 5.

• Efficiency E: E(k) = S(k) / k.

Scalability is then measured as:
φ(k1, k2) = E(k2) / E(k1).

3For more meaningful measures, time units can be made relative to test time or operational time.

4In cases when the creation date of the fault is not known, the fault is assumed to have been created at the middle of the phase in which it was introduced.

5The best value for the speedup is defined as S(k) = k.
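For example, a C sketch computing the scalability φ(k1, k2) from measured speedups (the numbers are hypothetical):

#include <stdio.h>

static double efficiency(double speedup, int k) { return speedup / k; }

int main(void)
{
    /* Hypothetical: speedup 3.4 on 4 processors, 5.8 on 8 processors. */
    double e4 = efficiency(3.4, 4);
    double e8 = efficiency(5.8, 8);
    printf("phi(4,8) = %.2f\n", e8 / e4); /* < 1: efficiency degrades */
    return 0;
}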


3.2.20 Simplicity

A model is considered simple if it has the following aspects [36]:

• The data are simple to collect.

• It is simple in concept.

• It is readily implemented by computer programs.

Simplicity is measured with the following attributes:

• Module size (≤ 200 LOC).

• Number of modules (≤ 100 modules).

• Static graph-theoretic complexity (≤ 50).

3.2.21 Software independence

To measure software independence, we need to consider the following options [16, 56]:

• The number of operating systems with which the software is compatible increases software independence.

• Most procedures use utilities, libraries, and the operating system, which makes the software more dependent on a particular software environment. The degree of dependence on system software utilities is inversely related to software independence.

• "The usage of non-standard constructs or extensions of programming languages provided by particular compilers may impose difficulties in conversion of the system to new or upgraded software environments" [56]:
(number of modules utilizing non-standard constructs) / (total number of modules)

3.2.22 Structuredness

Program structuredness can be computed as [32]:
P1: Total number of modules in the program.
P2: Number of modules dependent on the input or output.
P3: Number of modules dependent on prior processing (state).
P4: Number of database elements.
P5: Number of non-unique database elements.
P6: Number of database segments (partition of the state).
P7: Number of modules not single entrance/single exit.
D1: Design organized top-down; calculated as: Boolean.
D2: Module dependence; calculated as: P2 / P1.
D3: Modules dependent on prior processing; calculated as: P3 / P1.
D4: Database size; calculated as: P5 / P4.
D5: Database compartmentalization; calculated as: P6 / P4.
D6: Module single entrance/single exit; calculated as: P7 / P1.
Wi: Assigned by the user based on the priority of each associated derivative; a numerical value between 0 and 1, with Σ Wi = 1.
DSM = Σi=1..6 Wi · Di.

3.2.23 Traceability

This measure aids in identifying requirements, in the code, design, or architecture, that are either missing from, or in addition to, the original requirements [32].
R1: Number of requirements met.
R2: Number of original requirements.
TM = (R1 / R2) · 100.

3.2.24 Understandability

Understandability in design

This measure is used to determine the simplicity of the detailed design of a software program. It uses the following primitives:

• Number of nodes (sequential groups of program statements).

• Number of edges (program flows between nodes).

• Number of splitting nodes (nodes with more than one edge emanating from them).

• Number of regions (areas bounded by edges with no edges crossing).

The values determined for the primitives can be used to identify problem areas within the software design [32]:
D1: Design organized top-down (Boolean).


D2: Module dependence = (Number of modules dependent on the input or output) / (Total number of modules in the program).
D3: Modules dependent on prior processing = (Number of modules dependent on prior processing) / (Total number of modules in the program).
D4: Database size = (Number of non-unique database elements) / (Number of database elements).
D5: Database compartmentalization = (Number of database segments) / (Number of database elements).
D6: Module single entrance/single exit = (Number of modules not single entrance/single exit) / (Total number of modules in the program).
Wi: Weight given to the ith derived measure.
DSM = Σi=1..6 Wi · Di.

Understandability in code

From the programming viewpoint, understandability is the characteristic directly related to the program code. Briand et al. [10] define understandability as:

• "to understand a method or class, we must know about the services the class uses".

• "understandability is influenced by the number of services used. It should not matter if the server classes are stable or not".

• "to understand a class, we need to know the functionality of the services directly used by the class".

• "To analyze understandability, we do not need to account for polymorphism".

General metrics to measure understandability

Understandability relates to human abilities and is difficult to measure, but "the following metrics are widely accepted as predictors of understandability of the source code and the architecture" [30]:

• DIT (Depth of Inheritance Tree): the maximum depth of the inheritance graph of each class [67], i.e., the maximum length from the node to the root of the tree [14].

– DIT relates to Bunge's notion of the scope of properties [14].

– DIT is a measure of how many ancestor classes can potentially affect this class [14].

– The deeper a class is in the hierarchy, the greater the number of methods it is likely to inherit, making it more complex to predict its behavior [14].

– Deeper trees constitute greater design complexity, since more methods and classes are involved [14].


– The deeper a particular class is in the hierarchy, the greater the potential reuse of inherited methods [14].

"A recommended DIT is 5 or less. The Visual Studio .NET documentation recommends that DIT <= 5, because excessively deep class hierarchies are complex to develop. Some sources allow up to 8. Special cases: when a class inherits directly from System.Object (or has no Inherits statement), DIT = 1. For a class that inherits from an unknown (unanalyzed) class, DIT = 2, because the unknown class eventually inherits from System.Object and 2 is the minimum inheritance depth 6" [2].

• NOM (Number of Methods): A simple count of the methods in a class definition [4].

• NOI (Number of Interfaces [30]): Calculated from the class diagram.

• MCC (McCabe cyclomatic complexity): This metric is calculated from the program module's control-graph representation 7. A program's complexity is related to the number of control paths (more if, while, … statements make the program more complicated), but "it is generally not possible to count the total number of paths through a given program since backward branches lead to a potentially infinite number of control paths. Instead, McCabe's cyclomatic complexity is defined as the number of basic paths through a program control graph" [66]. To measure McCabe's cyclomatic complexity, we need the following definitions:

– N = number of nodes; a sequential group of program statements.

– E = number of edges; program flow between nodes.

– SN = number of splitting nodes; a node with more than one edge emanating from it.

– RG = number of regions; in a graph with no edges crossing, an area bounded by edges.

VanderWiel et al. [66] expressed the complexity as 8:
V(G) = E − N + 2.
IEEE 982.1 [32] measures the cyclomatic complexity as:
C = E − N + 1.
"The cyclomatic complexity is also equivalent to the number of regions (RG), or the number of splitting nodes plus one (SN + 1). If a program contains an N-way predicate, such as a CASE statement with N cases, the N-way predicate contributes N − 1 to the count of SN" [32]. In the following, we have other metrics, introduced by the Project Analyzer group, for evaluating understandability [2]:

6For all VB Classic classes, DIT = 0 because no inheritance is available.

7The graph nodes represent straight-line blocks of code, and the edges represent program branches.

8A complete example for measuring the cyclomatic complexity is given in [66].

– LEN (Length of names): Average length of names, including:

∗ LENV (Length of variable names): Average length of variable names.

∗ LENC (Length of constant names): Average length of constant names.

∗ LENP (Length of procedure names): Average length of procedure names.

– UNIQ (Name uniqueness ratio) 9:
UNIQ = (Number of unique names) / (Total number of names).

3.2.25 Usability

L. Arthur [3] describes usability with human-related definitions, as follows:

• Can the user/customer learn and use the system easily?

• Can operations run it?

These definitions are closest to McCall's: usability is mostly environment- and human-related, so McCall's model is acceptable for our model too.

9When two program entities have the same name, it is possible that they get mixed up. UNIQ measures the uniqueness of all names [2].


3.3 Our Model in a Nutshell

To define proper attributes and metrics for our model, we first start with standard definitions from IEEE and ISO/IEC; if we do not find a match for the characteristics we are looking for, we try to match them with other models.

• Usability: ISO/IEC defines usability as the part of the quality characteristics related to the following attributes:

– Understandability

– Learnability

– Operability

In support of this definition, McCall's model defines usability through:

– Operability

– Training

– Communicativeness

To cover understandability's attributes, Boehm's model defines understandability as a characteristic related to:

– Structuredness

– Conciseness

– Legibility

• Reusability: McCall's model defines reusability as a characteristic related to the following attributes:

– Software system independence

– Machine independence

– Generality

– Modularity

• Flexibility: McCall's model defines flexibility as a characteristic related to the following attributes:

– Self Descriptiveness

– Expendability

– Generality

– Modularity


• Scalability: C. Smith and L. Williams define scalability as "the ability of a system to continue to meet its response time or throughput objectives as the demand for the software functions increases" [15]; considering the vertical definition, scalability is increased by "levels of processing power and application performance" [44].
With those definitions, we consider processing level and application performance as attributes related to scalability.

• Robustness: Donald G. Firesmith, in his technical note [25], defines robustness as a characteristic related to:

– Environmental tolerance

– Error tolerance

– Failure tolerance

∗ Fault tolerance

Considering these characteristics and their related attributes, we propose our model in Figure 3.3.

3.4 Enhancing our Model

Before adapting the software metrics to each attribute in our model, we need to consider that design patterns are only (or mostly) defined for the area of object-oriented programming [22]. Taking into consideration that object-oriented programming originated around 1987 [55], McCall's model (1976-77) and Boehm's model (1978) could never have taken object-oriented programming into account.

It is therefore necessary to review each characteristic, attribute, and metric defined in our model and to redefine them from the viewpoint of object-oriented software implementation.

Some of the characteristics defined for our model can be measured from different viewpoints; for example, learnability or understandability have different definitions from the user's view and from the developer's view. In the remainder of this report, we mostly consider the developers' view.

Considering the characteristics and attributes related to object-oriented programming, we obtain our modified model (Figure 3.4).


Figure 3.3: Model for assessing the quality of software implemented with design patterns.


Figure 3.4: Modified model for assessing the quality of software implemented with design patterns.


Chapter 4

Design Patterns

4.1 Introduction

Design patterns are high-level building blocks which promote elegance in software by offering proven and timeless solutions to common problems in software design. Design patterns convey the experience of software designers.

In the following, we describe design patterns defined to manage object creation, to compose objects in larger structures, and to coordinate message flow among objects.

4.2 Why Design Patterns?

We need to reuse good solutions to design problems because reusable and flexible designs are difficult to conceive. Thus, design patterns1 [22] make object-oriented designs more:

• Flexible.

• Elegant.

• Ultimately Reusable.

4.3 Quality Characteristics related to Design Patterns

Gamma et al., in "Design Patterns: Elements of Reusable Object-Oriented Software", define design patterns as: "[…] patterns solve specific design problems and make object-oriented designs more flexible, elegant, and ultimately reusable" and "Design patterns help you choose design alternatives that make a system reusable

1Design patterns describe good solutions to problems which occur over and over again [22].


and avoid alternatives that compromise reusability. Design patterns can even improve the documentation and maintenance of existing systems by furnishing an explicit specification of class and object interactions and their underlying intent" [22].

Software elegance is defined as maximizing the information delivered through the simplest possible interface; issues of elegance in software are reflected in robustness, scalability, flexibility, and usability [13].

When considering these definitions, design patterns claim to bring:

• Flexibility.

• Elegance:

– Robustness.
– Scalability.
– Flexibility.
– Usability.

• Reusability.

Thus, we can consider that design patterns increase the following quality characteristics:

• Flexibility.

• Scalability.

• Usability.

• Reusability.

• Robustness.

Thus, we have some quality characteristics that are supposed to be increased when using design patterns to develop a software product. To assess the truthfulness of this claim, we need a quality model that contains these quality characteristics and that relates metrics with them to make them measurable.

4.4 Quality evaluation of Design Patterns

The most well-known design patterns are those introduced by Gamma et al. [22], who categorized twenty-three design patterns in three categories2: creational, structural, and behavioral design patterns [18].

We summarize the twenty-three design patterns and evaluate their quality characteristics manually, using a five-level Likert scale (Excellent, Good, Fair, Bad, and Very bad); we also use "N/A" for characteristics that are not applicable to a given design pattern.

2All subsequent definitions and pictures are from [22] and [18].


4.4.1 Creational Design Patterns

Abstract Factory

• Intent: Provide an interface for creating families of related or dependent objects without specifying their concrete classes.

• Applicability: Use the Abstract Factory pattern when:

– A system should be independent of how its products are created, composed, and represented.

– A system should be configured with one of multiple families of products.

– A family of related product objects is designed to be used together, and you need to enforce this constraint.

– You want to provide a class library of products, and you want to reveal just their interfaces, not their implementations.

• Structure:

Figure 4.1: Abstract Factory UML-like class diagram

• Consequences: The Abstract Factory design pattern has the following benefits and liabilities:

– It isolates concrete classes.

– It makes exchanging product families easy.

– It promotes consistency among products.

– Supporting new kinds of products is difficult.


• Evaluation:

Quality Characteristics     Values
Expendability               Excellent
Simplicity                  Excellent
Generality                  Good
Modularity                  Good
Software Independence       Fair
Hardware Independence       Fair
Learnability                Good
Understandability           Good
Operability                 Good
Scalability                 Good
Robustness                  Good

Builder

• Intent: Separate the construction of a complex object from its representation so that the same construction process can create different representations.

• Applicability: Use the Builder pattern when:

– The algorithm for creating a complex object should be independent of the parts that make up the object and how they are assembled.

– The construction process must allow different representations for the object that is constructed.

• Structure:

Figure 4.2: Builder design pattern UML-like class diagram

• Consequences:

– It lets you vary a product’s internal representation.

– It isolates code for construction and representation.

– It gives you finer control over the construction process.

• Evaluation


Quality Characteristics     Values
Expendability               Good
Simplicity                  Good
Generality                  Fair
Modularity                  Fair
Software Independence       N/A
Hardware Independence       N/A
Learnability                Fair
Understandability           Good
Operability                 Fair
Scalability                 Good
Robustness                  Good

Factory Method

• Intent: Define an interface for creating an object, but let subclasses decide which class to instantiate. Factory Method lets a class defer instantiation to subclasses.

• Applicability: Use the Factory Method pattern when:

– A class cannot anticipate the class of objects it must create.

– A class wants its subclasses to specify the objects it creates.

– Classes delegate responsibility to one of several helper subclasses, and you want to localize the knowledge of which helper subclass is the delegate.

• Structure:

Figure 4.3: Factory Method design pattern UML-like class diagram

• Consequences:

– Provides hooks for subclasses.

– Connects parallel class hierarchies.

• Evaluation


Quality Characteristics     Values
Expendability               Bad
Simplicity                  Bad
Generality                  Fair
Modularity                  Good
Software Independence       N/A
Hardware Independence       N/A
Learnability                Good
Understandability           Good
Operability                 Good
Scalability                 Good
Robustness                  Good

Prototype

• Intent: Specify the kinds of objects to create using a prototypical instance, and create new objects by copying this prototype.

• Applicability: Use the Prototype pattern when a system should be independent of how its products are created, composed, and represented; and:

– When the classes to instantiate are specified at run-time, for example, by dynamic loading.

– To avoid building a class hierarchy of factories that parallels the class hierarchy of products.

– When instances of a class can have one of only a few different combinations of state. It may be more convenient to install a corresponding number of prototypes and clone them rather than instantiating the class manually, each time with the appropriate state.

• Structure:

Figure 4.4: Prototype design pattern UML-like class diagram

• Consequences


– Adding and removing products at run-time.

– Specifying new objects by varying values.

– Specifying new objects by varying structure.

– Reduced sub-classing.

– Configuring an application with classes dynamically.

• Evaluation

Quality Characteristics     Values
Expendability               Excellent
Simplicity                  Good
Generality                  Fair
Modularity                  Good
Software Independence       N/A
Hardware Independence       N/A
Learnability                Fair
Understandability           Good
Operability                 Fair
Scalability                 Excellent
Robustness                  Good

Singleton

• Intent: Ensure a class has only one instance, and provide a global point of access to it.

• Applicability: Use the Singleton pattern when:

– There must be exactly one instance of a class, and it must be accessible to clients from a well-known access point.

– When the sole instance should be extensible by sub-classing, and clients should be able to use an extended instance without modifying their code.

• Structure:

Figure 4.5: Singleton design pattern UML-like class diagram

• Consequences: The Singleton pattern has several benefits:


– Controlled access to sole instance.

– Reduced name space.

– Permits refinement of operations and representation.

– Permits a variable number of instances.

– More flexible than class operations.

• Evaluation

Quality Characteristics     Values
Expendability               Bad
Simplicity                  Very bad
Generality                  Fair
Modularity                  Excellent
Software Independence       Fair
Hardware Independence       Fair
Learnability                Fair
Understandability           Fair
Operability                 Fair
Scalability                 Good
Robustness                  Good

4.4.2 Structural Design Patterns

Adapter

• Intent: Convert the interface of a class into another interface clients expect. Adapter lets classes work together that could not otherwise because of incompatible interfaces.

• Applicability: Use the Adapter pattern when:

– You want to use an existing class, and its interface does not match the one you need.

– You want to create a reusable class that cooperates with unrelated or unforeseen classes, that is, classes that do not necessarily have compatible interfaces.

– (Object adapter only) You need to use several existing subclasses, but it is impractical to adapt their interface by subclassing every one. An object adapter can adapt the interface of its parent class.

• Structure:

• Consequences: Issues to consider when applying the Adapter pattern include:

– How much adapting does Adapter do?

– Pluggable adapters.

– Using two-way adapters to provide transparency.


Figure 4.6: UML class diagram for Adapter pattern

• Evaluation

Quality Characteristics     Values
Expendability               Fair
Simplicity                  Fair
Generality                  Bad
Modularity                  Good
Software Independence       Good
Hardware Independence       N/A
Learnability                Good
Understandability           Fair
Operability                 Fair
Scalability                 Good
Robustness                  Fair

Bridge

• Intent: Decouple an abstraction from its implementation so that the two can vary independently.

• Applicability: Use the Bridge pattern when

– you want to avoid a permanent binding between an abstraction and its implementation.

– both the abstractions and their implementations should be extensible by subclassing.

– changes in the implementation of an abstraction should have no impact on clients; that is, their code should not have to be recompiled.

– (C++) you want to hide the implementation of an abstraction completely from clients. In C++, the representation of a class is visible in the class interface.

– you have a proliferation of classes, as shown earlier in the first Motivation diagram. Such a class hierarchy indicates the need for splitting an object into two parts.

– you want to share an implementation among multiple objects (perhaps using reference counting), and this fact should be hidden from the client.

• Structure:

Figure 4.7: UML class diagram for Bridge pattern

• Consequences: The Bridge pattern has the following consequences:

– Decoupling interface and implementation

– Improved extensibility

– Hiding implementation details from clients

• Evaluation

Quality Characteristics     Values
Expendability               Good
Simplicity                  Fair
Generality                  Good
Modularity                  Good
Software Independence       N/A
Hardware Independence       N/A
Learnability                Fair
Understandability           Fair
Operability                 Good
Scalability                 Good
Robustness                  Good

Composite

• Intent: Compose objects into tree structures to represent part-whole hierarchies. Composite lets clients treat individual objects and compositions of objects uniformly.

• Applicability: Use the Composite pattern when


– you want to represent part-whole hierarchies of objects

– you want clients to be able to ignore the difference between compositions of objects and individual objects. Clients will treat all objects in the composite structure uniformly.

• Structure: A typical Composite object structure might look like this:

Figure 4.8: UML class diagram for Composite pattern

Figure 4.9: Typical Composite object structure

• Consequences: The Composite pattern

– defines class hierarchies consisting of primitive objects and composite objects

– makes the client simple. Clients can treat composite structures and individual objects uniformly

– makes it easier to add new kinds of components

– can make your design overly general

• Evaluation


Quality Characteristics     Values
Expendability               Fair
Simplicity                  Fair
Generality                  N/A
Modularity                  Fair
Software Independence       N/A
Hardware Independence       N/A
Learnability                Fair
Understandability           Good
Operability                 N/A
Scalability                 N/A
Robustness                  Good

Decorator

• Intent: Attach additional responsibilities to an object dynamically. Decorators provide a flexible alternative to subclassing for extending functionality.

• Applicability: Use Decorator

– to add responsibilities to individual objects dynamically and transparently, that is, without affecting other objects.

– for responsibilities that can be withdrawn.

– when extension by subclassing is impractical. Sometimes a large number of independent extensions are possible and would produce an explosion of subclasses to support every combination. Or a class definition may be hidden or otherwise unavailable for subclassing.

• Structure:

Figure 4.10: UML class diagram for Decorator pattern


• Consequences: The Decorator pattern has at least two key benefits and two liabilities:

– More flexibility than static inheritance

– Avoids feature-laden classes high up in the hierarchy

– A decorator and its component aren’t identical

– Lots of little objects

• Evaluation

Quality Characteristics     Values
Expendability               Excellent
Simplicity                  Excellent
Generality                  Good
Modularity                  Fair
Software Independence       Good
Hardware Independence       N/A
Learnability                Good
Understandability           Good
Operability                 Good
Scalability                 Good
Robustness                  Fair

Facade

• Intent: Provide a unified interface to a set of interfaces in a subsystem. Facade defines a higher-level interface that makes the subsystem easier to use.

• Applicability: Use the Facade pattern when

– you want to provide a simple interface to a complex subsystem.

– there are many dependencies between clients and the implementation classes of an abstraction.

– you want to layer your subsystems.

• Structure:

• Consequences: The Facade pattern offers the following benefits:

– It shields clients from subsystem components, thereby reducing the number of objects that clients deal with and making the subsystem easier to use.

– It promotes weak coupling between the subsystem and its clients.

– It does not prevent applications from using subsystem classes if they need to.


Figure 4.11: UML class diagram for Facade pattern

• Evaluation

Quality Characteristics     Values
Expendability               Good
Simplicity                  Good
Generality                  Good
Modularity                  Good
Software Independence       N/A
Hardware Independence       N/A
Learnability                Fair
Understandability           Good
Operability                 Fair
Scalability                 Fair
Robustness                  Fair

Flyweight

• Intent: Use sharing to support large numbers of fine-grained objects efficiently.

• Applicability: The Flyweight pattern’s effectiveness depends heavily onhow and where it’s used. Apply the Flyweight pattern when all of thefollowing are true:

– An application uses a large number of objects

– Storage costs are high because of the sheer quantity of objects

– Most object state can be made extrinsic

– Many groups of objects may be replaced by relatively few shared objects once extrinsic state is removed

– The application doesn’t depend on object identity. Since flyweight objects may be shared, identity tests will return true for conceptually distinct objects

• Structure: The following object diagram shows how flyweights are shared:

Figure 4.12: UML class diagram for Flyweight pattern

Figure 4.13: Object diagram showing how flyweights are shared

• Consequences: Flyweights may introduce run-time costs associated with transferring, finding, and/or computing extrinsic state, especially if it was formerly stored as intrinsic state. However, such costs are offset by space savings, which increase as more flyweights are shared. Storage savings are a function of several factors:

– the reduction in the total number of instances that comes from sharing

– the amount of intrinsic state per object

– whether extrinsic state is computed or stored

• Evaluation

Quality Characteristics   Values
Expendability             Bad
Simplicity                Bad
Generality                Fair
Modularity                Good
Software Independence     N/A
Hardware Independence     N/A
Learnability              Good
Understandability         Bad
Operability               Fair
Scalability               Good
Robustness                Good
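
To show the intrinsic/extrinsic split in code, here is a minimal Java sketch; the Glyph and GlyphFactory names are our own assumption for illustration.

    import java.util.HashMap;
    import java.util.Map;

    // Flyweight: intrinsic state (the character) is immutable and shared;
    // extrinsic state (the position) is supplied by the client at each use.
    class Glyph {
        private final char symbol; // intrinsic, shared
        Glyph(char symbol) { this.symbol = symbol; }
        void draw(int position) { // extrinsic, passed in
            System.out.println(symbol + " at " + position);
        }
    }

    // Flyweight factory: hands out the one shared instance per character.
    class GlyphFactory {
        private final Map<Character, Glyph> pool = new HashMap<>();
        Glyph get(char c) {
            return pool.computeIfAbsent(c, Glyph::new);
        }
    }

Because factory.get('a') always returns the same object, identity tests compare equal for conceptually distinct characters in a document, which is exactly the caveat listed under applicability.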

Proxy

• Intent: Provide a surrogate or placeholder for another object to control access to it.

• Applicability: Proxy is applicable whenever there is a need for a more versatile or sophisticated reference to an object than a simple pointer. Here are several common situations in which the Proxy pattern is applicable:

– A remote proxy provides a local representative for an object in a different address space

– A virtual proxy creates expensive objects on demand

– A protection proxy controls access to the original object. Protection proxies are useful when objects should have different access rights

– A smart reference is a replacement for a bare pointer that performs additional actions when an object is accessed

• Structure:

Figure 4.14: UML class diagram for Proxy pattern

Here is a possible object diagram of a proxy structure at run-time:

Figure 4.15: Object diagram of a proxy structure at run-time

• Consequences: The Proxy pattern introduces a level of indirection when accessing an object. The additional indirection has many uses, depending on the kind of proxy:

– A remote proxy can hide the fact that an object resides in a different address space

– A virtual proxy can perform optimizations such as creating an object on demand

– Both protection proxies and smart references allow additional housekeeping tasks when an object is accessed

• Evaluation

Quality Characteristics   Values
Expendability             Good
Simplicity                Bad
Generality                Fair
Modularity                Good
Software Independence     N/A
Hardware Independence     N/A
Learnability              Fair
Understandability         Bad
Operability               Good
Scalability               Good
Robustness                Fair
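
As an illustration of a virtual proxy, here is a minimal Java sketch; the Image names are hypothetical and stand in for any object that is expensive to create.

    // Subject: the interface shared by the real object and its proxy.
    interface Image {
        void display();
    }

    class RealImage implements Image {
        private final String file;
        RealImage(String file) {
            this.file = file;
            System.out.println("loading " + file); // expensive work
        }
        public void display() { System.out.println("showing " + file); }
    }

    // Virtual proxy: defers creating the expensive object until first use.
    class ImageProxy implements Image {
        private final String file;
        private RealImage real; // created on demand
        ImageProxy(String file) { this.file = file; }
        public void display() {
            if (real == null) real = new RealImage(file);
            real.display();
        }
    }

Clients hold an Image and never learn whether it is the real object or the proxy; the costly loading happens only if display() is actually called.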

4.4.3 Behavioral Design Patterns

Chain of Responsibility

• Intent: Avoid coupling the sender of a request to its receiver by giving more than one object a chance to handle the request. Chain the receiving objects and pass the request along the chain until an object handles it.

• Applicability: Use Chain of Responsibility when

– more than one object may handle a request, and the handler isn’t known a priori. The handler should be ascertained automatically

– you want to issue a request to one of several objects without specifying the receiver explicitly

– the set of objects that can handle a request should be specified dynamically

• Structure: A typical object structure might look like this:

Figure 4.16: UML class diagram for Chain of Responsibility pattern

Figure 4.17: A typical object structure

• Consequences: Chain of Responsibility has the following benefits and liabilities:

– Reduced coupling

– Added flexibility in assigning responsibilities to objects

– Receipt isn’t guaranteed

• Evaluation

Quality Characteristics   Values
Expendability             Good
Simplicity                Good
Generality                Good
Modularity                Bad
Software Independence     N/A
Hardware Independence     N/A
Learnability              Fair
Understandability         Fair
Operability               Good
Scalability               Bad
Robustness                Fair
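
The forwarding mechanism can be sketched in a few lines of Java; the Handler and LevelHandler names are our own assumption for illustration.

    // Handler: holds a successor and either handles a request or forwards it.
    abstract class Handler {
        private final Handler next; // successor in the chain, may be null
        Handler(Handler next) { this.next = next; }
        void handle(int level) {
            if (!tryHandle(level) && next != null) {
                next.handle(level); // pass the request along the chain
            }
        }
        abstract boolean tryHandle(int level);
    }

    // Concrete handler that handles requests up to a given severity.
    class LevelHandler extends Handler {
        private final String name;
        private final int max;
        LevelHandler(String name, int max, Handler next) {
            super(next);
            this.name = name;
            this.max = max;
        }
        boolean tryHandle(int level) {
            if (level > max) return false; // not competent; forward
            System.out.println(name + " handled level " + level);
            return true;
        }
    }

For example, new LevelHandler("console", 1, new LevelHandler("file", 5, null)).handle(3) is forwarded past the console handler and handled by the file handler. Note that if no handler is competent the request silently falls off the end, which is the “receipt isn’t guaranteed” liability above.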

Command

• Intent: Encapsulate a request as an object, thereby letting you parameterize clients with different requests, queue or log requests, and support undoable operations.

• Applicability: Use the Command pattern when you want to

– parameterize objects by an action to perform, as MenuItem objects did above

– specify, queue, and execute requests at different times

– support undo. The Command’s Execute operation can store state for reversing its effects in the command itself

– support logging changes so that they can be reapplied in case of a system crash

– structure a system around high-level operations built on primitive operations

• Structure:

Figure 4.18: UML class diagram for Command pattern

• Consequences: The Command pattern has the following consequences:

– Command decouples the object that invokes the operation from the one that knows how to perform it

– Commands are first-class objects. They can be manipulated and extended like any other object

– You can assemble commands into a composite command

• Evaluation

Quality Characteristics   Values
Expendability             Good
Simplicity                Bad
Generality                N/A
Modularity                N/A
Software Independence     N/A
Hardware Independence     N/A
Learnability              Bad
Understandability         Very bad
Operability               Good
Scalability               Good
Robustness                Good
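
A minimal Java sketch of a command with undo support follows; the Counter, IncrementCommand, and Invoker names are hypothetical.

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Receiver: knows how to perform the actual work.
    class Counter {
        int value;
    }

    // Command: the request as a first-class object, with execute and undo.
    interface Command {
        void execute();
        void undo();
    }

    class IncrementCommand implements Command {
        private final Counter counter;
        IncrementCommand(Counter counter) { this.counter = counter; }
        public void execute() { counter.value++; }
        public void undo() { counter.value--; }
    }

    // Invoker: runs commands and keeps a history so they can be undone.
    class Invoker {
        private final Deque<Command> history = new ArrayDeque<>();
        void run(Command command) {
            command.execute();
            history.push(command);
        }
        void undoLast() {
            if (!history.isEmpty()) history.pop().undo();
        }
    }

Because each executed command is stored, the same history mechanism supports queuing, logging, and multi-level undo.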

Interpreter

• Intent: Given a language, define a representation for its grammar along with an interpreter that uses the representation to interpret sentences in the language.

• Applicability: Use the Interpreter pattern when there is a language to interpret, and you can represent statements in the language as abstract syntax trees. The Interpreter pattern works best when

– the grammar is simple

– efficiency is not a critical concern

• Structure:

Figure 4.19: UML class diagram for Interpreter pattern

• Consequences: The Interpreter pattern has the following benefits and liabilities:

– It’s easy to change and extend the grammar

– Implementing the grammar is easy, too

– Complex grammars are hard to maintain

– Adding new ways to interpret expressions

• Evaluation

Quality Characteristics   Values
Expendability             Good
Simplicity                Fair
Generality                Good
Modularity                Fair
Software Independence     N/A
Hardware Independence     N/A
Learnability              Fair
Understandability         Fair
Operability               Good
Scalability               Good
Robustness                Fair
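
For a simple grammar of integer additions, the abstract syntax tree and its interpreter can be sketched in Java as follows; the Expression, NumberLiteral, and Add names are our own assumption.

    // Abstract expression: one interpret operation per grammar rule.
    interface Expression {
        int interpret();
    }

    // Terminal expression.
    class NumberLiteral implements Expression {
        private final int value;
        NumberLiteral(int value) { this.value = value; }
        public int interpret() { return value; }
    }

    // Nonterminal expression: interprets its subexpressions recursively.
    class Add implements Expression {
        private final Expression left, right;
        Add(Expression left, Expression right) {
            this.left = left;
            this.right = right;
        }
        public int interpret() { return left.interpret() + right.interpret(); }
    }

Evaluating new Add(new NumberLiteral(1), new Add(new NumberLiteral(2), new NumberLiteral(3))).interpret() yields 6; extending the grammar means adding one more Expression class.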

Iterator

• Intent: Provide a way to access the elements of an aggregate object sequentially without exposing its underlying representation.

• Applicability: Use the Iterator pattern

– to access an aggregate object’s contents without exposing its internal representation.

– to support multiple traversals of aggregate objects.

– to provide a uniform interface for traversing different aggregate structures (that is, to support polymorphic iteration).

• Structure:

Figure 4.20: UML class diagram for Iterator pattern

• Consequences: The Iterator pattern has three important consequences:

– It supports variations in the traversal of an aggregate

– Iterators simplify the Aggregate interface

– More than one traversal can be pending on an aggregate

• Evaluation

Quality Characteristics   Values
Expendability             Excellent
Simplicity                Excellent
Generality                Good
Modularity                Fair
Software Independence     Good
Hardware Independence     N/A
Learnability              Good
Understandability         Fair
Operability               Fair
Scalability               Good
Robustness                Good
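
The following minimal Java sketch shows traversal state moved out of the aggregate; we define a small Cursor interface of our own rather than java.util.Iterator to keep the mechanism explicit.

    // Iterator interface: a minimal stand-in for java.util.Iterator.
    interface Cursor<T> {
        boolean hasNext();
        T next();
    }

    // Aggregate: exposes traversal without exposing the array inside.
    class WordList {
        private final String[] words;
        WordList(String... words) { this.words = words; }
        Cursor<String> cursor() {
            return new Cursor<String>() {
                private int index; // traversal state lives in the iterator
                public boolean hasNext() { return index < words.length; }
                public String next() { return words[index++]; }
            };
        }
    }

Because each call to cursor() returns a fresh iterator, several traversals of the same WordList can be pending at once, which is the third consequence listed above.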

Mediator

• Intent: Define an object that encapsulates how a set of objects interact. Mediator promotes loose coupling by keeping objects from referring to each other explicitly, and it lets you vary their interaction independently.

• Applicability: Use the Mediator pattern when

– a set of objects communicate in well-defined but complex ways

– reusing an object is difficult because it refers to and communicates with many other objects

– a behavior that’s distributed between several classes should be customizable without a lot of subclassing

• Structure: A typical object structure might look like this:

• Consequences: The Mediator pattern has the following benefits and drawbacks:

– It limits subclassing

– It decouples colleagues

– It simplifies object protocols

– It abstracts how objects cooperate

– It centralizes control

Figure 4.21: UML class diagram for Mediator pattern

Figure 4.22: A typical object structure

• Evaluation

Quality Characteristics   Values
Expendability             Good
Simplicity                Fair
Generality                Good
Modularity                Good
Software Independence     N/A
Hardware Independence     N/A
Learnability              Fair
Understandability         Fair
Operability               Good
Scalability               Good
Robustness                Fair
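
A chat room is a common way to illustrate the centralized interaction; the ChatRoom and User names in this Java sketch are our own assumption, not the report’s example.

    import java.util.ArrayList;
    import java.util.List;

    // Mediator: colleagues talk only to the mediator, never to each other.
    class ChatRoom {
        private final List<User> users = new ArrayList<>();
        void join(User user) { users.add(user); }
        void broadcast(User sender, String message) {
            for (User user : users) {
                if (user != sender) user.receive(message);
            }
        }
    }

    // Colleague: holds a reference to its mediator only.
    class User {
        private final String name;
        private final ChatRoom room;
        User(String name, ChatRoom room) {
            this.name = name;
            this.room = room;
            room.join(this);
        }
        void send(String message) { room.broadcast(this, name + ": " + message); }
        void receive(String message) { System.out.println(name + " got " + message); }
    }

Adding or removing a participant never touches the other User objects; all interaction logic is centralized in ChatRoom, for better and for worse.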

Memento

• Intent: Without violating encapsulation, capture and externalize an object’s internal state so that the object can be restored to this state later.

• Applicability: Use the Memento pattern when

– a snapshot of (some portion of) an object’s state must be saved so that it can be restored to that state later, and

– a direct interface to obtaining the state would expose implementation details and break the object’s encapsulation

• Structure:

Figure 4.23: UML class diagram for Memento pattern

• Consequences: The Memento pattern has several consequences:

– Preserving encapsulation boundaries

– It simplifies Originator

– Using mementos might be expensive

– Defining narrow and wide interfaces

– Hidden costs in caring for mementos

• Evaluation

Quality Characteristics   Values
Expendability             Good
Simplicity                Fair
Generality                Fair
Modularity                Very bad
Software Independence     N/A
Hardware Independence     N/A
Learnability              Bad
Understandability         Fair
Operability               Good
Scalability               Fair
Robustness                Bad
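
The encapsulation-preserving snapshot can be sketched in Java with a nested class; the Editor name is hypothetical. Only the originator can read the memento’s state.

    // Originator: creates mementos of its state and restores from them.
    class Editor {
        private String text = "";
        void type(String s) { text += s; }
        String text() { return text; }

        Memento save() { return new Memento(text); }
        void restore(Memento m) { text = m.state; }

        // Memento: opaque to every class except the enclosing originator,
        // since its constructor and field are private.
        static class Memento {
            private final String state;
            private Memento(String state) { this.state = state; }
        }
    }

A caretaker can hold Editor.Memento objects and pass them back to restore(), but it cannot inspect or alter the saved text, which is the narrow interface the pattern prescribes.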

Observer

• Intent: Define a one-to-many dependency between objects so that when one object changes state, all its dependents are notified and updated automatically.

• Applicability: Use the Observer pattern in any of the following situations:

– When an abstraction has two aspects, one dependent on the other

– When a change to one object requires changing others, and you don’t know how many objects need to be changed

– When an object should be able to notify other objects without making assumptions about who these objects are

• Structure:

Figure 4.24: UML class diagram for Observer pattern

• Consequences: Further benefits and liabilities of the Observer pattern include the following:

– Abstract coupling between Subject and Observer

– Support for broadcast communication

– Unexpected updates

• Evaluation

Quality Characteristics   Values
Expendability             Excellent
Simplicity                Good
Generality                Excellent
Modularity                N/A
Software Independence     N/A
Hardware Independence     N/A
Learnability              Fair
Understandability         Good
Operability               Good
Scalability               Good
Robustness                Good
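
A minimal Java sketch of the notification mechanism follows; the Sensor name and the integer payload are illustrative assumptions.

    import java.util.ArrayList;
    import java.util.List;

    // Observer: gets notified when the subject changes.
    interface Observer {
        void update(int newValue);
    }

    // Subject: keeps a list of observers and notifies them on each change.
    class Sensor {
        private final List<Observer> observers = new ArrayList<>();
        void attach(Observer observer) { observers.add(observer); }
        void setValue(int value) {
            for (Observer observer : observers) {
                observer.update(value); // broadcast to all dependents
            }
        }
    }

Since Observer has a single method, a client can subscribe with a lambda, e.g. sensor.attach(v -> System.out.println("got " + v)); the subject never learns who its dependents are.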

State

• Intent: Allow an object to alter its behavior when its internal state changes. The object will appear to change its class.

• Applicability: Use the State pattern in either of the following cases:

– An object’s behavior depends on its state, and it must change its behavior at run-time depending on that state

– Operations have large, multipart conditional statements that depend on the object’s state

– Often, several operations will contain this same conditional structure

• Structure:

Figure 4.25: UML class diagram for State pattern

• Consequences: The State pattern has the following consequences:

– It localizes state-specific behavior and partitions behavior for different states

– It makes state transitions explicit

– State objects can be shared

• Evaluation

Quality Characteristics   Values
Expendability             Good
Simplicity                Good
Generality                Fair
Modularity                Bad
Software Independence     Good
Hardware Independence     N/A
Learnability              Fair
Understandability         Very bad
Operability               Good
Scalability               Good
Robustness                Fair
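
A turnstile is a compact way to illustrate explicit state transitions; the names in this Java sketch are our own assumption.

    // State: the behavior that varies with the context's current state.
    interface TurnstileState {
        TurnstileState insertCoin();
        TurnstileState push();
    }

    class Locked implements TurnstileState {
        public TurnstileState insertCoin() { return new Unlocked(); }
        public TurnstileState push() { return this; } // stays locked
    }

    class Unlocked implements TurnstileState {
        public TurnstileState insertCoin() { return this; }
        public TurnstileState push() { return new Locked(); } // locks again
    }

    // Context: delegates behavior to its current state object.
    class Turnstile {
        private TurnstileState state = new Locked();
        void insertCoin() { state = state.insertCoin(); }
        void push() { state = state.push(); }
    }

Each transition is a return value rather than a branch of a conditional, so the Turnstile appears to change its class as its state object is replaced.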

Strategy

• Intent: Define a family of algorithms, encapsulate each one, and make them interchangeable. Strategy lets the algorithm vary independently from clients that use it.

• Applicability: Use the Strategy pattern when

– many related classes differ only in their behavior

– you need different variants of an algorithm

– an algorithm uses data that clients shouldn’t know about

– a class defines many behaviors, and these appear as multiple conditional statements in its operations

• Structure:

Figure 4.26: UML class diagram for Strategy pattern

• Consequences: The Strategy pattern has the following benefits and drawbacks:

– Families of related algorithms

– An alternative to subclassing

– Strategies eliminate conditional statements

– A choice of implementations

– Clients must be aware of different Strategies

– Communication overhead between Strategy and Context

– Increased number of objects

• Evaluation

Quality Characteristics   Values
Expendability             Good
Simplicity                Fair
Generality                Bad
Modularity                Fair
Software Independence     Fair
Hardware Independence     N/A
Learnability              Bad
Understandability         Bad
Operability               Fair
Scalability               Bad
Robustness                Fair
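
The interchangeable-algorithm idea can be sketched in Java as follows; the discount example and its names are hypothetical.

    // Strategy: one member of a family of interchangeable algorithms.
    interface DiscountStrategy {
        double apply(double price);
    }

    class NoDiscount implements DiscountStrategy {
        public double apply(double price) { return price; }
    }

    class PercentageDiscount implements DiscountStrategy {
        private final double rate;
        PercentageDiscount(double rate) { this.rate = rate; }
        public double apply(double price) { return price * (1 - rate); }
    }

    // Context: configured with a strategy, unaware of its concrete type.
    class Checkout {
        private final DiscountStrategy discount;
        Checkout(DiscountStrategy discount) { this.discount = discount; }
        double total(double price) { return discount.apply(price); }
    }

The conditional logic that would otherwise live inside Checkout is replaced by the choice of which strategy to construct, e.g. new Checkout(new PercentageDiscount(0.1)); the liability is that clients must know the available strategies to make that choice.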

Template Method

• Intent: Define the skeleton of an algorithm in an operation, deferring some steps to subclasses. Template Method lets subclasses redefine certain steps of an algorithm without changing the algorithm’s structure.

• Applicability: The Template Method pattern should be used

– to implement the invariant parts of an algorithm once and leave it up to subclasses to implement the behavior that can vary

– when common behavior among subclasses should be factored and localized in a common class to avoid code duplication

– to control subclass extensions

• Structure:

Figure 4.27: UML class diagram for Template Method pattern

• Consequences: Template methods call the following kinds of operations:

– concrete operations (either on the ConcreteClass or on client classes);

– concrete AbstractClass operations (i.e., operations that are generally useful to subclasses);

– primitive operations (i.e., abstract operations);

– factory methods (see Factory Method); and

– hook operations, which provide default behavior that subclasses can extend if necessary. A hook operation often does nothing by default

• Evaluation

Quality Characteristics   Values
Expendability             Excellent
Simplicity                Good
Generality                Fair
Modularity                N/A
Software Independence     N/A
Hardware Independence     N/A
Learnability              Good
Understandability         Good
Operability               Good
Scalability               Good
Robustness                Good
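
The three kinds of operations can be seen together in a short Java sketch; the Report example is an assumption of ours, not drawn from the report.

    // Abstract class: the template method fixes the algorithm's skeleton.
    abstract class Report {
        // Template method: declared final so subclasses cannot change
        // the structure, only the steps it calls.
        final String render() {
            return header() + body() + footer();
        }
        // Hook operations with defaults that subclasses may override.
        String header() { return "=== Report ===\n"; }
        String footer() { return "=== End ===\n"; }
        // Primitive operation: subclasses must supply this step.
        abstract String body();
    }

    class SalesReport extends Report {
        String body() { return "sales: 42\n"; }
    }

Calling new SalesReport().render() runs the invariant skeleton defined once in Report while the varying step comes from the subclass.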

Visitor

• Intent: Represent an operation to be performed on the elements of an object structure. Visitor lets you define a new operation without changing the classes of the elements on which it operates.

• Applicability: Use the Visitor pattern when

– an object structure contains many classes of objects with differing interfaces, and you want to perform operations on these objects that depend on their concrete classes

– many distinct and unrelated operations need to be performed on objects in an object structure, and you want to avoid “polluting” their classes with these operations

– the classes defining the object structure rarely change, but you often want to define new operations over the structure

• Structure:

• Consequences: Some of the benefits and liabilities of the Visitor pattern are as follows:

– Visitor makes adding new operations easy

– A visitor gathers related operations and separates unrelated ones

– Adding new ConcreteElement classes is hard

– Visiting across class hierarchies

Figure 4.28: UML class diagram for Visitor pattern

– Accumulating state

– Breaking encapsulation

• Evaluation

Quality Characteristics   Values
Expendability             Excellent
Simplicity                Good
Generality                Good
Modularity                Fair
Software Independence     Good
Hardware Independence     N/A
Learnability              Good
Understandability         Bad
Operability               Fair
Scalability               Good
Robustness                Fair
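
The double-dispatch mechanism is easiest to see in code; the shape example in this Java sketch is our own illustration.

    // Visitor: one visit method per concrete element class.
    interface ShapeVisitor {
        void visit(Circle circle);
        void visit(Square square);
    }

    // Element: accepts a visitor and dispatches on its own concrete type.
    interface Shape {
        void accept(ShapeVisitor visitor);
    }

    class Circle implements Shape {
        final double radius = 1.0;
        public void accept(ShapeVisitor visitor) { visitor.visit(this); }
    }

    class Square implements Shape {
        final double side = 2.0;
        public void accept(ShapeVisitor visitor) { visitor.visit(this); }
    }

    // A new operation added without touching the element classes.
    class AreaVisitor implements ShapeVisitor {
        double total;
        public void visit(Circle circle) { total += Math.PI * circle.radius * circle.radius; }
        public void visit(Square square) { total += square.side * square.side; }
    }

Adding another operation means writing one more visitor class, whereas adding a new ConcreteElement forces a change to every existing visitor, which is exactly the trade-off listed above.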

4.5 Summary

The following table summarizes the quality characteristics of the twenty-three design patterns.

(Column key: Expend. = Expendability, Simpl. = Simplicity, Gener. = Generality, Modul. = Modularity, SW Ind. = Software Independence, HW Ind. = Hardware Independence, Learn. = Learnability, Underst. = Understandability, Operab. = Operability, Scalab. = Scalability, Robust. = Robustness.)

Design Pattern           Expend.    Simpl.     Gener.     Modul.     SW Ind.   HW Ind.   Learn.    Underst.   Operab.   Scalab.   Robust.
Abstract Factory         Excellent  Excellent  Good       Good       Fair      Fair      Good      Good       Good      Good      Good
Builder                  Good       Good       Fair       Fair       N/A       N/A       Fair      Good       Fair      Good      Good
Factory Method           Bad        Bad        Fair       Good       N/A       N/A       Good      Good       Good      Good      Good
Prototype                Excellent  Good       Fair       Good       N/A       N/A       Fair      Good       Fair      Excellent Good
Singleton                Bad        Very bad   Fair       Excellent  Fair      Fair      Fair      Fair       Fair      Good      Good
Adapter                  Fair       Fair       Bad        Good       Good      N/A       Good      Fair       Fair      Good      Fair
Bridge                   Good       Fair       Good       Good       N/A       N/A       Fair      Fair       Good      Good      Good
Composite                Fair       Fair       N/A        Fair       N/A       N/A       Fair      Good       N/A       N/A       Good
Decorator                Excellent  Excellent  Good       Fair       Good      N/A       Good      Good       Good      Good      Fair
Facade                   Good       Good       Good       Good       N/A       N/A       Fair      Good       Fair      Fair      Fair
Flyweight                Bad        Bad        Fair       Good       N/A       N/A       Good      Bad        Fair      Good      Good
Proxy                    Good       Bad        Fair       Good       N/A       N/A       Fair      Bad        Good      Good      Fair
Chain of Responsibility  Good       Good       Good       Bad        N/A       N/A       Fair      Fair       Good      Bad       Fair
Command                  Good       Bad        N/A        N/A        N/A       N/A       Bad       Very bad   Good      Good      Good
Interpreter              Good       Fair       Good       Fair       N/A       N/A       Fair      Fair       Good      Good      Fair
Iterator                 Excellent  Excellent  Good       Fair       Good      N/A       Good      Fair       Fair      Good      Good
Mediator                 Good       Fair       Good       Good       N/A       N/A       Fair      Fair       Good      Good      Fair
Memento                  Good       Fair       Fair       Very bad   N/A       N/A       Bad       Fair       Good      Fair      Bad
Observer                 Excellent  Good       Excellent  N/A        N/A       N/A       Fair      Good       Good      Good      Good
State                    Good       Good       Fair       Bad        Good      N/A       Fair      Very bad   Good      Good      Fair
Strategy                 Good       Fair       Bad        Fair       Fair      N/A       Bad       Bad        Fair      Bad       Fair
Template Method          Excellent  Good       Fair       N/A        N/A       N/A       Good      Good       Good      Good      Good
Visitor                  Excellent  Good       Good       Fair       Good      N/A       Good      Bad        Fair      Good      Fair

Chapter 5

Conclusions

In this report, we proposed a thorough summary of quality in software products. To structure this summary, we presented different quality models, both hierarchical and non-hierarchical, such as McCall, Boehm, FURPS, ISO/IEC 9126, Dromey, the Star model, and Bayesian belief networks. We made these models more understandable by defining their quality characteristics and the interrelationships among them.

To associate a numeric value with each characteristic, we presented software metrics for most of the attributes defined in the different models: Adaptability, Completeness, Complexity, Conciseness, Correctness, Efficiency, Expendability, Generality, Hardware Independence, Indispensability, Learnability, Modularity, Maturity Index, Operability, Portability, Readability, Reliability, Robustness, Scalability, Simplicity, Software Independence, Structuredness, Traceability, Understandability, and Usability.

We then introduced a model to assess the ability of design patterns to bring reusability, flexibility, modularity, understandability, and elegance to software. Finally, we evaluated the twenty-three design patterns described by Gamma et al. [22]. To obtain the most reliable results, we focused on the manual evaluation of simple programs implementing each pattern individually.

Future research needs to answer the question: “What are the benefits of design patterns with respect to software quality?”

Bibliography

[1] Anthony A. Aaby. Software: a fine art, Jan 2004.

[2] Aivosto.com. Project metrics in Project Analyzer, 2003.

[3] Lowell Jay Arthur. Software Evolution: The Software Maintenance Challenge. John Wiley and Sons, 1988.

[4] Software Assurance Technology Center. SATC code metrics, Aug 2000.

[5] Osman Balci. Credibility assessment of simulation results. In Proceedings of the 18th Conference on Winter Simulation, pages 38–44, 1986.

[6] Kent Beck. Extreme Programming Explained: Embrace Change. Addison-Wesley Professional, October 1999.

[7] Nigel Bevan. Quality in use: Meeting user needs for quality. Journal of Systems and Software, 1999.

[8] James M. Bieman and Byung-Kyoo Kang. Measuring design-level cohesion. Number 2, pages 111–124, Feb 1998.

[9] B. W. Boehm, J. R. Brown, and M. Lipow. Quantitative evaluation of software quality. In Proceedings of the 2nd International Conference on Software Engineering, pages 592–605, 1976.

[10] Lionel C. Briand, John W. Daly, and Jurgen K. Wust. A unified framework for coupling measurement in object-oriented systems. IEEE Transactions on Software Engineering, 25(1):91–121, January/February 1999.

[11] L. Buglione and A. Abran. Geometrical and statistical foundations of a three-dimensional model of software performance. Advances in Engineering Software, 30:913–919, 1999.

[12] David N. Card and Robert L. Glass. Measuring Software Design Quality. Prentice-Hall, Inc., 1990.

[13] Center for Software Engineering. OO analysis and design: Modeling, integration, abstraction, Spring 2002.

[14] Shyam R. Chidamber and Chris F. Kemerer. A metrics suite for object-oriented design. M.I.T. Center for Information Systems Research (CISR), December 1993.

[15] Connie U. Smith and Lloyd G. Williams. Introduction to software performance engineering, Nov 2001.

[16] Space Applications Corporation. Software reuse metrics, Jan 1993.

[17] Lisa Crispin. Is quality negotiable? In Proceedings of XP Universe, Raleigh, NC, USA, July 2001. Object Mentor Inc.

[18] Data and Object Factory. Software design patterns. Data and Object Factory, 2002.

[19] Dennis de Champeaux. Object-Oriented Development Process and Metrics. Prentice Hall, 1997.

[20] F. Distante, M. G. Sami, and G. Storti Gajani. A general configurable architecture for WSI implementation for neural nets. Pages 116–123, Jan 1990.

[21] R. Geoff Dromey. A model for software product quality. IEEE Transactions on Software Engineering, 21(2):146–162, Feb 1995.

[22] Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, 1995.

[23] Norman E. Fenton. Software Metrics: A Rigorous Approach. International Thomson Computer Press, fourth edition, 1995.

[24] Donald Firesmith. A hard look at quality management software. OPEN Process Framework (OPF), April 2004.

[25] Donald G. Firesmith. Common concepts underlying safety, security, and survivability engineering. Carnegie Mellon Software Engineering Institute, Technical Note CMU/SEI-2003-TN-033, December 2003.

[26] J. E. Gaffney. Metrics in software quality assurance. Number 81, pages 126–130. ACM Press, March 1981.

[27] Peter Baer Galvin. Storage consolidation, part 3, August 2002.

[28] Alan Gillies. Software Quality: Theory and Management. International Thomson Publishing, 1992.

[29] Gunther Lehmann, Bernhard Wunder, and Klaus D. Muller-Glaser. Basic concepts for an HDL reverse engineering tool-set. 1996.

[30] Juha Gustafsson and Lilli Nenonen. User manual for the Maisa metric tool, version beta, Oct 200.

[31] L. Hyatt and L. Rosenberg. A software quality model and metrics for risk assessment, April 1996.

[32] IEEE. IEEE Std 982.1: Standard Dictionary of Measures to Produce Reliable Software. Institute of Electrical and Electronics Engineers, 1996.

[33] ISO. ISO/IEC 14598-1. International Standard, Information technology: Software product evaluation, 1999.

[34] Prasad Jogalekar and Murray Woodside. Evaluating the scalability of distributed systems. IEEE Transactions on Parallel and Distributed Systems, March 2000.

[35] Keith Johnson. Readability. 1998.

[36] Stephen H. Kan. Metrics and Models in Software Quality Engineering. Addison-Wesley Publishing Company, 2000.

[37] Stephen H. Kan. Metrics and Models in Software Quality Engineering. Addison-Wesley, 2003.

[38] Byung-Kyoo Kang and James M. Bieman. Design-level cohesion measures: Derivation, comparison and applications. Number 20, pages 92–97, Aug 1996.

[39] Barbara Kitchenham and Shari Lawrence Pfleeger. Software quality: The elusive target. IEEE Software, pages 12–21, 1996.

[40] Martin Neil, Norman Fenton, and Lars Nielsen. Building large-scale Bayesian networks, 1999.

[41] Martin Neil and Norman Fenton. Predicting software quality using Bayesian belief networks. NASA/Goddard Space Flight Centre, December 1996.

[42] Mary Beth Nilles. A hard look at quality management software. Quality Digest, 2001.

[43] Software measurement, 1991.

[44] CBR Online. Scalability from the edge, Jun 2002.

[45] Maryoly Ortega, María A. Pérez, and Teresita Rojas. A systemic quality model for evaluating software products. Laboratorio de Investigación en Sistemas de Información.

[46] Sassan Pejhan, Alexandros Eleftheriadis, and Dimitris Anastassiou. Distributed multicast address management in the global internet. IEEE Journal on Selected Areas in Communications, 13(8):1445–1456, 1995.

[47] Peter Petersen and Tom Schotland. Win32 and real time, April 1999.

[48] Shari Lawrence Pfleeger. Software Engineering: Theory and Practice. Prentice Hall, 2001.

[49] Roger S. Pressman. Software Engineering: A Practitioner's Approach. McGraw-Hill, Inc., 1992.

[50] Sandy Ressler. Perspectives on Electronic Publishing. Prentice-Hall, 1997.

[51] Sanford Ressler. Perspectives on Electronic Publishing (Standards, Solutions, and More). Prentice Hall, March 1993.

[52] Robert Gunning. The Technique of Clear Writing. McGraw-Hill, revised edition, June 1968.

[53] Ronan Fitzpatrick. Software quality: definitions and strategic issues. Staffordshire University, 1996.

[54] Linda H. Rosenberg and Lawrence E. Hyatt. A software quality model and metrics for identifying project risks and assessing software quality. Presented at the 8th Annual Software Technology Conference, Utah, April 1996.

[55] James Rumbaugh. Relations as semantic constructs in an object-oriented language, October 1987.

[56] W. J. Salamon and D. R. Wallace. Quality characteristics and metrics for reusable software (preliminary report). Technical report, National Institute of Standards and Technology, May 1994.

[57] Joc Sanders and Eugene Curran. Software Quality: A Framework for Success in Software Development and Support. Addison-Wesley Publishing Company, 1995.

[58] Scientific Toolworks, Inc. CDADL: C-based design and documentation language, Feb 1997.

[59] Luo Si and Jamie Callan. A statistical model for scientific readability. In CIKM, pages 574–576, 2001.

[60] Abraham Silberschatz and Peter B. Galvin. Operating System Concepts. Addison-Wesley Publishing Company, fourth edition, 1994.

[61] Monique Snoeck. Product quality, 2004.

[62] International Standard ISO/IEC 9126-1, Parts 1, 2, 3: Quality model, 2001.

[63] Wolfgang B. Strigel, Geoff Flamank, and Gareth Jones. What are software metrics? Software Productivity Center Inc., 1992.

[64] Ladan Tahvildari. Assessing the impact of using design pattern based systems. Master's thesis, University of Waterloo, 1999.

[65] Jan Tretmans and Peter Achten. Quality of information systems, 2003.

[66] Steven P. Vanderwiel, Daphna Nathanson, and David J. Lilja. A comparative analysis of parallel programming language complexity and performance. Concurrency: Practice and Experience, 10(10):807–820, 1998.

[67] Victor R. Basili, Lionel C. Briand, and Walcelio L. Melo. A validation of object-oriented design metrics as quality indicators. Technical Report 10, Univ. of Maryland, Dept. of Computer Science, April 1996.

[68] Edward Yourdon and Larry L. Constantine. Structured Design: Fundamentals of a Discipline of Computer Program and Systems Design. Prentice-Hall, Inc., 1979.
