Supporting the Modelling and
Generation of Reflective Middleware
Families and Applications using
Dynamic Variability
Nelly Bencomo, M.Sc.(Distinction), B.Sc.(Honors)
Computing Department
Lancaster University
A thesis submitted for the degree of
Doctor of Philosophy
March 2008
This thesis explores how synergies between system family engineering, model
driven engineering, and generative software development help to produce new
development paradigms to support design, programming, testing, deployment,
and execution of reflective middleware families and their applications. The thesis
proposes Genie, an approach that guides the development and operation of reflec-
tive middleware platforms and their applications. Genie offers management of
dynamic variability during development and allows the systematic generation of
middleware related artefacts from high level descriptions (models). To this end,
two kinds of dynamic variability are identified, namely structural variability and
environment and context variability. As a validation of the approach, a prototype
called the Genie tool has been developed. The Genie tool supports the spec-
ification, validation and generation of artefacts for component-based reflective
middleware using domain specific modelling languages (DSMLs). The approach
has also been used to support the development and operation of Gridkit, one of
the dynamically configurable middleware families that have been developed at
Lancaster University.
Declaration
I declare that this thesis was written by myself and presents my own
work developed in the context of my PhD in the Computing Depart-
ment at Lancaster University. The work reported in this thesis has
not been previously submitted for a degree, in this or any form.
The model-based approach called Genie, its variability models, and
its implementation were developed by myself to support the ideas
and design challenges reported in the thesis. Nevertheless, this work
does build on the ideas proposed in the component model OpenCOM,
the Gridkit project and its different middleware families, and has ben-
efited considerably from the Next Generation Middleware Group at
Lancaster University. I hope this thesis has contributed to the progress
and expansion of the vision of the middleware group and the depart-
ment.
This research was partially supported by a Faculty Postgraduate Stu-
dentship (Lancaster University).
To my parents, Rosa and Manuel, for their love.
To Pete, some people want to be sweet; he just cannot help it!
Acknowledgements
“What Des-Cartes did was a good step. You have added much
several ways, & especially in taking ye colours of thin plates into
philosophical consideration. If I have seen further it is by standing
on ye shoulders of Giants.”
- from letter to Robert Hooke, 5 Feb. 1676 by Isaac Newton
This thesis is the result of more than four years of work and enjoyment.
In retrospect, many results look so obvious yet they were not easy to
achieve. I am not completely happy with a lot of it - as I am assured
is the case for most PhDs. The time has come to finish the work but
not the joy!
I would like to acknowledge the people who have helped during these
years:
I want to express my sincere gratitude and appreciation to my su-
pervisor, Gordon Blair. In a few words, I could not have got a better
supervisor. His guidance, encouragement, knowledge, and sense of
humour were my allies.
I also especially want to thank Paul Grace. He was great company
during difficult moments in this journey. Paul is a young researcher
with a promising future.
I would like to thank everyone within the Next Generation Middleware
Research Group at Lancaster University for providing an exception-
ally friendly and productive environment. Special thanks must go
to Geoff Coulson, Carlos Flores, Thirunavukkarasu Sivaharan (aka
Siva), and Bholanathsingh Surajbali. Additional thanks to Danny
Hughes, Wei Cai, Paul Okanda, Rajiv Ramdhany, Nirmal Weeras-
inghe, Ackbar Joolia, Jo Ueyama, Nikos Parlavantzas, and Francois
Taiani, for the good discussions.
Special thanks to Robert France, an inspiring researcher and person.
Few people can manage to be so humble with such a good CV! My
thanks too to Betty Cheng, an inspiring academic and woman so full
of energy. Many thanks also to Heather Goldsby for being a great
coauthor.
I want to thank the examiners Geri Georg and Jon Whittle for the
challenging discussions. Their comments and suggestions have im-
proved the thesis.
I am grateful, too, to Juha-Pekka Tolvanen and especially, Steven
Kelly from MetaCase, for their great support with MetaEdit+.
I also would like to thank Cath Ewan, for her great friendship, and
Peter Hurley for providing good music.
Many thanks to Pat Metheny, for keeping me sane with the best of
music and talent.
In particular, I want to express my deepest gratitude to my husband,
Pete, for his patience and encouragement, and for always giving me
the support and love I needed. I have been so lucky having you close
to me!!
Nelly
Caminante, son tus huellas
el camino, y nada mas;
caminante, no hay camino,
se hace camino al andar.
Al andar se hace camino,
y al volver la vista atras
se ve la senda que nunca
se ha de volver a pisar.
Caminante, no hay camino,
sino estelas en la mar.
Antonio Machado, Spanish poet (1875-1939)
Wanderer, your footsteps are
the road, and nothing more;
wanderer, there is no road,
the road is made by walking.
By walking one makes the road,
and upon glancing behind
one sees the path
that never will be trod again.
Wanderer, there is no road–
Only wakes upon the sea.
from Selected Poems of Antonio Machado by Betty Jean Craige,
University of Georgia, 1978
Contents
1 Introduction 1
1.1 Motivation and Problem Statement . . . . . . . . . . . . . . . . . 1
1.1.1 Key issues identified . . . . . . . . . . . . . . . . . . . . . 3
1.2 Overall Aim and Objectives . . . . . . . . . . . . . . . . . . . . . 4
1.3 Research Strategy . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.4 Research Philosophy and Research Method . . . . . . . . . . . . . 6
1.5 Main Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.6 Roadmap for the Thesis . . . . . . . . . . . . . . . . . . . . . . . 10
2 Reflective and Adaptive Middleware 13
2.1 Overview of the Chapter . . . . . . . . . . . . . . . . . . . . . . . 13
2.2 Middleware Platforms . . . . . . . . . . . . . . . . . . . . . . . . 14
2.2.1 New challenges . . . . . . . . . . . . . . . . . . . . . . . . 14
2.2.2 New Middleware Platforms . . . . . . . . . . . . . . . . . . 15
2.3 Approaches to Developing Flexible Middleware . . . . . . . . . . . 16
2.3.1 Overview of Key Approaches . . . . . . . . . . . . . . . . . 16
2.3.2 Reflective Middleware . . . . . . . . . . . . . . . . . . . . 16
2.3.3 Policy-based Mechanisms . . . . . . . . . . . . . . . . . . . 17
2.3.4 Reflection and Policies: complementary approaches . . . . 18
2.4 Reflective Middleware Families at Lancaster . . . . . . . . . . . . 19
2.5 The Gridkit Middleware . . . . . . . . . . . . . . . . . . . . . . . 25
2.6 The Flood Forecasting Application . . . . . . . . . . . . . . . . . 30
2.6.1 Description . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.6.2 Planning Reconfigurations and Adaptations . . . . . . . . 31
2.6.3 Transition Diagrams for Reconfiguration . . . . . . . . . . 33
2.7 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
3 Three Approaches to Software Reuse 36
3.1 Overview of the Chapter . . . . . . . . . . . . . . . . . . . . . . . 36
3.2 Background on Software Reuse . . . . . . . . . . . . . . . . . . . 36
3.3 Abstraction, Cognitive Distance, and Software Reuse . . . . . . . 38
3.4 System family engineering . . . . . . . . . . . . . . . . . . . . . . 39
3.4.1 System Families and Product Lines . . . . . . . . . . . . . 40
3.4.2 On the Concept of Variability in Software System Families 41
3.4.3 Achieving and Modelling Variability with Features . . . . . 44
3.4.4 Achieving and Modelling Variability with Architecture . . 46
3.4.5 Features vs. Architecture . . . . . . . . . . . . . . . . . . . 48
3.5 Orthogonal Variability Models . . . . . . . . . . . . . . . . . . . . 49
3.6 Generative Software Development . . . . . . . . . . . . . . . . . . 54
3.6.1 Domain Engineering . . . . . . . . . . . . . . . . . . . . . 56
3.6.2 Application Engineering . . . . . . . . . . . . . . . . . . . 57
3.6.3 Domain-Specific Languages . . . . . . . . . . . . . . . . . 57
3.6.4 Spectrum of DSLs and Variability . . . . . . . . . . . . . . 60
3.6.5 Product Configuration . . . . . . . . . . . . . . . . . . . . 61
3.7 Model Driven Engineering . . . . . . . . . . . . . . . . . . . . . . 62
3.7.1 On the Importance of Modelling in Software Engineering . 63
3.7.2 Learning from the Past and Heading to the Future . . . . 65
3.7.3 Model Driven Architecture . . . . . . . . . . . . . . . . . . 66
3.7.4 Domain Specific Modelling . . . . . . . . . . . . . . . . . . 69
3.7.5 Similarities and Synergies between MDA and DSM . . . . 71
3.8 MDA and Reflective Middleware: tackling heterogeneity . . . . . 72
3.9 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
4 Genie: Modelling and Generating Middleware Families 76
4.1 Overview of the Chapter . . . . . . . . . . . . . . . . . . . . . . . 76
4.2 The Central Role of Dynamic Variability . . . . . . . . . . . . . . 78
4.2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . 78
4.2.2 Dimensions of Dynamic Variability . . . . . . . . . . . . . 80
4.2.3 Achieving Dynamic Variability . . . . . . . . . . . . . . . . 82
4.3 A Model-driven Approach for Modelling and Generating Middle-
ware Families . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
4.3.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
4.3.2 Description of the Approach . . . . . . . . . . . . . . . . . 85
4.3.3 Different Levels of Abstraction . . . . . . . . . . . . . . . . 86
4.3.4 Orthogonal Variability Models . . . . . . . . . . . . . . . . 89
4.3.5 Domain Engineering and Application Engineering . . . . . 89
4.4 Model-driven Middleware Families . . . . . . . . . . . . . . . . . 93
4.4.1 Modelling Structural Variability using OpenCOM . . . . . 94
4.4.2 Modelling Environment and Context Variability . . . . . . 95
4.5 Middleware Families Framework Support . . . . . . . . . . . . . . 99
4.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
5 Evaluation 103
5.1 Overview of the Chapter . . . . . . . . . . . . . . . . . . . . . . . 103
5.2 The Genie Tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
5.2.1 Structural Variability Models in Genie . . . . . . . . . . . 105
5.2.2 Environment and Context Variability Models in Genie . . 107
5.2.3 Validation of Models . . . . . . . . . . . . . . . . . . . . . 109
5.3 Genie and Gridkit . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
5.4 Case Study 1: Dynamic Service Discovery . . . . . . . . . . . . . 113
5.4.1 Domain Analysis of the Service Discovery Framework . . . 113
5.4.2 Commonalities and Variabilities . . . . . . . . . . . . . . . 114
5.4.3 Modelling Variability . . . . . . . . . . . . . . . . . . . . . 116
5.5 Case Study 2: The Flood Forecasting Application . . . . . . . . . 124
5.5.1 Domain Analysis of the Open Overlays Framework . . . . 124
5.5.2 Commonalities and Variabilities . . . . . . . . . . . . . . . 125
5.5.3 An application of the open overlays framework: revisiting
the Flood Forecasting Application . . . . . . . . . . . . . . 128
5.5.4 Modelling Variability . . . . . . . . . . . . . . . . . . . . . 131
5.6 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
5.6.1 Providing higher levels of abstraction . . . . . . . . . . . . 140
5.6.2 Providing better software automation levels . . . . . . . . 142
5.6.3 Providing structured management of variability . . . . . . 143
5.7 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
5.8 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
6 Conclusions and Future Research Agenda 149
6.1 Overview of the Chapter . . . . . . . . . . . . . . . . . . . . . . . 149
6.2 Claimed Results and Novel Contributions . . . . . . . . . . . . . . 150
6.2.1 A Model-based approach to specify and generate middle-
ware based software artefacts . . . . . . . . . . . . . . . . 150
6.3 Research Method . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
6.4 Critical Remarks and Self-Reflection . . . . . . . . . . . . . . . . 153
6.5 Future Research Agenda . . . . . . . . . . . . . . . . . . . . . . . 154
6.5.1 Traceability from Requirements to Resultant Behaviour . . 155
6.5.2 Adaptive Systems and Software Product Lines . . . . . . . 155
6.5.3 [email protected] . . . . . . . . . . . . . . . . . . . . . . 156
6.5.4 Other Topics for Future Research . . . . . . . . . . . . . . 157
6.5.4.1 Number of Reconfiguration Paths . . . . . . . . . 157
6.5.4.2 Models Validation . . . . . . . . . . . . . . . . . 157
6.5.4.3 Improvement of the Genie Tool: Architectural
Patterns . . . . . . . . . . . . . . . . . . . . . . . 158
6.5.4.4 Unanticipated adaptations . . . . . . . . . . . . . 158
6.6 Final Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
A Publications 160
B Implementation Details 163
B.1 Grammar of reconfiguration policies . . . . . . . . . . . . . . . . . 163
B.2 Generator XML Configurations . . . . . . . . . . . . . . . . . . . 164
B.3 Generator of policy reconfigurations . . . . . . . . . . . . . . . . . 168
B.4 List of constraints used to check validity of the models . . . . . . 171
B.5 The Genie tool and Gridkit . . . . . . . . . . . . . . . . . . . . . 172
References 201
List of Figures
1.1 Structure and Chapters of the Thesis . . . . . . . . . . . . . . . . 10
2.1 The OpenCOM main concepts . . . . . . . . . . . . . . . . . . . . 20
2.2 Reflective meta-models . . . . . . . . . . . . . . . . . . . . . . . . 24
2.3 The Gridkit Middleware Architecture . . . . . . . . . . . . . . . . 27
2.4 An example configuration of the open overlays framework, from
(Grace et al. (2008b)) . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.5 Another example configuration of the open overlays framework . . 29
2.6 An example of a reconfiguration policy in Gridkit . . . . . . . . . 30
2.7 Reconfiguration diagram, based on (Grace et al. (2006b) and Grace
et al. (2008b)) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.1 Example showing features of a car, based on (Kang et al. (1990)
and Asikainen et al. (2004a)) . . . . . . . . . . . . . . . . . . . . . 47
3.2 Variation point, variant, and the variability dependency meta-
model, from (Pohl et al. (2005)) . . . . . . . . . . . . . . . . . . . 50
3.3 Relating variants and development artefacts, from (Pohl et al. (2005)) 51
3.4 Relating variants and development artefacts, from (Pohl et al. (2005)) 51
3.5 Graphical notation for orthogonal variability models . . . . . . . . 52
3.6 Graphical notation for orthogonal variability models, from (Pe-
tersen et al. (2006)) . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3.7 Relating variants and component diagrams, from (Pohl et al. (2005)) 53
3.8 Key concepts in generative software development: problem space,
solution space and the mapping between them (from Czarnecki &
Eisenecker (2000) ) . . . . . . . . . . . . . . . . . . . . . . . . . . 55
3.9 Domain Engineering and Application Engineering (Two Life Cy-
cles)(from (Czarnecki & Eisenecker (2000)) and (SEI (1997))) . . 55
3.10 Spectrum of DSLs based on (Czarnecki (2004) and Stahl & Volter
(2006)) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
3.11 OMG’s Model Driven Architecture (MDA): main concepts . . . . 67
3.12 The role of middleware technologies when using OMG’s Model
Driven Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . 69
3.13 UML diagrams modelled in MetaEdit+ . . . . . . . . . . . . . . . 72
4.1 Classification of dynamic adaptation, based on (Trapp (2005)) . . 81
4.2 Dynamic Variability Dimensions . . . . . . . . . . . . . . . . . . . 85
4.3 The levels of abstraction and the two dimensions of dynamic vari-
ability in the approach . . . . . . . . . . . . . . . . . . . . . . . . 87
4.4 Orthogonal variability diagrams for structural variability and en-
vironment and context variability . . . . . . . . . . . . . . . . . . 90
4.5 Software Development Process: Domain Engineering and Applica-
tion Engineering . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
4.6 Different domains of middleware behaviour and their component
frameworks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
4.7 The OpenCOM meta-model (in UML) . . . . . . . . . . . . . . . 95
4.8 Different variants compliant with a hypothetical component frame-
work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
4.9 The meta-model (in UML) to model the Context and Environment
Variability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
4.10 Realization of UML associations using Variation Points . . . . . . 98
4.11 A middleware framework, its domains and the standard interfaces 100
5.1 The Publish/Subscribe framework used in Gridkit and modeled
in Genie . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
5.2 Generation of different artefacts using the Genie tool . . . . . . . 107
5.3 A model of a transition diagram and a generated reconfiguration
policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
5.4 A model of a transition diagram with the variation points . . . . . 111
5.5 Component frameworks and their specific policies . . . . . . . . . 112
5.6 Genie and Gridkit . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
5.7 The Service Discovery Family Architecture (from (Flores-Cortes
et al. (2007))) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
5.8 Configuration for the variant role UA . . . . . . . . . . . . . . . . 117
5.9 Configuration for the variant role SA . . . . . . . . . . . . . . . . 117
5.10 Configuration for the variant role DA . . . . . . . . . . . . . . . . 118
5.11 Component framework models, variability model, and inter-dependencies
between artefacts . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
5.12 Transition diagram model for the Service Discovery Application (a
screenshot of the Genie tool) . . . . . . . . . . . . . . . . . . . . . 120
5.13 Variability Model and Adaptation Policies . . . . . . . . . . . . . 122
5.14 Dynamic Variability in the Service Discovery Protocols . . . . . . 123
5.15 An example configuration of the open overlays framework, from
(Grace et al. (2008b)) . . . . . . . . . . . . . . . . . . . . . . . . . 126
5.16 Reconfiguration diagram, based on (Grace et al. (2006b) and Grace
et al. (2008b)) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
5.17 Configuration of the open overlays framework for the case study 2 131
5.18 Overlays Pattern - Plug-in Architecture . . . . . . . . . . . . . . 133
5.19 Fewest Hops Variant for the Spanning Tree component framework 134
5.20 WiFi Variant for the Network component framework . . . . . . . 134
5.21 The transition diagram of the flood forecasting application and
some generated reconfiguration policies . . . . . . . . . . . . . . . 136
5.22 Transition diagram model in Genie . . . . . . . . . . . . . . . . . 137
5.23 Variability and Transition Diagrams . . . . . . . . . . . . . . . . . 138
5.24 Dynamic Variability in the Flood Forecasting Application . . . . . 139
B.1 Generator pseudo code for XML configurations . . . . . . . . . . 164
B.2 Model of the CF in the example . . . . . . . . . . . . . . . . . . . 165
B.3 XML file for the example . . . . . . . . . . . . . . . . . . . . . . . 166
B.4 Generator pseudo code for reconfiguration policies information . . 169
List of Tables
2.1 Metrics considered in the trade-off when planning reconfigurations 32
3.1 Summary of the variability realization techniques and characteris-
tics (as proposed by Svahnberg et al.) . . . . . . . . . . . . . . . . 43
Chapter 1. Introduction
“Begin at the beginning, and go on till you come to the end: then stop.”
- from Alice’s Adventures in Wonderland by Lewis Carroll (1832 - 1898)
1.1 Motivation and Problem Statement
Middleware is a term that refers to a set of services that reside between the appli-
cation and the operating system. Its primary goal is to facilitate the development
of distributed applications (Coulson (2000)). To pursue this goal many middle-
ware technologies have been developed, e.g., CORBA (OMG (1995)), Enterprise
Java Beans (Monson-Haefel (2000)), DCOM (Microsoft (1996)), Java RMI (Sun
(1999)) and .NET (Microsoft (2000)). All share the basic principle of providing
abstraction over the complexity and heterogeneity of the underlying distributed
environment. However, over time, what started as a simple range of technologies
has evolved into a large assortment of choices. Nowadays, middleware platforms
must embrace both heterogeneous networks and heterogeneous devices: from
embedded devices in wireless ad-hoc networks to high-power computers on the
Internet. This has given rise to challenging new requirements.
For example, adaptability is emerging as a crucial requirement for many appli-
cations, particularly those deployed in dynamically changing environments such
as environmental monitoring and disaster management (McKinley et al. (2004),
Floch et al. (2006)). One approach to handling this complexity at the architec-
tural level is to augment middleware systems with intrinsic reflective and adaptive
capabilities (Blair et al. (1998), Wang et al. (2001), Kon et al. (2002), Bruneton
et al. (2006)). This approach has been named reflective middleware; i.e.
configurable and reconfigurable middleware platforms based on the concepts of
reflection and (typically) components. Components and the associated concept
of component frameworks provide extendable structure and functionality, and re-
flection offers the essential support for dynamic configuration and extensibility
for runtime evolution and adaptation.
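To make this concrete, the sketch below (in Python, purely illustrative; it does not reproduce the OpenCOM API or any actual reflective middleware interface) shows the essence of the combination: a component framework that exposes a reflective meta-interface offering introspection, to observe the current configuration, and adaptation, to replace a component while the system runs.

```python
# Illustrative sketch of a reflective component framework. All class and
# method names are hypothetical; they are not the OpenCOM/Gridkit API.

class Component:
    def __init__(self, name):
        self.name = name

class ComponentFramework:
    def __init__(self):
        self._components = {}  # role -> Component currently plugged in

    def plug(self, role, component):
        self._components[role] = component

    # Introspection: observe the running configuration via the meta-level.
    def inspect(self):
        return {role: c.name for role, c in self._components.items()}

    # Adaptation: replace a component at runtime, returning the old one.
    def reconfigure(self, role, new_component):
        old = self._components.get(role)
        self._components[role] = new_component
        return old

cf = ComponentFramework()
cf.plug("network", Component("WiFi"))
print(cf.inspect())                        # {'network': 'WiFi'}
cf.reconfigure("network", Component("GPRS"))
print(cf.inspect())                        # {'network': 'GPRS'}
```

The point of the sketch is the separation of concerns: the base-level components provide the functionality, while the meta-level (`inspect`/`reconfigure`) supports the dynamic reconfiguration discussed above.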
For all the above reasons, the development of middleware systems is becom-
ing increasingly complex. Flexibility and adaptability have contributed to mak-
ing the development and operation of middleware platforms even more difficult.
Currently, middleware developers deal with a large number of variability deci-
sions when developing platforms and planning configurations, reconfigurations
and adaptations (Bencomo et al. (2006)). Manually guaranteeing that all the
variability decisions are consistent is error-prone. Ad-hoc approaches do not of-
fer formal foundations for verifying that the middleware will offer the required
functionality (Bencomo et al. (2005a)). Dynamically reconfigurable middleware
platforms require new software development and operational paradigms to sup-
port systematic and automated checking of both functional and non-functional
properties.
The thesis explores how system family engineering, model driven engineering
(MDE), and generative software development can be combined to produce new
development paradigms to support the life cycle (including design, programming,
testing, deployment and execution) of reflective middleware families.
On the one hand, system family engineering allows the reuse of assets from pre-
vious systems (Jacobson et al. (1997), Kim & Stohr (1998), Sommerville (2004))
and exploits the acquired knowledge when building subsequent systems in the
same domain (Schmidt (1999)). Software family-based approaches also give sup-
port to the systematic management of software variability (Bosch (2000), Bach-
mann & Bass (2001), Svahnberg et al. (2005), Pohl et al. (2005)). Variability
management involves separating the system family into three parts - the common
components, the parts that are common to some members of the family but not
all of them, and the individual members with their particular requirements - and
the management of these parts during the lifecycle (Hallsteinsen et al. (2008)).
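The three-part split described above can be computed mechanically once the features of each family member are enumerated. The following sketch partitions a small hypothetical middleware family (the member and feature names are invented for illustration, not taken from Gridkit) into commonalities, partial commonalities, and member-specific features:

```python
# Hypothetical feature sets for three members of a middleware family.
members = {
    "sensor_node": {"discovery", "publish_subscribe", "low_power_radio"},
    "gateway":     {"discovery", "publish_subscribe", "routing", "wifi"},
    "server":      {"discovery", "routing", "persistence"},
}

# Part 1: commonalities -- features shared by every member of the family.
common = set.intersection(*members.values())

# Part 2: partial commonalities -- shared by some members, but not all.
all_features = set.union(*members.values())
partial = {f for f in all_features - common
           if sum(f in fs for fs in members.values()) > 1}

# Part 3: features specific to an individual member.
specific = {m: fs - common - partial for m, fs in members.items()}

print(common)   # {'discovery'}
```

In a real system family the features would of course be tied to development artefacts rather than bare names, but the partition itself is exactly the one variability management maintains across the lifecycle.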
On the other hand, model-driven techniques can be used to complement the
systematic reuse of development knowledge and experience and allow the lev-
els of abstraction at which developers operate to be raised (Stachowiak (1973),
Kent (2002), Ludewig (2003), Czarnecki & Eisenecker (2000), France & Rumpe
(2007)). Finally, generative software development approaches offer the means
to analyze models and automatically synthesise software artefacts from reusable
assets (Czarnecki & Eisenecker (2000), Czarnecki (2004), Schmidt (2006)). The
generation of artefacts from models enforces consistency between implementations
and functional and non-functional requirements (Baleani et al. (2005), Schmidt
(2006)). Generative software development promotes software reuse and allows
the software development process to be more efficient, reducing time-consuming
and error-prone manual work and therefore improving the software quality. A
review of the main concepts of these three areas is given in Chapter 3.
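As a flavour of what the generation of artefacts from models means in practice, the sketch below turns a small high-level model into an XML component configuration. The model fields and XML element names are invented for illustration and do not follow the output of the actual Genie generators:

```python
# Minimal model-to-text generation sketch: a high-level model (here a
# plain dictionary) is transformed into a low-level XML artefact. The
# schema is hypothetical, for illustration only.

model = {
    "framework": "ServiceDiscovery",
    "components": [
        {"name": "SLPLookup", "role": "lookup"},
        {"name": "UDPMulticast", "role": "network"},
    ],
}

def generate_configuration(model):
    # Emit one XML element per modelled component; because every line is
    # derived from the model, the artefact stays consistent with it.
    lines = [f'<configuration framework="{model["framework"]}">']
    for c in model["components"]:
        lines.append(f'  <component name="{c["name"]}" role="{c["role"]}"/>')
    lines.append("</configuration>")
    return "\n".join(lines)

print(generate_configuration(model))
```

Even in this toy form, the benefit claimed above is visible: the XML is never written by hand, so a change to the model cannot silently diverge from the generated artefact.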
1.1.1 Key issues identified
Based on the problem statement, the following key issues were identified:
• The level of abstraction at which developers operate is low. The highly
technical knowledge needed by developers requires them to work at very low
levels of abstraction (Bencomo et al. (2008b)). Middleware developers work
with concepts like components, connections, and component frameworks
directly using low-level programming and scripting language environments.
Reasoning and analysis of configurations and reconfigurations are poorly
supported by languages like Java, C++, or verbose script languages like
XML. Design, architecture or domain semantics cannot be expressed using
such languages. Working with low-level abstractions results in (i) a big gap
between the way domain experts, architects and programmers operate, and
(ii) a high level of complexity during the development process. Using higher
levels of abstraction reduces both effort and complexity (Ross et al. (1975),
Booch (1982), Selic (2003), Kramer (2007)).
• Poor software automation levels. Currently, most source code and scripts
for component configurations and their reconfigurations are written manu-
ally. This results in a time-consuming and error-prone approach. Tracing
decisions in an ad-hoc, manual fashion does not guarantee their validity to
achieve the required functionality (Czarnecki & Eisenecker (2000), Schmidt
(2006)).
• Lack of a structured management of variability. Middleware developers
make many complex variability decisions when planning configurations, re-
configurations, and adaptations. These include decisions such as what com-
ponents are required and how these components must be configured together
according to variations in the conditions of the environment and context.
The management of dynamic variability is therefore an important concern.
Variability management means that a systematic approach to structure,
implement and document the variability in a software family should be
realized in a repeatable manner (Jaring (2005)), and dynamic variability
implies that variation points are to be bound at runtime. Managing vari-
ability requires a consistent approach that explodes reuse independently
from specific contexts and avoiding ad-hoc solutions (Berg et al. (2005)).
Visualization of software variability is also valuable (Jaring et al. (2004),
Berg et al. (2005), Berg & Muthig (2005)).
The three key issues described above contribute to poor levels of software
reuse (Krueger (1992)).
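As an illustration of what binding a variation point at runtime can look like, the sketch below selects a variant from the observed environment and context. The variation point, variants, and context attributes are all hypothetical, loosely inspired by the network variants discussed later in the thesis:

```python
# Sketch of dynamic variability: a variation point whose variant is
# bound late, at runtime, according to environment/context conditions.
# Names and conditions are illustrative, not from the Genie models.

class VariationPoint:
    def __init__(self, name, variants):
        self.name = name
        self.variants = variants  # list of (condition, variant name)
        self.bound = None

    def bind(self, context):
        # Runtime binding: pick the first variant whose condition holds.
        for condition, variant in self.variants:
            if condition(context):
                self.bound = variant
                return variant
        raise ValueError(f"no variant of {self.name} matches {context}")

network_vp = VariationPoint("network", [
    (lambda ctx: ctx["battery"] > 50 and ctx["wifi_in_range"], "WiFi"),
    (lambda ctx: True, "GPRS"),   # fallback variant
])

print(network_vp.bind({"battery": 80, "wifi_in_range": True}))   # WiFi
print(network_vp.bind({"battery": 20, "wifi_in_range": True}))   # GPRS
```

A structured approach to dynamic variability amounts to making such variation points, their variants, and their binding conditions explicit and analysable, rather than burying them in ad-hoc conditional code.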
1.2 Overall Aim and Objectives
The overall aim of this thesis is to improve the development and operation of
middleware families, systematically generating middleware-associated artefacts
from high-level descriptions. That is to say:
To propose a systematic approach that promotes software reuse and uses mod-
els as first-class entities to raise the level of abstraction beyond coding by spec-
ifying solutions using domain concepts. Moreover, the approach should offer a
structured management of variability and variation points that may need to be
resolved at runtime.
Pursuing the main goal involves addressing the following strategic objectives:
• To use model-driven techniques to raise the levels of abstraction at which
middleware developers work.
• To automate the development process where possible.
• To provide for systematic management of dynamic variability.
1.3 Research Strategy
To reach the goal, the research undertakes the following approach:
i. To carry out an investigation of the approach and philosophy of the adap-
tive and reflective middleware families and to understand variability in this
domain. Special emphasis will be placed on the middleware platforms de-
veloped at Lancaster University as a representative class of these platforms.
ii. To undertake a survey of the state of the art in system family engineering
and generative software development.
iii. To examine and evaluate the state of the art in model driven engineering
(MDE). Different model-driven related approaches and techniques will be
studied (e.g. model driven architecture (MDA), domain specific modelling
(DSM) and domain specific modelling Languages (DSMLs)) to synthesize
lessons about the modelling of concepts and variability, and generation of
reflective middleware platforms.
iv. To investigate the complementary nature of the areas described in (ii)
and (iii) when developing reflective middleware platforms. Additionally, to
examine how these solutions can be used together to address the problem
tackled by this thesis.
v. To propose a structured technique to model the dynamic variability offered
by reflective middleware platforms. This technique should be complemented
with generative capabilities.
The research will investigate the design and implementation of a prototype
tool to demonstrate how the above concepts fit together to reach the goal and
specific objectives. The work aims to demonstrate the viability and the benefits of
the approach using the prototype tool to support the development and operation
of different dynamically configurable middleware families that are being developed
at Lancaster University.
1.4 Research Philosophy and Research Method
The Concise Oxford Dictionary defines research as:
“the systematic investigation into and study of materials, sources, etc, in or-
der to establish facts and reach new conclusions.”
As with other definitions of research, this definition is helpful because it makes
a direct link with the systematic nature of research (Johnson (2006b)). In other
words, the need for a research approach is stated. The decision about what
scientific approach to use depends on the subject under investigation and the
research discipline involved. There is a bewildering diversity of research methods
being used in Computing Science (Johnson (2006a), Easterbrook et al. (2007)).
However, most research can be categorized as qualitative or quantitative (UCD
(2007)). The qualitative paradigm focuses on studying subjective data and can
be seen as interpretative. The quantitative paradigm focuses on what can be
measured. It involves collecting and analyzing objective (often numerical) data
that can be structured into statistics. Quantitative research is often known as
positivist or deductive.
In the specific field of Software Engineering (SE) there is no well-established
research approach (Shaw (2001), Aagedal (2001)), due both to the diversity of
research subjects (Glass et al. (2002)) and to the short history of SE (Xia
(1997)). In SE, the traditional deductive approach is suitable for some research
topics, for example where performance is a research issue (Aagedal (2001)). How-
ever, there exist research topics where the empirical value of deductive approaches
is not evident. SE practitioners follow, perhaps unconsciously, the interpretive
paradigm: “a theory is favored and accepted as useful if it is interpreted as such
according to a combination of intuition and experience” (Jaring (2005)).
Karl Popper stated that “[a] theory which is not refutable by any conceivable
event is non-scientific” (Popper (1974)). This can be reworded to say that a
scientific theory must make predictions (i.e. hypotheses) that can be falsified
by observations (Jaring (2005)). For Popper, “all observation-statements are
theory-laden, and are as much a function of purely subjective factors (interests,
expectations, wishes, etc.) as they are a function of what is objectively real”
(SEP (2006)). Moreover, Popper argued that the scientific quality of research
is based on the nature of the questions asked and not on its empirical method
(Dawn G. et al. (2001)). From this, it can be concluded that SE researchers and
practitioners, directly or not, favour Popperian proposals (Jaring (2005)). There
exists research that suggests that SE can meet Popper’s criteria for scientific
research (Dawn G. et al. (2001)). This implies that the scientific status of a
theory increases as it is successfully developed into practical applications (i.e. as
a methodology, technique or tool).
Research in SE methodologies is seen as a synthetic discipline. Synthetic
disciplines are more oriented toward “making” and inventing, contrasting with
analytic disciplines which are concerned with “finding” or discovering (Owen
(1997), Owen (1998)). Research in SE entails the creation of abstract mecha-
nisms to support software developers, not just to make the development process
more efficient but to understand and model complex systems (Sommerville (2004),
Pressman (2005)). The overall hypothesis is that the abstract mechanisms pro-
posed are useful in some contexts, i.e. they offer more expressiveness, represent
new concepts, or represent new relationships between formerly unrelated concepts
(Aagedal (2001)). To investigate the correctness of these mechanisms and how
precisely their goals have been met (i.e. to validate the research), case studies are
frequently used. Case studies are well suited to describing, understanding, and explaining
the research subject (Yin (2003), Tellis (1997), Easterbrook et al. (2007)). As
in Popperian falsification, case studies can support or refute the hypothesis of a
theory.
The hypothesis of this research is that support using model-driven and genera-
tive techniques will raise the levels of abstraction at which middleware developers
work, will improve the levels of automation of the development process, and will
provide for systematic variability management during the development and op-
eration of reflective middleware platforms and their applications. The basis of
such support is twofold: (i) the introduction of new abstractions and high-level
constructs to model different dimensions of dynamic variability, and (ii) direct
mappings from the variability modelling constructs to middleware-related imple-
mentation artefacts.
To test the hypothesis, this thesis considers two case studies. These case studies
have been helpful not only for acquiring a comprehensive and detailed knowledge
of the subject matter studied, but also for understanding the effect of
the proposed solutions on the software development process. Furthermore, the
author has been able to identify additional research questions from the application
of the proposed solution to the case studies.
1.5 Main Contributions
The main contribution of this thesis is the model-based approach, Genie, that sup-
ports and guides the developer during the modelling and generation of
middleware-based software artefacts, and during the operation of middleware
platforms and applications. Specifically, the Genie approach offers structured
management of dynamic variability during the development of middleware plat-
forms and their supported applications. Genie also allows the systematic gener-
ation of middleware related artefacts from high level descriptions (models). To
this end, a structured technique to model the dynamic variability offered by re-
flective middleware platforms is offered. Two kinds of dynamic variability are
identified, namely structural variability and environment and context variability.
Two DSMLs are proposed to specify these kinds of variability:
i. Structural variability DSML. This DSML offers architecture-based ab-
stractions to specify component configurations. These abstractions rep-
resent standard architectural notions such as components, required and
offered interfaces, and connections between components.
ii. Environment and context variability DSML. This DSML offers the
pertinent abstractions to specify the conditions that represent the dynamism
of the environment and their consequences for the structure of the system.
These models are in essence transition diagrams.
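Such a transition diagram can be sketched as a small data structure; the class and the condition/configuration names below are invented for illustration and are not part of the actual Genie DSML.

```python
# Hypothetical sketch of an environment/context variability model:
# nodes are component configurations, arcs fire when a condition holds.
class TransitionModel:
    def __init__(self, initial):
        self.current = initial
        self.transitions = {}  # (configuration, condition) -> next configuration

    def add_transition(self, source, condition, target):
        self.transitions[(source, condition)] = target

    def on_event(self, condition):
        """Move to the next configuration if a matching arc exists."""
        key = (self.current, condition)
        if key in self.transitions:
            self.current = self.transitions[key]
        return self.current

# Example: a node switches routing configuration on battery state.
model = TransitionModel("full-routing")
model.add_transition("full-routing", "battery-low", "energy-saving")
model.add_transition("energy-saving", "battery-ok", "full-routing")
```

Each state corresponds to a structural configuration of components, so the two kinds of variability meet exactly at the arcs of this diagram.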
These DSMLs are complemented by generative capabilities. Genie allows
the generation of files and software artefacts associated with the construction of
components, configurations of components, and reconfiguration policies from the
models designed using the DSMLs. Genie also supports traceability between the
models associated with structural and environment variability.
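As an illustration of such generative capabilities, the sketch below emits a textual reconfiguration policy from a simple in-memory model; the rule format, field names, and policy syntax are hypothetical, not Genie's actual templates.

```python
# Hypothetical model-to-text generation: turn a rule model into policy text.
def generate_policy(rule):
    lines = [f"policy {rule['name']} {{"]
    lines.append(f"  when {rule['condition']}")
    for action in rule["actions"]:
        lines.append(f"  do {action}")
    lines.append("}")
    return "\n".join(lines)

rule = {"name": "SaveEnergy",
        "condition": "battery < 20%",
        "actions": ["unload(gps)", "load(duty_cycling)"]}
policy_text = generate_policy(rule)
```

The same traversal of the model could equally emit component construction files, which is what makes traceability between the two kinds of models possible: both artefacts derive from the same source.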
Crucially, the modelling of structural variability is realized during Domain
Design in Domain Engineering. Likewise, the modelling of the environment and
context variability is mainly realized during Design Analysis in Application
Engineering (Bencomo et al. (2008c)). While structural variability is related to
middleware concepts, environment and context variability is mainly related to
the nature of applications. As a result, there is a clean separation of concerns
between middleware functionality and application functionality.
The Genie approach has partially captured the experience of the reflective
middleware group by generalizing their expertise in developing middleware plat-
forms and their supported applications. The Genie approach is complemented
by its prototype toolkit, the Genie tool. The Genie tool supports the specifi-
cation, validation and generation of artefacts for component-based middleware
using DSMLs tailored to operate with the reflective middleware platforms at
Lancaster. This thesis reports how the approach has been used to support the
development and operation of the Gridkit dynamically configurable middleware
system and two substantial applications supported by Gridkit.
Two kinds of transformations are involved when using Genie together with
the reflective middleware platforms: a static transformation from the two
DSMLs to the reflective middleware machinery; and a runtime transformation
that maps between configurations of components (reconfiguration). The former
is part of the main contributions of this thesis, whereas the latter is mainly sup-
ported by the reflective middleware platforms. However, as noted above, the
Genie approach offers guidance to model and specify the reconfigurations that
are performed by the middleware platforms.
1.6 Roadmap for the Thesis
The flow of the chapters is presented in Figure 1.1.
Figure 1.1: Structure and Chapters of the Thesis
The remainder of this thesis develops as follows:
Chapter 2, Reflective and Adaptive Middleware, studies the problems that
appear when tackling environmental variation during execution using traditional
middleware platforms. The chapter also presents the concepts and fundamental
principles of flexible middleware platforms and describes two approaches used
to develop such platforms, namely reflection and policy-based mechanisms. The
philosophy of the approach to develop middleware families at Lancaster Univer-
sity and the description of the Gridkit middleware platform are also presented.
Finally, a flood forecasting application is presented as an example to illustrate
the dynamic variability involved in applications based on the Gridkit platform.
Chapter 3, Three approaches to Software Reuse, surveys three different ap-
proaches to software reuse: system family engineering, generative software de-
velopment, and model driven engineering. The aim of the chapter is to provide
a coherent perspective of the diverse literature and to show potential synergies
among these three areas and their potential contribution to achieving the goal of
the thesis.
In Chapter 4, Genie: Modelling and Generating Middleware Families, the au-
thor uses the insights garnered from the background chapters to document the
reasoning behind the design and implementation of Genie, the approach proposed in
this thesis. The author discusses how software architecture plays a crucial role
in raising the level of abstraction during development. The chapter shows how
concepts from domain-specific modelling, generative techniques, and dynamic
variability are woven together in the proposed approach. The chapter also
documents the two proposed dimensions of dynamic variability: structural
variability and environment and context variability. The application of the approach in the specific case
of the reflective middleware families at Lancaster is also presented.
Chapter 5, Evaluation, presents the evaluation of the Genie approach. The
case studies presented illustrate the usage of the variability models and the gener-
ative capabilities of the approach using the Genie tool. The chapter demonstrates
the practicality and benefits of the Genie approach.
Finally, in Chapter 6, Conclusions and Future Research Agenda, the conclud-
ing remarks are presented along with a brief outline of the thesis that highlights
the contributions of this research and answers to the research questions. A future
research agenda is also presented.
Chapter 2. Reflective and
Adaptive Middleware
“And since you know you cannot see yourself,
so well as by reflection, I, your glass,
will modestly discover to yourself,
that of yourself which you yet know not of.”
- from The Tragedy Of Julius Caesar, 1599 by William Shakespeare
2.1 Overview of the Chapter
This chapter focuses on the state-of-the-art in the development of reflective and
adaptive middleware platforms. Sections 2.2.1 and 2.2.2 study the problems that
appear when tackling environmental variation during execution using traditional
middleware platforms. Section 2.3 presents the concepts and fundamental prin-
ciples of flexible middleware platforms and describes two approaches used to
develop such platforms, namely reflection and policy-based mechanisms. The
philosophy of the approach to develop middleware families at Lancaster Univer-
sity and a description of the Gridkit middleware platform are presented in sections
2.4 and 2.5. Finally, as an example to illustrate the dynamic variability involved
in applications based on the Gridkit platform, a flood forecasting application is
presented in section 2.6.
2.2 Middleware Platforms
2.2.1 New Challenges
Middleware is defined as “a layer of software residing on every machine, sit-
ting between the underlying operating system and the distributed applications,
whose purpose is to mask the heterogeneity of the co-operating platforms and pro-
vide a simple, consistent and integrated distributed programming environment”
(Coulouris et al. (2000)). Heterogeneity refers to the diversity of networks, op-
erating systems, and programming languages. Middleware platforms ease the
development of distributed applications and make transparent some aspects of
distribution, such as invocation transparency, which makes local and remote
invocations look identical.
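Invocation transparency is typically achieved with client-side proxies (stubs). The sketch below is a deliberately simplified, in-process illustration with invented class names; real middleware would marshal the arguments and send them over a network.

```python
# Simplified sketch of invocation transparency: a client-side proxy exposes
# the same interface as the remote object, hiding the "network" hop.
class RemoteCalculator:          # stands in for an object on another machine
    def add(self, a, b):
        return a + b

class Stub:
    """Client-side proxy: forwards attribute access as 'remote' invocations."""
    def __init__(self, remote):
        self._remote = remote    # in reality: a connection, not a reference

    def __getattr__(self, name):
        method = getattr(self._remote, name)
        def invoke(*args):
            # marshalling/unmarshalling of arguments would happen here
            return method(*args)
        return invoke

calc = Stub(RemoteCalculator())
result = calc.add(2, 3)          # looks exactly like a local call
```

This is the essence of what CORBA or Java RMI stubs generate automatically from an interface definition.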
Middleware platforms have proved to be successful in fixed networks for over-
coming heterogeneity and integrating existing legacy systems (Grace (2004)).
Many middleware technologies have been developed for fixed networks, such as:
Common Object Request Broker Architecture (CORBA) (OMG (1995)), Enterprise Java Beans
(Monson-Haefel (2000)), DCOM (Microsoft (1996)), Java RMI (Sun (1999)), and
.NET (Microsoft (2000)). However, these are demonstrably unsuitable for the
challenging new requirements derived from new systems (Capra et al. (2001),
Roman et al. (2001), Grace (2004)). Nowadays, middleware platforms must em-
brace both heterogeneous networks and heterogeneous devices: from fixed net-
works to ubiquitous or mobile computing, and from embedded devices in wireless
ad-hoc networks to high-power computers on the Internet. The fixed black-box
approach of traditional middleware implementations whose underlying structure
and behaviour are hidden from the programmer cannot be specialized or adapted
dynamically at runtime to cope with the changes that occur in the new scenarios
(Blair et al. (1998), Kon et al. (2002), Grace (2004)). Therefore, adaptive middle-
ware platforms have been designed to meet the new demands (Blair et al. (1998),
Kon et al. (2000), Blair et al. (2001), Capra et al. (2002), Silva-Moreira (2003),
Grace (2004), Truyen (2004), Alia et al. (2006), Floch et al. (2006)). These new
middleware solutions are configurable and dynamically reconfigurable to enable
the platform to adapt to changes in its environment while providing the expected
functional service with the desired qualities. The next section gives more details
about these new platforms.
2.2.2 New Middleware Platforms
“Modern middleware platforms are deployed in diverse and constantly changing
environments” (Parlavantzas (2005)). Middleware platforms are required to offer
a high degree of configurability not only at deployment time, but also at runtime,
to satisfy the broad and variable set of requirements arising from the needs of both
applications and underlying systems. These platforms should be flexible; the more
flexible a platform is, the more likely it is that it will be able to cope with a given
environmental variation. Parlavantzas pointed out (Parlavantzas (2005)) that
to improve flexibility (and therefore adaptability), platforms should (i) support
modification both statically (i.e., at design, implementation, and deployment
time) and dynamically (i.e., at runtime), and (ii) support extension, that is, the
ability to add new functionality.
Static flexibility is useful for addressing variations in static properties of
the platform environment, i.e. properties that remain invariant while the system is
operating. Dynamic flexibility, by contrast, is useful when dealing with variations
in dynamic properties, such as the quality of network connectivity or battery life.
Essentially, dynamic flexibility should enable timely response to such variations
without stopping the system. Extensibility, the ability to add new functionality,
is particularly desirable when dealing with adaptive systems, as it guarantees
that the set of potential platform variants is not fixed a priori.
Several approaches have been used to develop adaptive middleware. Reflection
and policies are two successful and complementary mechanisms that have been
widely used. These two approaches are presented in the next section.
2.3 Approaches to Developing Flexible Middleware
2.3.1 Overview of Key Approaches
Reflective middleware platforms are configurable to meet the requirements of a
given application domain, dynamically reconfigurable to enable the platforms to
adapt to changes in their environment, and evolvable to meet the requirements of
changing platform design (Blair et al. (2001)). In contrast, policy-driven mech-
anisms allow the application to determine and set the rules used for adaptation.
Adaptation policies are interpreted by the underlying middleware at runtime. For
each particular condition a corresponding rule is applied to alter the middleware
behaviour (Grace (2004)).
2.3.2 Reflective Middleware
Richard Gabriel et al. (Gabriel et al. (1993)) defined reflection as:
“Reflection is the ability of a program to manipulate as data something repre-
senting the state of the program during its own execution. There are two aspects
of such manipulation: introspection and intercession. Introspection is the
ability of a program to observe and therefore reason about its own state. Inter-
cession is the ability of a program to modify its own execution state or alter its
own interpretation or meaning. Both aspects require a mechanism for encoding
execution state as data; providing such an encoding is called reification.”
The data representing the state of the program is known as the base-level,
while the structures used as the self-representation of the program constitute
the meta-level. Both levels are causally connected, meaning that modifications to either
one will be reflected in the other (Maes (1987)). A reflective program running at
the base level has access to its representation at the meta-level, and a modification
of this representation will have an effect on future base-level computations.
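These definitions can be illustrated in a language with built-in reflection; the Python sketch below (an invented example, not tied to any platform discussed here) shows introspection of a class and an intercession that alters future base-level computations.

```python
# Introspection: observing program structure; intercession: modifying it.
class Channel:
    def send(self, msg):
        return f"sent:{msg}"

# Introspection: list the callable operations of the (base-level) class.
ops = [n for n in dir(Channel) if not n.startswith("_")]

# Intercession: replace base-level behaviour through its reified (meta-level)
# representation; the change affects all future base-level computations.
original = Channel.send
Channel.send = lambda self, msg: "compressed:" + original(self, msg)
out = Channel().send("hello")
```

Here `dir()` and the class object itself play the role of the reified execution state: the program manipulates "something representing the state of the program" as ordinary data.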
Introspection and intercession of the base-level objects are enabled by an
interface known as the meta-object protocol (MOP). Two types of reflection have
been defined: structural reflection and behavioural reflection (Ferber (1989)).
Structural reflection refers to the access of the (static) structure of the pro-
gram. The classic example is when a meta-level object examines a base-level
object to determine the methods available for invocation. On the other hand,
behavioural reflection allows a program to access and modify its (dynamic)
behaviour. Examples include adding an interceptor to certain method
calls to perform security checks (Coulson et al. (2002)), or using an interceptor
to manipulate the parameters of calls. In this way, additional code can be executed
before and/or after the call, for instance to modify the arguments of the call
(Bencomo et al. (2005b)).
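A minimal sketch of such an interceptor, assuming a decorator-based design (an illustration only, not the mechanism used by the cited platforms): the pre-code may rewrite the arguments and the post-code runs after the call.

```python
import functools

def intercept(pre=None, post=None):
    """Wrap a function so code runs before/after it; pre may rewrite args."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args):
            if pre:
                args = pre(args)           # e.g. security check or rewriting
            result = fn(*args)
            return post(result) if post else result
        return wrapper
    return decorator

log = []

@intercept(pre=lambda args: tuple(a * 2 for a in args),   # rewrite arguments
           post=lambda r: (log.append(r), r)[1])          # observe the result
def transmit(x):
    return x + 1

value = transmit(5)   # pre doubles the argument, post logs the result
```

The caller is unaware of the interception: `transmit` keeps its original signature, which is exactly what makes interceptors a clean vehicle for behavioural reflection.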
Reflection can be used to cleanly modularize concerns (Dijkstra (1968), Parnas
(1972)), including dynamic concerns (Bencomo et al. (2005b)), and hence to
offer a modular support for adaptation in software systems (Blair et al. (2000),
Redmond & Cahill (2002), Schmidt (2002)). It also potentially improves software
reuse. However, it has the disadvantage of forcing programmers to understand the
meta-level perspective on the computation (Tanter et al. (2003)).
A developer can use reflective services exposed by a programming language,
such as Python or Java, or offered by a middleware platform. The applica-
tion of reflection to the engineering of middleware systems in order to achieve
openness, configurability and adaptability has been widely studied (Blair et al.
(1998), Ledoux (1999), Andersen et al. (2000), Kon et al. (2000), Blair et al.
(2001), Capra et al. (2001), Kon et al. (2002), Schmidt (2002)).
These efforts have allowed the development of several middleware platforms
using reflection including: FlexiNet (Hayton et al. (1998)), OpenCorba (Ledoux
(1999)), DynamicTAO (Kon et al. (2000)), CARISMA (Capra et al. (2001)),
ReMMoC (Grace et al. (2003a)), and GREEN (Sivaharan et al. (2005)). A specific
example of a reflective middleware platform is Gridkit, which is briefly described
in Section 2.5.
2.3.3 Policy-based Mechanisms
Policy-based mechanisms enable the application to determine how to adapt to
environment and context changes. Applications govern and set the rules used for
adaptation. Hence, policies promote application-aware adaptation. In general,
these policies are of the form: On condition Ci do set of instructions Ii. For each
particular condition Ci the corresponding rule (in the form of a set of instructions
Ii) is applied to adjust the middleware behaviour.
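A rule of the form "On condition Ci do set of instructions Ii" can be interpreted by a small loop; the conditions and instruction names below are invented for illustration.

```python
# Sketch of a policy interpreter: each rule pairs a condition C_i with a
# set of instructions I_i applied when the condition holds.
def run_policies(policies, context):
    applied = []
    for condition, instructions in policies:
        if condition(context):
            for instruction in instructions:
                applied.append(instruction(context))
    return applied

policies = [
    (lambda ctx: ctx["bandwidth"] < 64,
     [lambda ctx: "enable-compression"]),
    (lambda ctx: ctx["battery"] < 20,
     [lambda ctx: "unload-gps", lambda ctx: "reduce-duty-cycle"]),
]
actions = run_policies(policies, {"bandwidth": 32, "battery": 15})
```

Because each designer can add rules independently, two conditions may hold at once and trigger contradictory instructions, which is precisely the conflict and coordination problem noted below.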
Designers or programmers make their own decisions about how quality re-
quirements are applied and therefore the application may have different kinds of
policies defined by different designers. This may lead to conflicts between poli-
cies. For example, adaptive behaviour triggered by a specific policy can conflict
with other policies. For this reason coordination between adaptive mechanisms
is also needed (Efstratiou et al. (2003)).
Several middleware platforms that use policies have been developed including:
Odyssey (Satyanarayanan (1996)), Puppeteer (Flinn et al. (2001)), CARISMA
(Capra et al. (2001)), and Chisel (Keeney & Cahill (2003)), and the approach
presented in (Efstratiou et al. (2003)).
Meta-policies have been proposed to develop autonomic computing systems
in which the policies themselves can be modified dynamically and automatically
(Anthony (2006)).
2.3.4 Reflection and Policies: complementary approaches
Reflection offers open access to system inspection and intercession while policies
allow the control of dynamic changes. In this way, policy-driven mechanisms can
be seen as higher-level means for adaptation, i.e. they are typically in a layer on
top of reflective middleware, which concentrates on low-level middleware changes.
Researchers at Lancaster University and University College London (UCL) have
studied the relation between reflection and policy-driven mechanisms in the do-
main of mobile computing middleware (Capra et al. (2002)). They discuss two
complementary approaches: ReMMoC (Grace et al. (2003a)), where reflection is
used to satisfy heterogeneity requirements imposed by both applications and their
device platforms, and CARISMA (Capra et al. (2001)), which uses reflection to
support dynamic adaptation of middleware behaviour according to application
profiles. Using both approaches together offers the key advantage of supporting
reaction to given context changes and events, guaranteeing that the right strategy
is in effect (Capra et al. (2002), Grace et al. (2004)).
Gridkit (Grace et al. (2004)) uses both reflection and policies. The Gridkit
middleware platform is particularly interesting for this thesis as it forms a crucial
part of the case studies of this thesis. Gridkit is presented in Section 2.5.
2.4 Reflective Middleware Families at Lancaster
This section summarizes the perspective of researchers at Lancaster on the con-
cepts of component-based middleware families. Lancaster’s Middleware research
group advocates that modern middleware platforms should be augmented with
adaptive capabilities that enable applications to dynamically adapt to fluctuating
execution contexts. The research group has developed several reflective middle-
ware platforms such as OpenORB2 (Blair et al. (1998), Blair et al. (2001), Blair
et al. (2004)), ReMMoC (Grace et al. (2003b)), GREEN (Sivaharan et al. (2005)),
and RUNES (Costa et al. (2006)).
The notion of system families is based on three key concepts: components,
component frameworks, and reflection. Both the application and the middle-
ware platform are built from interconnected sets of components. Components
and component frameworks provide extendable structure and functionality, and
reflection offers the essential support for dynamic configuration and extensibility
for runtime evolution and adaptation (Bencomo et al. (2006)). The underlying
component model is based on OpenCOM (Coulson et al. (2004a), Coulson et al.
(2008)), a general-purpose and language independent component-based systems
building technology. The basic concepts of OpenCOM are depicted in Figure
2.1. Components are language-independent units of deployment that support
interfaces and receptacles. Interfaces express units of service provision. Re-
ceptacles express units of service requirement and are used to make explicit
the dependency of one interface on another (and hence one component on an-
other). Interfaces are expressed in terms of sets of operation signatures and their
data types. Bindings are associations between a single interface and a single
receptacle.
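These concepts can be summarised in a toy model (illustrative only; the real OpenCOM API differs): a binding is valid only if it pairs a declared receptacle with a provided interface.

```python
# Toy model of the OpenCOM vocabulary: components provide interfaces,
# declare receptacles, and a binding associates one receptacle with one
# interface, making the dependency between components explicit.
class Component:
    def __init__(self, name):
        self.name = name
        self.interfaces = {}    # units of service provision
        self.receptacles = {}   # units of service requirement

class Binding:
    def __init__(self, recp_owner, receptacle, intf_owner, interface):
        if interface not in intf_owner.interfaces:
            raise ValueError("interface not provided by target component")
        if receptacle not in recp_owner.receptacles:
            raise ValueError("receptacle not declared by source component")
        self.link = (recp_owner.name, receptacle,
                     intf_owner.name, interface)

app = Component("app")
app.receptacles["IDiscovery"] = None          # app depends on discovery
sd = Component("service_discovery")
sd.interfaces["IDiscovery"] = object()        # discovery provides it
b = Binding(app, "IDiscovery", sd, "IDiscovery")
```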
Figure 2.1: The OpenCOM main concepts
Capsules are containing entities that offer a component runtime (CRT) API
for the loading, binding etc. of components. The pseudo code of the CRT API
is as follows (Coulson et al. (2008)):
template load(comp_type name);
// loads a new component type and inserts it into the OpenCOM runtime
comp_inst instantiate(template t);
// creates a new instance of a component type
status unload(template t);
// unloads an existing component type from the OpenCOM runtime
status destroy(comp_inst comp);
// deletes a component instance which has been previously created
comp_inst bind(ipnt_inst interface, ipnt_inst receptacle);
// binds a receptacle to an interface
status putprop(ID entity, ID key, opaque value);
// inserts a particular property into the runtime registry
opaque getprop(ID entity, ID key);
// retrieves a particular property from the runtime registry
status notify(Callback *callback);
// registers a callback to receive kernel event notifications
Using the API, components can be deployed at any time during execution, and
their loading can be requested by any component within the capsule (i.e. third-
party deployment). Capsules can be implemented in different ways on different
devices. For example, they might be implemented as a Unix or Windows process
on a PDA or PC; or as a RAM chip on a sensor (Costa et al. (2005)). Therefore
capsules support technical heterogeneity.
The following is a brief explanation of the operations of the CRT API:
The load() operation loads a named component template (i.e. component
type) from the local repository, and unload() unloads an existing template. Tem-
plates can be instantiated (using instantiate()) to yield component instances; this
can be done multiple times if desired. The bind() operation is used to bind a
receptacle to an interface. It returns an identifier of the component instance
that represents the binding. This, like any other component instance, can be
destroyed (effectively unbinding it) using destroy(). Therefore bind() and destroy()
enable dynamic changes to the configuration of the components by binding and
unbinding components at run-time. A registry facility is used to attach arbitrary
meta-data to any component-model entity, i.e., templates, components, inter-
faces, or receptacles, during runtime. The getprop() and putprop() calls give
access to the runtime registry: getprop() returns a reference to an interface
of a component instance, and putprop() writes into the registry.
Causal connection between the base-level system and the meta-models is
achieved via the notify() operation from the CRT API. The notify() method
is used to provide specific support for developers of the optional (e.g. reflective)
extension layers. This operation allows meta-models to register a callback that is
invoked every time a subsequent call (bind, load, etc.) is made on the CRT API.
The callback invocation contains all the parameter values of the call and so gives
the callback holder a complete picture of all activity in the capsule.
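The causal connection via notify() can be sketched with a toy capsule (hypothetical and much simplified relative to the real CRT): a registered callback observes every subsequent CRT operation.

```python
# Toy capsule modelled on the shape of the CRT API (illustrative only):
# notify() registers a callback invoked on every subsequent operation,
# which is how a meta-model stays causally connected to the base level.
class Capsule:
    def __init__(self):
        self.templates, self.instances, self.callbacks = {}, [], []

    def _emit(self, op, *params):
        for cb in self.callbacks:       # inform all registered meta-models
            cb(op, params)

    def load(self, name):
        self.templates[name] = name
        self._emit("load", name)
        return name

    def instantiate(self, template):
        inst = (template, len(self.instances))
        self.instances.append(inst)
        self._emit("instantiate", template)
        return inst

    def notify(self, callback):
        self.callbacks.append(callback)

events = []
capsule = Capsule()
capsule.notify(lambda op, params: events.append(op))  # meta-level observer
t = capsule.load("compressor")
c = capsule.instantiate(t)
```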
The above allows OpenCOM to support the construction of dynamic systems
that require runtime reconfiguration. OpenCOM-based platforms are straight-
forwardly deployable in a wide range of deployment environments ranging from
standard PCs, resource-poor PDAs, embedded systems with no OS support, and
high speed network processors.
Components are complemented by the coarser-grained notion of component
frameworks (Szyperski (2002)). A component framework is a set of compo-
nents that cooperate to address a required functionality or structure (e.g. service
discovery and advertising, security etc). Component frameworks also accept ad-
ditional ‘plug-in’ components that change and extend behaviour. Many interpre-
tations of the component framework notion foresee only design-time or build-time
pluggability. Using OpenCOM, runtime pluggability is also supported, and compo-
nent frameworks actively police attempts to plug in new components according to
well-defined policies and constraints. An example of such constraints is a protocol
stacking component framework that accepts new protocol components as plug-
ins. The constraints associated with the component framework prescribe that
these plug-ins have to be composed into linear arrangements or stacks (Coulson
et al. (2002)).
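A component framework policing its plug-ins can be sketched as follows; the linear-stacking rule mirrors the protocol-stack example above, though the API shown is invented.

```python
# Sketch of a component framework enforcing a constraint on plug-ins:
# here, protocol plug-ins must form a linear arrangement (a stack).
class ProtocolStackFramework:
    def __init__(self):
        self.stack = []

    def plug_in(self, component, on_top_of=None):
        """Accept a plug-in only if it extends the stack linearly."""
        top = self.stack[-1] if self.stack else None
        if on_top_of != top:
            raise ValueError("plug-in rejected: breaks linear stacking")
        self.stack.append(component)

fw = ProtocolStackFramework()
fw.plug_in("udp")                              # accepted: stack is empty
fw.plug_in("fragmentation", on_top_of="udp")   # accepted: udp is on top
try:
    fw.plug_in("encryption", on_top_of="udp")  # rejected: udp no longer on top
    rejected = False
except ValueError:
    rejected = True
```

Because every plug-in attempt passes through the framework, an invalid arrangement can never be installed, which is what makes runtime pluggability safe.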
An important aspect of Lancaster’s philosophy is the fact that both applica-
tions and the middleware platforms use the OpenCOM component model. This
is convenient in terms of portability and interoperability (Parlavantzas et al.
(2000)). This removes the typical barrier between middleware developers
and application developers, allowing the latter to take on the role of the former
if needed.
As in other program family techniques, Lancaster’s approach uses component
frameworks to manage and accomplish variability and development of systems
that can be adapted by re-configuration. The architecture defined by the com-
ponent framework basically defines the commonalities; different configurations
or variants will exist that follow the well-defined policies and constraints of the
component framework (Sawyer et al. (2007a)). The philosophy of Lancaster’s ap-
proach advocates that modern middleware platforms should be augmented with
adaptive capabilities to enable applications to dynamically adapt to fluctuating
changes in the environment and contexts. The fact that component frameworks
enforce architectural principles (constraints) on the components they support is
especially relevant in reflective architectures that dynamically change, and whose
changes must be verified. “Configuring a system is the process of choosing a spe-
cific family instance and modifying the runtime structure of a system to conform
to the chosen instance” (Heimbigner & Wolf (2002)). This implies the insertion,
deletion and modification of the structural elements represented by the compo-
nent frameworks: components, interfaces, receptacles, binding components and
constraints.
The Reflective Extensions
As mentioned, reflection is used to support introspection and adaptation of the
underlying component/component framework structures. A pillar of Lancaster’s
approach to reflection is to provide an extensible suite of orthogonal meta-models
each of which is optional and can be dynamically loaded when required, and
unloaded when no longer required.¹ The meta-models manage both evolution
and consistency of the base-level system. Figure 2.2 shows a hypothetical meta-
model for an executing system. The meta-model is found at the meta-level and
the system being executed is at the base-level. Three reflective meta-models are
currently supported:
• The architecture meta-model represents the current topology (i.e. soft-
ware architecture) in terms of a composition of components within a capsule;
it is used to inspect (discover), adapt and extend a set of components. For
example, we might want to change or insert a compression component to
operate efficiently over a wireless link. This meta-model provides access to
a component graph in which components are nodes and bindings are arcs.
Inspection is achieved
by traversing the graph, and adaptation/extension is realized by inserting
or removing nodes or arcs. The architecture meta-model can be used to obtain
the current architecture information and determine the next valid step in the
execution of the system. For example, before adding a component and connecting
it to the architecture, the graph can be used to check whether the component is
already connected, to avoid adding it twice. Another example is inspecting the
resource usage of architectures to decide the next path of execution: three
replicated components might be reduced to two when the current state dictates
that resources should be conserved. To do this, an inspection is carried out to
find the least-used component, which is then removed from the current
architecture.
¹Note that there is a potentially confusing terminological clash between the UML “meta-model” and “meta-level” and the reflective “meta-model” and “meta-level”. These concepts are entirely distinct; nevertheless the author is forced to employ both of these terms because they are well established in their own communities.
• The interface meta-model supports the dynamic discovery of the set of
interfaces defined on a component. Support is also provided for the dy-
namic invocation of methods defined on these interfaces. Both capabilities
together enable the invocation of interfaces whose types were unknown at
design time. This meta-model is similar to Java’s reflective capabilities for
inspection.
• The interception meta-model supports the dynamic interception of in-
coming method calls on interfaces and also the association of pre- and post-
method-call code. The code elements that are interposed are called inter-
ceptors. For example, in the wireless link scenario we might want to use an
interceptor to monitor the conditions under which the compressor should be
switched.
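The graph-based inspection and adaptation supported by the architecture meta-model can be sketched in Java as follows. The sketch is illustrative only: the class and method names (ArchitectureMetaModel, leastUsed, and so on) are assumptions made for this example and do not correspond to the actual OpenCOM meta-object protocol.

```java
import java.util.*;

/** Illustrative sketch: an architecture meta-model as a component graph
 *  (components are nodes, bindings are arcs). Not the real OpenCOM API. */
class Component {
    final String name;
    int resourceUsage; // hypothetical metric used when shedding replicas
    Component(String name, int resourceUsage) {
        this.name = name;
        this.resourceUsage = resourceUsage;
    }
}

class ArchitectureMetaModel {
    // adjacency lists: each component maps to the components it is bound to
    private final Map<Component, Set<Component>> graph = new LinkedHashMap<>();

    boolean isConnected(Component c) { return graph.containsKey(c); }

    /** Insert a node only if absent, avoiding a duplicate connection. */
    boolean connect(Component c) {
        if (isConnected(c)) return false;
        graph.put(c, new HashSet<>());
        return true;
    }

    /** Adaptation: add an arc (binding) between two connected components. */
    void bind(Component a, Component b) {
        graph.get(a).add(b);
        graph.get(b).add(a);
    }

    /** Inspection: traverse the graph to find the least-used component. */
    Component leastUsed() {
        return graph.keySet().stream()
                .min(Comparator.comparingInt(c -> c.resourceUsage))
                .orElse(null);
    }

    /** Adaptation: remove a node and all of its arcs. */
    void remove(Component c) {
        graph.remove(c);
        graph.values().forEach(s -> s.remove(c));
    }
}

class MetaModelDemo {
    public static void main(String[] args) {
        ArchitectureMetaModel mm = new ArchitectureMetaModel();
        Component r1 = new Component("replica1", 40);
        Component r2 = new Component("replica2", 75);
        Component r3 = new Component("replica3", 10);
        mm.connect(r1); mm.connect(r2); mm.connect(r3);
        mm.bind(r1, r2); mm.bind(r2, r3);
        System.out.println(mm.connect(r1));  // false: already connected
        Component victim = mm.leastUsed();
        System.out.println(victim.name);     // replica3
        mm.remove(victim);                   // shed the least-used replica
    }
}
```

This mirrors the two uses discussed above: the duplicate-connection check before insertion, and the least-used inspection that precedes removing a replica.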
Figure 2.2: Reflective meta-models
The meta-models cover both structural and behavioural aspects. In the con-
text of OpenCOM, structural reflection is concerned with the content and struc-
ture of a given component; i.e., the MOP provides access to the architecture of
the system in terms of, for example, components and connectors (see Figure 2.2). On
the other hand, behavioural reflection is concerned with activity in the underlying
system (Blair et al. (2002)), i.e. the flow of execution. Structural reflection is rep-
resented by the interface and architecture meta-models. These two meta-models
represent a separation of concerns between the external view of a component (i.e.
its set of interfaces), and the internal construction (i.e. its software architecture).
It is the interception meta-model which supports behavioural reflection.
The meta-models described above are crucial as they offer views of the current
state of the system, mainly by describing the current configurations or variants
(in terms of the components plus bindings or connectors) at any point of execu-
tion.
As an example, the architecture meta-model keeps itself updated with infor-
mation associated with the internal topology of the capsule contents using the
notify callback. Similarly, if the architecture meta-model needs to change the
base-level configuration to mirror its own changes, it simply invokes the respec-
tive operation (bind, load, etc.) in the CRT API. In this way, the causal-connection
relation between the base level and the meta level is maintained.
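A minimal sketch of this two-way causal connection is given below. The CRT is reduced here to a single load operation and one notify callback, and all names (ComponentRuntime, ArchitectureMetaLevel) are illustrative assumptions rather than the real OpenCOM interfaces.

```java
import java.util.*;
import java.util.function.Consumer;

/** Illustrative stand-in for the CRT: base-level changes fire a callback. */
class ComponentRuntime {
    final List<String> loaded = new ArrayList<>();
    private Consumer<String> notifyCallback = s -> {};
    void onChange(Consumer<String> cb) { notifyCallback = cb; }
    void load(String component) {         // base-level change...
        loaded.add(component);
        notifyCallback.accept(component); // ...notifies the meta level
    }
}

/** Meta level: mirrors the base level and mirrors its own changes down. */
class ArchitectureMetaLevel {
    final Set<String> view = new LinkedHashSet<>();
    private final ComponentRuntime crt;
    ArchitectureMetaLevel(ComponentRuntime crt) {
        this.crt = crt;
        crt.onChange(view::add);          // upward direction: stay in sync
    }
    /** Downward direction: invoke the respective CRT operation. */
    void add(String component) {
        if (!view.contains(component)) crt.load(component);
    }
}

class CausalConnectionDemo {
    public static void main(String[] args) {
        ComponentRuntime crt = new ComponentRuntime();
        ArchitectureMetaLevel meta = new ArchitectureMetaLevel(crt);
        crt.load("compressor");           // base-level change
        System.out.println(meta.view);    // meta level now mirrors it
        meta.add("cache");                // meta-level change
        System.out.println(crt.loaded);   // mirrored down via the CRT
    }
}
```

The point of the sketch is the symmetry: base-level operations reach the meta level through the callback, and meta-level changes are realized only by invoking base-level operations, which keeps the two levels causally connected.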
The next section presents Gridkit. Gridkit is a middleware platform that
uses both reflection and policy-based adaptation mechanisms and is particularly
relevant for the thesis as it plays a crucial role in the case studies.
2.5 The Gridkit Middleware
Gridkit is one of the dynamically configurable middleware families that have been
developed using the OpenCOM component model. Following the OpenCOM phi-
losophy, the Gridkit architecture is composed of component frameworks. A com-
ponent framework in Gridkit is a reusable and extensible component architecture
(i.e. composition of components) that is targeted at particular sub-domains of
middleware behaviour. Component frameworks accept additional components as
runtime plug-ins. In OpenCOM, component frameworks are inherently compo-
nents. Therefore, component frameworks can contain other component frame-
works and be recursively assembled into more complex frameworks (Parlavantzas
& Coulson (2007)). In this way, and like components, component frameworks
can be composed and connected to build a suitable middleware service for a
particular set of requirements. Figure 2.3 illustrates the architecture of Gridkit
and its key frameworks.
In brief, the frameworks are as follows:
• The open overlays framework supports multiple virtual network ser-
vices (e.g. multicast, ad-hoc routing) required by higher level frameworks.
Above the open overlays framework is a set of further “vertical” frameworks
that provide functionality in various orthogonal areas and can optionally
be included.
• The interaction framework accepts multiple interaction type plug-ins,
such as Remote Procedure Call (RPC), publish-subscribe, and group com-
munication.
• The resource discovery framework accepts plug-in strategies to dis-
cover application services, such as Service Location Protocol (SLP), Uni-
versal Plug and Play (UPnP), and Salutation; and resources such as CPUs
and storage.
• The resource management and resource monitoring frameworks
are respectively responsible for managing and monitoring resources.
• The security framework provides general security services for the rest
of the frameworks.
These frameworks are discussed in more detail in (Coulson et al. (2006)).
The configuration of each of these component frameworks can be far from sim-
ple, as a component framework can consist of many components and connections.
For example, one of the configuration files of the Publish/Subscribe middleware
platform (described in the Gridkit User Guide (Lancaster (2007))) includes only
4 components and 5 bindings; however, the associated XML file that describes the
configuration has 64 lines. The number of lines of such configuration files grows
rapidly as the number of components and bindings increases, and this figure ignores
the extra code required to manage change at runtime.
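As an illustration of why such files grow long, a hypothetical fragment in the spirit of a Gridkit XML configuration is shown below. The element and attribute names are invented for this example and do not follow the actual Gridkit schema; each component and binding carries several lines of metadata, which is how a 4-component, 5-binding configuration runs to dozens of lines.

```xml
<!-- Hypothetical fragment; illustrative element names, not the Gridkit schema. -->
<componentframework name="PubSubCF">
  <component name="PubSubController" class="gridkit.pubsub.Controller"/>
  <component name="Broker"           class="gridkit.pubsub.Broker"/>
  <binding from="PubSubController" interface="IBroker"
           to="Broker"            receptacle="IBrokerRecp"/>
  <!-- ...further components, bindings and per-binding attributes follow... -->
</componentframework>
```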
A key feature of Gridkit is its ability to handle diversity, i.e. it can be dynam-
ically configured to meet different application requirements in different environ-
mental settings. Hence, different middleware services are tailored to the current
domain (Grace et al. (2004), Coulson et al. (2006)).
Figure 2.3: The Gridkit Middleware Architecture
Frameworks are the units that enable a variety of runtime (re)configurations.
The open overlays framework gives the best example in Gridkit of a configurable
and reconfigurable framework that supports flexible virtualization of the network
resources. The open overlays framework addresses the challenges of heteroge-
neous network environments: (i) the increasing range of networking technologies
(e.g. large-scale fixed networks, mobile ad-hoc networks, resource impoverished
sensor networks, satellite links, etc), and (ii) the increasing demand by distributed
applications for sophisticated and application-tailored services from the under-
lying network (e.g. multimedia content distribution, reliable multicast, etc.).
Network overlays provide an approach to the virtualisation of the underlying net-
work resource(s), making it possible to provide a range of different networking
abstractions. The open overlays framework in Gridkit allows the co-existence of
multiple (physical or) virtual networking abstractions, and potentially supports
the layering of virtual network abstractions (Grace et al. (2008a)).
Figures 2.4 and 2.5 show two different possible configurations of the open
overlays framework. The framework can self-configure horizontally and vertically
to create different combinations; hence, it can be instantiated in many possible
configurations to meet wide variation in heterogeneous conditions. Furthermore,
the final configurations must be meaningful, so they have to be scoped according
to the operating context. For example, if multicast is required in an ad-hoc
network, then an appropriate overlay (e.g. gossip-based) should be selected
(Grace et al. (2008a)).
Figure 2.4: An example configuration of the open overlays framework, from (Grace et al. (2008b))
Figure 2.5: Another example configuration of the open overlays framework
Policies for initial configuration and dynamic reconfiguration are associated with
individual frameworks. That is, an instance of the interaction framework uses
one set of policies, while the overlay framework uses a different set. For example,
Gridkit can offer (i) at the interaction component framework level, a Publish/Subscribe
service (plug-in) as the interaction type in an ad-hoc network (in contrast to an
Object Request Broker in a fixed network), or (ii) at the open overlays component
framework level, WiFi or Bluetooth plug-ins in the Network component framework
to supply wireless communication services (see Figure 2.3). This approach allows
domain-specific policies to be included by different developers. These policies can
be implemented, for instance, as declarative XML statements written by middleware
developers. The configuration and reconfiguration policies describe the framework
configurations that meet a given
set of environmental conditions.
A configuration policy, which describes an initial configuration, follows the for-
mat: on a set of conditions Ci, execute a set of instructions Ii; that is, the set of
instructions Ii determines the initial configuration of the component graph
of the given framework. A reconfiguration policy follows, in principle, the same
format; however, its set of instructions Ii determines the reconfiguration
(changes) to be performed on the current component graph of the given frame-
work. An example of a reconfiguration policy is shown in Figure 2.6; the policy
is based on the example of the flood warning application explained in the next
section. In this case, the component framework is the open overlays framework.
The policy indicates that when HIGH-FLOW = TRUE and HIGH-BATTERY
= TRUE the reconfiguration Reconfigurations.Wifi should be performed. In this
case, the reconfiguration is executed by a Java file. The reconfiguration policy
given in Figure 2.6 uses the AND operator to compose conditions that trigger
the reconfiguration. Composition of conditions using the OR operator is speci-
fied using two or more reconfiguration policies for a given component framework.
During execution, the middleware uses the context monitor to consult the condi-
tions of the current context to select the appropriate policy. The decision making
and the resolution of possible conflicts is therefore resolved by the middleware
platform during execution (Efstratiou et al. (2002)). Before initialization, a set
of policies will exist in the repository. However, given the dynamic nature of the
applications that Gridkit is designed to support, new policies may be inserted
dynamically. The grammar of the reconfiguration policies is shown in Section B.1
of Appendix B.
Figure 2.6: An example of a reconfiguration policy in Gridkit
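The condition–action structure of these policies can be sketched in Java as follows. The representation (a map of ANDed boolean conditions and an action string naming a reconfiguration such as Reconfigurations.Wifi) is an assumption made for illustration; Gridkit's actual policies are declarative XML whose grammar is given in Appendix B.

```java
import java.util.*;

/** Illustrative policy: on set of conditions Ci (ANDed), execute action Ii. */
class ReconfigurationPolicy {
    final Map<String, Boolean> conditions;
    final String action; // e.g. "Reconfigurations.Wifi"
    ReconfigurationPolicy(Map<String, Boolean> conditions, String action) {
        this.conditions = conditions;
        this.action = action;
    }
    /** All conditions must hold in the observed context (AND composition). */
    boolean matches(Map<String, Boolean> context) {
        return conditions.entrySet().stream().allMatch(
            e -> e.getValue().equals(context.getOrDefault(e.getKey(), false)));
    }
}

class PolicyRepository {
    private final List<ReconfigurationPolicy> policies = new ArrayList<>();
    void insert(ReconfigurationPolicy p) { policies.add(p); } // also at runtime
    /** OR composition is expressed as several policies with the same action;
     *  the first policy whose conditions all hold is selected. */
    Optional<String> select(Map<String, Boolean> context) {
        return policies.stream().filter(p -> p.matches(context))
                       .map(p -> p.action).findFirst();
    }
}

class PolicyDemo {
    public static void main(String[] args) {
        PolicyRepository repo = new PolicyRepository();
        repo.insert(new ReconfigurationPolicy(
            Map.of("HIGH-FLOW", true, "HIGH-BATTERY", true),
            "Reconfigurations.Wifi"));
        // context supplied by the context monitor during execution
        Map<String, Boolean> ctx = Map.of("HIGH-FLOW", true, "HIGH-BATTERY", true);
        System.out.println(repo.select(ctx).orElse("none"));
    }
}
```

Note how the OR case falls out of the design: two policies sharing an action but differing in conditions behave as a disjunction, matching the description above.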
2.6 The Flood Forecasting Application
2.6.1 Description
To illustrate the dynamic variability of Gridkit, consider the case study of the
open overlays framework in an already-implemented real-world scenario: a wire-
less-sensor-network-based real-time flood forecasting system deployed on the River
Ribble in the north west of England. The example is based on (Grace et al. (2006b),
Hughes et al. (2006a), Hughes et al. (2006b), and Grace et al. (2008b)). The
application monitors water flow and depth in the river using a network of sensor
nodes along the banks of the river. The current deployment has about 15 nodes.
Sensor nodes are known as GridStix. The sensor data is collected in real-time at
one or more designated root nodes and forwarded to a prediction model that runs
on a remote computational cluster. The collected data is forwarded to the remote
cluster via GPRS connections. Each GridStix sensor node comprises a 400MHz
XScale CPU, 64MB of RAM, 16MB of flash memory, and Bluetooth and WiFi
network interfaces.
One of the benefits of the new technologies used in this application comes
from being able to adapt dynamically (i.e., software is removed and new soft-
ware is uploaded) in response to changing river conditions. Network topologies
can change and nodes can switch between different wireless communication tech-
nologies as the requirements for energy conservation, real-time computation, and
fault-tolerance vary according to the state of the river (Sawyer et al. (2007b)).
The application supports two plug-ins at the physical network level, for Blue-
tooth and for WiFi (802.11b). The application also offers a configurable Span-
ning Tree overlay plug-in for sensor data dissemination that supports
two strategies: shortest path or fewest hop. The conditions that require
these options are explained below.
The river states identified are: quiescent, high flow and in flood. The state of
the river is quiescent when the flow rate and water depth are within ranges that
indicate that the river poses no threat to local or down-stream communities. The
state of the river is high flow when the flow rate has increased beyond a threshold
value that could indicate that a significant and potentially damaging increase in
water depth is about to occur. Otherwise, the river is declared to be in flood.
Flooding conditions can have serious consequences such as loss of livestock and
crops, damage to property and danger to communities.
2.6.2 Planning Reconfigurations and Adaptations
Trade-offs must be made when selecting the configuration options (plug-ins)
explained above. To work out the optimal configuration to use in each of the
identified states, it is necessary (a) to establish the metrics to be considered and
(b) to analyse the options against those metrics.
(a) Table 2.1 shows the metrics considered in the trade-off (Grace et al.
(2006b), Grace et al. (2008b)):
Latency: the speed with which messages can be relayed from each sensor node to the root node (and thence to the back-end flood prediction models)
Resilience: the extent to which the failure of a given node reduces the overall connectedness of the network; measured as the number of viable routes between each node and the root
Power consumption: flooding occurs in conditions of low light intensity, therefore power consumption is an important factor to take into account
Table 2.1: Metrics considered in the trade-off when planning reconfigurations
(b) From empirical experiments and evaluations (Grace et al. (2006b)) the
following was concluded:
i. at the physical network level, WiFi offers higher bandwidth than Bluetooth,
which makes WiFi more efficient; however, WiFi requires more energy;
ii. at the Spanning Tree level, the possibilities are summarized as follows:
- shortest path (SP) trees tend to consume less power than fewest hop (FH)
trees, but their performance is inferior;
- fewest hop (FH) trees minimise the loss of data that occurs due to node failure,
but they consume more power.
2.6.3 Transition Diagrams for Reconfiguration
Figure 2.7 presents the state transition diagram deduced from the above analysis.
The diagram shows the triggers and conditions that drive the system from one
configuration to another. An example (using pseudo-code) of a reconfiguration
policy is given. The policy corresponds to the transition from Normal state to
Emergency state.
Figure 2.7: Reconfiguration diagram, based on (Grace et al. (2006b) and Grace et al. (2008b))
Three system states were identified: Normal, Alert and Emergency
(Grace et al. (2006b)).
Normal: while the sensor network is working under normal conditions (the
quiescent river state), the middleware is customized to be supported by an
overlay providing a shortest path (SP) tree connected using a Bluetooth (BT)
network.
Alert: some nodes have video cameras attached, pointing at the river surface.
Using simple image processing on the images of the river, an estimate of the
river flow rate can be obtained. These estimates can set the condition
trigger High-Flow. If the High-Flow condition occurs and the battery life of
the nodes is high (cf. the policy in Figure 2.6), the connection to be used is
WiFi. The spanning tree topology remains shortest path (SP).
Emergency: the prediction models generate an event stating that flooding
is about to happen. Hence, the sensor network adapts itself to become less
vulnerable to node failure: the nodes configure themselves into a new topology,
a fewest hop (FH) tree.
The triggers High-Flow and Flood-Predicted are determined by the applica-
tion. High-Flow is inferred from the observations of the video cameras attached to
some nodes; the flow rates of the river are estimated using the images taken with
the cameras. Flood-Predicted is provided by the prediction model that runs on
the GridStix nodes.
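The transitions described above can be encoded as a small state machine. The following Java sketch is an illustrative encoding of Figure 2.7: the states, triggers and per-state configurations follow the text, but the API itself is invented for this example.

```java
/** Illustrative encoding of the Figure 2.7 transitions; not Gridkit code. */
enum NetworkTech { BLUETOOTH, WIFI }
enum TreeStrategy { SHORTEST_PATH, FEWEST_HOP }

enum RiverState {
    // each state carries the configuration the middleware adopts in it
    NORMAL(NetworkTech.BLUETOOTH, TreeStrategy.SHORTEST_PATH),
    ALERT(NetworkTech.WIFI, TreeStrategy.SHORTEST_PATH),
    EMERGENCY(NetworkTech.WIFI, TreeStrategy.FEWEST_HOP);

    final NetworkTech network;
    final TreeStrategy tree;
    RiverState(NetworkTech n, TreeStrategy t) { network = n; tree = t; }

    /** Triggers drive the transitions; unknown triggers leave the state. */
    RiverState on(String trigger) {
        switch (trigger) {
            case "HIGH_FLOW":       return this == NORMAL ? ALERT : this;
            case "FLOOD_PREDICTED": return EMERGENCY;
            case "QUIESCENT":       return NORMAL;
            default:                return this;
        }
    }
}

class TransitionDemo {
    public static void main(String[] args) {
        RiverState s = RiverState.NORMAL.on("HIGH_FLOW");
        System.out.println(s + " uses " + s.network + " + " + s.tree);
    }
}
```

Tying a configuration to each state makes the trade-off analysis explicit: every transition names both the trigger and the network/tree combination that the reconfiguration policies must realize.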
2.7 Summary
This chapter began by illustrating how traditional middleware platforms are un-
suited to the challenges posed by modern dynamically adaptive systems. The
chapter continued by explaining how reflection and policy-based mechanisms en-
able modern adaptive middleware platforms to meet the new demands. The phi-
losophy of the OpenCOM family of reflective middleware systems at Lancaster
was introduced with particular focus on one of its members, Gridkit, developed
for grid applications.
Gridkit reflects the modern view of middleware (Eliassen et al. (1999), Schmidt
et al. (2001)): that a set of middleware capabilities needs to be tailored to classes
of problem domains that increasingly demand advanced functionality such
as the ability to reconfigure and adapt dynamically. However, the disadvantage of
this approach is the cost of producing different middleware systems. The Open-
COM component model supports this task but still entails substantial work to
instantiate and configure a given system's architecture.
The high level of variability present in platforms like Gridkit requires sys-
tematic management to structure, document, and implement software variability.
An important aim of this thesis is to investigate how generating instances of mid-
dleware tailored to specific problem domains and contexts can be achieved in
a systematic, consistent and, if possible, automatic way. The author advocates
that model-driven engineering (Kent (2002), Schmidt (2006), France & Rumpe (2007))
and generative techniques help to achieve this. The author also supports the use
of model-driven techniques for guiding and validating runtime behaviour.
The next chapter presents a state-of-the-art investigation of the related areas
of system families, generative software development, and model driven engineer-
ing in the context of software reuse. The purpose is to explore the synergies
between these three areas when pursuing the goals of this thesis.
Chapter 3. Three approaches
to Software Reuse
“Where is the wisdom we have lost in knowledge?
Where is the knowledge we have lost in information?”
- from The Rock (1934) by T.S. Eliot (1888 - 1965)
3.1 Overview of the Chapter
This chapter provides a brief introduction of software reuse and identifies the
importance of abstraction. Three different approaches to software reuse are ex-
amined: system family engineering, generative software development, and model-
driven engineering. The main purpose of the chapter is to provide a coherent
perspective of the diverse literature and to show potential synergies among these
three areas when pursuing the goals of this thesis.
3.2 Background on Software Reuse
“Software reuse is the process of creating software systems from existing software
rather than building software systems from scratch.” (Krueger (1992)). Software
reuse can be classified as opportunistic (or accidental) and systematic (GAO
(1993)). Opportunistic reuse is practiced in an ad-hoc fashion during software
development. In opportunistic reuse, new systems are built from software parts
that have been retrieved from existing systems and adapted to meet the specific
needs of the new system. By contrast, “[s]ystematic reuse is planned and inte-
grated into a well defined software development process.” (GAO (1993)). In the
case of systematic reuse, new systems are built from software modules that have
been specifically designed and developed to be reused.
Virtually every organization routinely practices opportunistic reuse by retriev-
ing old “parts” in some ad-hoc manner. Nevertheless, the benefits of this ad-hoc
reuse approach have limits. On the other hand, systematic reuse is meant to re-
duce software development costs while improving software quality. According to
a number of software experts, reuse has the potential to increase productivity and
reliability, reduce costs, and establish a more standard and consistent approach
to software development and maintenance (GAO (1993)). Henceforth, when we
refer to reuse of software we mean systematic reuse.
Whereas many authors report benefits of software reuse (Krueger (1992), GAO
(1993), Frakes (1994), Devanbu (1998), Kim & Stohr (1998), Gacek (2002), Som-
merville (2004)), many caution that these benefits are not straightforwardly or
rapidly achievable. The potential impact of software reuse remains questionable
because of technical, organizational, and legal issues that need to be addressed.
Despite all the attention the concept has received, it has failed to become a stan-
dard good practice in software engineering (Krueger (1992), IEEE (1994), Mili
et al. (1995), Glass (1998), Kim & Stohr (1998), Schmidt (1999), Valtech et al.
(1999)). In the specific case of cost, it is important to note that software con-
structed for reuse costs more than traditional development. The idea behind any
reusable approach or technology is that the investment will be repaid after the
reusable solution has been applied a number of times depending on the context.
Having a principled, systematic reuse process presupposes the existence of
software to reuse. Domain engineering helps identify this software (Poulin (1995)).
A broadly used definition of Domain Engineering is found in (Czarnecki & Eise-
necker (2000)): Domain Engineering “is the activity of collecting, organizing,
and storing past experience in building systems or parts of systems in a particular
domain in the form of reusable assets (i.e., reusable work products), as well as
providing an adequate means for reusing these assets (i.e., retrieval, qualification,
dissemination, adaptation, assembly, and so on) when building new systems.”
Perhaps the most important reusable asset identified during Domain Engineering
is the common architecture (Bass et al. (2003)).
Software reuse is about using existing life-cycle artefacts during the develop-
ment of a new system. It is not only artefacts such as source code or specifications
that can be reused. Knowledge should also be reused and Domain Engineering
is the key. In Domain Engineering, common elements across a domain are deter-
mined and this information will inform additional domain development. Domain
knowledge sources include code, specifications, documents, plus information
that exists only in the heads of domain experts (Moore (2001)). Knowledge reuse
is challenging because domain information is not written down consistently and
because of many other issues such as organizational culture.
By contrast, the term Application Engineering refers to the process of
tailoring and configuring the common architecture and other assets to create
a specific application. The construction and configuration of new applications
from the reusable assets can either be done manually and/or using generators to
produce them automatically. Section 3.6 reports further discussion about Domain
Engineering, Application Engineering and automatic generation of artefacts.
3.3 Abstraction, Cognitive Distance, and Soft-
ware Reuse
Many researchers have acknowledged the importance of abstraction to software
reuse (Wegner (1987), Neighbors (1984), Booch (1987), Parnas et al. (1989),
Krueger (1992)). For instance, Wegner (Wegner (1987)) stated that “abstraction
and reusability are two sides of the same coin” and according to (Krueger (1992))
“abstraction is the essential feature in any reuse technique”.
Computer scientists repeatedly use abstraction to reduce the intellectual com-
plexity of software (Shaw (1984)). The importance of abstraction is that without
it, software developers would be forced to deal with reusable artefacts directly, trying to
understand the goal of each artefact and when and how it could be reused.
For Krueger (Krueger (1992)), the notion of abstraction is so important that he
claims that the “successful application of a reuse technique to a software engi-
neering technology is inevitably tied to raising the level of abstraction for that
technology”.
Abstraction can be used as both a process and an entity. Abstraction, as a
process can be defined as:
“a process whereby we identify the important aspects of a phenomenon and
ignore its details” (Ghezzi et al. (1991)).
Abstraction, as an entity, has been defined as follows:
“[a]n abstraction denotes the essential characteristics of an object that distin-
guish it from all other kinds of object and thus provide crisply defined conceptual
boundaries, relative to the perspective of the viewer” (Booch (1991)).
A software abstraction has two levels of detail (Krueger (1992)). The higher
of the two levels is known as the abstraction specification. The lower
and more detailed level is called the abstraction realization or implementation.
The success of the use of abstractions in a software reuse technique can be
assessed in terms of the intellectual effort necessary to apply them. In other
words, an abstraction is better if less intellectual effort is required from the user.
Krueger also introduced the concept of cognitive distance as a qualitative metric.
Cognitive distance is defined as “the amount of intellectual effort that must
be expended by software developers in order to take a software system from one
stage of development to another” (Krueger (1992)). For the creator of a soft-
ware reuse technique, the goal is to minimize cognitive distance. An interesting
fact highlighted by Krueger is that when using a reuse technique the cognitive
distance is minimised in two ways: (1) using higher-level abstractions and (2)
using automated processes (to go from abstractions in the reuse technique to
executable implementations). Domain-specific abstractions help reduce the cog-
nitive distance.
3.4 System family engineering
Organizations constantly develop very similar software systems within a given
domain taking into account variations to meet different requirements. Rather
than developing each new system variant from scratch, substantial benefits are
obtained when reusing assets from previous systems to develop new ones (Jacob-
son et al. (1997), Kim & Stohr (1998), Sommerville (2004)). Furthermore, the
key goal is not just the reuse of assets from previous systems but also to take
advantage of the acquired knowledge when building subsequent systems in the
same domain. The idea is to correct a common problem in software engineering:
presenting a particular software solution without a systematic explanation
describing how the concrete solution was found. “The principles, methods, and
skills required to develop reusable software cannot be learned effectively by gen-
eralities and platitudes. Instead, developers must learn concrete technical skills
and gain hands-on experience by developing and applying reusable software com-
ponents and frameworks in their daily professional practice” (Schmidt (1999)).
Capturing and reusing this knowledge potentially makes maintenance and evo-
lution of systems more straightforward and less costly. One way to do this is
to shift from the construction of single systems to families of systems.
System family engineering seeks to exploit the commonalities among systems
from a given problem domain while managing the variabilities among them in
a systematic way (Czarnecki et al. (2004)). New variants can be rapidly devel-
oped based on a set of reusable artefacts such as models, components, common
architecture, etc. These results have led system family approaches to be con-
sidered an effective support for reuse and have caused them to be adopted by a
wide variety of organizations. System family engineering has proven to substan-
tially decrease costs and time-to-market, and to increase the quality of software
products (Sinnema et al. (2004)).
3.4.1 System Families and Product Lines
The terms product line and system family are closely related. Although some
people treat them as synonymous, the author prefers to note the differences:
• A system family (or product family) denotes a set of systems (or prod-
ucts) that share enough common properties to be built from a common set
of assets. (Withey (1996), Czarnecki & Eisenecker (2000), Bosch (2001)).
The members of the system family share a common generic architecture
(family architecture).
• A product line is a group of products sharing a common managed set
of features that satisfy the specific needs of a selected market (Withey
(1996)). From the perspective of the user, the products can be seen as
alternatives as they can be applicable in different but related contexts or
they can complement each other (Stahl & Volter (2006)).
System families are mainly scoped based on technical commonalities between
their members. Products of a product line are scoped to satisfy a selected market
and “do not necessarily share any technical commonalities” (Stahl & Volter (2006)).
Hence, system family-based approaches can be seen as technology-oriented and
product line-based approaches as market-oriented (Jaring (2005)). The features
in two products might require totally different technical solutions, and a product
line may include one or more system families. In addition, a system family could
be reused in more than one product line. However, it should be noted that
generally, if not always, system families also focus on the needs of one or more
markets; this virtually makes all system families market-oriented as well (Jaring
(2005)). Despite the differences between system families and product lines, in
both cases systematic support for commonality and variability is necessary.
This thesis focuses on software engineering concepts and technology that apply
throughout the lifecycle, and the term system family is therefore preferred.
3.4.2 On the Concept of Variability in Software System
Families
One of the basic principles of system families is to delay design decisions related
to offered functionality and quality to later phases of the life cycle (Svahnberg
et al. (2005)). Instead of deciding in advance what system to develop, a set of
assets and a common system family (reference) architecture are specified and im-
plemented during the Domain Engineering process. Later on, during Application
Engineering, specific systems are developed to satisfy the requirements by reusing
the assets and architecture. It is during Application Engineering that the
delayed design decisions are resolved. The realization of this delay relies heavily on
the use of variability in the development of system families.
3.4 System family engineering
Variability is defined in (Svahnberg et al. (2005)) as “the ability of a software
system or artefact to be efficiently extended, changed, customized or configured
for use in a particular context.”
Variability is expressed in the form of variation points. A variation point
denotes a particular location in a software-based system where choices are made
to express the selected variant (Svahnberg et al. (2005)). Eventually, one of the
variants is chosen and implemented. The time at which this happens is called
the binding time. Bachmann & Bass (Bachmann & Bass (2001))
defined three kinds of variations:
• A variation can be optional . This is the case when a specific functionality
is offered by one product of the family but is not offered by other products.
• A variation can be an alternative (an instance out of several alternatives).
This means that the architecture provides a placeholder in which one of
several alternatives can be inserted.
• A variation can be a set of alternatives . In this case the architecture
provides a placeholder in which multiple instances of different alternatives
can be inserted.
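The three kinds of variation can be made concrete with a small executable sketch (in Python, purely for illustration; the class and variant names are hypothetical). Each variation point checks a selection of variants according to its kind:

```python
# Illustrative sketch of the three kinds of variation (hypothetical names).
class VariationPoint:
    """A location in the architecture where a choice among variants is made."""

    def __init__(self, name, kind, variants):
        assert kind in ("optional", "alternative", "set")
        self.name, self.kind, self.variants = name, kind, set(variants)

    def is_valid(self, selection):
        selection = set(selection)
        if not selection <= self.variants:   # only known variants may be chosen
            return False
        if self.kind == "optional":          # the variant may or may not be present
            return len(selection) <= 1
        if self.kind == "alternative":       # exactly one variant fills the placeholder
            return len(selection) == 1
        return len(selection) >= 1           # "set": one or more alternatives inserted

# One example variation point per kind of variation:
log_vp = VariationPoint("logging", "optional", {"file_logger"})
transport_vp = VariationPoint("transport", "alternative", {"tcp", "udp"})
codec_vp = VariationPoint("codecs", "set", {"xml", "json", "binary"})

print(log_vp.is_valid([]))                  # True: optional feature omitted
print(transport_vp.is_valid(["tcp"]))       # True: exactly one alternative chosen
print(codec_vp.is_valid(["xml", "json"]))   # True: multiple alternatives inserted
```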
Many different mechanisms to implement or realize variability have been de-
veloped. Solutions include compiler directives, inheritance, mixins, dynamic link-
ing, parametrization, environment variables, plug-ins, aspect-oriented program-
ming, and numerous others. A good overview and understanding of the different
variability mechanisms available, as well as their pros and cons, helps in making
informed decisions during software development (Svahnberg et al. (2005)).
Jacobson et al. (Jacobson et al. (1997)) propose a classification of variability
mechanisms: inheritance, extensions, uses, configuration, parameters, template
instantiation, and generation. Another classification presented in (Bachmann
& Bass (2001)) is generators, configuration management systems, compilation,
adaptation during start-up, and adaptation during execution. More recently, an
extension of these classifications has been proposed by Svahnberg, Gurp & Bosch
(Svahnberg et al. (2005)). Table 3.1 summarizes the variability realization tech-
niques and characteristics proposed by Svahnberg et al.
Table 3.1: Summary of the variability realization techniques and characteristics (as proposed by Svahnberg et al.)
The categories generators and Infrastructure-Centered Architecture shown in
Table 3.1 are relevant mechanisms for the present research. Generators are cov-
ered in Section 3.6. Infrastructure-Centered Architecture makes the connections
between components first-class entities, which helps avoid hard-coding the re-
quired interfaces of components. Dynamic reorganization of the architecture
during runtime is therefore achievable using this mechanism. As will be pre-
sented later, the reflective middleware platforms at Lancaster offer a medium to
implement this mechanism.
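The idea of connections as first-class entities can be sketched as follows (a minimal illustration, not the actual API of the Lancaster platforms; all names are hypothetical). Because a binding is an explicit object rather than a hard-coded reference, it can be removed and re-created at runtime to reorganize the architecture:

```python
# Hypothetical sketch of an infrastructure-centered architecture with
# first-class bindings between components.
class Component:
    def __init__(self, name):
        self.name = name

class Binding:
    """A first-class connection between a client's required interface and a server."""
    def __init__(self, client, server, interface):
        self.client, self.server, self.interface = client, server, interface

class Architecture:
    def __init__(self):
        self.bindings = []

    def bind(self, client, server, interface):
        binding = Binding(client, server, interface)
        self.bindings.append(binding)
        return binding

    def unbind(self, binding):
        self.bindings.remove(binding)

    def server_of(self, client, interface):
        for b in self.bindings:
            if b.client is client and b.interface == interface:
                return b.server
        return None

# Dynamic reorganization at runtime: rebind a client to a new server.
arch = Architecture()
app, tcp, udp = Component("app"), Component("tcp"), Component("udp")
old = arch.bind(app, tcp, "ITransport")
arch.unbind(old)                     # the binding, being first class, can be removed...
arch.bind(app, udp, "ITransport")    # ...and replaced without touching the components
print(arch.server_of(app, "ITransport").name)   # udp
```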
Traditionally, decisions have been deferred to architecture design, detailed de-
sign, implementation, compilation, linking, and deployment (Batory & O’Malley
(1992), Beuche et al. (2004), Czarnecki & Eisenecker (2000), Lee & Muthig (2006),
Ommering (2004), Svahnberg et al. (2005)). Currently, software engineers try to
delay these decisions to the latest possible point in time. An example of the need
for this delay is posed by the emerging class of software systems that must recon-
figure and adapt dynamically at runtime. This makes the term variability, and
specifically dynamic variability (also known as runtime variability), a relevant
notion in this thesis (Bencomo et al. (2008b), Bencomo et al. (2008a)).
An important concern when developing system families is the systematic man-
agement of variability. Rigorous management presupposes the specification and
modelling of customizations, configurations, and possible extensions. A large
number of methods and tools for specification, modelling and support of variabil-
ity have been reported. These usually take features or product family architecture
as the organizing abstraction to represent variability. Domain-specific languages
(DSLs) may also be used to specify variability in software product families. The
next two sections present important concepts related to variability management
and modelling using features and product family architectures. DSLs are widely
covered in Sections 3.6.3 and 3.7.4.
3.4.3 Achieving and Modelling Variability with Features
Feature modelling (Kang et al. (1990), Griss et al. (1998), Kang (1998)) is a
broadly accepted approach to discover and specify commonalities and variabil-
ities during Domain Engineering. Feature modelling can help the process of
understanding the problem space addressed by the development of a software
system in a specific domain. Using features, the specification of the variability of
a system family consists of listing the features that may vary between the mem-
bers. These features are called variant features and need to be implemented
in such a way that the resulting software artefact can easily accommodate the
associated variants (Svahnberg et al. (2005)). In contrast to variability,
commonality denotes features that are part of each member in exactly the same
form (Pohl et al. (2005)).
A variety of feature-based methods and notations have been proposed: RSEB
(Jacobson et al. (1997)), FeatureRSEB (Griss et al. (1998)) and related work
from (Gurp et al. (2001)), and the best known Feature Oriented Domain Analysis
(FODA) (Kang et al. (1990), Kang (1998)) which laid the basis for more recent
research.
However, different methods use somewhat different definitions of a feature.
According to (Kang et al. (1990)), features are characteristics of the domain under
consideration and guide the creation of a set of products that define the domain.
In (Kang et al. (1990)) a feature is formally defined as “an end-user visible char-
acteristic of a system”. Czarnecki & Eisenecker (2000) define a feature as “a
distinguishable characteristic of a concept (e.g. system, component, and so on)
that is relevant to some stakeholder of the concept”. In (Bosch (2000)), a feature
is “a logical unit of behaviour that is specified by a set of functional and qual-
ity requirements”. For Bosch, a feature groups related requirements and “there
should at least be an order of magnitude difference between the number of features
and the number of requirements for a product family member” (Bosch (2000),
Svahnberg et al. (2005)). Thus, features serve as abstractions for requirements.
Features define both commonalities and differences between related systems
in the domain. Features are used to differentiate various members of the sys-
tem family, i.e. variation between the members is usually expressed in terms of
features. Therefore, a system family must support variability for those features
that tend to differ from member to member. Features are also used to describe
the essence of the domain in terms of the mandatory, optional, or variant char-
acteristics of these related systems (Griss et al. (1998)). Mandatory features
correspond to core capabilities representing central parts of the problem level being
modelled. Optional features correspond to primary characteristics which may
be unnecessary in some systems of the domain. Variant features correspond
to alternative ways to configure mandatory or optional features. In order to specify
variant features, it is broadly accepted that features are organized using so-called
feature diagrams or feature trees, defined below.
Features and Feature Models
The feature model should capture the common features (commonality) and dif-
ferences of the members of the family (variability). Both concepts were first
introduced by the FODA method (Kang et al. (1990), Kang (1998)).
A FODA feature model consists of the following four elements (Czarnecki &
Eisenecker (2000)):
i. A Feature diagram that represents hierarchical decompositions of fea-
tures indicating if the features are mandatory, alternative, or optional.
ii. Feature definitions describe all the features indicating the binding time
of each feature (compile time, load time, runtime, or any other time).
iii. Composition rules indicate which feature combinations are valid and
not valid.
iv. Rationale for features that indicates reasons for choosing or rejecting
a given feature.
The fundamental element of a feature model is the feature diagram. An
example of this diagram is shown in Figure 3.1. The feature diagram describes
the decomposition of features into sub-features in a hierarchical way generally
using a tree diagram. The root node represents the concept being described and
the remaining nodes represent sub-features. Each sub-feature below a certain
feature can be specified as mandatory, alternative or optional.
A mandatory feature is a feature that must be part of every member of the
family. In Figure 3.1, for instance, every car has to have the feature “horsepower”,
making “horsepower” a mandatory feature. Alternative features are a set of
features of which every system must include exactly one (the semantics is similar
to the logical XOR operator). For example, in Figure 3.1 the features “manual”
and “automatic” are alternative subfeatures of a car’s “transmission” feature,
since every transmission is either manual or automatic. Optional features are
features a system may or may not have; air conditioning is an example of an
optional feature of a car.
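The semantics of the car feature diagram can be captured in a small executable sketch (hypothetical encoding, in Python): a configuration is valid only if it contains every mandatory feature, exactly one member of each alternative group, and at most the known optional features:

```python
# Minimal, hypothetical encoding of the car feature diagram described above.
mandatory = {"car", "transmission", "horsepower"}
alternative_groups = [{"manual", "automatic"}]   # exactly one must be chosen (XOR)
optional = {"air_conditioning"}

def is_valid_car(config):
    config = set(config)
    if not mandatory <= config:                  # all mandatory features present
        return False
    for group in alternative_groups:             # exactly one per alternative group
        if len(config & group) != 1:
            return False
    known = mandatory | optional | set().union(*alternative_groups)
    return config <= known                       # no unknown features

print(is_valid_car({"car", "transmission", "horsepower", "manual"}))   # True
print(is_valid_car({"car", "transmission", "horsepower"}))             # False: no transmission type
print(is_valid_car({"car", "transmission", "horsepower",
                    "manual", "automatic"}))                           # False: XOR violated
```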
3.4.4 Achieving and Modelling Variability with Architec-
ture
In architecture-based approaches, the system family architecture supports the
structured management of variability. The architecture of the system family acts
Figure 3.1: Example showing features of a car, based on (Kang et al. (1990) and Asikainen et al. (2004a))
as an abstraction to define the high-level structure of the system family. This
high-level structure is eventually mapped onto the source code that implements
the products (Bosch (2000)).
While a “typical” software architecture just describes the structure of a sin-
gle software system, a system family architecture (or product line architecture)
describes the architectural structure for a set of closely-related products (Bosch
(2000)). A system family architecture supports understanding and management
of variabilities and commonalities that exist among individual product architec-
tures of the family.
A number of mechanisms for modelling variability in product family architec-
tures have been proposed. For instance, Koalish (Asikainen Timo & Mannisto
(2004)), xADL (Dashofy et al. (2002)), AcmeStudio (ABLE (2007)), and Arch-
Studio (ISR (2007)) offer variability mechanisms based on Architecture De-
scription Languages (ADLs) to better support the modelling of family sys-
tems and architectural variability. “Architecture description languages (ADLs)
are formal languages that can be used to represent the architecture of a software-
intensive system” (Clements (1996)). ADLs aid architecture-based development
by providing formal notations for the specification and description of software
systems (Medvidovic & Taylor (2000)).
Another mechanism to achieve variability in system family architectures
is component frameworks (Szyperski (2002), Wijnstra (2000), Floch et al.
(2006), Blair et al. (2001)). Section 2.4 presented more information about the
concepts of component frameworks.
Component frameworks ease mass customization (Pohl et al. (2005)) and can
play a strong role in restricting the number of possible configurations of
components. Variation points correspond to the places in the framework
where plug-in components may be added (Wijnstra (2004), Pohl et al. (2005)).
Variants are realized by specific choices or alternatives of plug-ins. To fit, each
plug-in has to obey the rules defined by the framework. However, some authors
consider that the variability offered by only replacing components in fixed hot
spots is not flexible enough (Sora et al. (2005)).
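The role of a component framework as described above can be sketched as follows (all names hypothetical): the framework fixes the variation points (hot spots) and the rule each plug-in must obey, here that a plug-in must provide the interface the hot spot requires, thereby constraining the space of legal configurations:

```python
# Hypothetical sketch of a component framework with plug-in variation points.
class ComponentFramework:
    """A framework whose variation points accept only rule-conforming plug-ins."""

    def __init__(self, hot_spots):
        # hot_spots maps a variation point name to the interface a plug-in must offer
        self.hot_spots = hot_spots
        self.plugged = {}

    def plug(self, spot, plugin):
        if spot not in self.hot_spots:
            raise ValueError("unknown variation point: " + spot)
        required = self.hot_spots[spot]
        if required not in plugin.get("provides", []):   # the rule plug-ins must obey
            raise ValueError("plug-in does not provide " + required)
        self.plugged[spot] = plugin["name"]

framework = ComponentFramework({"discovery": "IDiscovery", "transport": "ITransport"})
framework.plug("discovery", {"name": "SLP", "provides": ["IDiscovery"]})
try:
    # A non-conforming plug-in is rejected by the framework's rules.
    framework.plug("transport", {"name": "Logger", "provides": ["ILog"]})
except ValueError as error:
    print(error)            # plug-in does not provide ITransport
print(framework.plugged)    # {'discovery': 'SLP'}
```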
3.4.5 Features vs. Architecture
The difference between feature-based and architecture-based models is notable
as they serve different levels of abstraction. Feature-based models describe the
systems mainly from a higher level of abstraction, as seen from the customers' or
end users' viewpoints, while architecture-based models could be seen as providing
a technical view (Asikainen et al. (2004b)). While customers and end users
typically think in terms of product features, developers rather focus
on architectural elements (i.e. components, interfaces, bindings, etc.) (Dhungana
(2006)).
Existing tools often support just one of these levels of abstraction (Dhungana
et al. (2007b)). The gap between the levels of abstraction followed by these
two approaches results in poor traceability between them. A typical example
of a traceability problem is that the external variability, i.e. that seen by
customers and users, is often weakly integrated with the internal variability handled
by the engineers (Dhungana (2006)). Research efforts have been made to bridge
the gap between the two levels of abstraction offered by these two approaches.
Grunbacher et al. (Grunbacher et al. (2004)) present a lightweight approach in-
tended to provide a systematic way of reconciling requirements and architectures
using intermediate models. Lago et al. (Lago et al. (2004)) present an extensive
mapping from conceptual features to architectural components.
As the goals of this research work are related to the support of the development
of middleware platforms, it is the architecture-based approach that will be used
to achieve and manage variability.
3.5 Orthogonal Variability Models
The specification of software variability can either be integrated in software de-
velopment artefacts or in a separate model. As discussed above, some research
contributions integrate the variability in traditional software development models
such as feature models, class diagrams, and use case models (Pohl et al. (2005)).
Doing so presents some disadvantages: (i) keeping information consistent
across different models is a hard task, (ii) it is not easy to trace the influence
of variability in requirements on variability in subsequent phases of the software
life cycle, (iii) software development artefacts are overloaded with variability
information, and (iv) the concepts used to define variability differ across software
development artefacts, which impedes achieving a consistent and global picture
of the software variability (Pohl et al. (2005)). To overcome
those disadvantages the notion of first-class variability modelling was introduced
by Bachmann et al. as a means to separate variability from the affected artefacts
(Bachmann et al. (2003)). Based on those original ideas, Pohl et al. provide an
in-depth study of orthogonal variability models (OVMs). “[An OVM] relates the
variability defined to other software development models such as feature models,
use case models, design models, component models, and test models.” (Pohl et al.
(2005)) The use of orthogonal variability modelling promotes a separation of
concerns as the variability models provide a cross-sectional view of the variability
across other development artefacts.
The meta-model of the basic modelling elements of the OVM is shown in
Figure 3.2 using UML 2 notation.
Figure 3.2: Variation point, variant, and the variability dependency meta-model, from (Pohl et al. (2005))
A variability dependency is the association class of an association between the
variation point and the variant classes. This association indicates that a variation
point offers some given variant. The multiplicities of the association obey the
following rules:
- Every variant must be associated with at least one variation point.
- Every variation point must be associated with at least one variant.
- A variant can be associated with different variation points.
- A variation point can offer more than one variant.
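These multiplicity rules can be expressed as a small well-formedness check over an OVM (a hypothetical encoding, where the variability dependencies are a set of variation-point/variant pairs): the model is well formed only if every variant reaches at least one variation point and vice versa, while many-to-many associations remain allowed:

```python
# Hypothetical encoding of the OVM multiplicity rules listed above.
def ovm_well_formed(variation_points, variants, offers):
    """offers is a set of (variation_point, variant) pairs (variability dependencies)."""
    vps_used = {vp for vp, _ in offers}
    variants_used = {v for _, v in offers}
    return (variants <= variants_used            # every variant at >= 1 variation point
            and variation_points <= vps_used)    # every variation point offers >= 1 variant

vps = {"Language", "Payment"}
vs = {"English", "German", "CreditCard"}
offers = {("Language", "English"), ("Language", "German"),
          ("Payment", "CreditCard")}             # a variant may appear at several VPs

print(ovm_well_formed(vps, vs, offers))                # True
print(ovm_well_formed(vps, vs | {"Spanish"}, offers))  # False: 'Spanish' offered nowhere
```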
Modelling variability not only implies modelling variation points, variants,
and their relationships. Developers also have to associate the variability specified
in the variability model with software artefacts that are specified in other models,
text-based documents, and code. These associations are defined using traceability
links between the variability model and the other development artefacts, see
Figure 3.3.
To enable the association of variability definitions with other software devel-
opment artefacts, the variability meta-model shown above is extended to include
the new relationship “artefact dependency”.
Figure 3.3: Relating variants and development artefacts, from (Pohl et al. (2005))
The extended meta-model is depicted in Figure 3.4 and contains the class
“development artefact”. Specific development artefacts are sub-classes of this
class. The “realised by” association relates the “variant” class with the newly
introduced development artefact class. The artefact dependency is realised as an
association class. The multiplicities of the association obey the following rules:
- A variant must be related to at least one development artefact (1..n).
- A development artefact can be related to one or several variants, but need
not be related to any (0..n).
Figure 3.4: Relating variants and development artefacts, from (Pohl et al. (2005))
The visual notation used in OVM is shown in Figure 3.5. A variation point
that describes “what does vary” is represented by a triangle. The variants that
describe “how does it vary” represent choices of functionality and quality with
regard to a specific variation point (Petersen et al. (2006)). Figure 3.6 shows the
variation point “Language” and three variants “English”, “German”, and “Span-
ish”. The continuous line connecting the variation point “Language” and the
variant “English” indicates that the selection is mandatory, while the dotted line
indicates that the variants “German” and “Spanish” are optional. The semicircle
over the dotted lines indicates that one of the two variants has to be selected.
Figure 3.5: Graphical notation for orthogonal variability models
Figure 3.6: Graphical notation for orthogonal variability models, from (Petersenet al. (2006))
Figure 3.7 shows the documentation of variability for a component diagram.
The example is associated with the components of a ‘window management system’
in houses (Pohl et al. (2005)). According to the figure, the control lock of a
window can be manual or electronic. If the control lock is electronic, then
keypad-based authentication may be required, as expressed by the requires_v_vp
association.
The example of Figure 3.7 shows that variants and variation points may have
constraints. For example, requires_v_vp indicates that the selection of a variant
V1 requires the consideration of a variation point VP2. Similarly, the constraint
excludes_v_vp indicates that the selection of a variant V1 excludes the consider-
ation of a variation point VP2. These constraints are frequently needed when
specifying variability.
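A sketch of how such constraints could be checked for a given selection (hypothetical encoding; each requires_v_vp and excludes_v_vp constraint is represented as a variant/variation-point pair):

```python
# Hypothetical check of requires_v_vp / excludes_v_vp constraints.
def selection_respects_constraints(selected_variants, considered_vps,
                                   requires_v_vp=(), excludes_v_vp=()):
    """selected_variants: chosen variants; considered_vps: variation points bound."""
    for v, vp in requires_v_vp:    # selecting v requires considering vp
        if v in selected_variants and vp not in considered_vps:
            return False
    for v, vp in excludes_v_vp:    # selecting v excludes considering vp
        if v in selected_variants and vp in considered_vps:
            return False
    return True

# Window-management example described above: selecting the electronic control
# lock requires consideration of the (hypothetical) authentication variation point.
requires = [("electronic", "authentication")]
print(selection_respects_constraints({"electronic"}, {"authentication"},
                                     requires_v_vp=requires))   # True
print(selection_respects_constraints({"electronic"}, set(),
                                     requires_v_vp=requires))   # False
```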
Figure 3.7 is relevant to this thesis as it shows how OVMs can be used to
document the variability associated with components. OVMs are proposed by
the author as the means to document the variability of whole sets of components.
Further details about the OVM specification can be found in (Pohl et al. (2005)).
Figure 3.7: Relating variants and component diagrams, from (Pohl et al. (2005))
3.6 Generative Software Development
Generative software development is a system family approach which focuses on
automating the creation of system family members. In his definitive survey of
reuse (Krueger (1992)), Krueger identifies application generators (what is today
encompassed by Generative software development) as an important reuse cat-
egory (Mernik et al. (2005)). Using generative software development, a highly
customized and optimized intermediate artefact or end system can be automat-
ically generated from a specification written in one or more textual or graphical
specification languages (Czarnecki & Eisenecker (2000), Czarnecki et al. (2004)).
The family members to be automatically generated range from procedures or
classes to full systems or subsystems.
A key concept in generative software development is that of a mapping be-
tween problem space and solution space; see Figure 3.8. The three elements shown
in the figure compose the generative domain model (Czarnecki & Eisenecker
(2000), Czarnecki et al. (2004)). The problem space is a set of domain-specific
abstractions that can be used to specify the desired system-family members.
The solution space, by contrast, consists of implementation-oriented abstrac-
tions which can be instantiated to create implementations of the specifications
expressed using the domain-specific abstractions from the problem space. The
mapping transforms elements from the problem space to elements that belong
to the solution space. The mapping should use constraint rules and default set-
tings to ensure legal solutions. Generators synthesize artefacts according to the
mapping between the problem space and the solution space. Generators are also
known as transformation engines (Schmidt (2006)).
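In miniature, a generator realizes the mapping as a transformation from problem-space abstractions to solution-space artefacts, applying default settings and constraint rules to guarantee legal solutions. A hypothetical sketch (the specification keys, defaults, and generated artefact are invented for illustration):

```python
# Hypothetical mapping from a problem-space specification to a
# solution-space configuration artefact.
DEFAULTS = {"buffer_size": 1024}        # default settings ensure complete solutions
LEGAL_TRANSPORTS = {"tcp", "udp"}       # constraint rule over the problem space

def generate(spec):
    """Map a problem-space specification to a (toy) solution-space artefact."""
    config = dict(DEFAULTS)
    config.update(spec)
    if config.get("transport") not in LEGAL_TRANSPORTS:
        raise ValueError("illegal transport: " + str(config.get("transport")))
    # Synthesize the artefact from the mapped, validated configuration.
    return ("transport = {transport}\n"
            "buffer_size = {buffer_size}\n").format(**config)

artefact = generate({"transport": "udp"})
print(artefact)
```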
Generative software development includes two complete development cycles
(see Figure 3.9): one for designing and implementing a generative domain model
(development for reuse) and another for using the generative model to create spe-
cific systems (development with reuse) (Czarnecki & Eisenecker (2000)). These
processes correspond to the Domain Engineering and Application Engineering
processes defined in Section 3.2. The outcome of Domain Engineering is reused
during Application Engineering, i.e. the reusable assets developed during the
Domain Engineering are used when producing a concrete member application of
Figure 3.8: Key concepts in generative software development: problem space, solution space, and the mapping between them (from Czarnecki & Eisenecker (2000))
the system family. This process, which involves two development cycles, is clearly
different from conventional software development processes for a single system.
Figure 3.9: Domain Engineering and Application Engineering (Two Life Cycles) (from Czarnecki & Eisenecker (2000) and SEI (1997))
3.6.1 Domain Engineering
Domain Engineering encompasses the activities Domain Analysis, Domain De-
sign, and Domain Implementation, see Figure 3.9.
“The result of Domain Analysis is a domain that supports the description of
similar member systems.” (Neighbors (1998)). The main purpose of this activity
is domain scoping to define a set of reusable assets and configurable requirements
for the systems in the domain. “This activity is considered the critical first step in
a reuse plan, because it captures reuse opportunities at the requirements level and
can institutionalize reuse in the early activities of system developments.” (Stropky
& Laforme (1995)).
Domain Design takes the results of Domain Analysis to identify and gen-
eralize solutions for those common requirements to construct a Domain-Specific
Software Architecture (Stropky & Laforme (1995)). Domain Design is also some-
times known as software architecture development (SEI (1997)). This activity
focuses on the problem space, and not only on a specific requirement, to design
a solution (i.e. solution space). Domain Design also devises a production plan
in terms of the defined architecture (Czarnecki & Eisenecker (2000)). This plan
describes how concrete systems will be developed from the common architecture
and the components. The plan includes the description of the process of assem-
bling components. Three automation levels of the assembly process have been
identified (Cohen (1999), Czarnecki & Eisenecker (2000)) and embody the focus
of Generative software development:
• Manual assembly : The architecture and the components are accompanied
by a developer’s guide. The systems are assembled manually.
• Automated assembly support : Several tools support the assembly of com-
ponents: component browsers, search tools, and generators for selected parts
of the system development.
• Automated assembly : This is where a set of tools supports the process of
requesting a system and the (semi-)complete system can be generated. The
system is said to be a semi-complete system if there are parts that require
a bespoke development. The goal of generative software development is to
accomplish this level of assembly.
Finally, Domain Implementation includes the implementation of reusable
assets (such as components), the specification means (such as domain-specific
languages), and generators (to realize the mappings).
3.6.2 Application Engineering
Application Engineering is also known as product development (Czarnecki (2004))
and allows the construction of concrete applications from the reusable assets de-
veloped during Domain Engineering. In the same way as traditional software
engineering, Application Engineering starts with requirements elicitation and
analysis. However, the requirements are derived from the generic requirements
specified during Domain Engineering. New requirements not taken into account
during Domain Engineering involve some custom design. Finally, the application
is totally or partially produced using the generators. The experience from the
insertion of new requirements, the product analysis, and the design should be fed
back to Domain Engineering to improve and extend the existing reusable assets
if necessary.
3.6.3 Domain-Specific Languages
Domain-specific languages (DSLs) are an essential concept in generative software
development. They are used during the specification of family members that
eventually will be mapped or transformed into solution implementations. A DSL
can be seen as a language focused on a particular domain or problem. Accord-
ing to (Biggerstaff (1998)) the major contribution of DSLs is to enable reuse of
software artefacts. DSLs allow the reuse of semantic notions embodied into the
language without requiring detailed domain analysis (Mernik et al. (2005)).
DSLs provide appropriate built-in abstractions and notations and are typically
less expressive than a general-purpose language (GPL). For instance, UNIX shells can
be considered as DSLs whose domain abstractions and notations include streams
(such as stdin and stdout) and operations on streams (such as redirection and
pipe). FORTRAN or Java, by contrast, although conceived to serve an intended
domain, can be considered rather general. “Moreover, DSLs are not necessarily
programming languages: they are languages tailored to express something about
the solution to a problem.” (Wile (2001)). A good example of a DSL is music
notation, a DSL for music which has existed for centuries; it is irrelevant that
nowadays programs that read music can be constructed and executed (Wile
(2001)).
Many definitions for a DSL have been given (Wile & Ramming (1999), Czar-
necki & Eisenecker (2000), France & Rumpe (2003), Greenfield et al. (2004), and
Stahl & Volter (2006)). The author finds appropriate for her work the definition
proposed in (Deursen et al. (2000)):
“A domain-specific language is a programming language or executable spec-
ification language that offers, through appropriate notations and abstractions,
expressive power focused on, and usually restricted to, a particular problem do-
main.”
The specification of a DSL should define abstractions of the concepts in the
domain and precisely specify their semantics, constraints, and relationships. A
DSL describes concepts specific to a domain while the solution space contains the
corresponding concepts expressed in a given programming language. A DSL can
be textual or graphical and can offer different levels of specialization. Frequently,
several DSLs may be needed to specify a complete application.
The implementation of a DSL can be carried out using different approaches
(Deursen et al. (2000)):
• As a new textual language using traditional compiler and interpreter tools
and concepts. The major advantage of building a compiler or interpreter is
that the implementation is fully tailored towards the DSL and no compro-
mises are necessary regarding notation and semantics. On the other hand,
a clear disadvantage is the cost of building such a tool from scratch, and
the lack of reuse from other (DSL) implementations.
• As embedded languages or domain-specific libraries. In this approach, ex-
isting mechanisms such as definitions for functions or operators with user-
defined syntax are used to create a library of domain-specific operations.
The syntax of the base language is used to express the concepts of the do-
main (Deursen et al. (2000)). A clear advantage of the approach is that
the compiler or interpreter of the host language is reused. However, the
approach has to limit its expressiveness according to the syntax of the host
language. The result is that the domain-specific notation usually has to be
compromised to fit the limitations of the base language. Examples include
template meta-programming in C++ and Template Haskell (Czarnecki
et al. (2004)), OpenC++ (Chiba (1995)), and OpenJava (Tatsubori
et al. (2000)).
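As a toy illustration of the embedded-language approach (hypothetical; here the host language is Python), a small pipeline DSL is built from ordinary host-language operators, so the host interpreter and its syntax are reused directly, at the cost of bending the notation to the host's operator set:

```python
# A tiny embedded DSL for stream pipelines, expressed with host-language operators.
class Stage:
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # Reuse host syntax: '|' mimics a shell pipe by composing stages.
        return Stage(lambda items: other.fn(self.fn(items)))

    def run(self, items):
        return list(self.fn(items))

# Domain-specific operations built as ordinary library functions:
grep = lambda pattern: Stage(lambda items: (i for i in items if pattern in i))
upper = Stage(lambda items: (i.upper() for i in items))

pipeline = grep("err") | upper       # reads like the shell pipeline: grep err | upper
print(pipeline.run(["error: disk", "ok", "errno 2"]))   # ['ERROR: DISK', 'ERRNO 2']
```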
Other implementation techniques may be used. For instance, in aspect-
oriented programming (Kiczales et al. (1997)), a DSL can be used to describe
an aspect. An aspect weaver is then used to generate domain-specific code and
merge it with the main code.
Czarnecki (2004) adds to the implementation styles described above the fol-
lowing:
• Wizards and interactive GUIs, like feature-based configurators such as
Pure::Variants (puresystems (2006)), Feature-Modelling Plugin (Antkiewicz &
Czarnecki (2004)), or CaptainFeature (Bednasch et al. (2004)). In this case,
the implementation of the DSL is embedded in a wizard that iteratively asks
the user for configuration confirmation.
• Graphical (visual) languages. Examples are UML profiles (OMG (2006a)),
MetaEdit+ (MetaCase (2007)), GME (GME (2006), Ledeczi et al. (2001)),
or Microsoft’s DSL technology (Greenfield et al. (2004)). Usually, in this
case, the DSLs are considered to be Domain Specific Modelling Languages
(DSMLs) in the scope of the Domain-Specific Modelling (DSM) area (Nord-
strom et al. (1999), Tolvanen et al. (2001)). DSM is a relevant topic for
the present research. Therefore, Section 3.7.4 presents more information on
DSMLs.
3.6.4 Spectrum of DSLs and Variability
The appropriate technology to build a DSL depends on the range of variation
that needs to be supported and the suitability of the technology (Czarnecki et al.
(2004), Tolvanen (2006), Stahl & Volter (2006)). Figure 3.10 shows that the
spectrum ranges from routine configuration using wizards to more sophisticated
visual languages. For example, a textual DSL may be appropriate for expert
users. Wizards and feature-based DSLs base the configuration process on making choices
among known decisions and features. Therefore, they may be better suited for
apprentices or occasional users. Visual languages do not set choices explicitly
but give a virtually infinite space of variation (Tolvanen (2006)). Although the
designer may not know all the variants, the language is designed in such a way
that it constrains the user to make only legal configurations. Note that manual
programming is at the end of the spectrum. Manual programming provides far greater flexibility, but also offers the least guidance and demands the greatest effort (Stahl & Volter (2006)).
Figure 3.10: Spectrum of DSLs, based on Czarnecki (2004) and Stahl & Volter (2006)
3.6.5 Product Configuration
Variability has also been studied in the domain of mechanical and electrical prod-
ucts (Asikainen (2006)). This domain of research is called product configuration.
Product configuration research is a subfield of artificial intelligence (Faltings &
Freuder (1998)). A configurable product allows the adaptation of individual prod-
ucts to the requirements of a particular customer order (Asikainen et al. (2004b)).
Traditionally, configurable products have been non-software products, usually
mechanical and electronic products such as telecommunication switches, comput-
ers, elevators, diesel engines, or vehicles (Asikainen et al. (2007)). However, recently it has also been applied in the area of software systems (Krebs et al. (2002), Asikainen et al. (2003)).
A configurable product usually includes a modular structure: specific prod-
ucts consist of pre-defined components, and selecting different components leads
to different product variants (Soininen & Stumptner (2003)). A specification of a
configuration (i.e. individual product) is produced during the configuration task
done by a configurator. The selection of the combinations of parts (configura-
tions) that satisfy given specifications (requirements) is the fundamental goal of
a configuration task (Sabin & Weigel (1998)).
The basic functionality of a configurator is to support a user or customer in
finding a configuration that matches her specific needs and requirements
(Asikainen et al. (2007)).
Sabin & Weigel (1998) identify several paradigms for undertaking the configuration task: rule-based reasoning, model-based reasoning (using logic-based, resource-based, and constraint-based approaches), and case-based reasoning. According to Sabin & Weigel (1998), “[c]onfiguration is a generative process. A solution is the result of a search process within the space of all possible combinations of objects”.
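The view of configuration as a search within the space of combinations can be sketched with a brute-force, constraint-based filter. The component catalogue and the compatibility rule below are invented purely for illustration:

```python
from itertools import product

# Invented catalogue of pre-defined components for a configurable product.
CATALOGUE = {
    "engine": ["diesel", "electric"],
    "gearbox": ["manual", "automatic"],
}

def compatible(config):
    """Invented compatibility rule: electric engines need automatic gearboxes."""
    return not (config["engine"] == "electric" and config["gearbox"] == "manual")

def configure(requirements):
    """The configuration task as a search within the space of all possible
    combinations of parts, keeping those that satisfy the requirements."""
    keys = list(CATALOGUE)
    candidates = (dict(zip(keys, combo)) for combo in product(*CATALOGUE.values()))
    return [c for c in candidates
            if compatible(c) and all(c[k] == v for k, v in requirements.items())]

solutions = configure({"engine": "electric"})
# Only the automatic-gearbox variant survives the compatibility rule.
```

Real configurators avoid this exhaustive enumeration by using the reasoning paradigms listed above, but the goal of the search is the same.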
At first sight, product configuration and generative software development ap-
pear to have the same goals. However, in spite of the similarity of the goals, there
are fundamental differences. Different from generative software development, the
main focus of product configuration is the management of complexity in finding
a valid configuration conforming to the requirements of the customer (Asikainen
et al. (2004b)). In many cases the process of finding the valid configuration(s)
should satisfy optimization criteria (Sabin & Weigel (1998)) very common in
Artificial Intelligence research areas.
3.7 Model Driven Engineering
Humans use models all the time, usually unconsciously. In software engineering, the creation and use of models is an explicit activity whose goal is to produce software artefacts systematically and automatically, and to support communication and understanding between stakeholders. While software developers create concrete models, researchers in software engineering build notations and methods for developing such concrete models (Ludewig (2003)). This focus has given rise to a number of related terms: Model-Driven Architecture, Model Driven Development, Model-Based Engineering, Model-Driven Engineering, and Model-Centric Software Development, among others. The field still awaits a consolidation of these terms into a smaller set of canonical approaches. In the absence of any consensus at the present time, the author considers Model Driven Engineering (MDE) to be the most universal term, and it will therefore be used in this thesis. MDE refers
to the systematic use of models as first-class entities or artefacts throughout the
software lifecycle (Kent (2002), Schmidt (2006), France & Rumpe (2007)).
One of the most comprehensive and frequently cited works on models was
written by Stachowiak (Stachowiak (1973)). In his work a “model” is regarded
as follows 1:
• “A model has a purpose.
• A model describes some entity that exists or is intended to exist in the future.
• A model is an abstraction, that is, it does not describe details of the entity
that are not of interest to the audience of the model.”
The entity described by a model may be just another model; therefore we can
have chains of models. For example, we can develop design models of the code,
1. The translation of the original German text was taken from (France & Rumpe (2003)).
while the code is a model of the computation performed by the computer when
the code is executed (Ludewig (2003)). Chaining models is the basis for refining
models from higher levels of abstraction down to the level of implementation
details. This is a fundamental concept in both MDE and generative software
development.
3.7.1 On the Importance of Modelling in Software Engi-
neering
Models are better than code for answering the high-level, abstract questions posed by different stakeholders (Czarnecki (2004)). Models can capture the intentions of stakeholders more precisely, omit implementation-oriented detail, and are more amenable to analysis. “Models help in developing artefacts
by providing information about the consequences of building those artefacts before
they are actually made” (Ludewig (2003)). A classic example is the exploratory
model that describes the cost estimation of a software project. However, models
are not just for this purpose. Models can be used to capture knowledge of a
specific domain or problem; such models are useful when an expert leaves the
organization and at least part of her/his knowledge has been captured in models.
Models can also be useful to describe processes, structures (design models) or
principles (design patterns).
From the above, the following benefits of MDE can be identified:
• Systematic reuse of development knowledge and experience. Models can
be used to capture knowledge about a specific domain and describe pro-
cesses and principles. How to explicitly and systematically reuse develop-
ment knowledge and experience has been extensively investigated (Krueger
(1992), Arango et al. (1993), GAO (1993), Griss (1993), Jacobson et al.
(1997), Kim & Stohr (1998)). The potential contribution of MDE in this
area is valuable as the current state-of-the-art can be greatly improved
through the systematic application of model-driven technologies. Software
reuse is particularly relevant to system family engineering and product lines
as described in Section 3.4.
• Higher levels of abstraction. MDE allows the level of abstraction at which
developers operate to be raised, with the goal of both simplifying and for-
malizing the activities and tasks of the software life cycle (Hailpern & Tarr
(2006)). According to (Selic (2003)), the most important characteristic
that an engineering model must possess is abstraction. Higher levels of
abstraction reduce both effort and complexity when developing software
products. Certainly, there should always be a trade-off between simplifica-
tion (by raising the abstraction levels) and oversimplification (where there
might not be enough details for a real purpose). The ability to express so-
lutions at higher levels of abstraction is not new and has a long history.
It comes from the introduction of assembly language as an abstraction over
machine code, followed by the introduction of third-generation languages
such as FORTRAN and COBOL that enabled developers to ignore memory
allocations and other low level instructions, and object-oriented languages
which introduced additional abstractions. MDE adheres to the tradition
and extends it by bringing in model abstractions at the various phases of
the software life cycle. In this sense, the so-called transformation of models
between these phases is comparable to compilation. The above sets the
basis for automatic transformation of high-level abstraction-based models
into running systems. This results in the following benefit.
• Generators and transformation engines. These engines access and ana-
lyze models to automatically synthesize different implementation and de-
ployment artefacts, such as source code and deployment descriptions or
even other kinds of model representations (Czarnecki & Eisenecker (2000),
Czarnecki (2004), Schmidt (2006)). Generating software artefacts from
models helps enforce consistency between implementations and functional
and non-functional requirements. The automated transformation process
thus facilitates building systems that are correct-by-construction (Suenbuel
(2003), Baleani et al. (2005), Schmidt (2006)) rather than constructed-
by-correction, as in cases when compilers and script validators are used
(Gokhale et al. (2005)). More information on generators is found in Section
3.6.
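As a hedged sketch of what such a generator does (a model-to-text transformation; the model structure and the template are invented for this example, not any particular tool's API):

```python
# Invented high-level model: components and the operations they provide.
model = {
    "components": [
        {"name": "Buffer", "provides": ["push", "pop"]},
        {"name": "Codec", "provides": ["encode"]},
    ]
}

def generate(model):
    """Model-to-text transformation: synthesize class skeletons from the
    model, so the implementation stays consistent with what was modelled."""
    out = []
    for comp in model["components"]:
        out.append(f"class {comp['name']}:")
        for op in comp["provides"]:
            out.append(f"    def {op}(self): raise NotImplementedError")
    return "\n".join(out)

code = generate(model)
generated = {}
exec(code, generated)   # the generated artefact is itself runnable code
```

Because every skeleton is derived mechanically from the model, a change to the model regenerates consistent implementation artefacts, which is the correct-by-construction argument made above.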
3.7.2 Learning from the Past and Heading to the Future
Since the early days of software development, there has been an appreciation of automated tools to help the software developer. Using concepts such as structured or object-oriented analysis and design, a key idea was to draw a diagram that represented an abstraction of the system to be developed, and to use that diagram to help generate or implement that system. This was the goal of Computer Aided
Software Engineering (CASE) which was promoted by many vendors during the
1980s. In (SEI (2006)), CASE is defined as “the use of computer-based support
in the software development” and a CASE tool is defined as “a computer-based
product aimed at supporting one or more software engineering activities within
a software development process”. CASE focused on the development of soft-
ware methods and tools to enable developers to specify their designs in terms of
general-purpose graphical representations mainly related to programming, such
as structure diagrams, state machines, and dataflow diagrams (Schmidt (2006)).
Although CASE attracted considerable research interest, it was not a success in
practice. When analyzing the failure of CASE, three main reasons can be high-
lighted (Greenfield et al. (2004), Schmidt (2006)). First, the general-purpose
visual representations used in CASE tools mapped inadequately on to the un-
derlying platforms. No domain-specific terminology was used so CASE tools did
not support many application domains effectively. The “one-size-fits-all” visual representations were too generic and non-customizable. Interestingly, the same
criticism is made of the Unified Modelling Language (UML). Second, the amount
and complexity of code generated to bridge the abstraction gap between the
models and the execution platform was beyond the capability of the technologies
available at the time. Any mistakes in the generated code would tend to be cor-
rected not by fixing the generator but by fixing the generated code, thus breaking
the association between the model and the solution and making the whole process
inoperative (Greenfield et al. (2004), Schmidt (2006)). Third, CASE tools were aimed at proprietary execution environments, which made it difficult to integrate solutions across different platform technologies.
Modern research in MDE seeks to avoid the problems that caused CASE to
fail. “MDE researchers are (knowingly or unknowingly) building on the experience
and work of early CASE researchers” (France & Rumpe (2007)).
The phrase “model driven” has been considered by some to be redundant (France & Rumpe (2007)), on the grounds that the activity of modelling is already implicit in software engineering. While the author agrees, it is also true that current practice does not take advantage of the formal and effective management of models to guide the long life cycle of software development. The current emphasis on the phrase “model-driven” in research stresses the importance of modelling at higher levels of abstraction, in comparison with just coding. Eventually, when better modelling practices are widely used and the perception that models are primarily documentation artefacts during development fades, the phrase “model-driven” will indeed become redundant (France & Rumpe (2007)).
One of the best known MDE initiatives is the Object Management Group’s
(OMG) Model Driven Architecture (MDA), which is a registered trademark of
OMG. Of particular interest for this thesis is the initiative proposed by the Do-
main Specific Modelling (DSM) community (Nordstrom et al. (1999), Tolvanen
et al. (2001)). These are described in the following sections 3.7.3 and 3.7.4 respec-
tively. It should be noted that MDA and DSM can be seen as two branches of MDE. Others include Model Integrated Computing (MIC) (Sztipanovits & Karsai (1997), Karsai et al. (1998)) and Model-Driven Software Development (MDSD) (Stahl & Volter (2006)). Model-Driven Development (MDD), which is also an OMG trademark, can be seen as yet another branch of MDE. However, the next sections focus only on MDA and DSM.
3.7.3 Model Driven Architecture
The concept of Model Driven Architecture (MDA) was first proposed by the
Object Management Group (OMG) in late 2000. The primary goal is to allow
the specification of applications independently of specific implementation plat-
forms such as middleware and programming languages. MDA applies the basic
principle of separation of concerns by separating the specification of the sys-
tem functionality from its specification on a specific platform. The former is
defined as a Platform Independent Model (PIM), the latter as Platform Spe-
cific Model (PSM). The OMG defines a platform as “a set of subsystems and
technologies that provide a coherent set of functionality through interfaces and
specified usage patterns”. Concrete examples of platforms are vendor-specific im-
plementations of middleware technologies such as Borland’s VisiBroker, IBM’s
WebSphere, and Microsoft’s .NET or technology specific component frameworks
such as CCM/CORBA or EJB/J2EE.
The mapping from PIM to PSMs is performed using transformation rules.
Interoperability among applications relying on different platforms can be realized by tools that not only generate PSMs, but also build bridges between them. In
the MDA paradigm, application developers capture integrated, end-to-end views
of entire applications in the form of models. Rather than focusing on a single
custom application, the models may capture the essence of a class of applications
in a particular domain (Gokhale et al. (2004b)).
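A minimal sketch of this rule-based PIM-to-PSM mapping follows. The platform rule sets are invented stand-ins; a real MDA tool would apply QVT-style transformations and standard profile mappings instead:

```python
# Platform-independent model (PIM) of one service, free of middleware detail.
pim = {"service": "OrderService", "operations": ["place", "cancel"]}

# One invented transformation rule set per target platform.
RULES = {
    "corba": lambda m: {"idl_interface": m["service"],
                        "methods": list(m["operations"])},
    "web_services": lambda m: {"wsdl_port": m["service"] + "Port",
                               "soap_actions": list(m["operations"])},
}

def to_psm(pim, platform):
    """Apply the platform's transformation rules to derive a PSM."""
    return RULES[platform](pim)

psm_corba = to_psm(pim, "corba")
psm_ws = to_psm(pim, "web_services")
```

The single PIM yields one PSM per platform, each expressed in that platform's vocabulary, which is precisely the separation that MDA advocates.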
Figure 3.11: OMG’s Model Driven Architecture (MDA): main concepts
Abstraction is key to deal with complexity in software development (Ross
et al. (1975), Booch (1982), Kramer (2007)). The concepts of PIM and PSM are
proposed by MDA to provide different levels of abstraction. Here, it is clear that
the essential concepts of MDA are not new but are an evolution of concepts that
have been around for decades.
MDA is based on the use of several other standards. These are:
• the UML, currently at version 2.0 (OMG (2005b)), to specify models.
• the Meta Object Facility (MOF)(OMG (2006b)), a language for defining
the abstract syntax of modelling languages.
• the Query, View, Transformation (QVT) (Gardner & Griffin (2003)), a
standard for implementing model transformations (e.g., PIM-PSM trans-
formations).
• the XMI (XML Metadata Interchange) (OMG (2005a)) that is intended
to help Unified Modelling Language (UML) development tools to exchange
data models.
MDA and Middleware Platform Heterogeneity
The proliferation of middleware technologies is a contributing factor to the com-
plexity of creating and evolving distributed systems. Developers often find it
hard to identify the right middleware platform for their distributed applications.
Domination of the business logic by technical details that are not relevant to the
functionality of the application hinders analysis and design. These problems have
stimulated investigation of the application of MDA to distributed systems devel-
opment and particularly middleware platforms (Gokhale et al. (2004b), Nechy-
purenko et al. (2004), Caporuscio et al. (2005), Ghosh et al. (2005), Gokhale et al.
(2006), Carroll et al. (2006)). The common goal is to develop distributed appli-
cations with no consideration of the middleware technology upon which they will
be deployed, separating the business logic from the technical details associated
with specific middleware platforms.
In this context, the first step in MDA is to construct a model with a high level
of abstraction that is independent of any middleware platform (the PIM). Within
a PIM, the system is modelled from the viewpoint of how it best supports the
business (Kleppe et al. (2003)). Whether the system is going to be implemented
using CORBA, Java/RMI or Web Services, these middleware technologies should
play no role in a PIM. In the next step, the PIM is transformed into one or more
PSMs. In the specific case of distributed application development, the PIM is
mapped (transformed) to one or more middleware technologies models via OMG
standard mappings such as the UML profile for CORBA (OMG (2002)). This
transformation might be made by an MDA tool that applies a standard mapping
to generate a PSM from the PIM. Depending on the tool, code production will
be partially automatic and partially hand-written. Finally, each PSM and their
middleware-related specifications are mapped (transformed) to code. Because a
PSM fits its technology rather closely, this transformation should be relatively
straightforward (Kleppe et al. (2003)). An example is shown in Figure 3.12.
Figure 3.12: The role of middleware technologies when using OMG’s Model Driven Architecture
3.7.4 Domain Specific Modelling
Domain Specific Modelling (DSM) (Nordstrom et al. (1999), Tolvanen et al.
(2001)) proposes the idea of developing models for a specific domain using a
Domain Specific Modelling Language (DSML) suitable for that domain. DSM
promotes the systematic use of DSMLs to express different facets of information
systems. It frequently involves the systematic use of a visual DSML to represent
different aspects of a system. As a specific kind of DSL, DSMLs can support higher-level abstractions than general-purpose modelling languages, meaning that they require less effort and fewer low-level details to specify a given system. Often the term DSL is used synonymously with DSML (as the distinction may be tacit in context). The author prefers the term DSML because it emphasizes the process of modelling.
DSM often also includes the idea of using generation techniques: automating
the creation of artefacts directly from the models as presented in Section 3.6.
These artefacts can be text, code, or even other intermediate models. With less
manual intervention and reduced need to maintain code and artefacts, the DSM
community claims that productivity can be improved (Greenfield et al. (2004),
Stahl & Volter (2006), Kelly & Tolvanen (2008)). DSM and DSMLs help bridge
the gap between the domain problem and the implementation.
DSM differs from earlier code generation attempts such as the CASE tools of the 80s or the UML tools of the 90s. In general, those earlier tools came with their own fixed generators and modelling languages. In the case of DSM tools, it is more common that a small group of expert developers creates the modelling language and generators within the organization. This brings several benefits: (i) DSMLs
are more likely to match the domain concepts according to the needs of the
organization, (ii) developers will use the concepts and notations with which they
are familiar so they don’t need to learn a new notation, and finally (iii) the
modelling language should evolve more easily in response to changes in the domain
because the needs of just one organization are taken into consideration (Czarnecki
et al. (2004)).
These benefits should be contrasted with the effort of coming up with the
design of a new DSML. Furthermore, there is also the problem posed by the
exchange of models from one organization to another. This is caused by the
so-called DSL-Babel (Christensen (2003), France & Rumpe (2005)), i.e. the po-
tentially large number of DSMLs created and used. Many DSMLs can increase
the problems of interoperability and language evolution, and communication be-
tween different domains. DSMLs are commonly contrasted with general-purpose
modelling languages like the UML (Abouzahra et al. (2005), Estublier & Vega
(2005), Bezivin et al. (2006)). DSMLs are said to be more expressive than UML.
However, the integration of DSMLs will probably be as difficult to accomplish as
the combination of various types of diagrams in a UML model (France & Rumpe
(2007)).
Different parts of the same system might be specified using different DSMLs. In this case, a means to link concepts across the DSMLs is needed, and consistency between concept representations across the languages must be assured.
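One simple way to check that shared concepts stay consistent across two DSMLs is a direct comparison of their representations. The two toy models below, and the idea of keying each model by concept name, are assumptions made purely for this sketch:

```python
def shared_concept_mismatches(model_a, model_b, shared):
    """Report shared concepts whose representations disagree across two
    DSML models (here modelled as plain dictionaries keyed by concept)."""
    return [c for c in shared if model_a.get(c) != model_b.get(c)]

# Invented models from two hypothetical DSMLs describing the same system.
structural_model = {"Buffer": {"capacity": 10}, "Codec": {"rate": 8}}
behaviour_model  = {"Buffer": {"capacity": 10}, "Codec": {"rate": 16}}

issues = shared_concept_mismatches(structural_model, behaviour_model,
                                   shared=["Buffer", "Codec"])
# issues == ["Codec"]: the two languages disagree on this shared concept
```

A real multi-DSML environment would perform such checks against a common metamodel rather than against raw dictionaries, but the consistency obligation is the same.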
In many cases DSM includes the creation of domain-specific generators that
create code and other artefacts directly from models (Kelly & Tolvanen (2000)).
DSMLs can be implemented as graphical (visual) languages. Examples include
MetaEdit+ (MetaCase (2007)), GME (GME (2006)), (Ledeczi et al. (2001)), and
Microsoft’s DSL technology (Greenfield et al. (2004)). At first, implementing this kind of DSML was costly, as the supporting tool had to be developed in addition to the DSMLs and the generators. Nowadays, modern metamodel-based DSM tools are available that allow developers to focus just on the development of the DSMLs and the generators (Bencomo & Blair (2006)). It is important to note that the acceptance of domain-specific modelling approaches often depends on how suitable the graphical notation is, and how good the tool support is (Stahl & Volter (2006)).
3.7.5 Similarities and Synergies between MDA and DSM
As discussed above, both MDA and DSM offer significant benefits. Benefits from
MDA include the use of UML with its useful standard language notations. UML
contrasts with the focus on a specific domain that is offered by DSM. The need
to connect both areas has resulted in research efforts to bridge UML profiles and
DSML approaches (Abouzahra et al. (2005)).
Furthermore, some of the differences between MDA and DSM are beginning
to dwindle, while some synergies are becoming more obvious depending on the
context of the systems to be developed. For example, MDA and DSM concepts map directly onto concepts from generative software development studied in Section 3.6. On the one hand, in MDA terms a transfor-
mation from a PIM to a PSM corresponds to a mapping between the problem
space and the solution space in generative software development (Czarnecki et al.
(2004)). On the other hand, DSM fits directly into the generative domain model
of generative software development shown in Figure 3.8, i.e the problem space
specified with a DSL is transformed (mapped) using the generators to synthe-
size elements of the solution space. In this context, the MDA initiative has so far focused mainly on technology variation (Czarnecki (2004)). Generative software development, however, addresses both technology and problem-domain variability.
In addition, DSM-based tools can be used to design particular UML-based
diagrams for a given domain problem. In other words, the UML tool to be
used can be implemented using a DSM tool. As an example, see Figure 3.13
which shows a class and a sequence UML diagram using MetaEdit+. From those
diagrams, output text (such as source code, documentation, etc.) can be generated.
Figure 3.13: UML diagrams modelled in MetaEdit+
3.8 MDA and Reflective Middleware: tackling
heterogeneity
Applications should ideally be designed without taking into account middleware-
based concepts. As noted in Section 3.7.3, MDA was originally proposed, in
part, for the development of distributed applications with no consideration of
the underlying middleware technologies. In contrast, other solutions have been proposed, such as ReMMoC (Grace (2004)), UniFrame (Siram et al. (2002)), and uMiddle (Nakazawa et al. (2006)). Grace (2004) investigated the problem of
middleware heterogeneity in mobile computing environments and examined how
reflective middleware platforms could overcome the problem that this posed.
MDA and reflective technologies can be seen as different approaches for the
same goal, i.e. to help applications to cope with middleware heterogeneity. On
the one hand, when using the MDA-based philosophy, the design of an applica-
tion is decoupled from the integration with the target middleware. On the other
hand, reflective middleware adapts itself to satisfy the requirements of a given
application. Using its reflective capabilities, dynamic modification of both struc-
ture and behaviour of the middleware platform is supported. Thus, MDA can
be considered as a top-down approach and the use of reflective technologies as
a bottom-up approach to tackle heterogeneity. However, a subtle but important
difference is that MDA has been applied to tackle static heterogeneity (i.e. resolved before deployment), whereas reflective middleware technologies take on the challenge of dynamic heterogeneity, which arises at runtime.
Dynamic middleware heterogeneity is common in adaptive systems like mo-
bile environments or grid computing. Grace (2004) describes a scenario
where different application services, implemented using different types of middle-
ware, are advertised using separate service discovery protocols and are deployed
in different locations. In this kind of scenario, any user will potentially find an
unknown middleware implementation each time she enters a new location. Using
the reflective middleware approach, Grace has developed a solution for mobile
applications to be developed independently of fixed platform types whose prop-
erties are unknown to the application programmer at design time. To do this,
mobile middleware platforms are dynamically reconfigured to interact with differ-
ent middleware types and utilise different service discovery middleware protocols.
The implementation of this approach is a component-based middleware frame-
work, named ReMMoC (Grace et al. (2003b)), that can dynamically adapt its
underlying behaviour between different concrete middleware implementations.
ReMMoC promotes a higher-level abstraction that provides middleware trans-
parency for mobile application developers (Grace et al. (2003a)). ReMMoC is
one of the middleware platforms at Lancaster which focuses on the mobile ap-
plications domain. ReMMoC and the reflective middleware philosophy (briefly
presented in Chapter 2) offer a solution to overcome the problem of dynamic
middleware heterogeneity in mobile computing and grid computing.
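The runtime adaptation described above can be caricatured in a few lines. This is a toy sketch, not ReMMoC's actual component model or API, and the binding classes are invented: a reflective framework exposes its current middleware binding for inspection and allows it to be swapped when a new environment is encountered.

```python
class ReflectiveFramework:
    """Toy component framework with a reflective meta-interface: it can
    inspect and replace its own middleware binding while running."""
    def __init__(self, binding):
        self._binding = binding

    def inspect(self):                    # reflection: expose current structure
        return type(self._binding).__name__

    def reconfigure(self, new_binding):   # adaptation: swap the binding at runtime
        self._binding = new_binding

    def publish(self, msg):
        return self._binding.send(msg)

class SoapBinding:                        # invented stand-ins for concrete
    def send(self, msg):                  # middleware personalities
        return "soap:" + msg

class IiopBinding:
    def send(self, msg):
        return "iiop:" + msg

fw = ReflectiveFramework(SoapBinding())
assert fw.publish("hello") == "soap:hello"
fw.reconfigure(IiopBinding())   # e.g. a new location advertises CORBA services
```

The application code keeps calling `publish` unchanged while the underlying middleware personality is replaced, which is the transparency that the reflective approach provides.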
From the specific example described above, it might be conjectured that the
reflective capabilities of ReMMoC are used as a way to undertake runtime trans-
formations from PIM to PSM using the MDA philosophy. In contrast to reflective
middleware, MDA is unable to tackle the problem of heterogeneity of dynamic
changes to support interoperability of intrinsically adaptive systems. Therefore,
traditional model-driven approaches based on the MDA philosophy are not suit-
able to pursue the goals of this research.
3.9 Summary
This chapter has illustrated the key roles of software reuse and abstraction when
developing similar applications within a given problem domain. Systematic soft-
ware reuse requires existing assets to be reused. Such software assets are de-
veloped during Domain Engineering. During Application Engineering, different
variants of the application can be rapidly developed based on these reusable
assets. The author has presented how the areas of system family engineering,
generative software development, and model driven engineering offer different
yet complementary practices to steer the Domain Engineering and Application
Engineering processes:
(i) System family engineering proposes techniques for the structured manage-
ment of variability. Specifically, the architecture of a system family acts as an
abstraction to describe the high-level structure for a set of closely-related appli-
cations.
(ii) Using model driven engineering techniques, models can be used to cap-
ture knowledge of a given problem domain. Model-driven techniques also allow
the level of abstraction at which developers operate to be raised. Furthermore,
domain-specific modelling allows the use of abstractions purposely tailored to that
domain. The use of these domain-specific abstractions means that less intellec-
tual effort is required from the software developer, therefore reducing complexity
during development. Finally,
(iii) The generative software development approach can build on (i) and (ii) to make Application Engineering more efficient.
The author argues that the development of reflective and adaptive middleware
platforms and their applications can exploit each of the above. The next chapter
of this thesis examines how system family engineering, generative software devel-
opment, and model driven engineering can be integrated in a single approach to
overcome the issues identified in the Introduction.
Chapter 4. Genie: Modelling
and Generating Middleware
Families
“Happy families are all alike; every unhappy family is unhappy in its own way.”
- opening line from Anna Karenina (1877) by Leo Tolstoy (1828 - 1910)
4.1 Overview of the Chapter
Chapters 1 and 2 of this thesis presented the problem faced by the first gener-
ation of middleware platforms. These platforms generally cannot cope with the
dynamic fluctuations of properties in the environment as found in, for example,
emerging ubiquitous and mobile applications. Chapter 2 also showed how a new
generation of reconfigurable and flexible middleware platforms have been designed
to meet such new demands. It was also noted that these new flexible middleware
platforms pose new challenges as they increase complexity during development
and operation. The development of components and planning of configurations
and reconfigurations involve a large number of variability decisions related to fluc-
tuations of environment and context. These decisions are frequently implemented
using programming environments and tools with low levels of abstraction (i.e. us-
ing constructions offered by programming languages like C++ and Java). This
results in a gap between the way programmers, architects and domain experts
operate. Furthermore, the development process is mostly carried out manually
using potentially repeated ad-hoc solutions. The above poses the need for both
new software development approaches and operational paradigms able to deal
with the new levels of complexity. To do so, these new
development and operational paradigms should raise the levels of abstraction at
which developers operate, and be more systematic and efficient, exploiting soft-
ware reuse whenever possible.
Modelling architectural information is particularly important in this thesis
because of the crucial role of software architecture in raising the level of abstrac-
tion during development. Such a fundamental role is repeatedly emphasized
by the numerous definitions of software architecture (Kruchten & Thompson
(1994), Clements & Kogut (1994), Lawson et al. (1995), Garlan (2000), Bass
et al. (2003)). For example, according to (Oreizy et al. (1998)) “a software ar-
chitecture represents software system structure at a high level of abstraction, and
in a form that makes it amenable to analysis, refinement, and other engineer-
ing concerns”. This definition is particularly relevant because it also highlights
the opportunities for high-level analysis provided by architectural descriptions
(Garlan (2000)).
The author argues that MDE and generative software development help to
produce new development paradigms to support the life cycle of flexible and
dynamically configurable middleware platforms. In the MDE area, research has
focused mainly on using models during the phases before execution (i.e. design,
implementation and deployment) with emphasis on the generation of software
artefacts to be used in those phases (e.g. source code or deployment descriptors).
Moreover, the level of abstraction of models has frequently been at the level of
concepts that map almost directly to source code and scripting code (e.g. the
use of UML classes that map directly to Java classes). However, model-driven
techniques can also be used to model software artefacts that take into account the
architecture (i.e. high-level structural organization) of the system and its changes
according to the fluctuation of the environment (i.e. dynamic behaviour).
The author proposes domain-specific modelling and dynamic variability (i.e.
variability that needs to be resolved at runtime) as relevant concepts for the con-
struction of models of the dynamic fluctuation of the environment and contexts,
and their impact on the variation of the middleware architecture during execu-
tion. Using the mappings from the models to implementation artefacts, gener-
ative techniques will allow the (semi) automatic generation of implementation
artefacts making the process more efficient and promoting software reuse. The
author argues that generating the code associated with configurations and recon-
figurations directly from the models provides the basis for defining safer execution
by reducing coding errors.
In the long term, and given the reflective capabilities of the middleware plat-
forms, these models also offer the potential of being consulted at runtime to be
used for validating and monitoring runtime behaviour. For example, a model
could be used to verify that the current architecture is valid.
Of crucial interest for the present thesis are the middleware families developed
at Lancaster. These platforms are used as a representation of the new generation
of flexible platforms described in Chapter 2.
The remainder of this chapter concentrates on the conception and design of
the Genie approach proposed by the author. Section 4.2 discusses how concepts
from domain-specific modelling, generative techniques, and dynamic variability
are weaved together in the proposed approach. Section 4.3 describes the Genie
approach in detail. In particular, the section documents two proposed dimensions
of dynamic variability, namely Structural Variability and Environment and Context Variability. Fi-
nally, Section 4.4 discusses the application of the approach in the specific case of
the reflective middleware families at Lancaster.
4.2 The Central Role of Dynamic Variability
4.2.1 Introduction
Section 3.4 presented the concept of variability in software system families and
the importance of its systematic management, highlighting the results of research
on the modelling and implementation of variability. Traditionally, the binding of
variability has been done during architecture design, implementation, compila-
tion, linking, and deployment (Batory & O’Malley (1992), Beuche et al. (2004),
Czarnecki & Eisenecker (2000), Lee & Muthig (2006), Ommering (2004), Svahn-
berg et al. (2005)). However, it is becoming common that systems should be
able to adapt dynamically to changing contexts at runtime. Such systems exhibit
degrees of variability that depend on runtime fluctuations in their contexts. This
kind of variability is defined as dynamic variability or runtime variability (Ben-
como et al. (2008a)). These systems impose new requirements that can hardly
be implemented with variation points that are bound before runtime (Goedicke
et al. (2002)). Therefore, although the research results presented in sections 3.4.2,
3.4.3, and 3.4.4 have proved to be valuable, several research challenges remain.
These research challenges are mainly related to finding new approaches and mech-
anisms to manage dynamic or runtime variability (Gurp et al. (2001), Goedicke
et al. (2004)). In order to lessen complexity, these new approaches should work
with different views or dimensions of variability (Dobrica & Niemela (2007),
Hallsteinsen et al. (2006)) and also cover all the phases of the life cycle, including
runtime (Bencomo et al. (2008a)).
The author advocates that dynamic variability management cannot be limited
to traditional and fine-grained approaches such as component specializations or
parameterizations during runtime (Floch et al. (2006)). Decisions should also
involve coarse-grained mechanisms able to manage the variability of whole sets of
components taking into account their semantics, i.e. software architecture and its
dynamic reorganization (Svahnberg et al. (2005)). Support for the specification
of variation of the changing operation contexts and environment that trigger the
reorganization of the architecture is also needed. Such approaches will enable
the levels of abstraction to be raised (Kruchten & Thompson (1994), Clements &
Kogut (1994), Lawson et al. (1995), Garlan (2000), Bass et al. (2003), Oreizy et al.
(1998)). Furthermore, concerns related to the specification of the variability of
the structure of the system (architecture) and the variability related to changes of
context and environment should be treated separately. The lack of this separation
of concerns makes software development and evolution complex (Dobrica &
Niemela (2007), Hallsteinsen et al. (2006)).
4.2.2 Dimensions of Dynamic Variability
Developing adaptive software that is capable of changing behavior at runtime is
complex and a challenge for software engineering researchers. The complexity
derives from the fact that adaptive software is expected to support a wide range
of “unanticipated” customizations of the system (Sora et al. (2005)) according
to the needs of the fluctuating environment. In an ideal world, adaptive systems
should be able to identify a new context unknown at design time and react (i.e.
adapt) accordingly. However, a crucial challenge is posed by balancing support
for unanticipated adaptive capabilities and guaranteeing the correct architecture
(structural composition) and state of the system during execution. The unantic-
ipated conditions are related to:
i. Environment or context variability: this refers to the fact that the
evolution of the environment cannot be completely predicted at design time.
Therefore, the total range of contexts and requirements may be unknown
at design time.
ii. Structural variability: this covers the variety of the components and
the variety of their configurations and is a consequence of the variability
explained above. In order to satisfy the set of requirements for a new con-
text, the system may add new components or rearrange the current structural
configuration (reconfiguration). Hence, the solutions cannot be restricted
to a fixed set of known-in-advance configurations and components.
The model-based approach proposed in this thesis can now be expressed in
terms of how to model the two dimensions of dynamic variability defined above.
However, it is important first to clarify the scope of the dynamic adaptive ca-
pabilities offered by the middleware platforms that are modeled. In this sense,
the classification of “dynamic adaptation” proposed by Trapp (2005) is relevant
for setting the required scope (see Figure 4.1). The classification distinguishes
two different types of dynamic adaptation: dynamic behaviour adaptation and
dynamic reconfiguration.
In dynamic behaviour adaptation, systems recognize new environmental
conditions not envisioned during development. In these systems, control and order
Figure 4.1: Classification of dynamic adaptation, based on (Trapp (2005))
is emergent rather than predetermined (Dooley (1997), Waldrop (1992)). This
kind of adaptation is proposed by researchers using mechanisms based on genetic
algorithms or neural networks. Research on this kind of adaptation is still at an
early stage.
The focus of the author’s work is on dynamic reconfiguration, which
requires that all feasible variants of behaviour can be somehow, and at least to
some extent, predefined before execution. During execution, the current state of
the system and its environment and context is evaluated and the most appropriate
behaviour variant is selected; i.e. the system is dynamically reconfigured. There is
currently little software engineering support for dynamic variability management
in the scope of dynamic reconfiguration, yet the need for support is becoming
urgent with the emergence of new applications requiring runtime adaptation.
Dynamic reconfiguration can be realized using two approaches: software-based
reconfiguration and hardware-based reconfiguration. The latter is omitted in this
document as the author is concerned only with software-based reconfiguration.
Software-based reconfiguration can be achieved using two approaches: pre-
determined reconfiguration and online-determined reconfiguration. Pre-determined
reconfiguration is based on a set of configurations with known impact defined
before the deployment of the application. In this case, the system supports only
the pre-designed configurations, which are hard-coded in advance by the devel-
opers. Therefore, the system can only be reconfigured (i.e. adapt) into one of
these predefined, hard-coded configurations. Introducing a new configuration
requires the system to be restarted, i.e. new variants of behaviour cannot be
added during execution. This case is very restrictive.
Online-determined reconfiguration is a solution in-between
pre-determined reconfiguration and dynamic behaviour adaptation. With online-
determined reconfiguration, the system has a mechanism to identify the possible
configurations at runtime. Online-determined reconfiguration supports dynamic
extension of the application by not only adding, changing and removing artefacts
(e.g. components and connections) at runtime but also the way these artefacts
will be connected. This approach “requires a complex reconfiguration framework”
(Trapp (2005)). The next section elaborates how reflective middleware platforms
act as the reconfiguration framework necessary to support the identification of
configurations during execution.
4.2.3 Achieving Dynamic Variability
In Section 3.4, features and product family architectures were studied to in-
vestigate how they can be used as the organizing abstraction to represent and
model variability. Section 3.4.5 showed the rationale for the decision to use the
architecture-based approach to represent and model variability instead of using
features. It was noted that, while end users generally think of features of the final
system, developers think in terms of architectural elements. Chapter 3 discussed
how domain-specific languages (DSLs) and specifically, domain-specific modelling
languages (DSMLs), can also be used to specify and manage variability in soft-
ware product families. DSMLs can help raise the level of abstraction during
development in comparison to general-purpose languages. The approach to man-
age and model variability proposed in this thesis is a mix of architecture-based
concepts and the principles of DSMLs.
An important decision is the selection of the appropriate technique to im-
plement or realize variability (Svahnberg et al. (2005)). Section 3.4.2 presented
a summary of different variability realization techniques. Generators (Jacobson
et al. (1997)) and infrastructure-centered architecture (Svahnberg et al. (2005))
are relevant to the solution presented in this thesis. The first category, gen-
erators, permits automatic constraint-based configuration. Generators partially
relieve the user of manual configuration work (Czarnecki & Eisenecker (2000)),
offering a more efficient and less error-prone solution. As a realization technique,
generators fit cleanly with the idea of managing variability using DSMLs: gen-
erators traverse models specified using DSMLs to generate specific software
artefacts.
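The role of a generator can be sketched as follows. This is a hypothetical illustration, not the Genie generator itself: it walks a structural-variability model (represented here as a plain dictionary) and emits an XML configuration descriptor; all element and attribute names are assumptions made for the example.

```python
# Hypothetical sketch of a generator: it traverses a structural-variability
# model (a plain dict here, standing in for a DSML model) and emits an XML
# configuration descriptor. Element and attribute names are invented.

model = {
    "framework": "CFa",
    "configuration": "CFai",
    "components": [
        {"name": "Router", "interface": "IRoute"},
        {"name": "Logger", "interface": "ILog"},
    ],
}

def generate_descriptor(model):
    """Traverse the model and return an XML configuration descriptor."""
    lines = [f'<configuration framework="{model["framework"]}" '
             f'name="{model["configuration"]}">']
    for comp in model["components"]:
        lines.append(f'  <component name="{comp["name"]}" '
                     f'interface="{comp["interface"]}"/>')
    lines.append("</configuration>")
    return "\n".join(lines)

print(generate_descriptor(model))
```

Because the descriptor is derived mechanically from the model, regenerating it after a model change is cheap and avoids the manual transcription errors the thesis argues against.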
When using the second category, infrastructure-centered architecture,
connections between components are treated as first-class entities. This means
that required interfaces of components are not hard-coded. Dynamic replacement
of a component in the architecture or indeed dynamic reorganization of the archi-
tecture is eased if the architecture and the location where such modifications could
be carried out is made explicit. “Used correctly, this technique yields perhaps the
most dynamic of all architectures” (Svahnberg et al. (2005)). What makes this
technique relevant to this work is that the reflective middleware platforms and the
concept of component frameworks offer the mechanisms to perform this variabil-
ity realization technique. Component frameworks offer powerful but controlled
ways to achieve structural variability. In this sense, the architectural principles
and constraints that component frameworks enforce are relevant in architectures
that change dynamically. Moreover, systems can be assembled from component
frameworks in a recursive way (Floch et al. (2006), Costa et al. (2005)). A com-
ponent plugged into a component framework may be an atomic component, but it
can also be a compound component that is a component framework itself. While
traditional fine-grained approaches like specialization and replacement of compo-
nents can still be used, support for more powerful mechanisms that are able to
manage whole sets of components, their connections and associated semantics is
also possible.
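The idea of connections as first-class entities can be sketched in a few lines. The API below is hypothetical (it is not OpenCOM): a Binding object stands between a component's required interface and its provider, so dynamic replacement becomes a rebinding operation rather than a change to hard-coded references.

```python
# Sketch of connections as first-class entities (hypothetical API, not
# OpenCOM itself): a Binding object joins a component's required interface
# (receptacle) to a provider, so replacing the provider at runtime is a
# rebinding operation on an explicit architectural element.

class Binding:
    def __init__(self, provider):
        self.provider = provider      # component currently bound

    def rebind(self, new_provider):   # explicit point of dynamic change
        self.provider = new_provider

class Client:
    def __init__(self, binding):
        self.binding = binding        # dependency held via the binding

    def call(self):
        return self.binding.provider()

binding = Binding(lambda: "fast-codec")
client = Client(binding)
print(client.call())                  # -> fast-codec
binding.rebind(lambda: "robust-codec")
print(client.call())                  # -> robust-codec
```

Making the binding explicit is what gives the architecture its identified "places where modifications can be carried out": the client never names its provider directly, so the architecture can be reorganized without touching client code.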
Based on the discussions in this section, the next section introduces the pro-
posed approach.
4.3 A Model-driven Approach for Modelling and
Generating Middleware Families
4.3.1 Overview
The kind of dynamic adaptation carried out by the proposed approach is online-
determined reconfiguration, explained above, where (i) the middleware platforms
constitute the frameworks that facilitate reconfiguration, and (ii) their reflec-
tive capabilities support the dynamic decisions needed to identify the possible
configurations at runtime. The (dynamic) variability realization to be used is
the infrastructure-centered architecture. Domain specific languages are used for
the construction of the models associated with the structural and environment
variability. Using models and generative techniques, software artefacts can be
generated more efficiently.
The behaviour exhibited by middleware families and their applications is quite
distinct from traditional system families where, once a member (“product”) of
the family is created, it does not change significantly during its life time. Us-
ing technologies able to offer reconfiguration capabilities, like flexible middleware
platforms, a member of the family may be transformed at runtime to adapt the
system to suit the new contexts. Middleware platforms and their applications
can be dynamically reconfigured from one structural variant (configuration) to
another according to changes in the context or environment, as shown in Figure
4.2. To do this, the system monitors specific properties of the runtime environ-
ment and reacts to given changes while keeping a valid state. The system should
be able to decide what kind of reconfiguration has to be performed, if any.
To model the adaptive behaviour described above it is necessary to define
what adaptation means in terms of configurations and conditions:
An adaptation is defined in the scope of this research as the process by which
the system transforms itself from a given configuration Ci to another configu-
ration Cj given the set of conditions Tk.
Figure 4.2: Dynamic Variability Dimensions
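This definition can be made concrete with a small sketch (a hypothetical illustration, not Genie code): an adaptation is a triple (Ci, Tk, Cj), and the system moves from Ci to Cj when every condition in Tk holds. The configuration and condition names below are invented.

```python
# Hypothetical sketch: an adaptation (Ci, Tk, Cj) moves the system from
# configuration Ci to Cj when every condition in the set Tk holds.
# All configuration and condition names are invented for illustration.

TRANSITIONS = {
    "C1": [({"battery_low"}, "C2"), ({"network_lost"}, "C3")],
    "C2": [({"battery_ok"}, "C1")],
    "C3": [({"network_ok"}, "C1")],
}

def adapt(current, observed):
    """Return the target configuration of the first arc out of `current`
    whose condition set Tk is satisfied; otherwise remain in `current`."""
    for conditions, target in TRANSITIONS.get(current, []):
        if conditions <= observed:  # all conditions in Tk hold
            return target
    return current

print(adapt("C1", {"battery_low"}))  # -> C2
```

The TRANSITIONS table is, in essence, a transition diagram of the kind the approach models with a DSML: nodes are configurations and arcs carry the environment conditions that trigger reconfiguration.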
The possible reconfiguration-based adaptations correspond to the context and
environment variability and the configurations (components and connections) cor-
respond to the structural variability. The next section describes the approach
proposed.
4.3.2 Description of the Approach
The approach proposed by the author is to use DSMLs to specify:
i. the structural variability. A DSML will allow the modelling of the
component configurations expressed in terms of the architecture dictated
by component frameworks. The modelling elements to be used are generic
architectural elements such as components, required and offered interfaces,
and bindings.
ii. the environment and context variability. A DSML is used to specify
the conditions that represent the dynamic nature of the environment and
context. Basically, a DSML is used to specify adaptations of the form (as
defined above): from the configuration Ci and on the set of conditions Tk,
go to configuration Cj. These models are in essence transition diagrams.
Using the modelling constructs of the DSMLs of (i) and (ii) the developer
designs models to specify the configurations of components and the transition
diagrams. Using generators capable of traversing these models, different software
artefacts can be generated:
iii. components and configurations of components associated with the
component frameworks are generated from the structural variability mod-
els. The constraints specified by the component frameworks are captured
in the models to validate the configurations and ensure consistency of the
resultant artefacts. The middleware platforms allow newly generated com-
ponents and component configurations to be added during the execution of
the system.
iv. reconfiguration policies are generated from the transition models spec-
ified. As in (iii), validation of transition diagrams should be performed
to avoid inconsistencies. The middleware platforms allow the generated
policies to be inserted during execution. The newly added reconfiguration
policies are used as long as the component(s) or component configuration(s)
cited in the policy exist in the repositories.
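The validation mentioned in (iii) and (iv) can be sketched as a simple consistency check (hypothetical data and function names): before reconfiguration policies are generated from a transition model, every configuration cited by a transition is checked against the set of configurations the component frameworks permit.

```python
# Sketch of the validation step: before reconfiguration policies are
# generated from a transition model, every configuration cited by a
# transition is checked against the set of configurations the component
# frameworks permit. All names are hypothetical.

VALID_CONFIGS = {"CFai", "CFaj", "CFak", "CFbm", "CFbn"}

transitions = [
    ("CFai", "battery_low", "CFaj"),   # (source, condition, target)
    ("CFaj", "battery_ok", "CFai"),
]

def validate(transitions, valid):
    """Return a list of error messages for configurations not permitted
    by the component frameworks (an empty list means the model is valid)."""
    errors = []
    for source, _condition, target in transitions:
        for config in (source, target):
            if config not in valid:
                errors.append(f"unknown configuration: {config}")
    return errors

print(validate(transitions, VALID_CONFIGS))  # -> [] (model is valid)
```

Rejecting an invalid model before any code or policy is generated is what underpins the claim that generation from models provides a basis for safer execution.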
4.3.3 Different Levels of Abstraction
An overview of the different levels of abstraction promoted by the approach is
shown in Figure 4.3. This figure shows the specific artefacts that populate the
layers which correspond to different levels of abstraction (abstraction levels are
raised from bottom to top).
(1) The first level at the bottom is populated by different software artefacts
like source code, XML configuration files describing the different configurations
associated with component frameworks, and the XML files of reconfiguration
policies.
(2) The second level corresponds to the models associated with structural
variability, i.e. the models of the component frameworks and their components
Figure 4.3: The levels of abstraction and the two dimensions of dynamic variability in the approach
and configurations. These models are visual representations of the component
configurations, their components and interfaces. The developer edits and rea-
sons about the configurations and components at higher levels of abstraction,
beyond, for example, the Java or XML code of the figure.
(3) The third level at the top corresponds to the environment and context
variability. In this level the developer plans the adaptations based on transition
diagrams (i.e. reconfiguration diagrams). At this level the developer reasons in
terms of structural variants and conditions of the environment and context that
trigger the reconfigurations.
Each node in the transition diagrams is considered as a structural variant
of the system. Structural variants are “coarser grain” configurations than con-
figurations associated with individual component frameworks in the sense that
they are described by a set (or n-tuple) of component frameworks. In this way,
structural variants can be seen as configurations of component frameworks. The
set of component frameworks are associated with the problem domain. Thus, for
example, if the problem domain identified requires structural changes (in terms
of reconfiguration) of the routing protocols and the topology of nodes in a sensor
network, the component frameworks to be used in each structural variant should
represent concepts associated with routing protocols and topologies of nodes. The
proposed approach aims, then, at partitioning each structural variant into a set
of specialized and focused domains of concern.
In the hypothetical example of Figure 4.3, each structural variant of the tran-
sition diagram is described in terms of two component frameworks (namely CFa
and CFb) and their possible configurations (CFai, CFaj, CFak, CFbm, and CFbn).
Therefore, in terms of the definition of adaptation given above, Cn can be (CFai,
CFbm), for example.
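The tuple view of structural variants can be enumerated directly. In this sketch (names follow the hypothetical example), CFa offers three configurations and CFb two, so six coarse-grained structural variants are possible.

```python
# Sketch: a structural variant is an n-tuple of component-framework
# configurations. With CFa offering three configurations and CFb two,
# six coarse-grained structural variants are possible (the names follow
# the hypothetical example).

from itertools import product

cfa_configs = ["CFai", "CFaj", "CFak"]
cfb_configs = ["CFbm", "CFbn"]

# Each tuple, e.g. ("CFai", "CFbm"), is one structural variant Cn.
variants = list(product(cfa_configs, cfb_configs))

print(len(variants))   # -> 6
print(variants[0])     # -> ('CFai', 'CFbm')
```

In practice a transition diagram will use only a subset of these combinations, since the architectural constraints of the component frameworks rule some of them out.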
From the initial configuration (architecture) the system will evolve over time
according to the conditions of the environment specified in the arcs of the dia-
grams. The places where the architecture can be changed and the consequences
of the changes will be driven by the transition diagrams.
The use of DSMLs in the approach described above promotes higher levels of
abstraction beyond programming and code. Furthermore, the use of generative
techniques increases the levels of efficiency and automation.
4.3.4 Orthogonal Variability Models
To complement the approach described above, the orthogonal variability approach
by Pohl et al. (2005) is used. As discussed in Section 3.5, an orthogonal
variability model defines the variability of a software system family in a separate
model and is based on variation points (VPs) and their variants. This model
associates the VPs and variants defined with other software development models
such as use cases or component models. A brief introduction of the approach and
its notation was presented in Section 3.5.
Figure 4.4 shows the variability diagrams used to model the variants in the
hypothetical example. The three structural variants, SV1, SV2, and SV3 are
associated with the variation point VP:Structural Variants. Each structural vari-
ant is associated with its corresponding node in the transition diagram using
the “artefact dependency” associations. A given transition diagram has only one
variation point associated with the structural variants.
As said previously, each structural variant of the transition diagram is ex-
pressed by a set of component frameworks. In the case of the example of Figure
4.4 these component frameworks are CFa and CFb. Each of these component
frameworks has different variants (CF Variants). To represent this association,
new VPs are defined that correspond to the variants of the component frame-
works. In the case of the example, there are two variation points, VP:CFa and
VP:CFb. The variation point VP:CFa offers an alternative choice with three
variants, namely CFai, CFaj and CFak. Similarly the variation point VP:CFb
offers an alternative choice with two variants, CFbn and CFbm.
4.3.5 Domain Engineering and Application Engineering
An overview of the software development process followed by the proposed ap-
proach is shown in Figure 4.5. It is based on the differentiation between the
Domain Engineering and Application Engineering processes studied in Chapter
3. Instead of deciding on what specific system to develop in advance, design de-
cisions are postponed and a set of components and a common system family (ref-
erence architecture) are specified and implemented during Domain Engineering.
Later on, during Application Engineering, specific application-oriented transition
Figure 4.4: Orthogonal variability diagrams for structural variability and environment and context variability
diagrams are developed to satisfy the requirements (as in Goldsby et al. (2008))
reusing different artefacts or domain assets developed previously that encode the
experts’ knowledge, e.g. architecture, components, algorithms, configuration pat-
terns, etc. It should be emphasized that Domain Engineering and Application
Engineering are performed in parallel. During Application Engineering develop-
ers can identify new requirements not found in the domain models. Therefore,
feedback to Domain Engineering should be provided to refine and possibly evolve
the domain assets.
As noted in Section 3.4.2, it is common to express delayed design decisions
using software variability. Variation points denote the specific locations where
decisions are made to select a variant. Eventually, one of the variants should be
chosen to be achieved or implemented (binding). This thesis focuses on dynamic
variability, which is bound at runtime.
Crucially, the modelling of Structural Variability is realized during the Do-
main Design sub-process performed during Domain Engineering. Likewise, the
modelling of the Environment and Context Variability is mainly realized during
the Design Analysis sub-process of Application Engineering.
Figure 4.5: Software Development Process: Domain Engineering and Application Engineering
The term domain has different interpretations and definitions according to
different research communities, such as artificial intelligence (AI), object-oriented
technology (OO), and software reuse. In the scope of software reuse, the concept
of domain does not just encapsulate the knowledge about a problem area (as
in AI and OO), but also includes how to build the software that supports and
automates the processes in the problem area. The definition of domain adopted
in this thesis is as in (Czarnecki & Eisenecker (2000)):
A domain is “an area of knowledge (i) scoped to maximize the satisfaction of
the requirements of its stakeholders, (ii) includes a set of concepts and terminology
understood by practitioners in that area, and (iii) includes the knowledge of how to build
software systems (or part of software systems) in that area.”
In the case of the proposed approach, the middleware architecture is decom-
posed into an extensible set of focused (sub)domains of concerns of middleware
behaviour such as service discovery, interaction types, overlays, etc. (Parlavantzas
et al. (2000)). Domain engineering involves creating and maintaining the set of
reusable artefacts associated with the different domains.
During domain engineering, middleware experts of the different sub-domains
take advantage of their domain expertise by capturing their acquired knowledge
in the form of reusable assets. In this way and in the scope of each domain, spe-
cific configuration patterns, algorithms, and components that share many char-
acteristics and requirements are candidates for reusable assets. A common set
of interfaces or contracts are defined for each sub-domain. Every component
framework designed for each domain has to follow these standard contracts.
The most important reusable asset is the common architecture of the fam-
ily related to the domain. This common architecture dictates the rules to be
followed by the possible variants of the family (i.e. configurations). Having
a common architecture offers several benefits; for example, it simplifies the
configuration process since there are component types and connection bindings
that remain the same for any configuration. Every domain has its corresponding
middleware family (e.g. the service discovery middleware family (Flores-Cortes
et al. (2007)) and the event notification service middleware family (Sivaharan
et al. (2005), Bencomo et al. (2005c))). As seen in Figure 4.5, different Domain
Engineering processes will produce different family architectures addressing the
different domains. Each domain-based architecture is implemented in terms of
a set of (re)configurable component frameworks. Figure 4.6 shows the struc-
tural variants in a transition diagram with the respective component frameworks
associated with two domains.
During application engineering, the reusable artefacts designed and imple-
mented during domain engineering are used to design and build the “products”.
The products in this context are the applications whose dynamic behaviour is
driven by the structural variants and reconfiguration policies described in the
scope of the transition diagrams.
Figure 4.6: Different domains of middleware behaviour and their component frameworks.
Software reuse promoted by reusable assets allows the development of appli-
cations in a shorter time and with higher quality. Higher quality is the result of
a systematic and consistent software development approach.
4.4 Model-driven Middleware Families
The approach described above aims to be general across a range of adaptive
middleware technologies. The ideas promoted by the approach are applicable
to any middleware platform and application that works with the architectural
concepts of components and component frameworks and is able to support the
dynamic decisions required, e.g. Fractal-based middleware platforms (Bruneton
et al. (2006)), TAO (DOC (2007)), and Lua (Maia et al. (2005)).
This section discusses the application of the approach in the specific con-
text of the reflective middleware families at Lancaster. First, the section shows
how structural variability can be described in terms of the OpenCOM compo-
nent model. As was discussed in Chapter 2, OpenCOM is the component model
that forms the basis of the Lancaster reflective middleware families and their
applications. Next, environment and context variability is expressed in terms of
the reconfiguration policy-based mechanisms offered by the reflective middleware
platforms.
4.4.1 Modelling Structural Variability using OpenCOM
UML is used to specify the meta-model that represents the core architectural
elements common to all middleware family members regardless of their domain
(Bencomo et al. (2005a), Bencomo & Blair (2005)). In effect, the domain ad-
dressed by this meta-model is the more generic one of the middleware platforms
and basically represents the fundamental component-based concepts of the Open-
COM component model: viz. component, capsule, interface, receptacle, binding,
and component framework. The meta-model of the specification of these concepts
is shown in Figure 4.7.
The meta-model also specifies the fact that a component framework focuses
on a particular domain of concern. A component framework offers the com-
mon architecture to be used by the different variants. Essentially, the common
architecture governs the commonalities shared by the variants, i.e. the interac-
tion patterns between components and the environment, configuration patterns,
algorithms, components, and the set of standard interfaces that allow commu-
nication with other components outside the component framework. In other
words, the architecture facilitates the reuse of assets. Figure 4.8 shows how these
concepts are applied to the hypothetical example used above: the architecture
of a component framework CFa that focuses on a domain of concern a and the
associated three configuration variants CFai, CFaj, and CFak.
Figure 4.7: The OpenCOM meta-model (in UML)
The described meta-model constitutes the basis for the specification of the
visual DSML that allows the modelling and generation of structural variants.
The DSML captures the generality inherent to the approach and forms the basis
for the automatic generation of middleware families that can be instantiated
differently depending on the application domain and deployment environment.
The DSML-based models are used to automate or semi-automate the generation of
artefacts (source code, configuration descriptors, etc.) related to the development
and configuration of the different middleware platforms.
The possible adaptations based on reconfigurations are captured by the vari-
ability model of the context and environment and are presented in the following
section.
4.4.2 Modelling Environment and Context Variability
Following the proposed approach, at runtime the system is reconfigured from one
structural variant to another according to fluctuations in the context or environment.
Figure 4.8: Different variants compliant with a hypothetical component framework
Each structural variant is dictated by a set of component frameworks
which are specified using the meta-model described in Section 4.4.1 and pictured
in Figure 4.7. The complete specification of the possible structural variants and
their reconfigurations (adaptations) will be captured in transition diagrams. Each
node of a transition diagram represents a structural variant, and the events asso-
ciated with transitions (arcs) are conditions in the environment and context that
trigger the transitions or reconfigurations.
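A transition diagram can therefore be thought of as a guarded graph over structural variants. The following sketch is a simplifying illustration; the variant names and the encoding of conditions as callables are assumptions, not the DSML's actual representation.

```python
class TransitionDiagram:
    """Nodes are structural variants; arcs carry environment/context triggers."""

    def __init__(self):
        self.transitions = []   # (previous_variant, trigger, next_variant)

    def add(self, previous, trigger, next_variant):
        self.transitions.append((previous, trigger, next_variant))

    def next_variant(self, current, observed_conditions):
        """Return the variant to reconfigure to, if any trigger fires."""
        for previous, trigger, next_variant in self.transitions:
            if previous == current and trigger(observed_conditions):
                return next_variant
        return None

# Example: move from variant CFai to CFaj when the battery runs low.
td = TransitionDiagram()
td.add("CFai", lambda env: env.get("low_battery"), "CFaj")
assert td.next_variant("CFai", {"low_battery": True}) == "CFaj"
assert td.next_variant("CFai", {"low_battery": False}) is None
```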
Figure 4.9 shows the meta-model that specifies the above. Essentially, this
meta-model specifies the DSML for the modelling and design of transition dia-
grams that describe the conditions associated with the environment and context
variability. A Structural Variant is specified by a set of Component Frame-
works. A transition starts in a previous variant and ends in a next variant, and
Transitions occur due to one or more Triggers. Triggers are defined in terms
of conditions of the environment and context. Finally, each component framework
has different configurations of components associated with it (i.e. its variations); this is
represented by the association between the Component Framework and the CF
Variant.
Figure 4.9: The meta-model (in UML) to model the Context and Environment Variability
As the notes in Figure 4.9 indicate, the variation points of the orthogonal
variability models support the realization of the variation provided by the archi-
tecture of the component framework. In this sense, the specification of the set of
component frameworks that compose a Structural Variant as modelled using the
UML composition association is realized using a Variation Point (a Structural
Variants VP) as shown in Figure 4.4. The realization of the specification of the
different variants associated with a component framework is also supported by
variation points of the orthogonal variability models. These realization relation-
ships are shown in Figure 4.10.
Figure 4.10: Realization of UML associations using Variation Points
The described meta-model matches the way the OpenCOM-based middleware
platforms use reconfiguration policies. Examples of reconfigurations needed to
meet different environment conditions or contexts are (1) the modification of
the underlying routing algorithm to transport data around a sensor network, (2)
the change from one wireless network technology to another, or (3) the change
of the current service discovery protocol. Examples of situations that trigger
these reconfigurations are: high or low battery conditions, the fact that a flood is
predicted (as in the case of the scenarios related to flooding prediction systems
described in Chapter 2), or the fact that a new service discovery protocol is found
(Bencomo et al. (2008a), Flores-Cortes et al. (2007)).
It is the role of the middleware developer to write the set of reconfiguration
policies that tailor, and potentially extend, the behaviour of the reflective middle-
ware. The visual DSML described above allows the design, validation and gen-
eration of these adaptation policies. Using the DSML, the design and validation
of reconfiguration policies can be performed more efficiently.
4.5 Middleware Families Framework Support
Figure 4.11 shows the standard architecture of a middleware framework that
serves as the platform with reconfiguration support for the different domain-
specific component frameworks. Each domain-specific component framework ad-
dresses a specific middleware functionality (Coulson et al. (2004b)) and offers
a common architecture that governs the different configurations (variants) of the
specific middleware family. As such, they can be combined to provide more
complex capabilities using the structural variants and the transition diagrams as
shown above. The architecture of the different component frameworks will of-
fer the pertinent standard interfaces that agree with the underlying middleware
framework. This will allow interaction between different component frameworks
and their components.
As seen in Chapter 2, both applications and middleware platforms use the
same OpenCOM component model, i.e. both are built from interconnected sets
of components. In OpenCOM, there is no hard distinction between the appli-
cation and the middleware platforms (Parlavantzas et al. (2000)). Using the
Figure 4.11: A middleware framework, its domains and the standard interfaces
same component model for both middleware and applications requires just one
programming model and therefore reduces the complexity of reconfiguration. The
approach described in this Chapter has captured the common component model
and standardization of the domain-specific frameworks as part of its conceptual
pillars.
4.6 Summary
The chapter has presented Genie, an approach that offers structured management
of dynamic variability during development and operation of reflective adaptive
middleware platforms and their applications. The chapter has shown how mid-
dleware artefacts can be systematically generated from high level descriptions
using domain-specific modelling languages. In summary, the Genie approach:
• systematically promotes software reuse and the use of models as first-class
entities to raise the level of abstraction beyond coding by specifying solu-
tions using domain concepts.
• focuses on the architecture as the main abstraction concept, in contrast to
code-level abstractions.
• proposes models that are based on different partitions of focused domains
of concern to tackle complexity.
• offers model-based management of dynamic variability.
The author has identified two dimensions of dynamic variability, namely Struc-
tural Variability and Environment and Context Variability. Genie supports the
use of domain-specific languages to specify and validate models based on ab-
stractions of the dynamic variability dimensions. These models describe the ar-
chitecture of reconfigurable applications and the conditions of the environment
and context that trigger the reconfiguration of the architecture during execution.
From these models, different software artefacts can be generated and added to the
system during its execution. Such artefacts support the dynamic reorganization,
runtime decision-making and system adaptation mechanisms. Specifically, the
models of transition diagrams allow the developer to work at higher levels of ab-
straction than working directly with reconfiguration policies. These models offer
an overall view of the different domains and the whole process of reconfiguration
that the systems can undergo.
The chapter has also presented the application of Genie in the context of the
reflective middleware families at Lancaster.
The next chapter of this thesis presents an evaluation of the Genie approach to
the modelling and generation of reflective middleware families and their applica-
tions.
Chapter 5. Evaluation
“The great tragedy of Science -
the slaying of a beautiful hypothesis by an ugly fact.”
- from Collected Essays (1893-1894) VIII Discourses: Biological & Geological,
Biogenesis and Abiogenesis by Thomas Henry Huxley (1825 - 1895)
5.1 Overview of the Chapter
This chapter presents an evaluation of the Genie approach to the modelling and
generation of reflective middleware families and their applications presented in
Chapter 4.
The evaluation methodology applied in this thesis uses the hypothetico-
deductive method as explained in Section 1.4 where the hypothesis of the thesis
was given as:
“support using model-driven and generative techniques will raise the levels
of abstraction at which middleware developers work, will improve the levels of
automation of the development process, and will provide for systematic variabil-
ity management during the development and operation of reflective middleware
platforms and their applications. The basis of such support is twofold: (i) the
introduction of new abstractions and high-level constructs to model different di-
mensions of dynamic variability, and (ii) direct mappings from the variability
modelling constructs to middleware-related implementation artefacts.”
This hypothesis is tested using two substantial case studies. The evaluation
is based on the use of the Genie Tool, i.e. the toolkit prototype based on the
proposed approach. Both case studies rely on Gridkit as the middleware platform
that supports the reconfiguration of the applications.
Before examining the case studies, Section 5.2 presents basic aspects of the
Genie Tool and Section 5.3 describes how Gridkit and the Genie Tool work to-
gether. The case studies used for the evaluation are based on the multi-protocol
framework for ad-hoc service discovery (Bencomo et al. (2008a), Flores-Cortes
et al. (2007)) and the flood forecasting application presented in Section 2.6, and
which is further elaborated in Bencomo et al. (2008b), Sawyer et al. (2007b),
Goldsby et al. (2008)). The case studies are described in Section 5.4 and Section
5.5 respectively.
When presenting the case studies, the improvements of the development pro-
cess supported by the offered approach are illustrated and discussed. To do this,
the author draws on the key issues identified in the Introduction (Chapter 1),
i.e. the low level of abstraction at which developers work, the poor levels of
software automation, and the lack of structured management of variability.
Section 5.6 presents a general discussion about how the key issues are tackled
by the proposed approach. Section 5.7 discusses closely related work. Section 5.8
presents a summary of the chapter.
5.2 The Genie Tool
The Genie tool is the prototype that supports the approach proposed in this the-
sis. The tool offers Domain Specific Modelling Languages for the specification,
validation and generation of artefacts for OpenCOM-based middleware platforms
and their applications. Using the Genie tool, models representing components,
component frameworks, and transition diagrams can be constructed. From these
models, different artefacts can be generated, e.g. files of components and config-
urations, reconfiguration policies, and documentation.
The next two subsections present details about the capabilities of the tool
prototype to specify and use the Structural, and Environment and Context Vari-
ability. The subsections explain the different kinds of models that can be con-
structed using the tool, and also give specific details about the artefacts that can
be generated. Genie has been developed using MetaEdit+ (MetaCase (2006)).
5.2.1 Structural Variability Models in Genie
As explained in Chapter 4, the approach presented in this thesis uses the
OpenCOM-based architectural concepts of components and component frame-
works to manage and accomplish the structural variability. The specification of
how these OpenCOM concepts work together in the tool is implemented by a
DSML whose meta-model was explained in Section 4.4.1. The meta-model is
shown in Figure 4.7 and essentially specifies the component-based concepts of
the OpenCOM component model.
The DSML is realized by graph-based models associated with components
and component frameworks. An example of a Genie-based model that describes
a component framework is shown in Figure 5.1. The figure depicts the component
framework Publish/Subscriber used in Gridkit (Grace et al. (2005b), Sivaharan
et al. (2005), Bencomo et al. (2005c)). The figure shows components that offer
and require interfaces, and interfaces which can be bound together to connect
components. The figure also shows how component frameworks export interfaces
from internal components or require interfaces to satisfy the requirements of some
of their internal components. This shows how a component framework can be
seen recursively as a component that offers and requires interfaces. All these
associations are compliant with the concepts of the meta-model described above.
Examples of artefacts that can be generated from component framework mod-
els in Genie are:
• the XML files associated with configurations of the component frameworks.
• test code that uses hardcoded connections of the components in the com-
ponent framework. This test code can be useful as it can be executed in
isolated experiments before performing the tests that use the configurator’s
capabilities.
• reports of validations and checking. For example, a report can be produced
that automatically checks and notifies mismatches between interfaces. Mis-
matches occur when interfaces of different types are mistakenly connected.
• documentation files.
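To make the first item concrete, a minimal sketch of how such an XML configuration descriptor might be emitted from an in-memory model is shown below. The element and attribute names are illustrative assumptions, not Genie's actual output format.

```python
from xml.sax.saxutils import escape

def to_xml(framework_name, components, bindings):
    """Emit a simple XML descriptor.
    components: list of component names;
    bindings: list of (from_component, to_component) pairs."""
    lines = [f'<framework name="{escape(framework_name)}">']
    for c in components:
        lines.append(f'  <component name="{escape(c)}"/>')
    for src, dst in bindings:
        lines.append(f'  <binding from="{escape(src)}" to="{escape(dst)}"/>')
    lines.append('</framework>')
    return "\n".join(lines)

xml = to_xml("PublishSubscribe", ["Publisher", "Subscriber"],
             [("Publisher", "Subscriber")])
```

In Genie, such descriptors are generated from the graph-based models and stored so that the middleware configurators can read them at deployment time.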
Figure 5.1: The Publish/Subscriber framework used in Gridkit and modeled in Genie
Figure 5.2 shows examples of relations between models and artefacts. Ar-
row (a) shows how from the graph of a component framework, a component can
be chosen to reveal more detail about the component. From the model (graph)
associated with a component more details associated with required and offered
interfaces, author, version, etc. may also be requested (see arrow (b)). If the
developer wants to explore the interfaces associated with a component, she could
open a window with the data associated with the interfaces (i.e. signature, pa-
rameters, etc.). In the same way, a developer could open a window with the data
associated with the developer who is responsible for the component. Arrow (c)
shows how, from the model of a component, the skeleton code of the component
can be generated and/or accessed. Finally, arrow (d) points to the XML file of
the configuration of components shown in the graph of the component framework.
These configuration files are stored in a Knowledge Repository that is accessed
by the middleware configurators at runtime. Eventually, when the system starts
up, the configurators read the policies to perform the initial configurations.
Figure 5.2: Generation of different artefacts using the Genie tool
5.2.2 Environment and Context Variability Models in Genie
Genie also provides support for the design, validation, and generation of reconfig-
uration policies using the transition diagrams explained in Section 4.4.2. This is
implemented by a DSML whose meta-model was explained in Section 4.4.2. The
meta-model is shown in Figure 4.9 and essentially specifies concepts related to the
way the system is reconfigured from one structural variant to another according
to fluctuations in the context or environment.
Figure 5.3 shows a specific example of a transition diagram designed using
Genie. This figure also shows a Gridkit reconfiguration policy generated from
this model and related to one of the transitions.
Using Genie, the variability models described in Chapter 4 can also be spec-
ified. Figure 5.4 presents the transition diagram shown in Figure 5.3 plus the
Figure 5.3: A model of a transition diagram and a generated reconfiguration policy
variability diagrams used to specify the variants. Genie allows the selection of
objects and relationships that the user would like to display on the screen. In
this way, the user could choose to concentrate just on a transition diagram and
avoid displaying the variability notation.
Genie stores the reconfiguration policies in a Knowledge Repository that is
accessed by the middleware platform during execution. These policies are used by
the middleware platforms to define the conditions under which a reconfiguration
of the architecture occurs. In this way, during runtime and when the monitoring
conditions specified by the policy are met, the middleware platform configurators
will perform the respective reconfigurations. The reconfigurations will result in
the connection and disconnection of the specified components, which finally
perform the adaptation of the system (Bencomo et al. (2006)).
5.2.3 Validation of Models
The models explained above can be valuable as they can be verified and validated
a priori, i.e. before the execution of the configuration (deployment) and reconfig-
uration. Any generation of artefacts (source code, XML files, etc.) requires prior
validation and checking of the content of the model.
To understand the important role of model validation consider the case of the
design of models of component frameworks. As noted previously, a component
framework imposes constraints on the components it supports. Consequently the
basic checking is related to these architectural constraints. Known architectural
structures can be exploited so that common checking infrastructure can be built
once and then used by any user of Genie in the corresponding configurations of
the component framework. An example of basic validation which applies to any
configuration, irrespective of the component framework used, is the verification
that all the connections between required interfaces and offered interfaces con-
form to the same type. If this is guaranteed, the configurator does not need to
check these conditions at runtime. Other examples of validations are related to
particular constraints enforced by a component framework; for example, a spe-
cific component may appear at most once, a connection between two components
must exist, etc. The validations constructed in the Genie tool so
far are basic and generic validations that apply to any component framework, or
ad-hoc validations that apply to specific conditions of a given case.
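The generic interface-type check described above can be sketched as follows, assuming a simplified model in which each binding is reduced to a pair of type names (an assumption for illustration, not the tool's internal representation).

```python
def check_bindings(bindings):
    """bindings: list of (required_type, offered_type) pairs, one per binding.
    Returns a report of mismatches; an empty list means the model validates,
    so the configurator need not repeat this check at runtime."""
    mismatches = []
    for i, (required, offered) in enumerate(bindings):
        if required != offered:
            mismatches.append(
                f"binding {i}: required '{required}' != offered '{offered}'")
    return mismatches

# A well-typed binding passes; connecting interfaces of different types is
# reported as a mismatch.
assert check_bindings([("IDeliver", "IDeliver")]) == []
assert len(check_bindings([("IDeliver", "IRoute")])) == 1
```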
In the longer term, domain-specific validations applicable to a class of con-
figurations, or even better, a constraint language for validation should be imple-
mented. These kinds of validations are not a central focus of this thesis. How-
ever, the author elaborates further on this topic in the proposed Future Research
Agenda in Section 6.5.4.
Genie has been used in the context of Gridkit; accordingly, Section 5.3 provides
more information about how the Genie tool is used to model and generate Gridkit
software artefacts.
5.3 Genie and Gridkit
As noted above, both case studies rely on the support of the Gridkit middle-
ware platform. Gridkit is the embodiment of the reflective middleware-based
philosophy presented in Chapter 2. Gridkit provides the necessary support for
reconfiguration and adaptation. Essentially, during runtime, Gridkit reads and
interprets reconfiguration policies to dynamically load, unload, bind, and unbind
components. These policies define how Gridkit adapts (i.e. reconfigures) ac-
cording to changes in the environment. New reconfiguration policies and their
respective components and component frameworks can be added to specific repos-
itories while Gridkit is already in execution, enabling unanticipated changes to
the ongoing architecture (Bencomo et al. (2008a)).
Configuration and reconfiguration policies are scoped to individual frame-
works. For example, an instance of the interaction framework will have one set
of policies, while an instance of the overlays framework will use a different set of
policies (see Figure 5.5). Genie stores the generated policies to the corresponding
folder of the component framework.
Figure 5.6 gives an overview of how Genie and Gridkit interact. Modellers cre-
ate and design models of components, configurations of component frameworks,
or transition diagrams to generate artefacts such as components source code,
configuration files descriptors, or reconfiguration policies. Components and com-
ponent frameworks are stored in the Component Repository and reconfiguration
policies are stored in the Knowledge Repository. Gridkit is designed for network
applications and is deployed on each participating node, for example, a mobile
device, a laptop or even a tiny system for embedded network systems. Dur-
ing execution, Gridkit accesses the repositories to support the decision making
about how and when to load and unload components, or connect and disconnect
components.
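The runtime interaction just described can be sketched roughly as follows. The policy representation and function names here are assumptions for illustration, not Gridkit's actual API.

```python
def run_configurator_step(policies, environment, loaded):
    """One decision-making step of a hypothetical configurator.
    policies: list of dicts with 'condition' (a callable over the observed
    environment), 'unload' and 'load' (component names).
    Mutates `loaded` (the set of currently loaded components) and returns
    the (unloaded, loaded) actions performed."""
    actions = []
    for policy in policies:
        if policy["condition"](environment):
            loaded.discard(policy["unload"])
            loaded.add(policy["load"])
            actions.append((policy["unload"], policy["load"]))
    return actions

# Example: swap the active discovery personality when a new SDP is found.
loaded = {"ALLIA"}
policies = [{"condition": lambda env: env["new_sdp_found"],
             "unload": "ALLIA", "load": "SLP"}]
run_configurator_step(policies, {"new_sdp_found": True}, loaded)
assert loaded == {"SLP"}
```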
The next two sections describe the case studies of the evaluation.
Figure 5.4: A model of a transition diagram with the variation points
Figure 5.5: Component frameworks and their specific policies
Figure 5.6: Genie and Gridkit
5.4 Case Study 1: Dynamic Service Discovery
This section introduces the first case study of the evaluation. The case study is in
the context of mobile computing applications which need to dynam-
ically discover services in highly heterogeneous environments. The middleware
platform studied in this section is part of the broader middleware architecture
supported by Gridkit and specifically helps to overcome service discovery het-
erogeneity (Flores-Cortes et al. (2007)). Section 5.4.1, Domain Analysis, briefly
presents the motivation for dynamic service discovery and discusses the domain
problem. It is followed by the description of how the architecture commonalities
and variants (configurations) are identified in Section 5.4.2. Finally, the variabil-
ity model and its notation and application are described in Section 5.4.3. The
use of the approach in this case study is based on the partial results reported in
(Bencomo et al. (2008a) and Bencomo et al. (2008b)).
5.4.1 Domain Analysis of the Service Discovery Frame-
work
Mobile computing is becoming increasingly important (Floch et al. (2006), Grace
et al. (2005a), Mascolo et al. (2002)). Mobile devices are characterized by sudden
and unexpected changes in execution context. Applications running on these de-
vices need to dynamically adapt according to the changing context. Such contexts
may also include or require new services. Service discovery protocols (SDP) were
conceived to simplify the discovery and use of network resources such as printers,
video cameras, directories, and mail servers, with minimum user intervention.
Many different approaches to SDPs (Marin-Perianu et al. (2005)) have been pro-
posed. Consequently it is not possible to completely foresee which protocols will
be used to advertise services in a given context or environment.
Flores-Cortes et al. (Flores-Cortes et al. (2007), Flores et al. (2006)) present
a configurable and reconfigurable middleware solution for the dynamic discov-
ery of services advertised using heterogeneous protocols in diverse environments.
The solution takes into consideration a set of common core architectural ele-
ments that individual discovery protocols follow. Using the final architecture,
individual discovery platforms can be implemented and dynamically plugged-in
to the discovery middleware. Hence, different SDP personalities can be used to
discover services advertised by heterogeneous platforms. This middleware so-
lution has been evaluated with the development of four existing ad-hoc service
discovery protocol personalities: ALLIA, GSD, SSD, and SLP. The offered solu-
tion enhances configurability and re-configurability and minimizes resource usage
through reusable assets such as components and patterns of interaction (Flores-
Cortes et al. (2007)).
5.4.2 Commonalities and Variabilities
The commonalities are captured in the family architecture proposed by Flores-
Cortes et al. (Flores-Cortes et al. (2007)) and shown in Figure 5.7.
Figure 5.7: The Service Discovery Family Architecture (from Flores-Cortes et al. (2007))
The architecture dictates the rules and constraints to be followed by the pos-
sible variants (i.e. configurations). The six components of the architecture are
detailed below:
-Advertiser Component: this component is utilized to advertise services
and to process incoming service advertisements, storing them in a cache. This
component also deals with protocol messages related to the maintenance of a
directory overlay network.
-Request Component: this component is utilized to generate service re-
quests. It is also employed to process incoming service requests, matching them
against local services previously stored in a cache. It can also forward request
messages in both roles.
-Reply Component: this component is used to generate service replies when
a positive request-service match occurs or to notify applications about requests.
-Cache Component: common tasks performed by this component are the
management of temporary data, the storage of received service advertisements, the
description of local services and the location of neighbouring directories.
-Policy Component: this component stores and deals with user preferences,
application needs and/or context requirements.
-Network Component: this component allows components connected to it
to transmit and receive messages utilizing different routing schemes.
Service Discovery Agents
A service discovery platform uses three kinds of agents to advertise and discover
services:
-User Agent (UA) to discover services on behalf of clients,
-Service Agent (SA) to advertise services, and
-Directory Agent (DA) to support a service directory where SAs register
their services and UAs send their service requests. A DA stores temporary service
advertisements, matches requested services against advertisements, and replies to
requesting clients when a positive match is found.
The agents identified above can be seen as roles that individual protocol per-
sonalities assume. Depending on the required functionality, participating nodes
using a given protocol personality might be required to support one, two, or three
roles at any time.
Following the rules dictated by the architecture, any SDP personality in any
environment and under any context needs the Network component to interface
with network services or clients, and the Policy and Cache components are always
required since they interact with every discovery role. Therefore, the Network,
Cache and Policy components will always be present in any valid configuration.
The other three components and their bindings will be part of the configuration
or not, depending on the roles the protocol might perform (i.e. SA, UA, or DA).
Hence, the roles (agents) directly define the structural variants.
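This role-driven selection can be sketched as follows. The per-role component sets below are illustrative assumptions; the actual configurations are those shown in Figures 5.8, 5.9 and 5.10.

```python
# Components present in every valid configuration, per the family architecture.
ALWAYS = {"Network", "Cache", "Policy"}

# Hypothetical mapping from discovery roles to the additional components
# they require (an assumption for illustration).
ROLE_COMPONENTS = {
    "UA": {"Request", "Reply"},                # discovers services for clients
    "SA": {"Advertiser"},                      # advertises local services
    "DA": {"Advertiser", "Request", "Reply"},  # full directory configuration
}

def configuration_for(roles):
    """Return the component set for a node playing the given roles."""
    components = set(ALWAYS)
    for role in roles:
        components |= ROLE_COMPONENTS[role]
    return components

# An SA-only node carries the minimal footprint; a DA subsumes it.
assert configuration_for({"SA"}) == ALWAYS | {"Advertiser"}
assert configuration_for({"DA"}) >= configuration_for({"SA"})
```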
The different configuration variants can be constructed using the Genie tool.
Figures 5.8, 5.9, and 5.10 present different screenshots of the Genie tool showing
the possible variants of the architecture described above. Figures 5.8 and 5.9
show how the architecture can be configured to support either a UA or SA role
by restricting the number of components to only those required to provide the
determined functionality. By using a complete framework configuration, a DA
can also be supported; the corresponding configuration is shown in Figure 5.10.
Hence, by configuring individual protocols according to the role (i.e. UA, SA or
DA), the number of resources required by a multi-personality service discovery
middleware can be significantly reduced to minimise the footprint and potentially
improve the performance of the system (Flores-Cortes et al. (2007)).
With the multi-protocol middleware platform, heterogeneous discovery plat-
forms can be implemented with a common component architecture. This simpli-
fies the configuration process since the component types and connection bindings
remain the same for any protocol implementation. Thus, because of the common
configuration pattern, the execution of a simple single-component replacement
algorithm is enough to re-configure the architecture. Similar common algorithms
are required to perform a coarse-grained re-configuration when loading a new dis-
covery personality or when changing the role in a given personality is required.
Fine-grained and coarse-grained changes can be made to the ongoing architecture
to support context changes in the environment.
5.4.3 Modelling Variability
The solution presented above significantly enhances reconfigurability. However,
the increasing number of variants and their relationships makes the structured
management of variability crucial. This section illustrates the notation and mod-
els of the proposed approach to address the requirement for management of dy-
Figure 5.8: Configuration for the variant role UA
Figure 5.9: Configuration for the variant role SA
namic variability that is exposed by the family of Service Discovery Protocols
described.
Figure 5.10: Configuration for the variant role DA
Structural Variability
Figure 5.11 shows a partial view of the orthogonal variability model of the case
study. Essentially, the figure shows how in the proposed models the different
role-based configurations correspond to variants of the component framework.
This is specified using the variation point “Role” with the three variants SA,
UA, and DA.
Figure 5.11 also shows the 'artefact dependencies' depicted as the dashed
arrows. Using this association, each variant is associated with the corresponding
configuration model. These variants correspond to the management of the
Structural Variability described in Section 4.4.1.
The configuration-based models are designed using the DSML explained in
Section 5.2.1. These models are used to automatically generate the corresponding
XML file or code. The associations between the artefacts generated from the
model and the models are specified using the dotted lines. A configuration variant
will eventually be chosen by a runtime selection between alternative component
configurations. This is performed according to the triggers and environmental
conditions which are specified in the transition diagrams explained in the next
section.
Figure 5.11: Component framework models, variability model, and inter-dependencies between artefacts
Environment and Context Variability
This section explains the transition diagram that corresponds to the management
of the Environment and Context Variability described in Section 4.4.2. In this
sense, the models of the case study are extended to show how the reconfigurations
from one role to another are specified.
Figure 5.12 shows an example of a transition diagram that guides the specified
reconfigurations. Transition diagrams allow the specification of the triggers and
the reconfiguration actions involved. The triggers are associated with the arcs
between the variants.
Figure 5.12: Transition diagram model for the Service Discovery Application (a screenshot of the Genie tool)
The following examples illustrate some of the reconfiguration opportunities
that are specified in the transition diagram of Figure 5.12 (Bencomo et al. (2008a)).
Example 1: Nodes operating service discovery protocols might run consensus
algorithms periodically to re-elect the DA nodes in charge of giving directory
services to other nodes. Therefore, if a UA node has been elected as a DA,
the node should be reconfigured to match its new role. Pseudo code for the
reconfiguration policy that guides this adaptation is as follows:
if ( Elected-DA ) then
reconfigure(UA,DA)
end
Example 2: If a DA node has low battery and it was originally a node with
the UA role, the node should be reconfigured to its original UA configuration.
The same can happen if, after the consensus algorithm to re-elect the DA nodes,
another node is elected. The policy is as follows (RUA describes the condition
“required UA”):
if ( !Elected-DA || (Low-Battery && RUA) ) then
reconfigure(DA,UA)
end
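The two policies above can be sketched as a small evaluator. The condition names (Elected-DA, Low-Battery, RUA) follow the thesis; the function names, the dictionary-based environment, and the placeholder reconfigure call are illustrative assumptions, not the actual Gridkit implementation.

```python
# Hypothetical sketch of the two reconfiguration policies above.
# Condition (trigger) names follow the thesis; everything else is assumed.

def select_role(current_role, env):
    """Return the target role, or None if no reconfiguration applies.

    `env` maps condition names to booleans, e.g.
    {"Elected-DA": False, "Low-Battery": True, "RUA": True}.
    """
    # Example 1: a UA node elected as directory agent becomes a DA.
    if current_role == "UA" and env["Elected-DA"]:
        return "DA"
    # Example 2: a DA node reverts to UA when it loses the election,
    # or when its battery is low and the UA role is required (RUA).
    if current_role == "DA" and (
        not env["Elected-DA"] or (env["Low-Battery"] and env["RUA"])
    ):
        return "UA"
    return None

def reconfigure(source, target):
    # Placeholder for the middleware's reconfiguration primitive.
    print(f"reconfigure({source},{target})")

role = "DA"
target = select_role(role, {"Elected-DA": False, "Low-Battery": False, "RUA": True})
if target is not None:
    reconfigure(role, target)  # prints reconfigure(DA,UA)
```

In the real platform these decisions are encoded as generated reconfiguration policies rather than hand-written conditionals; the sketch only makes the trigger logic explicit.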
Using the information specified in the transition diagram model, the associ-
ated reconfiguration policies are generated. To do this, a generator traverses
the transition diagram and produces the text shown below. The generated text
contains the source, target, and triggers for all the reconfigurations.
1
SDP
State DA ( DA ) goes to
UA using condition (Low-Battery && RUA) || (!Elected-DA)
SA using condition (!Elected-DA) || (Low-Battery && RSA)
State SA ( SA ) goes to
UA using condition (RUA)
DA using condition (Elected-DA)
State UA ( UA ) goes to
DA using condition (Elected-DA)
SA using condition (RSA)
The text shown above is processed using regular expressions to finally generate
all the reconfiguration policies associated with the transition diagram of Figure 5.12.
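The regular-expression step can be sketched as follows. The report grammar assumed here is reconstructed from the generated text shown above ("State X ( ... ) goes to" followed by "Y using condition C" lines); the helper names and the exact patterns are illustrative assumptions, not the generator's actual source (which is given in Appendix B).

```python
import re

# Hypothetical sketch: parse the generated report into
# (source, target, condition) triples, from which individual
# reconfiguration policies can then be emitted.
REPORT = """
State DA ( DA ) goes to
UA using condition (Low-Battery && RUA) || (!Elected-DA)
SA using condition (!Elected-DA) || (Low-Battery && RSA)
State SA ( SA ) goes to
UA using condition (RUA)
DA using condition (Elected-DA)
State UA ( UA ) goes to
DA using condition (Elected-DA)
SA using condition (RSA)
"""

STATE_RE = re.compile(r"^State (\w+) \( .* \) goes to$")
ARC_RE = re.compile(r"^(\w+) using condition (.+)$")

def parse_report(text):
    triples, source = [], None
    for line in filter(None, (l.strip() for l in text.splitlines())):
        if m := STATE_RE.match(line):
            source = m.group(1)          # current source state
        elif (m := ARC_RE.match(line)) and source:
            triples.append((source, m.group(1), m.group(2)))
    return triples

policies = parse_report(REPORT)
assert ("SA", "DA", "(Elected-DA)") in policies
assert len(policies) == 6  # six arcs in the transition diagram
```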
Figure 5.13 shows another view of the transition diagram for this case study.
A specific reconfiguration policy for Example 1 explained above is shown. The
figure also shows the variation point for the specification of the structural variants
described in Section 5.4.3 above. The source code of the generator is shown in
Section B.3 of Appendix B.
Figure 5.14 gives an overview of the two dimensions of dynamic variability for
the case of Service Discovery Protocols. The Genie tool allows the modelling and
generation of the artefacts related to the two dimensions of dynamic variability
identified by the approach. During the execution of the system, the environment
and context variables are continuously consulted to perform the pertinent recon-
figuration, in order to maintain the system in a sensible configuration. Using
Figure 5.13: Variability Model and Adaptation Policies
the Genie tool and with the runtime support of the middleware platforms, new
reconfiguration policies can be created and added during the execution of the
application. Therefore, the behaviour of the system will be driven by the newly
added reconfiguration policies (Bencomo et al. (2008b)).
Figure 5.14: Dynamic Variability in the Service Discovery Protocols
As the problem domain of this case study only includes service discovery
aspects, the structural variants are composed of just one component framework.
In contrast, the second case study presents structural variants that
comprise two component frameworks.
5.5 Case Study 2: The Flood Forecasting Application
This section presents the second case study of the evaluation. This case study is
more complex than the first case study and includes the wireless sensor network-
based flooding forecast application introduced in Section 2.6. The application is
supported by the ‘open overlays’ framework which is part of the broader Gridkit
middleware architecture (Grace et al. (2008b)). The open overlays framework
studied in this section helps to overcome the network heterogeneity that can arise
during execution. Section 5.5.1, Domain Analysis, briefly discusses the problem
domain. Section 5.5.2 follows with a description of the architecture commonalities
and variants (configurations) identified. Section 5.5.3 revisits the flood
forecasting application itself. Finally, the variability model and its notation
and application are described in Section 5.5.4. Partial results of the
application of the approach for the flooding forecast application have already
been reported in the following publications (Bencomo et al. (2008b), Goldsby
et al. (2008), and Sawyer et al. (2007b)).
5.5.1 Domain Analysis of the Open Overlays Framework
Dealing with heterogeneity is one of the main goals of middleware. In this sense,
the fact that distributed applications increasingly need to integrate a wide range
of networking technologies has made network heterogeneity an important topic
in middleware research. A successful solution to deal with this kind of
heterogeneity is the use of network overlays (El-Sayed et al. (2003)).
Overlay networks are virtual networks that are logically 'laid over' an underlying
physical network (Grace et al. (2006a), Grace et al. (2008b)). These networks
basically have two parts: one that maintains a virtual network topology, and
another that routes messages over this virtual topology. At Lancaster, the Mid-
dleware Research Group has investigated the development of 'open overlays'
(Grace et al. (2004)) to address dynamic reconfigurability of overlays. This
has resulted in the design of the open overlays framework (Grace et al.
(2005b)). This framework accommodates overlay plug-ins to provide different
network services using an approach to configurable deployment and dynamic re-
configurability. The framework has been under progressive development and in use
for a number of years (Grace et al. (2006b)), and has proved especially relevant
in sensor networks (Hughes et al. (2006a), Hughes et al. (2006b), Grace et al.
(2008b)). Different communication services such as data streaming, P2P resource
discovery and application-level multicast can be offered by the appropriate overlay
configurations. Following the component framework-based philosophy, multiple
overlay personalities can be plugged in after deployment. In this sense, and ac-
cording to the application requirements, the overlay component framework can
be customized to implement the behaviour required by (i) a centralized spanning
tree using Dijkstra's shortest path algorithm, (ii) a decentralized shortest path
tree using the Bellman-Ford algorithm, (iii) a fewest hops tree, or (iv) a
proximity-aware overlay (Grace et al.
(2006b)). The next subsection shows the commonalities and variability offered
by the architecture of the open overlays component framework which can be
exploited by the Genie approach.
5.5.2 Commonalities and Variabilities
The open overlays framework is an OpenCOM component framework that is
deployed on each participating node in the distributed application (Grace et al.
(2008b)). This framework allows overlay-related components that implement
different types of behaviours to be plugged into the framework. As an example
of a possible configuration of the open overlays component framework, Figure
5.15 shows four overlay plug-ins: overlay Tree Building Control Protocol TBCP
(Mathy et al. (2001)), Scribe (Castro et al. (2002)), Chord Distributed Hash Table
(DHT) and Chord Key-Based Routing (KBR) overlay (Stoica et al. (2001)). As
seen in the figure, multiple overlays can operate at the same time inside the
framework either separated (as in the case of TBCP and Scribe plug-ins) or in
stacks (as in the case of the Scribe and Chord DHT which are both stacked
on top of the Chord KBR plug-in) (Grace et al. (2008b)). The overlay plug-in
abstraction can be applied uniformly all the way through the communication
stack. For example, transport protocols like TCP and UDP are also overlay plug-
ins in the figure. Furthermore, an AODV (Ad hoc On-demand Distance Vector)
overlay plug-in may be provided in the network layer in a MANET (Mobile ad-
hoc Network) environment. The abstraction can also be applied at the level of
the physical network.
Figure 5.15: An example configuration of the open overlays framework, from (Grace et al. (2008b))
After investigating the design of reconfigurable overlay networks and also ex-
ploring different types of reconfigurations, a three-element architecture for over-
lay plug-ins, called the overlay pattern, has been designed. Each overlay plug-in
that follows this architecture is itself a ‘mini’ component framework (Grace et al.
(2008b)). The reader should remember that following the OpenCOM philoso-
phy, component frameworks are also components. Each overlay plug-in follows
the common architecture of the overlay pattern. This pattern presents three
separate software components (Grace et al. (2004)):
i. the control component that encapsulates the distributed algorithm used
to build and maintain the overlay-specific virtual network topology.
ii. the forwarding element that determines how the overlay will route mes-
sages over the virtual topology.
iii. the state component that maintains and offers access to generic state
information of the overlay, such as a nearest neighbour list.
Each of these three components exposes the standard interfaces IControl,
IForward, and IState. These interfaces enable the composition of overlays as
explained above. The rationale behind the overlay pattern is to achieve both
configuration and dynamic reconfiguration by allowing both control and forward-
ing components to be independently replaced without loss of state information
(Grace et al. (2004)). Furthermore, as each of the three components can itself be
a component framework, the overlay pattern forms the basis for further decom-
position in a recursive way (Grace et al. (2008b)).
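The three-part overlay pattern can be sketched with abstract interfaces. The interface names IControl, IForward, and IState follow the thesis; the Python class shapes and method names are illustrative assumptions (the real framework is OpenCOM component-based, not Python).

```python
from abc import ABC, abstractmethod

# Hypothetical sketch of the overlay pattern. Interface names follow
# the thesis; method names and class structure are assumptions.

class IControl(ABC):
    """Builds and maintains the overlay-specific virtual topology."""
    @abstractmethod
    def create(self): ...
    @abstractmethod
    def join(self, node_id): ...
    @abstractmethod
    def leave(self, node_id): ...

class IForward(ABC):
    """Routes messages over the virtual topology."""
    @abstractmethod
    def route(self, msg, dest): ...

class IState(ABC):
    """Offers access to generic overlay state, e.g. a neighbour list."""
    @abstractmethod
    def neighbours(self): ...

class OverlayPlugin:
    """A 'mini' component framework composed of the three elements.

    Control and forwarding can be replaced independently while the
    state component preserves topology information across changes.
    """
    def __init__(self, control: IControl, forwarding: IForward, state: IState):
        self.control, self.forwarding, self.state = control, forwarding, state

    def replace_forwarding(self, new_forwarding: IForward):
        # Dynamic reconfiguration: swap the routing behaviour
        # without touching the state component.
        self.forwarding = new_forwarding
```

The design choice the pattern illustrates is separation of concerns under reconfiguration: because state lives in its own component, replacing a control or forwarding component does not lose topology information.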
Using the open overlays component framework, different types of reconfigu-
rations can be performed. The reconfigurations involve the replacement of either
forwarding or control component and are usually performed on every node of the
overlay (Grace et al. (2006a)). Examples of reconfigurations are:
- Topology reconfiguration: in this case control components on every node
are changed to create a different topology. This type of reconfiguration results in
changes to the performance of the network as the resources become (un)available.
- Dependability reconfiguration: in this case control components are reconfig-
ured on selected nodes of the overlay network to reinforce dependability. This
can imply the replacement of algorithms to deal with node failures.
- Routing reconfiguration: in this case, forwarding components can be replaced
on every node of the network to update the routing algorithms.
In the context of the open overlays framework, both local reconfigurations (i.e.
in a given node) and distributed reconfigurations (i.e. across several nodes of the
network) can be performed. Distributed reconfigurations rely on OpenCOM’s
distributed component framework (DCF) facility (Grace et al. (2006a)). DCFs
are defined as “coordinated sets of local component framework instances that are
spread across a set of coordinated nodes” (Grace et al. (2008b)). In this way,
the DCF facility for the case of the TBCP overlay plug-in in Figure 5.15 would
contain an instance of the TBCP component framework plug-in for every node
in the overlay. As seen above, the common architecture of the DCFs supports
dynamic reconfiguration both at a coarse-grained and a fine-grained level.
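A DCF can be sketched as a coordinator holding one local framework instance per participating node, so that a distributed reconfiguration is a coordinated action over all instances. The class and method names below are illustrative assumptions; OpenCOM's actual DCF facility is described in (Grace et al. (2006a)).

```python
# Hypothetical sketch of a distributed component framework (DCF):
# coordinated local framework instances, one per participating node.

class DistributedComponentFramework:
    def __init__(self, framework_name):
        self.framework_name = framework_name
        self.instances = {}  # node id -> local framework instance

    def deploy(self, node_id, local_instance):
        self.instances[node_id] = local_instance

    def reconfigure_all(self, action):
        """Distributed reconfiguration: apply `action` on every node,
        e.g. replacing the control component to change the topology."""
        for node_id, instance in self.instances.items():
            action(node_id, instance)

dcf = DistributedComponentFramework("TBCP")
dcf.deploy("node-1", {"control": "SP"})
dcf.deploy("node-2", {"control": "SP"})
# Topology reconfiguration: switch every node to fewest-hops control.
dcf.reconfigure_all(lambda nid, inst: inst.update(control="FH"))
assert all(inst["control"] == "FH" for inst in dcf.instances.values())
```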
5.5.3 An application of the open overlays framework: revisiting the Flood Forecasting Application
This section discusses the application of the open overlays framework in the im-
plementation of a prototype for a real-world scenario: a wireless sensor network
for real-time flood forecasting in a river valley in the North West of
England (Grace et al. (2006b), Grace et al. (2008b)). This application constitutes
a good case study for the description of the dynamic reconfigurability capabilities
offered by the open overlays framework and therefore the benefits of the appli-
cation of the Genie approach. The nature of this case study is different from
case study 1, demonstrating the generality of the Genie approach. The appli-
cation related to the case study has already been partially presented in Section
2.6. However, a summary of the application is given in this section taking into
account the new discussion and the sections presented above.
Context
The application monitors water depth and flow rate in the river using a network
of specialised sensor nodes installed along the bank of the river. Each sensor
node is known as a ‘GridStix’. About 15 nodes have been deployed (Grace et al.
(2008b)). Sensors route the data collected in real-time using a spanning tree
topology to one or more designated root nodes. From these nodes, the data is
forwarded (via GPRS) to a prediction model that runs on a remote computational
cluster.
Each sensor node includes a 400MHz XScale CPU, 64MB of RAM, 16MB of
flash memory, and Bluetooth and WiFi network interfaces. The designated root
nodes are also equipped with GPRS. Each GridStix is powered by a 4 watt solar
array and a 12V 10Ah battery. They run Linux 2.6 and the Java virtual machine
1.4. In contrast to conventional sensor network deployments, where sensors are
simply responsible for transmitting sensor data off-site, this deployment permits
the use of local processing. Local processing supports computation for the local
prediction of future environmental conditions. This functionality requires heavy
support for heterogeneous network technologies. However, a trade-off has to
be made: on the one hand, networking support must be power efficient to
facilitate the operation of nodes for extended periods of time. On the other hand,
applications such as image-based processing for flow prediction also require high
performance (and therefore power-demanding) networking support. The above
should also take into account varying resilience requirements. In that respect,
during quiescent periods and when flooding is unlikely, data may reach the off-
site cluster with a tolerably high delay. During these quiet periods, low energy
consumption is a major requirement to maximize the lifetime of the nodes. By
contrast, when a flood is imminent, the network should react promptly, while
providing a high degree of resilience (e.g. a low sensitivity to disruptions), even if
this means its energy supplies run down much more quickly. Section 2.6.2 briefly
explains the overall performance of the application in terms of latency, resilience,
and power consumption. The results of the derivation of the requirements based
on those discussions are partially shown in (Grace et al. (2008b), Goldsby et al.
(2008), and Sawyer et al. (2007b)).
Reconfiguration Opportunities
An overview of the reconfiguration opportunities that were identified after the
analysis of the metrics named above is shown in Figure 5.16.
The figure shows the three possible states of the river that were identified,
Normal, Alert, and Emergency, and the reconfigurations the application can
go through. Reconfiguration is supported in two dimensions, at the physical
network level and at the routing level, both offered by the Spanning Tree and
Network component frameworks which can be plugged into the Overlay Frame-
work (Grace et al. (2008b)).
State 1 - Normal conditions: as the sensor network is working under
normal conditions (quiescent state), the overlay provides a shortest path tree (SP)
connected using a Bluetooth (BT) network. Both options are power-efficient.
State 2 - Alert: some nodes have attached video cameras pointing at the
river surface. Using simple image processing of the images of the river, an esti-
mation of the river flow rates is carried out and the condition High-Flow may be
triggered. If the High-Flow condition appears and the battery life of the nodes is
Figure 5.16: Reconfiguration diagram, based on (Grace et al. (2006b) and Grace et al. (2008b))
not high, the connection to be used is WiFi. The spanning tree topology to be
used is shortest path tree (SP).
State 3 - Emergency: the prediction models generate an event stating that
flooding is about to happen. Hence, the sensor network adapts itself to become
less vulnerable to node failure. Therefore, the nodes configure themselves into a
new topology, a fewest hops tree (FH).
The triggers High-Flow and Flood-Predicted are not directly given by the
environment but are determined by the application. High-Flow is inferred from
the observations of video cameras attached to some nodes. The flow rates of the
river are estimated using the images taken with the cameras. Flood-Predicted is
provided by the prediction model.
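The river states of Figure 5.16 and their triggers can be sketched as a small transition function together with the state-to-configuration mapping. The state names, trigger names, and (spanning tree, network) pairs follow the thesis; the table encoding and function names are illustrative assumptions.

```python
# Hypothetical sketch of the river-state transitions of Figure 5.16.
# Each state maps to its (spanning tree, network) configuration.
CONFIG = {
    "Normal": ("SP", "BT"),        # power-efficient quiescent setup
    "Alert": ("SP", "WiFi"),       # higher-bandwidth network
    "Emergency": ("FH", "WiFi"),   # resilient fewest-hops topology
}

def next_state(current, high_flow, flood_predicted):
    """Select the target river state from the application-level triggers.

    `current` is carried for interface completeness; in this sketch the
    target depends only on the Flood-Predicted and High-Flow triggers.
    """
    if flood_predicted:
        return "Emergency"
    if high_flow:
        return "Alert"
    return "Normal"

state = next_state("Normal", high_flow=True, flood_predicted=False)
assert state == "Alert" and CONFIG[state] == ("SP", "WiFi")
```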
Figure 5.17 shows the configuration of the overlays framework and the overlay
plug-ins used for the case study. The overlay plug-ins with dashed boxes to the
right are the possible alternatives for the configuration. This case study only
takes into account the Spanning Tree and Network component frameworks
highlighted in Figure 5.17.
Figure 5.17: Configuration of the open overlays framework for the case study 2
5.5.4 Modelling Variability
The application presented above gives a good example of the high degree of dy-
namic variability in Gridkit. As in the first case study, the increasing number
of variants and their relationships poses the need for a structured management
of dynamic variability. The next two subsections present how the proposed ap-
proach addresses the management of dynamic variability exposed by the families
of overlay networks described above.
Structural Variability
In terms of the approach, each state of the river identified above corresponds to a
structural variant. In contrast to the first case study, here, each structural variant
is composed of two component frameworks which are associated with the open
overlays component framework, specifically the Spanning Tree and the Network
frameworks. The Spanning Tree supports the routing algorithm to be used when
transmitting data between the nodes, and the Network component framework
specifies the network technology to be used (see Figure 5.17).
As seen in Chapter 4, these component frameworks have associated configura-
tion variants that will be instantiated at runtime. The Spanning Tree component
framework has two possible variants: Shortest Path (SP) and Fewest Hops (FH).
The Network component framework offers two possible variants: BlueTooth (BT)
and WiFi. The 2-tuples identified that correspond to the structural variants
used in the case study are (SP,BT), (SP,WiFi), and (FH,WiFi) (Bencomo et al.
(2008a), Bencomo et al. (2008b)).
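The structural variants can be sketched as a small legality check over the candidate 2-tuples. The three legal tuples follow the thesis; the fourth combination, (FH,BT), is simply not among the variants listed, and the helper name and set encoding are illustrative assumptions mirroring the role of the XML configuration files described later.

```python
# Hypothetical sketch: only three of the four possible 2-tuples are
# legal structural variants in this case study.

SPANNING_TREE = {"SP", "FH"}   # Shortest Path, Fewest Hops
NETWORK = {"BT", "WiFi"}       # Bluetooth, WiFi

LEGAL_VARIANTS = {("SP", "BT"), ("SP", "WiFi"), ("FH", "WiFi")}

def is_legal(tree, net):
    """Check a candidate (spanning tree, network) configuration
    against the set of legal structural variants."""
    return tree in SPANNING_TREE and net in NETWORK \
        and (tree, net) in LEGAL_VARIANTS

assert is_legal("FH", "WiFi")
assert not is_legal("FH", "BT")  # (FH,BT) is not a listed variant
```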
Figure 5.18 shows the architecture of the overlay plug-ins (overlay pattern).
Based upon the common architecture of the overlay pattern, individual overlay
plug-ins are developed as configurations of component frameworks (variants) that
are composed of three components, control, forwarding, and state, which were
described above. The three components interact within the individual plug-in as
shown by the bi-directional arrows in Figure 5.18.
The exported interfaces and receptacles are used to express interdependencies
and interactions with other overlay implementations. The control and forward-
ing components can be directly used by other overlays as the standard exported
interfaces permit. The state component “does not particularly lend itself to com-
monality” (Grace et al. (2004)), therefore there is no standard interface to be
exported in its case.
The control component exposes the IControl interface which offers the com-
mon operations to create, join and leave an overlay. It is the implementation
of the component which determines how these operations are used to create the
configuration between the nodes. The forwarding component has operations to
route messages to nodes in the overlay, send messages to neighbour nodes, and
receive any incoming messages. The control component also exports an IForward
receptacle to forward control messages using its own, or a different overlay's, for-
warding mechanism. In the same way, the control component exports an IDeliver
interface. This interface is used by lower-level overlays to pass messages to the
control component atop. Finally, the forwarding component exports an IForward
receptacle to forward messages using the underlying implementation.
Figure 5.18: Overlays Pattern - Plug-in Architecture
As seen in Section 5.5.2, a key feature of the open overlay component frame-
work’s architecture is that it does not impose a fixed layered architecture. Instead,
the overlays can depend upon one another using arbitrary configurations. How-
ever, it must be guaranteed that the configurations used are sensible and correct.
This is achieved by maintaining a set of configuration files defined in XML,
which describe the legal variants to be used. Notably, the different configuration
variants can be constructed using the DSML for the description and design of the
Structural Variability provided by the Genie tool.
Figure 5.19 presents a screen shot of the model created using the Genie tool
for the Fewest Hops (FH) variant of the architecture described above. This
figure shows only 3 components, a couple of bindings, and 4 exported interfaces.
However, the corresponding XML file of the configuration variant also includes
other components used by the middleware platform. Such components do not
have to be included in the model as they add no value in terms of analysis of the
model and are just part of the invariant part of the configuration. Excluding these
components from the model lessens complexity during the design and analysis of
the configurations.
Figure 5.20 presents a screen shot of the model created using the Genie tool
for the WiFi Variant of the Network component framework. The corresponding
Figure 5.19: Fewest Hops Variant for the Spanning Tree component framework
file of this model also presents an invariant part that does not need to be included
in the model.
Figure 5.20: WiFi Variant for the Network component framework
The solution proposed above simplifies the configuration and dynamic recon-
figuration by reusing the common open overlay pattern and its abstractions in a
recursive way. The abstractions used are simple and mainly represented by the
three core elements of the patterns and the standard interfaces and receptacles
offered and required.
Environment and Context Variability
The way different variants of the component frameworks are chosen at runtime
will depend on the variations of conditions in the environment and context that
were considered relevant during the derivation of requirements (Sawyer et al.
(2007b), Grace et al. (2008b), Goldsby et al. (2008)). Such variations are cap-
tured using the models of the transition diagrams. These models are constructed
using the DSML offered by the Genie tool. During execution, the architecture
of the system will evolve according to such specifications. In other words, the
places where the ongoing architecture can be changed and the consequences of
the changes will be driven by the transition diagrams.
Figure 5.21 shows examples of the configuration policies that are invoked
when the specified monitoring conditions are met. One of the advantages of
using transition diagram models is that they offer the user a complete view of
the reconfiguration opportunities of the system. The architectural
perspective offered by these models shifts focus away from the source code of
isolated policies to the whole view of the reconfiguration opportunities and the
component frameworks involved. Different stakeholders can abstract away irrel-
evant implementation-related details and focus on the big picture: the system
structure and its runtime change. This contrasts with the partial view
of working with individual policies using traditional approaches.
Figure 5.22 shows a screen shot of the Genie tool with the model of the
transition diagram for the example of the flooding forecast application.
The variation points that implement the trace relationship between the struc-
tural variant and the variants of the component frameworks can also be specified
using the tool. In this context, Figure 5.23 shows the variability diagrams of the
variants in the example of the flooding forecast application.
The three structural variants, Normal, Alert, and Emergency are associated
with the variation point VP:Flood App marked by (a). The transition diagram
specifies how each structural variant in the graph is described using two compo-
nent frameworks (Spanning Tree and the Network component framework). The
different variants of the Spanning Tree and the Network component frameworks
are specified using the variation points pointed by (b) and (c).
Figure 5.21: The transition diagram of the flood forecasting application and some generated reconfiguration policies
The variability models have been particularly useful when managing the trace-
ability relations between the artefacts that populate the different levels of abstrac-
tion proposed by the Genie approach, specifically between the structural variants
in the transition diagrams (level 3) and the component framework configurations
(level 2) of Figure 4.3. These traceability relationships are fundamental to keep
a clean mapping between design models and the reconfiguration rules depicted
in Figure 5.21. The generation of reconfiguration policies is supported by this
mapping. The generator associated with the transition diagram produces the
output text given below. The generated text contains the source, target, and
triggers for all the reconfigurations.
Figure 5.22: Transition diagram model in Genie
2
Network
Spanning Tree
State Alert ( SP WiFi ) goes to
Normal using condition (!High_Flow)
Emergency using condition (Flood_Predicted)
State Emergency ( FH WiFi ) goes to
Alert using condition (!Flood_Predicted && High_Flow)
Normal using condition (!Flood_Predicted && !High_Flow)
State Normal ( SP BT ) goes to
Emergency using condition (Flood_Predicted)
Alert using condition (High_Flow)
As in the first case study, the output text shown above is processed using
regular expressions to generate all the reconfiguration policies associated with
the transition diagram. The source code of the generator is shown in Section B.3
of Appendix B.
Figure 5.23: Variability and Transition Diagrams
Figure 5.24 shows an overview of the two dimensions of dynamic variability
for the case of the flood forecasting application. As discussed above, the prob-
lem domain of this case study includes routing algorithm and network technology
concerns; therefore the structural variants are composed of two component frame-
works. Using the Genie tool, the user models and generates the artefacts related
to the two dimensions of dynamic variability. During the execution of the system,
variables associated with the problem domain in the environment and context
are continuously consulted to perform the appropriate reconfigurations.
Figure 5.24: Dynamic Variability in the Flood Forecasting Application
Furthermore, new reconfiguration policies can be created and added during
the execution of the application. Using the support provided by the middleware
platform, the behaviour of the system will correspond to the newly added reconfig-
uration policies (Bencomo et al. (2008b)). The Genie approach makes explicit the
support the middleware platforms provide for separating system evolution and
system adaptation as two simultaneous processes in self-adaptive software
(Oreizy et al. (1999)). System evolution ensures the consistent application of
change over time, and system adaptation focuses on "the cycle of detecting changing
circumstances and planning and deploying responsive modifications" (Oreizy
et al. (1999)).
It should be noted that adding a set of reconfiguration policies from the Genie
Tool using the transition diagrams is different from the addition of an isolated
policy. The transition diagrams offered by the approach facilitate the inclusion
of a validation process and the detection of conflicts to guarantee a valid set of
reconfiguration policies. This topic is revisited for further discussion in
Section 6.5.4 of the Future Research Agenda.
5.6 Discussion
The above case studies have demonstrated the application and advantages of the
Genie approach to meet the main goal of this thesis, which was stated in Section
1.2. To pursue the main goal, three strategic objectives were defined that address
each of the key issues identified and discussed in Section 1.1.1. This section
elaborates further and discusses how the key issues identified in Section 1.1.1 are
addressed by the approach, hence meeting the objectives:
5.6.1 Providing higher levels of abstraction
Without an approach like the solution supported by Genie and its associated
tool prototype, developers work at the low level of abstraction provided by the
syntax of scripting languages or programming languages like Java or C++. Pro-
gramming languages do not convey either architecture-based design or domain
semantics. The proposed approach, as seen in the case studies, allows the de-
velopers to work at a higher level of abstraction using models that specify the
component frameworks (configurations) and their reconfigurations. The use of
models that show graphical representations of architectural concepts, as shown
in the component configurations in Figures 5.10 and 5.19, reduces effort during
the lifecycle. For example, the configuration of the model of Figure 5.10 of case
study 1 shows 6 components and 12 bindings, and the generated XML file that
describes the configuration has 105 lines. Figure 5.19 of case study 2 shows just 3
components, 2 bindings, and 4 exported interfaces; however the generated XML
file that describes the configuration has 102 lines. This XML file includes stan-
dard bindings (connections) to central components of the middleware platform
that are included in the report that generates the XML file but are not shown in
the model as they are part of the invariant part of the file and are irrelevant for
the designer. In both cases the effort needed when analyzing the configurations
and their connections is lessened using the models provided by the Genie tool.
This is because these models are used to hide information that is not relevant for
analysis and design. For example, the model of Figure 5.19 shows just three com-
ponents, but the XML file generated from this model makes reference to other
components that are part of the implementation of the open overlays pattern.
However, it is not only the management of architectural (i.e. structural) as-
pects of configurations and components that can be realized with the DSMLs
of the approach. Specification of dynamic adaptive behaviour in terms of re-
configurations is also possible. With the approach, middleware developers use
abstractions provided by the transition diagrams to reason about, plan, and validate
the reconfigurations. When the developer edits a transition diagram, she has an
overview of all the reconfigurations that an application can go through. Tran-
sition diagrams offer an overall graphic-based view of the whole process of re-
configurations, while also depicting the triggers or conditions that initiate the
reconfigurations.
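As a concrete illustration of the kind of information an arc in a transition diagram carries, the following sketch models each arc as plain data and derives one on-event-do-action policy per arc. All names here (`Arc`, `policyFor`, the trigger and variant labels) are hypothetical and do not reflect Genie's actual notation or the generated policy syntax.

```java
// Hedged sketch (not Genie's actual API): a transition diagram captured as
// plain data, with one on-event-do-action policy string derived per arc.
import java.util.List;

public class TransitionSketch {
    // One arc of the diagram: source variant, target variant, trigger condition.
    public record Arc(String from, String to, String trigger) {}

    // Derive the on-event-do-action policy text for a single arc.
    public static String policyFor(Arc a) {
        return "on " + a.trigger() + " do reconfigure " + a.from() + " -> " + a.to();
    }

    public static void main(String[] args) {
        // Two arcs a fire-detection scenario might contain (illustrative only).
        List<Arc> diagram = List.of(
            new Arc("Normal", "Flooding", "fireDetected"),
            new Arc("Flooding", "Normal", "fireCleared"));
        diagram.forEach(a -> System.out.println(policyFor(a)));
    }
}
```

Holding all arcs in one structure is what gives the developer the overall view the text describes: the full set of variants, triggers, and reconfigurations is visible at once rather than scattered across individual policy files.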
As examples of the information that a transition diagram can convey, consider the transition diagrams of Figures 5.12 and 5.22. Both diagrams show 3 structural variants and 6 possible reconfigurations (arcs), which represent 6 different reconfiguration policies in each case (it is just by chance that the number of policies is 6 in both cases). In the first figure the reconfiguration policies are
associated with just one component framework while the reconfiguration policies
of the second figure are associated with two different component frameworks.
The overall view offered by the transition diagrams described above contrasts
with the partial text-based view offered by each reconfiguration policy. Using only partial views makes it very likely that developers overlook, or simply lose sight of, important interdependency relationships. Overlooking dependencies
can cause failures and inconsistencies during execution. Furthermore, identifying
the source of the error may require significant effort and time. In this sense,
as transition diagrams reduce the cognitive distance (as defined in Section 3.3),
they can be useful for avoiding these kinds of errors, as they are more amenable to analysis. Another advantage of these models is that transition diagrams resemble state-transition models, which are a widely used and well-understood notation.
The use of transition diagrams has provided evidence of improved interaction and communication among programmers, architects, requirements engineers,
and domain experts. This is supported by the work on goal modelling to derive
the requirements of the environment variability and required system behaviour
associated with case study 2 and reported in (Sawyer et al. (2007b), Goldsby et al.
(2008)). These requirements are eventually mapped onto the structural variants
and transition diagrams in Genie. Potential extension of the Genie approach to
cover the requirements derivation is further explained in Section 6.5.1.
5.6.2 Providing better software automation levels
As seen in the case studies, automatic generation of artefacts using models is
also possible. The approach allows the generation of software artefacts for the
configuration variants ruled by the architecture of component frameworks and
the reconfiguration policies. In both the case of the generation of the XML files
for the component frameworks and the generation of the reconfiguration policies,
100% of the code is generated. This means that if the structure of these files requires changes, it is enough to update the associated report and re-generate the files from it. This contrasts with the error-prone approach of directly editing the XML files. The above makes the development
and maintenance of the system more efficient. The files that eventually specify
the scripts associated with the reconfigurations are not generated as this requires
specific knowledge that is not encoded in the transition diagrams. However, doc-
umentation to guide the programmer about the source and target configuration
involved in the reconfiguration is generated. Morin et al. (Morin et al. (2008)) elaborate on how Genie offers the basis for automatically generating these scripts at runtime. The generator in both cases plays a crucial role during
development. Generators replace copy-and-paste repetition with a single copy of the appropriate code held in the generator.
In the case of files of components, the generation does not cover 100% of
the code, as the reconfiguration files require specific knowledge that cannot be
encoded in the generators (reports). In this way, a report generates the skeleton
code of the operations of the components. This automation not only makes the process faster and more efficient but also avoids the ad-hoc naming of methods and signatures that is common among programmers when they are not using such a tool. The resultant artefacts share the same style and formatting rules. If
these formatting rules need to be changed, the changes would be made in a few specific models and generators. This helps to avoid errors and to create more consistent artefacts. The case studies covered only the generation of Java code; however, generating component files in another language such as C++ or Ruby would require only the design of one generator for all the skeleton files of the new language.
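To make the skeleton-generation step concrete, the following is the kind of Java skeleton such a report might emit for a component; the class name, method names, and signatures are invented for illustration and are not Genie's actual output.

```java
// Hypothetical example of a generated component skeleton (names invented,
// not Genie's actual output). The generator fixes the class name, method
// names, and signatures so all components share a consistent style.
public class SpreadingPolicySkeleton {
    // Generated operation stub: the programmer fills in the body later.
    public boolean connect(String interfaceName) {
        // TODO: handwritten implementation goes here
        return false;
    }

    // Generated accessor whose name and signature are dictated by the generator,
    // avoiding ad-hoc naming choices by individual programmers.
    public String componentName() {
        return "SpreadingPolicy";
    }
}
```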
As discussed in Section 5.3, the files generated by the Genie tool are saved in the preset locations and folders from which the middleware platform will take them. Again, this contrasts with the error-prone approach of placing the files manually.
Inherited from MetaEdit+, Genie uses protected blocks in the text-based
output. This means (i) manual changes to generated files can be preserved each
time new code is generated and (ii) the programmer who adds handwritten code knows exactly where to add it. In this way, unwanted changes to the generated code are avoided.
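The protected-block mechanism can be pictured as follows; the marker comments are illustrative only, since MetaEdit+'s actual delimiter syntax may differ.

```java
// Illustrative sketch of a generated file with a protected block. The marker
// comments are invented; MetaEdit+'s real delimiters may differ.
public class GeneratedRouter {
    // Regenerated section: overwritten every time the generator runs.
    public String configurationId() {
        return "router-cf-v2";
    }

    public void onReconfigure() {
        // PROTECTED BLOCK BEGIN onReconfigure
        // Handwritten code placed between these markers is preserved
        // when the file is regenerated.
        // PROTECTED BLOCK END onReconfigure
    }
}
```

The markers serve both purposes named in the text: regeneration leaves the delimited region untouched, and the programmer sees exactly where handwritten code belongs.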
5.6.3 Providing structured management of variability
The models designed using the DSMLs offered by the approach support the defini-
tion of strategies for structured software reuse and variability management. The
approach proposes variability notations and the associated models for the explicit
management of the dynamic variability supported by the middleware platforms.
Such notations and models rely on the use of the component frameworks and the
transition diagram models. The strategy proposed differentiates structural vari-
ability and environment or context variability. The architecture defined by the
component frameworks basically describes the structural commonalities. Differ-
ent configurations representing the variants of the component frameworks exist
that follow the well-defined constraints imposed by their respective frameworks.
Essentially, the environment and context variability models allow the specification
of adaptations enabled by the middleware-based policy mechanism for reconfigu-
ration (Bencomo et al. (2008b)).
Using the two dimensions of dynamic variability that are proposed, the ap-
proach separates the application specific functionality offered by the component
frameworks from the adaptation concerns, thereby reducing complexity (Oreizy
et al. (1999), Bencomo et al. (2008a)). The proposed variability management
structures the variability in a standard and repeatable manner (Jaring (2005)).
To demonstrate this, the proposed variability models were applied in two case studies from very different domains. Furthermore, the use of the orthogonal variability
models offers the potential to achieve traceability through the different models
and resultant implementation artefacts, as well as the correspondence between
the changes to the requirements and final behaviour of the system according to
varying environmental conditions.
5.7 Related Work
Traditionally, variability has been resolved at predelivery time (Bencomo et al.
(2008a), Gurp et al. (2001)). In this research, the customization according to
requirements needs to take place postdelivery; that is, at runtime. Different
contributions for the management of runtime variability have been proposed.
These approaches have mainly been focused on the exchange of runtime enti-
ties, parametrization, inheritance for specialization, and preprocessor directives
(Goedicke et al. (2002, 2004), Posnak & Lavender (1997), Svahnberg et al. (2005)).
The Genie approach uses architectural modelling concepts to manage whole sets
of components, their connections and semantics. In this sense Genie is more
coarse-grained than the approaches cited above. Working with the architectural descriptions of whole sets of components provides higher levels of abstraction than working with specific individual components, as discussed in Section 4.1. However, the Genie approach is complementary to the finer-grained styles
cited above. For example, for each configuration of components, traditional fine-
grained approaches for the management of variability can also be used to describe
specific component replacements or specializations.
Of particular relevance to this research is the European project MADAM
(Floch et al. (2006), Hallsteinsen et al. (2006)) and the subsequent European
project MUSIC (MUSIC (2008)). As in the case of Genie, both MADAM and
MUSIC use the adaptation capabilities offered by an underlying middleware plat-
form and use coarse-grained variability mechanisms. In the MADAM/MUSIC
approach, variants are treated as configurations, not simply components, in the
same way that component frameworks support variants in Genie. Adaptation
capabilities in each of the three approaches are underpinned by architectural re-
configurations. MADAM also uses on-event-do-action policies. At the time of
writing this thesis MUSIC was still ongoing work and may include other patterns
in the future. Despite the similarities with MADAM and MUSIC, the three approaches are different in several significant ways. In contrast to Genie, neither MADAM nor MUSIC focuses on generative capabilities. Any generative capabilities
in MADAM/MUSIC rely on UML-based tools. In the case of the reconfigura-
tion approach supported at the middleware level during execution, MADAM and
MUSIC have a common reconfiguration pattern based upon utility functions that
use context information (Hallsteinsen et al. (2006)). In Genie, reconfiguration
is explicitly modelled using the transition diagrams to generate reconfiguration
policies. The Genie approach is also more general since the focus of MADAM
and MUSIC is restricted to mobile computing applications, which Genie can also
support (Bencomo et al. (2008a)).
Another relevant work is carried out by researchers at Vanderbilt University.
They have developed a model-driven development tool suite called Component
Synthesis using Model-Integrated Computing (CoSMIC) (Balasubramanian et al.
(2006), Gokhale et al. (2004a)). CoSMIC configures and deploys distributed real-
time and embedded (DRE) systems using quality of service (QoS)-enabled compo-
nent middleware. CoSMIC addresses crosscutting configuration and deployment
concerns at multiple layers of middleware and applications in component-based
DRE systems. CoSMIC has several DSMLs that simplify and automate activi-
ties associated with developing, optimizing, deploying, and verifying component-
based DRE systems. One of the main differences between the Vanderbilt and
Genie approaches is that CoSMIC tackles issues related to configuration and
deployment activities that are performed before execution or when the system
starts up. Unlike CoSMIC, the Genie approach also tackles issues related to
reconfiguration and adaptation that take place at runtime.
Vanderbilt’s researchers have developed the Generic Modeling Environment
(GME) (GME (2006)). GME is a configurable tool to create domain-specific mod-
eling and program synthesis environments. The configuration uses meta-models
that specify the modeling paradigm (i.e. modeling language) of the application
domain. Afterwards, the meta-models are used to automatically generate the
target domain-specific environment. The language used to construct the meta-
models uses the UML class diagram notation and OCL. The Genie approach could
be implemented using GME. GME and MetaEdit+ (the tool used to implement
the Genie approach) use different terminology. However the main concepts are
fundamentally the same. As an example the terms meta-models, domain mod-
els, and specification of DSMLs essentially describe the same concept and their
workflows can be described using similar tasks. However, GME was, at the time
of the implementation of the Genie approach, in continuous evolution. Therefore,
it was not an option for development of the Genie tool.
Wolfinger et al. (Wolfinger et al. (2008)) demonstrate the benefits of the inte-
gration of an existing tool suite to support runtime or dynamic variability with a
plug-in platform for enterprise software. As in the case of this research, automatic
runtime adaptation and reconfiguration are achieved by using the knowledge documented in the variability models. The differences between both approaches exist
mainly because of the different aims of each approach. Wolfinger et al. focus on
enterprise software while in the case of Genie the work covers the grid computing,
mobile computing, and embedded systems domains. For example, while variabil-
ity decisions in (Wolfinger et al. (2008)) are user-centered, variability decisions in
Genie are environment-centered.
Other related work is found in (Sora et al. (2005)). Sora et al. introduce
the concept of composable components which is similar to the concept of compo-
nent frameworks. Sora et al. apply recursive composition according to external
requirements using ADLs that, to some extent, can be equivalent to reconfiguration policies. The approach offered by Sora et al. does not offer reflection capabilities, i.e. their systems cannot reason about their current state or configuration. Reflection offers support to determine where the points for variation are, what the possible set of variations is, or what the state of the system is at
any point in time. Nevertheless, using reflection has a potential negative impact
on performance and integrity of the system. When using reflective capabilities
a trade-off between flexibility and performance has to be carefully considered.
Another crucial difference is that the research of Sora et al. does not focus on
generative capabilities.
Finally, the Genie approach is complemented by the LoREM goal-driven re-
quirements approach (Goldsby et al. (2008)). LoREM supports the formulation
of the requirements for dynamically adaptive systems, helping the analyst to un-
derstand the characteristics of the operational environment and the adaptation
scenarios of such systems. The goal models that are the outcome of LoREM
can be used to derive the DSML models used in Genie (Bencomo et al. (2008a),
Goldsby et al. (2008), Sawyer et al. (2007b)). At the lowest level, the middleware platform underpins the reorganization of the running architecture at runtime, providing particular system support as the requirements imposed by the environment change. In (Goldsby et al. (2008), Sawyer et al. (2007a,b)) the author and
her co-authors explain how the policy mechanisms contribute to providing a clear
trace from user requirements to adaptation requirements (Berry et al. (2005)) and
their implementations. The Genie approach is at the heart of this traceability.
In this sense, the research related to requirements-driven composition in (Sora
et al. (2005)) is similar to the joint work LoREM/Genie. Future research efforts
combining LoREM and Genie are further explained in Section 6.5.1.
5.8 Summary
This chapter has presented an evaluation of the Genie approach using two sub-
stantial case studies. The author has argued how the application of the Genie
approach in the case studies gives sufficient evidence to support the hypothesis of
this thesis. The author has argued how the approach:
i. raises the level of abstraction that developers work at by offering domain-specific modelling languages that support the use of architectural concepts.
The models supported by the approach are more amenable to analysis and
less error-prone. Crucially, the approach fosters communication between
different stakeholders.
ii. improves the automation levels of the software development and therefore
makes the development process more efficient. Generators make effective
use of the abstractions and concepts supported by the DSMLs. The soft-
ware developers work at high levels of abstraction. The complexity that is
concealed by the abstractions is encoded as implementation details of the
generators.
iii. proposes a notation and associated models to address the management of
dynamic variability in a standard and systematic way. The models separate
software-adaptation concerns and application-specific functionality. This
allows the independent analysis and evolution of software adaptation from
software functionality.
The author has also contrasted her research contributions against related re-
search work. The next chapter presents the conclusions of the thesis taking into
consideration the extent to which the work meets the research objectives.
Chapter 6. Conclusions and Future Research Agenda
“All progress is precarious, and the solution of one problem brings us face to
face with another problem.”
- Martin Luther King Jr., ‘Strength to Love,’ 1963, (1929 - 1968)
6.1 Overview of the Chapter
This thesis has investigated the use of system family engineering, model driven
engineering, and generative software development paradigms to produce Genie.
Genie is an approach that offers a structured management of dynamic variability
during development and allows the systematic generation of software artefacts for
reflective middleware platforms and their applications. These software artefacts
are generated from models designed using domain-specific languages. The main
motivation for the proposed approach has been the need to support the develop-
ment of middleware systems and their applications to tackle the following issues:
the low-level of abstraction used by developers, the poor software automation
levels, and lack of a structured management of variability.
This chapter presents a summary of the contributions of the thesis in Section
6.2. The chapter also presents some discussions about the research method used to
produce the results in Section 6.3, and presents some critical remarks in Section
6.4. This chapter also explores how the research can be developed further, in
Section 6.5.
6.2 Claimed Results and Novel Contributions
The following are the novel contributions of the thesis.
6.2.1 A Model-based approach to specify and generate middleware-based software artefacts
The major contribution of this thesis is the model-based approach Genie to spec-
ify and generate middleware-based software artefacts. The approach is comple-
mented by a prototype implementation, the Genie tool. The approach and im-
plementation are partially described in the publications “Genie: Supporting the
Model Driven Development of Reflective, Component-based Adaptive Systems”
(Bencomo et al. (2008b)) and “Reflective Component-based Technologies to Sup-
port Dynamic Variability” (Bencomo et al. (2008a)). Specifically, the Genie ap-
proach describes a systematic process that promotes software reuse that:
i. raises the levels of abstraction that developers work at by supporting the use of models that apply architectural concepts as the medium. Specifically,
the approach classifies the solution space according to three levels of ab-
straction: level 1, with the lowest level of abstraction, corresponds to source
code artefacts, level 2 focuses on architectural elements and corresponds to
component framework-based models, and finally level 3, with the highest
level of abstraction, that focuses on models for reconfiguration-based adap-
tations.
The benefits from raising the levels of abstraction using models are twofold.
First, automation levels are improved as the models allow the specification
and application of repetitive patterns that are used in the generation of
software artefacts. As a result software reuse is encouraged. Second, the
gap between the way requirements engineers, domain experts, software ar-
chitects and programmers operate is reduced, thereby promoting their joint
collaboration. The use of the transition diagrams is an example of architec-
tural models that can be used by any of these stakeholders who usually work
at different levels of abstraction. These models have fostered a collaboration
between researchers from different research groups at Lancaster University
and Michigan State University. This synergy has resulted in a new col-
laboration investigating goal-based requirements for dynamically adaptive
systems. Partial results of this collaboration have already been presented in
the publications “Handling Multiple Levels of Requirements for Middleware-Supported Adaptive Systems” (Sawyer et al. (2007b)), “Goal-Based Modeling of Dynamically Adaptive System Requirements” (Goldsby et al. (2008)), and the Dagstuhl Seminar on Software Engineering for Self-Adaptive Systems: Requirements Section (Cheng et al. (2008)).
ii. offers DSMLs and generative capabilities. Two DSMLs for the de-
sign of models, the OpenCOM DSML and the Transition Diagrams DSML
respectively, are offered by the approach. In essence, these DSMLs sup-
port the specification of structural variability and environment or context
variability respectively.
The OpenCOM DSML allows the construction of models for components
and component frameworks (configurations) that populate abstraction level
2. Essentially, the OpenCOM DSML can be considered as an Architecture
Description Language (ADL) with generative capabilities.
The Transition Diagrams DSML allows the specification of transition di-
agram models that populate abstraction level 3. The transition diagrams
specify the adaptations that the system will undergo according to the con-
ditions of the environment. The architectural consequences of these adap-
tations are also specified in the transition diagrams.
These two DSMLs are the basis of the generative capabilities of the approach. They are used to realize the mappings from the models (at abstraction levels 2 and 3) to the solution implementations (i.e. software artefacts that populate abstraction level 1). Crucially, the approach also
specifies how the generated artefacts can be added to the system at runtime.
The publications “Genie: a Domain-Specific Modeling Tool for the Gener-
ation of Adaptive and Reflective Middleware Families”(Bencomo & Blair
(2006)) and “Models, Runtime Reflective Mechanisms and Family-based
Systems to support Adaptation” (Bencomo et al. (2006)) have reported par-
tial results of the design, implementation, and use of these DSMLs.
iii. proposes a notation and associated models to address the structured management of dynamic variability. The increasing number of variants and their interdependency relationships in applications supported by the flexible middleware families results in significant complexity, which this structured management addresses. The author has identified two dimensions of
dynamic variability, namely Structural Variability and Environment and
Context Variability. Essentially, these dimensions correspond to the levels
of abstraction 2 and 3. This correspondence has been fundamental to
providing the linkage necessary to generate the artefacts. This mapping
is crucial to providing the potential support of traceability which is further
explained in Section 6.5.
iv. proposes an implementation of the variability realization technique
called infrastructure-centered architecture. The work has specified how the
reflective middleware platforms and the concepts of components and com-
ponent frameworks support the necessary dynamic reorganization of the
architecture at runtime. The variability models proposed allow the or-
ganization of whole sets of components and their connections according to
environment conditions. Dynamic reorganization of the architecture, by the insertion/deletion of components (or (re)configurations), is
derived from the variability models described in (iii).
Finally, the author has captured and built upon the experience of the Mid-
dleware Research group at Lancaster to provide a principled approach to support
the development of adaptive applications. Moreover, the approach offers a balance between the required support for unanticipated adaptive capabilities and the assurance of the correct ongoing architecture and state of the system.
6.3 Research Method
As described in Section 1.4, software engineering (SE) does not have a definite, well-accepted research approach. Nevertheless, different versions of the
hypothetico-deductive method are widely used. In the SE area much research
acts as a synthetic discipline in the sense that the goal of the research is often
the creation of an abstract mechanism which is the focus of the hypothesis. The
hypothesis is that the abstract mechanisms proposed offer new benefits in some
contexts. Case studies are therefore commonly used to test the hypothesis. As
explained in Section 1.4, this is the research method used by the author in this
thesis. The hypothesis was stated in Section 1.4, page 6.
To collect evidence to test the hypothesis, the author has developed the Ge-
nie approach and its implementation, the Genie tool. The Genie approach has
been used to support the development and operation of Gridkit, a dynamically
configurable middleware platform. Genie has been applied to two substantial
case studies that are supported by Gridkit. The improvements and benefits of
the development process supported by the Genie approach were comprehensively
illustrated and discussed in Chapter 5. The results of the case studies were shown
to support the hypothesis.
6.4 Critical Remarks and Self-Reflection
This section presents comments and self-reflection on the work in this thesis.
Are the Results General?
Results of research should be applicable outside the research context where they
were produced. In other words they should be general to some degree. The
claims may turn out to be false or irrelevant otherwise. Therefore, good practice
in research is to ask if the results are general enough to support the claims (Booth
et al. (2003), Aagedal (2001)).
As discussed in Section 4.4, the Genie approach proposed in this thesis aims
to be general to a range of adaptive middleware technologies. The approach is
applicable to any middleware platform and application that works with archi-
tectural concepts such as components and component frameworks and which is
able to support dynamic architectural reorganization and which uses a decision-
making process during execution. The approach has been used in the context
of the reflective middleware platforms at Lancaster as shown in the case studies
presented. Genie has also offered the basis of research work in the context of
the European project DiVA (DiVA (2008)) and using the concepts and support
of the Fractal-based middleware platforms (Bruneton et al. (2006), Morin et al.
(2008)). Moreover, the author has presented two different case studies where the
Genie approach was applied. These case studies comprise reconfiguration and adaptation scenarios related to three different domains (networking, routing,
and service discovery).
Are the Results Useful?
It is also important to question the usefulness of the research results. As noted
above, one of the goals of research in SE is to create abstract mechanisms to sup-
port software developers to make the development process more efficient. The
author claims that the results of this thesis are useful towards that general goal
and specific objective. The main contribution of this thesis, as explained in
Section 6.2.1, is an approach that, on the basis of the case studies, raises the
levels of abstraction at which middleware developers work, improves the levels
of automation of the development process, and provides a systematic variabil-
ity management during the development and operation of reflective middleware
platforms and their applications. This lessens complexity and makes the software
development more efficient.
6.5 Future Research Agenda
A desirable aspect of any research is that in addition to providing solutions to
initial issues or questions, it should identify new, well-defined areas of research that allow researchers to carry out further work and eventually produce more useful knowledge and progress. In this section the author identifies different areas of research that require additional work. She also hopes the research efforts involved prove useful.
6.5.1 Traceability from Requirements to Resultant Behaviour
The Genie approach has identified three levels of abstraction that correspond
to the software artefacts, the Structural Variability models, and Environment
and Context Variability models respectively. The approach also identifies the
correspondence necessary to generate the artefacts at abstraction level 1 from
the variability models of abstraction level 2 and 3. This correspondence has been
fundamental to providing the mapping necessary to generate the artefacts. The
approach offers the potential to achieve traceability through the different models
and resultant implementation artefacts using the mapping between the different
levels. Furthermore, the approach can be extended to meet one of the crucial
challenges posed by dynamically adaptive systems (DASs): the need to handle
the correspondence between the changes to the requirements and subsequent final
behaviour of a DAS according to varying environmental conditions (Sawyer et al.
(2007b)).
A new abstraction level 4 associated with Requirement Engineering (RE) mod-
elling can be added on top of the 3 abstraction levels described above. Different
researchers have investigated the use of goal-oriented approaches to model DAS
requirements (Fickas & Feather (1995), Lapouchnian et al. (2005), Berry et al.
(2005), Sawyer et al. (2007b), Wang et al. (2007), Goldsby et al. (2008)). The
author believes research efforts are worthy to find how these approaches can take
advantage of the engineering machinery offered by Genie. Furthermore, variabil-
ity models of level 2 and 3 can potentially be derived from goal-based models
following the style of Genie. The resultant approach would be a model-driven
approach from requirements to implementation. The main objective would be
helping ensure that the resultant behavior is consistent and satisfies the require-
ments using a systematic support for traceability.
6.5.2 Adaptive Systems and Software Product Lines
Software Product Lines (SPLs) and the Genie approach have different yet comple-
mentary purposes. In a sense, a dynamically adaptive system (DAS) developed
using Genie and the reflective middleware platforms can be seen as a software
product line (SPL) that can be dynamically modified after deployment. This
means that new products of the product line (i.e. DAS) can be derived or pro-
duced during runtime. The adaptation infrastructure granted by the middleware
platform provides the reusable assets and the reference architecture of the product
line. This is an approach that some researchers have started to investigate (Kim
et al. (2005), Hallsteinsen et al. (2006), Dhungana et al. (2007a), Wolfinger et al.
(2008)). For SPLs, it is important that the knowledge captured in variability
models is effectively and efficiently communicated among stakeholders during the
product derivation process. It is fundamental that stakeholders be guided all through the derivation process by supporting them in making decisions and by describing the variant-based capabilities of the product line. The variability management of Genie would then ideally support the communication process during
the product derivation. The potential traceability offered by Genie is also crucial
in this context. Conceiving a dynamically adaptive system as a software prod-
uct line shares the benefits of the strategic software reuse that comes with the
variability management of product lines.
6.5.3 [email protected]
During her PhD, the author has been investigating the role and use of models and
model-driven techniques during runtime. Models can be used to check correctness
and consult the current state during execution. A key benefit is that models can
be used to offer a richer semantic support for runtime decision-making related
to system adaptation and other runtime concerns (Bencomo et al. (2007)). The
use of reflection is crucial to having a self-representation of the system that can
be consulted in operation. In the case of Genie, reflection offers support to
determine what the possible variations are, where the points of variation are,
and to establish the state of the system at any point in time.
Requirements reflection (Finkelstein (2008), Cheng et al. (2008)) is a research
topic that studies how the requirements for a software system can be dynamically
observed, i.e. during execution. In order to do this, a model of the requirements
of the system should be maintained while the system is running. The right
associations between such requirements models and the implementation artefacts
should also be taken into account to maintain the dynamic traceability needed.
Future work is needed to examine how technologies may provide the infrastructure
to do this. The author is personally motivated to investigate the role of the
Genie approach, the potential extension with a fourth RE level on top of the
current levels of abstraction, and the use of reflection to offer support to maintain
requirement models that can be consulted at runtime. Using reflection may have
some drawbacks in terms of performance. However, when developing reflective
systems, the trade-off between flexibility and performance has to be studied and
rigorous system development has to be performed.
6.5.4 Other Topics for Future Research
6.5.4.1 Number of Reconfiguration Paths
One of the main results of the approach is a solution to the problem of unan-
ticipated configurations when developing a dynamically adaptive system where
decisions depend on runtime contexts. The solution allows the modelling and
insertion, during execution, of new components, component configurations, and re-
configurations to be used in the adaptations of the system. However, the number
of reconfiguration paths might not be manageable in some problem domains. The
author believes that the combination of the specificity of on-event-do-action poli-
cies and higher-level policies that focus on general properties of the system can
mitigate the problem.
6.5.4.2 Model Validation
The visual DSMLs offered by the Genie tool allow general validations of models
before the generation of the artefacts. The modelling of reconfiguration policies
depends on the domain; the tool could be improved to include domain-specific
validations. Furthermore, the Genie tool should be improved to include
the validation and the detection of conflicts between policies to guarantee the
correct generation of the corresponding artefacts. For example, the tool could
automatically return a list of conflicts that need to be resolved.
6.5.4.3 Improvement of the Genie Tool: Architectural Patterns
The tool should be improved to include the explicit management of the architec-
tures of component frameworks and not just their configurations. In this way,
the new models would correspond to architectural patterns that have related
configurations (variants). This would have potential use in the management
of catalogues based on architectural patterns, which have proven successful (Risi &
Rossi (2004)).
6.5.4.4 Unanticipated Adaptations
Finally, the Genie approach proposed in this thesis deals with dynamic adaptation
based on reconfiguration, which requires the enumeration of all feasible variants of
behaviour. During execution and with the support of the middleware platforms
new variants can be added in a controlled way. Genie offers the required support
for the management of dynamic variability. The author supports the need for more
research into new approaches for dealing with the uncertainty inherent in self-
adaptive systems. In this sense, these new approaches should allow
new variants of behaviour during execution that were not necessarily explicitly
specified in advance.
6.6 Final Remarks
This thesis has presented Genie, an approach that offers principled, model-based
management of dynamic variability in systems that must adapt dynamically to
changing runtime context. Genie offers support to deal with unanticipated con-
ditions that are typical in dynamically adaptive systems using two dimensions of
dynamic variability, namely Structural and Environment and Context variability.
The approach relies on technologies that work with the architectural concepts
of components and component frameworks and which are able to support the
dynamic decisions and architectural reorganization required.
The author has provided insight into the role of models for dynamic variabil-
ity and dynamic architectures when developing adaptive systems that have to
meet requirements which can evolve during execution. The author envisages the
next generation of model-driven techniques will support runtime decision-making
related to system adaptation and other runtime concerns. The author hopes the
Genie approach will help to influence the development of this next generation of
techniques. Furthermore, she hopes this thesis opens a new avenue for the continued
application of software engineering principles in the development of both middleware
platforms and other dynamically adaptive software systems (Wolfgang (2000),
Issarny et al. (2007)).
Appendix A. Publications
This appendix contains the titles and venues of the publications the PhD research
has produced so far:
1. “Genie: Supporting the Model Driven Development of Reflec-
tive, Component-based Adaptive Systems”, Nelly Bencomo, Paul
Grace, Carlos Flores, Danny Hughes, Gordon Blair, ICSE 2008 - Research
Demonstrations Track, Leipzig, Germany, May, 2008
2. “Dynamically Adaptive Systems are Product Lines too: Using
Model-Driven Techniques to Capture Dynamic Variability of
Adaptive Systems”, Nelly Bencomo, Pete Sawyer, Paul Grace, Gordon
Blair, in 2nd International Workshop on Dynamic Software Product Lines
(DSPL 2008), Limerick, Ireland, September 2008
3. “Engineering Complex Adaptations in Highly Heterogeneous Dis-
tributed Systems”, Paul Grace, Gordon Blair, Carlos Flores, and Nelly
Bencomo, in 2nd International Conference on Autonomic Computing and
Communication Systems (Autonomics 2008), Turin, Italy, September 2008
4. “Goal-Based Modeling of Dynamically Adaptive System Require-
ments”, Heather J. Goldsby, Pete Sawyer, Nelly Bencomo, Danny Hughes,
and Betty H. C. Cheng 15th IEEE International Conference on Engineer-
ing of Computer-Based Systems, ECBS 08, Belfast, Northern Ireland, April,
2008
5. “Reflective Component-based Technologies to Support Dynamic
Variability”, Nelly Bencomo, Gordon Blair, Carlos Flores, and Pete
Sawyer, 2nd International Workshop on Variability Modelling of Software-
intensive Systems, VaMoS 08, Essen, Germany, January 16-18, 2008
6. “Visualizing the Analysis of Dynamically Adaptive Systems Us-
ing i* and DSLs”, Pete Sawyer, Nelly Bencomo, Danny Hughes, Paul
Grace, Heather J. Goldsby, and Betty H. C. Cheng, 2nd International Work-
shop on Requirements Engineering Visualization, held with the 15th IEEE
International Requirements Engineering Conference, October, New Delhi,
India 2007
7. “Handling Multiple Levels of Requirements for Middleware-
Supported Adaptive Systems”, Pete Sawyer, Nelly Bencomo, Paul
Grace, and Gordon Blair, Technical Report (COMP 001-2007), January,
2007, Lancaster University
8. “Models, Reflective Mechanisms and Family-based Systems to
Support Dynamic Configuration”, Nelly Bencomo, Gordon Blair
and Paul Grace, Workshop on MOdel Driven Development for Middleware
(MODDM), held with the 7th International Middleware Conference, Mel-
bourne, Australia, November, 2006
9. “Genie: a Domain-Specific Modeling Tool for the Generation of
Adaptive and Reflective Middleware Families”, Nelly Bencomo
and Gordon Blair, The 6th OOPSLA Workshop on Domain-Specific Mod-
eling, Portland, October, 2006
10. “Ubiquitous Computing: Adaptability Requirements Supported
by Middleware Platforms”, Nelly Bencomo, Pete Sawyer, Paul Grace,
and Gordon Blair, Workshop on Software Engineering Challenges for Ubiq-
uitous Computing, Lancaster, June, 2006
11. “Reflection and Aspects Meet Again: Runtime Reflective Mech-
anisms for Dynamic Aspects”, Nelly Bencomo, Gordon Blair, Geoff
Coulson, Paul Grace, Awais Rashid, Proceedings of the First Middleware
05 Workshop on Aspect Oriented Middleware Development (AOMD 05)
Grenoble, France, November, 2005
12. “Preening: Reflection of Models in the Mirror a Meta-modelling
Approach to Generate Reflective Middleware Configurations”,
Nelly Bencomo, Gordon Blair, Lecture Notes in Computer Science, Satel-
lite Events at the MoDELS 2005 Conference: The Doctoral Symposium,
MODELS 2005, Springer-Verlag , pp. 337 - 338
13. “Towards a MetaModelling Approach to Configurable Middle-
ware”, Nelly Bencomo, Gordon Blair, Geoff Coulson, Thais Batista,
Proc. 2nd ECOOP’2005 Workshop on Reflection, AOP and MetaData for
Software Evolution RAM-SE 2005, pp 73-82, Glasgow, Scotland, July 2005
14. “Raising a Reflective Family”, Nelly Bencomo, Gordon Blair, Proc.
Models and Aspects Handling Crosscutting Concerns in MDSD, Glasgow,
Scotland, July 2005
The author of this thesis is the principal author of all the publications with
the exception of publications 3, 4, and 6. However, the author's effort has been
essential for these papers, providing application examples and evaluation support
as well as being a co-author.
The author has also published the following posters in different international
conferences:
1. “The world is going MAD: Models for Adaptation”, Nelly Ben-
como, Gordon Blair, Paul Grace, Poster in MoDELS 2006, Genoa, October,
2006
2. “A Green Family: Generating Publish/Subscribe Middleware
Configurations”, Bencomo, N, Sivaharan T., and Blair G, Poster in the
4th Workshop on Adaptive and Reflective Middleware (ARM05), Grenoble,
France, November, 2005
3. “Families of Reflective Middleware Systems: the new gener-
ation”, Nelly Bencomo, Gordon Blair, Geoff Coulson, and Paul Grace,
Poster in MODELS 2005, Jamaica, October, 2005
Appendix B. Implementation Details
This appendix presents some implementation details of the Genie tool and
the middleware platforms.
B.1 Grammar of reconfiguration policies
The grammar for the specification of the reconfiguration policies is defined below.
The grammar specification uses EBNF syntax, where angle brackets "<" and ">"
enclose non-terminals, "[" and "]" enclose an optional part, "|" denotes choice,
"*" means zero or more repetitions, "+" means one or more repetitions, and ::=
denotes productions. Curly brackets "{" and "}" are used for grouping, and single
quotes (') enclose terminals. Keywords are shown in bold.
S ::= '<ReconfigurationRule>' <rule> '</ReconfigurationRule>'
rule ::= <CF-Tag> <eventset> <reconfiguration>
<CF-Tag> ::= 'Overlay' | 'Discovery' | 'Interaction' | 'SpanningTree'
eventset ::= '<Events>' <event>+ '</Events>'
event ::= '<Event>' <type> <value> '</Event>'
type ::= '<Type>' Property-name '</Type>'
value ::= '<Value>' <boolean> '</Value>'
boolean ::= 'true' | 'false'
reconfiguration ::= '<Reconfiguration>' <filetype> Name-of-File '</Reconfiguration>'
<filetype> ::= 'java' | 'C'
Property-name is a string that corresponds to a variable that will be moni-
tored, e.g. HIGH-FLOW and FLOOD-PREDICTED.
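For illustration, the following is a policy instance that conforms to the grammar above. The property name FLOOD-PREDICTED is one of the monitored variables mentioned above; the action file name FloodReconf is hypothetical and only stands in for the name of a generated reconfiguration script.

```xml
<ReconfigurationRule>
Overlay
<Events>
  <Event>
    <Type>FLOOD-PREDICTED</Type>
    <Value>true</Value>
  </Event>
</Events>
<Reconfiguration>java FloodReconf</Reconfiguration>
</ReconfigurationRule>
```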
B.2 Generator XML Configurations
Input: a CF represented by its set of components connected using their
interfaces and receptacles. Some interfaces and receptacles of these
components are exported by the CF.
Output: the XML file associated with the description of the configuration
of the CF, where the information about components, connections between
components, and the interfaces/receptacles exported is specified.
Italic text corresponds to text written by the generator. Bold text is
pseudo code.

Begin
  <Configuration>
  <Framework> name </Framework>
  <Interfaces>
  For each interface that is exported by the CF write
    <Interface>
      <type> type </type>
      <Source> source </Source>  /* component source that offers the interface to be exported */
    </Interface>
  </Interfaces>
  <Components>
  For each component in the CF write
    <Component>
      <ComponentName> comp-name </ComponentName>
      <ComponentType> comp-type </ComponentType>
      <Connections>
      For each receptacle (interface required) of this component write
        <Connection>
          <Name> interf-name </Name>
          <Sink> comp-name </Sink>  /* Sink is the component source that offers the interface required */
          <Interface> interf-name </Interface>
        </Connection>
      </Connections>
    </Component>
  </Components>
  <Receptacles>
  For each receptacle (interface required) exported by the CF write
    <Receptacle>
      <type> type </type>
      <Source> source </Source>  /* component source that offers the receptacle to export */
    </Receptacle>
  </Receptacles>
  </Configuration>
End
Figure B.1: Generator pseudo code for XML configurations
Figure B.1 shows the pseudo code of the generator that transforms component
diagrams to code.
As an example, Figure B.2 shows the component diagram for the Component
Framework Publisher, and Figure B.3 shows the generated code, that is, the XML
file that describes the configuration.
Figure B.2: Model of the CF in the example
<Configuration>
  <Framework>Publisher</Framework>
  <!-- Exported interfaces -->
  <Interfaces>
    <Interface>
      <type>Interfaces.IPublish.IPublish</type>
      <Source>Publish</Source>
    </Interface>
  </Interfaces>
  <Components>
    <Component>
      <ComponentName>EventService</ComponentName>
      <ComponentType>InteractionTypes.PublishSubscribe.EventService.EventService</ComponentType>
      <Connections>
        <Connection>
          <Name>ISOAPMessaging</Name>
          <Sink>SOAPMessaging</Sink>
          <Interface>Interfaces.ISOAPMessaging.ISOAPMessaging</Interface>
        </Connection>
      </Connections>
    </Component>
    <Component>
      <ComponentName>Publish</ComponentName>
      <ComponentType>InteractionTypes.PublishSubscribe.Publish.Publish</ComponentType>
      <Connections>
        <Connection>
          <Name>IEventService</Name>
          <Sink>EventService</Sink>
          <Interface>Interfaces.IEventService.IEventService</Interface>
        </Connection>
        <Connection>
          <Name>IEventServiceData</Name>
          <Sink>EventService</Sink>
          <Interface>Interfaces.IEventServiceData.IEventServiceData</Interface>
        </Connection>
        <Connection>
          <Name>ISOAPMessaging</Name>
          <Sink>SOAPMessaging</Sink>
          <Interface>Interfaces.ISOAPMessaging.ISOAPMessaging</Interface>
        </Connection>
      </Connections>
    </Component>
    <Component>
      <ComponentName>SOAPMessaging</ComponentName>
      <ComponentType>InteractionTypes.PublishSubscribe.SOAPMessaging.SOAPMessaging</ComponentType>
      <Connections>
        <Connection>
          <Name>ISOAPTransport</Name>
          <Sink>SOAPTransport</Sink>
          <Interface>Interfaces.ISOAPTransport.ISOAPTransport</Interface>
        </Connection>
      </Connections>
    </Component>
    <Component>
      <ComponentName>SOAPTransport</ComponentName>
      <ComponentType>InteractionTypes.PublishSubscribe.SOAPToMulticast.SOAPToMulticast</ComponentType>
    </Component>
  </Components>
  <!-- Exported receptacles -->
  <Receptacles>
    <Receptacle>
      <type>Interfaces.IMulticast.IMulticast</type>
      <Source>SOAPTransport</Source>
    </Receptacle>
  </Receptacles>
</Configuration>
Figure B.3: XML file for the example
The complete code of the generator using the report language of MetaEdit+
is as follows:
Report 'Create_CF_XML_Conf'
/* The goal of this report is to generate the XML file for the
 * configurations of a given CF */
if :type; = 'CF' then /* The Graph is a CF */
  '<Configuration>' newline;
  ' <Framework>'; :name; '</Framework>' newline;
  ' <Interfaces>' newline;
  foreach .(Interface) {
    do ~InterfToExport {
      /* Found exported interface by the CF */
      ' <Interface>'; newline;
      '<type>Interfaces.' :name;1; '.'; :name;1; '</type>'; newline;
      '<Source>' .()~OfferedInterf>()~CompSrv.Component:name; '</Source>'; newline;
      '</Interface>' newline;
    }
  }
  ' </Interfaces>' newline;
  ' <Components>' newline;
  foreach .(Component) {
    ' <Component>' newline;
    '<ComponentName>'; :name; '</ComponentName>' newline;
    '<ComponentType>'; :component_type; '.'; :source_file; '</ComponentType>' newline;
    ' <Connections>'; newline;
    do ~CompClt>()~RequestedInterf.Interface {
      ' <Connection>' newline;
      ' <Name>'; id; '</Name>' newline;
      if ~OfferedInterf>()~CompSrv.Component; then
        ' <Sink>' ~OfferedInterf>()~CompSrv.Component:name; '</Sink>'; newline;
      else
        '<Sink>' ' ERROR: Change this!!'; '</Sink>'; newline;
      endif
      '<Interface>Interfaces.' :name; '.'; :name; '</Interface>'; newline;
      '</Connection>' newline;
    }
    ' </Connections>'; newline;
    ' </Component>'; newline;
  }
  '</Components>' newline
  '<Receptacles>' newline;
  foreach .(Interface) {
    do ~ReceptToExport {
      /* Found exported receptacle by the CF */
      '<Receptacle>' newline;
      '<type>Interfaces.' :name;1; '.'; :name;1; '</type>'; newline;
      '<Source>' .()~RequestedInterf>()~CompClt.Component:name; '</Source>'; newline;
      '</Receptacle>' newline;
    }
  }
  '</Receptacles>' newline;
  '</Configuration>';
else
  /* The Graph is not a CF. A CF XML file cannot be generated */
  ' The Graph is not a CF ! it is a component'; newline;
  ' A CF XML file cannot be generated. ';
endif
endreport
B.3 Generator of policy reconfigurations
Figure B.4 shows the pseudo code of the generator of the information associated
with reconfiguration policies.
An example of the text output is as follows (based on case study 2 of this
thesis).
Figure B.4: Generator pseudo code for reconfiguration policies information
2
Network
SpanningTree

State Alert ( SP WiFi ) goes to
Normal usingcondition (High_Flow)
Emergency usingcondition (Flood_Predicted)

State Emergency ( FH WiFi ) goes to
Alert usingcondition (!Flood_Predicted && High_Flood)
Normal usingcondition (!Flood_Predicted) && (!High_Flood)

State Normal ( SP BT ) goes to
Emergency usingcondition (Flood_Predicted)
Alert usingcondition (High_Flow)
The text file shown above is then processed by a Java program that uses
regular expressions to generate the six policies.
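This post-processing step can be sketched as follows. This is a minimal illustrative sketch in Python rather than the actual Java program; the line format is assumed from the example output above, and each derived (from-state, to-state, condition) triple corresponds to one reconfiguration policy.

```python
import re

# Example report text in the format produced by the generator above.
REPORT = """State Alert ( SP WiFi ) goes to
Normal usingcondition (High_Flow)
Emergency usingcondition (Flood_Predicted)
State Emergency ( FH WiFi ) goes to
Alert usingcondition (!Flood_Predicted && High_Flood)
Normal usingcondition (!Flood_Predicted) && (!High_Flood)
State Normal ( SP BT ) goes to
Emergency usingcondition (Flood_Predicted)
Alert usingcondition (High_Flow)"""

STATE = re.compile(r'^State (\w+) \( (.+) \) goes to$')
TRANSITION = re.compile(r'^(\w+) usingcondition (.+)$')

def extract_policies(report):
    """Return one (from_state, to_state, condition) triple per transition;
    each triple corresponds to one reconfiguration policy."""
    policies = []
    current = None
    for line in report.splitlines():
        m = STATE.match(line.strip())
        if m:
            # Remember the source state of the transitions that follow.
            current = m.group(1)
            continue
        m = TRANSITION.match(line.strip())
        if m and current is not None:
            policies.append((current, m.group(1), m.group(2)))
    return policies

if __name__ == '__main__':
    for policy in extract_policies(REPORT):
        print(policy)
```

For the three-state transition diagram of the example, this yields the six policies mentioned above, one per outgoing transition.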
The complete code of the generator is as follows:
Report 'GenerateJavaInputRules'
/* This file will contain the report created. This file will be
 * processed by the java file that uses regular expressions to generate
 * all the policy rules associated with the transition diagram */
filename 'C:\MyDocuments\NetBeansProjects\Rules.txt' write
/* Write number of CFs in each structural variant */
:numberCFs; newline;
/* Write the names of the CFs */
foreach .VarPoint {
  /* The variation point 0 denotes the structural variants */
  if (:CF_id <> '0') then
    id; newline;
  endif
}
to '%null' newline '* $' endto
/* Generate information for each reconfiguration */
foreach .ConfState {
  'State ' id; ' ( '
  do ~fromCFnum>CFnum_Var~toVar.Variant {
    :name ' ';
  }
  ')' ' goto '
  do ~previous>TransitionReconf {
    newline;
    ' ' ~next.ConfState:name ' usingcondition ' :Conditionvariable
    'first' write
    '1'
    close
    /* :Conditions */
    do :Conditions {
      if ($first = '1') then
        id; $first++ %null;
      else
        ' || ' id;
      endif
    }
  }
  newline;
}
close
external 'java -jar "C:\bencomo\MyDocuments\NetBeansProjects\WriteRules\dist\writeRules.jar' execute
external 'notepad "C:\bencomo\MyDocuments\NetBeansProjects\out' execute
endreport
B.4 List of constraints used to check validity of
the models
The following are the validations that are made before the generation of artefacts
from the models designed with the Genie tool. These validations are generic in
the sense that they apply to any interface, component, component framework, or
transition diagram.
- The interfaces used by a component should exist in the interfaces repository.
- The components in a component framework should exist in the component
repository.
- Mismatch between interfaces should not be allowed. Mismatches occur when
interfaces of different types are mistakenly connected.
- There should be exactly one variation point associated with the structural
variants in a transition diagram.
- The component frameworks in a transition diagram should exist in the com-
ponent framework repository.
- The number of component frameworks used in variability points should be
the same.
- The component frameworks used in a variability point should be different.
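Two of these generic checks (repository membership and interface-type mismatch) can be sketched as follows. This is an illustrative sketch only; the model representation below is invented for the example and does not reflect the Genie tool's internal data structures.

```python
# Hypothetical model representation: components are listed by name, and each
# connection wires one component's receptacle (required interface type) to
# another component's offered interface type.

def validate_cf(components, connections, component_repository):
    """Return a list of violation messages for a component framework model."""
    errors = []
    # The components in a component framework should exist in the repository.
    for name in components:
        if name not in component_repository:
            errors.append(f"component '{name}' not found in the repository")
    # Mismatches between interfaces should not be allowed: connected
    # receptacles and interfaces must have the same type.
    for src, receptacle_type, sink, interface_type in connections:
        if receptacle_type != interface_type:
            errors.append(
                f"mismatch: '{src}' requires {receptacle_type} "
                f"but '{sink}' offers {interface_type}")
    return errors

if __name__ == '__main__':
    repo = {'Publish', 'EventService', 'SOAPMessaging'}
    comps = ['Publish', 'EventService', 'Unknown']
    conns = [('Publish', 'IEventService', 'EventService', 'IEventService'),
             ('Publish', 'ISOAPMessaging', 'SOAPMessaging', 'IMulticast')]
    for error in validate_cf(comps, conns, repo):
        print(error)
```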
B.5 The Genie tool and Gridkit
The Genie tool works with the Gridkit middleware platform. The following is a
brief description of the installation folders. Gridkit should be installed in a
folder that we will call ROOTDIR. This folder contains the following subfolders
associated with the component frameworks:
Repository/DiscoveryFramework
Repository/OverlayFramework
Repository/InteractionFramework
The folder associated with the Spanning Tree component framework is
Repository/OverlayFramework/ST
Each of the subfolders named above contains a subfolder called Configurations
where the respective configurations will be loaded by the Genie tool.
The ROOTDIR also contains the subfolder Repository/Reconfigurations/Rules
where the Genie tool will load the reconfiguration policies. Eventually Gridkit will
match the reconfiguration policies with the monitored events and apply the cor-
responding actions to the right component frameworks.
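Putting the folders above together, the expected installation layout can be sketched as follows (ROOTDIR stands for the chosen installation folder):

```text
ROOTDIR/
  Repository/
    DiscoveryFramework/
      Configurations/
    OverlayFramework/
      Configurations/
      ST/
        Configurations/
    InteractionFramework/
      Configurations/
    Reconfigurations/
      Rules/
```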
References
Aagedal, J. (2001). Quality of Service Support in Development of Distributed
Systems . Ph.D. thesis, Department of Informatics, Faculty of Mathematics and
Natural Sciences, University of Oslo, Norway. 6, 7, 153
ABLE (2007). Acmestudio. Tech. rep., School of Computing Science, Carnegie
Mellon. 47
Abouzahra, A., Bezivin, J., Fabro, M.D.D. & Jouault, F. (2005). A
practical approach to bridging domain specific languages with uml profiles. In
Best Practices for Model Driven Software Development at OOPSLA’05 , San
Diego, California. 70, 71
Alia, M., Eliassen, F., Hallsteinsen, S. & Stav, E. (2006). Madam:
Towards a flexible planning-based middleware. 14
Andersen, A., Eliassen, F. & Blair, G. (2000). A reflective component-
based middleware with quality of service management. In Protocols for Multi-
media Systems PROM’2000 , Poland. 17
Anthony, R.J. (2006). Generic support for policy-based self-adaptive systems.
In 17th International Conference on Database and Expert Systems Applications
(DEXA’06), 108–113. 18
Antkiewicz, M. & Czarnecki, K. (2004). Featureplugin: Feature modeling
plug-in for eclipse. In OOPSLA Workshop on Eclipse Technology eXchange,
67–72, Vancouver, Canada. 59
Arango, G., Schoen, E. & Pettengill, R. (1993). Design as evolution and
reuse. IEEE Computer Society Advances in Software Reuse. 63
Asikainen, T. (2006). Methods for modelling the variability in software product
families, helsinki University of Technology, Software Business and Engineering
Institute, PhD work. 61
Asikainen, T., Soininen, T. & Mannisto, T. (2003). Towards managing
variability using softwareproduct family architecture models and product con-
figurators. In Proc. of Software Variability Management Workshop, 8493, The
Netherlands. 61
Asikainen, T., Mannisto, T. & Soininen, T. (2004a). Representing feature
models of software product families using a configuration ontology. xi, 47
Asikainen, T., Soininen, T. & Mannisto, T. (2004b). A koala-based ap-
proach for modelling and deploying configurable software product families. In
F. van der Linden, ed., Software Product-Family Engineering, 5th International
Workshop, 2003 , vol. 3014 of Lecture Notes in Computer Science, 225 – 249,
Springer. 48, 61
Asikainen, T., Mannisto, T. & Soininen, T. (2007). Kumbang: A do-
main ontology for modelling variability in software product families. Adv. Eng.
Inform., 21, 23–40. 61
Asikainen, T., Soininen, T. & Mannisto, T. (2004). Software Product-
Family Engineering , chap. A Koala-Based Approach for Modelling and De-
ploying Configurable Software Product Families, 225–249. Springer Berlin /
Heidelberg. 47
Bachmann, F. & Bass, L. (2001). Managing variability in software architec-
tures. In SSR ’01: Proceedings of the 2001 symposium on Software reusability ,
126–132, ACM Press. 2, 42
Bachmann, F., Goedicke, M., Leite, J., Nord, R., Pohl, K., Ramesh,
B. & Vilbig, A. (2003). A meta-model for representing variability in product
family development. In 5th International Workshop on Product Family Engi-
neering (PFE05), vol. 3014 of Lecture Notes in Computer Science, 66 – 80,
Springer Berlin / Heidelberg. 49
Balasubramanian, K., Gokhale, A., Karsai, G., Sztipanovits, J. &
Neema, S. (2006). Developing applications using model-driven design envi-
ronments. IEEE Computer , 33 – 40. 145
Baleani, M., Ferrari, A., Mangeruca, L., Sangiovanni, A., Freund,
U., Schlenker, E. & Wolff, H. (2005). Correct-by-construction transfor-
mations across design environments for model-based embedded software devel-
opment. In Design, Automation and Test in Europe, 1044 – 1049. 3, 64
Bass, L., Clements, P. & Kazman, R. (2003). Software Architecture in
Practice. Addison-Wesley Professional, 2nd edn. 38, 77, 79
Batory, D. & O’Malley, S. (1992). The design and implementation of hi-
erarchical software systems with reusable components. ACM Transactions on
Software Engineering and Methodology , 1, 355 – 398. 43, 78
Bednasch, T., Endler, C. & Lang, M. (2004).
https://sourceforge.net/projects/captainfeature/. 59
Bencomo, N. & Blair, G. (2005). Raising a reflective family. In Models and
Aspects Handling Crosscutting Concerns in MDSD , Glasgow, Scotland. 94
Bencomo, N. & Blair, G. (2006). Genie: a domain-specific modeling tool for
the generation of adaptive and reflective middleware families. In 6th OOPSLA
Workshop on Domain-Specific Modeling , Portland. 71, 151
Bencomo, N., Blair, G., Coulson, G. & Batista, T. (2005a). Towards a
metamodelling approach to configurable middleware. 2, 94
Bencomo, N., Blair, G., Coulson, G., Grace, P. & Rashid, A. (2005b).
Reflection and aspects meet again: Runtime reflective mechanisms for dynamic
aspects. In First Middleware Workshop on Aspect Oriented Middleware, Greno-
ble, France. 17
Bencomo, N., Sivaharan, T. & Blair, G. (2005c). A green family: Generat-
ing publish/subscribe middleware configurations. In 4th Workshop on Adaptive
and Reflective Middleware (ARM05), Grenoble, France. 92, 105
Bencomo, N., Grace, P. & Blair, G. (2006). Models, runtime reflective
mechanisms and family-based systems to support adaptation. In Workshop on
MOdel Driven Development for Middleware (MODDM). 2, 19, 109, 152
Bencomo, N., France, R. & Blair, G. (2007). 2nd international workshop
on [email protected]. In H. Giese, ed., Workshops and Symposia at MODELS
2007 , Lecture Notes in Computer Science, Springer-Verlag. 156
Bencomo, N., Blair, G., Flores, C. & Sawyer, P. (2008a). Reflective
component-based technologies to support dynamic variability. In 2nd Interna-
tional Workshop on Variability Modelling on Software-intensive Systems (Va-
MoS’08), Essen, Germany. 44, 79, 99, 104, 110, 113, 120, 132, 144, 145, 147,
150
Bencomo, N., Grace, P., Flores, C., Hughes, D. & Blair, G. (2008b).
Genie: Supporting the model driven development of reflective, component-
based adaptive systems. In ICSE 2008 - Formal Research Demonstrations
Track . 3, 44, 104, 113, 122, 124, 132, 139, 144, 150
Bencomo, N., Sawyer, P., Grace, P. & Blair, G. (2008c). Adaptive sys-
tems are product lines too: Using model-driven techniques to capture dynamic
variability of adaptive systems. In 2nd International Workshop on Dynamic
Software Product Lines (DSPL’08). 9
Berg, K. & Muthig, D. (2005). A critical analysis of using feature models for
variability management. Tech. rep., University of Pretoria. 4
Berg, K., Bishop, J. & Muthig, D. (2005). Tracing software product line
variability from problem to solution space. In Annual research conference of
the South African institute of computer scientists and information technologists
on IT research in developing countries , White River, South Africa. 4
Berry, D., Cheng, B. & Zhang, J. (2005). The four levels of require-
ments engineering for and in dynamic adaptive systems. In 11th Interna-
tional Workshop on Requirements Engineering: Foundation for Software Qual-
ity (REFSQ’05), Porto, Portugal. 147, 155
Beuche, D., Papajewski, H. & Schroder-Preikschat, W. (2004). Vari-
ability management with feature models. Science of Computer Programming.
Special issue: Software variability management , 53, 333–352. 43, 78
Bezivin, J., Favre, J.M. & Rumpe, B. (2006). First international workshop
on global integrated model management (gamma) - workshop summary. In
28th International Conference on Software Engineering (ICSE) Proceedings ,
Shanghai. 70
Biggerstaff, T.J. (1998). A perspective of generative reuse. Annals of Software
Engineering , 5, 169–226. 57
Blair, G., Coulson, G., Robin, P. & Papathomas, M. (1998). An ar-
chitecture for next generation middleware. In S.J. Davies N.A.J. Raymond K.,
ed., IFIP International Conference on Distributed Systems Platforms and Open
Distributed Processing (Middleware’98), 91–206, The Lake District, UK. 2, 14,
17, 19
Blair, G., Coulson, G., Andersen, A., Blair, L., Clarke, M., Costa,
F., Duran-Limon, H., Fitzpatrick, T., Johnston, L., Moreira, R.,
Parlavantzas, N. & Saikoski, K. (2001). The design and implementation
of open orb 2. IEEE Distributed Systems Online, 2. 14, 16, 17, 19, 48
Blair, G., Coulson, G., Blair, L., Duran-Limon, H., Grace, P., Mor-
eira, R. & Parlavantzas, N. (2002). Reflection, self-awareness and self-
healing. In ACM SIGSOFT Workshop on Self-Healing Systems (WOSS’02),
Charleston, USA. 25
Blair, G., Coulson, G. & Grace, P. (2004). Research directions in reflec-
tive middleware: the lancaster experience. In 3rd Workshop on Reflective and
Adaptive Middleware, 262–267. 19
Blair, G.S., Coulson, G., Andersen, A., Blair, L., Clarke, M.,
Costa, F., Duran, H., Parlavantzas, N. & Saikoski, K. (2000). A
principled approach to supporting adaptation in distributed mobile environ-
ments. In International Symposium on Software Engineering for Parallel and
Distributed Systems (PDSE 2000), 3–12, Limerick, Ireland. 17
Booch, G. (1982). Object-oriented design. Ada Lett., I, 64–76. 3, 67
Booch, G. (1987). Software Components with Ada: Structures, Tools, and Sub-
systems . Cummings Publishing Company. 38
Booch, G. (1991). Object-Oriented Design With Applications . Ben-
jamin/Cummings. 39
Booth, W.C., Colomb, G.G. & Williams, J.M. (2003). The Craft of Re-
search. Chicago Guides to Writing, Editing, and Publishing (CGWEP), 2nd
edn. 153
Bosch, J. (2000). Design & Use of Software Architectures - Adopting and Evolv-
ing a Product Line Approach. Addison-Wesley. 2, 45, 47
Bosch, J. (2001). Software product lines: Organizational alternatives. In 23rd
International Conference on Software Engineering , 91–100, Toronto, Canada.
40
Bruneton, E., Coupaye, T., Leclercq, M., Quema, V. & Stefani, J.B.
(2006). The fractal component model and its support in java. Software: Practice
and Experience, 36, 1257 – 1284. 2, 94, 154
Caporuscio, M., Ruscio, D.D., Inverardi, P., Pelliccione, P. &
Pierantonio, A. (2005). Engineering mda into compositional reasoning for
analyzing middleware-based applications. In Second European Workshop on
Software Architecture (EWSA 2005), Pisa, Italy. 68
Capra, L., Emmerich, W. & Mascolo, C. (2001). Reflective middleware
solutions for context-aware applications. In Third International Conference on
Meta-level Architectures and Separation of Crosscutting Concerns , Japan. 14,
17, 18
Capra, L., Blair, G., Mascolo, C., Emmerich, W. & Grace, P. (2002).
Exploiting reflection in mobile computing middleware. ACM SIGMOBILE Mo-
bile Computing and Communications Review , 6, 34–44. 14, 18, 19
Carroll, R., Lehtihet, E., Fahy, C., Meer, S.v.d., Georgalas, N. &
Cleary, D. (2006). Applying the p2p paradigm to management of large-scale
distributed networks using a model driven approach. In IEEE/IFIP Network
Operations & Management Symposium (NOMS 2006), Vancouver, Canada. 68
Castro, M., Druschel, P., Kermarrec, A. & Rowstron, A. (2002).
Scribe: A large-scale and decentralized application-level multicast infrastructure.
IEEE Journal on Selected Areas in Communications (JSAC), 20, 1489–1499. 125
Cheng, B.H.C., Whittle, J., Bencomo, N., Magee, J., Kramer, J.,
Park, S. & Dustdar, S. (2008). Requirements engineering
section of software engineering for self-adaptive systems: A research road map.
151, 156
Chiba, S. (1995). A metaobject protocol for c++. In Conference on Object-
Oriented Programming Systems, Languages, and Applications (OOPSLA’95),
285–299, Texas, USA. 59
Christensen, N.H. (2003). Domain-specific languages in software development
and the relation to partial evaluation. Ph.D. thesis, Dept. of Computer Science
at University of Copenhagen. 70
Clements, P. & Kogut, P. (1994). The software architecture renaissance.
Crosstalk - The Journal of Defense Software Engineering , 7. 77, 79
Clements, P.C. (1996). A survey of architecture description languages. In
IWSSD ’96: Proceedings of the 8th International Workshop on Software Spec-
ification and Design, 16, IEEE Computer Society, Washington, DC, USA. 48
Cohen, S. (1999). From product-line architectures to products. In Workshop on
Object-Oriented Technology, vol. 1743, 198–199, Lecture Notes In Computer
Science, Springer-Verlag. 56
Costa, P., Coulson, G., Mascolo, C., Picco, G.P. & Zachariadis, S.
(2005). The runes middleware: A reconfigurable component-based approach
to networked embedded systems. In 16th Annual International Symposium on
Personal Indoor and Mobile Radio Communications (PIMRC05), Berlin, Ger-
many. 21, 83
Costa, P., Coulson, G., Mascolo, C., Mottola, L., Picco, G. &
Zachariadis, S. (2006). A reconfigurable component-based middleware for
networked embedded systems. International Journal of Wireless Information
Networks . 19
Coulouris, G., Dollimore, J. & Kindberg, T. (2000). Distributed Systems,
Concepts and Design. Addison-Wesley, 3rd edn. 14
Coulson, G. (2000). What is reflective middleware? IEEE Distributed Systems
Online. 1
Coulson, G., Blair, G.S., Clark, M. & Parlavantzas, N. (2002). The
design of a highly configurable and reconfigurable middleware platform. ACM
Distributed Computing Journal , 15, 109–126. 17, 22
Coulson, G., Blair, G., Grace, P., Joolia, A., Lee, K. & Ueyama,
J. (2004a). A component model for building systems software. In Software
Engineering and Applications (SEA’04), USA. 19
Coulson, G., Grace, P., Blair, G., Mathy, L., Duce, D., Cooper, C.,
Yeung, W. & Cai, W. (2004b). Towards a component-based middleware
framework for configurable and reconfigurable grid computing. In Workshop
on Emerging Technologies for Next Generation Grid (ETNGRID-2004). 99
Coulson, G., Grace, P., Blair, G., Cai, W., Cooper, C., Duce, D.,
Mathy, L., Yeung, W.K., Porter, B., Sagar, M. & Li, W. (2006).
A component-based middleware framework for configurable and reconfigurable
grid computing. Concurrency and Computation: Practice and Experience, 18,
865–874. 26, 27
Coulson, G., Blair, G., Grace, P., Joolia, A., Lee, K., Ueyama, J.
& Sivaharan, T. (2008). A generic component model for building systems
software. ACM Transactions on Computer Systems . 19, 20
Czarnecki, K. (2004). Overview of generative software development. In Uncon-
ventional Programming Paradigms (UPP), vol. 3566, 313–328, Springer-Verlag,
Mont Saint-Michel, France. xii, 3, 57, 59, 60, 63, 64, 72
Czarnecki, K. & Eisenecker, U. (2000). Generative Programming: Methods,
Tools and Applications . Addison-Wesley. xi, xii, 3, 4, 37, 40, 43, 45, 46, 54, 55,
56, 58, 64, 79, 83, 92
Czarnecki, K., O’Donnel, J., Striegnitz, J. & Taha, W. (2004). Dsl
implementation in metaocaml, template haskell, and c++. In Lecture notes
in computer science: Domain-Specific Program Generation, vol. 48, 51–72,
Springer Berlin / Heidelberg. 40, 54, 59, 60, 70, 71
Dashofy, E., van der Hoek, A. & Taylor, R.N. (2002). An infrastructure
for the rapid development of xml-based architecture description languages. In
24th International Conference on Software Engineering (ICSE2002). 47
Gregg, D.G., Kulkarni, U.R. & Vinze, A.S. (2001). Understanding the philosophical
underpinnings of software engineering research in information systems.
Information Systems Frontiers, 3, 169–183. 7
Deursen, A.v., Klint, P. & Visser, J. (2000). Domain-specific languages:
An annotated bibliography. ACM SIGPLAN Notices , 35, 26–36. 58, 59
Devanbu, P., ed. (1998). Proceedings of Fifth International Conference on Soft-
ware Reuse. IEEE Computer Society, Victoria, British Columbia, Canada. 37
Dhungana, D. (2006). Integrated variability modeling of features and architec-
ture in software product line engineering. 327–330, IEEE Computer Society,
Los Alamitos, CA, USA. 48, 49
Dhungana, D., Grunbacher, P. & Rabiser, R. (2007a). Domain-specific
adaptations of product line variability modeling. In IFIP Working Conference
on Situational Method Engineering: Fundamentals and Experiences , Geneva.
156
Dhungana, D., Paul, G. & Rabiser, R. (2007b). Decisionking: A flexible
and extensible tool for integrated variability modeling. In VAMOS’07 First In-
ternational Workshop on Variability Modelling of Software-intensive Systems .
48
Dijkstra, E. (1968). The structure of the multiprogramming system. Commu-
nications of the ACM , 11, 341–346. 17
DiVA (2008). Diva-dynamic variability in complex, adaptive systems,
http://www.ict-diva.eu/. 154
Dobrica, L. & Niemela, E. (2007). Modeling variability in the software prod-
uct line architecture of distributed services. In Software Engineering Research
and Practice, 269–275. 79
DOC (2007). The ace orb (tao). Tech. rep., Center for Distributed Object Com-
puting, Washington University. 94
Dooley, K. (1997). Complex adaptive systems: A nominal definition. 81
Easterbrook, S.M., Singer, J., Storey, M.A. & Damian, D. (2007).
Guide to Advanced Empirical Software Engineering , chap. Selecting Empirical
Methods for Software Engineering Research. Springer. 6, 7
Efstratiou, C., Friday, A., Davies, N. & Cheverst, K. (2002). Utilising
the event calculus for policy driven adaptation on mobile systems. In Third
International Workshop on Policies for Distributed Systems and Networks, 13–24. 29
Efstratiou, C., Friday, A., Davies, N. & Cheverst, K. (2003). Utilising
the event calculus for policy driven adaptation on mobile systems. In I.C. Society,
ed., 3rd International Workshop on Policies for Distributed Systems and
Networks (POLICY 2002), Monterey, CA, USA. 18
El-Sayed, A., Roca, V. & Mathy, L. (2003). A survey of proposals for an
alternative group communication service. IEEE Network, 17, 46–51. 124
Eliassen, F., Andersen, A., Blair, G.S., Costa, F., Coulson,
G., Goebel, V., Hansen, O., Kristensen, T., Plagemann, T.,
Rafaelsen, H.O., Saikoski, K.B. & Yu, W. (1999). Next generation
middleware: Requirements, architecture, and prototypes. In 7th IEEE Work-
shop on Future Trends of Distributed Computing Systems (FTDCS’99), South
Africa. 34
Estublier, J. & Vega, G. (2005). Reuse and variability in large software appli-
cations. In European Software Engineering Conference (ESEC) and Symposium
on the Foundations of Software Engineering (FSE) ESEC-FSE’05 , 316–325,
Lisbon, Portugal. 70
Faltings, B. & Freuder, E. (1998). Configuration. IEEE Intelligent Systems ,
13. 61
Ferber, J. (1989). Computational reflection in class based object-oriented lan-
guages. In Conference on Object-Oriented Programming and Languages (OOP-
SLA’89), 317–326, New Orleans. 17
Fickas, S. & Feather, M. (1995). Requirements monitoring in dynamic en-
vironments. In Second IEEE International Symposium on Requirements Engi-
neering (RE’95). 155
Finkelstein, A. (2008). Talk “requirements reflection” in schloss dagstuhl sem-
inar on software engineering for self-adaptive systems. 156
Flinn, J., Lara, E.d., Satyanarayanan, M., Wallach, D. &
Zwaenepoel, W. (2001). Reducing the energy usage of office applications.
In Middleware Conference, Heidelberg, Germany. 18
Floch, J., Hallsteinsen, S., Stav, E., Eliassen, F., Lund, K. & Gjor-
ven, E. (2006). Using architecture models for runtime adaptability. Software
IEEE , 23, 62–70. 1, 14, 48, 79, 83, 113, 145
Flores, C., Blair, G. & Grace, P. (2006). Service discovery in highly het-
erogeneous environments. In 4th Minema Workshop, Lisbon, Portugal. 113
Flores-Cortes, C.A., Blair, G.S. & Grace, P. (2007). An adaptive mid-
dleware to overcome service discovery heterogeneity in mobile ad-hoc environ-
ments. IEEE Distributed Systems Online, 8. xiii, 92, 99, 104, 113, 114, 116
Frakes, W.B., ed. (1994). Proceedings of the Third International Conference
on Software Reuse: Advances in Software Reusability . International Conference
on Software Reuse, Institute of Electrical & Electronics Engineering, Rio De
Janeiro, Brazil. 37
France, R. & Rumpe, B. (2003). Model engineering (editorial). Software and
Systems Modeling (Sosym), 2, 73–75. 58, 62
France, R. & Rumpe, B. (2005). Editorial domain specific modeling. Software
and Systems Modeling , 4, 1–3. 70
France, R. & Rumpe, B. (2007). Model-driven development of complex soft-
ware: A research roadmap. In L. Briand & A. Wolf, eds., Future of Software
Engineering , IEEE-CS Press. 3, 35, 62, 66, 71
Gabriel, R., Bobrow, D. & White, J. (1993). Clos in context: the shape
of the design space, chapter 2. In Object-Oriented Programming: the CLOS
perspective, 29–61, MIT Press. 16
Gacek, C., ed. (2002). Proceedings of the 7th International Conference on Soft-
ware Reuse: Methods, Techniques, and Tools . Springer-Verlag, London, UK.
37
GAO (1993). Software reuse - major issues need to be resolved before benefits
can be achieved. Tech. Rep. GAO/IMTEC-93-16, United States General
Accounting Office (GAO). 36, 37, 63
Gardner, T. & Griffin, C. (2003). A review of omg mof 2.0 query / views /
transformations submissions and recommendations towards the final standard.
68
Garlan, D. (2000). Software Architecture: a Roadmap. ACM Press. 77, 79
Ghezzi, C., Jazayeri, M. & Mandrioli, D. (1991). Fundamentals of Soft-
ware Engineering . Prentice-Hall, New Jersey. 39
Ghosh, S., France, R.B., Bare, A., Kamalakar, B., Shankar, R.P.,
Simmonds, D.M., Tandon, G. & Yin, S. (2005). A middleware transpar-
ent approach for developing distributed applications. Software & Practice and
Experience, 35, 1131–1154. 68
Glass, R.L. (1998). Reuse: what’s wrong with this picture? Software, IEEE,
15, 57–59. 37
Glass, R.L., Vessey, I. & Ramesh, V. (2002). Research in software engineering:
an analysis of the literature. Information and Software Technology,
491–506. 6
GME (2006). http://www.isis.vanderbilt.edu/projects/gme/. vanderbilt univer-
sity. 59, 71, 146
Goedicke, M., Pohl, K. & Zdun, U. (2002). Domain-specific runtime variability
in product line architectures. In 8th International Conference on Object-Oriented
Information Systems, 384–396. 79, 144
Goedicke, M., Kollmann, C. & Zdun, U. (2004). Designing runtime variation
points in product line architectures: three cases. Science of Computer
Programming Special Issue: Software variability management, 53, 353–380.
79, 144
Gokhale, A., Balasubramanian, K. & Lu, T. (2004a). Cosmic: addressing
crosscutting deployment and configuration concerns of distributed real-time
and embedded systems. In OOPSLA ’04: Companion to the 19th annual ACM
SIGPLAN conference on Object-oriented programming systems, languages, and
applications , 218–219, ACM, New York, NY, USA. 145
Gokhale, A., Schmidt, D., Natarajan, B., Gray, J. & Wang, N.
(2004b). Model driven middleware. In Q. Mahmoud, ed., Middleware for Com-
munications , John Wiley and Sons. 67, 68
Gokhale, A., Balasubramanian, K., Balasubramanian, J., Krishna,
A., Edwards, G., Deng, G., Turkay, E., Parsons, J. & Schmidt, D.
(2005). Model driven middleware: A new paradigm for deploying and provisioning
distributed real-time and embedded applications. Journal of Science of
Computer Programming: Special Issue on Model Driven Architecture. 64
Gokhale, A., Balasubramanian, K., Balasubramanian, J., Krishna,
A.S., Edwards, G.T., Deng, G., Turkay, E., Parsons, J. & Schmidt,
D.C. (2006). Model driven middleware: A new paradigm for deploying and
provisioning distributed real-time and embedded applications. The Journal of
Science of Computer Programming: Special Issue on Model Driven Architec-
ture. 68
Goldsby, H.J., Sawyer, P., Bencomo, N., Hughes, D. & Cheng, B.H.
(2008). Goal-based modeling of dynamically adaptive system requirements. In
15th Annual IEEE International Conference on the Engineering of Computer
Based Systems (ECBS). 90, 104, 124, 129, 135, 142, 147, 151, 155
Grace, P. (2004). Overcoming Middleware Heterogeneity in Mobile Computing
Applications. Ph.D. thesis, Lancaster University. 14, 16, 73
Grace, P., Blair, G.S. & Samuel, S. (2003a). Marriage of web services and
reflective middleware to solve the problem of mobile client interoperability. In
Workshop on Middleware Interoperability of Enterprise Applications , Dublin,
Ireland. 17, 18, 73
Grace, P., Samuel, S. & Blair, G.S. (2003b). Remmoc: A reflective mid-
dleware to support mobile client interoperability. In International Symposium
on Distributed Objects and Applications (DOA), Sicily, Italy. 19, 73
Grace, P., Coulson, G., Blair, G., Mathy, L., Duce, D., Cooper, C.,
Yeung, W.K. & Cai, W. (2004). Gridkit: Pluggable overlay networks for
grid computing. In Symposium on Distributed Objects and Applications (DOA),
Cyprus. 19, 27, 124, 126, 127, 132
Grace, P., Blair, G. & Samuel, S. (2005a). A reflective framework for
discovery and interaction in heterogeneous mobile environments. ACM SIG-
MOBILE Mobile Computing and Communications Review , 9, 2–14. 113
Grace, P., Coulson, G., Blair, G. & Porter, B. (2005b). Deep middle-
ware for the divergent grid. In IFIP/ACM/USENIX Middleware Conference,
Grenoble, France. 105, 124
Grace, P., Coulson, G., Blair, G. & Porter, B. (2006a). A distributed
architecture meta-model for self-managed middleware. In The 5th Workshop
on Adaptive and Reflective Middleware (ARM06), Melbourne, Australia. 124,
127
Grace, P., Coulson, G., Blair, G., Porter, B. & Hughes, D. (2006b).
Dynamic reconfiguration in sensor middleware. In MidSens ’06: Proceedings
of the international workshop on Middleware for sensor networks , 1–6, Mel-
bourne, Australia. xi, xiii, 30, 32, 33, 125, 128, 130
Grace, P., Blair, G., Flores, C. & Bencomo, N. (2008a). Engineering
complex adaptations in highly heterogeneous distributed systems. In 2nd In-
ternational Conference on Autonomic Computing and Communication Systems
(Autonomics 2008), Turin, Italy. 27, 28
Grace, P., Hughes, D., Porter, B., Blair, G., Coulson, G. & Taiani,
F. (2008b). Experiences with open overlays: A middleware approach to network
heterogeneity. In submitted to Eurosys 2008, Glasgow, UK . xi, xiii, 28, 31, 32,
33, 124, 125, 126, 127, 128, 129, 130, 135
Greenfield, J., Short, K., Cook, S. & Kent, S. (2004). Software Facto-
ries: Assembling Applications with Patterns, Models, Frameworks, and Tools .
Wiley. 58, 59, 65, 70, 71
Griss, M.L. (1993). Software reuse: From library to factory. IBM Systems Jour-
nal , 32, 1–23. 63
Griss, M.L., Favaro, J. & d’Alessandro, M. (1998). Integrating feature
modeling with the rseb. In Fifth International Conference on Software Reuse,
76–85, IEEE Comput. Soc., CA, USA. 44, 45
Grunbacher, P., Egyed, A. & Medvidovic, N. (2004). Reconciling software
requirements and architectures with intermediate models. Software and
System Modeling, 3, 235–253. 49
Gurp, J.V., Bosch, J. & Svahnberg, M. (2001). On the notion of variabil-
ity in software product lines. In Working IEEE/IFIP Conference on Software
Architecture (WISCA’01), 45. 44, 79, 144
Hailpern, B. & Tarr, P. (2006). Model-driven development: The good, the
bad, and the ugly. IBM Systems Journal , 45. 64
Hallsteinsen, S., Stav, E., Solberg, A. & Floch, J. (2006). Using prod-
uct line techniques to build adaptive systems. In SPLC ’06: Proceedings of
the 10th International on Software Product Line Conference, 141–150, IEEE
Computer Society, Washington, DC, USA. 79, 145, 156
Hallsteinsen, S., Hinchey, M., Park, S. & Schmid, K. (2008). Dynamic
software product lines. IEEE Computer, 93–95. 2
Hayton, R., Herbert, A. & Donaldson, D. (1998). Flexinet: A flexi-
ble component-oriented middleware system. In 8th ACM SIGOPS European
Workshop on Support for Composing Distributed Applications, Sintra. 17
Heimbigner, D. & Wolf, A. (2002). Intrusion management using configurable
architecture models. technical report. Tech. Rep. 929-02, Department of Com-
puter Science, University of Colorado. 22
Hughes, D., Greenwood, P., Coulson, G., Blair, G., Pappenberger,
F., Smith, P. & Beven, K. (2006a). Gridstix: Supporting flood prediction
using embedded hardware and next generation grid middleware. In 4th Inter-
national Workshop on Mobile Distributed Computing (MDC’06), Niagara Falls,
USA. 31, 125
Hughes, D., Greenwood, P., Coulson, G., Blair, G., Pappenberger,
F., Smith, P. & Beven, K. (2006b). An intelligent and adaptable flood mon-
itoring and warning system. In 5th UK E-Science All Hands Meeting (AHM06)
(Best Paper Award). 31, 125
IEEE (1994). Why do so many reuse programs fail? IEEE Software, 11, 114–115. 37
ISR (2007). Archstudio, software and system architecture development environ-
ment. 47
Issarny, V., Caporuscio, M. & Georgantas, N. (2007). A perspective on
the future of middleware-based software engineering. In FOSE ’07: 2007 Future
of Software Engineering , 244–258, IEEE Computer Society, Washington, DC,
USA. 159
Jacobson, I., Griss, M. & Johnson, P. (1997). Software Reuse: Architecture,
Process and Organization for Business success . Addison-Wesley. 2, 40, 42, 44,
63, 83
Jaring, M. (2005). Variability Engineering as an Integral Part of the Software
Product Family Development Process. Ph.D. thesis, University of
Groningen. 4, 7, 41, 144
Jaring, M., Krikhaar, R.L. & Bosch, J. (2004). Visualizing and classify-
ing software variability in a family of magnetic resonance imaging scanners.
Software Practice and Experience, 34, 69–100. 4
Johnson, C. (2006a). Basic research skills in computing science. 6
Johnson, C. (2006b). What is research in computing science? 6
Kang, K., Cohen, S., Hess, J., Novak, W. & Peterson, A.
(1990). Feature-oriented domain analysis (foda) feasibility study. Tech. Rep.
CMU/SEI-90-TR-021, Software Engineering Institute, Carnegie Mellon Uni-
versity. xi, 44, 45, 47
Kang, K.C. (1998). Form: a feature-oriented reuse method with domain specific
architectures. Annals of Software Engineering , 5, 345–355. 44, 45
Karsai, G., Sztipanovits, J. & Franke, H. (1998). Towards specification of
program synthesis in model-integrated computing. In Engineering of Computer
based Systems (ECBS’98), 226–233, Jerusalem, Israel. 66
Keeney, J. & Cahill, V. (2003). Chisel: A policy-driven, context-aware, dy-
namic adaptation framework. In 4th IEEE International Workshop on Policies
for Distributed Systems and Networks (POLICY 2003), Lake Como, Italy. 18
Kelly, S. & Tolvanen, J.P. (2000). Visual domain-specific modelling: Ben-
efits and experiences of using metacase tools. In International workshop on
Model Engineering in ECOOP 2000 , France. 71
Kelly, S. & Tolvanen, J.P. (2008). Domain-Specific Modeling: Enabling Full
Code Generation. Wiley-IEEE Computer Society Pr. 70
Kent, S. (2002). Model driven engineering. In Third International Conference
on Integrated Formal Methods (IFM 2002), 286–298, Springer-Verlag, Turku,
Finland. 3, 35, 62
Kiczales, G., Lamping, J., Mendhekar, A., Maeda, C., Lopes, C., Lo-
ingtier, J.M. & Irwin, J. (1997). Aspect-oriented programming. In Euro-
pean Conference on Object-Oriented Programming , vol. 1241, 220–242, Finland.
59
Kim, J.S. & Stohr, E.A. (1998). Software reuse: Survey and research direc-
tions. Journal of Management Information Systems , 14, 113 – 149. 2, 37, 40,
63
Kim, M., Jeong, J. & Park, S. (2005). From product lines to self-managed
systems: an architecture-based runtime reconfiguration framework. In Proceed-
ings of the 2005 Workshop on Design and Evolution of Autonomic Application
Software. 156
Kleppe, A., Warmer, J. & Bast, W. (2003). MDA Explained The Model
Driven Architecture: Practise and Promise. Addison-Wesley. 68, 69
Kon, F., Roman, M., Liu, P., Mao, J., Yamane, T., Magalhaes, L. &
Campbell, R. (2000). Monitoring, security, and dynamic configuration with
the dynamictao reflective orb. In 2nd ACM/IFIP International Conference on
Middleware, 121–143, New York. 14, 17
Kon, F., Costa, F., Blair, G. & Campbell, R. (2002). The case for reflec-
tive middleware. Communications of the ACM , 45, 33–38. 2, 14, 17
Kramer, J. (2007). Is abstraction the key to computing? Commun. ACM , 50,
36–42. 3, 67
Krebs, T., Hotz, L. & Gunter, A. (2002). Knowledge-based configuration
for configuring combined hardware/software systems. In Proc. of 16. Workshop,
Planen, Scheduling und Konfigurieren, Entwerfen (PuK2002). 61
Kruchten, P. & Thompson, C. (1994). An object-oriented, distributed ar-
chitecture for large scale ada systems. In Tri-Ada ‘94 , Baltimore, Maryland.
77, 79
Krueger, C.W. (1992). Software reuse. ACM Computing Surveys (CSUR), 24,
131–183. 4, 36, 37, 38, 39, 54, 63
Lago, P., Niemela, E. & Vliet, H.V. (2004). Tool support for trace-
able product evolution. In Eighth Euromicro Working Conference on Software
Maintenance and Reengineering (CSMR’04). 49
Lancaster (2007). The gridkit user guide. Tech. rep., Next Generation Middle-
ware Group, Lancaster University. 26
Lapouchnian, A., Liaskos, S., Mylopoulos, J. & Yu, Y. (2005). Towards
requirements-driven autonomic systems design. In Workshop on the Design and
Evolution of Autonomic Application Software (DEAS 2005). 155
Lawson, H., Kirova, V. & Rossak, W. (1995). A refinement of the ecbs ar-
chitecture constituent. In International Symposium and Workshop on Systems
Engineering of Computer Based Systems , Tucson, Arizona. 77, 79
Ledeczi, A., Maroti, M., Bakay, A., Karsai, G., Garrett, J., Thoma-
son, C., Nordstrom, G., Sprinkle, J. & Volgyesi, P. (2001). The
generic modeling environment. In IEEE International Workshop on Intelligent
Signal Processing (WISP’2001), Budapest, Hungary. 59, 71
Ledoux, T. (1999). Opencorba: A reflective open broker. In L. Springer-Verlag,
ed., Reflection’99 , vol. 1616, France. 17
Lee, J. & Muthig, D. (2006). Feature-oriented variability management in prod-
uct line engineering. Communications of the ACM , 49. 43, 79
Ludewig, J. (2003). Models in software engineering- an introduction. Software
and Systems Modeling , 2, 5–14. 3, 62, 63
Maes, P. (1987). Computational reflection. Ph.D. thesis, Vrije Universiteit. 16
Maia, R., Cerqueira, R. & Kon, F. (2005). A middleware for experimentation
on dynamic adaptation. In 4th Workshop on Adaptive and Reflective
Middleware (ARM2005), Grenoble, France. 94
Marin-Perianu, R., Hartel, P. & Scholten, H. (2005). A classification of
service discovery protocols. Tech. Rep. TR-CTIT-05-25, University of Twente.
113
Mascolo, C., Capra, L. & Emmerich, W. (2002). Mobile computing middleware.
LNCS 2597, 20–58. 113
Mathy, L., Canonico, R. & Hutchinson, D. (2001). An overlay tree building
control protocol. In 3rd International COST264 Workshop on Networked
Group Communication, 76–87. 125
McKinley, P.K., Sadjadi, S.M., Kasten, E.P. & Cheng, B.H. (2004).
Composing adaptive software. IEEE Computer , 37, 56–64. 1
Medvidovic, N. & Taylor, R.N. (2000). A classification and comparison
framework for software architecture description languages. IEEE Transactions
on Software Engineering , 26, 70–93. 48
Mernik, M., Heering, J. & Sloane, A.M. (2005). When and how to develop
domain-specific languages. ACM Comput. Surv., 37, 316–344. 54, 57
MetaCase (2006). Domain-specific modeling with metaedit+. 104
MetaCase (2007). http://www.metacase.com/mep/. 59, 71
Microsoft (1996). Distributed component object model protocol-dcom/1.0.
Tech. rep., Microsoft Corporation. 1, 14
Microsoft (2000). The .net framework. Tech. rep., Microsoft Corporation. 1,
14
Mili, A., Mili, F. & Mili, A. (1995). Reusing software: Issues and research
directions. IEEE Transactions on Software Engineering , 21, 528–562. 37
Monson-Haefel, R. (2000). Enterprise Java Beans . O’Reilly, UK, 2nd edn. 1,
14
Moore, M.M. (2001). Software reuse: silver bullet? Software, IEEE, 18, 86. 38
Morin, B., Fleurey, F., Bencomo, N., Jezequel, J.M., Solberg, A.,
Dehlen, V. & Blair, G. (2008). An aspect-oriented and model-driven ap-
proach for managing dynamic variability. In MODELS’08 Conference, France.
142, 154
MUSIC (2008). http://www.ist-music.eu/music. 145
Nakazawa, J., Tokuda, H., Edwards, W.K. & Ramachandran, U.
(2006). A bridging framework for universal interoperability in pervasive sys-
tems. In Distributed Computing Systems (ICDCS 2006), 3. 73
Nechypurenko, A., Schmidt, D.C., Lu, T., Deng, G. & Gokhale, A.
(2004). Applying mda and component middleware to large-scale distributed
systems: a case study. In 1st European Workshop on Model Driven Architecture
with Emphasis on Industrial Application, Enschede, Netherlands. 68
Neighbors, J.M. (1984). The draco approach to constructing software from
reusable components. IEEE Trans. Softw. Eng., 564–574. 38
Neighbors, J.M. (1998). Domain analysis and generative implementation. In
Fifth International Conference on Software Reuse, 356–357. 56
Nordstrom, G., Sztipanovits, J., Karsai, G. & Ledeczi, A. (1999).
Metamodeling - rapid design and evolution of domain-specific modeling envi-
ronments. In Engineering of Computer-Based Systems (ECBS’99), 68–74. 59,
66, 69
OMG (1995). The common object request broker: Architecture and specification,
tech. report. version 2.0. Tech. rep., Object Management Group. 1, 14
OMG (2002). Uml profile for corba, v 1.0. 68
OMG (2005a). Mof 2.0 / xmi mapping specification, v2.1. 68
OMG (2005b). The object management group. uml 2.0: Superstructure specifi-
cation. version 2.0. 67
OMG (2006a). Catalog of uml profile specifications. 59
OMG (2006b). Omg’s metaobject facility. 68
Ommering, R.v. (2004). Building Product Populations with Software Components.
Ph.D. thesis, Rijksuniversiteit Groningen. 43, 79
Oreizy, P., Rosenblum, D.S. & Taylor, R.N. (1998). On the role of con-
nectors in modeling and implementing software architectures. Tech. Rep. 98-04,
University of California, Irvine. 77, 79
Oreizy, P., Gorlick, M.M., Taylor, R.N., Heimbigner, D., Johnson,
G., Medvidovic, N., Quilici, A., Rosenblum, D.S. & Wolf, A.L.
(1999). An architecture-based approach to self-adaptive software. IEEE Intel-
ligent Systems and Their Applications , 14, 54–62. 139, 140, 144
Owen, C.L. (1997). Understanding design research: Toward an achievement of
balance. Journal of the Japanese Society for the Science of Design, 2, 36–45. 7
7
Owen, C.L. (1998). Design research: Building the knowledge base. Design Studies,
19, 9–20. 7
Parlavantzas, N. (2005). Constructing modifiable middleware with component
frameworks. Ph.D. thesis, Lancaster University. 15
Parlavantzas, N. & Coulson, G. (2007). Designing and constructing modi-
fiable middleware using component frameworks. IET Software. 25
Parlavantzas, N., Coulson, G., Clarke, M. & Blair, G. (2000). To-
wards a reflective component-based middleware architecture. In ECOOP2000
Workshop on Reflection and Meta-level Architectures . 22, 92, 99
Parnas, D. (1972). On the criteria for decomposing systems into modules. Com-
munications of the ACM , 15, 1053–1058. 17
Parnas, D.L., Clements, P.C. & Weiss, D.M. (1989). Enhancing reusabil-
ity with information hiding, Software reusability: concepts and models , vol. 1.
ACM Press, New York, NY. 38
Petersen, K., Bramsiepe, N. & Pohl, K. (2006). Applying variability mod-
eling concepts to support decision making for service composition. vol. 0, 1,
IEEE Computer Society, Los Alamitos, CA, USA. xi, 52
Pohl, K., Bockle, G. & Linden, F.v.d. (2005). Software Product Line
Engineering- Foundations, Principles, and Techniques . Springer. xi, 2, 44,
48, 49, 50, 51, 53, 89
Popper, K.R. (1974). Conjectures and refutations: the growth of scientific
knowledge. Routledge and Kegan Paul, 5th edn. 7
Posnak, E. & Lavender, G. (1997). An adaptive framework for developing
multimedia. Communications ACM , 40, 43–47. 144
Poulin, J.S. (1995). Domain analysis and engineering: How domain-specific
frameworks increase software reuse. In CASE JAPAN’95 , 12–15, Tokyo, Japan.
37
Pressman, R.S. (2005). Software Engineering: A Practitioner’s Approach.
McGraw-Hill, 6th edn. 7
puresystems (2006). Pure-systems gmbh. 59
Redmond, B. & Cahill, V. (2002). Supporting unanticipated dynamic adap-
tation of application behavior. In 16th European Conference on Object-Oriented
Programming (ECOOP 2002), 205–230, Mlaga, Spain. 17
Risi, W.A. & Rossi, G. (2004). Architectural pattern catalogue for mobile web
information systems. International Journal of Mobile Communications, 2, 235–247. 158
Roman, M., Kon, F. & Campbell, R.H. (2001). Reflective middleware: From
the desk to your hand. IEEE DS Online, Special Issue on Reflective Middle-
ware, 2. 14
Ross, D., Goodenough, J. & Irvine, C. (1975). Software engineering: Process,
principles, and goals. IEEE Computer, 8, 17–27. 3, 67
Sabin, D. & Weigel, R. (1998). Product configuration frameworks-a survey.
IEEE Intelligent Systems , 13, 42–49. 61, 62
Satyanarayanan, M. (1996). Mobile information access. IEEE Personal Com-
munications , 3, 26–33. 18
Sawyer, P., Bencomo, N., Grace, P. & Blair, G. (2007a). Handling mul-
tiple levels of requirements for middleware-supported adaptive systems. Tech.
Rep. COMP 001-2007, Lancaster University. 22, 147
Sawyer, P., Bencomo, N., Hughes, D., Grace, P., Goldsby,
H.J. & Cheng, B.H.C. (2007b). Visualizing the analysis of dynamically adaptive
systems using i* and dsls. In REV’07: Second International Workshop on
Requirements Engineering Visualization, Delhi, India. 31, 104, 124, 129, 135,
142, 147, 151, 155
Schmidt, D.C. (1999). Why software reuse has failed and how to make it work
for you. C++ Report, SIGS, Vol. 11, No. 1 , 11. 2, 37, 40
Schmidt, D.C. (2002). Adaptive and reflective middleware for distributed real-
time and embedded systems. In Second International Conference EMSOFT’02 ,
282–293. 17
Schmidt, D.C. (2006). Model driven engineering. IEEE Computer , 25–31. 3, 4,
35, 54, 62, 64, 65
Schmidt, D.C., Schantz, R.E., Masters, M.W., Cross, J.K., Sharp,
D.C. & DiPalma, L.P. (2001). Toward adaptive and reflective middleware
for network-centric combat systems. Crosstalk The Journal of Defense Software
Engineering , 10–16. 34
SEI (1997). Software engineering institute. model-based software engineering. xii,
55, 56
SEI (2006). Computer-aided software engineering (case) environments. 65
Selic, B. (2003). The pragmatics of model-driven development. IEEE Software,
20, 19–25. 3, 64
SEP (2006). Stanford encyclopedia of philosophy. 7
Shaw, M. (1984). Abstraction techniques in modern programming languages. IEEE Software, 1, 10–26. 38
Shaw, M. (2001). The coming-of-age of software architecture research. In Inter-
national Conference on Software Engineering (ICSE’01). 6
Silva-Moreira, R.J.d. (2003). FORMAware: Framework of Reflective Components for Managing Architecture Adaptation. Ph.D. thesis, Lancaster University. 14
Sinnema, M., Deelstra, S., Nijhuis, J. & Bosch, J. (2004). Managing variability in software product families. In 2nd Groningen Workshop on Software Variability Management: Software Product Families and Populations, Groningen, The Netherlands. 40
Siram, N.N., Raje, R.R., Olson, A.M., Bryant, B.R., Burt, C.C. & Auguston, M. (2002). An architecture for the UniFrame resource discovery service. In SEM, 20–35. 73
Sivaharan, T., Blair, G. & Coulson, G. (2005). GREEN: A configurable and re-configurable publish-subscribe middleware for pervasive computing. Agia Napa, Cyprus. 17, 19, 92, 105
Soininen, T. & Stumptner, M. (2003). Guest editorial, special issue: Configuration. AI EDAM, 17, 1–2. 61
Sommerville, I. (2004). Software Engineering. Pearson Addison Wesley, 7th edn. 2, 7, 37, 40
Sora, I., Cretu, V., Verbaeten, P. & Berbers, Y. (2005). Managing vari-
ability of self-customizable systems through composable components. Software
Process: Improvement and Practice, 10, 77–95. 48, 80, 146, 147
Stachowiak, H. (1973). Allgemeine Modelltheorie. Springer-Verlag, Wien. 3, 62
Stahl, T. & Völter, M. (2006). Model-Driven Software Development: Technology, Engineering, Management. xii, 41, 58, 60, 66, 70, 71
Stoica, I., Morris, R., Karger, D., Kaashoek, M. & Balakrishnan, H. (2001). Chord: A scalable peer-to-peer lookup service for internet applications. In ACM SIGCOMM, 149–160. 125
Stropky, M.E. & Laforme, D. (1995). An automated mechanism for effectively applying domain engineering in reuse activities. In International Conference on Ada, 332–340, California, USA. 56
Suenbuel, A. (2003). Correct by construction components or: Would Nasreddin use components? In 29th Euromicro Conference (EUROMICRO'03). 64
Sun (1999). Java Remote Method Invocation (RMI). Tech. rep., Sun Microsystems Corporation. 1, 14
Svahnberg, M., Gurp, J.v. & Bosch, J. (2005). A taxonomy of variability realization techniques. Software: Practice and Experience, 35, 705–754. xiv, 2, 41, 42, 43, 44, 45, 79, 83, 144
Sztipanovits, J. & Karsai, G. (1997). Model-integrated computing. IEEE Computer, 110–112. 66
Szyperski, C. (2002). Component Software: Beyond Object-Oriented Programming. Addison-Wesley / ACM Press. 21, 48
Tanter, É., Noyé, J., Caromel, D. & Cointe, P. (2003). Partial behavioral reflection: Spatial and temporal selection of reification. In Proceedings of the 18th Annual ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA 2003), Anaheim, USA. 17
Tatsubori, M., Chiba, S., Killijian, M.O. & Itano, K. (2000). OpenJava: A class-based macro system for Java. In W. Cazzola, R.J. Stroud & F. Tisato, eds., Lecture Notes in Computer Science: Reflection and Software Engineering, 117–133, Springer-Verlag. 59
Tellis, W. (1997). Introduction to case study [68 paragraphs]. The Qualitative Report [On-line serial], 3(2), July 1997. 7
Tolvanen, J.P. (2006). Domain-specific modeling: How to start defining your
own language. 60
Tolvanen, J.P., Kelly, S., Gray, J. & Lyytinen, K. (2001). OOPSLA workshop on domain-specific visual languages (DSVL'01) - summary and proceedings. Tech. rep., University of Jyväskylä. 59, 66, 69
Trapp, M. (2005). Modeling the Adaptation Behavior of Adaptive Embedded Systems. Ph.D. thesis, University of Kaiserslautern. xii, 80, 81, 82
Truyen, E. (2004). Dynamic and Context-Sensitive Composition in Distributed Systems. Ph.D. thesis, K.U.Leuven. 14
UCD (2007). Teaching & learning research methodologies. Centre for Teaching
and Learning, University College Dublin. 6
Ezran, M., Morisio, M. & Tully, C. (1999). Failure and success factors in reuse programs: a synthesis of industrial experiences. In International Conference on Software Engineering (ICSE'99), 681–682, Los Angeles, USA. 37
Waldrop, M. (1992). Complexity: The Emerging Science at the Edge of Order and Chaos. Simon and Schuster, New York. 81
Wang, N., Schmidt, D.C., Parameswaran, K. & Kircher, M. (2001). Towards a reflective middleware framework for QoS-enabled CORBA component model applications. IEEE Distributed Systems Online, Special Issue on Reflective Middleware. 2
Wang, Y., McIlraith, S.A., Yu, Y. & Mylopoulos, J. (2007). An au-
tomated approach to monitoring and diagnosing requirements. In ASE ’07:
Proceedings of the twenty-second IEEE/ACM international conference on Au-
tomated software engineering , 293–302, ACM, New York, NY, USA. 155
Wegner, P. (1987). Varieties of reusability. In Workshop on Reusability in Programming, 30–44. 38
Wijnstra, J.G. (2000). Supporting diversity with component frameworks as architectural elements. In Proceedings of the International Conference on Software Engineering (ICSE 2000), 51–60. 48
Wijnstra, J.G. (2004). Variation Mechanisms and Multi-View Architecting in Platform-Based Product Family Development. Ph.D. thesis, University of Groningen. 48
Wile, D.S. (2001). Supporting the DSL spectrum. Special Issue on DSLs of the Journal of Computing and Information Technology (CIT), 263–287. 58
Wile, D.S. & Ramming, J.C. (1999). Special issue on domain-specific languages. IEEE Transactions on Software Engineering, 25. 58
Withey, J. (1996). Investment analysis of software assets for product lines. Tech. Rep. CMU/SEI-96-TR-010, Software Engineering Institute, Carnegie Mellon University. 40, 41
Emmerich, W. (2000). Software engineering and middleware: a roadmap. In Proceedings of the Conference on The Future of Software Engineering (ICSE 2000) - Future of SE Track, 117–129, ACM Press, Limerick, Ireland. 159
Wolfinger, R., Reiter, S., Dhungana, D., Grünbacher, P. & Prähofer, H. (2008). Supporting runtime system adaptation through product line engineering and plug-in techniques. In Seventh International Conference on Composition-Based Software Systems (ICCBSS 2008), 21–30. 146, 156
Xia, F. (1997). Software engineering research: A methodological analysis. In
Fourth Asia-Pacific Software Engineering and International Computer Science
Conference (APSEC’97 / ICSC’97), vol. 2, 229 – 236. 6
Yin, R.K. (2003). Applications of Case Study Research. Sage Publications, Inc., 3rd edn. 7