INTERACTING DISCRETE EVENT SYSTEMS:
MODELLING, VERIFICATION, AND
SUPERVISORY CONTROL
by
Sherif Salah Abdelwahed
A thesis submitted in conformity with the requirements for the Degree of Doctor of Philosophy
Department of Electrical and Computer Engineering, University of Toronto
© Copyright by Sherif Salah Abdelwahed 2002
For my parents
Abstract
Within the formal language and automata settings, a modelling and analysis
paradigm for multiprocess discrete event systems is proposed. The modelling struc-
ture, referred to as interacting discrete event systems (IDES), features explicit
representation of the system components. In addition, different forms of interaction
between the components can be represented directly, as an interaction specification,
in the modelling structure. A multilevel extension to the model is introduced. The
composition and decomposition operations for multiprocess systems are extended for
the IDES model to incorporate the interaction specification.
Several approaches are presented for verifying interacting discrete event systems.
The proposed approaches do not require the computation of the synchronous prod-
uct of the system components, and therefore avoid a major bottleneck in this class
of problems. In one approach, the system is verified for internal correctness (non-
blocking) by first detecting potential synchronization conflicts and then checking the
reachability of the underlying states. External verification with respect to a given
specification is also investigated. In certain situations, the interaction information
can be used to solve the external verification problem modularly by converting the
problem into an equivalent set of verification tests addressing the system components
individually. In case the interaction specification does not provide enough informa-
tion to verify the system modularly, an iterative procedure is presented to refine the
interaction specification gradually until a solution is found.
The supervisory control problem is then investigated within the IDES setting.
Two efficient procedures are presented for optimal non-blocking supervisory synthe-
sis for multiprocess systems. In these procedures, the composition of the system
components is avoided by exploring only the common traces between the system and
the specification. The general case of IDES supervision is then investigated. It is
shown that under certain conditions depending on the structure of the system and
the given specification, optimal supervision can be achieved and implemented modu-
larly, that is, using local supervisors for the components and a high-level supervisor
for the interaction specification.
Acknowledgements
I would like to express my gratitude to all those who helped make this work
a reality. In particular, I would like to thank my supervisor Professor W. Murray
Wonham for his support and guidance. I am also grateful for his assistance and
advice during the final preparation of the manuscript.
The System Control Group of the Electrical and Computer Engineering Depart-
ment has provided a stimulating research environment. In particular, I would like
to thank Ryan Leduc, Qian Ken Pu, and Su Rong for the friendly and informed
discussions.
Financial support from the Connaught Fellowship, the Ontario Graduate Scholarship,
and the University of Toronto Doctoral Fellowship is gratefully acknowledged.
Special thanks go to my family for their patience and support. As for my parents,
no words will suffice to thank them. I would never have come so far
without their unconditional support and love.
Contents
1 Introduction 1
1.1 Systems and architecture . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Hierarchical architecture . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Discrete event systems . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.4 Related work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.5 Contribution and outline of the thesis . . . . . . . . . . . . . . . . . . 12
2 Mathematical Preliminaries 17
2.1 Algebraic structures . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.2 Formal languages and automata . . . . . . . . . . . . . . . . . . . . . 20
2.3 DES controllability and supervision . . . . . . . . . . . . . . . . . . . 24
3 Interacting Discrete Event Systems 27
3.1 Preliminaries and notations . . . . . . . . . . . . . . . . . . . . . . . 29
3.2 Behaviour analysis in process space . . . . . . . . . . . . . . . . . . . 30
3.3 Compact language vectors . . . . . . . . . . . . . . . . . . . . . . . . 34
3.4 Language compensation in process space . . . . . . . . . . . . . . . . 38
3.5 Interacting discrete event systems . . . . . . . . . . . . . . . . . . . . 42
3.6 Modelling multiprocess systems . . . . . . . . . . . . . . . . . . . . . 44
3.7 Abstract architectural specification . . . . . . . . . . . . . . . . . . . 47
3.8 Multilevel Interaction Structures . . . . . . . . . . . . . . . . . . . . . 57
3.8.1 System Tree Diagram . . . . . . . . . . . . . . . . . . . . . . . 66
3.8.2 Multilevel Decomposition . . . . . . . . . . . . . . . . . . . . 69
4 Behavioural Aspects of Interacting Discrete Event Systems 71
4.1 Abstraction in process spaces . . . . . . . . . . . . . . . . . . . . . . 73
4.1.1 Automaton-based abstraction . . . . . . . . . . . . . . . . . . 74
4.1.2 Language-based abstraction . . . . . . . . . . . . . . . . . . . 79
4.1.3 Abstraction of interacting discrete event systems . . . . . . . . 89
4.2 Compact interacting discrete event systems . . . . . . . . . . . . . . . 93
4.3 Complete interaction specifications . . . . . . . . . . . . . . . . . . . 96
5 Interacting Discrete Event Systems: Verification 101
5.1 Blocking in process space . . . . . . . . . . . . . . . . . . . . . . . . . 103
5.1.1 Deadlock detection in multiprocess systems . . . . . . . . . . . 105
5.1.2 Livelock detection in multiprocess systems . . . . . . . . . . . 111
5.1.3 Nonblocking interaction specifications . . . . . . . . . . . . . . 122
5.2 Verification of IDES . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
5.3 Iterative verification in process space . . . . . . . . . . . . . . . . . . 134
6 Interacting Discrete Event Systems: Supervisory Control 143
6.1 Supervisory control of multiprocess systems . . . . . . . . . . . . . . 144
6.1.1 Direct synchronization approach . . . . . . . . . . . . . . . . . 145
6.1.2 Detect-first approach . . . . . . . . . . . . . . . . . . . . . . . 148
6.2 Supervisory control of IDES . . . . . . . . . . . . . . . . . . . . . . . 149
7 Conclusions 162
7.1 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
7.2 Future Research . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
A Decomposition and Modularity in Process Space 165
A.1 Decomposition and Modularity . . . . . . . . . . . . . . . . . . . . . 165
A.2 Deterministic Interaction Specifications . . . . . . . . . . . . . . . . . 169
B Supervisory Control of Structured IDES 172
List of Symbols
The following is a list of key symbols in the thesis and the corresponding page
numbers where they first appear.
Symbol Page Description
g ◦ f 18 Composition of the functions f and g
L(Σ) 21 Set of all languages over Σ
α(L) 21 Alphabet of the language L
L 21 Prefix closure of the language L
Elig 21 Eligible events function
Σs 29 Shared events in the process space Σ
L(Σ) 30 Set of all language vectors in Σ
E ,� 30 Componentwise membership and containment
�,� 30 Componentwise union and intersection
BΣ 30 Composition operation in the process space Σ
P Σ 31 Vector projection map in the process space Σ
Lc(Σ) 30 Set of all compact language vectors in Σ
CΣ(L) 39 Set of Σ-compensators for L
CΣ(L) 40 Supremal Σ-compensator for L
I(Σ) 42 Set of all IDES in Σ
PΣ 43 Generalized decomposition operation in Σ
EΣ 48 Index equivalence relation generated by Σ
[σ]Σ 49 Coset of the index equivalence relation containing σ
FΣ 49 Natural abstraction map
Y(Σ) 49 Set of layouts in the process space Σ
KΨ 62 Interaction language generated by the structure Ψ
Ic(Σ) 94 Set of all compact IDES in Σ
CoΣ(L) 122 Set of nonblocking compensators for L
CoΣ(L) 122 Supremal nonblocking compensator for L
List of Definitions
The following is a list of the main definitions used in the thesis and the corre-
sponding page numbers where they are defined.
Definition Page
process space 29
(in)composable string vector 30
compact language vector 34
(Σ-)decomposable language 38
interacting discrete event system (IDES) 42
(abstract) layout 49
natural abstraction map 49
(in)direct abstraction 77
Σ-abstraction map 80
Σ-abstraction map 81
composition invariant map 84
structured interacting discrete event system (SIDES) 91
compact interacting discrete event system 94
complete (interaction) language 96
blocked system 103
live language vector 103
deadlock/livelock state 104
clique vector 114
reachable/terminal/marked clique module 115
live interacting discrete event system 122
modular supervision 149
Chapter 1
Introduction
1.1 Systems and architecture
Technological and economic advances of the twentieth century have created a new
class of artificial systems of unprecedented complexity.[1] Consequently, the issue
of system complexity has attracted the attention of the research community in many
disciplines, including engineering, economics, politics, sociology and biology. While
the context of the complexity problem may vary substantially depending on the physical
attributes of the system and the scope of analysis, the objective is generally the
same, namely to find a way to contain the complexity of the system so that its
analysis becomes feasible.
It is well observed that contemporary real-life systems, in spite of their apparent
complexity, tend to be organized in one way or another, mainly due to the physical
constraints on the design and development process. Many practical systems can
be viewed as a set of interacting subsystems or system components. Each of these
components performs certain tasks and interacts with other components in a specified
way to achieve a global objective. The composition of these subsystems is modelled
by relating the behaviour of the overall system to the behaviour of its individual
[1] The term complex is used hereafter in the usual sense, to mean complicated, huge, or both.
components. This relationship is determined by the way these components interact
and communicate, which in turn defines what is known as the system architecture.
Systems architecture can be viewed as a specification for the system components.
Many practical complex systems can be represented as a set of components together
with an architectural specification that defines how these components interact. An
architectural specification restricts the overall (composite) behaviour of the system
components to follow certain (high-level) patterns.[2] In practice, the identities of the system
components are usually given at the outset. Alternatively, in certain situations, they
might be identified through given decomposition procedures. Architectural specifica-
tions are usually defined at the initial stage of the design and may be refined in the
analysis process.
1.2 Hierarchical architecture
In the context of complex systems analysis, architectural information (specification)
is viewed as an indispensable part of the system model. This information provides the
main support for encapsulation and abstraction of the system behaviour, ultimately
paving the way to contain the complexity of its model. In this regard, hierarchy is
known as the most common architectural scheme supporting this purpose.
In a hierarchically organized system, basic components interact in a distributed,
multilevel way. The system components are grouped into a set of disjoint
modules; each contains a set of components interacting in a specified way. Each
module has its own independent architectural specification. These modules form the
components of the next level in the system organization. The process of clustering
components into modules and abstracting modules into higher-level components is
repeated until the highest level of abstraction is reached at which point the interaction
between all the system components is completely specified.
[2] Architectural restrictions can be viewed as a high-level controller that controls the
external arrangements of the components rather than their internal behaviour.
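The clustering described above, in which components are grouped into modules and modules become the components of the next level up, can be sketched as a simple tree. All names here (`Module`, `factory`, the machine and buffer labels) are invented for illustration, not structures defined in the thesis:

```python
# A minimal sketch of a hierarchical organization: basic components are
# grouped into modules, and modules form the components of the next level.
from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class Component:
    name: str

@dataclass
class Module:
    name: str
    children: List[Union["Module", Component]] = field(default_factory=list)

    def leaves(self) -> List[str]:
        """Collect the basic components at the bottom of the hierarchy."""
        out: List[str] = []
        for c in self.children:
            out.extend(c.leaves() if isinstance(c, Module) else [c.name])
        return out

# two machines and a buffer clustered into each "cell" module; two cells
# clustered into the top-level "factory" module
cell1 = Module("cell1", [Component("m1"), Component("b1"), Component("m2")])
cell2 = Module("cell2", [Component("m3"), Component("b2"), Component("m4")])
factory = Module("factory", [cell1, cell2])

print(factory.leaves())  # all basic components, across both levels
```

The intent of the sketch is only to show how abstraction works level by level: at the top level, `cell1` and `cell2` are themselves treated as components.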
Based on this description, hierarchical architecture can be regarded as an ordered
set of abstract constraints on the system, usually implemented by a multi-level de-
centralized control structure. By distributing architectural constraints on different
levels, the hierarchical scheme allows a set of limited capacity components to han-
dle complex tasks in a stable and manageable way. This scheme provides the basic
support for the decentralized approach to system analysis where, in some situations,
certain procedures involving the overall system can be translated into an equivalent
set of procedures involving the system components individually.
The efficiency of a hierarchical structure is directly related to its ability to repre-
sent the coupling between the system components correctly. The process of clustering
components into modules should match the physical coupling between these compo-
nents. Naturally, or at least ideally, hierarchical systems are organized such that
the coupling (interaction) between the components in a given module is “stronger”
than that between components from different modules. This rule holds for all the
different levels of the hierarchy. Not surprisingly, this natural form of interaction is
an efficient one. In fact, the contrast between strong coupling inside each module and
weak coupling between modules characterizes the efficiency of a hierarchical struc-
ture. Capturing such a requirement in the design of an artificial system and carrying it
through into its model depends largely on the physical attributes of the system, the
design process, and the perceptions of the human modeler about the system and its
environment.
1.3 Discrete event systems
A discrete event system (DES) is a dynamic system with a discrete time and state
space, whose evolution is event-driven (asynchronous) and possibly nondeterministic. Examples of
discrete event systems include communication networks, database systems, traffic
networks, digital circuits and manufacturing systems. In its basic form, a discrete
event system is described by a set of states, possible transitions between them, and
the associated events. In physical systems, events correspond to the actions that can
change the state of the system, that is, trigger a transition between states. This
form of description is usually based on internal observation of the system dynamics.
Alternatively, a discrete event system can be described by logical predicates over a
set of state variables. In this case, an event corresponds to a change to one or more
state variables and, consequently, results in a transition from one state to another. A
discrete event system can also be described externally through its observed behaviour.
This is represented formally by the set of all event sequences that generate state
transitions over time. These event sequences define the language generated by the
system.
In general, behavioural properties of discrete event systems seem more intuitive to
represent and analyze in the formal language setting. This is based on the fact that
the behaviour of a given DES is defined as the language generated by its automaton
model. Therefore, theoretical developments in this thesis are mainly language-based.
However, we will use automaton-based analysis when it appears more appropriate for
the underlying case.
Practical discrete event systems are usually composed of a set of interacting com-
ponents or subsystems. The interaction scheme of the system components is usually
specified in one form or another and given as a part of the system description. In
general, the existence of certain interaction between the system components means
that the behaviour of each component of the system depends not only on its internal
structure but also on the behaviour of the other components. More precisely, the in-
teraction between the system components means that the set of eligible events at any
time in a given component depends not only on the current state of the component
but also on the current states of the other components. The effect of such interaction
is to limit the dynamics of each component. Therefore, from a behavioural point of
view, interaction between the system components can be considered as a restriction
on the overall (composite) system behaviour.[3]

[3] Roughly speaking, such interaction can also be viewed as a controller that forces
the system to operate within a desired region of the system's state-space.
In the absence of any interactions, the system components evolve concurrently
while synchronizing on shared events. This form of operation corresponds to the least
restrictive form of interaction (no-interaction). The overall behaviour of the system
in this case is given by the parallel composition[4] of the system components. In this
case the state space of the composite system is represented by the product of the state
space of the components. As a result, the state space of the composite system may
grow exponentially with the number of interacting components. This phenomenon is
commonly referred to as the state explosion problem. In this situation, even a modest
system can quickly scale beyond common computational facilities and the associated
model becomes intractable.
Clearly, parallel composition is not the only form of interaction in discrete event
systems. In general, system components may interact in a variety of ways in order
to conform to their organizational restrictions. Some of these interactions correspond
to common forms of physical arrangement and hence may be referred to as standard
interactions. For example, serial composition is a well-known form of interaction in
which the system components are arranged in a way such that they can complete their
tasks sequentially without interleaving. In this case, the state space of the composite
system grows polynomially with the number of interacting components. This is one
instance where architectural specification, in effect, reduces the complexity of the
model. In hierarchical discrete event systems, the overall interaction between the
system components is the composition of more than one form of interaction; all are
arranged in a scheme that matches the system hierarchy. In order to describe these
systems efficiently, the underlying modelling paradigm should be able to express and
incorporate a wide range of interaction specifications.
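For instance, serial composition can be sketched on finite sets of completed task strings: the components run one after another without interleaving, so the composite behaviour is (roughly) the concatenation of the component behaviours. The example languages are invented for illustration, and the thesis's formal treatment of serial interaction may differ in detail:

```python
# Serial composition sketched on finite languages of completed strings.
# Because components do not interleave, the composite state count grows
# additively with the components rather than multiplicatively.

def serial_language(lang1, lang2):
    """Every completed string of lang1 followed by every string of lang2."""
    return {s + t for s in lang1 for t in lang2}

L1 = {("load", "drill"), ("load", "drill", "polish")}
L2 = {("test", "ship")}
for s in sorted(serial_language(L1, L2)):
    print(s)
```

Contrast this with the parallel case, where the strings of the two components would be shuffled together subject to synchronization on shared events.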
Practical discrete event systems are usually designed for computer controlled au-
tomatic operation. In this case, correct operation is imperative and may be critical.
Failure may result in financial disaster or in physical and environmental damage. One ap-
proach to avoid these problems is to prove formally the correctness of the system

[4] In this thesis, we use the term parallel composition to denote the synchronous product
operation.

operation by verifying that the system implementation meets a set of specifications
that define the correct behaviour of the system. This approach is usually referred to as
the formal verification approach. Given that the system is correctly modelled, design
bugs can be detected and removed before they percolate down to the implementation
phase. Many algorithms and tools have been developed recently to automate the
verification process. In many cases, these algorithms require exhaustive searching of
the state space of the system and hence may suffer from the state explosion prob-
lem inherited from the model. Therefore, from a practical point of view, they are
computationally infeasible for systems of realistic size.
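The exhaustive-search baseline alluded to here can be sketched as a brute-force nonblocking check: a system is nonblocking iff every reachable state can reach some marked state. The explicit transition-dictionary encoding and the tiny example are assumptions for illustration; on a composed system this search visits the full product state space, which is exactly the bottleneck:

```python
# Brute-force nonblocking verification on an explicit automaton.

def reachable(init, trans):
    """All states reachable from `init` under the transition dict."""
    seen, stack = {init}, [init]
    while stack:
        q = stack.pop()
        for (s, _e), t in trans.items():
            if s == q and t not in seen:
                seen.add(t)
                stack.append(t)
    return seen

def nonblocking(init, trans, marked):
    # from each reachable state, search forward for a marked state
    return all(reachable(q, trans) & marked
               for q in reachable(init, trans))

trans = {(0, "a"): 1, (1, "b"): 2, (1, "c"): 3}  # state 3 is a dead end
print(nonblocking(0, trans, marked={2}))         # False: 3 cannot reach 2
```

Chapter 5 of the thesis develops methods that reach the same verdict without first building the composite transition structure.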
Most discrete event systems exhibit control mechanisms through which certain
parts of the system behaviour can be modified. Based on the physical characteris-
tics of the system, events are regarded as either controllable (they can be disabled by
an external agent) or uncontrollable (they cannot be disabled and are considered
permanently enabled). A control theory of a general class of discrete event systems was initiated and
pioneered by Ramadge and Wonham (RW) [RW87]. Control-theoretic concepts such
as controllability and observability have been introduced and formalized in the DES
setting. Supervisory synthesis procedures have been developed within this frame-
work that guarantee that the controlled system behaviour is contained in a given set
of specifications. It is often possible to meet these specifications optimally, that is, in
a minimally restrictive way. In the current state of the RW framework, the specifications
and the controllers must be represented as finite state structures. The effort of super-
visor synthesis, given the plant and its specification, is of polynomial order in the
state size of the associated models. However, when the system is formed from a set
of components, this state size in general grows exponentially with the number of
components, and synthesis becomes intractable.
Analysis frameworks of discrete event systems deal with a mathematical represen-
tation of the system. In system analysis, it is often the case that the given represen-
tation of the system is the main cause of complexity. Straightforward transcriptions
of complex systems may lead to inefficient representations, while well-chosen
transcriptions yield efficient ones. The efficiency of the system representation can
be measured on one hand by the amount of information it reveals about the system
dynamics and, on the other hand, by how such information is organized to allow di-
rect access to relevant elements of the system structure in order to support modular
analysis. Consequently, an efficient representation for complex discrete event sys-
tems should include well-defined representations for the system components as well
as any interaction between these components. The finite state machine model does
not provide such a representation and is therefore not suitable for representing complex
multiprocess systems. A more structured formulation is needed in order to represent
the interaction dimension of the system structure and to integrate it into the analysis
procedures in a convenient and efficient way.
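One deliberately simplified way to picture such a structured formulation is to keep the components and the interaction specification as separate, first-class parts of the model, with the composite behaviour obtained by filtering the interaction language through the component projections. Everything here is an illustrative assumption, not the thesis's formal IDES definition:

```python
# Illustrative only: a system as component languages plus an interaction
# language. A string belongs to the composite behaviour if the interaction
# spec allows it AND its projection onto each component's alphabet is a
# string of that component.

def project(s, alpha):
    """Erase events outside the alphabet `alpha` (natural projection)."""
    return tuple(e for e in s if e in alpha)

def composite(components, interaction):
    return {
        s for s in interaction
        if all(project(s, alpha) in lang for alpha, lang in components)
    }

# two one-event components plus an interaction spec that permits only
# the ordering in which 'a' happens before 'b'
c1 = ({"a"}, {(), ("a",)})
c2 = ({"b"}, {(), ("b",)})
spec = {(), ("a",), ("a", "b")}
print(sorted(composite([c1, c2], spec)))
```

Keeping the interaction language explicit is what lets later chapters reason about the composite behaviour without ever materializing the product state space.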
This thesis presents a new paradigm for modelling and analysis of multiprocess
discrete event systems that incorporates architectural information explicitly in the
modelling and analysis process. The proposed model supports standard forms of
interaction specification such as parallel, serial, refinement, and interleaving compo-
sitions. The model can be extended to represent the common form of hierarchical
architecture. This research is motivated by, and directed to, practical system de-
velopment at the levels of modelling, design and analysis with special focus on the
verification and supervisory control problems.
1.4 Related work
This section provides a brief overview of related work on modelling and analysis
approaches for complex discrete event systems. We will focus mainly on those ap-
proaches that utilize and incorporate architectural properties of the system in the
modelling and analysis process, particularly in the verification and supervisory con-
trol problem. However, this review is not intended to be a comprehensive survey of
all relevant work.
Multiprocess discrete event systems have recently been the focus of extensive
research. As the number of applications mounts in many fields, particularly in engineer-
ing and computer science, so does the need to develop tools to ensure the correctness
and robustness of these systems and to modify their behaviour when needed in or-
der to cope with usual changes of requirements and tasks. Development of these
tools is usually based on concrete mathematical modelling paradigms that can ex-
press relevant aspects of the system dynamics. In [Shi97], several models for parallel
and concurrent logical systems are reviewed and compared. A general framework to
express parallelism in logical multiprocess systems is introduced through which the
relationship between these modelling formalisms is investigated.
The relation between the model and the observed behaviour of the system is cen-
tral to multiprocess system analysis. This relationship has been addressed recently
in the formal language theory literature. The main aim of this line of research is
to provide a formal representation of the effect of the interaction between the sys-
tem components on the overall system behaviour. In [KK97], the notion of vector
controlled concurrent systems was introduced as a behavioural model for concurrent
systems. In this approach the system is represented by a set of sequential processes
together with a vector synchronization mechanism controlling their mutual synchro-
nization. The idea originated from the theory of path expressions [CH74] and its
vector firing sequence semantics. The notion of vector firing sequences was also in-
troduced in [Shi79] as a semantics for COSY systems [JL92]. In another approach
[MRS96], the operation of shuffle on trajectories is introduced as a generalization of
the parallel composition of asynchronous systems. These works have influenced
the modelling paradigm proposed in this thesis.
Statecharts was proposed in [Har87] as a state-based language that extends con-
ventional state machines with structuring and communication mechanisms. These
mechanisms enable the description of large and complex systems in a more struc-
tured way. In Statecharts, the state-transition diagram is enriched with elements
dealing with notions of hierarchy and concurrency. Hierarchy is achieved by a tree-
like relation on the set of states. Parallelism is supported by a special graphical
convention that allows communication between parallel components. The proposed
modelling framework in this thesis supports a more general interaction scheme that
includes the AND/OR (parallel/refinement) scheme of Statecharts. In addition, in-
teraction information is utilized to enhance the verification and supervisory control
procedures for multiprocess systems.
The Petri nets model [Pet62] is yet another well-known paradigm for modelling and
analyzing concurrent systems. Petri nets are characterized by a graphical and precise
syntax and also by a long record of theoretical results and supporting tools. How-
ever, similar to state machines, ordinary Petri nets lack a hierarchical decomposition
mechanism and suffer from the state explosion problem in their reachability graphs. To
overcome these problems, high-level Petri nets [GL81] and coloured Petri nets [Jen87]
have been proposed. A hybrid technique for both top-down and bottom-up modelling
of Petri nets has been presented in [LPM96]. In the top-down approach the system
is decomposed into a set of subsystems at each level of abstraction. Interconnection
rules are then given for the bottom-up synthesis. However, these approaches also deal
with a specific form of component interaction (that of refinement) and are confined to
the modelling process, with little or no bearing on the analysis procedures.
There are several notable approaches that utilize hierarchical and structured
modelling formalisms to improve the efficiency of the verification procedures. In
[Dil88], a theory of automatic hierarchical verification of speed-independent digital
circuits has been developed. In this approach, the verification task is conducted re-
cursively through multiple applications of a special class of substitution and hiding
operations. In [Kur94], homomorphic reduction of coordination processes was used
to alleviate the complexity associated with the verification of multiprocess systems.
It also introduced a heuristic reduction technique based on task decomposition and
task localization. In the task decomposition process, the specification is decomposed
into a set of local properties, whereas in the task localization step the system model is
reduced to a set of equivalent models with respect to local tasks. In another approach
[AL91], refinement maps were introduced to prove that lower level specifications cor-
rectly implement higher level ones in hierarchical structures.
Iterative verification [BSV93, Kur94] is yet another efficient approach in which
approximation is used to provide a simplified representation of the system. This
simplified representation is then iteratively refined using heuristic methods until the
verification test is affirmed. In Chapter 5, we propose another variant of the iter-
ative verification algorithm that uses the interaction information in the new model
to implement a refinement strategy that is more flexible (and hence more
effective) than the localization strategy used in [BSV93, Kur94], which can only
include or exclude an entire component of the system in the verification check.
Blocking is a major problem in multiprocess discrete event systems. Deadlock
blocking occurs when the system resources have been occupied in such a way that the
set of system components cannot continue their activities. Static reachability analysis
provides a method for automatically detecting blocking situations in multiprocess
systems. Its application in practice, however, is also limited by the state explosion
problem. State space reduction techniques try to avoid searching the entire state space
of the system by exploiting certain regularities in the system structure. Partial order
reduction [God94] is an instance of this approach, in which the cost of representing
concurrency by interleaving is alleviated. This reduction technique is based on the
fact that blocking conditions depend only on the partial order of events rather than on
the particular sequence of these events. In another related approach, symmetry in the
system is traced back to its components and then exploited to reduce the complexity
of the reachability analysis [McD89].
Most blocking detection techniques deal only with multiprocess systems with
parallel components. In this thesis we propose a new approach for blocking detection
in parallel multiprocess systems which can be particularly efficient for loosely cou-
pled systems (with few shared events). The proposed approach is then extended for
multiprocess systems with general interaction specifications.
There is also a growing interest in dealing with the complexity issue in the su-
pervisory control field [Koz94]. In the RW framework, architectural concepts such
as modular, decentralized and hierarchical control have been extensively investigated
[WR88, LW90, ZW90]. There have been several other approaches to incorporate ar-
chitectural information directly into the model and utilize it in the analysis process.
In [BH93] an efficient algorithm is developed for computing the reachability tree for a
restricted version of Statecharts. The algorithm is then used for supervisory control
synthesis in the RW framework. In [Sch92] a cellular DES model was presented in
which the system is viewed as a collection of interconnected DES cells, each of which
identifies a local subsystem. Controllability of the specification structure with respect to
the plant structure is investigated. Conditions for the consistency of the combined
control actions from the controller cells are given. In [Wan95], multiprocess DES are
represented as a set of holons that is mapped into a state tree with two types of
nodes representing the AND/OR expansion schemes. The map is defined such that
the construction, referred to as state tree structure (STS), can provide several levels
and modes of abstractions. Controllability in the STS setting is defined recursively.
An algorithm for designing an STS controller for a given STS plant and a matching
specification is given. In another approach, dynamic consistency is introduced in
[CW95] as a constraint on a form of finite state machine abstraction. In [Als96]
procedural control for chemical processes is examined. Modular supervisory techniques
are used to control systems of industrial scale, particularly by handling certain forms
of sequential and parallel compositions.
Supervisory control of concurrent discrete event systems is investigated in [WH91].
In this work an extension of the decentralized supervisory control [LW90] is proposed
and investigated. The notion of separability is introduced, and it is shown, under
given conditions, that separability is a necessary and sufficient condition for optimal
decentralized control. Part of Chapter 6 in this thesis is a generalization of the re-
sults of this paper. Using the proposed model in this thesis, the decentralized control
scheme can be applied to a more general class of systems where the interaction be-
tween the system components is considered a variable in the model, allowing different
forms of interactions, rather than a fixed one (parallel composition). Also, based on
the proposed model, the separability condition is dropped allowing the decentralized
control structure to be implemented for a more general class of specifications.
1.5 Contribution and outline of the thesis
This thesis introduces a modelling and analysis framework for a general class of
multiprocess discrete event systems with well-defined interaction between the sys-
tem components. The proposed model structure, referred to as interacting discrete
event system (IDES), provides a concrete and separate representation for the system
components and their interaction specifications. In this framework, components and
interactions are modelled and analyzed within the setting of automata and formal
language theory.
The main feature of the IDES framework is the incorporation of the interaction
specification of the system components as a dimension of the model structure. Inter-
action specification can be used as a representation of the system architecture. In the
remainder of this thesis we will use the term “architectural specification” to denote
those interaction specifications that target the external organization of the system
components rather than their internal structures. In the IDES framework, standard
architectural specifications as well as custom ones can be incorporated directly into
the modelling structure. Interaction specification in the IDES structure is directly
linked to the system behaviour. Consequently, the proposed framework provides a
new systematic way to explore the relation between the system architecture and its
behaviour.
The main objective of this thesis is to provide the formal settings and the theo-
retical foundation for:
• modelling a general class of multiprocess discrete event systems with the sup-
port for representing general interaction specifications, standard architectural
schemes, and multi-level hierarchical configurations,
• exploring the relation between the system structure (components and interaction specification) and its behaviour, and
• utilizing structural information to enhance the analysis procedures for this class
of systems with particular emphasis on the verification and supervisory control
problems.
It is important to note that the problems targeted in this thesis can always be
solved directly through exhaustive search over the composite system. However, based
on the IDES modelling structure we implement two alternative (indirect) approaches
to solve these problems while avoiding the computation of the synchronous product
of the system components. The first is to limit the reachability analysis to a region
in the state space that is directly related to the problem at hand. In this approach,
relevant parts of the state space are identified first, then the reachability of these
parts is tested when needed. The worst case complexity of this approach is similar
to that of the direct computation. However, it is shown through examples that
significant computational gain can be achieved in certain special cases.
The other approach is to solve the (verification or supervisory control) problem
modularly by translating the problem into an equivalent set of problems addressing
the system components individually. The advantage of this approach is mainly struc-
tural as it aims to localize the problem over the system structure. This localization is
of significant importance to practical situations where the system is typically subject
to continuous changes and upgrades. Under the modular approach, we need only to
recompute the solution with respect to the changed components of the system rather
than the whole system as will be needed under the direct approach.
The thesis is organized as follows.
Chapter 2 presents basic mathematical concepts in algebra, formal languages,
and automata theory and other facts that will be used throughout the thesis. It also
contains a summary of the basic results of the RW theory.
The IDES model is defined over a formal notion of multiprocess environment re-
ferred to as process space. Chapter 3 introduces the basic elements of process space
and investigates its properties. In this environment basic elements and operations of
single process systems are extended to vector quantities. In this setting a multipro-
cess system is represented by a language vector. The association between the set of
language vectors and the set of languages is defined based on the composition and
decomposition operation. In the light of this association, the range of the decompo-
sition operation is extended in a way that allows every language to be decomposed
in a recoverable way. This is done by compensating for the information lost through
the projection operation.
The IDES model is then introduced as a natural extension of language vectors to
support the extended composition and decomposition operations. Interacting discrete
event systems are represented by a structure of two elements: a language vector
representing the basic components of the system and a language that specifies the
interaction between these components. The extended decomposition (projection)
operation can be used to decompose any given single process system into an IDES
structure that generates the same behaviour (language).
Architectural specifications can be expressed conveniently in the IDES model.
Architectural specifications are usually abstract in the sense that they address the
external organization of the system components. To represent this form of specifica-
tions, an abstraction scheme referred to as the natural abstraction map is proposed.
This scheme preserves the structure of the process space while abstracting internal
details of the system components. Languages generated by this abstraction are re-
ferred to as layouts. Layouts are an extension of the concept of trajectories presented in
[MRS96] and can be used to define abstract architectural specification. Many stan-
dard forms of interactions such as serial and parallel compositions can be represented
as layouts in the IDES structure.
The IDES model can be extended to handle hierarchical systems. To this end,
interaction specification is extended to interaction structure. An interaction structure
is a set of interaction specifications embedded in a tree that matches the organization
of the system. Such organization is modelled as a multi-level process space. It is
shown that multilevel interaction structures can always be transformed into a single
level interaction specification. This transformation can be done directly without
involving the system components. It is also shown that a single process system can
always be converted to a behaviourally equivalent multilevel IDES that matches a
given multilevel process space.
In Chapter 4, abstraction of multiprocess DES is investigated. Abstraction is
essential to contain the complexity of multiprocess systems. In general, abstractions
are used to simplify the system model while preserving enough information to carry
out the required analysis. A class of abstraction schemes that maintain a well-defined
association between the system and its abstract representation is introduced. Such
association is important to adjust the abstraction in a way that reveals only the
needed information about the system behaviour. To ensure effectiveness, the given
abstraction schemes can be computed without the need to compute the composition
of the system components.
In Chapter 5, the verification problem is investigated within the IDES model set-
ting. Blocking detection in multiprocess DES, an internal verification problem, is
first investigated. The detect-first approach is proposed for this problem. In this
approach, potential blocking states are first identified by examining shared eligible
events. Potential blocking states are then tested for their reachability. This approach
explores only those global states that are relevant to the blocking condition. Particu-
larly in loosely coupled systems where there are relatively few shared events compared
with local events, the saving offered by this approach can surpass its overhead and
the overall computation can be significantly less than the direct procedure. This
approach is used for both deadlock and livelock detection.
A formula for computing the supremal non-blocking interaction specification for
a given IDES is presented. This supremal element generates the same behaviour
as the given IDES without blocking. Two approaches are presented for computing
this supremal element without computing the composition of the system components.
The first approach traces the common behaviour of the system components and the
specification and removes blocked states as they are discovered. This approach can
be efficient for highly restrictive specifications where the state space of the composite
IDES system is much smaller than that of the composite system components. The other
approach is a slightly modified version of the detect-first approach which targets the
case when the specification is highly-permissive.
Verifying the system against a set of external specifications is then investigated.
It is shown that transforming the specification into an IDES structure can enhance
the chance of finding a modular solution to the verification problem. In general, it is
established that the system satisfies a given specification if the IDES structure of the system
is included componentwise in the IDES of the specification. However, no confirmation
can be obtained otherwise. To deal with this case, a common iterative technique is
implemented to refine the interaction specification of the system gradually, using
abstraction, until an answer to the verification question is confirmed.
In Chapter 6 the supervisory control problem is investigated within the IDES
setting. Two procedures are presented to find a maximally permissive non-blocking
supervisor for a given multiprocess DES. These procedures are adaptations of the ones
introduced in Chapter 5 to obtain the supremal non-blocking interaction specification
for a given IDES. Similar to the earlier developments, these approaches are particu-
larly useful for the cases of highly-restrictive and highly-permissive specification with
mostly controllable events.
Modular supervision of interacting discrete event systems is then investigated. It
is shown that under certain conditions related to the structure of process space, the
architecture of the system, and the given specification, an optimal supervisor can be
computed and implemented modularly. Another more flexible set of conditions that is
independent of the specification is established for solution in a special class of IDES.
Conclusions and suggestions for future research are presented in Chapter 7.
Chapter 2
Mathematical Preliminaries
This chapter introduces the mathematical concepts used in this thesis. This is also
intended as a statement of notation and terminology of these mathematical concepts.
Standard and straightforward proofs will be omitted. Comprehensive treatments of
these subjects can be found in [MB88, Wec92] for algebra, [HU79, Ber79, RS97] for
formal languages and automata theory, and [Won01] for further information on the
RW framework for discrete event systems control.
2.1 Algebraic structures
Elementary notations of basic set theory will be used throughout this thesis, namely,
membership (∈), set-builder notation ({− | −}), subset (⊆), union (∪), intersection
(∩), set subtraction (−), product of sets (A1 × . . . × An), set power (Pwr(A)), and
cardinality (|A|). The term “iff” will be used as an abbreviation of “if and only if”.
We will not distinguish between the set {a} and the element a if the intention is clear
from the context.
Let A and B be two sets. A relation between A and B is any subset R ⊆ A×B.
If A = B then R is a binary relation on A. For (a, b) ∈ A × B we write aRb if the
ordered pair (a, b) ∈ R. A function f from a set X to a set Y , written f : X → Y ,
is a subset of X × Y such that for each x ∈ X there is exactly one y ∈ Y such that
(x, y) ∈ f . This will be written as f(x) = y. A function f : X → Y is partial if it is
only defined on a subset of X. Associated with each relation R ⊆ X × Y is a partial
function fR : X → Pwr(Y ) defined by fR(x) = {y ∈ Y | (x, y) ∈ R}. A function
f : X → Y is injective if for all x1, x2 ∈ X, f(x1) = f(x2) implies x1 = x2; it is
surjective if for every y ∈ Y there exists x ∈ X such that y = f(x); and it is bijective
if it is both injective and surjective.
For a function f : X → Y , the subset extension of f is the function f∗ : Pwr(X) → Pwr(Y ), defined as
(∀X ′ ⊆ X) f∗(X ′) := {y ∈ Y | (∃x ∈ X ′) (x, y) ∈ f}
Thus f∗(X ′) ⊆ Y . In particular, f∗(X) is the image of f . Usually we do not
distinguish between f and f∗, simply writing f(X ′) for f∗(X ′). The surjective closure
of a function f : X → Y is the function f ′ : X → Y ′ with f(x) = f ′(x) for all x ∈ X
and Y ′ = f(X). The following proposition states some basic properties of functions
that will be used throughout this thesis.
Proposition 2.1. Let f : X → Y be a function. Then,
1. (∀B ⊆ Y ) f(f−1(B)) ⊆ B and equality holds if f is surjective
2. (∀A ⊆ X) A ⊆ f−1(f(A)) and equality holds if f is injective
3. (∀A1, A2 ⊆ X) f(A1 ∪ A2) = f(A1) ∪ f(A2), f(A1 ∩ A2) ⊆ f(A1) ∩ f(A2)
4. (∀B1, B2 ⊆ Y ) f−1(B1 ∪B2) = f−1(B1) ∪ f−1(B2),
f−1(B1 ∩B2) = f−1(B1) ∩ f−1(B2)
□
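As an informal aside (not part of the thesis text), the identities of Proposition 2.1 are easy to check mechanically on small finite sets. The helper names `image` and `preimage` below are ad hoc stand-ins for f∗ and f−1; this is an illustrative sketch only.

```python
# Checking the identities of Proposition 2.1 on a small finite function.
def image(f, A):
    """Subset extension f*: image of A ⊆ dom(f) under f."""
    return {f[x] for x in A}

def preimage(f, B):
    """Inverse image f^{-1}(B)."""
    return {x for x in f if f[x] in B}

# f : {1,2,3} -> {"a","b","c"}; f is neither injective nor surjective
f = {1: "a", 2: "a", 3: "b"}

B = {"a", "c"}
assert image(f, preimage(f, B)) <= B                 # 1: f(f^{-1}(B)) ⊆ B

A = {1}
assert A <= preimage(f, image(f, A))                 # 2: A ⊆ f^{-1}(f(A))
assert preimage(f, image(f, A)) == {1, 2}            # equality fails: f not injective

A1, A2 = {1, 3}, {2, 3}
assert image(f, A1 | A2) == image(f, A1) | image(f, A2)    # 3: image commutes with ∪
assert image(f, A1 & A2) <= image(f, A1) & image(f, A2)    # 3: only ⊆ for ∩
```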
Let f : X → Y and g : Y → Z be two functions. The function composition of f
and g, written g ◦ f : X → Z, is the function defined by
g ◦ f := {(x, z) ∈ X × Z | (∃y ∈ Y ) (x, y) ∈ f & (y, z) ∈ g}
The composition of functions is associative. Let f be a function from X to Y and let
A ⊆ X and B ⊆ Y . We will write f |A to denote the restriction of the domain of f to
A.
Proposition 2.2. Let f : X → Y be a surjective function, and let A ⊆ X and
B ⊆ Y . Then
f(A ∩ f−1(B)) = f(A) ∩B
□
Let R ⊆ A×A be a binary relation on A. The following is a list of some standard
algebraic properties of R.
reflexive: (∀a ∈ A) aRa,
symmetric: (∀a, b ∈ A) aRb =⇒ bRa,
antisymmetric: (∀a, b ∈ A) aRb, bRa =⇒ b = a,
transitive: (∀a, b, c ∈ A) aRb, bRc =⇒ aRc.
Combinations of these properties characterize many important algebraic relational
structures. A relation R over A which is reflexive, antisymmetric, and transitive is
called a partial order on A. In this case the pair (A,R) is called a partially ordered set
or simply a poset. In this thesis, the more convenient notation ≤ will be used for a
partial order relation, with the convention that a < b means a ≤ b and a ≠ b.
Let B be a subset of a poset (A,≤). An element x ∈ A is an upper bound of B if
(∀a ∈ B) a ≤ x; moreover, if x ≤ y for all upper bounds y of B, then x is said to be the
least upper bound for B (also called supremum of B) and denoted sup(B). Dually,
we can define the lower bound and the greatest lower bound (also called infimum of
B) and denoted inf(B).
A lattice is a poset (A,≤) in which every two elements of A have both a least upper
bound and a greatest lower bound. A lattice (A,≤) is complete if for any subset B of
A both sup(B) and inf(B) exist. Hence, every finite lattice is complete by definition.
For a nonempty complete lattice (A,≤), sup(A) is called the top element, and denoted
⊤. Dually, inf(A) is called the bottom element, and denoted ⊥.
A relation E on A which is reflexive, symmetric and transitive is called an equiv-
alence relation on A. The convention a ≡E b may be used instead of aEb. For
a ∈ A the equivalence class of a generated by the equivalence relation E is the set
[a]E = {a′ ∈ A | a′ ≡E a}. The subscript in [a]E may be dropped if known from
the context. The set [a]E is also referred to as the coset of a with respect to E.
The collection of all cosets of A with respect to the equivalence relation E defines a
partition of A and will be denoted A/E. The set of all equivalence relations on A is
denoted E(A). A partial order ≤ can be defined on E(A) as follows
(∀E1, E2 ∈ E(A)) E1 ≤ E2 ⇐⇒ (∀a, b ∈ A) aE1b ⇒ aE2b
In the poset (E(A),≤) the supremum and infimum of any subset of E(A) always exist.
The infimum of a set T ⊆ E(A) is ⋂T . The supremum of T is the transitive closure
of ⋃T . Consequently the structure (E(A),≤) forms a complete lattice. For the set
A, any function f : A → B induces an equivalence relation ker f ∈ E(A) referred to
as the equivalence kernel of the function f , given by
(a1, a2) ∈ ker f ⇐⇒ f(a1) = f(a2)
Similarly, any E ∈ E(A) defines a canonical output map E : A → A/E which maps
each element a ∈ A to its equivalence class [a]E.
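To make the lattice operations on E(A) and the equivalence kernel concrete, here is a small illustrative sketch (the representation of relations as sets of ordered pairs and all names are ours, not the thesis's): the infimum of two equivalence relations is their plain intersection, while the supremum is the closure of their union.

```python
# Equivalence relations on a finite set A, stored as sets of ordered pairs.
from itertools import product

def closure(pairs, A):
    """Smallest equivalence relation on A containing the given pairs."""
    R = set(pairs) | {(a, a) for a in A} | {(b, a) for (a, b) in pairs}
    changed = True
    while changed:                      # transitive closure by saturation
        changed = False
        for (a, b), (c, d) in product(list(R), repeat=2):
            if b == c and (a, d) not in R:
                R.add((a, d))
                changed = True
    return R

A = {1, 2, 3, 4}
E1 = closure({(1, 2)}, A)          # partition {1,2} {3} {4}
E2 = closure({(2, 3)}, A)          # partition {2,3} {1} {4}

meet = E1 & E2                     # infimum: intersection of the relations
join = closure(E1 | E2, A)         # supremum: closure of the union

assert (1, 3) not in meet and (1, 3) in join

# equivalence kernel of a function f: (a1, a2) ∈ ker f iff f(a1) = f(a2)
f = {1: "x", 2: "x", 3: "y", 4: "y"}
ker_f = {(a, b) for a in A for b in A if f[a] == f[b]}
assert ker_f == closure({(1, 2), (3, 4)}, A)
```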
2.2 Formal languages and automata
An alphabet Σ is a finite nonempty set of symbols which may be called also letters
or events. A string or word is a finite sequence of events. We will write Σ+ for the set of all
nonempty finite strings with events in Σ, and Σ∗ = Σ+ ∪ {ε}, where ε ∉ Σ denotes
the empty string. With catenation, cat, as the product operation, Σ∗ becomes a
monoid. A language over the alphabet Σ is any subset of Σ∗. The set of all languages
over the alphabet Σ will be denoted L(Σ). The complement of a language L will
be denoted Lc. The alphabet of a language L, denoted α(L), is the set of letters
forming strings in L. A string t ∈ Σ∗ is a prefix of s ∈ Σ∗, denoted t ≤ s, if there
exists u ∈ Σ∗ such that tu = s. The prefix closure of a language H ⊆ Σ∗, denoted
H̄, is the set of all strings in Σ∗ that are prefixes of strings in H. This can also be
represented by the map pre : L(Σ) → L(Σ) : H ↦ H̄. The eligible events function
over a prefix-closed language H ⊆ Σ∗ maps each string in H to the set of its eligible
events. Formally
EligH : H → Pwr(Σ) : s ↦ {σ ∈ Σ | sσ ∈ H}
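For finite languages these two operations can be prototyped directly; the following sketch (with ad hoc helper names, not the thesis's notation) treats strings as Python strings over single-character events.

```python
# Prefix closure and the eligible-events function for finite languages.
def pre(H):
    """Prefix closure of a finite language H (single-character events)."""
    return {s[:i] for s in H for i in range(len(s) + 1)}

def elig(H_closed, s):
    """Eligible events after string s in the prefix-closed language H_closed."""
    return {w[len(s)] for w in H_closed
            if len(w) == len(s) + 1 and w.startswith(s)}

H = {"abc", "ad"}
Hbar = pre(H)
assert Hbar == {"", "a", "ab", "abc", "ad"}
assert elig(Hbar, "a") == {"b", "d"}     # after "a", both b and d are eligible
assert elig(Hbar, "abc") == set()        # "abc" has no continuation in the closure
```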
Let R be an equivalence relation on Σ∗. R is a right congruence on Σ∗ if, for all s, t,
u ∈ Σ∗, sRt =⇒ (su)R(tu). In other words, R is a right congruence if and only if
the cosets of R are invariant with respect to the operation of right catenation. The
Nerode equivalence relation on Σ∗ with respect to the language L is denoted ≡L and
defined as follows
(∀s, t ∈ Σ∗) s ≡L t ⇐⇒ (∀u ∈ Σ∗) su ∈ L ⇔ tu ∈ L
That is, s ≡L t iff s and t can be continued in exactly the same ways (if any) to form
a string in L. It can be proved that ≡L is a right congruence.
Let L1 ⊆ Σ∗1 and L2 ⊆ Σ∗2, where Σ1 and Σ2 are not necessarily disjoint. Let
Σ = Σ1 ∪ Σ2. The natural projection operation from Σ∗ to Σ∗i is a map Pi : Σ∗ → Σ∗i
defined as follows

Pi(ε) = ε
Pi(σ) = ε if σ ∉ Σi
Pi(σ) = σ if σ ∈ Σi
Pi(sσ) = Pi(s)Pi(σ)
for all s ∈ Σ∗, σ ∈ Σ and i ∈ {1, 2}. The action of Pi is to erase all occurrences of
σ ∈ Σ−Σi from s. It is straightforward to check that L ⊆ P−1i (Pi(L)) holds for any
language L ⊆ Σ∗, and that Pi(P−1i (L)) = L holds for any language L ⊆ Σ∗i.
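A minimal sketch of the natural projection for illustration (the bounded-length inverse projection is an ad hoc device of ours to keep the enumeration finite; it is not part of the thesis's definitions):

```python
# Natural projection P_i and a bounded inverse projection for illustration.
from itertools import product

def project(s, sigma_i):
    """P_i: erase every event of s that is not in sigma_i."""
    return "".join(e for e in s if e in sigma_i)

def inv_project(T, sigma, sigma_i, max_len):
    """P_i^{-1}(T), restricted to strings over sigma of length <= max_len."""
    return {"".join(w)
            for n in range(max_len + 1)
            for w in product(sorted(sigma), repeat=n)
            if project("".join(w), sigma_i) in T}

sigma, sigma1 = {"a", "b"}, {"a"}
assert project("abba", sigma1) == "aa"

L = {"ab"}
up = inv_project({project(s, sigma1) for s in L}, sigma, sigma1, 2)
assert "ab" in up                    # L ⊆ P^{-1}(P(L)), up to the length bound
assert "ba" in up and "aa" not in up
```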
Proposition 2.3. Let P : Σ∗ → Σ∗o be a natural projection. Then,
P ◦ pre = pre ◦ P, P−1 ◦ pre = pre ◦ P−1
□
This is to say that natural projections and their inverses are invariant with respect
to the prefix closure operation. The synchronous product of L1 ⊆ Σ∗1 and L2 ⊆ Σ∗2 is
defined according to

L1 ‖ L2 = P−11 (L1) ∩ P−12 (L2)
Intuitively, L1‖L2 represents the concurrent evolution of L1 and L2 with synchro-
nization on common events. The synchronous product operation can be defined
incrementally as follows. Let L1 ⊆ Σ∗1, u ∈ Σ∗1, a ∈ Σ1, L2 ⊆ Σ∗2, v ∈ Σ∗2, b ∈ Σ2.
Then

u ‖ ε = u
ε ‖ v = v
ua ‖ vb = (u ‖ vb)a ∪ (ua ‖ v)b   if a ∈ (Σ1 − Σ2) and b ∈ (Σ2 − Σ1)
ua ‖ vb = (u ‖ v)({a} ∩ {b})   otherwise

L1 ‖ L2 = ⋃u∈L1, v∈L2 (u ‖ v)
The above definition of synchronous product conforms with the way this product is
computed in practice.
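The incremental definition above translates almost directly into a recursive procedure on strings. The following sketch (ours, for finite languages over single-character events) also makes explicit that a shared event with no matching partner yields the empty set:

```python
# Synchronous product of two strings / finite languages, computed recursively.
def sync(u, v, s1, s2):
    """All interleavings of u (over alphabet s1) and v (over s2),
    synchronizing on events shared by s1 and s2."""
    if not u and not v:
        return {""}
    out = set()
    if u and u[0] not in s2:                          # event local to component 1
        out |= {u[0] + w for w in sync(u[1:], v, s1, s2)}
    if v and v[0] not in s1:                          # event local to component 2
        out |= {v[0] + w for w in sync(u, v[1:], s1, s2)}
    if u and v and u[0] == v[0] and u[0] in s1 & s2:  # shared event: both move
        out |= {u[0] + w for w in sync(u[1:], v[1:], s1, s2)}
    return out

def sync_lang(L1, L2, s1, s2):
    return {w for x in L1 for y in L2 for w in sync(x, y, s1, s2)}

s1, s2 = {"a", "c"}, {"b", "c"}
assert sync("ac", "bc", s1, s2) == {"abc", "bac"}   # a, b interleave; c synchronizes
assert sync("a", "b", s1, s2) == {"ab", "ba"}
assert sync("c", "", s1, s2) == set()               # shared c needs both sides
```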
Let Σ and ∆ be two finite alphabets. A morphism is a mapping h : Σ∗ −→ ∆∗
satisfying h(ε) = ε and for all s, t ∈ Σ∗, h(st) = h(s)h(t). Clearly, h(Σ∗) is a
submonoid of ∆∗. The morphism h is said to be alphabetic if
(∀σ ∈ Σ) h(σ) ∈ ∆ ∪ {ε}
Therefore, projections are a special class of alphabetic morphisms where ∆ ⊆ Σ. A
substitution over Σ∗ × ∆∗ is a morphism from Σ∗ to L(∆), namely, a mapping ϕ :
Σ∗ −→ L(∆) satisfying ϕ(ε) = ε and for all s, t ∈ Σ∗, ϕ(st) = ϕ(s)ϕ(t). By definition,
morphisms and substitutions are completely defined by specifying only the image of
each letter in Σ under the mapping. Semigroup morphisms are a generalization
of (monoid) morphisms in which only invariance with respect to catenation is
required.
Proposition 2.4. Let h : Σ∗ −→ ∆∗ be a surjective alphabetic morphism. Then h−1
is a semigroup substitution.
Proof. Let R = h−1(ε) ∩ Σ and Rδ = h−1(δ) ∩ Σ for all δ ∈ ∆. Then h−1(ε) = R∗
and h−1(δ) = R∗RδR∗ for all δ ∈ ∆. Hence, for all s ∈ ∆∗ with s = δ1 . . . δn where
δi ∈ ∆ for i ∈ [1 . . . n], we can write

h−1(s) = h−1(δ1) . . . h−1(δn) = R∗Rδ1R∗Rδ2R∗ . . . R∗RδnR∗

This shows that h−1(s1s2) = h−1(s1)h−1(s2) for all s1, s2 ∈ ∆∗. Hence, h−1 is a
semigroup morphism from ∆∗ to L(Σ).
An automaton is a 5-tuple structure (Q, Σ, δ, qo, Qm), where Q is a finite set of
states, Σ is a finite nonempty set of events, δ : Q × Σ → Q is a (partial) transition
function, qo ∈ Q is the initial state, and Qm ⊆ Q is a nonempty set of marker states.
A transition in an automaton A is any element of δ, and may be denoted simply by
the triple (q, σ, q′) (instead of ((q, σ), q′)) where δ(q, σ) = q′. If δ(q, σ) is defined, then
we say that σ is eligible at q in A. This can also be captured by the overloaded map
EligA : Q → Pwr(Σ) which assigns to each state in A the set of eligible events. The
map δ is extended to strings in the usual way. For two states q, q′ in Q we say that
q′ is reachable from q in A if there exists a string s ∈ Σ∗ such that q′ = δ(q, s). We
will write ℛA(q) to denote the set of states reachable from q in A.
Let A = (Q, Σ, δ, qo, Qm) be an automaton. A state q ∈ Q is said to be reachable
if it can be reached from the initial state, that is, if q ∈ ℛA(qo), and A is reachable
if all its states are reachable. A state q is said to be coreachable if it can reach one
of the marker states, that is, if there exists a state qm ∈ Qm such that qm ∈ ℛA(q),
and A is coreachable if all its states are coreachable. If all the states in A are both
reachable and coreachable then A is said to be trim. The closed language generated
by an automaton A is L(A) = {s ∈ Σ∗ | δ(qo, s) is defined}, and the marked language
of A is Lm(A) = {s ∈ Σ∗ | δ(qo, s) ∈ Qm}. If A is reachable and coreachable, then
L̄m(A) = L(A).
For a regular language L ⊆ Σ∗ there is a unique (up to isomorphism) minimal
trim automaton that generates L. This automaton is denoted A(L). Clearly, in this
case L = Lm(A(L)). Let A(L) be the tuple (Q, Σ, δ, qo, Qm). The restriction of δ to
{qo} × Σ∗ can be represented by a map ϑ : L̄ → Q that assigns to each string s ∈ L̄ the
state q ∈ Q with δ(qo, s) = q. It can be verified that the map ϑ is well-defined. It can also be
verified that EligL̄(s) = EligA(L)(ϑ(s)) for any s ∈ L̄.
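The reachability notions of this section can be prototyped with a straightforward search; the class below is an illustrative sketch only (the representation of δ as a Python dict, and all names, are our choices):

```python
# Reachable / coreachable / trim checks for a finite automaton.
class Automaton:
    def __init__(self, states, events, delta, q0, marked):
        self.states, self.events = states, events
        self.delta, self.q0, self.marked = delta, q0, marked  # delta: (q, e) -> q'

    def reach(self, start):
        """States reachable from `start` by following transitions forward."""
        seen, stack = {start}, [start]
        while stack:
            q = stack.pop()
            for (p, e), r in self.delta.items():
                if p == q and r not in seen:
                    seen.add(r)
                    stack.append(r)
        return seen

    def coreachable(self, q):
        """True iff some marker state is reachable from q."""
        return bool(self.reach(q) & self.marked)

    def is_trim(self):
        reachable = self.reach(self.q0)
        return reachable == self.states and all(self.coreachable(q) for q in self.states)

# 0 --a--> 1 --b--> 2 (marked); state 3 is unreachable, so A is not trim
A = Automaton({0, 1, 2, 3}, {"a", "b"}, {(0, "a"): 1, (1, "b"): 2}, 0, {2})
assert A.reach(0) == {0, 1, 2}
assert A.coreachable(1) and not A.is_trim()
```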
2.3 DES controllability and supervision
Practical discrete event systems usually exhibit a control mechanism that allows their
behaviour to be modified. In the RW framework, the control information is incor-
porated into the system model by partitioning the set of system events Σ into two
disjoint subsets: Σc denoting the set of controllable events, and Σu denoting the set
of uncontrollable events. Controllable events are those events that can be disabled by
an external agent while the uncontrollable ones are considered permanently enabled.
Let G = (Q, Σ, δ, Qo, Qm) be a DES with Σ = Σc ∪Σu. A supervisor for G can be
viewed as an external agent that observes the behaviour of the system G and enables
or disables any of the controllable events at any point of time. Formally, a supervisor
V for a system G is a function

V : L(G) → Γ := {γ ∈ Pwr(Σ) : Σu ⊆ γ}
The function V assigns to each string s ∈ L(G) the set of enabled events that can be
executed. Uncontrollable events are always enabled. The language of the controlled
system, denoted L(V/G), is completely defined by the function V and L(G). The language
L(V/G) can be defined recursively as follows
• ε ∈ L(V/G)
• s ∈ L(V/G), sσ ∈ L(G), and σ ∈ V (s) ⇔ sσ ∈ L(V/G)
• No other strings belong to L(V/G)
The marked behaviour of G under the supervision of V is
Lm(V/G) = L(V/G) ∩ Lm(G)
The supervisor V is said to be nonblocking if

L̄m(V/G) = L(V/G)

In this case, V allows every string in the controlled behaviour to be extended to a
marked string of Lm(G).
Typically, the control problem requires constructing a supervisor V for the system
G such that the controlled behaviour of G follows some given specifications. The
solution for the supervisory control problem is based on the controllability property.
A language K ⊆ Σ∗ is said to be controllable with respect to L(G) if
K̄Σu ∩ L(G) ⊆ K̄
That is, K is controllable if and only if no string in L(G) that is already a prefix of
K, when followed by an uncontrollable event eligible in G, exits from the set of
prefixes of K. In other words, K is controllable with respect to L if
(∀s ∈ K̄)(∃Σ′ ⊆ Σ, Σu ⊆ Σ′) Σ′ ∩ EligL(s) = EligK̄(s)
Clearly, L(V/G) is controllable with respect to L(G) and it is prefix closed by defini-
tion. RW theory guarantees that, for a system G any nonempty and closed control-
lable specification K ⊆ L(G) can be implemented by a supervisory controller so that
the controlled behaviour is K.
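For finite languages the controllability condition K̄Σu ∩ L(G) ⊆ K̄ can be checked by enumeration; the following sketch (ad hoc names, single-character events, ours rather than the thesis's) is for illustration only:

```python
# Enumerative controllability check for finite languages.
def prefixes(K):
    """Prefix closure of a finite language."""
    return {s[:i] for s in K for i in range(len(s) + 1)}

def is_controllable(K, L_closed, sigma_u):
    """RW condition: pre(K)·Σu ∩ L(G) ⊆ pre(K)."""
    Kbar = prefixes(K)
    return all(s + u in Kbar
               for s in Kbar for u in sigma_u if s + u in L_closed)

# closed plant behaviour: after "a", uncontrollable "u" may occur
L = {"", "a", "au", "ab"}
assert not is_controllable({"ab"}, L, {"u"})  # "a" in pre(K), "au" in L, "au" not in pre(K)
assert is_controllable({"au"}, L, {"u"})      # the uncontrollable continuation is kept
```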
Let E be a language representing a specification for G, where E is possibly uncontrollable
with respect to G. Define CG(E) to be the set of all sublanguages of E
that are controllable with respect to G. Formally,
CG(E) = {K ⊆ E |K is controllable with respect to G}
The set CG(E) is nonempty (it contains the empty language) and closed under arbitrary
unions. In particular, CG(E) contains a unique supremal element called the supremal
controllable sublanguage of E with respect to G written as sup CG(E). An efficient
fixpoint algorithm was developed in [RW87] to compute this supremal language. The
automaton generating the supremal controllable language can then be used as a
supervisor for the system.
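The fixpoint idea behind sup CG(E) can be illustrated on finite languages: repeatedly remove strings that pass through a prefix violating controllability, until nothing changes. This naive sketch is ours and conveys only the iteration; it is not the efficient automaton-based algorithm of [RW87].

```python
# Naive fixpoint computation of the supremal controllable sublanguage.
def prefixes(K):
    """Prefix closure of a finite language."""
    return {s[:i] for s in K for i in range(len(s) + 1)}

def sup_controllable(E, L_closed, sigma_u):
    """Iteratively remove strings whose prefixes violate controllability."""
    K = set(E)
    while True:
        Kbar = prefixes(K)
        bad = {t for t in Kbar
               for u in sigma_u if t + u in L_closed and t + u not in Kbar}
        newK = {s for s in K
                if not any(s[:i] in bad for i in range(len(s) + 1))}
        if newK == K:
            return K
        K = newK

# closed plant behaviour; "u" is uncontrollable and eligible after "a"
L = {"", "a", "au", "b"}
assert sup_controllable({"a", "b"}, L, {"u"}) == {"b"}       # "a" must be pruned
assert sup_controllable({"a", "au"}, L, {"u"}) == {"a", "au"}  # already controllable
```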
Chapter 3
Interacting Discrete Event Systems
This chapter introduces a formal model for a general class of multiprocess discrete
event systems with well-defined inter-process interactions. First, the formal setting
for multiprocess DES environments is introduced. A multiprocess environment is
referred to here as process space. In this environment basic notions of automata
and formal languages are extended to vector quantities. A process space is defined
by an alphabet vector identifying all possible concurrent events in the underlying
multiprocess environment. In this setting a language vector is a set of languages
representing the behaviour of a set of processes. In the absence of any restrictions,
these processes evolve concurrently while synchronizing their shared events.
In a multiprocess environment, the connection between the set of multiprocess sys-
tems and the set of all possible behaviours in this environment is established through
the composition and the decomposition operations. In this chapter, we investigate
two procedures directly related to these operations. The first one removes redundant
string vectors from a given language vector so that all parts of the system compo-
nents contribute to the overall behaviour of the system. The result is a compact
language vector where each system component can be recovered directly from the
system behaviour using the corresponding projection operation. The second pro-
cedure compensates the information lost in the decomposition operation so that a
given language can be recovered from its set of projections over the process space.
It is shown that every language admits a supremal compensator. However, the
efficiency of this construction varies substantially from one system to another, and in
general characterizes how the given system “matches” the process space structure.
A general model for multiprocess discrete event systems is then introduced. The
model, referred to as one of interacting discrete event systems (IDES), consists
of two basic elements: a language vector representing the system components, and a
specification language representing the interactions between these components. The
overall behaviour of the system is obtained by restricting the free run of the system
components to the interaction specification.
Analysis of discrete event systems usually involves behaviour comparison in one
way or another. However, the behaviour of a multiprocess system is obtained through
the synchronous composition operation which is computationally intractable. Under
the IDES modelling framework, alternative approaches to behaviour analysis of mul-
tiprocess DES can be formulated where the computation of the synchronous product
of the system components can be avoided or reduced using a simpler representation
of these components.
In interacting discrete event systems the interaction specification can be viewed
as a representation of the system architecture. Architectural specifications, how-
ever, usually address the external organization of the system components rather than
their internal structures. To capture this observation, a class of abstraction maps is
proposed. This class preserves the process space structure while abstracting low level
details about the components. The corresponding abstract behaviour can be used to
represent typical architectural specifications such as serial and parallel interactions.
A hierarchical form of interaction specifications, referred to as interaction struc-
tures, is then introduced. Interaction structures combine several interaction spec-
ifications in a multilevel scheme similar to the organization of typical hierarchical
systems. A multi-level interaction structure can always be converted to a flat inter-
action specification that has the same overall effect on the system components.
3.1 Preliminaries and notations
A process space is a structure that defines a multiprocess environment where a set of
processes can run concurrently. To describe and analyze systems in process spaces,
basic notions of formal language theory will be extended to vector quantities. In the
usual way, a vector quantity refers to an element of a Cartesian product of sets.
Let I be the index set of a collection of processes. An alphabet vector over I is
a set Σ = {Σi | i ∈ I} of alphabets. Bold letters will be used to distinguish vector
quantities as well as functions with a set of vectors as a codomain (range). The ith
component of a vector will be referred to by the normal (unbolded) symbol with the
subscript i. For example, the ith component of the vector u is denoted ui.
Let Σ = {Σi | i ∈ I} be an alphabet vector. The union of all alphabets in Σ will
be denoted by the normal (unbolded) symbol Σ, or by α(Σ) if any confusion may
arise. A process space is uniquely defined by its alphabet vector; hence either term designates the same entity. For a subset J of I, we will write ΣJ to denote the set of
all events in the J components in Σ, namely, the set ∪j∈JΣj. Therefore, under this
notation we have ΣI = Σ and Σ{i} = Σi. We will write Σs or αs(Σ) for the set of
shared (synchronous) events in Σ, namely,
Σs = ∪i≠j (Σi ∩ Σj)
The set of asynchronous (local) events, Σ− Σs, will be denoted Σa or αa(Σ).
Let Σ be a process space with index set I. A string vector over Σ is a set
s = {si ∈ Σ∗i | i ∈ I}. The empty string vector over I is the set εI = {ε | i ∈ I}. The
subscript I may be dropped if it is clear from the context. An event vector over Σ is
a set σ = {σi ∈ Σi ∪ {ε} | i ∈ I} such that σ ≠ ε. For the string vectors v and u,
the componentwise catenation of u and v is denoted cat(u,v) or simply uv.
A language vector over the process space Σ is a set L = {Li ⊆ Σ∗i | i ∈ I}.
Componentwise catenation can be defined over the set of language vectors in the
usual way. The set of all language vectors over Σ is denoted L(Σ). Recall that
L(Σ) denotes the set of all languages over the alphabet Σ. A language L ∈ L(Σ)
is considered a language vector L with a singleton element over the alphabet vector
Σ = {Σ}, and vice versa, so that L = {L}. For two language vectors L′ and L over
Σ, L′ is said to be a language subvector or simply a subvector of L if L′i ⊆ Li for
all i ∈ I. In this case we write L′ ⊑ L. The componentwise membership is denoted
E. Therefore, s E L if for all i ∈ I we have si ∈ Li. The componentwise union and
componentwise intersection of L′ and L are denoted L′ ⊔ L and L′ ⊓ L, respectively.
For a language vector L and a natural projection operation Po : Σ∗ → Σ∗o, we will
write PoL for the componentwise projection of L. That is, PoL = {PoLi | i ∈ I}. Let
F : L(Σ) → L(Σ) be a map. With a slight abuse of notation, we will write F (L) for
the vector language {F (Li) | i ∈ I}. The synchronous product of the components of
the language vector L will be denoted ‖L, or BΣ(L).
A string vector s in the process space Σ is said to be composable if BΣ(s) ≠ ∅; otherwise it is called incomposable. Consider, for instance, the process space Σ = {{a, x, z}, {b, x, y}, {c, z, y}}. In this process space, the string vector s = {axz, xby, zyc} is composable with BΣ(s) = {axbzyc, axzbyc}, while the string vector t = {ax, by, cy} is incomposable as BΣ(t) = ∅. Clearly, the composability of a
string vector is directly linked to the order of shared events in its components.
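The composability example above can be checked mechanically. The following sketch (the helper names are illustrative, not the thesis's notation) implements the natural projection and a brute-force, length-bounded synchronous product for finite languages:

```python
from itertools import product

def proj(s, alphabet):
    """Natural projection P_i: erase every event of s not in `alphabet`."""
    return "".join(c for c in s if c in alphabet)

def sync_product(vector, alphabets, max_len):
    """Brute-force synchronous product: every string over the union
    alphabet (up to max_len) whose projection onto each component
    alphabet belongs to that component's language."""
    events = sorted(set().union(*alphabets))
    result = set()
    for n in range(max_len + 1):
        for tup in product(events, repeat=n):
            w = "".join(tup)
            if all(proj(w, A) in L for A, L in zip(alphabets, vector)):
                result.add(w)
    return result

alphabets = [{"a", "x", "z"}, {"b", "x", "y"}, {"c", "z", "y"}]
s = [{"axz"}, {"xby"}, {"zyc"}]   # composable string vector
t = [{"ax"}, {"by"}, {"cy"}]      # incomposable string vector

print(sorted(sync_product(s, alphabets, 6)))  # ['axbzyc', 'axzbyc']
print(sorted(sync_product(t, alphabets, 6)))  # []
```

The enumeration bound of 6 suffices here because every event of the vector s occurs exactly once in any composed string; the sketch is meant only to confirm small examples, not to be efficient.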
3.2 Behaviour analysis in process space
Behavioural analysis problems in the multiprocess space environment usually share
two main arguments, namely, a system consisting of a set of concurrent processes,
and a specification defining certain tasks or correctness criteria for the system. This
applies to both the verification and the supervisory control problems investigated in
this thesis. It is then required to check if the system satisfies the given specification and, if not, to check whether the system behaviour can be restricted through supervision to satisfy the specification. In both situations the system behaviour has to be
compared to the specification. This comparison is based on the containment order
on the set of languages.
In multiprocess environments, a system of concurrent processes is represented by
a language vector while the specification is usually given as a language. Direct com-
parison between the two domains is not possible. Therefore, a transformation from
one domain to the other is necessary for such comparison. Typically, the composi-
tion operation is used to generate the language representing the behaviour of a given
language vector while the decomposition operation generates the vector projection
of a given language. Formally, these operations are represented by two maps, each transforming one domain into the other.
In process space, the decomposition operation is represented formally as follows.
For a process space Σ over an index set I, the vector projection map P Σ : L(Σ) →L(Σ) associates each language L ∈ L(Σ) with the language vector {PiL | i ∈ I} where
Pi : Σ∗ → Σ∗i is the natural projection map that erases all events other than those
of the ith component of Σ. Therefore, for the process space Σ the language vector
P Σ(Σ∗) contains (componentwise) all the string vectors that can be generated in this
process space. For two languages L and L′ in L(Σ) it is straightforward to verify the
following result
Proposition 3.1.
PΣ(L ∩ L′) ⊑ PΣ(L) ⊓ PΣ(L′) and PΣ(L ∪ L′) = PΣ(L) ⊔ PΣ(L′)

□
On the other hand, the composition process is represented by the vector composi-
tion function BΣ : L(Σ) → L(Σ) that associates each language vector in Σ with the
behaviour (language) it generates. Namely, for a language vector L over Σ we have
BΣ(L) = ∩i∈IP−1i Li. For two language vectors L and L′ in L(Σ), it is straightforward
to prove the following dual result for the composition operation.
Proposition 3.2.
BΣ(L) ∩ BΣ(L′) = BΣ(L ⊓ L′) and BΣ(L) ∪ BΣ(L′) ⊆ BΣ(L ⊔ L′)

□
Note that the union inclusion in the above proposition is strict in general. Based
on the above definitions of the composition and decomposition operations, a two way
transformation between the set of languages and the set of language vectors in any
process space is established as shown in the following figure.
[Figure 3.1: Behaviour transformation in process space. The composition map BΣ takes language vectors in L(Σ) to languages in L(Σ); the decomposition map PΣ takes languages back to language vectors.]
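The strictness of the union inclusion in Proposition 3.2 is easy to exhibit numerically. The brute-force sketch below (helper names are illustrative) builds two language vectors whose individual behaviours are empty, while their componentwise union composes to a non-empty language:

```python
from itertools import product

def proj(s, alphabet):
    """Natural projection: erase every event of s not in `alphabet`."""
    return "".join(c for c in s if c in alphabet)

def sync_product(vector, alphabets, max_len):
    """Brute-force length-bounded synchronous product B_Sigma."""
    events = sorted(set().union(*alphabets))
    result = set()
    for n in range(max_len + 1):
        for tup in product(events, repeat=n):
            w = "".join(tup)
            if all(proj(w, A) in L for A, L in zip(alphabets, vector)):
                result.add(w)
    return result

alphabets = [{"a", "x"}, {"b", "x"}]   # x is the shared event

# Each vector alone is incomposable: one component demands an x that
# the other component never produces.
L1 = [{"ax"}, {"b"}]
L2 = [{"a"}, {"xb"}]
join = [L1[0] | L2[0], L1[1] | L2[1]]  # componentwise union (join)

b1 = sync_product(L1, alphabets, 4)
b2 = sync_product(L2, alphabets, 4)
bj = sync_product(join, alphabets, 4)

print(sorted(b1 | b2))   # []
print(sorted(bj))        # ['ab', 'axb', 'ba']
```

Here BΣ(L1) ∪ BΣ(L2) = ∅ while BΣ(L1 ⊔ L2) = {ab, ba, axb}, so the inclusion is strict.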
In order to compare the behaviour of a language vector to a given language, one
of these two transformations has to be made. The two approaches, however, are
far from being similar. In the composition approach, the language vector of a given
system is converted to the language generated by the system. In this case, behavioural
comparison can be conducted for the system with respect to any given specification.
However, it is well-known that the composition operation is intractable with respect
to the number of components. The decomposition operation is not computationally
efficient either. However, in most cases the system is given as a set of components and
the specification is given as a language. Therefore, it is only required to decompose
the specification. Also, the state size of the specification is usually much less than
the size of the composite system making the decomposition of the specification more
efficient than the composition of the system components. Under these assumptions, it
would be more efficient to decompose the specification into a language vector and then
compare it componentwise with the language vector of the system. Unfortunately,
this simple solution will not work as the outcome of this comparison does not generally
reflect the relation between the behaviour of the system and the specification. This
is due to the following facts:
• In general, due to the possible existence of incomposable string vectors, different language vectors can generate the same behaviour. Therefore, the componentwise
comparison between language vectors cannot give precise information about the
relation between their corresponding behaviours.
• A language L is not in general equivalent to the composite behaviour of its vector
projection, P Σ(L). Therefore, the containment order on the set of languages
does not translate into an equivalent componentwise containment order on the
corresponding language vectors.
The source of the problems here is that neither the composition nor the decompo-
sition operation preserves behavioural information, or more precisely, the containment
order on the set of languages. In the composition operation information is lost due
to the synchronization constraints, whereas in the decomposition operation informa-
tion is lost because of the ambiguity associated with partial observations. To address
these issues, the framework for multiprocess discrete event systems will be refined by
extending the definition of multiprocess system and redefining the composition and
decomposition operations accordingly.
3.3 Compact language vectors
The composition operation enforces strict synchronization of shared events, that is,
shared events must be triggered simultaneously by the corresponding components of
the system. Under this rule, it is possible that certain strings in one component do
not synchronize with the other components of the system, and therefore these strings
do not contribute to the overall behaviour of the system. As a result, component-
wise comparison between language vectors cannot give precise information about the
relation between their corresponding behaviours. This can be resolved by defining a
class of language vectors in which the composition operation is order preserving.1
Let Σ be a process space. Then, for any language vectors L and L′ in L(Σ) we
have
L′ ⊑ L =⇒ BΣ(L′) ⊆ BΣ(L)
The reverse direction does not hold in general. Therefore, the composition operation
BΣ is not fully order preserving. To get a closer look into this situation, consider
the kernel of the map BΣ. This kernel defines an equivalence relation on the set of
language vectors in L(Σ) in which a set of language vectors are equivalent if they
generate the same language. Therefore, each coset of ker BΣ contains a set of lan-
guage vectors generating the same behaviour. It is straightforward to see that the
set of language vectors within each coset of kerBΣ is closed under componentwise
intersection as indicated by Proposition 3.2. Hence, there is a unique minimal ele-
ment (with respect to componentwise inclusion) in each coset that can generate the
language associated with the coset. This minimal element is formally characterized
as follows. A language vector L over the process space Σ is said to be compact if it
satisfies
(∀L′ ∈ L(Σ)) BΣ(L) = BΣ(L′) =⇒ L ⊑ L′
Basically, a compact language vector contains the minimal set of components that is needed to generate its language. Hence, L is compact if it is the minimal element in its coset in the partition ker BΣ. Consider, for instance, the process space Σ = {{a, x}, {b, x}}. In this process space, the language vector L = {{a, ax}, {b, xb}} is compact with BΣ(L) = {ab, ba, axb}. On the other hand, the language vector H = {{a, ax, xxa}, {b, xb, xbxbx}}, with BΣ(H) = {ab, ba, axb} = BΣ(L), is not.

1 Within the domain of languages and language vectors, the term order refers to the (componentwise) containment order.
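The compactness check in this example can be replayed by computing the componentwise projection of the composite behaviour, PBΣ, with a small brute-force sketch (helper names are illustrative, not the thesis's notation):

```python
from itertools import product

def proj(s, alphabet):
    """Natural projection: erase every event of s not in `alphabet`."""
    return "".join(c for c in s if c in alphabet)

def sync_product(vector, alphabets, max_len):
    """Brute-force length-bounded synchronous product B_Sigma."""
    events = sorted(set().union(*alphabets))
    result = set()
    for n in range(max_len + 1):
        for tup in product(events, repeat=n):
            w = "".join(tup)
            if all(proj(w, A) in L for A, L in zip(alphabets, vector)):
                result.add(w)
    return result

def pb(vector, alphabets, max_len=8):
    """PB_Sigma: compose the vector, then project the composite
    behaviour back onto each component alphabet."""
    behaviour = sync_product(vector, alphabets, max_len)
    return [{proj(w, A) for w in behaviour} for A in alphabets]

alphabets = [{"a", "x"}, {"b", "x"}]
L = [{"a", "ax"}, {"b", "xb"}]
H = [{"a", "ax", "xxa"}, {"b", "xb", "xbxbx"}]

print(pb(L, alphabets) == L)   # True  -> L is compact
print(pb(H, alphabets) == H)   # False -> H is not compact
print(pb(H, alphabets) == L)   # True  -> the compact closure of H is L
```

The strings xxa and xbxbx never synchronize (their shared-event counts cannot match), so they are erased by PBΣ, which is exactly why H fails to be compact.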
It can be verified easily that for a language L ∈ L(Σ), the language vector P Σ(L)
is always compact. Therefore, if the components of a given system are known to be
equal to the vector projection of the system behaviour then this system is compact.
However, multiprocess systems are usually specified by their components. In this situation, there is a possibility that certain parts of the components do not contribute to the system behaviour. This case may arise, for instance, due to (re)design
errors which are difficult to trace, particularly when the system is composed of a large
number of tightly coupled components. In such cases, compactness of the system is
not guaranteed. The following result provides another characterization for the com-
pactness property. To simplify notation we will write PBΣ to denote the composition
P Σ ◦BΣ : L(Σ) → L(Σ).
Proposition 3.3. L is compact if and only if L = PBΣ(L)
Proof. (⇒) Let L = {Li | i ∈ I} be a language vector. Then we have

Pi(BΣ(L)) = Pi(∩j∈I P−1j Lj) = Li ∩ Pi(∩j∈I−{i} P−1j Lj) ⊆ Li

This shows that PBΣ(L) ⊑ L. Also, in general R ⊆ BΣ(PΣ(R)) for any language R. Therefore BΣ(L) ⊆ BΣ(PBΣ(L)). However, given that PBΣ(L) ⊑ L, we can conclude that BΣ(PBΣ(L)) = BΣ(L). Then, if L is compact, it must be that L ⊑ PBΣ(L). Hence, L = PBΣ(L).
(⇐) Assume that L = PBΣ(L). In general, for any L′ ∈ L(Σ) with BΣ(L) = BΣ(L′), we have PBΣ(L) = PBΣ(L′) ⊑ L′. Therefore L ⊑ L′, and hence L is compact.
Therefore, a language vector is compact if and only if its components are exactly
the vector projection of the composite behaviour of the system. Based on this result,
the compact language vector that can generate the same behaviour as a language
vector L is PBΣ(L). Hence, we can define a closure operator CΣ : L(Σ) → L(Σ)
that associates each language vector in L(Σ) with the compact language vector that
generates the same behaviour. That is, CΣ(L) = PBΣ(L). Note that redundant
information in language vectors arises from the set of incomposable string vectors, which is directly related to the shared behaviour of the system. Therefore, a more
efficient procedure can be developed for the computation of CΣ(L) by tracing only
the shared behaviour of the system as follows. For i ∈ I, define the language vector L^i, derived from the language vector L, as follows:

(L^i)i = Σi∗ and (∀j ∈ I − {i}) (L^i)j = PsLj

where Ps : Σ∗ → Σs∗ is the natural projection onto the set of shared events in the process space. Hence, L^i is constructed by replacing the ith component of L by Σi∗, and replacing all other components by the corresponding projection onto the shared
events. The following proposition defines another way to compute the map CΣ. First
the following lemma will be needed.
Lemma 3.1. Let Σ be a process space and let Σo be an alphabet such that Σs ⊆ Σo ⊆ Σ, and let Po : Σ∗ → Σ∗o be the associated natural projection. Then,
(∀L ∈ L(Σ)) Po(BΣ(L)) = BΣ(PoL)
□
The above lemma is a simple extension of a result in [Won01]. The proof of this
extension is direct based on the associativity of the synchronous product.
Proposition 3.4.
CΣ(L) = {Li ∩ Pi(BΣ(L^i)) | i ∈ I}
Proof. Write Pj,s : Σ∗ → (Σs ∪ Σj)∗ for the natural projection onto shared events and events from the jth component.

(CΣ(L))j = Pj(BΣ(L))
 = Pj(∩i∈I P−1i Li)
 = Lj ∩ Pj(∩i≠j P−1i Li)
 = Lj ∩ Pj(∩i≠j P−1i Li ∩ P−1j Σj∗)
 = Lj ∩ Pj Pj,s(∩i≠j P−1i Li ∩ P−1j Σj∗)        (Pj = Pj ◦ Pj,s)
 = Lj ∩ Pj(∩i∈I−{j} P−1i Pj,sLi ∩ P−1j Pj,sΣj∗)  (Lemma 3.1, Σs ⊆ Σj,s)
 = Lj ∩ Pj(∩i∈I−{j} P−1i PsLi ∩ P−1j Σj∗)
 = Lj ∩ Pj(BΣ(L^j))
Based on the above result, the computation of CΣ depends only on the shared
behaviour of the system components. This result also confirms that asynchronous
language vectors (containing no shared events) are always compact.
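Proposition 3.4 can be confirmed on the earlier two-component example: for each i, relax component i to Σi∗, replace every other component by its projection onto the shared events, and intersect Li with the ith projection of the resulting composite behaviour. A length-bounded brute-force sketch (function names are illustrative):

```python
from itertools import product

def proj(s, alphabet):
    """Natural projection: erase every event of s not in `alphabet`."""
    return "".join(c for c in s if c in alphabet)

def compact_closure_shared(vector, alphabets, shared, max_len=7):
    """Compute C_Sigma(L) componentwise as L_i intersected with the ith
    projection of B_Sigma(L^i), where L^i keeps component i
    unconstrained and projects every other component onto the shared
    events (Proposition 3.4)."""
    events = sorted(set().union(*alphabets))
    words = ["".join(w) for n in range(max_len + 1)
             for w in product(events, repeat=n)]
    closure = []
    for i, (Li, Ai) in enumerate(zip(vector, alphabets)):
        others = [(Aj, {proj(t, shared) for t in Lj})
                  for j, (Aj, Lj) in enumerate(zip(alphabets, vector))
                  if j != i]
        reachable = {proj(w, Ai) for w in words
                     if all(proj(w, Aj) in PsLj for Aj, PsLj in others)}
        closure.append(Li & reachable)
    return closure

alphabets = [{"a", "x"}, {"b", "x"}]
H = [{"a", "ax", "xxa"}, {"b", "xb", "xbxbx"}]

print([sorted(c) for c in compact_closure_shared(H, alphabets, {"x"})])
# [['a', 'ax'], ['b', 'xb']]
```

The result agrees with PBΣ(H) computed directly from the full product; note the sketch still enumerates words naively, so it only confirms the formula rather than demonstrating its efficiency advantage.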
Proposition 3.5. Let L and S be two compact language vectors. Then
BΣ(L) ⊆ BΣ(S) ⇐⇒ L ⊑ S
Proof. The (⇐) direction is trivial and holds generally for any pair of language vectors, not necessarily compact ones. The (⇒) direction is direct, based on the property Li = Pi(BΣ(L)) for all i ∈ I when L is a compact language vector, and the fact that the map Pi is monotonic.
This says that the composition operation restricted to the set of compact language
vectors is totally order preserving. Because of the importance of this property to
basically all forms of behavioural analysis discussed in this thesis, we will be dealing
mainly with compact language vectors here. The set of compact language vectors will
be denoted Lc(Σ) for the process space Σ.
3.4 Language compensation in process space
The decomposition of a language L is a language vector consisting of the projection of
L into each component of the process space. However, in general, because of the infor-
mation lost through the projection operation, languages cannot be recovered directly
from their vector projections using the composition operation. In this section, this
recovery is made possible by considering a mechanism to compensate the information
lost in the decomposition operation.
Let Σ be a process space with index set I. As indicated earlier, language decomposition is represented by the vector projection map PΣ : L(Σ) → L(Σ). Similar to the
composition map, BΣ, this map is neither injective nor surjective. In general we have
(∀L ∈ L(Σ)) L ⊆ BΣ(P Σ(L))
The other direction does not hold in general, indicating that the information lost
through the projection operation cannot be recovered using the composition opera-
tion. To have a closer look into this situation, consider the kernel of the decomposition
operation P Σ. This kernel defines an equivalence relation on the set of languages L(Σ)
based on the projected language vectors, namely, two languages in L(Σ) are equiv-
alent with respect to ker P Σ if they have the same vector projection. Hence, each
coset of ker P Σ consists of a set of languages that have the same vector projection in
Σ. It is straightforward to see that the set of languages within each coset of kerP Σ
is closed under union (but not under intersection in general), simply by the fact that
the projection operation distributes over union. Hence, each coset contains a unique
maximal element. This maximal element is covered below in more detail.
To simplify notation we will write BP Σ to denote the composition BΣ ◦ P Σ :
L(Σ) → L(Σ). In the process space Σ, a language L ∈ L(Σ) is said to be Σ-
decomposable (or simply decomposable if no confusion arises) if
L = BP Σ(L)
Basically, a decomposable language L is one that can be generated directly from
its vector projection P Σ(L) using the composition operation. The following proposi-
tion confirms that the set of decomposable languages is exactly the set of supremal
elements (w.r.t. inclusion) in the cosets of ker PΣ.
Proposition 3.6. L is Σ-decomposable if and only if
(∀L′ ∈ L(Σ)) P Σ(L′) = P Σ(L) =⇒ L′ ⊆ L
Proof. (⇒) Assume that L is Σ-decomposable. Then L = BP Σ(L). Let L′ be
a language such that P Σ(L) = P Σ(L′). In general L′ ⊆ BP Σ(L′) = BP Σ(L).
Therefore L′ ⊆ L.
(⇐) Let L be a language satisfying the above condition. The language vector P Σ(L)
is always compact. Therefore by Proposition 3.3 we have P Σ(L) = P Σ(BΣ(P Σ(L))).
Then by the above condition we get BΣ(P Σ(L)) ⊆ L. However, in general we have
L ⊆ BΣ(P Σ(L)). Therefore, it must be that L = BΣ(P Σ(L)). Hence L is Σ-
decomposable.
Clearly, the set of Σ-decomposable languages is exactly the image of the map
BΣ. As indicated earlier, any Σ-decomposable language can always be recovered from its decomposition using the composition operation. However, in many
behavioural analysis situations we need to deal with languages that are not Σ-
decomposable. Therefore, it may be useful to extend the recoverability feature of
the class of Σ-decomposable languages to the set of all languages in L(Σ). This can
be done by compensating for the information lost in the projection operation, that is, by adding necessary information to the composite behaviour BPΣ(L) such that the overall
behaviour of the structure is equal to L. Clearly, for any language L such a compensator depends on L (it is a function of L) and must contain L. The set of Σ-compensators
for L, denoted CΣ(L), is defined as follows
CΣ(L) = {K ∈ L(Σ) | L = BP Σ(L) ∩K}
The set CΣ(L) is not empty as it contains L. It is easy to verify that the set CΣ(L) is closed under union and intersection, and hence has a supremal and an infimal element. Clearly, the infimal element of CΣ(L) is L. We will write C̲Σ(L) to denote the infimal element of the set CΣ(L) and C̄Σ(L) to denote its supremal element.
Proposition 3.7.
C̄Σ(L) = L ∪ BPΣ(L)c
Proof. Clearly L = BPΣ(L) ∩ (L ∪ BPΣ(L)c), so L ∪ BPΣ(L)c is a compensator for L. Now, assume that K is a compensator for L such that L ∪ BPΣ(L)c ⊆ K. Then we can write K = L ∪ BPΣ(L)c ∪ H for some H ∈ L(Σ) where H ∩ (L ∪ BPΣ(L)c) = ∅, and therefore H ∩ L = ∅ and H ⊆ BPΣ(L). Then we have L = BPΣ(L) ∩ (L ∪ BPΣ(L)c ∪ H). By simple manipulation of this expression we get L = L ∪ H, and therefore H ⊆ L. It is already established that H ∩ L = ∅. Then it must be that H = ∅. Therefore, L ∪ BPΣ(L)c is the supremal compensator for L.
It is easy to see that any language K such that L ⊆ K ⊆ C̄Σ(L) is a Σ-compensator for L. Hence, the set of compensators of L, CΣ(L), is totally defined by its supremal element, C̄Σ(L). Note that a compensator K for L may be blocking in
the sense that the intersection of K with BP Σ(L) may produce a string that cannot
be completed to a string in L. The problem of blocking in process space will be
investigated in Chapter 5.
It is worthwhile to examine the limit cases of the supremal compensator for a given language L ∈ L(Σ). One limit corresponds to the case when C̄Σ(L) = Σ∗. Based on the above proposition, it must be that L = BPΣ(L), that is, L is a Σ-decomposable language. The other limit corresponds to the case when C̄Σ(L) = L. Based on the above proposition, this translates to BPΣ(L) = Σ∗. Roughly speaking, the projected
components of L in this case do not carry any information about L. Such languages
will be referred to as Σ-indecomposable languages. It is easy to see that the set of
Σ-indecomposable languages in L(Σ) is exactly the set of languages equivalent to Σ∗
in ker BP Σ.
Example 3.1. Let Σ = {{a, b, x}, {c, d, x}} be a process space. The language
L = (ax, ac, ca, xc) is Σ-decomposable and therefore can be compensated by Σ∗.
Clearly, in this case Σ∗ is the supremal compensator for L. Note that the supremal compensator for a given language is not necessarily optimal with respect to state size. For instance, the language H = (ac, ad) can be compensated by the language K = (a, b)∗(c, d)∗, where A(K) has two states. On the other hand, the automaton of its supremal compensator, C̄Σ(H) = Σ∗ − (ca, da), has four states, as shown below.
[Figure: automata of H, of the two-state compensator K ∈ CΣ(H), and of the four-state supremal compensator C̄Σ(H).]
The language F shown below is an example of a Σ-indecomposable language. The
only compensator for F is the language F itself, as we have P Σ(F ) = P Σ(Σ∗) =
{(a, b, x)∗, (c, d, x)∗}.
[Figure: automaton of the Σ-indecomposable language F, for which F = C̄Σ(F).]
□
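The compensator claims for H in Example 3.1 can be checked within a bounded universe of strings. In the sketch below (helper names are illustrative), Σ∗ is truncated to strings of length at most 3, which is enough to cover H and its projections:

```python
import re
from itertools import product

def proj(s, alphabet):
    """Natural projection: erase every event of s not in `alphabet`."""
    return "".join(c for c in s if c in alphabet)

def sync_product(vector, alphabets, universe):
    """B_Sigma restricted to a finite universe of candidate strings."""
    return {w for w in universe
            if all(proj(w, A) in L for A, L in zip(alphabets, vector))}

alphabets = [{"a", "b", "x"}, {"c", "d", "x"}]
events = sorted(set().union(*alphabets))
universe = {"".join(w) for n in range(4) for w in product(events, repeat=n)}

H = {"ac", "ad"}
PH = [{proj(s, A) for s in H} for A in alphabets]  # vector projection of H
BPH = sync_product(PH, alphabets, universe)        # BP(H), truncated
print(sorted(BPH))                                 # ['ac', 'ad', 'ca', 'da']

# K = (a|b)*(c|d)* is a two-state compensator: BP(H) ∩ K = H
K = {w for w in universe if re.fullmatch(r"[ab]*[cd]*", w)}
assert BPH & K == H

# The supremal compensator H ∪ BP(H)^c equals Sigma* − {ca, da} here
sup_comp = H | (universe - BPH)
assert BPH & sup_comp == H
assert sup_comp == universe - {"ca", "da"}
```

Both K and the supremal compensator recover H when intersected with BPΣ(H); K is smaller as an automaton even though it is not the supremal element of CΣ(H).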
Using compensation, any language L can be decomposed into its vector projection,
P Σ(L) and a compensator K ∈ CΣ(L). The language L can then be recovered from
the pair (P Σ(L), K) via the intersection of K with the composition of P Σ(L). In order
to formalize this recovery procedure, an extended form of the composition operation
is needed.
3.5 Interacting discrete event systems
In this section the notion of vector languages is extended to a new structure that
can accommodate a decomposition procedure based on language compensation as
presented in the previous section. The composition and vector projection operations
will be extended accordingly.
Let Σ be an alphabet vector with index set I. An interacting discrete event
system (IDES) over Σ is a pair L = (L, K) where L is a language vector in L(Σ)
and K is a language in L(Σ). The language K will be referred to as the interaction
specification language or simply the interaction language of the IDES L. We will use
calligraphic letters to denote IDES structures. Also we will write Li to denote the
ith component of L. The language generated by L is given by
BΣ(L) = ‖L ∩ K
Therefore, the IDES structure consists of a set of components, represented by the
language vector L, running concurrently, and a language K that synchronizes with
the composite behaviour of these components. Clearly, in this case K is a compensator
for the language BΣ(L). The set of IDES in the process space Σ will be denoted I(Σ).
Based on the definition of the behaviour of IDES structures, a language vector L
is behaviourally equivalent to an IDES L = (L, Σ∗). Therefore, the set of language
vectors L(Σ) has an isomorphic correspondence with a subset of I(Σ) containing those
IDES with interaction language K = Σ∗. Based on this correspondence, language
vectors may be represented either as an IDES structure or simply by the original
notation as a set of languages. Also, as shown above, the function BΣ is overloaded to denote the composition operation for IDES as well. This operation will be
referred to as the generalized composition operation. Similar to the case with language
vectors, BΣ is a map from I(Σ) to L(Σ).
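A minimal sketch of the generalized composition BΣ(L) = ‖L ∩ K for finite, length-bounded languages follows (names are illustrative; K is given here as an explicit finite set rather than a general language):

```python
from itertools import product

def proj(s, alphabet):
    """Natural projection: erase every event of s not in `alphabet`."""
    return "".join(c for c in s if c in alphabet)

def sync_product(vector, alphabets, max_len):
    """Brute-force length-bounded synchronous product ||L."""
    events = sorted(set().union(*alphabets))
    result = set()
    for n in range(max_len + 1):
        for tup in product(events, repeat=n):
            w = "".join(tup)
            if all(proj(w, A) in L for A, L in zip(alphabets, vector)):
                result.add(w)
    return result

def ides_behaviour(ides, alphabets, max_len=6):
    """Generalized composition: intersect the synchronous product of
    the component vector with the interaction language K."""
    vector, K = ides
    return sync_product(vector, alphabets, max_len) & K

alphabets = [{"a", "x"}, {"b", "x"}]
vector = [{"a", "ax"}, {"b", "xb"}]   # ||vector = {ab, ba, axb}
K = {"ab", "axb", "xab"}              # interaction: a must precede b

print(sorted(ides_behaviour((vector, K), alphabets)))  # ['ab', 'axb']
```

With K = Σ∗ (here, any superset of the synchronous product) the IDES reduces to a plain language vector, matching the correspondence described above.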
For two IDES structures L′ = (L′, K ′) and L = (L, K) over the process space Σ,
L′ is said to be a substructure of L if L′ ⊑ L and K′ ⊆ K. To simplify the notation, we will write L′ ⊑ L or L ⊒ L′ when L′ is a substructure of L. Also, the componentwise union of L′ and L is denoted L′ ⊔ L and is set to be equal to (L′ ⊔ L, K′ ∪ K). Similarly, the componentwise intersection of L′ and L is the IDES L′ ⊓ L = (L′ ⊓ L, K′ ∩ K).
Similar to the case of language vectors, the following result can be easily proved.
Proposition 3.8.
BΣ(L) ∩ BΣ(L′) = BΣ(L ⊓ L′) and BΣ(L) ∪ BΣ(L′) ⊆ BΣ(L ⊔ L′)

□
As shown earlier, any language L ∈ L(Σ) can be transformed into an IDES con-
taining its vector projection and one of its compensators. However, L can have more
than one compensator in general. Therefore, it can be decomposed into more than
one IDES each generating the same behaviour. The set of IDES that can gener-
ate the language L from its vector projection will have the form (P Σ(L), K) where
K ∈ CΣ(L). This set has a maximal element (with respect to the relation ⊑) denoted P̄Σ(L) = (PΣ(L), C̄Σ(L)) and a minimal element denoted P̲Σ(L) = (PΣ(L), C̲Σ(L)). This set will be associated with the language L through the generalized decomposition operation PΣ : L(Σ) → Pwr(I(Σ)), defined as follows:

(∀L ∈ L(Σ)) PΣ(L) = {L ∈ I(Σ) | P̲Σ(L) ⊑ L ⊑ P̄Σ(L)}
The generalized decomposition operation associates a language L with the set of IDES
that contains its vector projection and a compensator for L. Therefore, any IDES in
PΣ(L) is a valid decomposition for L.
Note that the original setting for composition and decomposition in process space can be recovered from the above extended setting by removing the interaction language from the IDES model, and consequently from the generalized composition and decomposition operations. The extended process space setting for composition and decomposition is shown in the following figure.
[Figure 3.2: Behaviour transformation in process space. The generalized composition BΣ maps interacting DES in I(Σ) to languages in L(Σ); the generalized decomposition PΣ maps languages to interacting DES.]
The IDES setting allows the transformation of any language into a structure that
generates the same behaviour. This makes the decomposition procedure reversible
for all languages. As should be expected, and as will be confirmed later, this provides a better foundation for behavioural analysis of multiprocess systems. In addition, the
above setting introduces a new important dimension to the modelling of multiprocess
discrete event systems that can capture architectural information about the system.
In the next section, the basic aspects of modelling multiprocess discrete event systems are outlined. The relevance of the interaction specification to the modelling of multiprocess systems will become clearer in the light of these aspects.
3.6 Modelling multiprocess systems
In the modelling process, physical properties of the system are transformed into a
mathematical model that captures essential details about its behaviour and function.
The extent and the nature of such details in the system model depend mainly on the
specific interest for studying this system. This is valid for simple systems as well as
complex multiprocess systems. However, there are certain factors that are specific to
the structure and behaviour of multiprocess systems. In this section, we outline the
basic factors that define the behaviour of multiprocess discrete event systems.
System components
Complex discrete event systems are composed of a set of basic components. There
is no strict formal definition of what constitutes a basic component. However, in
practice the basic components of the system are usually recognizable from the system
description and can be refined to fit the scope of analysis. It is usually assumed
that the system components are relatively independent in the sense that they can act
autonomously.
System environment
Multiprocess environments enforce certain rules for the composition of the system
components. These rules are usually independent of the internal description of the
underlying components. Synchronous composition is a common composition scheme
in discrete event systems in which shared events must be triggered simultaneously
by the corresponding components. Other composition rules can be formulated for
different environments. Composition rules are usually assumed fixed for a given envi-
ronment. In general, composition rules define the way processes interact in the least
restrictive form, usually referred to as the parallel or concurrent composition.
Interaction constraints
In practice, the system components may be configured to interact in a variety of ways
in order to conform to certain organizational schemes. Such configuration usually
takes the form of abstract specification that, in effect, constrains the overall behaviour
of the system components. In contrast with the composition rules, these specifications are specific to the system at hand. Initial interaction constraints may be given as a
part of the system description. These interaction constraints are usually accessible
(observable) and flexible to change (controllable) in order to facilitate systems testing,
maintenance and updates.
[Figure 3.3: Multiprocess Discrete Event System. A set of components within a system environment, together with an interaction specification constraining them.]
A modelling framework for multiprocess systems has to consider the above dimen-
sions in one way or another. In the most general form, all of these dimensions can
be represented as the elementary types (variables) of the system model. The IDES
framework provides a mechanism to include the above dimensions independently in
the system model. The system components are represented by a language vector and
their interaction constraint is represented by the interaction language. In the IDES
setting, the composition rule is fixed to the synchronous product operation. Nev-
ertheless, the composition rule remains, conceptually, an independent factor in the
IDES model setting. In a more general setting, the composition operation can be
redefined or even considered a variable to accommodate other environments.
3.7 Abstract architectural specification
In multiprocess environments, the interaction between the system components has
the effect of restricting the composite behaviour of these components. Therefore, the
interaction language can be regarded as a specification for the system components.
This view is captured in the IDES model and the generalized composition operation as
introduced earlier. In this section, we investigate the use of the interaction language
as an abstract architectural specification.
Let Σ be a process space and let K be a language in L(Σ). The language K defines
a class of IDES in I(Σ) in which K is the interaction language, that is, each IDES in
this class has the form (L, K), where L ∈ L(Σ). This class will be referred to as the
K-class in Σ and will be denoted I(Σ, K). That is, I(Σ, K) = {(L, K) | L ∈ L(Σ)}. The behaviour of each IDES in this class is the intersection of the synchronous product
of the corresponding language vector with K. Therefore, the behaviour of each IDES
in this class is contained in K. The language vector P Σ(K) will be referred to as the
interaction domain of the set I(Σ, K).
47
Proposition 3.9.
(∀s E PΣ(Σ∗)) BΣ(s, K) ≠ ∅ =⇒ s E PΣ(K)
Proof. Let s E PΣ(Σ∗) be a string vector in Σ. Clearly, if BΣ(s) ∩ K ≠ ∅ then BΣ(s) ≠ ∅ and consequently Pi(BΣ(s)) = {si}.

BΣ(s) ∩ K ≠ ∅ =⇒ (∀i ∈ I) BΣ(s) ∩ P−1i PiK ≠ ∅        (K ⊆ P−1i PiK)
 =⇒ (∀i ∈ I) Pi(BΣ(s) ∩ P−1i PiK) ≠ ∅
 =⇒ (∀i ∈ I) Pi(BΣ(s)) ∩ Pi(P−1i PiK) ≠ ∅
 =⇒ (∀i ∈ I) {si} ∩ Pi(K) ≠ ∅
 =⇒ s E PΣ(K)
Therefore, only string vectors contained (componentwise) in the interaction domain of the K-class I(Σ, K) can contribute a non-empty behaviour to an IDES in this class.
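Proposition 3.9 can be illustrated concretely: a string vector whose composition meets K lies componentwise in the interaction domain PΣ(K), and, contrapositively, a vector outside the domain contributes nothing. A sketch (helper names are illustrative; string vectors are represented as vectors of singleton languages):

```python
from itertools import product

def proj(s, alphabet):
    """Natural projection: erase every event of s not in `alphabet`."""
    return "".join(c for c in s if c in alphabet)

def sync_product(vector, alphabets, max_len):
    """Brute-force length-bounded synchronous product B_Sigma."""
    events = sorted(set().union(*alphabets))
    result = set()
    for n in range(max_len + 1):
        for tup in product(events, repeat=n):
            w = "".join(tup)
            if all(proj(w, A) in L for A, L in zip(alphabets, vector)):
                result.add(w)
    return result

alphabets = [{"a", "x"}, {"b", "x"}]
K = {"ab", "axb"}
PK = [{proj(w, A) for w in K} for A in alphabets]   # interaction domain

# s composes inside K, and indeed s_i ∈ P_i(K) for each component i
s = [{"ax"}, {"xb"}]
assert sync_product(s, alphabets, 4) & K == {"axb"}
assert all(si <= PKi for si, PKi in zip(s, PK))

# u lies outside the interaction domain (x ∉ P_1(K)), and accordingly
# contributes nothing: B(u) ∩ K is empty
u = [{"x"}, {"xb"}]
assert not u[0] <= PK[0]
assert sync_product(u, alphabets, 4) & K == set()

print([sorted(c) for c in PK])   # [['a', 'ax'], ['b', 'xb']]
```

The converse of the proposition does not hold: membership in the interaction domain is necessary for a non-empty contribution, not sufficient.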
In many practical situations, the system architecture, as defined by the interaction between the system components, usually does not target the internal structure of the components but rather their external arrangement. Serial composition, for
instance, is a well-known form of interaction in which the system components run se-
quentially in a way that is independent of the internal structure of these components.
Using a form of abstract language to describe architectural specification can better
capture typical situations in multiprocess environments. In addition, using a simple
interaction specification would possibly simplify the analysis in this environment.
Let Σ be a process space with index set I. Define the index equivalence relation
generated by Σ, denoted EΣ, as follows
(σ, σ′) ∈ EΣ ⇐⇒ (∀i ∈ I) σ ∈ Σi ⇔ σ′ ∈ Σi
That is, two events in Σ are equivalent with respect to EΣ if they are shared (triggered)
by exactly the same set of components. Consequently, each coset of EΣ is associated
with a subset of I where the corresponding components share exactly the events of
the coset. The coset of EΣ that contains the event σ will be denoted [σ]Σ.
The set of subsets of I that are associated with the cosets of the index equivalence
relation, EΣ, will be denoted IΣ. Each element in the set IΣ can be viewed as an
abstract event that corresponds to a transition made simultaneously and collectively
by the corresponding set of components. Define the map fΣ : α(Σ) → IΣ as follows
(recall that IΣ ⊆ Pwr(I)),
(∀σ ∈ Σ) fΣ(σ) = {i ∈ I | σ ∈ Σi}
The map fΣ associates every event in Σ with the set of components in the process
space Σ that have to coordinate in triggering this event. It is clear that ker fΣ = EΣ.
The function fΣ is extended over languages in the usual way.
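The map fΣ and the cosets of EΣ are easy to compute for a concrete process space. The sketch below (names ours) represents a process space as a dictionary from component indices to alphabets, sends each event to the set of components that trigger it, and groups events with equal image, so that ker fΣ = EΣ falls out directly.

```python
from collections import defaultdict

def f_sigma(process_space):
    """The map f_Sigma: send each event to the set of components that trigger it."""
    events = set().union(*process_space.values())
    return {e: frozenset(i for i, alpha in process_space.items() if e in alpha)
            for e in events}

def cosets(process_space):
    """Cosets of the index equivalence E_Sigma = ker f_Sigma."""
    groups = defaultdict(set)
    for e, idx in f_sigma(process_space).items():
        groups[idx].add(e)
    return dict(groups)

# two machines sharing the event 'x' (a fragment of Example 3.2)
space = {1: {'a1', 'b1', 'x'}, 2: {'a2', 'b2', 'x'}}
print(cosets(space))
# events private to one machine form one coset each; {'x'} is the shared coset
```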
Languages over the set IΣ can be viewed as a form of abstract behaviour that
does not distinguish between the events in the system components while recognizing
the boundaries of the system components and their synchronization constraints. Lan-
guages over IΣ will be referred to as abstract layouts. The corresponding languages
over the alphabet set Σ will be referred to as layouts . Formally, a layout in the process
space Σ is a language K ∈ L(Σ) that satisfies
(∀(σ, σ′) ∈ ker fΣ)(∀u, v ∈ Σ∗) uσv ∈ K ⇐⇒ uσ′v ∈ K
It is easy to see that the above condition is equivalent to having K = f⁻¹Σ(fΣ(K)).
The composition map f⁻¹Σ ◦ fΣ : L(Σ) → L(Σ) will be referred to as the natural
abstraction map and will be denoted FΣ.
Based on the above definition, the languages ∅, ε, and Σ∗ are layouts. The set
of layouts over Σ will be denoted Y(Σ). Clearly, there is a bijective correspondence
between the set of layouts and the set of languages over IΣ representing the set of
abstract layouts. Therefore, we will not distinguish between these two sets as long as
the interpretation is clear from the context. Based on this bijective correspondence,
it is straightforward to show that the set of layouts is closed under any operation
that respects the partition EΣ. This includes union, intersection, catenation, and
complement.
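The layout condition K = FΣ(K) can be checked directly on finite languages: close K under replacing each event by any event in its EΣ-coset and compare. A small sketch, with names and the toy coset table of our own choosing:

```python
from itertools import product

def natural_abstraction(K, coset_of):
    """F_Sigma(K) = f_Sigma^{-1}(f_Sigma(K)): close K under swapping each
    event for any event in the same coset of E_Sigma."""
    out = set()
    for s in K:
        for combo in product(*[sorted(coset_of[c]) for c in s]):
            out.add(''.join(combo))
    return out

def is_layout(K, coset_of):
    """K is a layout iff it is a fixed point of the natural abstraction map."""
    return natural_abstraction(K, coset_of) == K

# 'a' and 'b' are private to one component, 'c' to another, 'x' is shared
coset_of = {'a': {'a', 'b'}, 'b': {'a', 'b'}, 'c': {'c'}, 'x': {'x'}}
print(is_layout({'ax', 'bx'}, coset_of))   # True: closed under a <-> b
print(is_layout({'ax'}, coset_of))         # False: 'bx' is missing
```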
As mentioned earlier, architectural specifications do not usually target the inter-
nal structure of the components but rather their external arrangement. Therefore,
architectural specifications (ideally) should not impose any restriction on the internal
dynamics of the system components. If this applies to any set of components (lan-
guage vector) in the process space, then we say that such architectural specification
is universal. The parallel composition scheme with K = Σ∗ is an example of such a
universal architectural specification.
Intuitively, the universality property can be achieved if the projection of the archi-
tectural specification is equal to the supremal language vector in the corresponding
process space. That is, an architectural specification K is universal if it satisfies
P Σ(K) = P Σ(Σ∗)
namely, K is Σ-indecomposable. It is easy to see, based on Proposition 3.8, that under
this condition the architectural specification does not impose any restriction on the
internal structure of any set of components in the process space. Roughly speaking,
this means that the effect of a universal architectural specification is independent of
the system components. In general, any interaction specification K can be described
as universal with respect to its domain, PΣ(K), in the sense that the interaction
specification K does not impose internal restrictions on the components of any
language vector L within its domain (i.e., satisfying L ⊑ PΣ(K)).
The following convention will be used for graphical representation of IDES. Sys-
tem components will be drawn inside a rounded box while the interaction language
(automaton) will be drawn inside a double box. Labels for components and inter-
50
action languages may be added above the boxes. The interaction language will be
shown connected to each component. In the case when the interaction language is a
layout, abstract events can be used. They are drawn as follows:
• Events that are exclusive to a certain component (not shared with any other
component) will be represented by an abstract event shown in a small box at
the left corner of the component box.
• Shared events will be represented by abstract events shown inside a box in the
hyper edge joining the corresponding components.
The following example illustrates the use of layouts to express architectural specifi-
cations, and demonstrates the above graphical conventions for representing IDES.
Example 3.2. Let Σ = {{ai, bi, ci, di, x} | i ∈ [1, 2, 3]} be a process space consisting
of three components. The index equivalence relation generated by this process space is
EΣ = {{a1, b1, c1, d1}, {a2, b2, c2, d2}, {a3, b3, c3, d3}, {x}}. The corresponding abstract
events will be denoted U , V , W and X respectively. The natural abstraction map for
this process space is defined as follows
FΣ(ai) = FΣ(bi) = FΣ(ci) = FΣ(di) = {ai, bi, ci, di} (i ∈ [1, 2, 3]),
FΣ(x) = {x}
Now, consider a system consisting of three machines in this process space. A synchro-
nization mechanism is implemented to allow flexible arrangements of the machines.
Each arrangement can be expressed as a layout K representing the interaction spec-
ification of the system. The overall system is shown in Figure 3.4 for the layout
K = {U, V,W,X}∗ which allows the components to run in parallel.
The shared event x is used in this system to detect the initiation and the termination
of each machine. Such a mechanism is easy to implement and is common in practical
systems. Figure 3.5 shows another setting of the system in which the interaction
language is set so that the first two machines run in parallel, followed by the third
machine, in a continuous cycle.
Figure 3.4: The IDES structure of three machines in parallel
Figure 3.5: The three machines in another configuration
The overall system behaviour is outlined in the above figure. The detailed state
machines for L1‖L2 and L3 are omitted for simplification. �
As seen in the previous example, architectural specification can be expressed using
abstract events, as defined by the natural abstraction map. Recall that an abstract
event corresponds to a transition made collectively by a specific set of components.
Therefore, specifications built over the set of abstract events can define the way the
components are arranged irrespective of their internal details. Such specifications
can be used to define the same arrangement for any set of components in the process
space.
Consider the process space Σ = {Σ1, Σ2} with the corresponding abstract events
U = Σ1 − Σ2, V = Σ2 − Σ1 and X = Σ1 ∩ Σ2. In this process space, the language
U∗ simply means that the first component can operate in this context. Similarly, the
language V + can be interpreted as the second component must operate in the given
context. Also, shared events can be used to define the context of operation for the
components. This can be done by controlling the initiation and termination of the
components using shared events as shown in the previous example.
Many standard language operations can be simulated using layouts as the corre-
sponding interaction specifications. In the following, some standard binary operations
and their corresponding interaction specifications are presented. These binary oper-
ations are defined over the process space Σ = {Σ1, Σ2} consisting of two components
where the corresponding abstract events are denoted U,X, V as given above.
Serial composition
Serial composition can take different forms depending on the system and its environ-
ment. In general, in this form of interaction, components run sequentially one after
the other. In asynchronous environments, for instance, serial composition is represented
by the catenation operation, which can be simulated by the layout K = U∗V∗.
Note that this layout is a universal architectural specification. Other forms of serial
composition can be defined for synchronous environments. For example, the layout
K = U∗X∗V ∗ can simulate a form of synchronous composition where one process
starts then synchronizes with a second process (if possible) and then the second pro-
cess continues and exits.
Parallel Composition
Parallel composition can also take different forms depending on the system and its
environment. In general, this composition corresponds to the least restrictive inter-
action (no restriction at all) between the system components. This interaction is
represented simply by the synchronous product operation which can be simulated by
the layout K = {U,X, V }∗. In the case when there are no shared events, or when
only asynchronous interaction is allowed, the term X can be removed from the above
setting. The corresponding operation becomes that of the shuffle (asynchronous prod-
uct) operation. At the other limit, namely when there are only shared events or when
only synchronous interaction is allowed, then the terms U and V can be removed.
Clearly, the corresponding operation becomes that of intersection.
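For two components with no shared events, the layouts above can be checked on finite languages: the layout {U, V}∗ admits every interleaving, while U∗V∗ keeps only the strings in which all component-1 events precede all component-2 events. A small sketch with helper names and alphabets of our own choosing:

```python
def proj(s, alpha):
    """Natural projection: erase events outside alpha."""
    return ''.join(c for c in s if c in alpha)

def merge(s, t, a1, a2):
    """All interleavings of s and t that agree on shared events."""
    if not s and not t:
        return {''}
    out = set()
    if s and s[0] not in a2:
        out |= {s[0] + w for w in merge(s[1:], t, a1, a2)}
    if t and t[0] not in a1:
        out |= {t[0] + w for w in merge(s, t[1:], a1, a2)}
    if s and t and s[0] == t[0] and s[0] in a1 & a2:
        out |= {s[0] + w for w in merge(s[1:], t[1:], a1, a2)}
    return out

# no shared events: U corresponds to {'a','b'}, V to {'c','d'}
a1, a2 = {'a', 'b'}, {'c', 'd'}
shuffle = merge('ab', 'cd', a1, a2)     # layout {U, V}* : full asynchronous product
catenation = {w for w in shuffle
              if w == proj(w, a1) + proj(w, a2)}   # layout U*V*
print(len(shuffle), catenation)   # 6 interleavings; catenation keeps only 'abcd'
```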
Refinement
Refinement is a well-known approach for designing and modelling complex systems.
In this approach, the system is described initially by an abstract model that hides
many details about the internal structure of the system. More details about the
system can then be revealed by expanding certain events or states into languages. In
general, the refinement procedure adds information to the system model by extending
certain parts of the model. Refinement, therefore, can be regarded as the reverse of
the abstraction process.
In discrete event systems, refinement can take different forms depending on the
underlying system and its environment. Language refinement is one of these forms
in which a given behaviour of the system is extended by replacing certain events by
languages. In general, language refinements can be represented by substitution maps
in the form ϕ : Σ∗ → L(∆) which associate every letter in Σ with a language in ∆∗,
where Σ ⊆ ∆. In this case Σ corresponds to the alphabet of the abstract (high level)
model and ∆ is the alphabet of the more detailed (low level) system representation
which extends the high level model.
Language refinements can be expressed using the generalized language composition
under certain assumptions. This is the case, for instance, when shared events are used
to initiate the extension (substitution) procedure while all other events are considered
internal (not shared). This means that each event σ in Σ is mapped to a language in
∆∗ of the form σH, where H ⊆ (∆ − Σ)∗. This refinement is referred to as synchronized
refinement. This form of refinement can be simulated by the layout K = {U,XV ∗}∗.
Handshaking is another example of refinement that can be expressed by the generalized
language composition. This form of refinement is common in communication
protocols and hardware systems. In this interaction scheme, shared events are used
to initiate the extension (refinement) and they are also required to signal its termi-
nation. Generally, there are four disjoint sets of events; initiating events, terminating
events, internal events of the calling subsystem and internal events of the called sub-
system. Considering the same set of initiating and terminating events, handshaking
refinements can be simulated by the layout K = {U,XV ∗X}∗.
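Since the abstract events U, X, V are just letters, membership of a string's abstraction in the layouts {U, XV∗}∗ and {U, XV∗X}∗ can be tested with ordinary regular expressions. A sketch (the alphabets and helper names are ours):

```python
import re

def abstract(w, a1, a2):
    """Map a concrete string to abstract events: U (component 1 only),
    V (component 2 only), X (shared)."""
    def sym(c):
        if c in a1 and c in a2:
            return 'X'
        return 'U' if c in a1 else 'V'
    return ''.join(sym(c) for c in w)

SYNC_REFINEMENT = re.compile(r'(U|XV*)*$')   # layout {U, XV*}*
HANDSHAKE = re.compile(r'(U|XV*X)*$')        # layout {U, XV*X}*

a1, a2 = {'a', 'b', 'x'}, {'v', 'w', 'x'}    # x is the shared event
print(bool(SYNC_REFINEMENT.match(abstract('abxvv', a1, a2))))   # True: 'UUXVV'
print(bool(HANDSHAKE.match(abstract('abxvv', a1, a2))))         # False: no closing X
print(bool(HANDSHAKE.match(abstract('abxvvx', a1, a2))))        # True: 'UUXVVX'
```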
Interleaving
Interleaving is also a common interaction scheme in hardware and software systems. It
corresponds to situations where the system components perform a task in an alternat-
ing way, or when a multiprocess system is simulated in a single-process environment.
In interleaving interaction, two or more systems are executed alternately based
on certain time limits and priority schemes. In the simple setting of two systems
with equal priorities, no timing constraints, and no shared events, interleaving can
be represented by the (universal) layout K = (UV )∗(U + ε). Figure 3.6 shows this
layout as well as the layout of some other standard interactions.
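Membership in the interleaving layout (UV)∗(U + ε) amounts to checking that the abstract image of a string alternates strictly, starting with a component-1 event. A minimal sketch (names and alphabets are ours):

```python
def abstract(w, a1):
    """f_Sigma for two disjoint components: U for component-1 events, V otherwise."""
    return ''.join('U' if c in a1 else 'V' for c in w)

def in_interleaving_layout(w, a1):
    """Membership in the layout (UV)*(U + epsilon): strict alternation
    starting with a component-1 event."""
    s = abstract(w, a1)
    return all(c == ('U' if i % 2 == 0 else 'V') for i, c in enumerate(s))

a1 = {'a', 'b'}
print(in_interleaving_layout('acbd', a1))  # True: abstracts to 'UVUV'
print(in_interleaving_layout('abcd', a1))  # False: abstracts to 'UUVV'
```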
Figure 3.6: The Layouts of Some Standard Interactions
3.8 Multilevel Interaction Structures
Practical systems are usually organized in a hierarchical multilevel scheme. In hier-
archical systems, the main components are grouped into a disjoint set of objects or
“larger” components. Each of these objects contains a set of components interacting
in a specified way. In turn, these objects form the components of the next level.
The process of clustering components is repeated until the highest level is reached at
which the interaction between all the system components is fully specified.
In this section, the IDES model is extended to provide direct representation of
hierarchical multiprocess DES. This requires expanding the interaction specification
from a language to a structure that matches the organization of the systems. This
interaction structure can be viewed as a more detailed representation of the interaction
specification of the system, consisting of an ordered set of local specifications, each
of which targets a specified level of the system description.
Let Π and Σ be two alphabet vectors with index sets I and J respectively such
that α(Σ) = α(Π). We say that Π is a cover for Σ, written Σ � Π, if
1. (∀i ∈ I)(∀j ∈ J) Σi ∩ Πj ≠ ∅ =⇒ Σi ⊆ Πj,
2. (∀i ∈ I)(∀j, k ∈ J) Σi ⊆ (Πj ∩ Πk) =⇒ j = k.
That is, every component in Σ is a subset of some unique component of Π and
every component of Π is the union of a unique set of components in Σ. Hence,
each j ∈ J corresponds to a unique subset of I and, under this correspondence, the
set J can be identified with a partition of I. Note that, in general, both Σ and Π
may contain shared events. We will write Σ ≺ Π for the case when Σ � Π and
Σ �= Π. For example, consider the case where Σ = {(a, b, x), (c, x, y), (y, d)} and
Π = {(a, b, x), (c, d, x, y)}. Then, Σ ≺ Π.
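A literal reading of the two cover conditions can be checked mechanically. The sketch below (names ours) represents alphabet vectors as lists of sets; note it applies the conditions strictly, so it does not model the thesis's allowance for events shared across covering blocks, and the example vectors are our own:

```python
def is_cover(sigma, pi):
    """Literal check of the two cover conditions for alphabet vectors
    (Sigma <= Pi), assuming events are not shared across covering blocks."""
    if set().union(*sigma) != set().union(*pi):
        return False           # both vectors must span the same total alphabet
    for s in sigma:
        # condition 1: s must be contained in every Pi-component it intersects
        if any(s & p and not s <= p for p in pi):
            return False
        # condition 2: the covering Pi-component must be unique
        if sum(1 for p in pi if s <= p) != 1:
            return False
    return True

print(is_cover([{'a', 'b'}, {'c'}, {'d'}], [{'a', 'b', 'c'}, {'d'}]))  # True
print(is_cover([{'a', 'b'}, {'c'}], [{'a'}, {'b', 'c'}]))              # False
```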
The composition of a hierarchical multiprocess system can be conducted iteratively
as follows. The components of the system are grouped into a collection of disjoint sets
of components. Each set in this collection has its own local interaction specification.
Each set of components is then composed with its local interaction specification to
define a new higher level component. The set of new components obtained this way
is now defined over a more abstract process space that is a cover for the original one.
A higher level interaction specification can now be defined for these new components.
The new components, in turn, can be grouped into another collection of disjoint sets of
components defined over a higher level of process space abstraction. This procedure is
repeated until the top level is reached, where the components have a uniform interaction
specification. At this level the interaction between all basic components of the system
is now completely specified. The outcome of the composition at the highest level
of the hierarchy is the language generated by the overall system. The interaction
specification for such hierarchical systems can be represented by a tree-like structure.
The root node of this structure corresponds to the top level of the abstraction and the
components appear as the leaves of the lowest level as shown in the following figure.
A formal model for multilevel interaction in hierarchical systems can be defined
Figure 3.7: Multilevel Interaction Structure
based on the above description. An N -level interaction structure for the process space
Σ is defined as the tuple (Π,K), where:
- The first element Π is a set of alphabet vectors {Σn | n ∈ [1 . . . N ], α(Σn) = α(Σ)},
referred to as the process space structure, satisfying
(∀n ∈ [1 . . . N − 1]) Σ ≺ Σn ≺ Σn+1 and ΣN = {Σ}
That is, each alphabet vector in Π is a cover for any lower rank alphabet vector. The
highest rank alphabet vector has one component, namely, the alphabet set Σ. A set
of partitions for the index set I will be used as indices for the set Π. This set of
indices is denoted IΨ and defined as follows,
IΨ = {In ∈ EI | n ∈ [1 . . . N ], and (∀n ∈ [1 . . . N − 1]) In < In+1}
where EI is the set of all partitions of the set I. Each element In ∈ IΨ serves as an
index for the alphabet vector Σn ∈ Π. Recall that each element in In is a subset of
I and the set In is a partition of I. Also, the partition In is finer than the partition
In+1 for all n ∈ [1 . . . N − 1], and at the top level IN = {I}.
The following convention will be used to identify the components in the set of alphabet
vectors Π. Each component will have two subscripts. The first subscript indicates
the abstraction level and the second one indicates the index of the component. For
example, Σn,j denotes the component j of the alphabet vector Σn at the nth level.
Note that in this hierarchy, each alphabet component in a given level covers (that
is, contains the alphabet of) a unique set of components at the lower level. The
subvector of Σn−1 that is covered by the component Σn,j at the upper level will be
denoted Σn,j.
- The second element K is a set of interaction language vectors {Kn | n ∈ [1 . . . N ]}
satisfying
(∀n ∈ [1 . . . N ]) Kn = {Kn,j ⊆ Σ∗n,j | j ∈ In}
Here also two subscripts will be used to identify each interaction language in the set
K similar to the convention used above. So, Kn,j is the component j of the language
vector Kn at the nth level.
Example 3.3. Let Σ = {(a, x), (b, x, y), (c, y), (d, y)} be a process space over the
index set I = {1, 2, 3, 4}. Let Ψ = (Π,K) be a 2-level interaction structure over Σ
defined as follows
1. Π = {Σ1, Σ2}, with
Σ1 = {{a, b, x, y}, {c, d, y}} = {Σ1,A, Σ1,B}, Σ2 = {{a, b, c, d, x, y}} = {Σ2,I}
2. K = {K1, K2}, with K1 = {K1,A, K1,B} and K2 = {K2,I}, where
K1,A ⊆ Σ∗1,A, K1,B ⊆ Σ∗1,B, and K2,I ⊆ Σ∗2,I
The interaction structure Ψ applied to a language vector L is shown in the following
figure.
[Diagram: L1 and L2 composed under K1,A, L3 and L4 under K1,B, and the two
results composed under K2,I.]
Here we have IΨ = {I1, I2} where I1 = {{1, 2}, {3, 4}} and I2 = {{1, 2, 3, 4}}. In the
above we write A for {1, 2} and B for {3, 4}. �
The composition operation BΣ can now be extended to handle multiprocess sys-
tems with interaction structure specification. Let L be a language vector over Σ and
let Ψ = (Π,K) be an interaction structure over Σ. The composition of L under Ψ,
denoted BΣ(L, Ψ), is defined through the following recursion. Let L0 = L. For each
i ∈ [1 . . . N ] define
Li = {Ki,J ∩ (‖j∈J Li−1,j) | J ∈ Ii}
where Li−1,j is the component j of the language vector Li−1. This iteration will end
up with the language vector LN which contains a single element, LN,I , which is the
result of the compound composition BΣ(L, Ψ). That is,
BΣ(L, Ψ) = LN,I
Clearly, BΣ(L, Ψ) is the language generated by the language vector L under the
restriction of the interaction structure Ψ. Applying the above recursion on the last
example we get,
L0 = {L0,i | L0,i = Li, i ∈ [1 . . . 4]} = L,
L1 = {L1,A, L1,B}, where L1,A = (L0,1 ‖L0,2) ∩ K1,A, L1,B = (L0,3 ‖L0,4) ∩ K1,B,
L2 = {L2,I}, where L2,I = (L1,A ‖L1,B) ∩ K2,I,
BΣ(L, Ψ) = L2,I
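The recursion can be prototyped for finite languages. The sketch below (names ours; a hypothetical four-component, two-level structure over pairwise disjoint single-letter alphabets) composes each level-1 block with its local interaction language and then composes the blocks under the top-level language:

```python
def merge(s, t, a1, a2):
    """All interleavings of s and t that agree on shared events."""
    if not s and not t:
        return {''}
    out = set()
    if s and s[0] not in a2:
        out |= {s[0] + w for w in merge(s[1:], t, a1, a2)}
    if t and t[0] not in a1:
        out |= {t[0] + w for w in merge(s, t[1:], a1, a2)}
    if s and t and s[0] == t[0] and s[0] in a1 & a2:
        out |= {s[0] + w for w in merge(s[1:], t[1:], a1, a2)}
    return out

def compose_block(parts, K=None):
    """Synchronously compose (language, alphabet) pairs, then restrict by K."""
    L, A = parts[0]
    for L2, A2 in parts[1:]:
        L = {w for s in L for t in L2 for w in merge(s, t, A, A2)}
        A = A | A2
    return (L & K if K is not None else L), A

comps = [({'a'}, {'a'}), ({'b'}, {'b'}), ({'c'}, {'c'}), ({'d'}, {'d'})]
block_A = compose_block(comps[:2], {'ab'})       # level 1: K1,A forces a before b
block_B = compose_block(comps[2:], {'cd'})       # level 1: K1,B forces c before d
top, _ = compose_block([block_A, block_B], {'abcd', 'cdab'})   # level 2: K2,I
print(top)   # {'abcd', 'cdab'}
```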
It is worthwhile to see if the effect of the interaction structure Ψ on a given al-
phabet vector can be simulated by an interaction language. Define the language
BΣ(P Σ(Σ∗), Ψ) as the interaction language generated by interaction structure Ψ.
This language may be referred to as the Ψ-interaction language and will be denoted
KΨ. Based on its definition, the language KΨ can be obtained by composing the set
K recursively starting from the vector K1 in a way similar to the procedure described
above for computing BΣ. Applying this procedure to the last example we get
KΨ = ( K1,A ‖K1,B ) ∩ K2,I
We claim that the language KΨ simulates the effect of the interaction structure Ψ
on any given language vector in the process space Σ. That is, the restriction of any
language vector L in Σ to KΨ is equivalent to the restriction of L to the structure
Ψ. Before we present the proof of this claim we need to introduce the following
convention. Let Σi ⊆ Σj be two alphabet sets. We will write Pj/i : Σ∗j → Σ∗i for the
natural projection from Σj to Σi. The inverse of this projection is denoted P⁻¹j/i.
Lemma 3.2. Let Σi ⊆ Σj ⊆ Σk be alphabet sets. Then,
P⁻¹k/j ◦ P⁻¹j/i = P⁻¹k/i
Proof. Let σ ∈ Σi be an event, and let Σji = Σj − Σi, Σkj = Σk − Σj, and
Σki = Σk − Σi. Then
P⁻¹k/j(P⁻¹j/i(σ)) = P⁻¹k/j(Σ∗ji σ Σ∗ji)
= Σ∗kj (Σ∗kj Σji Σ∗kj)∗ Σ∗kj σ Σ∗kj (Σ∗kj Σji Σ∗kj)∗ Σ∗kj
= (Σ∗kj Σji)∗ Σ∗kj σ (Σ∗kj Σji)∗ Σ∗kj
= (Σkj ∪ Σji)∗ σ (Σkj ∪ Σji)∗
= Σ∗ki σ Σ∗ki
= P⁻¹k/i(σ)
The extension to strings and languages follows directly from the fact that the inverse
projection operation distributes over catenation and union.
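The lemma can be sanity-checked on bounded string sets: up to a fixed length, applying two bounded inverse projections in sequence yields the same set as applying the single inverse projection. A sketch with alphabets and names of our own choosing:

```python
from itertools import product

def proj(s, alpha):
    """Natural projection: erase events outside alpha."""
    return ''.join(c for c in s if c in alpha)

def inv_proj(L, small, big, max_len):
    """Bounded inverse projection: strings over `big` (length <= max_len)
    whose projection onto `small` lies in L."""
    out = set()
    for n in range(max_len + 1):
        for w in product(sorted(big), repeat=n):
            w = ''.join(w)
            if proj(w, small) in L:
                out.add(w)
    return out

Si, Sj, Sk = {'a'}, {'a', 'b'}, {'a', 'b', 'c'}
one_step = inv_proj({'a'}, Si, Sk, 3)
two_step = inv_proj(inv_proj({'a'}, Si, Sj, 3), Sj, Sk, 3)
print(one_step == two_step)   # True, as Lemma 3.2 predicts (up to the bound)
```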
The following result confirms the claim above and shows that the restriction effect
of any interaction structure on a given language can be simulated by an interaction
language that depends only on the interaction structure.
Theorem 3.1. Let Ψ be an interaction structure in the process space Σ. Then
(∀L ∈ L(Σ)) BΣ(L, Ψ) = BΣ(L, KΨ)
Proof. We will proceed by induction on the number of levels in the interaction
structure. Clearly, the above equation holds for a one-level system. Assume now it
holds also for N-level systems. Let Ψ be an (N + 1)-level interaction structure. For
J ⊆ I, we will write ΨN,J for the restriction of Ψ to N levels (starting from level 0)
and the J components in I. We also write LJ for the language vector containing only
the set of J components in L. Now, assume that the recursive procedure for computing
BΣ(L, Ψ) has been carried through the N th level, at which we have obtained a
language vector LN.
BΣ(L, Ψ) = ‖LN ∩ KN+1,I
= ⋂J∈IN P⁻¹IN/J(LN,J) ∩ KN+1,I
= ⋂J∈IN P⁻¹IN/J(BΣ(LJ, ΨN,J)) ∩ KN+1,I
= ⋂J∈IN P⁻¹IN/J(BΣ(LJ, KΨN,J)) ∩ KN+1,I
= ⋂J∈IN P⁻¹IN/J(⋂j∈J P⁻¹J/j(Lj) ∩ KΨN,J) ∩ KN+1,I
= ⋂J∈IN (⋂j∈J P⁻¹IN/J P⁻¹J/j(Lj) ∩ P⁻¹IN/J(KΨN,J)) ∩ KN+1,I
= ⋂j∈I P⁻¹I/j(Lj) ∩ ⋂J∈IN P⁻¹IN/J(KΨN,J) ∩ KN+1,I
= ⋂j∈I P⁻¹I/j(Lj) ∩ (⋂J∈IN P⁻¹IN/J(KΨN,J) ∩ KN+1,I)
= ⋂j∈I P⁻¹I/j(Lj) ∩ KΨ = BΣ(L, KΨ)
This shows that the given equality holds for N + 1 levels.
Based on the above result, a multilevel IDES model can be represented as a tuple
L = (L, Ψ), where Ψ is an interaction structure. The same system can equivalently
be described by the IDES L = (L, KΨ), where KΨ is the interaction language
equivalent to Ψ.
Layout languages were introduced earlier to represent abstract architectural spec-
ifications. The abstract characteristics of layouts can be extended to the multilevel
interaction structure as follows. In a process space Σ with index set I, an N -level
interaction structure Ψ = (Π,K) is said to be a layout structure if
(∀In ∈ IΨ)(∀j ∈ In) Kn,j ∈ Y(Σn,j)
That is, every interaction language in the set K is a layout with respect to the
corresponding process space. The following result shows that the interaction language
generated by a layout structure is a layout.
Proposition 3.10.
Ψ is a layout structure =⇒ KΨ is a layout
Proof. We will proceed by induction on the number of levels in the interaction struc-
ture Ψ. First we need to introduce the following lemma.
Lemma 3.3.
Σ ⊆ Π =⇒ Y(Σ) ⊆ Y(Π)
Proof. Let K ∈ Y(Σ). We need to prove that K ∈ Y(Π). Let σ and σ′ be two
symbols from Σ such that (σ, σ′) ∈ ker fΠ. Based on the definition of fΠ, σ and σ′ are
triggered by the same components in Π. This implies also that σ and σ′ are triggered
by the same components in any subset of Π. Given that Σ ⊆ Π, then this implies
that (σ, σ′) ∈ ker fΣ. Hence, if K is invariant with respect to fΣ then it must be also
invariant with respect to fΠ. This proves that K is also a layout for Π.
The base case when N = 1 is trivial. Assume that the above predicate is true
for all interaction structures with N levels. Now assume that Ψ is an (N + 1)-level
layout structure. For J ⊆ I, we will write ΨN,J for the restriction of Ψ to N levels
(starting from level 1) and the J components. By the induction hypothesis, each
interaction structure ΨN,J can be simulated by an interaction language KΨN,J,
and this interaction language is a layout over the process space ΣN,J. Therefore
KΨN,J = FΣN,J(KΨN,J). Then we can write
KΨ = ⋂J∈IN P⁻¹IN/J(KΨN,J) ∩ KN+1,I
= ⋂J∈IN P⁻¹IN/J(FΣN,J(KΨN,J)) ∩ FΣ(KN+1,I)
= ⋂J∈IN P⁻¹IN/J(FΣ(KΨN,J)) ∩ FΣ(KN+1,I)
= ⋂J∈IN FΣ(P⁻¹IN/J(KΨN,J)) ∩ FΣ(KN+1,I)
Then KΨ is the intersection of a set of layouts (in Σ), hence KΨ is a layout.
3.8.1 System Tree Diagram
The graphical representation for interacting discrete event systems can be extended
to show hierarchical interaction specifications. In this regard, a tree-like diagram,
referred to as the System Tree Diagram (STD), is proposed here. In this diagram the
system components and their hierarchical interaction structure are revealed in a way
that matches the organization of the system.
The convention used for the System Tree Diagram is similar to that used earlier
to draw IDES. In the STD, systems are represented by a tree graph in which the
components appear at the leaves inside rounded boxes, while interaction languages
are shown at all other nodes inside double boxes. In this diagram, each interaction
language shows the interaction between the components that directly descend from
its node. The STD can be used to visualize any hierarchical multiprocess DES. How-
ever, it is more visually effective for loosely coupled systems with layout interaction
specifications.
Example 3.4. This example is a slightly modified version of the three machines
system of example 3.2. In this example an extra shared event, y, is used to allow
more flexible arrangements of the system. The system is arranged in order to satisfy
the following specifications
• The second machine can be initiated after event c1 and must terminate before
the event d1 (event x is used to initiate and terminate the second machine).
• The third machine works only after the first machine and must terminate before
any of the two machines can start again.
This arrangement can be represented as a hierarchical layout structure as shown in
Figure 3.8. In this figure, the layout K1 restricts the first two machines in accordance
with the first specification. The second specification is handled at the second level
using the upper level layout. Note that in the first level the abstract event Z refers to
the event shared between the composition of the first two machines (under K1) and
the third machine, namely the event y.
Figure 3.8: Three machines in a hierarchical interaction structure
Note that each node in this diagram, together with all its descendants, defines
an integral part of the system and therefore can form its own independent STD,
which can be represented by a node in the system's STD. The following figure shows
the above system under the interaction language generated by the above interaction
structure.
Figure 3.9: The IDES of the three machines system
�
The above example shows the flexibility of the System Tree Diagram in particular
and the settings of the IDES model for multiprocess systems in general. The STD
provides easy access to the system components and their interaction (architectural)
specification. Modifications and expansions can be carried out efficiently on any part
of the system. In the last example, for instance, any of the given machines can be
expanded into a set of machines having their own interaction. Also, the interaction
between the first two machines can be modified. These changes can be done locally
without the need to change other parts of the system.
3.8.2 Multilevel Decomposition
It was shown earlier that any language can be decomposed into an IDES structure. In
this section, we extend the decomposition procedure to process space structures.
Under this extension, a given specification for a hierarchical system can be converted
to an IDES with an interaction structure that matches that of the system.
Let Π = {Σn | n ∈ [1 . . . N ]} be an N-level process space structure over
the process space Σ and let S be a language in L(Σ). An interaction structure
Ψ = (Π,K) is said to be a compensation structure for S if the composition of P Σ(S)
under the interaction structure Ψ is equal to S, namely
S = BΣ(P Σ(S), Ψ)
Based on Theorem 3.1, Ψ is a compensation structure for S if and only if KΨ ∈ CΣ(S).
Let Ψ1 = (Π, K1) and Ψ2 = (Π, K2) be two interaction structures. The
componentwise union of Ψ1 and Ψ2 is denoted Ψ1 ⊔ Ψ2 and is given as the structure
Ψ1 ⊔ Ψ2 = (Π, K) where
K = {Kn1 ⊔ Kn2 | n ∈ [1 . . . N ]}
That is, the interaction vectors of K are the componentwise union of the correspond-
ing interaction vectors in K1 and K2. Componentwise intersection of interaction
structures is defined similarly. Note that in both operations the arguments and the
output are structurally matched, namely, defined over the same process space struc-
ture Π. It can be shown easily that the set of compensation structures for a given
language S is closed under componentwise union and therefore has a supremal ele-
ment. Clearly, if Ψ is the supremal compensation structure for S then KΨ = CΣ(S).
However, the other direction does not generally hold.
For a process space Σ and N -level process space structure Π over Σ with N > 1,
it is easy to see that the supremal compensation structure for a language S ⊆ Σ∗ is
given by (Π,K) where,
Ki = P Σi(Σ∗),  i ∈ [1 . . . N − 1]
and KN = {CΣ(S)}. That is, the rth component of Kj ∈ K, with 1 ≤ j < N,
is given as Kj,r = Σ∗j,r. For example, with respect to the process space structure of
Example 3.3, a language S ⊆ Σ∗ can be decomposed into the following multilevel
IDES with the supremal compensation structure.
[Diagram: P1(S) and P2(S) composed under Σ∗1,A, P3(S) and P4(S) under Σ∗1,B,
with CΣ(S) at the top level.]
The supremality of this compensation structure is direct based on the associativity
of the synchronous product operation.
Chapter 4
Behavioural Aspects of Interacting
Discrete Event Systems
The IDES model contains two basic elements defining the overall behaviour of the
system, namely a language vector that represents the system components, and an
interaction specification that restricts the composite behaviour of these components.
Considering only compact language vectors, a unique language vector is associated
with any given composite behaviour (decomposable language). However, in general,
a set of interaction specifications - rather than a unique one - can be associated with
the behaviour of an interacting discrete event system. This flexibility in the model
can be utilized to develop efficient behavioural analysis procedures for this class of
systems.
Abstraction can be used to refine the interaction specifications of interacting dis-
crete event systems. Clearly, the abstraction procedure has to be efficient itself
before the efficiency of the underlying analysis procedure can be claimed. In this
chapter we investigate a class of abstractions that can be computed while avoiding
the direct computation of the synchronous product of the system components. In
addition, we will focus on the case where a well-defined correspondence can be estab-
lished between the system and its abstraction. Such correspondence is essential to
adjust the abstraction procedure such that the required information can be obtained
71
from the abstract representation without revealing unnecessary details about the sys-
tem behaviour. This correspondence also provides important behavioural information
that can be utilized in the analysis process.
Two forms of multiprocess system abstractions are discussed in this chapter,
namely automaton-based and language-based abstractions. Each form ad-
dresses a different representation of the underlying system. The objective is to
characterize a class of abstractions that is invariant with respect to the composi-
tion operation. Such invariance can be used to compute the abstraction indirectly
by abstracting the components first and then computing the synchronous product of
the result. This indirect computation is usually more efficient than the direct one.
Also this procedure preserves the original correspondence between the system and
its abstraction. In IDES the interaction specification is by definition an abstraction
of the system behaviour. Certain behaviour patterns in the interaction specification
can be used to identify an abstraction map that links the system behaviour with the
given interaction specification. An IDES with such a map defined is referred to as a
structured IDES.
Finally, in this chapter, the compactness property is extended to interacting
discrete event systems. This property ensures that the vector language component
of a given IDES can be retrieved from its composite behaviour via the decomposi-
tion operation. This resolves one issue for the order preserving requirements of the
composition operation. This property is a necessary condition for some behavioural
analysis results in this thesis. It is shown that the compactness of an IDES requires
the compactness of its language vector. In addition, compactness depends on the
interaction language of the system. Based on this dependence we introduce the no-
tion of complete languages. A combination of a complete interaction language and
compact language vector is a compact IDES. Although there is no known way to test
the completeness of a given language, a set of results is presented to help in testing
this property for certain classes of interaction languages.
4.1 Abstraction in process spaces
Abstraction is a common strategy for containing the complexity of multiprocess sys-
tems. In the abstraction process, the system dynamics is encapsulated into a simpler
model. This simpler representation is usually established so that it contains enough
information about the system behaviour to carry out the required analysis.
It is important to draw a distinction between the notions of abstraction and reduction as generally adopted in the literature. Given a property P and a system L, a
reduction for L with respect to P will result in another simplified representation of
L, namely L′ such that L satisfies property P if and only if L′ satisfies the same
property. Therefore, reductions by definition guarantee a solution to the given prob-
lem. However, reductions are usually hard to find and computationally expensive.
Moreover, they are specific to the property at hand and have to be adjusted and
recalculated for other properties.
Abstraction, on the other hand, retains enough information about L such that L
satisfies property P if L′ satisfies the same property1. In general, the same conclusion
applies for any other behaviour-based property. In contrast with reductions which
require certain computations, abstractions are usually guessed [Kur94]. However,
abstractions do not guarantee a decisive answer to the problem in the first run. When
abstractions fail to give such answer, a common technique is to refine the abstraction
iteratively until a solution is found.
Abstraction-based behaviour analysis usually requires a well-defined mapping be-
tween the system and its abstract representation. For instance, in the iterative ap-
proach to behavioural analysis, this mapping is needed to refine the abstraction when
it fails to provide a solution. In this section, we present a class of automaton-based
and language-based abstractions that provide such a correspondence. Moreover, this
correspondence is defined directly based on the relation between components and their
abstraction, without the need to compute the composition of the system components.
1In particular the property P we deal with in this thesis is that of language inclusion.
4.1.1 Automaton-based abstraction
The target of automaton-based abstractions is the finite state structure of the system.
In this form of abstraction the system model is reduced by aggregating its set of
states such that the dynamics of the aggregated system preserves the transitions of
the original system. In such construction, it is always guaranteed that the behaviour
of the original system is a subset of the behaviour of its abstraction. Also, there
is a well-defined mapping between the abstract system and the original one. This
correspondence takes the form of automaton homomorphism.
An automaton homomorphism is a transition preserving map defined over the
state sets of the underlying transition structures. Formally, let A = (Q, Σ, δ, q_o, Q_m)
and B = (Q′, Σ′, δ′, q′_o, Q′_m) be two automata. An automaton homomorphism from A
to B is a map H : Q → Q′ that satisfies:

1. (∀q ∈ Q)(∀σ ∈ Σ) H(δ(q, σ)) = δ′(H(q), σ)

2. H(q_o) = q′_o

3. H(Q_m) ⊆ Q′_m
It is straightforward to see that based on the definition of H it is always the case
that L(A) ⊆ L(B). An automaton epimorphism from A onto B is a homomorphism
where H is surjective and, in addition, the inclusion in the third condition above is
replaced by equality. In this case we say that B is a homomorphic image of A.
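The three conditions above can be checked mechanically for finite automata. The following sketch assumes a hypothetical dict-based encoding (not notation from this thesis): an automaton is a tuple (Q, Σ, δ, q_o, Q_m), with δ a partial transition function represented as a dictionary.

```python
def is_homomorphism(H, A, B):
    """Check the three conditions for H : Q -> Q' to be an automaton
    homomorphism from A to B. Automata are tuples (Q, Sigma, delta, q0, Qm)
    with delta a dict (state, event) -> state (a partial transition function)."""
    Qa, Sa, da, q0a, Qma = A
    Qb, Sb, db, q0b, Qmb = B
    # 1. transitions are preserved: H(delta(q, s)) = delta'(H(q), s)
    if any(db.get((H[q], s)) != H[q2] for (q, s), q2 in da.items()):
        return False
    # 2. the initial state is preserved
    if H[q0a] != q0b:
        return False
    # 3. marker states map into marker states: H(Qm) is a subset of Q'm
    return all(H[q] in Qmb for q in Qma)

# A counts occurrences of 'a' modulo 4, B modulo 2; H(q) = q mod 2 works.
A = ({0, 1, 2, 3}, {"a"}, {(q, "a"): (q + 1) % 4 for q in range(4)}, 0, {0})
B = ({0, 1}, {"a"}, {(q, "a"): (q + 1) % 2 for q in range(2)}, 0, {0})
H = {q: q % 2 for q in range(4)}
print(is_homomorphism(H, A, B))  # True
```

Note that L(A) ⊆ L(B) is immediate here, since every transition of A has a matching transition in B through H.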
Consider the automaton A = (Q, Σ, δ, q_o, Q_m). Let π be an equivalence relation
on Q. The quotient automaton of A with respect to π is denoted A/π and is equal to
(Q/π, Σ, δ_π, q_o/π, Q_m/π), where δ_π is defined as follows:

    δ_π(X, σ) = Y ⟺ (∃q ∈ X)(∃q′ ∈ Y) δ(q, σ) = q′
In general the quotient automaton A/π can be non-deterministic as several states
with common eligible events may collapse to a single state under the partition while
the targets of these events remain distinct. An admissible partition of the automaton
A satisfies
(∀(q, q′) ∈ π)(∀σ ∈ Σ) δ(q, σ)! ∧ δ(q′, σ)! =⇒ (δ(q, σ), δ(q′, σ)) ∈ π
Clearly, A/π is deterministic if and only if π is admissible. It is easy to prove that for
a given partition π, the coarsest admissible partition that is finer than π always exists.
In the implementation of automaton-based abstractions, we will only consider admis-
sible partitions to avoid unnecessary complications resulting from non-determinism.
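For finite automata, the admissibility test and the quotient construction are direct to implement. A minimal sketch, again assuming a dict-based encoding of automata (illustrative names, not from the thesis):

```python
from itertools import product

def is_admissible(blocks, delta, Sigma):
    """A partition (list of frozensets) is admissible iff equivalent states
    never lead, on a common eligible event, to inequivalent states."""
    cls = {q: b for b in blocks for q in b}          # state -> its block
    for b, s in product(blocks, Sigma):
        targets = {cls[delta[q, s]] for q in b if (q, s) in delta}
        if len(targets) > 1:                         # targets split across blocks
            return False
    return True

def quotient(A, blocks):
    """Quotient automaton A/pi; deterministic when the partition is admissible."""
    Q, Sigma, delta, q0, Qm = A
    cls = {q: b for b in blocks for q in b}
    dpi = {(cls[q], s): cls[q2] for (q, s), q2 in delta.items()}
    return (set(blocks), Sigma, dpi, cls[q0], {cls[q] for q in Qm})

delta = {(0, "a"): 1, (0, "b"): 2, (1, "c"): 0, (2, "c"): 0}
A = ({0, 1, 2}, {"a", "b", "c"}, delta, 0, {0})
pi = [frozenset({0}), frozenset({1, 2})]     # merging 1 and 2 is admissible here
print(is_admissible(pi, delta, {"a", "b", "c"}))  # True
print(len(quotient(A, pi)[0]))                    # 2 states remain
```

The map `cls` built inside `quotient` is exactly the natural epimorphism F_π discussed next.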
Let π be a partition on the automaton A and let Fπ : Q → Q/π be the corre-
sponding map. It is easy to see that Fπ is an epimorphism between A and A/π. The
map F_π is referred to as the natural epimorphism defined by π. The next theorem
is a simple restatement of the standard homomorphism theorem. It emphasizes the
natural correspondence between automaton epimorphisms (an external relation) and
quotients (an internal relation).
Theorem 4.1. Let A and B be two machines and H be an epimorphism from A to
B. Suppose that π is a partition on A satisfying π ≤ ker H. Then there exists an
epimorphism G from A/π to B such that H = G ◦ F_π. Furthermore, if π = ker H then
G is an isomorphism. □
The above theorem defines the correspondence between the system and its aggre-
gate representation. The issue now is to see if such correspondence remains under the
composition operation and, if so, in what form. To this end, consider the two automata
A_i = (Q_i, Σ_i, δ_i, q_{oi}, Q_{mi}) where i ∈ [1, 2]. Let π_i be an admissible partition for A_i.
Define the partition π_{1,2} of the Cartesian product Q_1 × Q_2 as follows:

    (q_1, q_2) ≡_{π_{1,2}} (q′_1, q′_2) ⟺ q_1 ≡_{π_1} q′_1 ∧ q_2 ≡_{π_2} q′_2

That is, two compound states are equivalent with respect to π_{1,2} iff their component
states are equivalent with respect to the corresponding partitions.
Proposition 4.1.
    (A_1 ‖ A_2)/π_{1,2} ≅ (A_1/π_1) ‖ (A_2/π_2)
Proof. Clearly, the set (Q_1 × Q_2)/π_{1,2} is isomorphic to the set Q_1/π_1 × Q_2/π_2. This
isomorphism is established by the equality

    (∀q_1 ∈ Q_1)(∀q_2 ∈ Q_2) [(q_1, q_2)]_{π_{1,2}} = [q_1]_{π_1} × [q_2]_{π_2}

Therefore, we have [(q_{1o}, q_{2o})]_{π_{1,2}} = ([q_{1o}]_{π_1}, [q_{2o}]_{π_2}), and a similar equality holds for
each combination of marker states. Next we show that this isomorphism is preserved
under the transition mappings in both structures. Let ζ be the transition map in
(A_1‖A_2)/π_{1,2}, η be the transition map in (A_1/π_1)‖(A_2/π_2), δ_{1,2} be the transition map
in A_1‖A_2, and δ_{π_i} be the transition map in A_i/π_i. Assume that ζ(x, σ) = x′ for
some σ ∈ Σ_1 − Σ_2. Then there must exist (q_1, q_2) ∈ x and (q′_1, q′_2) ∈ x′ such that
δ_{1,2}((q_1, q_2), σ) = (q′_1, q′_2). Therefore, we have δ_1(q_1, σ) = q′_1 and q_2 = q′_2. Then
δ_{π_1}([q_1]_{π_1}, σ) = [q′_1]_{π_1} and therefore η(([q_1]_{π_1}, [q_2]_{π_2}), σ) = ([q′_1]_{π_1}, [q′_2]_{π_2}). However,
[(q_1, q_2)]_{π_{1,2}} corresponds to ([q_1]_{π_1}, [q_2]_{π_2}) under the isomorphism, and so does [(q′_1, q′_2)]_{π_{1,2}}
to ([q′_1]_{π_1}, [q′_2]_{π_2}). Therefore, the isomorphism is preserved under any transition
with an event from Σ_1 − Σ_2. Similarly, this can be proved for events from Σ_2 − Σ_1 and from
Σ_1 ∩ Σ_2.
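Proposition 4.1 can also be checked computationally on small instances. The sketch below assumes a hypothetical dict-based encoding of automata; `sync`, `quotient` and `language` are illustrative helpers (not from the thesis), and isomorphism is checked soundly through equality of bounded-length behaviours.

```python
def sync(A1, A2):
    """Synchronous product: shared events move both components, private events one."""
    (Q1, S1, d1, q01, Qm1), (Q2, S2, d2, q02, Qm2) = A1, A2
    d = {}
    for q1 in Q1:
        for q2 in Q2:
            for s in S1 | S2:
                n1 = d1.get((q1, s)) if s in S1 else q1   # private to A2: A1 stays
                n2 = d2.get((q2, s)) if s in S2 else q2   # private to A1: A2 stays
                if n1 is not None and n2 is not None:
                    d[(q1, q2), s] = (n1, n2)
    return ({(a, b) for a in Q1 for b in Q2}, S1 | S2, d, (q01, q02),
            {(a, b) for a in Qm1 for b in Qm2})

def quotient(A, blocks):
    Q, S, d, q0, Qm = A
    cls = {q: b for b in blocks for q in b}
    return (set(blocks), S, {(cls[q], s): cls[q2] for (q, s), q2 in d.items()},
            cls[q0], {cls[q] for q in Qm})

def language(A, k):
    """Closed behaviour of A up to length k."""
    Q, S, d, q0, Qm = A
    out, frontier = {""}, {("", q0)}
    for _ in range(k):
        frontier = {(w + s, d[q, s]) for (w, q) in frontier
                    for s in S if (q, s) in d}
        out |= {w for (w, _) in frontier}
    return out

A1 = ({0, 1, 2}, {"a", "c"}, {(0, "a"): 1, (1, "a"): 2, (2, "c"): 0}, 0, {0})
A2 = ({0, 1}, {"b", "c"}, {(0, "b"): 1, (1, "c"): 0}, 0, {0})
pi1 = [frozenset({0}), frozenset({1, 2})]          # admissible for A1
pi2 = [frozenset({0}), frozenset({1})]             # identity partition for A2
pi12 = [frozenset((a, b) for a in X for b in Y) for X in pi1 for Y in pi2]

lhs = quotient(sync(A1, A2), pi12)                 # (A1 || A2) / pi12
rhs = sync(quotient(A1, pi1), quotient(A2, pi2))   # (A1/pi1) || (A2/pi2)
print(language(lhs, 5) == language(rhs, 5))        # True
```

The right-hand route is the indirect abstraction discussed below: it never builds the quotient of the full product state space directly.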
Based on the above result, any partition of the local components of a given system
directly corresponds to a partition of the composite structure. Moreover, the corre-
sponding partition of the composite structure is totally specified by the partitions of
the system components. A result similar to that of Theorem 4.1 can be obtained for
epimorphism mappings. Let h_i : A_i → B_i with i ∈ [1, 2] be two epimorphism maps,
where B_i has state set Q′_i. Define the map h_{1,2} : Q_1 × Q_2 → Q′_1 × Q′_2 as follows:

    (∀q_1 ∈ Q_1)(∀q_2 ∈ Q_2) h_{1,2}(q_1, q_2) = (h_1(q_1), h_2(q_2))
It is easy to see that this map is well-defined. Moreover, this map defines an epimorphism from the automaton A_1‖A_2 onto the automaton h_1(A_1)‖h_2(A_2), namely the
composition B_1‖B_2. The following result is a direct consequence of the previous
proposition together with Theorem 4.1.
Corollary 4.1.
    h_{1,2}(A_1 ‖ A_2) ≅ h_1(A_1) ‖ h_2(A_2)
□
Similarly, here the map h_{1,2} is totally defined by the maps h_1 and h_2. The above
results show that automaton-based abstraction can be performed by abstracting the
components first and then composing the abstracted structures. This approach will
be referred to as indirect abstraction. Clearly, this is generally more efficient than the
direct abstraction approach, where the synchronous product is computed first and the
composite structure is then abstracted. Figure 4.1 shows both directions in computing
the abstraction h_{1,2}(A_1‖A_2). The corresponding partitions are shown by the dashed
boxes.
Abstraction-based analysis may require the refinement of the system abstraction
iteratively until a solution is found. Such an iterative procedure is usually required
to be finite, ending with the identity map, where the abstraction of the system is
the system itself. In this case, the number of available refinement steps will have a
significant impact on the efficiency of the iterative procedure. More refinement steps
bring a better resolution which in turn helps to reach a level of abstraction closer to
the exact amount of information needed to solve the given problem. Therefore, the
maximum number of possible refinement steps can be used as a quality measure for a
given abstraction scheme. In automaton-based abstractions, this number is equal to
the length of the maximum chain in the lattice of all possible (admissible) partitions
of the system.
The maximum number of refinement steps shows the limitation of indirect abstractions compared with direct ones. In Figure 4.1, for instance, we have a maximum
of 6 possible refinement steps using indirect abstraction compared with 9 for the direct one. In the general case of a system with index set I where the state size of each
component is equal to n_i for i ∈ I, it is easy to verify that the maximum number of
refinement steps is equal to Σ_{i∈I} n_i, while direct abstraction has up to Π_{i∈I} n_i steps.

[Figure 4.1: Two way abstraction of a multiprocess DES. The figure shows the two routes to the abstraction h_1 × h_2 of A_1‖A_2: composing the components A_1 and A_2 first and then abstracting, versus abstracting each component A_i to B_i = h_i(A_i) first and then composing to B_1‖B_2. The corresponding partitions are indicated by dashed boxes.]
4.1.2 Language-based abstraction
Language-based abstractions target the behaviour (language) of the system rather
than its finite state model. In general, abstractions are required to preserve the
original behaviour of the system. Starting from this condition, language-based ab-
stractions can take different forms depending on the scope of analysis. The class of
monotonic maps and its catenative subclass are examples of language-based abstrac-
tion maps. Monotonicity is needed to preserve the containment order on the set of
languages, which is required in behavioural comparison of DES. Similar to the case
with automaton-based abstraction, the aim here is to define a class of language-based
abstractions that can be computed efficiently without losing the correspondence with
the original system.
A general class of language-based abstraction maps can be defined based on the
above two features, namely, a monotonic map that preserves the original behaviour of
the system. For the alphabet set Σ a map G : Σ∗ → L(Σ) is said to be an extension
map if
(∀s ∈ Σ∗) s ∈ G(s)
Therefore, an extension map extends and preserves the behaviour of its arguments.
It is easy to see that extension maps are monotonic. A catenative map is a map that
is invariant with respect to the catenation operation, namely, a catenative abstraction
map G : Σ∗ → L(Σ) satisfies
(∀s1, s2 ∈ Σ∗) G(s1s2) = G(s1)G(s2)
Therefore, catenative maps are semigroup substitutions and hence completely defined
by specifying only the image of each element in Σ ∪ {ε} under the mapping. A
catenative extension map from Σ∗ to L(Σ) will be referred to as a Σ-abstraction map.
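Since a catenative map is completely determined by the images of the elements of Σ ∪ {ε}, it can be represented by a finite table of event images and lifted to strings and languages. A minimal Python sketch (hypothetical representation; finite image sets only):

```python
from itertools import product

def lift(images, s):
    """Lift a catenative map, given by its finite event images, to a string:
    G(s1 s2 ... sn) = G(s1)G(s2)...G(sn), with G(empty string) = {""}."""
    parts = [images[c] for c in s]
    return {"".join(p) for p in product(*parts)} if parts else {""}

def lift_lang(images, L):
    """Lift further to a language by taking the union of the string images."""
    return {t for s in L for t in lift(images, s)}

# An extension map: every event image contains the event itself.
images = {"a": {"a", "aa"}, "b": {"b"}}
print(sorted(lift(images, "ab")))                                  # ['aab', 'ab']
print(all(s in lift(images, s) for s in ["", "a", "ab", "aabb"]))  # True
```

The second print checks the extension property s ∈ G(s) on a few samples; monotonicity of the lifted map over languages follows from the union in `lift_lang`.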
Σ-abstraction maps can be implemented as transducers2. The output mapping
of any transducer is monotonic by definition. Therefore, a transducer defines a Σ-abstraction map if its transition outputs contain the triggering event and any two
transitions with the same event have the same output. Σ-abstraction maps
will be referred to simply as abstraction maps when Σ is known from the context.
A map F : Σ∗ → L(Σ) is said to be prefix preserving if F ◦ pre = pre ◦ F.
This property ensures that the evolution history of the system is preserved under the
map. For abstraction maps, this feature is important when the information about the
evolution history of the system is crucial to the underlying analysis. The following
standard result is needed in characterizing prefix preserving abstractions.
Lemma 4.1. Let A, B be two languages in L(Σ). Then

1. pre(A ∪ B) = pre(A) ∪ pre(B), and pre(A ∩ B) ⊆ pre(A) ∩ pre(B)

2. pre(AB) = pre(A) ∪ A pre(B)

□
The following proposition establishes the condition for a given abstraction map to
be prefix preserving.
Proposition 4.2. Let G be a Σ-abstraction map. Then

    pre ◦ G = G ◦ pre ⟺ (∀σ ∈ Σ ∪ {ε}) pre(G(σ)) = G(pre(σ))
Proof. (⇒) This follows directly by applying the condition pre ◦ G = G ◦ pre to each
σ ∈ Σ ∪ {ε}.
(⇐) We proceed by induction on string length. The case when the length is
0 holds as we have pre(G(ε)) = G(pre(ε)) by assumption and ε ∈ G(ε) by definition. Also, the case when the
length is 1 holds by the given assumption on G. Next assume pre(G(s)) = G(pre(s)). We want
to show that for all σ ∈ Σ, pre(G(sσ)) = G(pre(sσ)).

    G(pre(sσ)) = G(pre(s) ∪ sσ) = G(pre(s)) ∪ G(sσ)
               = pre(G(s)) ∪ G(s)G(σ)
               = pre(G(s)) ∪ G(s) ∪ G(s)G(σ)            (G(s) ⊆ pre(G(s)))
               = pre(G(s)) ∪ G(s)(G(ε) ∪ G(σ))
               = pre(G(s)) ∪ G(s) pre(G(σ))
               = pre(G(s)G(σ)) = pre(G(sσ))

The extension to languages is straightforward given that the prefix closure operation
distributes over union.

2 Transducers are extensions of Mealy machines where transition outputs can be any regular
language [Ber79].
Therefore, an abstraction map is prefix preserving if the image of each event is
either prefix closed or only missing the empty string, ε, from its prefix closure. For
example, based on this result, the abstraction map G : Σ∗ → L(Σ) where G(ε) = ε
and G(σ) = σ+ for all σ ∈ Σ is prefix preserving. Also, this result shows that the
composition map P−1 ◦P is always prefix preserving where P is a natural projection
map.
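The condition of Proposition 4.2 can be exercised on finite approximations. In the sketch below, G(σ) = {σ, σσ} serves as a finite stand-in for σ+; its prefix closure misses only ε, so the map should commute with prefix closure on sample strings (all names here are illustrative, not from the thesis):

```python
from itertools import product

def pre(L):
    """Prefix closure of a finite language."""
    return {s[:k] for s in L for k in range(len(s) + 1)}

def G(s, images):
    """Catenative map lifted to a string from its finite event images."""
    parts = [images[c] for c in s]
    return {"".join(p) for p in product(*parts)} if parts else {""}

# G(sigma) = {sigma, sigma sigma}: the image misses only the empty string
# from its prefix closure, so Proposition 4.2 predicts pre o G = G o pre.
images = {c: {c, c + c} for c in "ab"}
for s in ["", "a", "ab", "aab", "abba"]:
    lhs = pre(G(s, images))                            # (pre o G)(s)
    rhs = {t for p in pre({s}) for t in G(p, images)}  # (G o pre)(s)
    assert lhs == rhs
print("pre o G == G o pre on the samples")
```

The same harness flags a violation immediately if, say, an event image omits the event's single-letter prefix.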
Let Σ be an alphabet vector with index set I and let G be a Σ-abstraction map where
Σ = α(Σ). The map G is said to be Σ-compatible if it satisfies G(ε) = ε and

    (∀σ ∈ Σ) G(σ) ⊆ [σ]_Σ^*

Recall that [σ]_Σ denotes the coset of the index equivalence relation, E_Σ, that contains
the event σ. We will refer to Σ-compatible abstraction maps simply as Σ-abstraction maps
when the alphabet vector Σ is clear from the context. Hence, Σ-abstraction maps are (monoid) substitutions that extend each event σ ∈ Σ
within the intersection of those components containing σ. It is important to note that in a Σ-abstraction
map, shared events are mapped to languages over events shared by exactly the same
set of components. The following proposition provides another characterization of
the class of Σ-abstraction maps. This characterization is based on the set of natural
projection operations over the process space components.
Proposition 4.3. Let Σ be an alphabet vector with index set I and let G : Σ^* → L(Σ) be a Σ-abstraction map such that Σ = α(Σ). Then G is Σ-compatible if and
only if

    (∀i ∈ I) P_i ◦ G = G ◦ P_i
Proof. (⇒) Assume that G satisfies G(ε) = ε and for all σ ∈ Σ we have G(σ) ⊆ [σ]_Σ^*.
Let σ be an event in Σ. If σ ∈ Σ_i then P_i(G(σ)) = G(σ) = G(P_i(σ)). Otherwise,
if σ ∉ Σ_i then P_i(G(σ)) = ε = G(ε) = G(P_i(σ)). Therefore, for all σ ∈ Σ we have
P_i ◦ G = G ◦ P_i for all i ∈ I.
(⇐) Assume P_i ◦ G = G ◦ P_i. Let σ ∈ Σ_i; then G(P_i(σ)) = G(σ) = P_i(G(σ)).
This implies that G(σ) ⊆ Σ_i^*. For σ ∉ Σ_i we have G(P_i(σ)) = G(ε) = ε = P_i(G(σ)).
Therefore we have α(G(σ)) ∩ Σ_i = ∅ for any σ ∉ Σ_i. This confirms that the map G
is Σ-compatible.
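Proposition 4.3 can likewise be sanity-checked on a small two-component process space. In the sketch below (hypothetical encoding; `P` realizes the natural projection by erasing events), the event images are chosen Σ-compatible, so projection and abstraction should commute on the samples:

```python
from itertools import product

S1, S2 = {"a", "x"}, {"b", "x"}                  # x is the only shared event

def P(s, Si):
    """Natural projection: erase events outside Si."""
    return "".join(c for c in s if c in Si)

def G(s, images):
    """Catenative map lifted to a string from its finite event images."""
    parts = [images[c] for c in s]
    return {"".join(p) for p in product(*parts)} if parts else {""}

# A Sigma-compatible map: each image stays inside the coset of its event
# ([a] = {a}, [b] = {b}, and [x] = {x} since x is shared by both components).
images = {"a": {"a", "aa"}, "b": {"b", "bb"}, "x": {"x"}}

for s in ["ax", "axb", "xab", "abxa"]:
    for Si in (S1, S2):
        assert {P(t, Si) for t in G(s, images)} == G(P(s, Si), images)
print("Pi o G == G o Pi holds on the samples")
```

Replacing, say, G(x) by {x, xx} on only one side of a shared event would break compatibility, and the assertion above would then fail for some sample.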
It is straightforward to verify that the condition for a Σ-abstraction map G
to be prefix preserving, as given in Proposition 4.2, is equivalent to stating that

    (∀σ ∈ Σ)(∃H ⊆ [σ]_Σ^*)(∃∆ ⊆ [σ]_Σ) G(σ) = pre(H) ∪ ∆

That is, a Σ-abstraction map is prefix preserving if each event σ is mapped to a prefix
closed language, possibly combined with a set of events within its coset [σ]_Σ in the
alphabet set Σ.
Next, the relation of Σ-compatible abstraction maps with the inverse projection
maps will be investigated. Write G_i for the restriction of G to the domain L(Σ_i).

Proposition 4.4. Let G be a Σ-abstraction map. Then

    (∀i ∈ I) G ◦ P_i^{-1} ⊆ P_i^{-1} ◦ G_i
Proof. Let G be a Σ-abstraction map. Then for i ∈ I and a language L_i ⊆ Σ_i^* we
have

    s ∈ G(P_i^{-1}L_i) ⟹ P_i s ∈ P_i G(P_i^{-1}L_i)
                       ⟹ P_i s ∈ G(P_i P_i^{-1}L_i) = G(L_i)
                       ⟹ P_i^{-1}P_i s ⊆ P_i^{-1}G(L_i)
                       ⟹ s ∈ P_i^{-1}G(L_i)            (as s ∈ P_i^{-1}P_i s)

Then G(P_i^{-1}L_i) ⊆ P_i^{-1}G(L_i).
It is easy to see that the above result does not depend on whether G is catenative
or not. It is only required to satisfy Pi ◦ G = G ◦ Pi for all i ∈ I. Therefore,
this result can be used for a more general class of Σ-abstraction maps that are not
necessarily catenative. Note also that the other direction in the above proposition
does not hold in general. The following proposition shows that an extra condition is
required for the other direction to hold.
Proposition 4.5. Let G be a Σ-abstraction map. Then

    (∀i ∈ I) P_i^{-1} ◦ G_i = G ◦ P_i^{-1} ⟺ (∀σ ∈ Σ) G(σ) ⊆ [σ]_Σ
Proof. (⇒) Assume that P_i^{-1} ◦ G_i = G ◦ P_i^{-1} and that for σ ∈ Σ we have
G(σ) = H where H ⊆ [σ]_Σ^*. Let i ∈ I be such that σ ∈ Σ_i. Then, based on the initial assumption, it must be
that G(P_i^{-1}(σ)) = P_i^{-1}(G_i(σ)). This implies that P_i^{-1}(H) = (Σ − Σ_i)^* H (Σ − Σ_i)^*.
We claim that for every s ∈ H it must be that |s| ≤ 1. To show this, assume
there exists a string s ∈ H with s = σ_1σ_2 . . . σ_n where n ≥ 2. Then we will have
σ_1(Σ − Σ_i)^*σ_2 . . . σ_n ⊆ P_i^{-1}(H). However, σ_1(Σ − Σ_i)^*σ_2 . . . σ_n ⊈ (Σ − Σ_i)^* H (Σ − Σ_i)^*
as s ∈ H and H ⊆ Σ_i^*, contradicting the above assumption. Then it must be that
every s ∈ H is such that |s| ≤ 1. This implies that G(σ) ⊆ [σ]_Σ for all σ ∈ Σ.
(⇐) Assume that (∀σ ∈ Σ) G(σ) ⊆ [σ]_Σ. Then, for σ ∈ Σ_i,

    G(P_i^{-1}(σ)) = G((Σ − Σ_i)^* σ (Σ − Σ_i)^*)
                  = G((Σ − Σ_i)^*) G(σ) G((Σ − Σ_i)^*)
                  = (Σ − Σ_i)^* G_i(σ) (Σ − Σ_i)^*
                  = (Σ − Σ_i)^* ⋃_{δ ∈ G_i(σ)} δ (Σ − Σ_i)^*
                  = ⋃_{δ ∈ G_i(σ)} (Σ − Σ_i)^* δ (Σ − Σ_i)^*
                  = ⋃_{δ ∈ G_i(σ)} P_i^{-1}δ = P_i^{-1} ⋃_{δ ∈ G_i(σ)} δ = P_i^{-1}G_i(σ)

Note that δ above can be the empty string, ε. The extension to strings and languages
is straightforward based on the assumption that G_i and P_i^{-1} are both catenative and
distribute over union.
To simplify notation, the composition P_i^{-1} ◦ P_i : L(Σ) → L(Σ) may be denoted P̄_i.
The map P̄_i associates every language in Σ^* with the maximal language equivalent to
it with respect to the relation ker P_i. Note that P̄_i is a prefix preserving, catenative
Σ-abstraction map. However, it is not Σ-compatible.
The relation between abstraction maps and the composition operation can now
be explored based on the above results. First, the invariance of the abstraction map
with respect to the composition operation is investigated. Let G be a Σ-abstraction
map. The map G is said to be composition invariant if

    (∀L ∈ L_c(Σ)) B_Σ(G(L)) = G(B_Σ(L))

Recall that G(L) denotes the language vector {G(L_i) | i ∈ I}, and L_c(Σ) is the
set of compact language vectors. Σ-abstraction maps are not in general
composition invariant, even when invariant with respect to inverse projections, as
shown in the following example.
Example 4.1. Consider the process space Σ = {{a, x, y}, {b, x, y}} and let L_1 =
{aax, ay} and L_2 = {xbb, yb}. Now, consider the language vector L = (L_1, L_2) and
the natural abstraction map F_Σ for the process space Σ. Clearly, L is compact
and the map F_Σ is invariant with respect to inverse projections. However, axbb ∈ F_Σ(L_1) ‖ F_Σ(L_2) while axbb ∉ F_Σ(L_1 ‖ L_2). □
The following proposition explores the relation between the synchronous product
operation and the set of Σ-abstraction maps. Note the necessity of the compactness
property for this result.
Proposition 4.6. Let L be a compact language vector and let G be a Σ-abstraction
map. Then

    G(B_Σ(L)) ⊆ B_Σ(G(L))
Proof. Let s ∈ Σ^* be a string where Σ = α(Σ). Then

    s ∈ G(B_Σ(L)) ⟹ s ∈ ⋂_{i∈I} P_i^{-1}P_i G(B_Σ(L))
                  ⟹ s ∈ ⋂_{i∈I} P_i^{-1} G(P_i(B_Σ(L)))
                  ⟹ s ∈ ⋂_{i∈I} P_i^{-1} G(L_i)        (by compactness, P_i(B_Σ(L)) = L_i)
                  ⟹ s ∈ B_Σ(G(L))

Therefore, G(B_Σ(L)) ⊆ B_Σ(G(L)).
Note also here that the above result does not depend on whether G is catenative or
not. The other direction of the inclusion in the above proposition does not generally
hold even when L is compact as shown in the last example.
The main problem regarding the invariance of the abstraction with respect to the
synchronous composition operation lies in the synchronization mechanism. In general,
under a Σ-abstraction map, it is possible that incomposable string vectors in the
system components map to composable string vectors. For example, consider the
process space Σ = {{a, x, y}, {b, x, y}}, and the map G defined as follows:

    G(a) = a, G(b) = b, G(x) = G(y) = {x, y}

The string vector s = (ax, yb) is incomposable (x, y are shared events). Therefore,
G(B_Σ(s)) = G(∅) = ∅. However, the language vector G(s) = ({ax, ay}, {xb, yb}) is composable and therefore B_Σ(G(s)) ≠ ∅. In order to resolve such cases, abstraction maps
need to be refined so that shared events are not abstracted under the map.
That is, every shared event is mapped only to itself.
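This failure can be reproduced computationally. In the sketch below (illustrative helpers, not from the thesis), `sync` realizes the synchronous product restricted to bounded string length; the string vector (ax, yb) has an empty composition while its image under G composes:

```python
from itertools import product

S1, S2 = {"a", "x", "y"}, {"b", "x", "y"}        # x and y are shared events

def P(s, Si):
    """Natural projection: erase events outside Si."""
    return "".join(c for c in s if c in Si)

def sync(L1, L2, n=4):
    """Finite synchronous product: all strings over S1 | S2 up to length n
    whose projections fall in L1 and L2 respectively."""
    S = sorted(S1 | S2)
    words, out = [""], set()
    for _ in range(n):
        words = [w + c for w in words for c in S]
        out |= {w for w in words if P(w, S1) in L1 and P(w, S2) in L2}
    return out

def G(s, images):
    """Catenative map lifted to a string from its finite event images."""
    parts = [images[c] for c in s]
    return {"".join(p) for p in product(*parts)} if parts else {""}

# The map of the example: private events are fixed, shared events are abstracted.
images = {"a": {"a"}, "b": {"b"}, "x": {"x", "y"}, "y": {"x", "y"}}

# (ax, yb) is incomposable: the shared events disagree...
print(sync({"ax"}, {"yb"}))                            # set()
# ...but the abstracted pair G(ax) = {ax, ay}, G(yb) = {xb, yb} does compose:
print(sorted(sync(G("ax", images), G("yb", images))))  # ['axb', 'ayb']
```

Fixing the shared events instead, as Proposition 4.7 requires, removes the discrepancy: with G(x) = x and G(y) = y the second product is empty as well.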
and necessary condition to obtain the composition invariance property as shown in
the following proposition.
Proposition 4.7. Let G be a Σ-abstraction map. Let Σ_s be the set of shared events
in Σ, and Σ_a the set of non-shared events. Then G is composition invariant if and
only if

    G(σ) ⊆ [σ]_Σ  if σ ∈ Σ_a,   and   G(σ) = σ  if σ ∈ Σ_s
Proof. The proof here is for the case when the process space contains two components. The general case can be proved similarly.
(⇒) Assume that G is composition invariant. Then, based on the last propositions, it
must be that for all σ ∈ Σ we have G(σ) ⊆ [σ]_Σ. Now assume that for some shared
event σ ∈ Σ_s there exists a ∈ ([σ]_Σ ∪ {ε}) − {σ} such that {σ, a} ⊆ G(σ). Then we will have
G(σ ‖ a) = ∅ while a ∈ G(σ) ‖ G(a) irrespective of what G(a) is, as a ∈ G(a). Therefore
it must be that G(σ) = σ for all σ ∈ Σ_s.
(⇐) It was shown earlier that G(L_1 ‖ L_2) ⊆ G(L_1) ‖ G(L_2) holds in general for
any compact language vector L = (L_1, L_2). Assume s ∈ G(L_1) ‖ G(L_2). Then
there must exist u_i ∈ P_i^{-1}P_i(L_1 ‖ L_2) such that s ∈ G(u_i) for i ∈ {1, 2}. We
can write s = σ_1 . . . σ_n where n = |s| and σ_r ∈ Σ for all r ∈ [1 . . . n]. Then
we can write u_1 = X_1γ_1X_2 . . . X_nγ_nX_{n+1} where γ_r ∈ Σ and σ_r ∈ G(γ_r) for all
r ∈ [1 . . . n], and X_r ∈ Σ_a^* with ε ∈ G(X_r) for all r ∈ [1 . . . (n + 1)]. We can
also write u_2 = Y_1δ_1Y_2 . . . Y_nδ_nY_{n+1} with Y_r and δ_r defined similarly to X_r and γ_r respectively. Define α_r = ({γ_r} ∩ Σ_1) ∪ ({δ_r} ∩ Σ_2). Based on the initial assumption
that G(σ) = σ for all σ ∈ Σ_s, it is easy to see that |α_r| = 1 for all r ∈ [1 . . . n].
Let u = (P_1X_1)(P_2Y_1)α_1(P_1X_2)(P_2Y_2) . . . (P_1X_n)(P_2Y_n)α_n(P_1X_{n+1})(P_2Y_{n+1}). Clearly
s ∈ G(u). Moreover, P_1u = P_1u_1 and P_2u = P_2u_2. This implies that u ∈ P_1^{-1}P_1u_1 ∩
P_2^{-1}P_2u_2 and hence u ∈ L_1 ‖ L_2. Therefore s ∈ G(L_1 ‖ L_2).
The composition invariance property for abstraction maps has two important con-
sequences. First, it ensures that the abstracted behaviour preserves the decompos-
ability of the original system. Consequently, the abstracted behaviour is decompos-
able. Moreover, the components of the abstracted system are given directly as the
abstraction of the system components.
The other benefit of this property is computational, namely, it allows the com-
putation of the abstraction indirectly by computing the synchronous product of the
abstracted system components rather than computing the abstraction of the syn-
chronous product of these components (the composite system). It is easy to see that
the complexity of the indirect abstraction is of the order of the size of the abstract
system, which is typically smaller than the original system.
It should be noted, however, that composition invariant abstraction is a limited form
of abstraction that may not produce a significant complexity reduction. In general,
the efficiency of the computation of composition invariant abstractions depends on the
system. In certain situations a significant reduction can be obtained, for instance
for asynchronous systems (no shared events), or when the system components contain
repeated patterns and such patterns are the target of the abstraction; for example, the
repeated pattern in the language vector L_1 = (a_1b_1, a_2b_2, a_3b_3) can be abstracted to ab where
a = {a_1, a_2, a_3} and b = {b_1, b_2, b_3} (assuming all a_i and b_i are asynchronous events).
On the other hand, if all events are shared, no composition invariant abstraction can
be achieved.
The following proposition presents a more relaxed condition under which a given Σ-abstraction can be computed without computing the synchronous product
of the system components.
Proposition 4.8. Let G be a Σ-abstraction map such that G(σ) = σ for all σ ∈ Σ_s.
Let L be a language vector in Σ. Then

    G(B_Σ(L)) = ⋂_{i∈I} G(P_i^{-1}L_i)
Proof. We will prove the proposition for a two-component process space. The general
case can be proved directly by induction on the number of components in the process
space. In general, G(P_1^{-1}L_1 ∩ P_2^{-1}L_2) ⊆ G(P_1^{-1}L_1) ∩ G(P_2^{-1}L_2), so we only need
to show the other direction of the inclusion. Assume s ∈ G(P_1^{-1}L_1) ∩ G(P_2^{-1}L_2).
Then there must exist u_i ∈ P_i^{-1}L_i for i ∈ {1, 2} such that s ∈ G(u_1) ∩ G(u_2).
We can write s = S_1 . . . S_n where for all r ∈ [1 . . . n], the nonempty string S_r is
either in (Σ_1 − Σ_2)^+ or in (Σ_2 − Σ_1)^+, or is equal to some σ ∈ Σ_s. Then we can
write u_1 = X_1X_2 . . . X_n where X_r ∈ (Σ − Σ_s)^+ if S_r ∈ (Σ − Σ_s)^+, and X_r = σ
if S_r = σ ∈ Σ_s. The sequence {X_1, X_2, . . . , X_n} as defined above is not unique,
but at least one such sequence must exist in order for s to be in G(u_1). Similarly,
we can write u_2 = Y_1Y_2 . . . Y_n with Y_r defined exactly as X_r. Let Z_r = σ if S_r =
σ ∈ Σ_s; otherwise let Z_r = (P_1X_r)(P_2Y_r). Clearly this ensures that S_r ∈ G(Z_r) for all
r ∈ [1 . . . n]. Let z = Z_1Z_2 . . . Z_n; hence s ∈ G(z). Moreover, P_1z = P_1u_1 and
P_2z = P_2u_2. Therefore z ∈ P_1^{-1}P_1u_1 ∩ P_2^{-1}P_2u_2 and hence z ∈ P_1^{-1}L_1 ∩ P_2^{-1}L_2. This
implies that s ∈ G(P_1^{-1}L_1 ∩ P_2^{-1}L_2).
The condition in the above proposition restricts only the images of shared events.
It requires every shared event in the system to be mapped to itself. Clearly,
the above result can be applied directly to any asynchronous system under any Σ-abstraction map with no restriction. Similar to the previous result, the efficiency of
computing G(B_Σ(L)) as given in the above proposition depends on the system and
the abstraction map.
As indicated by the last two propositions, the efficiency of computing the ab-
straction of a given system depends on the coupling between the system components
as expressed by their shared behaviour. In general, a more efficient computation of
the abstraction can be expected for loosely coupled systems compared with that for
tightly coupled ones. This observation regarding the relation between component coupling and computational efficiency will become more evident throughout
this thesis.
Implementation remark
Typically, model abstraction is conducted manually within the modelling and analysis
stages. The information content of common abstraction schemes can provide some
guidance for the abstraction process. Consider, for instance, the set of events Σ. First
we need to determine whether a certain subset Σ_r of Σ is irrelevant with respect to a given
analytical situation, a specification for example. That is, roughly speaking, events in
Σ_r appear equivalent, and so separating these events would not add any information
to solve the given problem3. In such cases, depending on the amount of information
we need about this set of events with respect to the given analytical situation, we can
map this set of events to anywhere from Σ_r^* to Σ_r. The former mapping
corresponds to the minimum amount of information, namely, the possible occurrence
of this set of events within certain contexts in the system dynamics4. Another level
of abstraction corresponds to the mapping to Σ_r^+. The information in this map is
also contextual; however, in this case, more information about the context of this set
of events in the system dynamics is provided. The finest level of abstraction maps
every event in Σ_r to the set Σ_r. Under this abstraction, events in Σ_r have exactly the
same contexts in the abstracted system dynamics. Although there are many other
possibilities for abstraction mappings, the above three cases are usually sufficient for
typical (catenative) abstraction situations.
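Since all three levels are catenative, each can be realized as a per-event substitution into a regular expression. The sketch below is a hypothetical illustration of that idea, not an implementation from the thesis:

```python
import re

def abstract(s, Sr, level):
    """Per-event substitution realizing the three abstraction levels of the
    remark: every event in Sr maps to Sr* / Sr+ / Sr; other events are fixed.
    Returns a regular expression recognizing the image language of s."""
    cls = "[" + "".join(sorted(Sr)) + "]"
    repl = {"star": cls + "*", "plus": cls + "+", "set": cls}[level]
    return "".join(repl if c in Sr else re.escape(c) for c in s)

Sr = {"x", "y"}
plus = abstract("axbyc", Sr, "plus")       # 'a[xy]+b[xy]+c'
star = abstract("axbyc", Sr, "star")       # 'a[xy]*b[xy]*c'
print(bool(re.fullmatch(plus, "ayxbxc")))  # True: occurrences kept, identity lost
print(bool(re.fullmatch(plus, "abc")))     # False: Sr+ still requires an occurrence
print(bool(re.fullmatch(star, "abc")))     # True: Sr* forgets even the occurrence
```

The three levels trade information for coarseness exactly as described: `star` records only possible contexts, `plus` additionally records that the events occur, and `set` preserves occurrence counts while erasing identities within Σ_r.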
4.1.3 Abstraction of interacting discrete event systems
In interacting discrete event systems the interaction specification is in itself an ab-
straction of the system behaviour. Therefore, it is only required here to identify a
suitable mapping between the system and its interaction specification. Similar to the
case of language-based abstractions, we will only consider the case when such map-
ping is monotonic. The following result shows that such mapping can be established
3 For instance, the distinction between the events a_1 and a_2 in the language L = (a_1b_1, a_2b_2) is irrelevant with respect to the specification S = ({a_1, a_2}{b_1, b_2}).
4 Note that this mapping is different from the map P_r^{-1}P_r, where P_r : Σ^* → Σ_r^* is a natural projection map, which does not provide any information about the set Σ_r.
under certain conditions.
Proposition 4.9. Let (L, K) be an IDES in the process space Σ, where L ≤ P_Σ(K),
and let F be a Σ-abstraction map satisfying K = F(K) and K ⊆ F(B_Σ(L)). Then

    K = F(B_Σ(L, K))
Proof. In general, as F is an extension map, we get K ⊆ F(K). The set of languages
X satisfying F(X) = K is closed under union and therefore has a supremal element.
This element must be K itself; otherwise we would have K′ ⊃ K with F(K′) = K ⊂ K′,
contradicting that F is an extension map. Now K ⊆ F(B_Σ(L))
implies that there exists X ⊆ B_Σ(L) such that K = F(X); this is based on the
assumption that F is a Σ-abstraction map and that L ≤ P_Σ(K). By the supremality of K above, X ⊆ K.
Then X ⊆ B_Σ(L) ∩ K, and as F is monotonic we get F(X) ⊆ F(B_Σ(L) ∩ K). Hence,
K ⊆ F(B_Σ(L) ∩ K). Also, in general, F(B_Σ(L) ∩ K) ⊆ F(B_Σ(L)) ∩ F(K). Given
that K = F(K) and K ⊆ F(B_Σ(L)), we get F(B_Σ(L) ∩ K) ⊆ K. Since B_Σ(L, K) = B_Σ(L) ∩ K, it must
be that K = F(B_Σ(L, K)).
Therefore, in order to establish a mapping between the system and its interaction
specification, we first need to find a Σ-abstraction map F such that K = F(K), and
then test whether this map satisfies K ⊆ F(BΣ(L)).
In general, finding a suitable map for a given K is an exercise in creativity. How-
ever, in certain situations the map F can be identified by examining each occurrence
of an event σ ∈ Σ in the transition structure of K and identifying its invariance
domain. In other words, for the event σ we try to obtain a maximal language K′
that always occurs in the same context within K, namely,

(∀u, v ∈ Σ*) uσv ∈ K ⇐⇒ uK′v ⊆ K

The language K′ can then be a candidate image of σ under the map F that we
are looking for. This search continues until the images of all events σ ∈ Σ can be
specified. When K is simple enough, the above procedure can lead to a suitable
90
(coarse) Σ-abstraction map F such that K = F(K). Consider for instance the layout
of the serial composition K = U∗V ∗ in the asynchronous process space Σ = {Σ1, Σ2}where U = Σ1 and V = Σ2. It is easy to see that every event in Σi is invariant with
respect to the language Σ∗i where i = [1, 2]. Therefore, the map F defined as follows
(∀i ∈ {1, 2})(∀σ ∈ Σi) F(σ) = Σi*
satisfies K = F(K). Note that if F satisfies F(K) = K, then so does any function G
such that id ≤ G ≤ F. Next we check whether K ⊆ F(BΣ(L)) for the given IDES. To
check this efficiently, it is necessary to be able to compute F(BΣ(L)) efficiently. This
is the case for the above map, as it satisfies the conditions of Proposition 4.8. Note
also that if F(BΣ(L)) can be computed efficiently using Proposition 4.8, then so can
any G(BΣ(L)) where id ≤ G ≤ F. Therefore, once a Σ-abstraction mapping between
the system and its interaction specification is defined, refinements of this mapping
are readily available.
For instance, the above functional description of the serial composition layout can be
refined to G given as
G(σ) = Σ1 if σ ∈ Σ1, and G(σ) = Σ2⁺ if σ ∈ Σ2
Clearly, G is also an extension map, and we get K = G(K) automatically. For
any given L ≼ PΣ(K), the abstraction G(BΣ(L)) can be computed efficiently using
Proposition 4.8.
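To make the invariance condition K = G(K) concrete, the following sketch (Python; the truncation of all languages to a fixed length N is an assumption of this illustration, not part of the theory) applies the extension map induced by G to every string of the serial-composition layout K = a*b* and checks that the image coincides with K.

```python
N = 4
# K = a*b* truncated to length N: the serial-composition layout U*V* with U = {a}, V = {b}
K = {"a" * i + "b" * j for i in range(N + 1) for j in range(N + 1) if i + j <= N}

def apply_abstraction(F, s, n):
    """Extend the event map F to a string: F(s1...sk) = F(s1)...F(sk), truncated to length <= n."""
    langs = {""}
    for ev in s:
        langs = {u + v for u in langs for v in F[ev] if len(u + v) <= n}
    return langs

# The refinement G of the text: G(a) = {a} (= Σ1), G(b) = b+ (= Σ2+, bounded here)
G = {"a": {"a"}, "b": {"b" * k for k in range(1, N + 1)}}
GK = set().union(*(apply_abstraction(G, s, N) for s in K))
print(GK == K)   # True: K is invariant under G, up to the length bound
```

The same check with the coarser map F (a ↦ a*, b ↦ b*) also succeeds, illustrating that any G with id ≤ G ≤ F inherits the invariance.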
Structured IDES
The case when the interaction language is associated with the system behaviour
through a monotonic map is of special importance in behavioural analysis of IDES
and therefore will be given a special notation. We will distinguish those IDES models
where such a map is given as structured interacting discrete event systems. Formally,
a structured interacting discrete event system (SIDES) is a tuple L = (L, K, F) such
that
K = F(BΣ(L, K))
Calligraphic letters will also be used to denote SIDES structures. Similar to the case
with IDES, the ith component of the SIDES will be denoted Li, and the language
generated by an SIDES L is given by BΣ(L). Note that having K associated with
L = BΣ(L, K) via a monotonic map completes the picture of L being composed of
the images of L under a set of monotonic maps (the maps Pi⁻¹ ◦ Pi are obviously
monotonic). That is,

L = ⋂i∈I Pi⁻¹Pi(L) ∩ F(L)
This feature is of special importance in solving the verification and the supervisory
control problems as will be shown later in this thesis. Note that the supremal com-
pensation map, CΣ, could not fit in the SIDES structure because this map is not
monotonic in general as was shown in Example 4.3.
Example 4.2. A system of three machines uses a synchronization mechanism for
flexible arrangement of the machines. This is implemented using the shared event x
which can be used also to observe the termination of each cycle. The system - shown
below - is arranged such that machines 1 and 2 are working in parallel and have to
complete their cycles before the third machine can start.
[Figure: state transition diagrams of the three machines, with local events ai, bi, ci, di (i = 1, 2, 3), the shared event x, and the interaction layout over the abstract event sets U, V, W, X.]
It is easy to see that the interaction language of the system is invariant with respect
to the map F defined as follows:

F(U) = U, F(V) = V, F(W) = W*, and F(X) = X

where U, V, W, X are as given in the convention of the STD above. It is clear also that
F is an extension map. Using Proposition 4.8 to compute F(BΣ(L)) reduces the
maximum number of states involved in this computation from 27 to 9. It is then
easy to verify that K ⊆ F(BΣ(L)), confirming that K = F(BΣ(L, K)). □
Many standard interaction schemes can be formulated such that the underlying
system can be represented as an SIDES. As shown in the above example, simple
non-standard interactions can also be captured in this form. However, in general there
is no mechanical way to establish a monotonic correspondence between the system
behaviour and its interaction specification. Such a correspondence has to be “guessed”
and then checked for correctness.
4.2 Compact interacting discrete event systems
In this section, the compactness property is investigated for the class of IDES struc-
tures and the associated composition operation. Similar to the case with language
vectors, the aim here is to characterize a subclass of interacting DES where the com-
position operation preserves the containment order.
Consider the process space Σ. Let L = (L, K) and S = (S, R) be two IDES in
I(Σ). In general we have,
L ≼ S =⇒ BΣ(L) ⊆ BΣ(S)
This is valid for any two IDES. The problem with behaviour comparison in the IDES
setting is that the other direction cannot be confirmed in general, even when both L
and S are compact as shown in the following example.
Example 4.3. Let Σ = {{a, x}, {b, x}} be the process space. Let L = {(ε), (ε, a)}
and S = {(ε, b), (ε, a)} be two language vectors. Let L = (L, K) and S = (S, R)
be two IDES, where K = Σ* and R = Σ* − (b, ba). Here we have BΣ(L) = a and
BΣ(S) = ab. Therefore, BΣ(L) ⊆ BΣ(S). However, L ⋠ S. □
The other direction would hold if the IDES L were associated with the language
BΣ(L) via a monotonic function, namely the projection operation. One approach
to establishing this direction is to extend the compactness property to the IDES class.
An IDES L = (L, K) in I(Σ) is said to be compact if

PΣ(BΣ(L)) = L
That is, L is compact if its language vector can be retrieved from its composite
behaviour using the decomposition operation P Σ. The class of compact IDES in Σ
will be denoted Ic(Σ).
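For finite languages, both operations in the compactness condition can be enumerated directly. The sketch below (Python; the process space, the component strings, and the length bound on the enumeration are illustrative assumptions) computes the composite behaviour BΣ(L) as the set of strings whose natural projections fall in the components, decomposes it back, and confirms that the vector is recovered.

```python
from itertools import product

def proj(s, alpha):                 # natural projection onto the component alphabet
    return "".join(c for c in s if c in alpha)

def behaviour(L, alphas, n):        # B_Sigma(L), enumerated up to length n
    sigma = sorted(set().union(*alphas))
    cands = ("".join(p) for k in range(n + 1) for p in product(sigma, repeat=k))
    return {s for s in cands if all(proj(s, a) in Li for Li, a in zip(L, alphas))}

def decompose(lang, alphas):        # P_Sigma: componentwise projection of a language
    return [{proj(s, a) for s in lang} for a in alphas]

alphas = [{"a", "x"}, {"b", "x"}]   # the process space {{a, x}, {b, x}}
L = [{"ax"}, {"bx"}]                # one string per component, synchronising on x
B = behaviour(L, alphas, 4)
print(B)                            # the two interleavings abx and bax
print(decompose(B, alphas) == L)    # True: the vector is recovered, so L is compact
```

Dropping a component string from the recovered projections would signal non-compactness: some local string contributes to no composite string.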
Proposition 4.10.
L is compact ⇐⇒ L ∈ PΣ(BΣ(L))
Proof. (⇒) Assume that L = (L, K) is compact. Then L = PΣ(BΣ(L, K)). There-
fore, each IDES in the set PΣ(BΣ(L, K)) has the form (L, R), where R ∈
CΣ(BΣ(L, K)). Clearly, we have BΣ(L, K) ⊆ K. Also we have

CΣ(BΣ(L, K)) = BΣ(L, K) ∪ (BΣ(PΣ(BΣ(L, K))))ᶜ
             = (BΣ(L) ∩ K) ∪ (BΣ(L))ᶜ
             = (BΣ(L) ∪ (BΣ(L))ᶜ) ∩ (K ∪ (BΣ(L))ᶜ)
             = K ∪ (BΣ(L))ᶜ

Therefore, we have K ⊆ CΣ(BΣ(L, K)). Hence, K ∈ CΣ(BΣ(L, K)), and therefore
(L, K) ∈ PΣ(BΣ(L, K)).

(⇐) This is clear from the definition of the set PΣ(BΣ(L, K)). □
The above result is another characterization for the compactness property stating
that an IDES is compact if and only if it can be retrieved from its behaviour through
the generalized decomposition operation. Note the similarity between the definition
of compact language vectors and compact IDES as given in the last proposition. The
effect of the compactness property in the composition and decomposition process in
process space is demonstrated in the following diagram. In this diagram the map
id1 : I(Σ) → L(Σ) assigns each IDES (L, K) to its language vector L.
[Commutative diagram relating L(Σ), Lc(Σ), I(Σ), and Ic(Σ) through the maps PΣ, BΣ, id, and id1.]

Figure 4.2: The compactness property
It is important to note that, similar to the case with language vectors, the def-
inition of a compact IDES requires the system components to be associated with
its behaviour via monotonic maps, namely, the class of projection operations. This
requirement resolves part of the problem discussed earlier concerning the order pre-
serving property of the generalized composition operation.
Proposition 4.11.
(L, K) is compact =⇒ L is compact
Proof. In general we have PΣ(BΣ(L, K)) ≼ PΣ(BΣ(L, Σ*)). Then, if (L, K) is com-
pact, we can deduce that L ≼ PΣ(BΣ(L)). Since the other direction of the
(componentwise) inclusion holds in general, it must be that L = PΣ(BΣ(L)).
Therefore, L is compact. □
Therefore, the compactness of the language vector is necessary for the compactness
of the underlying IDES. However, it is not sufficient in general. This should be easy
to see as, in general, the interaction language can be chosen arbitrarily. For instance,
for any compact language vector L with BΣ(L) ≠ ∅, setting K = (BΣ(L))ᶜ results
in BΣ(L, K) = ∅. Clearly, the IDES (L, K) in this case is not compact.
Proposition 4.12. Let Σ be a process space and let L be a language in L(Σ). Then
(∀L ∈ PΣ(L)) L is compact
Proof. Every IDES in PΣ(L) has the form (PΣ(L), K), where K ∈ CΣ(L). There-
fore, by definition we have L = BΣ(PΣ(L), K), and this implies that PΣ(L) =
PΣ(BΣ(PΣ(L), K)). Hence (PΣ(L), K) is compact. □
Again, similar to the case with language vectors, the above result shows that an
IDES obtained from the generalized decomposition of a given language is always
compact. Based on this result, it is always possible to convert a given IDES L into a
compact one that generates the same behaviour. For instance, one such IDES would
be given by PΣ(BΣ(L)), which can be directly computed from BΣ(L). However, such
a construction requires the computation of the composite behaviour of L and is
therefore not efficient.
4.3 Complete interaction specifications
In this section, we characterize the class of complete interaction specifications. A
combination of a complete interaction specification and any compact language vector
in its domain produces a compact IDES structure. The completeness property can
ensure the compactness of a given IDES without the need to compute the composite
structure.
For a process space Σ, an interaction language K ⊆ Σ* is said to be complete
if the restriction of the composition of any composable string vector in its domain,
PΣ(K), to K is not empty. Formally, K is complete if

(∀s ∈ PΣ(K)) BΣ(s, K) = ∅ =⇒ BΣ(s) = ∅
That is, an interaction specification is complete if it contains at least one string in
the composition of any composable string vector in its domain.
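For small finite layouts, this definition can be checked by brute force. The sketch below (Python; the enumeration bound and the two sample layouts are illustrative assumptions) iterates over all string vectors drawn componentwise from the domain PΣ(K) and flags any vector that is composable yet has no composition inside K.

```python
from itertools import product

def proj(s, alpha):
    return "".join(c for c in s if c in alpha)

def vector_behaviour(svec, alphas, n):   # B_Sigma(s): compositions of one string per component
    sigma = sorted(set().union(*alphas))
    cands = ("".join(p) for k in range(n + 1) for p in product(sigma, repeat=k))
    return {s for s in cands if all(proj(s, a) == si for si, a in zip(svec, alphas))}

def is_complete(K, alphas, n):
    """K is complete if every composable string vector in its domain P_Sigma(K)
    has at least one composition inside K."""
    doms = [{proj(k, a) for k in K} for a in alphas]
    for svec in product(*doms):
        B = vector_behaviour(svec, alphas, n)
        if B and not (B & K):            # composable, but the restriction to K is empty
            return False
    return True

alphas = [{"a", "x"}, {"b", "x"}]
print(is_complete({"xax", "xbx"}, alphas, 6))   # False: ("xx", "xx") has no composition in K
print(is_complete({"axb", "aaxb"}, alphas, 6))  # True
```

The two sample layouts anticipate the discussion that follows: the first fails on a composable vector of pure synchronization strings, the second survives every vector in its domain.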
Example 4.4. Let Σ = {{a, x}, {b, x}} be a process space. In this process space,
the interaction language K = (xax, xbx) is not complete, as the language vector
L = {(xx), (xx)} is within the domain of K and BΣ(L) = xx ≠ ∅ but BΣ(L, K) = ∅.
On the other hand, the layout K = {axb, aaxb} is complete. Also, in the asyn-
chronous process space Σ′ = {{a}, {b}}, the interaction language K = (ab, aaabb) is
incomplete, as the composable string vector s = {(a), (bb)} is within the domain of
K; however, we have BΣ(s, K) = ∅. □
In an asynchronous process space (no shared events) every string vector is compos-
able. In this case, the condition for completeness reduces to

(∀s ∈ PΣ(K)) PΣ(BΣ(s, K)) = s

For a general process space, the decidability of verifying the completeness of a given
language is an open problem. However, simple interaction specifications can usually
be checked for completeness by inspection with little effort. It is easy to see that the
languages Σ*, ε, and ∅ are all complete by definition. Also, it is straightforward to check
that all forms of parallel composition discussed in the previous chapter are complete.
It is easy also to check that the interaction language for interleaving is not complete.
For instance, given the process space Σ = {{a}, {b}}, the string vector t = {a, bbb}
is in PΣ(K); however, BΣ(t, K) = ∅. The following two propositions can be used to
identify and construct complete interaction languages.
Proposition 4.13. Let G be a Σ-abstraction map and s ∈ Σ∗ be a string. Then the
language G(s) is complete.
Proof. We proceed by induction on the string length. The initial step, with |s| = 0,
holds as by definition G(ε) = ε. Assume that for a string s ∈ Σ*, G(s) is complete. We
want to show that G(sσ) is complete. First, we have G(sσ) = G(s)G(σ) with G(σ) ⊆
[σ]Σ*. Let t ∈ PΣ(G(sσ)). Then for all i ∈ I we have ti ∈ PiG(sσ) = G(Pi(sσ)). Let
J be the set of components in I that contain σ. Then for all i ∈ (I − J) we have
ti ∈ G(Pi(s)) = PiG(s), and for all j ∈ J we have tj ∈ G(Pj(sσ)) = PjG(s)PjG(σ).
Then we can write tj = t′jrj, where t′j ∈ PjG(s) and rj ∈ PjG(σ) = G(σ), for all
j ∈ J. However, for the string vector t to be composable it must be that rj = rk for
all j, k ∈ J. Then, under the assumption that t is composable, we can write tj = t′jr,
where t′j ∈ PjG(s) and r ∈ PjG(σ), for all j ∈ J. Then,

BΣ(t) = ⋂i∈(I−J) Pi⁻¹ti ∩ ⋂j∈J Pj⁻¹t′jr ⊇ (⋂i∈(I−J) Pi⁻¹ti ∩ ⋂j∈J Pj⁻¹t′j) r

Now, the string vector s′ with s′i = ti for all i ∈ (I − J) and s′j = t′j for j ∈ J is in
PΣ(G(s)), that is, in the domain of G(s), and the above gives (BΣ(s′))r ⊆ BΣ(t).
Hence, by the completeness of G(s) we have BΣ(s′) ∩ G(s) ≠ ∅. Also we have
r ∈ G(σ); therefore (BΣ(s′))r ∩ G(sσ) ≠ ∅. Given that (BΣ(s′))r ⊆ BΣ(t), then
BΣ(t) ∩ G(sσ) ≠ ∅. □
Based on the above result, it can be shown that all forms of serial composition are
complete. For instance, the synchronized serial composition represented by the layout
K = U*X*V* can be obtained by a Σ-abstraction of any string uxv, where u ∈ U,
x ∈ X, and v ∈ V, and is therefore complete.
In general, the set of complete layouts is not closed under intersection or union.
For example, in the process space Σ = {{a}, {b}} the interaction languages K1 =
(ab, abb, bba, aabb) and K2 = (ba, abb, aab, bbaa) are both complete, but their intersec-
tion, K = (abb, bba, bbaa), is not complete, as the string vector s = {(a), (b)} is in
the domain PΣ(K) but BΣ(s, K) = ∅. Also, the interaction languages K1 = (ab)
and K2 = (aabb) are both complete; however, their union K = (ab, aabb) is not
complete, as s = {(a), (bb)} is in PΣ(K), whereas BΣ(s, K) = ∅.
Proposition 4.14. Let K1 and K2 be languages over Σ. Then
1. If K1 is complete and P Σ(K1) = P Σ(K2), then K1 ∪K2 is complete
2. If K1 and K2 are complete, then
((∀i, j ∈ I) i ≠ j ⇒ PiK1 ∩ PjK2 = ∅) =⇒ K1 ∪ K2 is complete
Proof. 1. Assume that K1 is complete. Let t be a string vector in PΣ(K1 ∪ K2). Now
PΣ(K1) = PΣ(K2) implies PΣ(K1 ∪ K2) = PΣ(K1) = PΣ(K2). Therefore, t is in
PΣ(K1). Hence, BΣ(t) ∩ K1 ≠ ∅. Therefore, BΣ(t) ∩ (K1 ∪ K2) ≠ ∅, and hence
K1 ∪ K2 is complete.
2. Assume that K1 and K2 are complete. Let t be a string vector in PΣ(K1 ∪ K2), which
is equal to PΣ(K1) ∪ PΣ(K2). Assume that PiK1 ∩ PjK2 = ∅ for any i ≠ j. Then any
string vector s with mixed components from PΣ(K1) and PΣ(K2) is incomposable.
Therefore, t is composable if and only if t is either in PΣ(K1) or PΣ(K2). In the
first case BΣ(t) ∩ K1 ≠ ∅; the second case is symmetric. Therefore, BΣ(t) ∩ (K1 ∪ K2) ≠ ∅,
and hence K1 ∪ K2 is complete. □
The first part of this proposition can be used to show that parallel compositions
are complete. The second part can be used to show that the interaction languages
for refinements are complete. For instance, consider the handshaking refinement,
which can be represented by the layout (U*XV*XU*)*. This language can be written
as the union of the set of languages (U*XV*XU*)ⁿ, where n ∈ N, the set of all
natural numbers. Each of these languages is complete, as it is the outcome of a
Σ-abstraction map of the string (uxvxu)ⁿ, where u ∈ U and v ∈ V. Also, n ≠ m
implies that Ps(U*XV*XU*)ⁿ ≠ Ps(U*XV*XU*)ᵐ for every n, m ∈ N. Therefore
K = (U*XV*XU*)* is complete.
The following theorem establishes the link between the completeness of interaction
languages and the compactness of the underlying IDES structures.
Theorem 4.2. Let K be an interaction specification in the process space Σ. Then
K is complete if and only if

(∀L ≼ PΣ(K)) L is compact =⇒ (L, K) is compact
Proof. (only if) Assume that K is complete and the language vector L ≼ PΣ(K) is
compact. Then we have L = PΣ(BΣ(L)). This implies that PΣ(BΣ(L, K)) ≼ L.
Now assume there exists a composable string vector t ∈ L such that t is not in
PΣ(BΣ(L, K)). Then it must be that BΣ(t, K) = ∅, which contradicts the assumption
that K is complete. Therefore, it must be that PΣ(BΣ(L, K)) = L, and hence (L, K)
is compact.

(if) Assume that t is a composable string vector in PΣ(K). Clearly, t is compact,
that is, t = PΣ(BΣ(t)). Therefore, as (t, K) is compact, we get t = PΣ(BΣ(t, K));
hence BΣ(t, K) ≠ ∅. Hence, K is complete. □
The above theorem shows that the completeness of the interaction language pre-
serves the compactness property in the transition from the language vector domain
to the IDES domain. Therefore, a compact IDES structure can be constructed by
combining a compact language vector and a complete interaction specification. It
is important to note here that a compact IDES does not necessarily have a com-
plete interaction specification. This should be clear since, for any language K, the IDES
structure (PΣ(K), K) is compact irrespective of the completeness of K.
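The construction suggested by Theorem 4.2 can be exercised numerically on a small instance. The sketch below (Python; the layout, the component strings, and the enumeration bound are illustrative assumptions) pairs a complete layout with a compact vector inside its domain and confirms that the resulting IDES is compact: projecting BΣ(L, K) = BΣ(L) ∩ K recovers the original vector.

```python
from itertools import product

def proj(s, alpha):
    return "".join(c for c in s if c in alpha)

def behaviour(L, alphas, n):        # B_Sigma(L), enumerated up to length n
    sigma = sorted(set().union(*alphas))
    cands = ("".join(p) for k in range(n + 1) for p in product(sigma, repeat=k))
    return {s for s in cands if all(proj(s, a) in Li for Li, a in zip(L, alphas))}

alphas = [{"a", "x"}, {"b", "x"}]
K = {"axb", "aaxb"}                 # a complete layout
L = [{"ax", "aax"}, {"xb"}]         # a compact language vector inside the domain P_Sigma(K)
B = behaviour(L, alphas, 5)
BK = B & K                          # B_Sigma(L, K) = B_Sigma(L) ∩ K
print([{proj(s, a) for s in BK} for a in alphas] == L)   # True: (L, K) is compact
```

Swapping K for an incomplete layout such as (xax, xbx) would make some component strings vanish from the projections, breaking compactness exactly as the theorem predicts.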
Chapter 5
Interacting Discrete Event
Systems: Verification
In the verification stage the system is tested to ensure that it works correctly as
expected. Problems in the system behaviour may originate from the “internal” in-
teractions and communications between the system components, as defined by the
synchronization mechanism between discrete event systems. Correctness of the sys-
tem can also be measured with respect to a set of “external” specifications that define
how the system should work. Therefore, the verification can be labelled as either in-
ternal or external depending on the correctness criterion.
The direct approach to verification is to check the system with respect to the
correctness specification through exhaustive search of the system’s state space. This
is obviously inefficient in dealing with practical multiprocess systems. In this chapter,
we present a set of indirect procedures to verify both the internal and external
correctness of multiprocess and interacting discrete event systems. These procedures
avoid the exploration of the whole state space of the system by focusing on relevant
regions in the system state space. The proposed procedures can be particularly
efficient in dealing with loosely coupled systems, that is, those with relatively limited
synchronous interaction.
Detecting blocking is a typical internal verification problem. Blocking corresponds
to the situation when the system reaches a region in the state space where it cannot
exit from. Depending on the possibility of further moves, blocked states can be
described as either deadlock or livelock. In this chapter, an indirect approach is
proposed to detect and identify blocking states of both forms for multiprocess systems.
In this approach, referred to as the detect-first approach, potential deadlock states
are first identified and then tested for reachability.
Detecting blocking in interacting discrete event systems is then investigated. As-
suming a set of nonconflicting components, we can focus only on those blocking
situations caused by the interaction specification. Two indirect blocking detection
procedures are proposed for special cases when the interaction specification is either
too restrictive or too permissive. The case when the specification is too permissive is
treated by detecting potential blocking situations first, similar to the approach used
for detecting blocking in multiprocess systems.
The problem of verifying the system behaviour with respect to a given specifi-
cation is then considered. One approach to solving this problem efficiently (under
certain assumptions) is to decompose the specification into an equivalent IDES and
perform the verification modularly. However, in general, the IDES model does not
carry enough information to support the decomposition approach. Nevertheless,
modular verification is possible when the specification interaction language is a
superset of the system interaction language. Also, it is shown that the existence of a
monotonic correspondence between the system behaviour and its interaction specifi-
cation can help to obtain a modular solution.
An iterative approach to the verification problem is then discussed. In this
approach, the interaction specification is iteratively refined until it has enough in-
formation to solve the verification problem modularly. The abstraction techniques
discussed in the previous chapter are used here to refine the interaction specification.
The correspondence between the abstracted representation and the system behaviour
is essential to the implementation of this approach.
5.1 Blocking in process space
Blocking is a major problem in multiprocess discrete event systems. It originates
from the nature of the synchronous composition operation. Possibilities of blocking
depend strongly on the arrangement of shared transitions in the system components.
The direct approach to detect blocking requires the exploration of the entire state
space of the system. An alternative approach is presented in this section. In this
approach only relevant parts of the system behaviour - related to its shared dynamics
- are explored. Consequently, the complexity of this approach depends directly on the
coupling between the system components and is therefore expected to be generally
more efficient than the direct search in loosely coupled systems.
A dynamic system is said to be blocked if there exists a reachable state from
which the system cannot reach any terminal state. These states are referred to as
blocked states. Formally, for discrete event systems, blocking can be expressed by
the condition L(A) ≠ \overline{Lm(A)}, where A is the automaton model of the system. In
multiprocess environments, the composite system may contain blocked states even if
its individual components are not blocked. Such blocking situations result from the
nature of the synchronous composition operation. In general, any language vector L
in a process space Σ satisfies

\overline{BΣ(L)} ⊆ BΣ(L̄)

where L̄ = {L̄i | i ∈ I} is the vector of prefix closures. However, the other direction of
the inclusion does not hold in general. This is due to the fact that the evolution
history of the system components may contain composable strings that do not
contribute to the evolution history of the overall system behaviour. This should be
clear from the fact that incomposable string vectors with empty behaviour always
contain at least one composable prefix, namely
εI, which has a nonempty behaviour. A language vector L is said to be unblocked or
live if BΣ(L̄) = \overline{BΣ(L)}. Clearly, L is live if the set {Pi⁻¹Li | i ∈ I} is nonconflicting,
that is, if it satisfies

⋂i∈I \overline{Pi⁻¹Li} = \overline{⋂i∈I Pi⁻¹Li}
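The liveness condition can be checked by direct enumeration for small finite vectors. The sketch below (Python; the length bound and the sample data, taken from the blocking example discussed shortly, are illustrative assumptions) compares the behaviour of the closed components with the prefix closure of the composite behaviour.

```python
from itertools import product

def proj(s, alpha):
    return "".join(c for c in s if c in alpha)

def behaviour(L, alphas, n):
    sigma = sorted(set().union(*alphas))
    cands = ("".join(p) for k in range(n + 1) for p in product(sigma, repeat=k))
    return {s for s in cands if all(proj(s, a) in Li for Li, a in zip(L, alphas))}

def prefixes(lang):
    return {s[:k] for s in lang for k in range(len(s) + 1)}

def is_live(L, alphas, n):
    """Live iff the behaviour of the closed components equals the closure of the behaviour."""
    Lbar = [prefixes(Li) for Li in L]
    return behaviour(Lbar, alphas, n) == prefixes(behaviour(L, alphas, n))

alphas = [{"a", "b", "x", "y"}, {"c", "d", "x", "y"}]
L = [{"ax", "by"}, {"cy", "dx"}]      # the vector of Example 5.1
print(is_live(L, alphas, 3))          # False: "ac" is generated but extends to no terminal string
```

By contrast, the vector ({ax}, {bx}) over {{a, x}, {b, x}} passes the same check: every generated string is a prefix of a terminal one.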
Blocking originates from the shared dynamics of the system. Therefore, similar to
the compactness property, a system with no shared events is always live. However,
although the liveness and compactness properties are both directly linked to the shared
behaviour of the system, they are not related, as demonstrated in the following
example.
Example 5.1. Consider the process space Σ = {(a, b, x, y), (c, d, x, y)}. The language
vector L = {(ax, by), (cy, dx)} is compact but not live. A blocking situation in this
language vector is shown below.
[Figure: automata for the components L1 and L2, showing the blocking configuration.]
The language vector L = {(a, x), (b, y)}, on the other hand, is live but not compact,
as x ∈ L1 but x ∉ P1(BΣ(L)). □
It is important to distinguish between two forms of blocking in discrete event
systems. The first corresponds to the case when an unmarked reachable state cannot
reach any other state (including itself), that is, when no event can be triggered at the
blocked state. Such a state is referred to as a deadlock state. Otherwise, if a blocked
state can reach some other unmarked state, it is referred to as a livelock state. Clearly,
the two forms are exhaustive; that is, a blocked state must be of either form but not
both.
5.1.1 Deadlock detection in multiprocess systems
Let L be a language vector in Σ. A string s ∈ BΣ(L̄) is called nonterminal if it is not
in \overline{BΣ(L)}. A language vector L is said to be deadlock-free if the set of eligible events
at every nonterminal string in BΣ(L̄) is not empty. Formally, L is deadlock-free if

(∀s ∈ BΣ(L̄) − \overline{BΣ(L)})(∃σ ∈ Σ) sσ ∈ BΣ(L̄)
Deadlock can be detected by exploring the entire state space of the synchronous product
of the system components and identifying the nonterminal states with no eligible
events. An alternative approach is to identify potential blocking states first and
then check their reachability. This approach will be referred to as the detect-first
approach. The key to this approach is given in the following result, which provides
a characterization of deadlock freeness in multiprocess discrete event systems. For
a given subset J of I, let Σ(J) denote the set of events exclusive to the set of J
components, namely Σ(J) = ⋂j∈J Σj − ⋃i∈I−J Σi.
Theorem 5.1. The language vector L is deadlock-free if and only if

(∀s ∈ BΣ(L̄) − \overline{BΣ(L)})(∃J ⊆ I) Σ(J) ∩ ⋂j∈J EligLj(Pjs) ≠ ∅
Proof. Let s ∈ BΣ(L̄) − \overline{BΣ(L)}. Clearly this implies that Pi(s) ∈ L̄i for all i ∈ I.
Now assume that there exists J ⊆ I such that Σ(J) ∩ ⋂j∈J EligLj(Pj(s)) ≠ ∅. This
could not hold for the case J = ∅, as Σ(∅) = ∅ by definition. Therefore, we must have
a nonempty J and an event σ ∈ Σ(J) such that Pj(sσ) ∈ L̄j for all j ∈ J. Therefore
we get Pj⁻¹(Pj(sσ)) ⊆ Pj⁻¹L̄j for all j ∈ J. However, as σ ∈ Σ(J), it must be that
Pi(sσ) = Pi(s), and hence Pi⁻¹(Pi(sσ)) ⊆ Pi⁻¹L̄i, for all i ∈ I − J. Therefore we obtain

sσ ∈ ⋂i∈I Pi⁻¹Pi(sσ) ⊆ ⋂i∈I Pi⁻¹L̄i = BΣ(L̄)

Then the system is not deadlocked at any string s satisfying the given condition. The
only if direction is straightforward. □
The above theorem shows that deadlock can be traced by examining the set of
eligible events at the local strings generated by the system components. Clearly, if a
deadlock occurs at a given string then it also occurs at the corresponding state in
the system's automaton, as the Nerode-equivalent strings of a deadlocked string must
be deadlocked as well. Based on the above theorem, a string s ∈ BΣ(L̄) − \overline{BΣ(L)}
identifies a deadlock state if and only if

(∀J ⊆ I) ⋂j∈J EligLj(Pjs) ∩ Σ(J) = ∅
Therefore, a language vector L cannot be deadlocked at a string s if there exists
an asynchronous (local) event enabled at any of the vector projection components
of this string. The reverse is also true, that is, if an asynchronous (local) event is
enabled in a given local state, then any global state containing this local state cannot
be deadlocked. Therefore, in detecting global deadlocked states, we only need to
consider those combinations of local states all of whose eligible events are shared.
These combinations (global states) are further refined by testing them with respect
to the above criterion to determine if they identify potential deadlock states in the
composite system. Each potential deadlock state is then traced backward to check
if it is reachable in the composite system. The following Proposition, which follows
directly from Lemma 3.1, shows that this search can also be performed by considering
only the shared behaviour of the system.
Proposition 5.1. Let L be a language vector in a process space Σ. Then
BΣ(L) = ∅ ⇐⇒ BΣ(Ps(L)) = ∅. □
Therefore, a given potential deadlock state is reachable if and only if the corre-
sponding state in the projected composite system is reachable. Testing reachability
through the projected system, Ps(BΣ(L)), can offer a computational advantage in
loosely coupled systems, given that Ps(BΣ(L)) = BΣ(Ps(L)), as implied by Proposition 4.7.
To further clarify the detect-first approach for deadlock detection, consider the
language vector L represented by a set of automata Ai = (Σi, Qi, δi, qoi, Qmi)}, i ∈ I
where Ai = A(Li). Every automaton Ai is assumed trim and therefore live. Let
A = ‖i∈IAi be the synchronous product automaton given by the tuple (Σ, Q, δ, qo, Qm)
where Σ = ∪i∈IΣi and Q ⊆ Q1 × Q2 × . . . × Qn be the set of all possible states in
A. Transitions in the automaton A are defined through the synchronous product
operation in the usual way. Write Q for the set Q1×Q2× . . .×Qn. Clearly, Lm(A) =
BΣ(L). However L(A) may not be equal to BΣ(L) due to possible blocked states in
the composite system. Define PDS(A) as the set of potential deadlock states in A as
outlined by the last theorem, namely
PDS(A) = {(q1, . . . , qn) ∈ Q | (∀J ⊆ I)⋂j∈J
EligAj({qj} ∩ (Qj −Qjm)) ∩ Σ(J) = ∅}
Therefore, the set PDS(A) can be computed by testing only unmarked local states¹
at which all eligible events are shared (no asynchronous events). The set of (actual)
deadlock states in A is then equal to PDS(A) ∩ Reach(A), where Reach(A) denotes
the set of all reachable states in A. In other words, a potential deadlock state is an
actual deadlock state if it is reachable from the initial state of the composite system,
namely qo = (qo1, . . . , qon). A direct backward reachability procedure can be used to
determine if a given state q is reachable in A.
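A minimal executable sketch of the detect-first procedure is given below (Python; the two-component toy system, the state encoding, and the forward reachability sweep are illustrative assumptions, not the thesis's construction). Potential deadlock states are computed from the local eligible events and the sets Σ(J), and then intersected with the reachable set.

```python
from itertools import product, combinations

# Hypothetical toy system: two components that synchronise on the shared events
# x and y but demand them in opposite orders -- a classic deadlock.
# Component format: (alphabet, transitions {(state, event): next_state}, initial, marked).
A1 = ({"x", "y"}, {(0, "x"): 1, (1, "y"): 2}, 0, {2})
A2 = ({"x", "y"}, {(0, "y"): 1, (1, "x"): 2}, 0, {2})
comps = [A1, A2]
I = range(len(comps))

def elig(A, q):                      # eligible events of a component at local state q
    return {e for (p, e) in A[1] if p == q}

def exclusive(J):                    # Σ(J): events shared by exactly the components in J
    inter = set.intersection(*(comps[j][0] for j in J))
    return inter - set().union(*[comps[i][0] for i in I if i not in J])

states = [sorted({p for (p, _) in A[1]} | set(A[1].values())) for A in comps]

# Potential deadlock states: non-terminal combinations where, for every J, no event
# exclusive to J is eligible in all components of J (the Theorem 5.1 condition).
pds = [q for q in product(*states)
       if not all(qi in A[3] for qi, A in zip(q, comps))
       and all(not (set.intersection(*(elig(comps[j], q[j]) for j in J)) & exclusive(J))
               for r in range(1, len(comps) + 1) for J in combinations(I, r))]

# Reachability sweep (the thesis traces backward from each candidate; a forward
# sweep is enough for this tiny example).
reach, frontier = set(), [tuple(A[2] for A in comps)]
while frontier:
    q = frontier.pop()
    if q in reach:
        continue
    reach.add(q)
    for e in set().union(*(A[0] for A in comps)):
        if all(e not in A[0] or (q[i], e) in A[1] for i, A in enumerate(comps)):
            frontier.append(tuple(A[1].get((q[i], e), q[i]) for i, A in enumerate(comps)))

print(sorted(set(pds) & reach))      # [(0, 0)]: the initial state itself is deadlocked
```

Note how the candidate (1, 1) also lands in PDS but is filtered out by the reachability test, mirroring the two-phase structure of the detect-first procedure.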
For loosely coupled systems, testing that a given global state q belongs to the set
Reach(A) can be conducted more efficiently by considering only the shared behaviour
of A, namely Ps(A). Based on Proposition 4.7, the projection map Ps is a composition-
invariant map, and therefore Ps(A) can be computed indirectly as Ps(A) = ‖i∈I Ps(Ai).
The projection operation Ps defines a correspondence between Q and the set of states
of Ps(A), which can be identified with a subset X of Pwr(Q). Based on Proposition 5.1,
q ∈ Reach(A) if and only if there exists x ∈ X where x ∈ Reach(Ps(A)) and q ∈ x.
¹This is based on the convention that EligB(∅) = ∅ for any automaton B.
The complexity of the detect-first procedure depends to a great extent on the
shared behaviour of the system. The initial part of the procedure detects the set
of potential deadlock states from those combinations of local states whose eligible
events are all shared. Therefore, the worst-case complexity of this part is of the order
of |Q1ˢ| × . . . × |Qnˢ|, where Qiˢ ⊆ Qi denotes the set of states in the ith component of
the system having only shared eligible events. However, the typical complexity of this
part is much smaller than the worst case, as many combinations can be excluded during
the test. For instance, if for some J ⊆ I we have ⋂j∈J EligAj(qj) ∩ Σ(J) ≠ ∅, then it
follows from Theorem 5.1 that any global state containing the set {qj | j ∈ J} is
deadlock-free. In fact, the efficiency of computing the set PDS(A) can vary substantially
depending on the order in which the sets of local states are tested with respect to the
deadlock condition given in Theorem 5.1.²
In the second part of the procedure, the reachability of potential deadlock states
is checked. The worst case complexity of this part is in the order of the state size
of the composite system, |Q|. However, this part would explore only those states
leading to potential deadlock states. The actual overhead in this part is the tracing of
unreachable potential deadlock states (as the identification of a deadlock state must
involve testing its reachability in one way or another). However, the rate of early
termination of such backward tracing can be high, particularly in loosely coupled
systems. Also, as discussed earlier, a significant reduction of the complexity of the
reachability test can be obtained by considering only the shared behaviour of the
system.
Example 5.2. The system shown in the figure below is a modified version of the Mil-
ner scheduler presented in [Mil80], which is often used as a benchmark for verification
algorithms because of its exponential state expansion with the number of cyclers. The
scheduler consists of a set of simple components called cyclers. The modified version
allows flexible cycling. Figure 5.1 shows the automaton representation of the ith cy-
cler, where i ∈ [0 . . . N ] for a scheduler consisting of N + 1 cyclers. In this system,
²The situation here is similar to the problem of generating an efficient ordered binary decision diagram, which depends to a large extent on the way the variables are ordered.
state 1i is the initial and marker state for the first cycler (i = 0) while state 0i is the
initial and marker state for the remaining cyclers (i ∈ [1 . . . N ]). The subscripts of
the local components are taken modulo N + 1, so that x_{N+1} = x_0. The cycling sequence
can be adjusted using an interaction specification layout and the overall system can
be described by an IDES L = (L, K), where L is the language vector representing
the set of cyclers and K is their interaction specification.
[Figure 5.1: The process scheduler. The automaton of the ith cycler C_i, with states
0_i through 6_i and events a_i, b_i, c_i, x_{i+1}, y_{i−1}, together with the interaction
specification K over the events x_i, y_i for the cyclers C_0, C_1, . . . , C_k.]
It is required to verify that the parallel composition of the above cyclers is deadlock-free.
Applying the detect-first procedure, potential deadlock states are selected from those
states with all shared eligible events, that is, the set
Q^s = {(q_{0r_0}, q_{1r_1}, . . . , q_{Nr_N}) | (∀j ∈ [0 . . . N ]) r_j ∈ {0, 3}}
The set Qs can be tested thoroughly with respect to the definition of PDS(A). How-
ever, working by eliminating those subsets of Qs that do not correspond to potential
deadlock states can be more efficient for this system. Clearly, any global state con-
taining any of the combinations (q_{i0}, q_{(i+1)3}) or (q_{i0}, q_{(i−1)3}) for i ∈ [0 . . . N ] cannot be
a deadlock state. Excluding these cases will leave only two potential deadlock states,
namely (q_{0r}, q_{1r}, . . . , q_{Nr}) where r ∈ {0, 3}. The next step is to check the reachability
of these two states. Backward tracing will reveal that both states are not reachable.
To reach this conclusion, the backward reachability test explores 2N + 1 global states
for the first potential deadlock state (r = 0) and 3 global states for the second one
(the composite system contains approximately 7^{N+1} states). Backward reachability
can also be performed using the projection of the shared behaviour of the system
components shown in the following diagram.
[Diagram: the projections P_s(C_0) and P_s(C_i) of the shared behaviour of the cyclers,
two-state automata (states 0_0, 1_0 and 0_i, 1_i) over the shared events x_1, y_N and
x_{i+1}, y_{i−1} respectively.]
Potential deadlock states in the composite system correspond to the set of states
(q′_{0(r−1)}, q′_{1r}, . . . , q′_{Nr}) where r ∈ {0, 1} in the composite of the component projections.
Checking the reachability of these states requires testing only two global states in
the projected system. The complexity of computing the projections is of the order of
7(N + 1) in this case. □
Upon termination, the above procedure will return the set of all deadlock global
states in the system. Based on this information, the system model can be modified
and then tested again until it is deadlock free. It is important to note that the
overhead of the detect-first procedure (the number of unreachable potential deadlock
states) can be very high in tightly coupled systems. However, for loosely coupled
systems the computational gain is expected to surpass this overhead.
5.1.2 Livelock detection in multiprocess systems
Livelock is usually caused by a flaw in the communication exchange between the
system components preventing the system from performing its tasks. In contrast
with deadlock, a livelock state has a nonempty set of eligible events and therefore
can reach other states (possibly itself). The difficulty in detecting livelock states
stems from their characterization, which is linked to the existence of deadlock states.
Particularly, a livelock state can be defined recursively as a state with a reachable
domain consisting entirely of a nonempty set of deadlock or livelock states. Detecting
livelock states may, therefore, depend on the existence of deadlock states, and hence
deadlock detection has to be performed first.
Livelock detection can be simplified by classifying blocked states according to the
kind of states they can reach. Blocked states that can reach a deadlock state are
blocked, at least in part, for that reason; such blocked states will be referred to as
semi-livelock. By definition, the existence of semi-livelock states is tied to the
existence of deadlock states. Therefore, assuming that the system is deadlock-free,
we only need to consider blocked states that can reach only livelock states. Those
states will be referred to
simply as livelock states. Given the assumption of finite state space, the recursive
characterization of livelock leads to the conclusion that any livelock state must exist
within a loop, or more precisely a clique, in the global system.
Let A = (Σ, Q, δ, q_o, Q_m) be a deadlock-free automaton. Based on the above
description, a state q ∈ Q is a livelock state iff

R_A(q) ≠ ∅ ∧ R_A(q) ∩ Q_m = ∅ ∧ (∀q′ ∈ R_A(q)) R_A(q′) ≠ ∅

The first two conditions also characterize semi-livelock states, while the last condition
distinguishes (strict) livelock states as those which can only lead to another livelock
state. Clearly, based on the above definition, the reachable domain of a livelock state
is a nonempty set of livelock states. The automaton A is said to be livelock-free if it
does not contain any livelock state.
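The three conditions above can be checked directly on a monolithic automaton; a minimal sketch, with an illustrative dict encoding (the names reach and is_livelock are assumptions):

```python
def reach(aut, q):
    """R_A(q): the states reachable from q by one or more transitions."""
    succ = {}
    for (s, e), t in aut["delta"].items():
        succ.setdefault(s, set()).add(t)
    out, frontier = set(), list(succ.get(q, set()))
    while frontier:
        s = frontier.pop()
        if s not in out:
            out.add(s)
            frontier.extend(succ.get(s, set()))
    return out

def is_livelock(aut, q):
    """The three conditions: q can move, no reachable state is marked,
    and every reachable state can itself move (no reachable deadlock)."""
    r = reach(aut, q)
    return bool(r) and not (r & aut["marked"]) and all(reach(aut, s) for s in r)
```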
Let X be a nonempty subset of Q in A. Let δ_X denote the restriction of δ to the
states in X, and let R_A^X : X → Pwr(X) be the reachability map that assigns to each
state x ∈ X those states in X that are reachable from x via transitions from δ_X.
The set X is called a clique in A if every state in X can reach any other state in X
via transitions from δ_X. Formally, a set of states X is a clique in A if

(∀x, x′ ∈ X) x′ ∈ R_A^X(x)
Clearly, the empty set is a (trivial) clique under the above definition. However, here
we restrict the notion of clique to nonempty subsets of Q. A state xo ∈ X is called
an input state for the clique X if xo is the initial state of A or
(∃q ∈ Q−X)(∃σ ∈ Σ) δ(q, σ) = xo
In this case the transition (q, σ, xo) is said to be an input transition for the clique X.
Similarly, a state x_m ∈ X is called an output state in X if it is a terminal state in
A or there exists a transition (x_m, σ, q) with q ∈ Q − X. In this case, the transition
(x_m, σ, q) is then an output transition for the clique X. For a clique set X we will
write Xo for its set of input states and Xm for its set of output states. Therefore, a
system may enter a clique from one of its input states, then it can stay in the clique
indefinitely or exit the clique from one of its output states. The tuple (X,Xo, Xm)
will be referred to as a clique structure in A. Clearly, every clique defines exactly one
clique structure, so both names may be used for the same thing. For example, the
automaton in the following diagram contains four cliques, namely {0, 1, 2}, {1, 2},
{2}, and {3, 4}.
A clique is said to be reachable if any (and therefore all) of its states is reachable,
and is said to be coreachable if any (and therefore all) of its states is coreachable. All
the cliques in the shown automaton are reachable, and all of them are coreachable
except {3, 4}. Note that if an automaton A is reachable then every clique structure
in A must contain at least one input state, and if A is coreachable then every clique
structure in A must contain at least one output state.

[Diagram: an automaton with states 0-4 and events a, b, c, d realizing the four
cliques listed above.]

A clique is said to be maximal if there is no other clique X′ with structure
(X′, X′_o, X′_m) in A such that X ⊆ X′, X_o ⊆ X′_o, and X_m ⊆ X′_m. A clique is
said to be terminal if each state reachable from the clique is in the clique, that is,
if R_A(X) = X. It is easy to see that every terminal clique is maximal but the
reverse is not generally true. For example, in the above automaton, cliques {0, 1, 2}
and {3, 4} are maximal, but only {3, 4} is terminal.
Proposition 5.2. Assume that the automaton A is deadlock-free. Then A is livelock-
free if and only if every reachable terminal clique in A contains a marked state.
Proof. Clearly, any reachable terminal clique that is not coreachable is associated
with at least one livelock state in A. Thus we only need to show the other direction,
that is, that for every livelock state x in A, the set R_A(x) contains a terminal clique
that is not coreachable. Let x be a livelock state in A. First we show, by contradiction,
that R_A(x) must contain a clique. We define a run in R_A(x) as a sequence of states
{x_o, x_1, . . . , x_n} where x_o = x, x_i ∈ R_A(x), and x_i = δ(x_{i−1}, σ_i) for some σ_i ∈ Σ for
all i ∈ [1 . . . n]. Now assume that R_A(x) does not contain a clique. Then every run in
R_A(x) consists of a set of distinct states. Given that A is finite, it follows that
every run in R_A(x) is finite, and therefore the set of runs contains a (nonempty) set of maximal elements with
respect to the prefix ordering. By definition, each maximal run ends with a state that
cannot reach any other state in A, contradicting the assumption that x is a livelock
state. Therefore R_A(x) must contain a clique. Clearly, if R_A(x) contains a clique then
it must contain the maximal clique containing that clique, as any clique that contains a clique in
R_A(x) must itself be in R_A(x). Therefore R_A(x) must contain a maximal clique. It remains
to show that R_A(x) contains a terminal clique. This follows directly from the fact
that every maximal clique in R_A(x) is either terminal or must reach another maximal
clique (otherwise, A would contain a deadlock state), and the fact that the set of
maximal cliques is disjoint and finite (for a finite state system). It follows that for
every livelock state x, the set R_A(x) must contain a terminal clique. Clearly such a
terminal clique does not contain a marker state. This completes the proof.
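As a monolithic reference point (the text itself develops a compositional procedure instead), Proposition 5.2 can be checked directly: in a deadlock-free automaton the reachable terminal cliques coincide with the bottom strongly connected components of the reachable part, so livelock-freeness reduces to checking that each of them contains a marked state. A sketch, with an assumed dict encoding:

```python
def livelock_free(aut):
    """Proposition 5.2 on a deadlock-free automaton: livelock-free iff every
    reachable terminal clique (bottom SCC) contains a marked state."""
    succ = {s: set() for s in aut["states"]}
    for (s, e), t in aut["delta"].items():
        succ[s].add(t)
    # restrict to the reachable part
    reach, stack = {aut["init"]}, [aut["init"]]
    while stack:
        for t in succ[stack.pop()]:
            if t not in reach:
                reach.add(t)
                stack.append(t)
    # Kosaraju, pass 1: record DFS finish order
    order, seen = [], set()
    for root in sorted(reach):
        if root in seen:
            continue
        seen.add(root)
        stack = [(root, iter(sorted(succ[root] & reach)))]
        while stack:
            s, it = stack[-1]
            advanced = False
            for t in it:
                if t not in seen:
                    seen.add(t)
                    stack.append((t, iter(sorted(succ[t] & reach))))
                    advanced = True
                    break
            if not advanced:
                order.append(s)
                stack.pop()
    # Kosaraju, pass 2: sweep the transposed graph in reverse finish order
    pred = {s: set() for s in reach}
    for s in reach:
        for t in succ[s] & reach:
            pred[t].add(s)
    comp, seen = {}, set()
    for root in reversed(order):
        if root in seen:
            continue
        seen.add(root)
        members, stack = [root], [root]
        while stack:
            for t in pred[stack.pop()]:
                if t not in seen:
                    seen.add(t)
                    members.append(t)
                    stack.append(t)
        scc = frozenset(members)
        for s in members:
            comp[s] = scc
    # a terminal (bottom) SCC has no transition leaving it
    for scc in set(comp.values()):
        terminal = all(comp[t] == scc for s in scc for t in succ[s] & reach)
        if terminal and not (scc & aut["marked"]):
            return False
    return True
```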
Therefore, livelock states can be detected by testing the set of terminal cliques
in the system.³ The above result can be used as a basis for developing a detect-first
procedure for livelock detection in multiprocess systems. First we need to identify
the conditions in the system components that characterize a terminal clique in the
composite system. To this end, let (X, X_o, X_m) be a clique structure in A. A clique
machine is a tuple M = (X, x_o, X_m) where x_o ∈ X_o. The transition function of this
machine is δ_X. A clique machine M = (X, x_o, X_m) is said to be maximal if there is no
other clique machine M′ = (X′, x′_o, X′_m) such that x′_o = x_o, X ⊆ X′, and X_m ⊆ X′_m,
that is, if it is not a submachine of any other clique machine. As mentioned earlier, if
A is trim then every clique in A must have at least one input and one output state.
Therefore, in this case, each clique is associated with at least one maximal clique
machine.
Let A be a set of trim automata {A_i = (Σ_i, Q_i, δ_i, q_{oi}, Q_{mi}) | i ∈ I} over a process
space Σ and let A = ‖A_i = (Σ, Q, δ, q_o, Q_m) be their synchronous product. For a
nonempty J ⊆ I let M = {M_j = (x_{jo}, X_j, X_{jm}) | j ∈ J} be a set of clique machines
in A. The set M is said to be a clique vector if M = ‖M is a clique machine, where
³Finding all possible cliques in a given graph is a known NP-complete problem, as it requires examining all possible subsets of the set of states [GJ79].
‖ is the composition operation for the process space Σ. The composite machine M
is a tuple (x_o, X, X_m), where x_o = {x_{jo} | j ∈ J} is called the local input state for M,
X ⊆ ×_{j∈J} X_j is the set of states reachable from x_o in M, and X_m = X ∩ ×_{j∈J} X_{jm}. By
definition, the set M is a clique vector if and only if x_o is reachable from itself in the
composition ‖M, that is, only through states from X. Let y ∈ ×_{i∈I−J} Q_i and x ∈ X.
We will write (y ∪ x) to denote the global state q corresponding to the disjoint union
of x and y components. The tuple (M , y) will be referred to as a clique module. The
clique module (M , y) is said to be reachable if
(x_o ∪ y) ∈ R_A(q_o)
That is, the global state (xo ∪ y) is reachable in the composite system. The clique
module (M , y) is said to be terminal if
(∀q ∈ Q) q ∈ R_A(x_o ∪ y) =⇒ (∃x ∈ X) q = (x ∪ y)
That is, any reachable global state from (xo ∪ y) must remain in the clique while
preserving its y part. To satisfy this condition it is necessary that y be deadlocked
with respect to M , i.e., there should be no eligible event at y that is also an eligible
event at any x ∈ X. Clearly, this also requires that y have no asynchronous eligible
events. Finally, (M , y) is said to be marked if
(∃xm ∈ Xm) (xm ∪ y) ∈ Qm
That is, the clique module contains a global marked state. Intuitively, a clique
vector M = {M_j = (x_{jo}, X_j, X_{jm}) | j ∈ J} is a set of local cliques that compose
to a clique in the global system independently of any other components in I − J .
Therefore, for any y ∈ ×_{i∈I−J} Q_i the tuple (M, y) corresponds to a potential clique in
the global system. The tuple (M, y) only needs to be reachable to correspond to an
actual clique, that is, when (x_o ∪ y) is reachable. The conditions for termination and
marking can be formulated directly for this tuple through this correspondence.
Theorem 5.2. Let A be a vector of trim automata in Σ where A = ‖A is deadlock-
free. Then A is livelock-free if and only if every reachable and terminal clique module
in A is marked.
Proof. As shown earlier, A is livelock-free if and only if every reachable terminal clique
in A contains a marker state. We show first that every reachable terminal clique in
A corresponds to a reachable terminal clique module in A. Let X ⊆ Q be a terminal
clique in A and let M = (x_o, X, X_m) be one of its clique machines. As M is reachable
in A, it must be that x_o ∈ R_A(q_o). Let α(M) be the set of transition events
of M, and let J be the maximal subset of I that satisfies

(∀j ∈ J) α(M) ∩ Σ_j ≠ ∅

It is easy to verify that such a J is always unique and nonempty for any clique in A.
Given that each A_i is trim, the composition operation defines a homomorphic
correspondence between each component automaton A_i and the composite automaton
A, given as h_i : Q → Q_i where h_i(q) = q_i iff q_i ∈ q. Then the clique machine
(x_o, X, X_m) in A corresponds to the machine M_j = (x_{jo}, X_j, X_{jm}) in A_j, where x_{jo} =
h_j(x_o), X_j = h_j(X), and X_{jm} = h_j(X_m). It is easy to see that the machine M_j
is a clique machine, as we have α(M) ∩ Σ_j ≠ ∅ and it is known that projection preserves
reachability. Also, for i ∈ I − J, the clique M corresponds to a single state, with
no transitions, in the corresponding automaton A_i. Let y ∈ ×_{i∈I−J} Q_i denote this set
of states. Therefore, the set of clique machines M = {M_j = (x_{jo}, X_j, X_{jm}) | j ∈ J}
together with y forms a clique module. The composite machine M′ = ‖M_j =
(x′_o, X′, X′_m) has the same transition structure as the clique M; therefore, L(M) =
L(M′). Based on the correspondence between the clique M and the set M, we can
conclude that M is reachable via y in A, and it is also terminal with respect to the
same y as M is terminal. Also, M could not be marked via y, as this would imply that
M contains a marked state, contradicting the assumption that the clique X contains
no marker state. The other direction can be proved by contradiction using the above
deduction.
Based on the above result, livelock situations can be traced from reachable and
terminal clique modules that are not marked. Therefore, detecting livelock in multi-
process systems can be performed by detecting this class of clique modules. Identify-
ing modules, however, requires identifying all cliques in the system components and
examining the synchronous behaviour of all possible combinations of these cliques.
In general, the number of cliques in a given automaton may exceed its number of
states. However, many of these combinations are expected to be eliminated early by
examining the criteria for a livelock situation. In summary, a set of clique machines
M = {M_j = (x_{jo}, X_j, X_{jm}) | j ∈ J} and a set of states y ∈ ×_{i∈I−J} Q_i in a vector
automaton A identify a livelock situation in the composite system A = ‖A iff
1. (M, y) is a clique module:
the composite machine M = ‖M = (x_o, X, X_m) satisfies x_o ∈ R_M(x_o).

2. (M, y) is reachable in A:
(y ∪ x_o) ∈ R_A(q_o).

3. (M, y) is terminal in A:
any state reachable from (x_o ∪ y) in A must be of the form (x ∪ y) for some
x ∈ X. That is, y is deadlocked with respect to the set of states X; in particular,
y should not have any eligible event that is asynchronous.

4. (M, y) is not marked:
no global state (x_m ∪ y), with x_m ∈ X_m, is in Q_m, the set of marked states of
the composite system.
A clique module is first identified by a composable set of cliques M and a set of
states y that is deadlocked with respect to the states in ‖M. The clique module
(M, y) identifies a potential livelock source if it satisfies any of the above conditions.
It becomes a livelock source if it satisfies all of the above conditions. A list of potential
livelock sources is built and updated by testing the conditions above for reachability
and marking. A clique module (M, y) is removed from the list when it fails any of
these conditions.
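Condition 1 can be tested without building the full system, by exploring only the synchronous product of the clique machines themselves. A sketch, with an assumed dict encoding for the machines:

```python
def is_clique_vector(machines):
    """Condition 1: a set of local clique machines forms a clique vector iff
    the composite input state is reachable from itself inside the synchronous
    product of the machines. Each machine is a dict with keys 'xo' (input
    state), 'X' (state set), 'delta', and 'events' (alphabet)."""
    events = set().union(*(m["events"] for m in machines))
    xo = tuple(m["xo"] for m in machines)

    def step(q, e):
        nxt = list(q)
        for i, m in enumerate(machines):
            if e in m["events"]:                  # this component participates
                t = m["delta"].get((q[i], e))
                if t is None or t not in m["X"]:  # stay inside the clique set
                    return None
                nxt[i] = t
        return tuple(nxt)

    seen, frontier = set(), [xo]
    while frontier:
        q = frontier.pop()
        for e in sorted(events):
            t = step(q, e)
            if t == xo:
                return True                       # xo reaches itself
            if t is not None and t not in seen:
                seen.add(t)
                frontier.append(t)
    return False
```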
The detection procedure can be enhanced by utilizing the reachability and coreachability
information as they accumulate during the detection process. Another
enhancement can be obtained by observing the inclusion of one clique machine in
another in the system components. Formally, a clique machine M = (x_o, X, X_m) is
a submachine of another clique machine M′ = (x′_o, X′, X′_m) if x_o = x′_o, X ⊆ X′, and
X_m ⊆ X′_m. A clique machine is said to be maximal if it is not a submachine of any
other clique machine. Let M be a submachine of M′, let M be a clique vector
containing M, and let M′ be the clique vector obtained from M by substituting M′ for
M. It is easy to see that if M′ fails any one of the above conditions then so does
M. And if M′ satisfies all these conditions then it is a source of livelock, and the
overall set of states, including M irrespective of its status with respect to the above
conditions, needs to be examined. Therefore, we only need to consider the set of
maximal clique machines in the detection procedure.
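The submachine relation and the restriction to maximal clique machines translate directly into a small filter; the encoding and names are illustrative assumptions:

```python
def is_submachine(m, n):
    """M is a submachine of M' iff the input states agree, X ⊆ X', and Xm ⊆ X'm."""
    return m["xo"] == n["xo"] and m["X"] <= n["X"] and m["Xm"] <= n["Xm"]

def maximal_machines(machines):
    """Keep only clique machines that are not submachines of another one;
    only these need to enter the detect-first procedure."""
    return [m for m in machines
            if not any(m is not n and is_submachine(m, n) for n in machines)]
```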
The detect-first procedure outlined above can be used to confirm that the system
in Example 5.2 is livelock-free, simply because the only possible clique module is
the whole system which must be marked if the composite system is a clique. The
following is an example of a more complex situation where all the above conditions
need to be examined.
Example 5.3. In this example we consider the alternating bit protocol (ABP) as
presented in [Rud92]. This protocol was designed to achieve reliable full-duplex data
transfer between two processes over an unreliable half-duplex transmission line in which
messages can be lost or corrupted in a detectable way, but not duplicated or reordered.
The basic components of the protocol are shown in Figure 5.2.
In this model, the sender gets the current data frame (get-data) and transmits
it (tdi, i ∈ [0, 1]) to the receiver with a sequence number i. The timer is then
set (set-timer) and the sender waits for an acknowledgment (rack). The sender will
retransmit the data that was sent most recently if it does not receive any message
from B within a given time-out interval after it sends a message (time-out) or if
it receives an acknowledgment error (aerror). When a correct acknowledgement is
[Figure 5.2: The alternating bit protocol model. Automata for the Sender (states
0-9), the Data Channel, the Receiver (states 0-5), and the Ack Channel, over the
events get-data, td0, td1, set-timer, reset-timer, rack, aerror, timeout, sd0, sd1,
rd0, rd1, rerror, dlose, put-data, sack, and alose.]
received, the timer is reset (reset-timer) and the sender sends the next data package
with an alternating sequence number. The receiver will wait for the message with
the correct sequence number (rdi, i ∈ [0, 1]). Once the data is received, it is sent to
the user (put-data) and then acknowledged (sack). The receiver will retransmit the
acknowledgment on receiving the same sequence number again and will ignore data
errors (rerror). The models of the two channels include the events of losing data and
acknowledgments (dlose, alose). The overall protocol is the composition of the shown
four components.
A successful protocol for the transmission problem must possess two properties: (1)
the sequence of data that has been received by the receiver at a given moment is a
prefix of the sequence output by the sender, and (2) any data item given to process A
(or B) is eventually output at B (or A). The first property is a safety
property since it does not require that any action take place but merely that, if an
action does occur, it is permissible. The second property is a liveness property since
it does require that some actions take place. In this example we consider only the
liveness property of the protocol.
Consider first the deadlock issue. In the above model, the local sets of states with
only shared eligible events are
OSE(Sender) = {2, 7},   OSE(Data-Channel) = {0},
OSE(Receiver) = {0, 2, 3, 5},   OSE(Ack-Channel) = {0}
where OSE(A) denotes the set of states in A that have only shared eligible events. The
set of potential deadlock states is then tested based on Theorem 5.1. Using the result
of this theorem, we can confirm that the global states (2, X, 0, X) and (7, X, X, 0),
where X denotes any state in the corresponding component, cannot correspond to
deadlock states. However, these two combinations are exactly all possible global states
with only shared eligible events. This shows that the protocol is deadlock-free.
To detect livelock states in the protocol, the set of maximal clique machines in
each component has to be identified. In the sender we can identify the following
maximal clique machines.
S1 = (Q1, 0, {0}),   S2 = ({1, 2, 3}, 1, {3}),   S3 = ({6, 7, 8}, 6, {6, 7, 8})
In the receiver machine we have the following maximal clique machines,
R1 = (Q2, 0, {0}),   R2 = ({2, 3}, 2, {3}),   R3 = ({3}, 3, {3}),
R4 = ({5, 0}, 0, {0}),   R5 = ({5, 0}, 5, {0})
In both the Data-Channel and the Ack-Channel the only maximal clique is the machine
itself. We denote these cliques D1 and A1, respectively. First consider those
combinations containing S1. Examining this clique shows that it needs to synchronize
with events from the other three machines in order to remain a loop in the composite
structure. Therefore, we need to consider only clique combinations containing
a clique from each component. The combinations (S1, R1, D1, A1), (S1, R4, D1, A1),
and (S1, R5, D1, A1) are excluded as they are always marked. The combinations
(S1, R2, D1, A1) and (S1, R3, D1, A1) are also excluded as their local initial states
(0, 2, 0, 0) and (0, 3, 0, 0) are not reachable. Testing the remaining combinations will
reveal that the system is livelock-free. □
As shown in the above example, there are several ways a given clique vector M
could be excluded from the list of potential livelock generators. The efficiency of
detecting livelock using the detect-first approach depends to a large extent on the
way the given conditions are tested. The detection procedure can be enhanced by
conducting some tests before others. For instance, checking if a given clique vector M
is reachable or terminal may be more efficient than testing if it is a module (computing
the composition ‖M ) if the size of the clique sets in M is relatively large. The reverse
can be true if the size of the clique sets in M is relatively small. Although the above
procedure can be fully automated, we believe it is better used in a semiautomated
way as certain parts can be done more efficiently manually, such as detecting local
cliques and excluding certain clique combinations.
5.1.3 Nonblocking interaction specifications
In interacting discrete event systems the interaction specification adds another di-
mension to the blocking phenomenon. To investigate the blocking problem for IDES
we first need to extend the definition of blocking to the class of interacting discrete
event systems. An IDES system L = (L, K) is said to be live iff

B̄_Σ(L) = B_Σ(L̄)

with the convention that L̄ = (L̄, K̄), where the bar denotes prefix closure. Similar
to the case of language vectors, the set
of live IDES is not closed under componentwise union or intersection. An IDES that
is not live is called blocked. Clearly, an IDES L can be blocked even if the language
vector L and the interaction language K are both live. Moreover, L can be live
even if the language vector L or the interaction language K (or both) is blocked. To
simplify the analysis we will assume that the language vector L is live. In this case
we can focus on checking that the interaction language K does not block the parallel
composition of the system components.
Let Σ be a process space and let L ∈ L(Σ). Let Co_Σ(L) denote the set of non-blocking
compensators for L. Formally, the set Co_Σ(L) is defined as follows:

Co_Σ(L) = {K ∈ C_Σ(L) | B̄_Σ((P_Σ(L), K)) = B_Σ((P̄_Σ(L), K̄))}
This set is not empty for any language L as it contains L itself. It is easy to verify
that this set is not closed under intersection in general. However, it is closed under
union and hence contains a supremal element, which we will denote Co_Σ(L) for a
given language L. Therefore, it is always possible to generate an optimal
live IDES model that generates the language L. This is based on the following result.
Proposition 5.3.

Co_Σ(L) = L ∪ [L̄Σ ∩ [B̄P_Σ(L)]ᶜ]Σ*
Proof. First we show that the language L ∪ [L̄Σ ∩ [B̄P_Σ(L)]ᶜ]Σ* is a compensator
for L. For simplification, we will write H(L) for the set [L̄Σ ∩ [B̄P_Σ(L)]ᶜ]Σ*. We
only need to show that

BP_Σ(L) ∩ H(L) = ∅

It is easy to see that the above equality holds based on the facts that [B̄P_Σ(L)]ᶜΣ* =
[B̄P_Σ(L)]ᶜ and BP_Σ(L) ⊆ B̄P_Σ(L). Note that any string in H(L)
must have the form uσt where u ∈ L̄, σ ∈ Σ, uσ ∉ B̄P_Σ(L), and t ∈ Σ*. Therefore a
prefix s ∈ H̄(L) must be either in L̄ or in [B̄P_Σ(L)]ᶜ, and we
can write

B̄P_Σ(L) ∩ H̄(L) ⊆ L̄

Based on that, we show below that the compensator above is a non-blocking one:

B̄P_Σ(L) ∩ (L̄ ∪ H̄(L)) = (B̄P_Σ(L) ∩ L̄) ∪ (B̄P_Σ(L) ∩ H̄(L))
                      = L̄ ∪ (B̄P_Σ(L) ∩ H̄(L))
                      = L̄
Next, we show that the above non-blocking compensator is a supremal one. Clearly,
any compensator for L must contain L itself. Now assume that S is a non-blocking
compensator of L and there exists s ∈ S such that s ∉ H(L). Clearly, we only need
to consider the case when s ∉ L̄. Therefore, based on the non-blocking assumption
we can write

s̄ ∩ B̄P_Σ(L) ⊆ L̄

However, as s ∉ L̄, we must have u and σ ∈ Σ with uσ ∈ s̄ such that u ∈ L̄ but
uσ ∉ B̄P_Σ(L). Then it must be that

s̄ ∩ [L̄Σ ∩ [B̄P_Σ(L)]ᶜ] ≠ ∅

This contradicts the assumption that s ∉ H(L). Therefore, the above non-blocking
compensator is the optimal one.
Therefore, to obtain Co_Σ(L), we add to L every string that extends a string in L̄
without being a prefix in B̄P_Σ(L). The situation is illustrated in the following figure.
In the diagram, the event σ1 must be disabled after s because it escapes the closure L̄ to a
prefix in B̄P_Σ(L), whereas all possible extensions of tσ2 are included.
[Diagram: the nested closures L̄ ⊆ B̄P_Σ(L); a string s extended by an event σ1 that
escapes L̄ into a prefix of B̄P_Σ(L) (so σ1 must be disabled), and a string t extended
by σ2 out of B̄P_Σ(L) (so all extensions of tσ2 are included).]
This can be implemented by computing the synchronous product of L̄ and B̄P_Σ(L)
and, at every state (q1, q2) in this product, where q1 is a state in A(L̄) and q2 is a state
in A(B̄P_Σ(L)), adding a set of transitions with events

Σ′ = Σ − (Elig_{A(L̄)}(q1) ∪ Elig_{A(B̄P_Σ(L))}(q2))

leading to a terminal state with self-loops on every event in Σ. Similarly, it is always possible to obtain
the optimal live IDES that is behaviourally equivalent to a given IDES (L, K). Based
on the above result, to obtain Co_Σ(B_Σ((L, K))), compute the intersection [K ∩ B_Σ(L)]
and then add to it any string that escapes this intersection without moving further
within the prefix closure of B_Σ(L). This is achieved by disabling any event that escapes from this
intersection through a common event in the prefix closures of K and B_Σ(L), and
enabling any other event.
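The construction just described can be sketched as follows, for recognizers given as deterministic transition maps. The encoding, the sink-state name, and the helper elig are assumptions for illustration: events eligible in the closed behaviour but not in the closure of L are simply left disabled, while events eligible in neither recognizer are routed to an all-accepting sink.

```python
def supremal_compensator(AL, AB, sigma):
    """Sketch of the Co_Σ(L) construction: synchronize the recognizer AL of
    the closure of L with the recognizer AB of the closed synchronous
    behaviour; from every product state, route events eligible in neither
    automaton to a sink carrying self-loops on all of sigma."""
    def elig(a, q):
        return {e for (s, e) in a["delta"] if s == q}

    SINK = "sink"
    delta, states = {}, set()
    frontier = [(AL["init"], AB["init"])]
    while frontier:
        q = frontier.pop()
        if q in states:
            continue
        states.add(q)
        q1, q2 = q
        for e in sigma:
            if e in elig(AL, q1) and e in elig(AB, q2):
                t = (AL["delta"][(q1, e)], AB["delta"][(q2, e)])
                delta[(q, e)] = t
                frontier.append(t)
            elif e not in elig(AL, q1) and e not in elig(AB, q2):
                delta[(q, e)] = SINK      # escapes both closures: allow all
            # events eligible in AB but not in AL stay disabled
    for e in sigma:
        delta[(SINK, e)] = SINK
    return {"states": states | {SINK}, "delta": delta,
            "init": (AL["init"], AB["init"])}
```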
To reduce the complexity of this computation, the intersection [K ∩ BΣ(L)] is
computed by exploring only those states in BΣ(L) that are needed to synchronize
with K. Blocked states can be identified and removed iteratively in the usual way.
The number of explored states in this case is typically in the order of the state size
of [K ∩ B_Σ(L)]. Consequently, this computation can be more efficient than the direct
composition approach when the specification K is highly restrictive, so that the
size of K ∩ B_Σ(L) is much smaller than that of B_Σ(L). Consider, for instance, the
process space Σ = {{a, b}, {c, d}}, the language vector L = {(aba), (cdd)} in L(Σ),
and the interleaving interaction specification K = (UV )∗(U + ε) shown in Fig 3.7
with U = {a, b} and V = {c, d}. The synchronization procedure will only need to
explore 7 out of the total 16 states of BΣ(L). The optimal non-blocking compensator
CoΣ(BΣ((L, K))) contains 8 states as shown below.
[Diagram: the optimal non-blocking compensator Co_Σ(B_Σ((L, K))), an automaton
with 8 states over the events a, b, c, d.]
The detect-first approach discussed earlier can also be applied to find a non-blocking
compensator for a given language vector L when the interaction specification
K is highly permissive, namely, when relatively few restrictions on the system
behaviour are required to satisfy K. As discussed earlier, the idea is to isolate potential
blocking states first and then to trace back their reachability. However, we have here
a case of total synchronization between the events in L and K. Considering deadlock
situations, blocking can occur at any state in the composite structure where the
corresponding state in K and that of B_Σ(L) do not have a common eligible event.
Such a composite state is a potential deadlock state. Formally, let (q1, . . . , qn) be a
state in the automaton A of B_Σ(L) and x be a state in the automaton B of K. The
composite state (q1, . . . , qn, x) is a potential deadlock state in the composite structure
of the IDES (L, K) if

Elig_A(q1, . . . , qn) ∩ Elig_B(x) = ∅
Assuming that A is deadlock-free, we only need to consider the case when
Elig_A(q1, . . . , qn) ≠ ∅. This condition can be tested locally as indicated by Theorem
5.1. Potential deadlock states can be detected by first identifying the set of
ineligible events at each state in B. Then we identify those states in the local
components of L where a subset of this set of ineligible events is the only set of possible
events after synchronization. Therefore, this procedure is more efficient when
K is not very restrictive, that is, when most of the system events are eligible at
each state in B. Consider, for example, the IDES structure shown below. This IDES
represents a system of two processes, L1 and L2, interacting such that event a1 is
always executed before a2 (Σ′ = Σ − {a1, a2}).
[Diagram: the IDES structure. Two component automata L1 and L2 (each with
states 0-4, over events a1, b1 and a2, b2 together with the shared events x, y, z) and
a two-state interaction specification with self-loops on Σ′ = Σ − {a1, a2}, enforcing
that a1 occurs before a2.]
First we test whether the components alone are live. In these two components there are
only two potential deadlock states, namely (0, 4) and (4, 0). Testing their reachability
requires the exploration of two global states and will confirm that the two components
are deadlock-free. Also, examining clique vectors in these two components will show
that there are no terminal clique modules (there are only two
maximal cliques in each component). The interaction specification restricts only
two events. At state 0 of the specification, event a2 is the only event that is not
eligible. This leaves only state 3 from the second component to consider for potential
deadlock. In the first component we only need to consider states 0 and 4 together
with this combination, as they have only shared eligible events. Therefore, we have
(0, 3, 0) and (4, 3, 0) as potential deadlock global states. With similar arguments we
can obtain (3, 0, 1) and (3, 4, 1) as the remaining potential deadlock states. Testing the
reachability of these states requires the exploration of only 4 global states and will
confirm that the overall IDES structure is deadlock-free.
The detect-first approach can also be used for livelock detection in IDES. The
procedure is similar to that used earlier for multiprocess systems. Basically, the set
of clique modules in the system components and the set of cliques in the interaction
specification are first identified. Then each combination of a clique module in the
components and a clique in the interaction specification is tested for reachability,
coreachability, and composability, to verify that the composition of the two cliques
forms a clique in the overall system. The details of this procedure will not be pre-
sented here. However, there are two important points to consider in this approach.
First, information about the clique modules and their reachability and coreachability
is usually available from detecting livelock situations in the systems components. This
information can be reused when an interaction specification is added to the model.
Second, the total synchronization between the components and the interaction specification can be an advantage, as invalid clique combinations can be identified early without extensive exposure to the global states of the composite system. Therefore, it may be more efficient to perform the three tests (reachability, coreachability, and composability) in an alternating scheme, going through a part of each test at a time.
5.2 Verification of IDES
The term external verification or simply verification refers here to the procedure in
which the system model is checked against a set of behavioural specifications that
ensure that the system works as expected. The (external) verification problem can be described formally as follows: given a system modelled by an automaton P and a specification modelled by another automaton S, test whether the statement L(P ) ⊆ L(S) is true, where L(G) denotes the language generated by the system G.
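For concreteness, the containment L(P) ⊆ L(S) can be tested by exploring the product of the two automata and failing as soon as P enables an event that S does not. The sketch below is illustrative only (the automata are hypothetical, represented as deterministic transition maps):

```python
from collections import deque

def language_contained(p_init, p_delta, s_init, s_delta):
    """Test L(P) <= L(S) for deterministic automata given as dicts
    mapping (state, event) -> next state.  Explores the product of P
    and S; containment fails exactly when P can execute an event that
    S cannot follow at the paired state."""
    seen = {(p_init, s_init)}
    queue = deque(seen)
    while queue:
        p, s = queue.popleft()
        for (state, event), p_next in p_delta.items():
            if state != p:
                continue
            if (s, event) not in s_delta:     # P moves, S cannot follow
                return False
            pair = (p_next, s_delta[(s, event)])
            if pair not in seen:
                seen.add(pair)
                queue.append(pair)
    return True

# Toy check: P generates prefixes of "ab"; S allows any string over {a, b}.
P = {(0, "a"): 1, (1, "b"): 2}
S = {(0, "a"): 0, (0, "b"): 0}
print(language_contained(0, P, 0, S))   # True
```

This direct product construction is exactly what becomes intractable when P is itself a synchronous product of many components, which motivates the decomposition approach discussed next.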
In practice, the system P is usually composed of a number of coordinating com-
ponents, and therefore, the composed structure P is too complex to allow a language
containment test directly by conventional means. This motivates the use of the de-
composition approach as a base for efficient solution. It is important to emphasize
again that the efficiency of this approach can be claimed only under the assumption
that the specification is much simpler than the system, and therefore, the decompo-
sition of the specification is not by itself a computational bottleneck.
In general, the system is given as a set of components interacting concurrently,
which may be constrained by a given interaction language. The specification, on the
other hand, is usually given as a language (automaton) describing a desired behaviour
or a task to be achieved. Formally, the IDES verification problem can be described as
follows: given an IDES system L = (L, K) over a process space Σ and a specification
S ∈ L(Σ) we want to test the statement BΣ(L) ⊆ S. The challenge is to conduct
this test without computing BΣ(L). As shown earlier, the IDES model supports dual
conversions to and from the domain of languages. Hence, the specification S in the
above problem statement can always be converted to an IDES, say S = (S, R) such
that S = BΣ(S). The verification problem is then to test the inclusion BΣ(L) ⊆ BΣ(S). The following proposition shows that componentwise inclusion implies
behaviour inclusion.
Proposition 5.4.
L ⪯ S =⇒ BΣ(L) ⊆ BΣ(S)
Proof. Having Li ⊆ Si for all i ∈ I implies that BΣ(L) ⊆ BΣ(S). Then, adding
K ⊆ R, we obtain BΣ(L) ∩ K ⊆ BΣ(S) ∩ R and hence BΣ(L) ⊆ BΣ(S).
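When the languages involved are finite (or given as finite samples), the premise of Proposition 5.4 reduces to a direct componentwise set comparison. A minimal illustrative sketch, with languages represented as Python sets of strings (all data below is hypothetical):

```python
def componentwise_leq(L, K, S, R):
    """Check the componentwise order L <= S between two IDES:
    Li <= Si for every component i, and K <= R for the interaction
    languages.  Languages here are finite sets of strings."""
    if len(L) != len(S):
        raise ValueError("IDES must have the same number of components")
    return all(Li <= Si for Li, Si in zip(L, S)) and K <= R

# Illustrative component languages and interaction languages.
L = [{"a"}, {"b"}]
S = [{"a", "aa"}, {"b"}]
K = {"ab", "ba"}
R = {"ab", "ba", "baa"}
print(componentwise_leq(L, K, S, R))   # True
```

The point of the proposition is precisely that this cheap, component-by-component check is sufficient (though, as the next example shows, not necessary) for behaviour inclusion.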
The above result holds even if L and/or S are not compact. The other direction
does not generally hold even if both L and S are compact, and even if, in addition,
their interaction languages are optimal, that is, each is equal to the supremal compensator for its respective behaviour. This case is shown in the following example.
Example 5.4. Let Σ = {{a, x}, {b, x}} be the process space. Let L = ({a}, {b}) and S = ({a, aa}, {b}) be two language vectors. Let L = (L, K) and S = (S, R) be two IDES where K = Σ∗ and R = Σ∗ − {aab}. Then L = BΣ(L) = {ab, ba} and S = BΣ(S) = {ab, ba, baa} with K = CΣ(L) and R = CΣ(S). Here we have BΣ(L) ⊆ BΣ(S). However, L ⋠ S as K ⊈ R. □
Note that for a system L = (L, K) and a specification S = (S, R), we may have BΣ(L) ⊆ BΣ(S) even if L ⋠ S. It is easy to see that if K ⊆ BΣ(S) then we will have BΣ(L) ⊆ BΣ(S) irrespective of L and S. However, as shown in the above result, having L ⪯ S relaxes the requirement to K ⊆ R (instead of K ⊆ BΣ(S)) in order to confirm the behavioural containment.
In spite of the fact that the IDES model does not guarantee a modular solution to
the verification problem, it offers in general a better ground towards such a modular
solution. The interaction information provided in the IDES structure enhances the
chance of finding a modular solution in certain situations that would otherwise require
the computation of the synchronous product of the system components. This is
illustrated in the following example.
Example 5.5. Let Σ be the process space of the previous example. Consider the system L′ = (L, K ′) where K ′ = a∗b∗. The language vector L, the specification S and its IDES S are as given in the last example. Clearly, we have L′ ⪯ S. This confirms that BΣ(L′) ⊆ S. Note that both the compensation information of the specification S and the interaction specification of L′ are needed to obtain a modular solution in this example. □
Two limiting cases are worth mentioning here. The first is when the specification
is Σ-decomposable. In this case, the verification check can always be confirmed
modularly irrespective of the interaction specification of the system. The second case
arises when the system components interact in the total parallelism scheme. In this
case the compensator for the system is Σ∗. Based on the last proposition, the only
form of specification that can be (directly) checked modularly is one with similar
compensation, that is, a decomposable specification.
Verification of Structured IDES
As shown above, the information provided in the IDES model is not enough to carry
out the verification modularly. The problem is that in the IDES model the correspon-
dence between the system behaviour and its interaction specification is not generally
order preserving. The structured IDES model can resolve this problem. As intro-
duced earlier, this model provides a more powerful representation of multiprocess
systems in which a well-defined monotonic association between the system behaviour
and its interaction language is given.
As discussed in the previous chapter, a SIDES model for a given system can be
obtained by identifying patterns of behavioural invariance in the interaction specifi-
cation. To allow modular comparison, the specification should also be converted into
a SIDES structure. In contrast with IDES, there is no known procedure to transform a given language into a SIDES structure other than for the trivial case with the
identity map. Therefore, invariance patterns in the specification have to be explored
and tested to obtain a suitable SIDES representation. The verification problem can then be described formally as follows: given a SIDES system L = (L, K, G) over a process space Σ and a specification S represented by another SIDES S = (S, R, H), verify that BΣ(L) ⊆ S.
Proposition 5.5. Assume that L and S are compact and G ≤ H. Then
L ⪯ S ⇐⇒ BΣ(L) ⊆ BΣ(S)
Proof. The right direction holds in general as proved in the previous proposition. The
left direction is obtained as follows. The assumption that both L and S are compact
implies that Li = Pi(BΣ(L)) and Si = Pi(BΣ(S)) for every i ∈ I. Since the projection operation is monotonic, we can conclude that Li ⊆ Si for all i ∈ I.
Also, G ≤ H implies that K ⊆ R. This completes the proof.
Note that compactness is only needed for the “only if” direction. Based on the
above result, the verification problem can be solved modularly if a monotonic map
H ≥ G is found to model behavioural invariance in the specification, or alternatively a single monotonic map H = G is found to abstract both the system and the specification such that K ⊆ R.
Example 5.6. The system in this example is a modified version of the three-machine system given in Example 4.2, except that the number of machines here is k. Also, the interaction specification is set to K = Σ∗, that is, the component machines are working in parallel. Now consider checking whether the system under this configuration allows event b1 to directly follow event a1. The figure below shows this specification, S, and the associated IDES structure S = (PΣ(S), CΣ(S)).
[Figure: the specification S (a small automaton requiring that b1 directly follows a1) and its associated IDES structure; transition labels include ai, bi, ci, di, x and the abbreviations A, B, Σ′ defined below.]
The following abbreviations are used in the figure above: A = {c1, d1, x}, B = {ai, bi, ci, di | i ∈ [2 . . . k]}, and Σ′ = A ∪ B. The IDES of the specification cannot bring a decisive answer to the verification check as K ⊈ CΣ(S). However, it is
easy to check that the specification is invariant with respect to the abstraction map
G defined as follows
(∀i ∈ [2 . . . k]) G(ai) = G(bi) = G(ci) = G(di) = {ai, bi, ci, di}∗,
G(a1) = a1, G(b1) = b1, G(c1) = c1, G(x) = x
The map G can then be used to abstract the system. The abstraction of the system
can be computed efficiently using the result of Proposition 4.8. The figure below
shows the abstraction of the system, K = G(BΣ(L)). In this diagram Σ′ is the set
Σ− {a1, b1, c1}.
[Figure: the abstraction K = G(BΣ(L)), a small automaton with transitions a1, b1, c1, d1 and selfloops on Σ′ and Σ′ − x.]
It is easy to see that K is invariant with respect to G, that is, K = G(K). Consequently, the system can be represented by the SIDES structure (L, K, G). Also, given that S = G(S), the specification can be represented by the SIDES (PΣ(S), S, G). Here we have K ⊈ S, and based on Proposition 5.5, this confirms that the system does not satisfy this specification. Now consider the situation when the system is
arranged as shown below.
[Figure: the rearranged system, in which the machines U1, . . . , Uk (events ai, bi, ci, di and the shared event x) are organized so that only one machine is working at a time.]
The above interaction specification K ′ restricts the operation of the system so that
only one machine is working at a time. Given the same specification, the IDES model
of the specification can give a decisive answer, as we have K ′ ⊆ CΣ(S). This confirms that the system in this form satisfies the specification. □
The structured IDES model can be useful for verifying local specifications, as shown in the above example. In other words, it can be used to verify certain local properties of some of the system components rather than of the whole system. In this case the abstraction map will be required to preserve all
the information about the local components in question and to abstract all other
components. It is also important to note, however, that efficient computation of the
system abstraction may require every shared event to be mapped to itself. This can
significantly reduce the efficiency of this approach for tightly coupled components.
Example 5.7. Consider the Milner scheduler described in Example 5.2. Assume here
that the cyclers are left free without a restrictive interaction specification. Under
this assumption, it is required to test if the composite system meets the following
specification.
[Figure: the specification, a small automaton with transitions a0, x0, y0 and a selfloop on Σ − {a0, x0, y0}.]
The above specification requires the scheduler in its free form to be initiated and terminated at its first cycler. It is easy to see that this specification is invariant with respect to the Σ-abstraction map, G, defined as follows:
(∀i ∈ [0 . . . k]) G(xi) = xi, G(yi) = yi, G(a0) = a0, G(b0) = G(c0) = {b0, c0}∗
(∀i ∈ [1 . . . k]) G(ai) = G(bi) = G(ci) = {ai, bi, ci}∗
Therefore, the specification can be represented by the SIDES (PΣ(S), S, G). The system can also be abstracted indirectly using the map G. Therefore, the system can be represented by the SIDES (L, G(L), G). Given this setting, the system satisfies the specification iff
L ⪯ PΣ(S) and G(L) ⊆ S
In the case when k = 4, the scheduler machine BΣ(L) has 1620 states and the abstracted system G(L) has less than 25 states. The computation of PΣ(S) is also negligible compared to the size of the composite system. □
5.3 Iterative verification in process space
It was shown in the previous section that if the system IDES is componentwise contained (⪯) in the specification IDES, or if the system interaction language is contained in the specification, then it can be concluded that the system behaviour is contained in the specification. No such conclusion can be drawn otherwise. However, the interaction
specification of the system is an abstraction of its behaviour. This abstraction can
be refined toward its minimal limit, namely, the overall system behaviour. At this
limit the verification test can always be confirmed. This suggests using an iterative
refinement procedure to adjust the amount of information in the IDES model. In such a procedure the interaction specification of the system is refined towards its minimal
limit until a solution is found. This procedure is guaranteed to terminate if the
refinement steps are finite and progressive in the sense that every refinement is strictly
finer than the previous ones.
The iterative approach for IDES verification can be summarized as follows. Consider an IDES L = (L, K) in the process space Σ and a specification S ∈ L(Σ). The specification can be converted to an IDES S = (S, R) where S = PΣ(S) and R = CΣ(S). We will let R = S if L ⋠ S. Initially, the language K is chosen as an
abstraction of the system. Then, in the verification step we check if K ⊆ R. If it is
true then terminate with a confirmation that the system satisfies the specification.
Otherwise, analyze the failure path to determine whether the failure is inherent in the system or is caused by over-approximation in the current abstraction. If the former is
true, terminate with a confirmation that the system does not satisfy the specification.
Otherwise, refine the abstraction in a way that the reported failure is eliminated and
then go back to the verification step. This sequence is repeated until a confirmation
is obtained.
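The verify-analyze-refine cycle just described can be phrased as a small driver loop. The callbacks below are placeholders for whatever abstraction mechanism is in use; all names are illustrative, not from the thesis:

```python
def iterative_verify(K0, spec_holds, find_failure, failure_is_real, refine):
    """Generic iterative verification driver.

    K0                 -- initial abstraction of the system
    spec_holds(K)      -- True if the abstraction satisfies the spec
    find_failure(K)    -- a failure witness when spec_holds(K) is False
    failure_is_real(f) -- True if the witness is actual system behaviour
    refine(K, f)       -- a strictly finer abstraction eliminating f
    """
    K = K0
    while True:
        if spec_holds(K):
            return True, K          # system satisfies the specification
        f = find_failure(K)
        if failure_is_real(f):
            return False, f         # genuine violation found
        K = refine(K, f)            # over-approximation: refine and retry

# Toy instance: the "abstraction" is a set of strings, refined by deleting
# spurious failure strings; the specification forbids the string "bad".
result, evidence = iterative_verify(
    {"ok", "bad"},
    spec_holds=lambda K: "bad" not in K,
    find_failure=lambda K: "bad",
    failure_is_real=lambda f: False,     # "bad" is spurious here
    refine=lambda K, f: K - {f},
)
print(result, evidence)   # True {'ok'}
```

Termination rests on exactly the condition stated above: each call to `refine` must produce a strictly finer abstraction, and the chain of refinements must be finite.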
This general procedure is applicable to any formal verification methods that im-
plement a well-defined incremental abstraction mechanism. The specific iterative
algorithm depends on the abstraction mechanism as well as the choice of the initial
abstraction and refinement procedures. In [Kur94, BSV93], for instance, the abstraction and refinement processes aim at localizing the verification test to a subset of the system components.
In the following a general iterative procedure is presented for the verification of
multiprocess systems based on the IDES model settings. Only multiprocess systems
are considered here. Therefore, the interaction specification of the system is simply
an abstraction of the concurrent behaviour of the system components and does not
add any restrictions to it. In this procedure, the interaction specification of the sys-
tem is iteratively refined, while maintaining a well-defined association with the overall
system behaviour, until a solution is found. The association between the interaction
specification and the system behaviour is used to find a proper refinement or to identify a discrepancy between the system and the specification. Both automaton-based and language-based abstractions, as presented in the previous chapter, can be used in this procedure. However, only the automaton-based abstraction is demonstrated here. This approach is illustrated through another version of the Milner scheduler. The scheduler here consists of three cyclers as shown in the following figure.
[Figure: a generic cycler Ci, a seven-state automaton (states 0–6) with events ai, bi, ci, xi, xi+1; the scheduler consists of the cyclers C0, C1, C2 with interaction language Σ∗.]
Figure 5.3: A 3-cyclers scheduler
The above scheduler is organized in a parallel scheme in which the system goes through cyclers 0 to 2 repeatedly. In this system, state 1 is the initial and marker state
for the first cycler (i = 0), and state 0 is the initial and marker state for all other
cyclers. The system can be described by the IDES L = (L, K), where L is the set of
cyclers and K = Σ∗ is its interaction language. The correctness specification for the
scheduler is shown in the figure below where Σ′ = {bi, ci, xi | i ∈ [0 . . . 3]}. It is easy
to verify that this specification is Σ-indecomposable.
[Figure: the scheduler specification, a three-state cycle on a0, a1, a2 with a selfloop on Σ′ at each state.]
Figure 5.4: The scheduler specification
The scheduler system will now be verified against the above specification using
the iterative approach. In general, the iterative verification procedure consists of four
main steps.
Initial abstraction
In this step an initial abstraction for the system is specified. This abstraction must
have a well-defined association with the system behaviour or transition structure.
Automaton-based or language-based abstractions - as presented in the previous chap-
ter - can be used here to obtain a well-defined association between the system and
its abstraction. In the automaton-based scheme, the initial abstraction can be taken
as the coarsest partition of the components states. The transition structure of this
abstraction consists of a single state with a selfloop of all the system events, namely,
the initial abstraction in this case is Ko = Σ∗.
Verification
Assume that the current abstraction of the system is Kn, where n ∈ N. At this stage we check if Kn ⊆ R. If this is true, then the verification procedure terminates, confirming that the system meets the given specification. Otherwise, if Kn ⊈ R, then we need to check whether this failure is due to the system or is a result of over-approximating the system behaviour in the given abstraction Kn. The initial abstraction of the scheduler system clearly fails this step, as we have K0 ⊈ R.
Failure analysis
In the case of verification failure, verification tools usually provide a failure report
containing a set of failure paths. Each failure path consists of a sequence of transitions
starting from the initial state and ending with a state with an eligible event that leads
out of the specification R. The failure report at step n can then be represented by a
finite set of tuples of the form (s, σ), where s ∈ Σ∗ and σ ∈ Σ, such that
sσ ∈ Kn, s ∈ R, and sσ ∉ R
Given this information, it is now required to test the statement sσ ∩ BΣ(L) ≠ ∅. If this statement is true then the verification procedure terminates with failure, as, under the assumption that the system components are live (BΣ(L) = B̄Σ(L)), this confirms that part of the system behaviour, namely the set sσ ∩ BΣ(L), is not included in the specification. Otherwise, if sσ ∩ BΣ(L) = ∅ for every failure tuple (s, σ), then proceed to the next step. Note that checking whether sσ ∩ BΣ(L) ≠ ∅ can be done by exploring only those global states in BΣ(L) that are required to synchronize with the failure path sσ. In the scheduler example the following failure tuples are found for the automaton-based abstraction: (ε, a1) and (ε, a2). Verifying that each tuple is not part of the composite system behaviour requires the exploration of two global states.
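A failure report of this form can be extracted by exploring the abstraction Kn in lockstep with the specification automaton: whenever Kn enables an event that leaves the specification, the path so far together with that event is a failure tuple. A hedged sketch over deterministic transition maps (all data hypothetical):

```python
from collections import deque

def failure_tuples(k_init, k_delta, r_init, r_delta):
    """Collect tuples (s, sigma) with s.sigma in Kn, s staying inside the
    specification R, and s.sigma leaving R.  Both automata are given as
    deterministic transition maps (state, event) -> next state."""
    out = []
    seen = {(k_init, r_init)}
    queue = deque([(k_init, r_init, ())])
    while queue:
        k, r, path = queue.popleft()
        for (state, ev), k_next in k_delta.items():
            if state != k:
                continue
            if (r, ev) not in r_delta:
                out.append(("".join(path), ev))   # leaves the specification
                continue
            pair = (k_next, r_delta[(r, ev)])
            if pair not in seen:
                seen.add(pair)
                queue.append((k_next, r_delta[(r, ev)], path + (ev,)))
    return out

# Hypothetical abstraction: selfloops on a and b at a single state; the
# specification allows only "a" moves, so (epsilon, b) is a failure tuple.
Kn = {(0, "a"): 0, (0, "b"): 0}
R = {(0, "a"): 0}
print(failure_tuples(0, Kn, 0, R))   # [('', 'b')]
```

The empty-string tuple ('', 'b') plays the same role as the tuples (ε, a1) and (ε, a2) in the scheduler example: the offending event is enabled immediately at the initial state of the abstraction.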
Refinement
Reaching this step, the verification has failed because of over-approximation. There-
fore, the current abstraction needs to be refined to eliminate all failure paths. How-
ever, such refinement should be just “enough” for this elimination without exposing
unnecessary details about the system behaviour. Therefore, for a failure pair (s, σ),
the refinement should achieve the removal of sσ from the set Kn with a minimal exposure to the system behaviour. Clearly, the optimal situation is to keep s in the prefix of the next abstraction Kn+1 while disabling σ after s. If this cannot be achieved
then we move backward to maintain the maximal possible prefix of s in the next
abstraction while disabling all the events leading to σ after that.
Consider the automaton-based abstraction scheme. The refinement procedure will
try first to disable σ after s in the next abstraction. If that cannot be achieved, then
it moves one step backward and tries to disable σ′ after s′ where s = s′σ′. This
process continues until an event leading to σ in s is disabled. Disabling σ after s can
be achieved by preempting another event between s and σ. This preemption can be
done as follows. Let q be the state in Kn at which s terminates and σ is enabled. Let qi be the corresponding state in the nth abstraction of the ith component Li, referred to as An(Li). Let πⁿᵢ be the corresponding partition, so that An(Li) is the quotient machine A(Li)/πⁿᵢ. Let [qi] denote the class of states equivalent to qi under πⁿᵢ. Assume that σ ∈ Σi, and let siσ be the ith projection of sσ. Let Xi be the set of states in [qi] where σ is eligible, and let Yi be the set of states in [qi] that are the destinations of the string si. If Xi ∩ Yi = ∅, then refine the class [qi] into two classes, one containing Xi and the other containing Yi. Such a refinement guarantees that σ is no longer available after s in the composite abstraction. This is valid if σ is either a shared or an asynchronous event. This situation is illustrated in the following diagram.
If such a refinement cannot be established at the ith component (Xi ∩ Yi ≠ ∅), then try other components j where σ ∈ Σj. If no refinement can be established in any of these components, then move backward, replace σ with σ′ and s with s′ where s′σ′ = s, and repeat the above steps. Clearly, if Xi ∩ Yi = ∅, then there is in general more than one way to partition [qi] in order to separate Xi from Yi. Therefore, there are several ways to refine the current partition in order to
disable σ after s. Heuristic rules can be used to choose a suitable partition that will
reveal the minimum amount of information in the process; for instance, minimizing
the difference between the sizes of the partitions. It will be assumed that a suitable
refinement is randomly chosen among the available ones. It is already established
that sσ is not in BΣ(L), therefore, there must exist a partition that can remove sσ
from the prefix of Kn. Given that the string s (from the failure report) is finite, it
follows that the above backward searching procedure will always terminate with such a partition.
[Figure: refining the abstraction at the ith component: the class [qi] containing both the σ-eligible states Xi and the destinations Yi of si is split so that Xi and Yi fall into different classes.]
Figure 5.5: Refining the abstraction at the nth iteration
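The split of a single class [qi] can be sketched directly: given the σ-eligible states Xi and the destinations Yi of si inside the class, the class is replaced by two classes when Xi and Yi are disjoint. All names below are illustrative:

```python
def split_class(partition, cls, X, Y):
    """Refine `partition` (a list of frozensets of component states) by
    splitting the class `cls` so that the sigma-eligible states X are
    separated from the destination states Y.  Returns None when X and Y
    intersect, i.e. the split cannot disable sigma at this component."""
    if X & Y:
        return None
    rest = cls - X                    # Y stays with the remaining states
    refined = [c for c in partition if c != cls]
    refined.extend([frozenset(X), frozenset(rest)])
    return refined

# Hypothetical class {0, 1, 2}: sigma is eligible at state 1, while the
# destinations of s_i end at state 0, so the class can be split.
cls = frozenset({0, 1, 2})
print(split_class([cls], cls, frozenset({1}), frozenset({0})))
```

This mirrors the scheduler example above: splitting {0, . . . , 6} into {0, 2, 3, 4, 5, 6} and {1} is exactly such a separation, with X = {1} and Y on the other side of the split.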
In the scheduler example, the failure tuple (ε, a1) can be traced back to the second component. In this component, disabling a1 after ε can be achieved by separating states 0 and 1 in the next refinement. Clearly, there are many partitions that can achieve this. The partition π¹₁ = {{0, 2, 3, 4, 5, 6}, {1}} will be used here. A similar partition will be used in the third cycler to disable a2 after ε. The partition of the first cycler will remain the same. The abstraction K1 of the system now has three states, as shown in the following figure, where Σ′ = Σ − {a1, x1, a2, x2}.
[Figure: the three-state abstraction K1, with transitions a1, x1, a2, x2 and a selfloop on Σ′.]
Figure 5.6: The system abstraction after the first iteration
The above procedure is guaranteed to terminate, as every iteration must either terminate or strictly refine the current abstraction, and there is a finite number of possible abstractions, ending with the overall system behaviour BΣ(L), at which the verification check can always be confirmed.
Continuing with the next iteration for the scheduler example yields two failure tuples at the verification step, namely (x1, a1) and (x2, a2). In the refinement step, based on the current partitions of the system components, it will not be possible to refine any of the partitions in order to disable a1 after x1. The procedure then moves backward and tries to disable x1 after ε. This can be achieved by separating the state set {1} from the state set {3, 5} in the first component. The partition π²₀ = {{0, 1, 4, 6}, {2, 3, 5}} will be used. Similarly, a2 cannot be disabled after x2,
and to disable x2 after ε, the state set {3, 5} must be separated from the state set {4, 6}. The partition π²₁ = {{1}, {2, 3, 5}, {4, 6, 0}} will be used. The partition of the third component will remain the same. The abstraction K2 is the composition of the three quotient components. Continuing this procedure, the following abstraction can be obtained at the fifth iteration (Σ′ = Σ − {a0, x0, a1, x1, a2, x2}). This abstraction is a subset of R, confirming that the system meets the given specification.
[Figure: the six-state abstraction cycling through a0, x0, a1, x1, a2, x2, with selfloops on Σ′.]
Figure 5.7: The system abstraction after the fifth iteration
The efficiency of this procedure can be measured by the number of iterations and
the size of the final abstraction. Both factors are related, to a large extent, to the choice of the initial abstraction, as well as to the choice of a suitable refinement among the set of valid ones in the refinement step.
Chapter 6
Interacting Discrete Event
Systems: Supervisory Control
In the supervisory control problem a supervisor is constructed for a given sys-
tem such that the controlled (supervised) behaviour of the system follows a set of
predefined specifications. Based on the physical characteristics of the system, events
are regarded as either controllable (can be disabled) or uncontrollable (permanently
enabled). In general, the outcome of the supervisory control procedure determines
whether or not a supervisor exists that can restrict the system behaviour to the desired
specification. Therefore, supervisory synthesis can be considered a generalization of
the verification problem.
Similar to the verification problem, the supervisory synthesis procedure suffers from the state explosion problem in a multiprocess environment. The practical context of both problems is similar: the targeted systems are usually hierarchically organized, complex, and multiprocess. In this chapter the supervisory control of multiprocess and interacting discrete event systems is investigated. The relation between the system model and its behaviour is central to supervisor synthesis. However, the control
dimension introduces new factors and conditions that render the problem more man-
ageable in certain situations.
Similar to internal verification, the supervisory control problem can be solved for
multiprocess DES by exploring only relevant parts in the state space of the system.
Supervision based on forward and backward exploration is introduced. Modular supervision of interacting discrete event systems is also investigated. It is shown
that under certain conditions regarding the controllability of shared events, modular
supervision can be achieved without the need to compute the synchronous product
of the system components. The result is extended for structured IDES.
It is interesting to see the relevance of the coupling between the system components
to both the verification and the supervisory control problems. In both problems it is
shown that more efficient solutions can be formulated for loosely coupled components.
In multiprocess discrete event systems, coupling is represented by shared events. In
the supervisory control problem, it is shown that the existence of a modular solution
is directly linked, among other factors, to the controllability of shared events, and a modular solution (not necessarily optimal) is always guaranteed for asynchronous multiprocess systems.
6.1 Supervisory control of multiprocess systems
In the IDES setting, it is assumed that the parallel behaviour of the system com-
ponents can be restricted to any interaction specification via direct synchronization.
In other words, all events are assumed controllable and can be disabled at any time
through an external agent. Clearly, this is not always the case in practice. In gen-
eral some of the system events may not be controllable and, therefore, cannot be
disabled. In this case, the given interaction specification may not be implementable
via direct synchronization. Techniques based on supervisory control theory can be
used to design a supervisor that restricts the system behaviour to conform with the
specification while respecting the given control limitations.
In general, supervisor synthesis for a given multiprocess system requires an exhaustive search of the state space of the system and is thus intractable for multiprocess systems. However, in certain situations, an optimal supervisor can be synthesized
without the need to explore all possible behaviour of the system. This can be achieved
by focusing on the common behaviour between the system and the specification, or
alternatively on the differences between the two structures.
6.1.1 Direct synchronization approach
Supervisory synthesis for multiprocess systems is a generalization of computing the
supremal nonblocking compensator for a given IDES. It was shown in the previous
chapter that the supremal non-blocking compensator for a given IDES can be obtained
by expanding only those traces in the composite IDES structure that are needed to
synchronize with the interaction specification. Blocked (bad) states are removed
as discovered during this synchronization. This procedure can be extended to the limited-control situation as follows. The specification is synchronized with the system components incrementally, that is, only states required for such synchronization are explored. During this exploration, a global state consisting of a specification state x and a tuple of component states q = (q1, . . . , qn) is marked “bad” if it satisfies one of the following conditions:
• an uncontrollable event is eligible at q but not eligible at x;
• (q, x) is not marked and cannot reach a marked state;
• (q, x) is the source of an uncontrollable event that leads to a bad state.
Note that the first condition is what distinguishes this procedure from the one
used for finding the supremal non-blocking compensator. No further exploration is attempted at a bad state. The exploration process is repeated until no more exploration is possible and all bad states are identified. These bad states are then removed
from the synchronous composition of the specification and the set of components. It is
easy to see that the outcome of this procedure is a non-blocking maximally permissive
supervisor for the system.
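The three bad-state conditions can be sketched as a fixpoint over the plant/specification product. The thesis builds this product incrementally; for brevity the sketch below assumes the product states, transitions, marking, and per-state eligible-event sets are already at hand, and all names and data are illustrative:

```python
def bad_states(states, trans, marked, uncontrollable, plant_elig, spec_elig):
    """Fixpoint computation of bad states on a plant/specification product.

    trans: dict (state, event) -> state on the product.
    plant_elig / spec_elig: state -> set of events eligible on each side.
    A state is bad if an uncontrollable plant event is not allowed by the
    specification, if it cannot reach a marked state through good states,
    or if an uncontrollable event leads to a bad state.
    """
    bad = {q for q in states
           if (plant_elig[q] & uncontrollable) - spec_elig[q]}
    changed = True
    while changed:
        changed = False
        good = states - bad
        coreach = set(marked) & good     # good states that reach marking
        grew = True
        while grew:
            grew = False
            for (q, e), q2 in trans.items():
                if q in good and q2 in coreach and q not in coreach:
                    coreach.add(q)
                    grew = True
        for q in good:
            if q not in coreach:
                bad.add(q); changed = True
            elif any(e in uncontrollable and trans[(s, e)] in bad
                     for (s, e) in trans if s == q):
                bad.add(q); changed = True
    return bad

# Tiny hypothetical product: at state 2 the uncontrollable event "u" is
# plant-eligible but spec-ineligible, so 2 is bad; state 0 can avoid it
# by disabling the controllable event "c" and reaching the marked state 1.
states = {0, 1, 2}
trans = {(0, "c"): 2, (0, "a"): 1, (2, "u"): 2}
marked = {1}
plant_elig = {0: {"c", "a"}, 1: set(), 2: {"u"}}
spec_elig = {0: {"c", "a"}, 1: set(), 2: set()}
print(sorted(bad_states(states, trans, marked, {"u"},
                        plant_elig, spec_elig)))   # [2]
```

State 0 survives because the only route into the bad state 2 is the controllable event c, which a supervisor may disable; this is exactly why removing the bad states yields a maximally permissive supervisor.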
The worst-case complexity of this synchronization procedure is similar to that of the direct approach, which requires the computation of the synchronous product of the system components. However, this approach is generally more efficient, as only the part of
the system behaviour that is relevant to the specification is explored. This procedure
is expected to be more efficient if the specification is highly restrictive, such that
the state size of the supervised system is much less than that of the synchronous
composition of the system components.
Example 6.1. The system in this example is a three-process version of the system given in Section 5.1.3. Here also the processes are required to synchronize at the initial and terminal states. The system is shown in the following figure.
[Figure: three five-state processes (states 0–4), each with private events ai, bi and the shared synchronization events x and y.]
Figure 6.1: A three process system
It is required to design a supervisor such that events b1, b2, and b3 are executed
sequentially without interleaving with any other event in between. This specification
is shown below, where Σ′ = Σ − {b1, b2, b3}.
[Figure: the specification, a three-state cycle executing b1, b2, b3 with a selfloop on Σ′.]
In this system the set of uncontrollable events is {a1, a2, a3}. All other events are controllable (controllable transitions are marked with a tick on the corresponding edge). Using the direct synchronization procedure, the optimal supervisor for the above specification - shown in Figure 6.2 - is constructed.
[Figure: the optimal supervisor, an automaton over global states (q1, q2, q3); the bad states are shaded.]
Figure 6.2: The optimal supervisor for three process system
In this example all bad states (shaded nodes in the above figure) are identified in one iteration. The total number of explored states is 19, while the number of states of the composite system is 64. □
6.1.2 Detect-first approach
The detect-first approach can also be used in the supervisory synthesis of multiprocess
systems. The basic idea is to isolate and then to remove potential “bad” states.
Potential bad states have the same characteristics as the bad states in the direct
synchronization approach. The only difference is that in the detect-first approach
these bad states are identified first and then checked for reachability (if needed) by
backward tracing. The detect-first approach is explained in the following.
Consider the set of components L and the specification K. We assume initially
that L is live. To identify bad states, we first identify those states in K where not
all the events in Σ are eligible. Let x be such a state in K and let Σ′ be the set of
events that are not eligible at x. Next we identify the set of states q in the composite
structure BΣ(L) that have eligible events from the set Σ′. The combination (q, x) is a
potential bad state if: an uncontrollable event in Σ′ is eligible at q; a subset of Σ′ is
the only set of events that can lead to a marked global state in the composite structure
of (L, K); or it has an uncontrollable event that leads to a bad state. Potential bad
states are then tested for reachability by backward tracing. A confirmed bad state is
then removed from the specification by disabling the latest controllable event leading
to it. This process is repeated until all bad states are eliminated.
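The backward-tracing reachability test used to confirm a potential bad state can be sketched as follows. This is an illustrative sketch, not the thesis's implementation: the state names and the layout of the reversed transition relation are assumptions made for the example.

```python
# Illustrative sketch of the backward-tracing reachability test.
# `reversed_trans` maps each state to its (event, predecessor) pairs,
# i.e. the transition relation with all edges reversed.

def reachable_backward(initial, target, reversed_trans):
    """Trace predecessors from `target` until `initial` is met
    (reachable) or the frontier is exhausted (unreachable)."""
    frontier = [target]
    seen = {target}
    while frontier:
        q = frontier.pop()
        if q == initial:
            return True
        for _event, pred in reversed_trans.get(q, ()):
            if pred not in seen:
                seen.add(pred)
                frontier.append(pred)
    return False
```

A potential bad state is confirmed only if this test returns True; otherwise it is simply recorded as a state to be avoided, as described below.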
Note that, in the detect-first procedure above, we do not need to carry out the
reachability test for all potential bad states. Confirming the status of a potential bad
state is only needed if it affects the status of other states. In other words, we need
to test the reachability of a potential bad state (q, x) only if there exists another
state (q′, x′) where the status of (q′, x′), as being potentially bad or not, depends on
whether (q, x) is an actual bad state or not. Otherwise, if the status of a potential
bad state (q, x) need not be confirmed, then we just supply the control information
needed to avoid reaching this state. This information will be the set of controllable
events to be disabled at a given global state leading to (q, x).
The worst case complexity of the detect-first procedure is similar to that of the
direct approach. However, this approach can be more efficient in certain situations,
particularly when most of the events are controllable and the specification is, roughly
speaking, highly permissive. In such cases, there are few events that are not eligible
at any state of the specification. In this situation, the set of bad states is not likely
to proliferate. Therefore, turning the specification into a supervisor will not require
excessive exploration of the state space of the system.
6.2 Supervisory control of IDES
In the IDES setting of the supervisory control problem, the system is given as a set
of concurrently interacting components, optionally constrained by a given interaction
specification language. The specification, on the other hand, is given as
a language describing a desired behaviour or a task to be achieved. However, it is
always possible to convert any language to an IDES model. Therefore, without loss
of generality, we will assume that the specification is also given as an IDES structure.
The supervisory control problem for IDES is stated formally as follows: given
a system L = (L, K) over a process space Σ generating a language L = BΣ(L),
and a specification language S represented by an IDES structure, S = (S, R) with
S = BΣ(S), construct an IDES supervisor V = (V , T ), such that the supervised
system L/V satisfies BΣ(L/V) ⊆ S. The form of supervision we want to develop in
this section is a component-wise supervision where each component of the supervisor
controls the corresponding component of the system, and the supervisor’s interaction
specification controls the system’s interaction specification. Such supervision will be
referred to as modular supervision. Formally this translates into defining BΣ(L/V)
as follows
BΣ(L/V) = ‖i∈I(Vi/Li) ∩ (T/K)
where Vi is a supervisor for Li with respect to the specification Si. Then (Vi/Li)
denotes the restriction of the ith component of the system by the ith component
of the supervisor. Typically such restriction is achieved by total synchronization
(intersection). The IDES supervision setting is depicted in Figure 6.3.
[Diagram omitted: supervisor components V1, . . . , Vn−1, Vn paired with system components L1, . . . , Ln−1, Ln, and the interaction supervisor T paired with the interaction specification K.]
Figure 6.3: The IDES supervision structure
We will address two main issues regarding the IDES supervision mechanism. The
first is the validity of this form of supervision, that is, to check if IDES supervision
can restrict the system to the given specification. The second issue is optimality,
that is, to check when the IDES supervision can produce the minimally restrictive
behaviour while satisfying the specification.
Recall that for a language S representing a specification for a system language
L, the set of all sublanguages of S that are controllable with respect to L is denoted
CL(S). This set contains a supremal element denoted sup CL(S). This element, if not
empty, corresponds to the optimal supervisor for L with respect to the specification
S. The blocking issue will not be considered here. Therefore, the system and the
specification languages are assumed prefix closed.
Proposition 6.1. Let S1, S2, L ∈ L(Σ) be prefix closed languages. Then

sup CL(S1 ∩ S2) = sup CL(S1) ∩ sup CL(S2)

�
The above proposition is a special case of a more general result in [Won01]. The
following result is a direct consequence of the controllability property.
Proposition 6.2. Let E,L ∈ L(Σ). Then
L ⊆ E =⇒ sup CL(E) = E
�
Optimal supervisors are implemented by the language sup CL(S ∩ L). Based on
the above results sup CL(S)∩L = sup CL(S ∩ L). To simplify notation, this language
will be denoted as sup CL(S). Given this notation, the optimality of IDES supervision
can be expressed by the following condition:
sup CL(S) = ‖i∈I sup CLi(Si) ∩ sup CK(R) = ⋂i∈I P−1i sup CLi(Si) ∩ sup CK(R)
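For intuition, sup CL(S) can be computed over finite prefix-closed languages by the standard fixpoint iteration that deletes strings violating controllability. The following sketch is illustrative only (the thesis gives no implementation): languages are represented as sets of event tuples, and the names `spec`, `plant`, and `sigma_u` are assumptions of the example.

```python
# Illustrative sketch: `spec` plays S, `plant` plays L (both finite and
# prefix closed, as sets of event tuples), `sigma_u` is the set of
# uncontrollable events.

def sup_c(spec, plant, sigma_u):
    """Supremal controllable sublanguage sup CL(S ∩ L) by fixpoint
    iteration: delete any string after which an uncontrollable event is
    possible in the plant but leaves the candidate language."""
    H = {s for s in spec if s in plant}
    changed = True
    while changed:
        changed = False
        for s in list(H):
            if s not in H:
                continue  # already removed as an extension of a bad string
            if any(s + (u,) in plant and s + (u,) not in H for u in sigma_u):
                # remove s and all its extensions, keeping H prefix closed
                H = {t for t in H if t[:len(s)] != s}
                changed = True
    return H
```

For example, with plant {ε, a, au, b}, specification {ε, a, b}, and u uncontrollable, the iteration removes a (since au would escape the specification), leaving {ε, b}.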
Clearly the optimality of IDES supervision implies its validity. Conditions for the op-
timal IDES supervision will be explored hereafter. The validity issue will be discussed
after that in the light of the optimality conditions.
Proposition 6.3.

S̄ = S and L̄ = L =⇒ sup CL(S ∩ L) is prefix closed

Proof. First we show that the closure of sup CL(S) equals sup CL(S). Let H ∈ L(Σ)
be a language. Then

H ∈ CL(S) =⇒ H ⊆ S and HΣu ∩ L ⊆ H
=⇒ H̄ ⊆ S and H̄Σu ∩ L ⊆ H̄    (since S̄ = S and L̄ = L)
=⇒ H̄ ∈ CL(S)

Hence the closure of sup CL(S) belongs to CL(S), and is therefore contained in sup CL(S). Since
sup CL(S) is always contained in its closure, the two must be equal; that is, sup CL(S) is prefix
closed. Now, using Propositions 6.1 and 6.2,

sup CL(S ∩ L) = sup CL(S) ∩ sup CL(L) = sup CL(S) ∩ L

The closure of an intersection is contained in the intersection of the closures, so the closure of
sup CL(S ∩ L) is contained in sup CL(S) ∩ L = sup CL(S ∩ L). Again the reverse containment
always holds, so it must be that sup CL(S ∩ L) is prefix closed.
Based on the above result, the optimal supervisor is guaranteed to be prefix closed
when the system and the specification are prefix closed. The next result shows that
the sup C operation is invariant under inverse projection. We will write Σ̄ to denote
the set Σ ∪ {ε}.
Proposition 6.4.

P−1i sup CLi(Si) = sup CP−1i Li(P−1i Si)
Proof. Let H ⊆ Σ∗i be a language in the set CLi(Si). First, H ⊆ Si ∩ Li implies
P−1i H ⊆ P−1i Si ∩ P−1i Li. Second,

H ∈ CLi(Si) =⇒ HΣui ∩ Li ⊆ H
=⇒ H Σ̄ui ∩ Li ⊆ H    (Σ̄ui = Σui ∪ {ε})
=⇒ P−1i (H Σ̄ui ∩ Li) ⊆ P−1i H
=⇒ P−1i H P−1i Σ̄ui ∩ P−1i Li ⊆ P−1i H
=⇒ P−1i H Σu ∩ P−1i Li ⊆ P−1i H    (Σu ⊆ P−1i Σ̄ui)
=⇒ P−1i H ∈ CP−1i Li(P−1i Si)

For the other direction, let T ∈ L(Σ) be a language in the set CP−1i Li(P−1i Si). First,
T ⊆ P−1i Si ∩ P−1i Li implies that PiT ⊆ Si ∩ Li. Second,

T ∈ CP−1i Li(P−1i Si) =⇒ TΣu ∩ P−1i Li ⊆ T
=⇒ Pi(TΣu ∩ P−1i Li) ⊆ PiT
=⇒ PiT PiΣu ∩ PiP−1i Li ⊆ PiT
=⇒ PiT Σ̄ui ∩ Li ⊆ PiT    (PiΣu = Σ̄ui and PiP−1i Li = Li)
=⇒ PiT ∈ CLi(Si)

Now let X = sup CLi(Si) and let Y = sup CP−1i Li(P−1i Si). Then we have, by supremality,
P−1i X ⊆ Y and PiY ⊆ X. Therefore P−1i PiY ⊆ P−1i X. However, Y ⊆ P−1i PiY.
Then it must be that P−1i X = Y. Therefore, we can write

P−1i sup CLi(Si) = sup CP−1i Li(P−1i Si)

This completes the proof.
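The natural projection Pi and a bounded slice of its inverse P−1i can be illustrated on finite languages. This sketch is an assumption-laden example (function names, the length bound, and the set-of-tuples representation are all illustrative), not the thesis's construction.

```python
from itertools import product

def project(s, sigma_i):
    """Natural projection Pi: erase events outside the component alphabet."""
    return tuple(e for e in s if e in sigma_i)

def inverse_project(K_i, sigma, sigma_i, max_len):
    """A finite slice of P_i^{-1}(K_i): all strings over the global
    alphabet `sigma` of length <= max_len whose projection lies in K_i."""
    return {s for n in range(max_len + 1)
              for s in product(sorted(sigma), repeat=n)
              if project(s, sigma_i) in K_i}
```

For instance, with global alphabet {a, x} and component alphabet {a}, the inverse projection of the component string a (up to length 2) contains a, ax, and xa, and projecting any of them back recovers a.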
The next step is to find conditions under which the synthesis of local supervisors
for the system components can be done optimally, using only information about these
components and without needing any information from the other components of the
system.
Proposition 6.5. Let L = (L, K) be an IDES such that K is controllable with
respect to BΣ(L) and Σs ⊆ Σc. Let L = BΣ(L). Then for all i ∈ I,

sup CL(P−1i Si ∩ P−1i Li) ∩ L = sup CP−1i Li(P−1i Si ∩ P−1i Li) ∩ L
Proof. Let H ∈ L(Σ) be a language. Then

H ∈ CP−1i Li(P−1i Si ∩ P−1i Li) =⇒ HΣu ∩ P−1i Li ⊆ H
=⇒ HΣu ∩ L ⊆ H    (L ⊆ P−1i Li)
=⇒ H ∈ CL(P−1i Si ∩ P−1i Li)

Therefore, we have sup CP−1i Li(P−1i Si ∩ P−1i Li) ⊆ sup CL(P−1i Si ∩ P−1i Li). Now,
assume there exist σ ∈ Σu and a string s ∈ Σ∗ such that s ∈ L, sσ ∈ P−1i Li and
sσ ∉ L. Given that shared events are controllable, σ ∈ Σu ∩ Σi implies that
sσ ∈ P−1j Lj for all j ∈ I − {i}. Also, by the controllability of K, if σ ∈ Σu then
sσ ∈ ⋂i∈I P−1i Li and s ∈ K imply that sσ ∈ K. Hence sσ ∈ L, a contradiction, so no
such σ and s exist. Therefore, we can state that

(∀s ∈ L)(∀σ ∈ Σu ∩ Σi) sσ ∈ P−1i Li =⇒ sσ ∈ L

Hence, for any language T ⊆ L, we have T(Σu ∩ Σi) ∩ P−1i Li ⊆ TΣu ∩ L. Then, we
can write

T ∈ CL(P−1i Si ∩ L) =⇒ TΣu ∩ L ⊆ T
=⇒ T(Σu ∩ Σi) ∩ P−1i Li ⊆ T
=⇒ P−1i Pi(T(Σu ∩ Σi) ∩ P−1i Li) ⊆ P−1i PiT
=⇒ P−1i PiT P−1i Pi(Σu ∩ Σi) ∩ P−1i Li ⊆ P−1i PiT
=⇒ P−1i PiT Σu ∩ P−1i Li ⊆ P−1i PiT

Clearly, T ⊆ P−1i Si ∩ L implies P−1i PiT ⊆ P−1i Si ∩ P−1i Li. Based on the
above we can conclude that P−1i PiT ∈ CP−1i Li(P−1i Si ∩ P−1i Li). This shows that

P−1i Pi sup CL(P−1i Si ∩ L) ⊆ sup CP−1i Li(P−1i Si ∩ P−1i Li)

Given that sup CL(P−1i Si ∩ L) ⊆ P−1i Pi sup CL(P−1i Si ∩ L), and combining this with the
previous result, we can write

sup CL(P−1i Si ∩ L) ⊆ sup CP−1i Li(P−1i Si ∩ P−1i Li) ⊆ sup CL(P−1i Si ∩ P−1i Li)

Now, given that any language H with H ∩ L = ∅ is trivially controllable with
respect to L, it can be seen with little effort that sup CL(P−1i Si ∩ L) =
sup CL(P−1i Si ∩ P−1i Li) ∩ L. Then, by intersecting the above chain with L, we get
sup CL(P−1i Si ∩ P−1i Li) ∩ L = sup CP−1i Li(P−1i Si ∩ P−1i Li) ∩ L. This
completes the proof.
The conditions in the above proposition are sufficient to guarantee the optimality
of supervision at the component level. The question of the necessity of these
conditions does not affect the results in this section and is left for future
investigation. The controllability conditions for the interaction part of the model will be
examined next.
Given the IDES specification S = (S, R), to achieve optimal IDES supervision a
condition is needed such that information about the interaction language K is enough
to calculate the language sup CL(R)∩K. This language represents the optimal super-
visor for the system’s interaction language with respect to the interaction language
of the specification.
Proposition 6.6. Assume R ∩ K is controllable with respect to K. Then

sup CL(R ∩ K) = sup CK(R ∩ K)

Proof. Assume H ∈ CK(R); then HΣu ∩ K ⊆ H. However, HΣu ∩ L ⊆ HΣu ∩ K ⊆ H.
Hence H ∈ CL(R). Therefore

sup CK(R ∩ K) ⊆ sup CL(R ∩ K)

Also, if R ∩ K is controllable with respect to K, then it must be that R ∩ K =
sup CK(R ∩ K). However, it always holds that sup CL(R ∩ K) ⊆ R ∩ K. This shows
that sup CL(R ∩ K) = sup CK(R ∩ K), and both in this case are equal to R ∩ K.
It is clear that the condition that R ∩ K is controllable with respect to K is
sufficient but not necessary. For instance, when K = L, the implication of the
proposition holds trivially, independent of R. However, based on the above proposition,
optimal supervision of the system's interaction language K can be achieved if a
compensator R for the specification is found such that K ∩ R is controllable with respect
to K. In general, a compensator for the specification S is any language R
such that S ⊆ R ⊆ CoΣ(S). Based on that we can state the following result.
Proposition 6.7. There exists an IDES model (S, R) for S such that R is prefix
closed and K ∩R is controllable with respect to K if
S ⊆ sup CK(CoΣ(S))
�
Note that if S is prefix closed, then CoΣ(S) is also prefix closed, and therefore
sup CK(CoΣ(S)) is prefix closed. It is worthwhile here to reflect on limit cases for the
system interaction language, K. In one limit case, when K = L, the supervisor of the
system interactions is always optimal regardless of the specification. However, this is
also the most complex case. The other limit is more interesting, namely when K = Σ∗,
reflecting total parallelism of the system components. Clearly here K ∩ R = R for
any compensator R for the specification. It is straightforward to check that in this
case the above condition holds if and only if RΣu ⊆ R, namely, uncontrollable events
are always enabled in R. For instance, the serial composition interaction U∗V∗ can
be implemented for any two parallel components if and only if the set of events U is
controllable, since U-events must be disabled once a V-event has occurred.
Theorem 6.1. Let L = (L, K) be an IDES over a process space Σ with L = BΣ(L),
and let S ∈ L(Σ). Assume that
1. Σs ⊆ Σc,
2. K is controllable w.r.t. BΣ(L),
3. S ⊆ sup CK(CoΣ(S)).
Then

sup CL(S) = ‖i∈I sup CLi(Si) ∩ sup CK(R)
Proof. Given the third condition, we can construct an IDES model S = (S, R) for
the specification S such that K ∩ R is controllable with respect to K and R is prefix
closed. Based on the above assumptions we can make the following deductions:

sup CL(S) = sup CL(⋂i∈I P−1i Si ∩ R)
= ⋂i∈I sup CL(P−1i Si ∩ P−1i Li ∩ L) ∩ sup CL(R ∩ K)
= ⋂i∈I (sup CL(P−1i Si ∩ P−1i Li) ∩ L) ∩ sup CL(R ∩ K)
= ⋂i∈I (sup CP−1i Li(P−1i Si ∩ P−1i Li) ∩ L) ∩ sup CK(R ∩ K)
= ⋂i∈I sup CP−1i Li(P−1i Si ∩ P−1i Li) ∩ sup CK(R) ∩ L
= ⋂i∈I P−1i sup CLi(Si ∩ Li) ∩ sup CK(R) ∩ L
= ⋂i∈I P−1i sup CLi(Si) ∩ sup CK(R) ∩ L

Clearly, for all i ∈ I we have sup CLi(Si) ⊆ Li; also we have sup CK(R) ⊆ R. Hence

⋂i∈I P−1i sup CLi(Si) ∩ sup CK(R) ⊆ ⋂i∈I P−1i Li ∩ R = L

so the intersection with L in the last line of the chain is redundant. This completes
the proof.
The efficiency of the solution for this problem depends, among other factors, on
the interaction language of the system, as well as that of the specification. Roughly
speaking, the chance of finding an optimal IDES supervisor increases by restricting
the interaction language of the system and relaxing that of the specification.
Example 6.2. The system in this example consists of two processes with a
synchronization mechanism that allows certain forms of coordination using two shared events
x, y. The system is initially arranged such that the two processes work in parallel.
The STD of the system is shown below.
[State transition diagrams omitted: Process A with events op1a–op4a, r1a–r4a, fa1, fa2, ma, and Process B with events op1b–op4b, r1b–r4b, fb1, fb2, mb; the abstract event sets U, V and the shared synchronization events are indicated on the diagram.]
The specification of the system is shown in the figure below. The specification S is
converted to the shown IDES structure S = (P Σ(S), CoΣ(S)).
[Specification automata omitted: the specification S over the events op1a, op2a, op1b, r2b, shown with its compensator CoΣ(S) as Spec A and Spec B; selfloops of remaining events are not shown.]
It is clear that the specification S is not controllable with respect to the interaction
language of the system, K = Σ∗. However, CoΣ(S) is controllable with respect to K
and therefore sup CK(CoΣ(S)) = CoΣ(S). The IDES supervisor for the specification is
shown below.
[Supervisor automata omitted: Sup A over the events op1a, op2a, r1a, r2a and the shared event x, and Sup B over the events op1b–op4b, r1b–r4b, fb1, fb2, mb and x.]
The following figure shows a new specification for the system. The new speci-
fication affects only the local specification for process B. Therefore, the new IDES
supervisor can be obtained from the one above by updating only the supervisor for
process B with respect to its new specification.
[Diagrams omitted: the new specification, the new local specification Spec B over the events op1b, op2b, and the new local supervisor Sup B over the events op1b, op2b, r1b, r2b and x.]
�
The conditions for the optimality of IDES supervision, as given above, relate to
the system, the specification, and the structural and control characteristics of the
process space. However, if optimality of the IDES supervision is not required, some
or all of these conditions can be relaxed. To this end, an IDES supervision is said to
be valid if it can restrict the system behaviour to the given specification. Formally,
this can be described by the following condition:
‖ sup CLi(Si) ∩ sup CK(R) ∈ CL(S)
That is, a valid IDES supervisor corresponds to a (flat) supervisor (not necessarily
optimal) for the overall system.
Theorem 6.2. Let L = (L, K) and S = (S, R) be two interacting discrete event systems
over a process space Σ with L = BΣ(L) and S = BΣ(S). Then ‖ sup CLi(Si) ∩ sup CK(R)
is a valid supervisor for L with respect to S.
Proof. We only need to show that

‖ sup CLi(Si) ∩ sup CK(R) ∈ CL(S ∩ L)

Based on the proofs of the previous propositions, we have

sup CK(R) ∈ CL(R ∩ K)    (proof of Proposition 6.6)
sup CP−1i Li(P−1i Si) ∈ CL(P−1i Si ∩ P−1i Li)    (proof of Proposition 6.5)

Therefore, since controllable languages are closed under intersection, we can write

⋂i∈I sup CP−1i Li(P−1i Si) ∩ sup CK(R) ∈ CL(⋂i∈I (P−1i Si ∩ P−1i Li) ∩ (R ∩ K)) = CL(L ∩ S)
This shows that every IDES supervisor is a valid supervisor for the system. How
close a given IDES supervisor comes to an optimal supervisor cannot be known
in advance. Based on the given conditions, in general, a better solution is more likely
for loosely coupled systems than for tightly coupled ones.
The established conditions for optimal IDES supervision are not mutually dependent.
This allows different possibilities for adjusting the system and its model. For
instance, the condition that requires shared events to be controllable may require
changing the process space structure. The other conditions mainly involve the
interaction specifications of the system and the specification. This adds further
possibilities for adjustment, as interaction specifications are not unique.
Chapter 7
Conclusions
7.1 Summary
In this thesis we investigated the relation between the behaviour and the structure
of interacting multiprocess discrete event systems. In the light of these investigations
we proposed an approach to modelling and analysis of a general class of multiprocess
discrete event systems with well-defined interaction specification. The framework is
based on elementary notions of formal languages and automata. One of the main
objectives of this thesis is to provide the formal basis and the theoretical foundation
for integrating the information about the components interaction in the modelling and
analysis process. As shown in the thesis, such integration can be a key for modular and
efficient behavioural analysis in the complex environment of multiprocess systems.
The relation between the model and the system behaviour is crucial to analysis
in a multiprocess environment. In the IDES framework this relation is founded on the
reversibility of the composition (synchronous product) and decomposition (vector
projection) operations. This reversibility is established based on the compactness
property and the compensation concept. Compensation can be used to convert any
given language to an IDES structure, while a generalized composition operation can
be used to obtain the language generated by a given IDES structure. This reversibility
is an essential element of the analysis approaches presented in this thesis.
In the thesis, we investigated the verification and supervisory control problem
within the IDES framework. We proposed several approaches for solving these problems
while avoiding the direct computation of the synchronous product of the system
components. Also, conditions have been established for modular solutions to the
verification and supervisory synthesis problems. The established conditions emphasize
the effect of the coupling between the system components on the efficiency of
the analysis procedures.
7.2 Future Research
The proposed IDES framework was intended to be a general one. In this thesis,
many important aspects of the behavioural analysis of multiprocess systems have been
addressed. However, much research remains to be done on other important
issues.
Real time systems
The IDES model can easily be extended to represent real-time specifications.
Extending the analysis procedures needs more effort, owing to the special
characteristics of timing information, which requires special treatment. However,
many analytical results in this direction can be obtained by extending the results of
this thesis using the already established results for the verification and supervisory
control of real-time systems.
Multilevel systems
The extension of the IDES framework to multilevel systems in this thesis has
considered only the modelling aspects of the framework. There are many analytical issues
to be addressed for multilevel hierarchical systems. The multilevel structure of the
system can be incorporated and utilized in the analysis process.
Blocking issue
Blocking is one of the major problems in the IDES framework. Further research is
needed to address this problem for interacting multiprocess systems. This includes the
incorporation of different techniques to limit the search domain. Iterative techniques
such as the one discussed for the verification problem may be useful to detect blocking
situations in multiprocess systems efficiently.
Software Implementation
A software implementation of the IDES modelling and analysis framework would
require special arrangement in the input interface to incorporate the interaction spec-
ification and the process space structure (alphabet vector) together with the corre-
sponding operations (composition, decomposition, abstractions, etc.). The
verification and supervisory control procedures described in this thesis can be directly
translated into software functions. Such a software tool is essential to bring the results
of this thesis to potential non-specialist users.
Appendix A
Decomposition and Modularity in
Process Space
In this section, we consider a general definition of decomposability in process space
that takes into account the interaction between the system components. One expected
outcome of utilizing this dimension is to extend the domain of specifications that can
be verified modularly. We also investigate the case when the interaction specification
can guarantee a modular verification for any given specification.
A.1 Decomposition and Modularity
Let Σ be an alphabet vector with index set I and let K ∈ L(Σ) be a language. A
language H ⊆ Σ∗ is said to be (Σ, K)-decomposable, or simply K-decomposable when
Σ is known from the context, if
H = BΣ(P Σ(H), K)
That is, H is (Σ, K)-decomposable if it can be constructed from its vector projection
and the language K. The set of (Σ, K)-decomposable languages is not closed under
union. However, it can be shown easily that it is closed under intersection.
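For finite languages the decomposability condition can be checked directly by rebuilding H from its component projections. The following sketch assumes languages are given as sets of event tuples and alphabets as sets of events; the names are illustrative, not from the thesis.

```python
# Illustrative sketch: check H = B_Sigma(P_Sigma(H), K) for finite
# languages over a two-or-more-component process space.

def project(s, alpha):
    """Erase events outside the component alphabet `alpha`."""
    return tuple(e for e in s if e in alpha)

def is_decomposable(H, K, alphabets):
    """Rebuild H from its component projections restricted to K: keep the
    strings of K whose every component projection is a projection of H,
    and compare the result with H."""
    proj = [{project(s, a) for s in H} for a in alphabets]
    rebuilt = {s for s in K
               if all(project(s, a) in p for a, p in zip(alphabets, proj))}
    return rebuilt == set(H)
```

For instance, with K = {ε, a, b, ab} and disjoint component alphabets {a} and {b}, the sublanguage {ab} is decomposable, while {a, b} is not: rebuilding the latter from its projections also recovers ε and ab.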
A language H is said to be (Σ, K)-semidecomposable if H can be written as a
finite union of a set of (Σ, K)-decomposable languages. Hence, by definition a K-
decomposable language is also K-semidecomposable. An interaction language K is
said to be modular if any sublanguage of K is (Σ, K)-decomposable. Formally, K is
modular if

(∀H ⊆ K) H = BΣ(P Σ(H), K)

An interaction language K is said to be semimodular if any regular sublanguage of K
is (Σ, K)-semidecomposable. Clearly, from the above definitions, if K is modular
then it is also semimodular. Let H be a regular sublanguage of K. Define the set of
(Σ, K)-decomposable sublanguages of H as follows:

DKΣ(H) = {F ⊆ H | F = BΣ(P Σ(F), K)}

We will write D̄KΣ(H) to denote the set of maximal elements of DKΣ(H). Clearly, to show
that K is semimodular we need to show that the set D̄KΣ(H) is finite for any regular
sublanguage H of K.
Proposition A.1. The interaction languages for serial composition, refinement, and
interleaving are semimodular.

Proof. We will consider only the case of a two-component process space; the extension
to the general case is straightforward. Let Σ = {Σ1, Σ2} be a process space with the
corresponding abstract events U = Σ1 − Σ2, V = Σ2 − Σ1 and X = Σ1 ∩ Σ2. Let
A, B ∈ D̄KΣ(H) where A ≠ B. Clearly, there must exist s ∈ A and t ∈ B such that
BΣ(P Σ{s, t}, K) ⊄ H, otherwise either A is contained in B or B is contained in A,
contradicting the assumption that they are maximal. The set {s, t} will be called
an indecomposable pair in H. It is easy to see that if {s, t} is indecomposable in
H then it must be that Pss = Pst, that is, the two strings share the same sequence of
shared events. In the following, we show that the set D̄KΣ(H) is finite by showing that
any two indecomposable strings must differ in at least one state in the automaton of
H. This implies that any two maximals in D̄KΣ(H) have at least one state
not in common. Given that H is regular by assumption and therefore has a finite
number of states, this proves that H has a finite number of maximals, and thus
is K-semidecomposable.
(Serial composition) We consider first the case of synchronous serial composition.
In this interaction scheme, each string in K will have the form uxv where u ∈ U∗,
x ∈ X+ and v ∈ V ∗. Therefore, we can write the two strings s and t as u1xv1 and
u2xv2 respectively where u1, u2 ∈ U∗, v1, v2 ∈ V ∗, and x ∈ X+. If s and t share the
same set of states in H then it must be that u1xv2 and u2xv1 are both in H, therefore,
BΣ(P Σ{s, t}, K) ⊆ H, contradicting the assumption they are indecomposable in H.
Therefore, s and t must have at least one state not in common in H. A similar argument
can be used for the asynchronous serial composition.
(Refinement) Consider the handshaking refinement scheme with the interaction
language K = {U, XV∗X}∗. Then s, t have the form

s = u1x1v2x2u3 . . . un−1xn−1vnxnun+1,  t = a1x1b2x2a3 . . . an−1xn−1bnxnan+1

where ui, ai ∈ U∗ for i ∈ {1, 3, . . . , n + 1}, xk ∈ X for k ∈ {1, . . . , n}, and vj, bj ∈ V∗ for
j ∈ {2, 4, . . . , n}. Therefore, if s and t share the same set of states in H, then we can
exchange each ui with ai or vj with bj, and the resulting string in either case will be in
H. Therefore, in this case BΣ(P Σ{s, t}, K) ⊆ H, contradicting the initial assumption.
The case of synchronized refinement can be proven similarly.
(Interleaving) For K = (UV)∗(U + ε), the strings s, t have the form (uv)n or
(uv)nu where n is any natural number. Note that every string w ∈ K satisfies
|P1(w)| = |P2(w)| + r where r ∈ [0, 1]. Therefore, to be indecomposable, s and t
must satisfy ||s| − |t|| ≤ 1. Without loss of generality, let s = u1v1 . . . unvn and
t = a1b1 . . . anbnan+1 where ui, ai ∈ U and vi, bi ∈ V. If s and t share the same set of
states in H, then t′ = a1b1 . . . anbn is also in H and must be in the
set B ∈ D̄KΣ(H). Clearly s, t′ also share the same set of states. Now, replacing each
occurrence of ui with ai and/or vi with bi results in a string in H. Consequently,
we must have BΣ(P Σ{s, t}, K) ⊆ H, contradicting the initial assumption. Therefore,
s and t′ must have at least one state not in common, and so do s and t.
Therefore, any sublanguage of any of the above interaction schemes can be repre-
sented as a finite union of a set of maximal decomposable sublanguages.
Example A.1. Let Σ = {(a, b, x, y), (c, d, x, y)}. The catenation composition base
for this process space is K = (a, b)∗(c, d)∗. The figure shows a sublanguage of K and
the corresponding set of maximal (Σ, K)-decomposable sublanguages.
[Figure omitted: an automaton for a sublanguage H of K and its two maximal (Σ, K)-decomposable sublanguages H1 and H2, which constitute the set D̄KΣ(H), over the events a, b, c, d.]
�
Modular verification can be achieved when the specification can be transformed
into an equivalent one that can be decomposed and checked locally. Formally, consider
an IDES L = (L, K) over a process space Σ and a specification S ⊆ K. Then
BΣ(L) ⊆ S can be verified locally if S is (Σ, K)-semidecomposable. In this case
we can write S = ⋃n∈N Sn, where the set N is finite and each Sn is a maximal (Σ, K)-
decomposable sublanguage of S. Then,
Proposition A.2.
L ⊆ S ⇐⇒ (∃n ∈ N)(∀i ∈ I) Li ⊆ Pi(Sn)
�
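Proposition A.2 reduces the global containment check to local containments. A finite-language sketch of the reduction follows; the names and the set-of-tuples representation are assumptions of this example, and prefix-closure issues are ignored.

```python
# Illustrative sketch: modular verification via Proposition A.2.
# `components` holds the local languages L_i, `parts` the maximal
# (Sigma, K)-decomposable sublanguages S_n of the specification.

def project_lang(lang, alpha):
    """Componentwise natural projection of a finite language."""
    return {tuple(e for e in s if e in alpha) for s in lang}

def verified_locally(components, parts, alphabets):
    """L ⊆ S holds iff some part S_n satisfies L_i ⊆ P_i(S_n)
    for every component i."""
    return any(all(set(L_i) <= project_lang(S_n, a)
                   for L_i, a in zip(components, alphabets))
               for S_n in parts)
```

Note that only projections of the (usually much smaller) decomposable parts are ever computed; the global composed behaviour is never built.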
The complexity of the above approach depends on the number of maximal (Σ, K)-
decomposable sublanguages of the specification. Clearly, the above proposition will
be more useful if an efficient algorithm is found to decompose any sublanguage of a
modular interaction language into a finite set of (Σ, K)-decomposable languages.
A.2 Deterministic Interaction Specifications
In this section, an attempt is made to characterize semimodular interaction
specifications. An interaction language K is said to be deterministic if the restriction of the
composition of any string vector in the domain P Σ(K) to K is either a single string
or the empty set. Formally,

(∀s ∈ P Σ(K)) |BΣ(s, K)| ≤ 1

Roughly speaking, the determinism property reflects the unambiguity of the components'
interaction. That is, each sequence of events in the system components
contributes a unique sequence of events (possibly none) to the overall system behaviour.
Note the duality of determinism and the completeness property (completeness
requires that |BΣ(s, K)| ≥ 1 for every composable string vector in P Σ(K)). Clearly,
when K is both deterministic and complete, we have |BΣ(s, K)| = 0 if and only
if s is incomposable.
Unfortunately, there is no known general way to check the determinism of a given
language. It is shown in [MRS96], however, that checking the determinism property is
decidable when K is a layout in an asynchronous environment. In certain situations
the interaction language is simple enough to allow determinism to be checked
by inspection. For example, it is straightforward to check that parallel composition
is not deterministic, except in the case of intersection. The following results can help
to test the determinism of simple interaction languages.
Proposition A.3. Let K1 and K2 be deterministic languages over Σ. Then
1. (∀L ∈ L(Σ)) L ⊆ K1 =⇒ L is deterministic.
2. K1 ∩ K2 is deterministic.
3. (∃i ∈ I) PiK1 ∩ PiK2 = ∅ =⇒ K1 ∪ K2 is deterministic.
4. PsK1 ∩ PsK2 = ∅ =⇒ K1 ∪ K2 is deterministic.
Proof. Let K1 and K2 be deterministic languages over Σ.
1. Let L ⊆ K1 be a language. Then P Σ(L) ⊆ P Σ(K1), and therefore s ∈ P Σ(L) implies
that s ∈ P Σ(K1). Also, given that L ⊆ K1, we have |BΣ(s, L)| ≤ |BΣ(s, K1)| ≤ 1.
2. Let s ∈ P Σ(K1 ∩ K2). Then s ∈ P Σ(K1) and s ∈ P Σ(K2). Hence |BΣ(s, K1)| ≤ 1
and |BΣ(s, K2)| ≤ 1. Also we have BΣ(s, K1 ∩ K2) = BΣ(s, K1) ∩ BΣ(s, K2). Then
it must be that |BΣ(s, K1 ∩ K2)| ≤ 1.
3. Let s ∈ P Σ(K1 ∪ K2). Then BΣ(s, K1 ∪ K2) = BΣ(s, K1) ∪ BΣ(s, K2). Therefore,
|BΣ(s, K1 ∪ K2)| ≤ 1 except in the case when BΣ(s, K1) and BΣ(s, K2) are both nonempty
and not equal. In this case let t1 = BΣ(s, K1) and t2 = BΣ(s, K2). Clearly, this
implies that t1 ∈ K1, t2 ∈ K2, and Pit1 = Pis = Pit2 for all i ∈ I. This contradicts
the assumption that there exists some i ∈ I where PiK1 ∩ PiK2 = ∅. Therefore,
K1 ∪ K2 must be deterministic.
4. Similar to the previous case: we must have Pst1 = Pss = Pst2, which contradicts
the assumption that PsK1 ∩ PsK2 = ∅.
From the third part of the above proposition we can conclude that the interaction
language of catenation is deterministic. This is based on the fact that no two
strings in this layout share the same vector projection. Similarly, we can see
that synchronous catenation is deterministic, and the same argument applies to the
interleaving layout. From the third and fourth parts we can conclude that the
interaction language of refinements is deterministic. From the first part we can state
that any sublanguage of a deterministic interaction language is also deterministic.
Therefore, the procedure to obtain the optimal non-blocking compensator preserves
the determinism property.
The finite union of a set of deterministic interaction languages is of special
importance to the topic of this section, and we will refer to such a union as a
semideterministic interaction language. The two-way interleaving interaction is an example of this class.
Theorem A.1.
K is semimodular =⇒ K is semideterministic
Proof. We only need to show that if K is modular, then it is deterministic. Let K be a
modular language, and assume that BΣ(s, K) = H for some s ∈ PΣ(K) with |H| > 1,
that is, H contains more than one string. Then for every string t ∈ H we have Pit = si
for all i ∈ I, so PΣ(t) = s and hence BΣ(PΣ(t), K) = H. This implies that
|BΣ(PΣ(t), K)| > 1, contradicting the initial assumption that K is modular, which
requires BΣ(PΣ(t), K) = {t}. So it must be that |BΣ(s, K)| ≤ 1 for all s ∈ PΣ(K).
Hence, K is deterministic. Then if K is a finite union of modular languages, it is also
a finite union of deterministic languages.
Note that if K is deterministic, it is not necessarily modular. For instance, the
layout of catenation is deterministic but not modular. However, the converse of the
above theorem is believed to hold, at least when K is regular. To prove this, we would
need to show that a deterministic K is semimodular; so far, neither a counterexample
nor a proof has been found for this claim.
Conjecture A.1.
K is semideterministic =⇒ K is semimodular
□
Note that it is easy to show that every sublanguage of a deterministic K can be
written as a union of a set of (Σ, K)-decomposable languages, as every string s ∈ K
is by definition (Σ, K)-decomposable. The problem is to show that the set DKΣ(H)
is finite for any regular H ⊆ K.
Appendix B
Supervisory Control of Structured IDES
The condition in Theorem 6.1 ensures the optimality of the interaction specification
supervision. The following result shows that there is another way to achieve this.
Proposition B.1.
(∀R ∈ L(Σ)) sup CL(R ∩ L) = sup CK(R ∩ K) ⇐⇒ L is controllable with respect to K
Proof. (⇒) Assume that sup CL(R ∩ L) = sup CK(R ∩ K) for any R ∈ L(Σ). Let R = L;
then we get sup CK(L ∩ K) = sup CL(L) = L. Hence, as L ⊆ K, sup CK(L) = L, and
therefore L is controllable with respect to K.
(⇐) We already showed that sup CK(R ∩ K) ⊆ sup CL(R ∩ K) in the proof of
Proposition 6.6. For the other direction, let H ⊆ Σ∗ be a language; then
H ∈ CL(R ∩ K) =⇒ H̄Σu ∩ L ⊆ H̄
=⇒ H̄Σu ∩ (L̄Σu ∩ K) ⊆ H̄ ∩ L    (L̄Σu ∩ K ⊆ L)
=⇒ (H̄ ∩ L)Σu ∩ K ⊆ H̄ ∩ L    (L̄ = L)
=⇒ H ∩ L ∈ CK(R ∩ K)
Therefore, combining all the results above we get
sup CL(R ∩ K) ∩ L ⊆ sup CK(R ∩ K) ⊆ sup CL(R ∩ K)
However, since any language H such that H ∩ L = ∅ is trivially controllable with
respect to L, it can be seen with little effort that sup CL(R ∩ K) ∩ L = sup CL(R ∩ K).
Combining this with the chain above, and given that L ⊆ K, we get
sup CL(R ∩ L) = sup CK(R ∩ K).
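For small finite examples, the controllability condition used throughout this appendix (LΣu ∩ K ⊆ L for prefix-closed L) can be verified by brute force. The sketch below assumes both languages are given as explicit finite prefix-closed sets of event tuples; it is a baseline illustration only and does not address the efficiency concern raised next.

```python
def is_controllable(L, K, sigma_u):
    """L controllable w.r.t. K (both finite, prefix-closed sets of tuples):
    every uncontrollable extension of L that K admits must stay in L,
    i.e. L.Sigma_u intersect K is contained in L."""
    for s in L:
        for sigma in sigma_u:
            t = s + (sigma,)
            if t in K and t not in L:
                return False
    return True
```

For example, with Σu = {u}, the language {ε, a} is not controllable with respect to {ε, a, au} (the uncontrollable extension au is admitted by K but absent from L), whereas {ε, a, au} is.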
The main concern regarding the above result is how to check efficiently that L is
controllable with respect to K without resorting to the computation of the synchronous
product of the system components. More importantly, how can K be adjusted, if
needed, so that L is controllable with respect to K? As discussed earlier, in some
situations the system can be described by a structured IDES model in which the system
interaction specification is associated with its behaviour through a known monotonic
map. In this situation, the system can be described by the model L = (L, K, G)
where K = G(BΣ(L)). Given this setting, the question is how to test locally that L
is controllable with respect to G(L), dealing with the system components individually.
This can be achieved by examining the relation between the controllability of the
system and the controllability of its components under the given abstraction map.
The blocking issue will not be considered here; therefore, all relevant languages are
assumed prefix closed and the map G is assumed prefix preserving.
Proposition B.2. Let G : L(Σ) → L(Σ) be a Σ-abstraction map. Then
(∀i ∈ I) Li is controllable w.r.t G(Li) =⇒ L is controllable w.r.t G(L)
Proof.
(∀i ∈ I) LiΣui ∩ G(Li) ⊆ Li
=⇒ (∀i ∈ I) Pi⁻¹(LiΣui ∩ G(Li)) ⊆ Pi⁻¹Li
=⇒ (∀i ∈ I) Pi⁻¹Li Pi⁻¹Σui ∩ Pi⁻¹G(Li) ⊆ Pi⁻¹Li
=⇒ (∀i ∈ I) Pi⁻¹Li Σu ∩ G(Pi⁻¹Li) ⊆ Pi⁻¹Li    (G ◦ Pi⁻¹ ⊆ Pi⁻¹ ◦ G)
=⇒ ⋂i∈I Pi⁻¹Li Σu ∩ G(⋂i∈I Pi⁻¹Li) ⊆ ⋂i∈I Pi⁻¹Li
=⇒ ⋂i∈I Pi⁻¹Li Σu ∩ G(L) ⊆ ⋂i∈I Pi⁻¹Li ∩ G(L)
=⇒ (⋂i∈I Pi⁻¹Li ∩ G(L))Σu ∩ G(L) ⊆ ⋂i∈I Pi⁻¹Li ∩ G(L)
=⇒ LΣu ∩ G(L) ⊆ L
The condition above does not depend on the control structure of the underlying
process space. However, the situation is different in the reverse direction, as shown
in the next proposition.
Proposition B.3. Let G : L(Σ) → L(Σ) be a Σ-abstraction map. Assume that
Σs ⊆ Σc and G(L) is complete. Then
L is controllable w.r.t G(L) =⇒ (∀i ∈ I) Li is controllable w.r.t G(Li)
Proof. As L is controllable with respect to G(L), we have LΣu ∩ G(L) ⊆ L. Now assume
there exist σ ∈ Σu and a string s ∈ Σ∗ such that sσ ∈ Pi⁻¹PiG(L) and sσ ∉ G(L).
Based on the completeness of G(L), we can state that
sσ ∈ Pi⁻¹PiG(L) and sσ ∉ G(L) =⇒ (∃j ∈ I) sσ ∉ Pj⁻¹PjG(L)
However, if s ∈ G(L), then s(Σ − Σj)∗ ⊆ Pj⁻¹PjG(L). Given also that shared events
are controllable, it must be that σ ∈ (Σj − Σi), or in general σ ∈ (Σ − Σi). Hence
we can state that for any string s ∈ G(L)
(∀σ ∈ (Σu ∩ Σi)) sσ ∈ Pi⁻¹PiG(L) =⇒ sσ ∈ G(L)
Therefore, for any language T ⊆ G(L), we have T(Σu ∩ Σi) ∩ Pi⁻¹PiG(L) ⊆ TΣu ∩ G(L).
Based on that we can write
LΣu ∩ G(L) ⊆ L =⇒ L(Σu ∩ Σi) ∩ Pi⁻¹PiG(L) ⊆ L
=⇒ Pi(LΣui ∩ Pi⁻¹G(PiL)) ⊆ PiL    (Pi ◦ G = G ◦ Pi)
=⇒ PiLΣui ∩ G(PiL) ⊆ PiL
=⇒ LiΣui ∩ G(Li) ⊆ Li    (G is complete)
This completes the proof.
Next we present the main theorem of this section, giving the conditions under which
the controllability of the overall system behaviour w.r.t. its interaction specification
can be tested locally. Note that the abstraction map G is assumed prefix preserving
in the setting of the problem.
Theorem B.1. Let L = (L, K, G) be a structured IDES over a process space Σ with
L = BΣ(L), and Σs ⊆ Σc. Assume that K is complete. Then
(∀i ∈ I) Li is controllable w.r.t G(Li) ⇐⇒ L is controllable w.r.t K
□
The above results can be used to adjust the abstraction map G (locally) so that
L is controllable w.r.t. K. It is important to note the difference between this approach
and the approach adopted in the previous section. In this approach, we are looking
for an interaction specification K for the system L such that L is controllable with
respect to K. Once this K is found, it can be used with any specification. In contrast,
the conditions given in the previous section are specific to the given specification.
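On finite abstractions, the local test suggested by Theorem B.1 amounts to running the component-wise controllability check instead of building the synchronous product. The sketch below is self-contained and purely illustrative: the component languages, the abstraction map G, and all function names are hypothetical stand-ins for the thesis constructions, with languages again modelled as finite prefix-closed sets of tuples.

```python
def is_controllable(L, K, sigma_u):
    """L controllable w.r.t. K: L.Sigma_u intersect K is contained in L."""
    return all(s + (u,) in L
               for s in L for u in sigma_u
               if s + (u,) in K)

def locally_controllable(components, G, sigma_u_locals):
    """Local test in the spirit of Theorem B.1: check each component Li
    against its own abstraction G(Li), using only local alphabets.
    By Proposition B.2 this is sufficient for global controllability;
    under Sigma_s <= Sigma_c and completeness it is also necessary."""
    return all(is_controllable(Li, G(Li), su)
               for Li, su in zip(components, sigma_u_locals))
```

For example, with G the identity map every component passes trivially, whereas an abstraction that appends an uncontrollable event to every string of a component makes the local test fail for that component.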