Dottorato di Ricerca in Informatica
XII ciclo
Università di Salerno
Algorithmic Issues in Distributed Computing
Gianluca De Marco
November 2000
Coordinator: prof. Alfredo De Santis
Advisor: prof. Luisa Gargano
Abstract
This thesis discusses the following central algorithmic issues in distributed computing:
communication, symmetry breaking, incomplete knowledge and fault tolerance.
We start by considering the problem of graph coloring in an n-vertex graph of maximum
degree Δ, when vertices have only partial topological knowledge of the graph. We present
the first known O(Δ) vertex-coloring distributed algorithm which can work faster than in
polylogarithmic time. The result is extended to the edge-coloring problem by giving an
algorithm achieving an O(Δ) edge-coloring in the same time. Vertex- and edge-coloring
algorithms are also presented in the one-port model, the weakest among the communication
models considered so far for this problem.
We discuss the problem of broadcasting with partial knowledge of the network. The
problem presents interesting issues of symmetry breaking, both under the extensively
investigated radio model and under the one-way communication model. The latter is the
weakest of all store-and-forward models for point-to-point networks, and hence algorithms
designed for this model also work for other models in at most the same time. We present
new algorithms for both models. In particular, for the one-way model, we provide interesting
trade-offs between communication time and the amount of knowledge available to each vertex.
We consider the problem of concurrent multicast (CM), that is, the problem of information
dissemination from a set of source nodes to a set of destination nodes in a weighted
communication network. We assume the realistic model that includes the edge weights in
the communication cost of an algorithm. We consider both the classical case, in which all
the blocks of data known to a node can be freely concatenated and the resulting message
transmitted to the destination node, and the case in which message transmissions must
consist of one block of data at a time. We prove that the CM problem is NP-hard under
both assumptions, and we therefore provide approximation algorithms.
Finally, we consider the problem of broadcast in two very popular networks, the n-
dimensional hypercube and the n-dimensional star graph, under the issue of fault tolerance.
We allow the location of the failures to change at any time unit (dynamic failures). We
prove that the n-dimensional hypercube and the n-dimensional star graph exhibit very
good performance even in the presence of link failures.
Acknowledgements
I would like to thank Luisa Gargano and Ugo Vaccaro for having initiated me into research
and for their invaluable supervision and collaboration. I have also appreciated their advice
throughout my research experience. Many thanks are extended to Adele A. Rescigno for
her always friendly attitude in our joint works.
I also feel indebted to Andrzej Pelc for his hospitality during my stay at the Université
du Québec à Hull, from September 1999 to June 2000, and for many fruitful discussions.
In addition, I would like to express my gratitude to my family and my friends. Special
thanks are due to Zia Rosa and Zio Pietro for their generosity. I am also very thankful to
Mila and Giuseppe and to my cousins Elena and Elisa.
Contents
Abstract iii
Acknowledgements v
1 Introduction 1
1.1 Distributed systems and distributed computing . . . . . . . . . . . . . . . . 1
1.1.1 The graph model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.2 Complexity measures . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Research overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2.1 Research issues in distributed computing . . . . . . . . . . . . . . . 4
1.2.2 Research contributions of this thesis . . . . . . . . . . . . . . . . . . 6
1.3 Summary of results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2 Fast distributed graph coloring 10
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.1.1 The problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.1.2 Linial's model and the terminology . . . . . . . . . . . . . . . . . . . 11
2.1.3 Previous work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.1.4 Our results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.2 The combinatorial tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.3 O(Δ)-coloring in Linial's model . . . . . . . . . . . . . . . . . . . . . 16
2.3.1 Vertex-coloring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.3.2 Edge-coloring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.4 O(Δ)-coloring in weaker models . . . . . . . . . . . . . . . . . . . . . . . 18
3 Deterministic broadcasting time with partial knowledge 21
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.1.1 The problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.1.2 Related work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.1.3 The models and terminology . . . . . . . . . . . . . . . . . . . . . . 23
3.2 Deterministic broadcasting in the one-way model . . . . . . . . . . . . . . . 24
3.2.1 Preliminary results: knowledge radius 0 . . . . . . . . . . . . . . . . 25
3.2.2 Knowledge radius 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.2.3 Knowledge radius 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.2.4 Larger knowledge radius . . . . . . . . . . . . . . . . . . . . . . . . . 34
3.3 Deterministic broadcasting in the radio model . . . . . . . . . . . . . . . . . 36
3.3.1 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
3.3.2 Smaller selective families and faster broadcasting . . . . . . . . . . . 37
4 Concurrent multicast in weighted networks 39
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
4.1.1 Statement of the problem and summary of our results . . . . . . . . 40
4.1.2 Related work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
4.2 Multi-digraphs associated to instances of concurrent multicast . . . . . . 42
4.3 Characterization of a minimum cost instance . . . . . . . . . . . . . . . . . 43
4.4 Proof of Theorem 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
4.5 Algorithms for Concurrent Multicast . . . . . . . . . . . . . . . . . . . . . . 52
4.5.1 Approximation Algorithms . . . . . . . . . . . . . . . . . . . . . . . 53
4.5.2 On-line algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . 55
4.6 Concurrent multicast without block concatenation . . . . . . . . . . . . . . 56
4.7 Communication time and communication complexity . . . . . . . . . . . . . 57
5 Broadcasting in hypercubes and star graphs with dynamic faults 61
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
5.2 Broadcasting in the Hypercube . . . . . . . . . . . . . . . . . . . . . . . . . 62
5.3 Broadcasting in the Star Graph . . . . . . . . . . . . . . . . . . . . . . . . . 65
6 Summary and open problems 69
References 72
Chapter 1
Introduction
1.1 Distributed systems and distributed computing
Despite their wide diffusion today, there is no complete agreement on how distributed
systems should be defined. It is impossible to capture precisely, with a single suitable
definition, all the possible environments and aspects in which a distributed system can
arise. Following a rather common definition, a distributed system can be viewed as
a collection of computing units which can exchange information with each other. This
definition includes wide and local area networks, multiprocessor computers and systems
of cooperating processes.
Perhaps a safer approach is to view distributed systems simply as opposed to centralized
systems. The latter are composed of a single controlling unit: at any single unit of time,
at most one autonomous activity can be carried on. In distributed systems, on the other
hand, there may be many autonomous active units at the same time. Moreover, these
units (processes) are able to communicate with each other in order to cooperate and
coordinate their actions.
Another possible source of confusion, generated by the progress of multi-processing systems,
arises among three closely related computing frameworks: concurrent, parallel and
distributed computing. Concurrent computing refers to multi-processing activities arising
on a single machine. The difference between parallel and distributed computing lies in the
goals of the system. Parallel systems are usually designed to exploit the power
of cooperation in order to handle very complex problems for which a single-processor machine
would require unreasonable time. In other terms, parallel computers are designed with
the objective of solving a problem by exploiting the benefit of splitting the job over several
processors.
In contrast, distributed systems have much more individual goals. In such systems there
are many individual users, each interested in their own goals. The main difference
between the two settings seems to lie in the different ways of considering cooperation.
In parallel systems, the cooperation among processors is viewed in a positive sense:
the possibility of having many active processes at the same time is exploited in order to
reduce the computation time. In distributed systems, we are in general more interested
in the inherent limitations of the system rather than in its advantages. Indeed, many
difficulties arise when dealing with multi-processing systems, such as collisions,
incomplete knowledge, and the coordination of actions. Under this view, it is clear that
algorithms for distributed systems are not devoted to solving specific data-processing tasks;
rather, they are oriented towards finding basic protocols which provide services to the
users placed (distributed) at the various points of the system (routing, broadcasting,
construction of spanning trees, maximal independent sets, ...). It should be remarked, at this
point, that the difference between the two kinds of systems is not very clean, but tends to be
quite vague, so that it is not rare to find examples in which the characteristics outlined
above for the two systems merge.
1.1.1 The graph model
The processors of a distributed system, together with the wires connecting them, can be
viewed as a network in which every node corresponds to a processor of the system. The
network is usually modelled by a simple connected undirected graph G = (V, E), where
the set of vertices V represents the set of processors of the network, and the set of edges E
corresponds to the set of communication links. Moreover, when dealing with complexity
analysis (as we will see in the following chapter), it is assumed that one or more weight
functions are associated with the set of edges of the graph.
There are many models of communication in distributed systems, depending on the
underlying architecture. For the purposes of this thesis we consider the message passing
model in which, differently from the opposed shared memory model, there is no common
memory storing data accessible by all processors; rather, every processor has its own local
memory and uses the communication links of the system to exchange information with
the other processors.
Every processor of the network has a unique identifier ID. Precisely, it is assumed that
there is an injective mapping ID from the set of vertices to the positive integers.
Every vertex v is connected, through external connection points called ports (numbered
1 to deg_G(v)), to its set of adjacent vertices. In order to send a message to its neighbor u,
the vertex v loads the message on the corresponding port. At any given time, at most
one message can occupy a communication channel, and it is usually assumed that the
allowable message size is O(log n) bits. Larger messages can be sent by splitting them into
a sequence of smaller messages of size O(log n).
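As a concrete (toy) illustration, not part of the thesis, the port abstraction and the splitting of a long message into pieces of O(log n) bits can be sketched as follows; the names `make_ports` and `split_message` are invented for this example.

```python
import math

def make_ports(adj, v):
    """Ports of vertex v, numbered 1..deg(v): port number -> neighbor."""
    return {port: u for port, u in enumerate(sorted(adj[v]), start=1)}

def split_message(payload, n):
    """Split a long bit-string into pieces of ceil(log2 n) bits each."""
    size = max(1, math.ceil(math.log2(n)))
    return [payload[i:i + size] for i in range(0, len(payload), size)]

def reassemble(pieces):
    """The receiver concatenates the pieces back in order."""
    return "".join(pieces)
```

For example, in a 4-cycle a vertex of degree 2 has ports 1 and 2, and a 16-bit payload in a 16-node network is split into 4-bit pieces.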
1.1.2 Complexity measures
In distributed computing, two complexity measures are usually considered: the message
complexity and the time complexity.
In the message passing model, the message complexity of an algorithm is the maximum
total number of messages sent over all the possible executions of the algorithm.
Before giving the definition of time complexity, we need to anticipate the notion of
synchrony provided by the system. A more detailed definition of synchrony will be given in the
next section. Briefly, the system is synchronous when the execution of an algorithm can
be partitioned into rounds: in every round, each processor can send messages, the messages
are delivered, and each processor can execute local computation on the basis of the messages
received; the system is asynchronous when there is no limit on the time which can elapse
between two consecutive steps of a processor. Thus, in synchronous systems, the time
complexity can simply be considered as the number of rounds until termination (an algorithm
is assumed to have terminated when all processors are inactive and no messages are in
transit). Hence, in the synchronous message passing system, the time complexity is defined
as the maximum number of rounds in any admissible execution of the algorithm until its
termination. A common approach to defining the time complexity for the asynchronous
system is to assume that the maximum message delay in any execution is one unit of time.
Therefore, we adopt the following definition: the time complexity of an algorithm for the
asynchronous message passing model is the worst-case number of time units until its
termination, assuming that each message incurs a delay of at most one time unit. Since
asynchronous computation is inherently nondeterministic, the "worst case" refers to all
possible inputs and all possible scenarios over each input. Obviously, this definition may
be used only for performance evaluation; it does not imply that the message delay is bounded.
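Both measures can be observed on the standard synchronous flooding broadcast (a textbook example, not an algorithm of this thesis); the simulator below, with the invented name `flood`, returns the number of rounds and the total number of messages.

```python
def flood(adj, source):
    """Synchronous flooding: a node that becomes informed sends the message
    to all of its neighbors in the next round. Returns (rounds, messages):
    the time complexity and the message complexity of this execution."""
    informed = {source}
    frontier = {source}              # nodes that send in the current round
    rounds = messages = 0
    while len(informed) < len(adj):
        rounds += 1
        new_frontier = set()
        for v in frontier:
            for u in adj[v]:
                messages += 1        # one message per (sender, neighbor) pair
                if u not in informed:
                    informed.add(u)
                    new_frontier.add(u)
        frontier = new_frontier
    return rounds, messages
```

On a path of four nodes, flooding from one end takes a number of rounds equal to the eccentricity of the source.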
1.2 Research overview
The main themes of research in distributed computing concern the issues that most dis-
tinguish the distributed environment from the sequential one.
1.2.1 Research issues in distributed computing
Here we list some of the main questions which are unique to the distributed computing
scenario and which inspire some of the most active areas of research.
• Communication
Communication represents a central issue in the distributed environment. If we
exclude the low-level hardware questions which concern centralized systems,
we can see the need to communicate information between different objects as
something which uniquely characterizes the distributed setting. It is clear that
communication should be dealt with as a computational resource which does not come for
free, but which has to be used sparingly. In many situations the cost of
communication dominates other costs; for this reason, many theoretical papers on
distributed computing assume that the local computations at the processors of the
system come for free, and that the only costs to be considered are those associated with
communication.
• Incomplete knowledge
In centralized systems, there is complete knowledge of all data involved in the
computation, including the information obtained during the computation itself.
In distributed systems, a processor usually has only partial knowledge of the network.
This happens in many practical situations: think, for example, of dynamic
networks in which the topology is not fixed but can vary over time. In such
systems, it is too expensive to inform all the processors about every change.
Thus, incomplete knowledge is an important issue to take into account
when designing distributed algorithms. There are many different models corresponding
to various levels of topological knowledge available at the processors. The most
extreme case is that of anonymous networks, where it is assumed that every processor
of the network knows nothing about the topology, nor even its own ID. More
realistic models assume that nodes know their own ID and the part of the network
within a certain knowledge radius from them. Such realistic models often also assume
the knowledge, at the various sites of the network, of additional information
concerning the underlying graph, such as the degree of the graph and (the order of
magnitude of) the number of nodes.
• Failures
In distributed computing there are many components which can fail during the
execution of a distributed algorithm. Protection from failures is also very important
in the centralized setting. The difference is that, while in the centralized case there
is nothing to do against hardware or software failures but find the source of the
problem and fix it, in distributed systems things are less simple. Here failures are
very frequent, and since the failure of a system component does not necessarily
imply the failure of the entire system, the goal is to ensure that the algorithm
works properly even in the presence of occasional failures of some components
(although at the expense of efficiency). Clearly, every effort should be made to
reduce the inevitable side effects on efficiency.
• Time and synchrony
One of the most important notions in distributed computing is that of time. The
study of time is strictly connected with the concept of synchrony provided by
the system in use. In distributed computing, we usually distinguish between two
extreme models: the synchronous and the asynchronous one.¹
¹Here we consider only the two extreme models, sometimes also referred to as the fully synchronous
and the totally asynchronous ones, although intermediate models are also present in the literature.
In the synchronous model, it is assumed that each processor has access to a global
clock which controls the rounds of communication: if a message is sent at pulse p
of the clock, it must reach the destination node before pulse p + 1 is generated by
the clock. In this way, one can think of the actions of the system as composed
of cycles of computation, during which each processor can execute the following
steps:
1. Send messages to (some of) the neighbors.
2. Receive messages from (some of) the neighbors.
3. Perform some local computation.
In the asynchronous model, when sending a message, processors have no idea of
when the message will reach its destination; all they know is that the message
will arrive within some finite time. This is a strong limitation: in particular, a
processor cannot deduce that a message was lost simply by waiting, because it may
always be the case that the message is still in transit.
• Symmetry breaking
There are many situations in distributed computing where two or more processors
wish to have mutually exclusive access to a resource. The problem is to find a
criterion which allows each processor to determine when it should acquire access, in
such a way as to avoid collisions with the other processors, i.e., situations in which
two or more processors try to access the same resource. Problems of this kind
(symmetry breaking problems) are very popular in distributed computing.
1.2.2 Research contributions of this thesis
This thesis deal with central issues in distributed computing: communication, symmetry
breaking, incomplete knowledge and fault tolerance. The work is organized as follows.
Symmetry breaking and incomplete knowledge
In Chapter 2 we consider the problem of deterministic distributed coloring of an n-vertex
graph with maximum degree Δ, assuming that every vertex knows a priori only its own
label and the parameters n and Δ. The aim is to get a fast algorithm using few colors.
Linial [84] showed a vertex-coloring algorithm working in time O(log* n) and using O(Δ²)
colors. We improve both the time and the number of colors simultaneously by showing an
algorithm working in time O(log*(n/Δ)) and using O(Δ) colors. This is the first known
O(Δ)-vertex-coloring distributed algorithm which can work faster than in polylogarithmic
time.
Our method also gives an edge-coloring algorithm with the number of colors and time
as above. On the other hand, it follows from Linial [84] that our time of O(Δ)-coloring
cannot be improved in general. In addition, we show how our method gives fast coloring
algorithms in communication models weaker than Linial's.
Chapter 3 deals with distributed deterministic broadcasting with incomplete knowledge
available to nodes. We adopt two widely studied communication models: the one-way
model and the radio model.
In the one-way model, in every round each node can communicate with at most one
neighbor, and in each pair of nodes communicating in a given round, one can only send a
message while the other can only receive it. This is the weakest of all store-and-forward
models for point-to-point networks, and hence our algorithms work for other models as
well in at most the same time. We show trade-offs between knowledge radius and time
of deterministic broadcasting, when the knowledge radius is small, i.e., when nodes are only
aware of their close vicinity. While for knowledge radius 0 the minimum broadcasting time is
Θ(e), where e is the number of edges in the network, broadcasting can usually be completed
faster for positive knowledge radius. Our main results concern knowledge radii 1 and 2.
We develop fast broadcasting algorithms and analyze their execution time. We also prove
lower bounds on broadcasting time, showing that our algorithms are close to optimal for
a given knowledge radius. For knowledge radius 1 we develop a broadcasting algorithm
working in time O(min(n, D²Δ)), where n is the number of nodes, D is the diameter
of the network, and Δ is the maximum degree. We show that for bounded maximum
degree Δ this algorithm is asymptotically optimal. For knowledge radius 2 we show how
to broadcast in time O(DΔ log n) and prove a lower bound Ω(DΔ) on broadcasting time,
when DΔ ∈ O(n). This lower bound is valid for any constant knowledge radius. For
knowledge radius log* n + 3 we show how to broadcast in time O(DΔ). Finally, for any
knowledge radius r, we show a broadcasting algorithm working in time O(D²Δ/r).
In the radio model, each node can send its message to all of its neighbors, but if a
node u can be reached from two nodes which send messages in the same round, none of
the messages is received by u. We study the time of distributed deterministic broadcasting
in synchronous radio networks of unknown topology and size. Precisely, we assume that
nodes are completely ignorant of the network: they know neither its topology, nor its size,
nor even their immediate neighborhood. The initial knowledge of every node is limited to
its own label. In [28] a broadcasting algorithm working in time O(n^{11/6}) was constructed
under this total ignorance scenario. We improve this result by showing how to broadcast
in time O(n^{5/3} (log n)^{1/3}) in the same model.
Further improvements of deterministic broadcasting time in unknown radio networks have
since been obtained. In [29] an algorithm working in time O(n^{3/2}) was developed.
Independently, an upper bound of O(n^{3/2} √(log n)) was shown in [93] using a different
method. Finally, the fastest currently known algorithm for this task is the broadcasting
algorithm from [30], working in time O(n log² n).
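The reception rule of the radio model (a node hears a message only when exactly one of its neighbors transmits in that round) can be sketched as a small simulation; this toy, with the invented name `radio_round`, is only an illustration of the collision rule, not of the broadcasting algorithms themselves.

```python
def radio_round(adj, transmitters):
    """One synchronous round of the radio model: node u receives a message
    iff exactly one of its neighbors transmits; two or more transmitting
    neighbors collide at u and nothing is received there."""
    received = {}
    for u in adj:
        senders = [v for v in adj[u] if v in transmitters]
        if len(senders) == 1:
            received[u] = senders[0]   # u hears that single neighbor
        # with 2 or more senders, the messages collide and u hears nothing
    return received
```

On a star network, two transmitting leaves jam the center, while a single transmitting leaf is heard.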
Communication and fault tolerance
In Chapter 4 we focus on the concurrent multicast problem. This is the problem of
information dissemination from a set of source nodes to a set of destination nodes in a
network with cost function: each source node s needs to multicast a block of data B(s)
to the set of destinations. We are interested in protocols for this problem which have
minimum communication cost, where the communication cost of a protocol is the sum
of the costs of all message transmissions performed during its execution. The typical
measure of the communication cost of an algorithm is the number of messages sent across
the network during its execution. This measure assumes that the cost of sending a message
along any channel is equal to one. We use the more realistic assumption that includes the
edge weights into the communication cost of an algorithm. We consider both the classical
case in which any transmitted message can consist of an arbitrary number of data blocks
and the case in which each message must consist of exactly one block of data. We show
that the problem of determining the minimum cost to perform concurrent multicast from
a given set of sources to a set of destinations is NP-hard under both assumptions. We
give approximation algorithms to efficiently perform concurrent multicast in arbitrary
networks. We also analyze the communication time and communication complexity, i.e.,
the product of the communication cost and time, of our algorithms.
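As a toy numeric illustration of this cost measure (an invented example, not from the thesis): each transmission over an edge costs that edge's weight, so concatenating blocks into one message saves cost compared with sending one block per message. The function names are ours.

```python
def cost_with_concatenation(transmissions, weight):
    """Communication cost when any number of blocks can be concatenated:
    each transmission (edge, num_blocks) costs the edge weight once."""
    return sum(weight[edge] for edge, blocks in transmissions)

def cost_one_block_per_message(transmissions, weight):
    """Communication cost when each message carries exactly one block:
    an edge carrying b blocks is paid b times."""
    return sum(weight[edge] * blocks for edge, blocks in transmissions)
```

For instance, two sources forwarding one block each through a relay r to a destination d pay the weight of edge (r, d) once under concatenation, but twice without it.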
In Chapter 5 we consider the problem of broadcasting in the n-dimensional hypercube
under the hypothesis that each node can inform all of its n neighbours in one unit of time
and that any n − 1 message transmissions can fail during each time unit. Under these
assumptions we prove that broadcasting can be accomplished in only 7 time units more
than would be necessary in the absence of transmission failures. This
improves on previously published results. Moreover, we prove that at least n + 2 time units
are necessary. We also prove that broadcasting in the n-dimensional star interconnection
network can be accomplished in only 11 time units more than would be necessary
in the absence of transmission failures.
Our upper bound of n + 7 has since been improved in [45], where a matching upper bound
of n + 2 was found.
1.3 Summary of results
Chapter 2 contains the first known O(Δ) vertex-coloring distributed algorithm which
can work faster than in polylogarithmic time. The results of Chapter 2 were obtained
jointly with Andrzej Pelc and will appear in [36].
In Chapter 3 we present results on the deterministic broadcast problem in the one-
port model and the radio model under the issue of incomplete knowledge. The results
were obtained in joint works with Andrzej Pelc [37, 38].
Chapter 4 contains results on the concurrent multicast problem obtained in a joint
work with Luisa Gargano and Ugo Vaccaro that will appear in [42]. An extended abstract
appeared in [43].
In Chapter 5 there are results on fault tolerant broadcasting in hypercubes and
star graphs. The results were obtained in a joint work with Ugo Vaccaro and appeared in [39].
I am grateful to all my co-authors for having allowed me to include in this thesis results
obtained in joint works with them.
Chapter 2
Fast distributed graph coloring
2.1 Introduction
2.1.1 The problem
Vertex coloring of a graph G = (V, E) is one of the fundamental graph problems. This
is the task of finding an assignment f : V → {1, ..., k} such that f(x) ≠ f(y) whenever
vertices x and y are adjacent. Such an assignment is called a k-vertex-coloring. While
the problem of finding a k-vertex-coloring of a graph using the least number of colors is
NP-hard, it is easy to find a (Δ + 1)-vertex-coloring (where Δ denotes the maximum
degree of G) using a centralized algorithm knowing the entire graph.
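The centralized (Δ + 1)-vertex-coloring referred to here is the standard greedy procedure; a minimal sketch (the function name is ours):

```python
def greedy_coloring(adj):
    """Color the vertices in any order, giving each the smallest positive
    integer not used by an already-colored neighbor. Since a vertex has at
    most Delta neighbors, at most Delta + 1 colors are ever needed."""
    color = {}
    for v in adj:
        taken = {color[u] for u in adj[v] if u in color}
        c = 1
        while c in taken:
            c += 1
        color[v] = c
    return color
```

On a 5-cycle (Δ = 2), greedy coloring uses at most three colors.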
A related problem is that of edge-coloring. In this case we seek an assignment g :
E → {1, ..., k} such that g(e1) ≠ g(e2) whenever edges e1 and e2 have a common endpoint.
Vizing's theorem [103] states that the minimum number of colors is either Δ or
Δ + 1, and provides a centralized algorithm, knowing the entire graph, which finds an
edge-coloring with at most Δ + 1 colors.
The tasks of coloring the vertices or edges of a graph become much more difficult if the coloring
has to be found in a distributed way, and the vertices of the graph have a priori only local
knowledge. If the time of the coloring algorithm has to be smaller than the time needed
to learn the entire graph, the coloring method must rely only on the partial knowledge of the
graph accessible to vertices during the algorithm's execution. The aim of this chapter is to show
fast deterministic distributed coloring algorithms (both in the case of vertex-coloring and of
edge-coloring) using only O(Δ) colors.
2.1.2 Linial's model and the terminology
Throughout the chapter we consider an n-vertex graph G = (V, E) with maximum degree
Δ. All vertices have distinct labels, which are integers from the set {1, ..., n}. We use
the computational model described by Linial [84] and later used, e.g., in [65, 88, 89]. It
is assumed that the vertices of the graph model processors; hence all computations are
carried out by them, and they store the algorithm's output. Here are the assumptions of
the model.
1. The a priori knowledge of every vertex consists of its own label and of the parameters
n and Δ.
2. Computations proceed in synchronous rounds controlled by a global clock.
3. In every round every vertex can perform the following actions:
(a) execute an arbitrary amount of local computations,
(b) get messages from all of its neighbors,
(c) send messages to all of its neighbors.
4. All actions of vertices are deterministic.
Upon completion of a distributed k-vertex-coloring algorithm, every vertex outputs an
integer from the set {1, ..., k}, such that adjacent vertices have different outputs. Likewise,
upon completion of a distributed k-edge-coloring algorithm, the coloring function g :
E → {1, ..., k} is output as follows. Every vertex v outputs an integer g_v(e) from the set
{1, ..., k}, for every edge e incident to v, and these integers satisfy the following conditions:
• If e = {v, w}, then g_v(e) = g_w(e);
• If e1 and e2 are distinct edges incident to v, then g_v(e1) ≠ g_v(e2).
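These two consistency conditions can be checked mechanically; below is a small verifier sketch (the representation of edges as frozensets and the function name are our own choices for this illustration).

```python
def is_valid_edge_coloring_output(adj, out):
    """out[v][e] is the color output by vertex v for incident edge e,
    where an edge e is represented as frozenset({v, w}). Returns True
    iff both conditions of a k-edge-coloring output hold."""
    for v in adj:
        incident = [frozenset((v, u)) for u in adj[v]]
        # condition 1: both endpoints output the same color for each edge
        for e in incident:
            (w,) = e - {v}
            if out[v][e] != out[w][e]:
                return False
        # condition 2: colors of distinct edges incident to v are distinct
        colors = [out[v][e] for e in incident]
        if len(colors) != len(set(colors)):
            return False
    return True
```

On a three-vertex path a-b-c, the assignment giving edge {a, b} color 1 and edge {b, c} color 2 (consistently at both endpoints) is accepted; breaking the endpoint agreement is rejected.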
The time of such an algorithm is the number of rounds it uses in the worst case.
It is clear that after D rounds, where D is the diameter of the graph, every vertex can
get complete knowledge of the graph, which thus yields a (Δ + 1)-coloring in D rounds (both
in the case of edges and in that of vertices). The problem becomes non-trivial when efficient
coloring has to be done faster.
In the sequel we will often use the short phrases coloring and k-coloring instead of vertex-coloring
and k-vertex-coloring, respectively. For edge-coloring and k-edge-coloring we will
use the full phrases.
Since we deal with local knowledge available to nodes, we will often use the phrase:
vertex v knows the part of the graph G at distance at most r from v. By this we mean that
v knows all vertices at distance at most r from it, and all edges e = {x, y} such that at
least one of the vertices x or y is at distance < r from v. (Thus v need not know the
adjacencies between vertices at distance exactly r from it.) The same definition was adopted,
e.g., in [4]. It is easy to see that every node can acquire this knowledge in r rounds in
Linial's model.
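This radius-r view can be computed by a truncated breadth-first search; a sketch with invented names, following the definition literally (edges are included exactly when at least one endpoint is at distance less than r):

```python
from collections import deque

def view(adj, v, r):
    """Return the radius-r view of v: all vertices at distance <= r,
    and all edges {x, y} with at least one endpoint at distance < r."""
    dist = {v: 0}
    queue = deque([v])
    while queue:                       # BFS truncated at depth r
        x = queue.popleft()
        if dist[x] == r:
            continue
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    vertices = set(dist)
    edges = {frozenset((x, y)) for x in adj for y in adj[x]
             if min(dist.get(x, r), dist.get(y, r)) < r}
    return vertices, edges
```

On the path 0-1-2-3, the radius-2 view of vertex 0 contains vertices {0, 1, 2} but not the edge {2, 3}, since both of its endpoints are at distance at least 2 from vertex 0.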
2.1.3 Previous work
The problem of fast vertex- and edge-coloring in the above distributed model has been
extensively studied. Cole and Vishkin [32] described an algorithm achieving a 3-coloring
of an n-vertex ring in time O(log* n). Actually, the algorithm in [32] was stated for the
PRAM model, but it is straightforward to see that it is valid for the above distributed
model as well. Linial [84] showed a matching lower bound Ω(log* n) on the time needed for
3-coloring of the ring. In the same paper he showed a deterministic O(Δ²)-coloring in
time O(log* n), for arbitrary n-vertex graphs of maximum degree Δ. The latter result was
previously shown in [62] only for graphs of constant degree. Linial [84] asked whether this
quadratic bound can be improved at the cost of increasing the time from O(log* n) to, e.g.,
polylogarithmic.
Both randomized and deterministic distributed vertex- and edge-coloring have also
been studied in [3, 65, 88, 89]. For edge-coloring, the best randomized algorithm is an
O(log log n)-time algorithm achieving a (1+ε)Δ-coloring [65], where ε is any given positive
constant. For the deterministic case, the best known edge-coloring algorithm has time
complexity 2^{O(√log n)} [88]. On the other hand, for vertex-coloring, there are randomized
and deterministic algorithms that use Δ + 1 or Δ colors. For the randomized case, the
fastest algorithm known so far is polylogarithmic [85, 88], and for the deterministic case,
the fastest one has time complexity n^{O(1/√log n)} [89].
A closely related problem is that of color reduction: given a c-coloring, obtain an f(c)-
coloring in 1 round, for f(c) < c. A solution of this problem implies f(n)-coloring in
1 round, since labels can be considered as n different colors. Cole and Vishkin [32] and
Goldberg, Plotkin and Shannon [62] showed a method that achieves a color reduction from
c colors to O(Δ log c) colors. Linial's method [84] reduces the number of colors from c
to O(Δ² log c). Szegedy and Vishwanathan [100] showed non-constructively a reduction
from c to O(Δ 2^Δ log log c) colors, which is better than the previous results when c is much
bigger than Δ. Mayer, Naor and Stockmeyer [86] showed how to make the latter method
constructive, achieving color reduction from c to O(Δ² 3^Δ log log c) colors.
2.1.4 Our results
Our main result is a deterministic O(�)-vertex-coloring algorithm working in O(log�(n=�))
rounds. Thus we improve both the number of colors and execution time of [84]. Conse-
quently we give a strong positive answer to Linial's question. To the best of our knowl-
edge, ours is the �rst O(�)-vertex-coloring distributed algorithm which works faster than
in polylogarithmic time. (As mentioned above, the fastest previously known O(�)-vertex-
coloring randomized algorithm works in polylogarithmic time and the fastest deterministic
algorithm works in time O(nO(1=plogn)).)
Our method also gives an O(Δ)-edge-coloring algorithm working in O(log*(n/Δ)) rounds.
(In fact, it can be observed that any O(Δ)-vertex-coloring algorithm yields an O(Δ)-edge-
coloring algorithm working in the same time.) On the other hand, it follows from Linial
[84] that our time of O(Δ)-coloring cannot be improved in general.
In addition we show how our method gives fast coloring algorithms in communication
models weaker than Linial's. If every node can communicate with only one neighbor in a
round (as opposed to all neighbors in Linial's model) then distributed O(Δ)-coloring can
be done in time O(Δ log(n/Δ)).
The chapter is organized as follows. In Section 2.2, we describe the combinatorial
tool used in our algorithms. In Section 2.3 we describe the coloring algorithms in Linial's
model. In Section 2.4 we show how these algorithms yield fast and efficient coloring in
weaker models.
2.2 The combinatorial tool
Consider a graph G with n vertices and maximum degree Δ. Let [n] = {1, 2, ..., n}. We
define a list Q = (Q_1, Q_2, ..., Q_m) of queries on [n] as a sequence of m subsets of [n] (m
is called the length of Q). Consider an arbitrary node v of G. Let Γ(v) be the set of all
neighbors of v, and let I_v denote the set {v} ∪ Γ(v).
Definition 2.2.1 Given a node v of G, we say that the list of queries Q isolates v (re-
spectively w ∈ Γ(v)) in I_v if there exists Q_j in Q such that Q_j ∩ I_v = {v} (respectively
Q_j ∩ I_w = {w}). In this case we also say that Q_j isolates v (respectively w) in I_v.
As we will see in the proof of Theorem 2.3.1, our algorithm uses the concept of isolating
a vertex v in I_v in order to choose a color for v that is different from any neighbor's color.
The following lemma uses a probabilistic argument similar to those in [33, 56, 75].
Lemma 2.2.1 There exists a list of queries Q on [n] of length O(Δ log(n/Δ)) such that
for every n-vertex graph G of maximum degree Δ ≥ 2 and any node v of G, Q isolates at
least ⌈|I_v|/2⌉ elements in I_v.
Proof: As a first step we prove the existence, for any k ≤ ⌈log Δ⌉, of a list of queries
Q_k on [n], of length O(2^k log(n/2^k)), which isolates at least ⌈(d+1)/2⌉ elements in any
set I_v of size |I_v| = d + 1, where 2^{k-1} < d ≤ 2^k.
Let k and d be as above. Let v be an arbitrary node of G such that |I_v| = d + 1.
Consider the list of queries Q_k = {Q_1, ..., Q_t} for some t, where each query Q_i is formed
by randomly and independently including each element j ∈ [n] with probability 1/(2^k + 1).
The probability that Q_i isolates at least one element x ∈ I_v which is not isolated by any
previous query Q_l, for 1 ≤ l < i, is at least
(d + 1) · (1/(2^k + 1)) · (1 − 1/(2^k + 1))^{2d} ≥ 1/(2e²).
Hence, the probability that there exists a set I_v of size (d + 1) which contains at least
⌈(d+1)/2⌉ elements that are not isolated by Q_k, is at most
(n choose d+1) · (d+1 choose ⌈(d+1)/2⌉) · (1 − 1/(2e²))^t
  = 2^{log(n choose d+1) + log(d+1 choose ⌈(d+1)/2⌉) − t·log(2e²/(2e²−1))}.
Since log(n choose k) ∈ O(k log(n/k)), the above expression is less than 2^{−d}, for some
t ∈ O(d log(n/d)) = O(2^k log(n/2^k)).
Hence the probability that for some d, such that 2^{k-1} < d ≤ 2^k, there exists a set I_v
of size (d + 1) with the above property, is at most Σ_{d=2^{k-1}+1}^{2^k} 2^{−d} < 1. Consequently, the
probability that a random list Q_k of length O(2^k log(n/2^k)) isolates at least ⌈(d+1)/2⌉
elements in any set I_v of size |I_v| = d + 1, for any 2^{k-1} < d ≤ 2^k, is positive. Hence such
a list exists.
In order to complete the proof, consider the list of queries Q on [n] resulting from the
concatenation of Q_1, Q_2, ..., Q_p, where p = ⌈log Δ⌉. Its length is O(Δ log(n/Δ)) and it
isolates at least ⌈|I_v|/2⌉ elements in any set I_v, since |I_v| ≤ Δ + 1. □
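The random construction in the proof above can be sketched concretely. The following is a Monte Carlo sketch (function names and the value of t are illustrative assumptions, not taken from the thesis): it draws random queries with the inclusion probability used in the proof and retries until the isolation property of Lemma 2.2.1 holds on one fixed small graph.

```python
import math
import random

def isolated_elements(adj, v, Q):
    """Elements x of I_v = {v} ∪ Γ(v) isolated by some query (Definition 2.2.1)."""
    return {x for x in {v} | adj[v]
            if any(Qj & ({x} | adj[x]) == {x} for Qj in Q)}

def random_query_list(n, k, t, rng):
    """t random queries on [n]; each element included with probability 1/(2^k + 1)."""
    p = 1.0 / (2 ** k + 1)
    return [{x for x in range(1, n + 1) if rng.random() < p} for _ in range(t)]

# Example: the 5-cycle, maximum degree Delta = 2, so k = ceil(log Delta) = 1.
adj = {1: {2, 5}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4, 1}}
rng = random.Random(1)
t = 16  # illustrative constant times 2^k log(n / 2^k)
while True:  # the event has positive probability, so retrying terminates a.s.
    Q = random_query_list(5, 1, t, rng)
    if all(len(isolated_elements(adj, v, Q)) >= math.ceil(len({v} | adj[v]) / 2)
           for v in adj):
        break
```

The retry loop mirrors the existence argument: since a random list succeeds with positive probability, some list succeeds, and sampling finds one.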
Definition 2.2.2 Given a list of queries Q = (Q_1, Q_2, ..., Q_m) on [n] and a node v of G,
we define a corresponding list of sets I = (I_v^1, I_v^2, ..., I_v^m), where I_v^j ⊆ I_v^0 = I_v, as follows:
I_v^j = I_v^{j-1} \ {w},  if there exists a neighbor w of v such that Q_j ∩ I_w = {w};
I_v^j = I_v^{j-1} \ {v},  if Q_j ∩ I_v^{j-1} = {v};
I_v^j = I_v^{j-1},        otherwise.
A list of queries Q on [n], of length m, is said to be (Δ, n)-separating if for any n-vertex
graph G of maximum degree Δ, and any node v of G, I_v^m = ∅.
Lemma 2.2.2 For all integers n and Δ, such that 2 ≤ Δ ≤ n, there exists a (Δ, n)-
separating list of queries, of length O(Δ log(n/Δ)).
Proof: Lemma 2.2.1 guarantees, for i = 1, 2, ..., ⌈log Δ⌉, the existence of a list of
queries Q_i on [n] of length O(2^i log(n/2^i)) such that for any n-vertex graph G of maximum
degree 2^i and any node v of G, Q_i isolates at least ⌈|I_v|/2⌉ elements in I_v. Consider the
list of queries Q given by the concatenation of Q_{⌈log Δ⌉}, Q_{⌈log Δ⌉−1}, ..., Q_2, Q_1 followed
by the query {1, 2, ..., n}. It is easy to see that Q is a (Δ, n)-separating list of length
O(Δ log(n/Δ)). □
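The update rule of Definition 2.2.2 can be checked mechanically on small instances. The sketch below (hypothetical helper names) evolves I_v^j under a query list and tests the separating property for one given graph, rather than for all graphs of maximum degree Δ as the definition requires; it reads the two removal cases of Definition 2.2.2 as applying independently, which is harmless since a neighbor isolated while v's set is a singleton has already been removed.

```python
def residual_set(adj, v, Q):
    """Return I_v^m after applying the update rule of Definition 2.2.2."""
    cur = {v} | adj[v]            # I_v^0 = I_v
    for Qj in Q:
        if Qj & cur == {v}:       # v itself is isolated: drop it
            cur.discard(v)
        # drop every neighbor w isolated in its own set I_w by this query
        cur -= {w for w in adj[v] if Qj & ({w} | adj[w]) == {w}}
    return cur

def separates(adj, Q):
    """True if I_v^m is empty for every vertex of this particular graph."""
    return all(not residual_set(adj, v, Q) for v in adj)

# The 5-cycle: three queries already empty every I_v here, far fewer than the
# trivial separating list of all n singleton queries.
adj = {1: {2, 5}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4, 1}}
```

Note that the list of all singletons {1}, ..., {n} separates every graph on [n]; the point of Lemma 2.2.2 is that length O(Δ log(n/Δ)) suffices.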
2.3 O(Δ)-coloring in Linial's model
2.3.1 Vertex-coloring
We now show how our combinatorial tool can be applied to get fast distributed O(Δ)-
coloring. We start with an algorithm which produces a coloring using more colors but
which can be performed in constant time.
Theorem 2.3.1 Let G be an n-vertex graph with maximum degree Δ. It is possible to
find distributively an O(Δ log(n/Δ))-coloring of G in 2 rounds.
Proof: Every vertex knows a priori its own label and the parameters n and Δ. Since
in a single round vertices can communicate with all their neighbors, after 2 rounds every
vertex knows the part of the graph at distance at most 2 from it. More precisely, any
vertex v knows I_w, for any w ∈ I_v.
Lemma 2.2.2 is proved in a non-constructive way, i.e., the existence of a (Δ, n)-
separating list of queries Q is shown but this list is not constructed. However, in view
of the assumptions of Linial's model (cf. Section 2.1.2), we will show that this is suffi-
cient to carry out deterministic O(Δ log(n/Δ))-coloring without communication, provided
that every vertex v knows I_w, for any w ∈ I_v. Indeed, this knowledge, together with a
priori knowledge of the parameters n and Δ, permits a vertex to decide whether a given list
of queries is (Δ, n)-separating. Thus all vertices can perform an identical local exhaustive
search until the required list of queries Q is found. Since vertices perform this search
identically, they get the same list Q. Since the search is local, it can be performed in a
single round. Consequently, we may assume that after 2 rounds all vertices have the same
(Δ, n)-separating list Q of length O(Δ log(n/Δ)).
Let v ∈ V be an arbitrary vertex of G. It chooses color j if and only if I_v^{j-1} ∩ Q_j = {v}.
Since, in view of Lemma 2.2.2, the length of Q is O(Δ log(n/Δ)), all vertices of G choose
a color among at most O(Δ log(n/Δ)) possibilities. So it remains to prove that no pair of
neighbors in G get the same color.
Suppose, for the purpose of contradiction, that there is a pair of neighbors v and w
with the same color j. This means that
Q_j ∩ I_v^{j-1} = {v} and Q_j ∩ I_w^{j-1} = {w}.
Thus w ∈ Q_j, and so we must have w ∉ I_v^{j-1}, since Q_j ∩ I_v^{j-1} = {v}. However, w ∈ I_v^0 = I_v.
This implies the existence of a query Q_k in Q, k < j, such that Q_k ∩ I_w = {w}.
Clearly, I_w^{k-1} ⊆ I_w. Two cases are possible: if w ∈ I_w^{k-1}, then Q_k ∩ I_w^{k-1} = {w}, and so
the color of w is k < j; on the other hand, if w ∉ I_w^{k-1}, then for some m < k we must
have Q_m ∩ I_w^{m-1} = {w}, and so the color of w is m < k < j. Thus in both cases we get
a contradiction. This proves that no pair of neighbors get the same color, and hence an
O(Δ log(n/Δ))-coloring has been found. □
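The color choice in the proof above can be simulated directly. In this sketch (helper name hypothetical), each vertex scans the query list, maintains I_v^{j-1} according to Definition 2.2.2, and takes color j as soon as Q_j ∩ I_v^{j-1} = {v}; the three-query list used below empties every I_v of the 5-cycle (checked directly for this graph, not a general (Δ, n)-separating list):

```python
def query_coloring(adj, Q):
    """Each vertex takes color j iff Q_j ∩ I_v^{j-1} = {v} (proof of Theorem 2.3.1)."""
    colors = {}
    for v in adj:
        cur = {v} | adj[v]                        # I_v^0
        for j, Qj in enumerate(Q, start=1):
            if Qj & cur == {v}:                   # v isolated: color j is chosen
                colors[v] = j
                cur.discard(v)
            # Definition 2.2.2: drop neighbors isolated in their own sets
            cur -= {w for w in adj[v] if Qj & ({w} | adj[w]) == {w}}
    return colors

adj = {1: {2, 5}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4, 1}}
colors = query_coloring(adj, [{1, 3}, {2, 5}, {4}])
# Three queries suffice for a proper coloring of the 5-cycle with colors in {1, 2, 3}.
```

Each vertex runs this scan locally and identically, which is exactly why no communication is needed once all vertices hold the same query list.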
Using the same argument we can get the following more general result. (Recall that,
given an initial coloring, every node can learn the colors of all its neighbors in a single round.)
Theorem 2.3.2 Let G be an n-vertex graph with maximum degree Δ. Given a c-coloring
of G, it is possible to find distributively an O(Δ log(c/Δ))-coloring of G in 2 rounds.
It is easy to see that after r iterations of the color reduction described in Theorem 2.3.2
(starting with n colors identical to labels), we get an O(Δ log^{(r)}(n/Δ))-coloring. (Here log^{(r)}
denotes the r-times iterated logarithm.) Thus log*(n/Δ) iterations reduce the number of
colors to O(Δ), and each iteration can be done in 2 rounds. Thus we obtain the following
theorem, which is the main result of this chapter.
Theorem 2.3.3 Let G be an n-vertex graph with maximum degree Δ. Then an O(Δ)-
coloring of G can be found distributively in O(log*(n/Δ)) rounds.
On the other hand, it follows from Linial [84] that our time of O(Δ)-coloring cannot
be improved in general. Indeed, Linial [84] showed that 3-coloring of an n-vertex ring
requires Ω(log* n) rounds. The same argument shows that Ω(log* n) rounds are necessary
for getting any O(1)-coloring of the ring.
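The log*-type convergence behind Theorem 2.3.3 is easy to see numerically. The sketch below iterates an illustrative reduction c → ⌈CΔ log₂(c/Δ)⌉ (the constant C = 4 is an assumption for the demonstration, not a constant from the thesis) until the color count stops shrinking; the number of iterations behaves like an iterated logarithm.

```python
import math

def reduction_schedule(n, delta, C=4):
    """Color counts after repeatedly applying c -> ceil(C * delta * log2(c / delta))."""
    c, history = n, [n]
    while True:
        nxt = math.ceil(C * delta * math.log2(c / delta))
        if nxt >= c:          # reached the O(delta) regime: stop
            break
        c = nxt
        history.append(c)
    return history

# Even for n = 2**64 the schedule collapses in a handful of iterations.
```

The first application already drops n colors to O(Δ log n); each further round knocks off one level of logarithm, which is the mechanism counted by the log*(n/Δ) bound.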
2.3.2 Edge-coloring
We now show how our vertex-coloring algorithm can be modified to obtain distributed
O(Δ)-edge-coloring in O(log*(n/Δ)) rounds. (In fact, it can be observed that, using the
same method as described below, any O(Δ)-vertex-coloring algorithm yields an O(Δ)-
edge-coloring algorithm working in the same time.) Similarly as before, we first produce
an edge-coloring with more colors but in constant time.
Theorem 2.3.4 Let G be an n-vertex graph with maximum degree Δ. It is possible to
find distributively an O(Δ log(n/Δ))-edge-coloring of G in 3 rounds.
Proof: After 3 rounds every node knows the part of the graph at distance at most 3
from it. Consider the line graph G* of G. Its vertices are the edges of G, and two of them are
adjacent if the respective edges have a common end-point in G. Thus k-edge-coloring of
G is equivalent to k-(vertex-)coloring of G*. Consider any edge e = {x, y} of G. After 3
rounds vertices x and y know the part of the graph G* at distance at most 2 from its vertex
e. More precisely, x and y know all edges incident to e and all edges incident to these edges.
Hence they can simulate the actions which the O(Δ log(n/Δ))-vertex-coloring algorithm
from the proof of Theorem 2.3.1 prescribes to vertex e of G*. Thus they will both get an
appropriate color for e, and hence distributively output an O(Δ log(n/Δ))-edge-coloring of
G. □
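The reduction in the proof, from edge-coloring G to vertex-coloring the line graph G*, is mechanical; a small sketch (representing each edge of G as a frozenset of its endpoints, an encoding chosen here for convenience):

```python
from itertools import combinations

def line_graph(adj):
    """G*: vertices are the edges of G; two are adjacent iff they share an endpoint."""
    edges = {frozenset((u, w)) for u in adj for w in adj[u]}
    ladj = {e: set() for e in edges}
    for e, f in combinations(edges, 2):
        if e & f:                     # common endpoint in G
            ladj[e].add(f)
            ladj[f].add(e)
    return ladj

# A proper vertex-coloring of line_graph(adj) is exactly a proper edge-coloring
# of adj; the maximum degree of G* is at most 2(Delta - 1).
```

The degree bound 2(Δ − 1) is what makes an O(Δ)-vertex-coloring of G* an O(Δ)-edge-coloring of G.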
Similarly as before, the iteration of the above gives:
Theorem 2.3.5 Let G be an n-vertex graph with maximum degree Δ. Then an O(Δ)-edge-
coloring of G can be found distributively in O(log*(n/Δ)) rounds.
Remark. Notice a small difference between the first and the subsequent iterations. If
e = {x, y}, then in order to learn the name of an edge e' = {x', y'} at distance 2 from e in the
line graph, i.e., such that x and x' are adjacent in G, node y needs to learn the label of y';
i.e., 3 rounds are required in the first iteration, as mentioned in Theorem 2.3.4. However,
if an edge-coloring C is already obtained in a subsequent iteration, both nodes x' and y'
know the color of e' in C, and hence it is enough that node y communicates with x' to
learn this color. Consequently 2 rounds are sufficient for each subsequent iteration.
2.4 O(Δ)-coloring in weaker models
The results of Section 2.3 hold under the computational model of Linial, described in
Section 2.1.2. This is a very strong model. In particular, its communication part assumes
that every node can exchange messages with all of its neighbors in a single round. This
model is often referred to as all-port (cf., e.g., surveys [51, 68]). Weaker models, restricting
the number of messages that a node can simultaneously send and/or receive, often reflect
more realistically the communication constraints of a network. Among these weaker models
one of the most popular is the 1-port model [51, 68]. It assumes that, in every round,
each node can communicate with at most one neighbor. There are two variations of this
model: full-duplex, assuming that a communicating pair of nodes can exchange messages
in one round, and half-duplex, assuming that transmissions can only go one way at a
time. However, one round in the full-duplex model can be replaced by two rounds in the
half-duplex model, hence the orders of magnitude of time complexity in both variations are
the same. The one-port model has been used, e.g., in [47, 48, 63, 79, 74, 98]. Since the
assumptions of this model are quite restrictive, algorithms designed for it work also for
other models (intermediate between one-port and all-port, as, e.g., the postal model from
[25]).
In this section we show how our coloring algorithms designed for Linial's model can
be adapted for weaker communication models. We work in the one-port model; more
precisely, we modify Linial's model in the following two points (cf. Section 2.1.2):
1. is replaced by
1'. The a priori knowledge of every vertex consists of the part of the graph at distance
at most 3 from it, and of the parameters n and Δ.
3. is replaced by
3'. In every round every vertex can perform the following actions:
(a) execute an arbitrary amount of local computations,
(b) exchange messages with one neighbor.
Note that the knowledge of the part of the graph at distance at most 3 from a node
can be acquired in 3 rounds in Linial's model, while item 3'. is strictly weaker than 3. in
Linial's model.
The following algorithm produces an O(Δ)-vertex-coloring and an O(Δ)-edge-coloring.
Algorithm One-port-coloring
The algorithm works in log* n phases. In the first phase, using knowledge of the graph
at "radius 3", an O(Δ log(n/Δ))-vertex-coloring and an O(Δ log(n/Δ))-edge-coloring are
obtained, as described in Section 2.3. Suppose by induction that after phase i < log* n, an
O(Δ log^{(i)}(n/Δ))-vertex-coloring V and an O(Δ log^{(i)}(n/Δ))-edge-coloring C are obtained.
Let t ∈ O(Δ log^{(i)}(n/Δ)) be the number of colors in C. Phase i + 1 lasts 2t rounds,
divided into two consecutive segments of length t. In the jth round of each segment,
communications go along edges with color j in the edge-coloring C. In the first segment all
vertices report their own color in V and the colors of their incident edges in C. In the
second segment every vertex relays the information obtained in the first segment. Thus
phase i + 1 simulates two rounds of the (i+1)th iteration from Section 2.3 (cf. the Remark
at the end of Section 2.3).
Theorem 2.4.1 Let G be an n-vertex graph with maximum degree Δ. Algorithm One-
port-coloring finds distributively an O(Δ)-vertex-coloring and an O(Δ)-edge-coloring of G in
O(Δ log(n/Δ)) rounds, in the one-port model.
Proof: The fact that transmissions are scheduled using the edge-coloring C guarantees
that there are no conflicts, i.e., the specifications of the one-port model are respected.
After log* n phases we obtain an O(Δ)-vertex-coloring and an O(Δ)-edge-coloring. It fol-
lows from Theorem 2.3.2 that the ith phase of Algorithm One-port-coloring lasts t_i ∈
O(Δ log^{(i)}(n/Δ)) rounds. The execution time of the entire algorithm is therefore
Σ_{i=1}^{log* n} t_i ∈ O(Δ log(n/Δ)). □
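The final sum collapses because iterated logarithms shrink so fast that the first term dominates. A quick numerical check of this fact (illustrative, base-2 logarithms):

```python
import math

def iterated_log_terms(x):
    """[log x, log log x, ...] (base 2), stopping once a term drops to 1 or below."""
    terms = []
    while x > 1:
        x = math.log2(x)
        terms.append(x)
    return terms

# For any x of interest the whole sum is at most twice the leading term,
# which is why sum_i t_i stays in O(Delta log(n/Delta)).
```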
Chapter 3
Deterministic broadcasting time with partial
knowledge
3.1 Introduction
3.1.1 The problem
Broadcasting is one of the fundamental tasks in network communication. One node of
the network, called the source, has a message which has to reach all other nodes. In
synchronous communication messages are sent in rounds controlled by a global clock. In
this case the number of rounds used by a broadcasting algorithm, called its execution time,
is an important measure of performance. Broadcasting time has been extensively studied
in many communication models (cf. surveys [51, 67, 68]) and fast broadcasting algorithms
have been developed.
If network communication is to be performed in a distributed way, i.e., message schedul-
ing has to be decided locally by nodes of the network without the intervention of a central
monitor, then the efficiency of the communication process is influenced by the amount of
knowledge concerning the network a priori available to nodes. It is often the case that nodes
know their close vicinity (for example, they know their neighbors) but do not know the
topology of remote parts of the network.
The aim of this chapter is to study the impact of the amount of local information
available to nodes on the time of broadcasting. Each node v knows only the part of the
network within knowledge radius r from it, i.e., it knows the graph induced by all nodes
at distance at most r from v. Apart from that, each node can also know the maximum
degree Δ of the network and the total number n of nodes.
3.1.2 Related work
Network communication with partial knowledge of the network has been studied by many
researchers. This topic has been extensively investigated, e.g., in the context of radio
networks. In [16] broadcasting time was studied under the assumption that nodes of a radio
network know a priori their neighborhood. In [28, 29] an even more restrictive assumption
has been adopted, namely that every node knows only its own label (knowledge radius
zero in our terminology). In [34] a restricted class of radio networks was considered and
the partial knowledge available to nodes concerned the range of their transmitters.
In [59] the time of broadcasting and of two other communication tasks was studied in
point-to-point networks, assuming that each node knows only its own degree. However,
the communication model was different from the one assumed in this chapter: every node
could simultaneously receive messages from all of its neighbors.
In [4] broadcasting was studied assuming a given knowledge radius, as we do in this
chapter. However, the adopted efficiency measure was different: the authors studied the
number of messages used by a broadcasting algorithm, and not its execution time, as we
do.
A topic related to communication in an unknown network is that of graph exploration
[11, 44, 90]: a robot has to traverse all edges of an unknown graph in order to draw a map
of it. In this context the complexity measure is the number of edge traversals which is
proportional to execution time, as only one edge can be traversed at a time.
In the above papers communication algorithms were deterministic. If randomization
is allowed, very efficient broadcasting is possible without knowing the topology of the
network; cf., e.g., [16, 53]. In fact, in [16] the differences in broadcasting time in radio
networks between the deterministic and the randomized scenarios were the main topic of
investigation.
Among numerous other graph problems whose distributed solutions with local knowledge
available to nodes have been studied, we mention graph coloring [84], fault mending
[80] and label assignment [52].
3.1.3 The models and terminology
The communication network is modelled by a simple undirected connected graph with a
distinguished node called the source. n denotes the number of nodes, e the number
of edges, Δ the maximum degree, and D the diameter. All nodes have distinct
labels which are integers between 1 and n, but our algorithms and arguments are easy to
modify when labels are in the range 1 to M, where M ∈ O(n).
Communication is deterministic and proceeds in synchronous rounds controlled by
a global clock. Only nodes that already got the source message can transmit, hence
broadcasting can be viewed as a wake-up process.
We adopt two widely used models: the one-way model (cf. [68]), also called the 1-port
half-duplex model [51], and the radio model.
In the one-way model, in every round, each node can communicate with at most one
neighbor, and in each pair of nodes communicating in a given round, one can only send an
(arbitrary) message, while the other can only receive it. This model has been used, e.g.,
in [47, 48, 63, 79, 74]. It has the advantage of being the weakest of all store-and-forward
models for point-to-point networks (cf. [51]), and hence algorithms designed for this model
work also for other models (allowing more freedom in sending and/or receiving), in at most
the same time.
A radio network is a set of transmitter-receiver stations, called nodes. Every node
can reach a given subset of other nodes, depending on the power of its transmitter and on
the topography of the region. A radio network can thus be modelled by its reachability
graph, in which the existence of a directed edge (u, v) means that node v can be reached from
u. We work in the communication model used, e.g., in [1, 16, 28, 58, 76]. Nodes send messages in
synchronous rounds measured by a global clock which indicates the current round number.
In every round every node acts either as a transmitter or as a receiver. A node acting as
a receiver in a given round gets a message if and only if exactly one of its neighbors
transmits in this round. The message received is the one that was transmitted. If at least
two neighbors of u transmit in a given round, none of the messages is received by u in this
round. In this case we say that a collision occurred at u. Two scenarios are possible in
case of a collision and both were studied in the literature (cf., e.g., [16, 28, 64]). Node u
may either hear nothing (except for the background noise), or it may receive interference
noise different from any message received properly but also different from background
noise. These two scenarios are referred to as the absence (resp. availability) of collision
detection.
For a natural number r we say that r is the knowledge radius of the network if every
node v knows the graph induced by all nodes at distance at most r from v. For example, if
the knowledge radius is zero, each node knows nothing but its own label; if the knowledge radius
is 1, each node knows its own label and the labels of all neighbors, knows which of its adjacent
edges joins it with which neighbor, and knows which of its neighbors are adjacent to each other.
The latter assumption is where our definition of knowledge radius differs from that in [4],
where knowledge radius r meant that a node v knows the graph induced by all nodes at
distance at most r from v with the exception of adjacencies between nodes at distance
exactly r from v. However, all our results hold for this weaker definition as well. In fact,
we show that our lower bounds are valid even under the stronger notion and we construct
the algorithms using only the weaker version from [4], thus obtaining all results under
both definitions of knowledge radius.
3.2 Deterministic broadcasting in the one-way model
In this section we consider the one-way model.
We show trade-offs between knowledge radius and time of deterministic broadcasting, when
the knowledge radius is small, i.e., when nodes are only aware of their close vicinity. We assume
that apart from this partial topological information, each node knows only the maximum
degree Δ of the network and the number n of nodes.
While for knowledge radius 0, minimum broadcasting time is Θ(e), where e is the
number of edges in the network, broadcasting can usually be completed faster for positive
knowledge radius. The main results concern knowledge radii 1 and 2. For knowledge
radius 1, we develop a broadcasting algorithm working in time O(min(n, D²Δ)), and we
show that for bounded maximum degree Δ this algorithm is asymptotically optimal. For
knowledge radius 2, we show how to broadcast in time O(DΔ log n) and prove a lower
bound Ω(DΔ) on broadcasting time, when DΔ ∈ O(n). This lower bound is valid for any
constant knowledge radius. For knowledge radius log* n + 3 we show how to broadcast
in time O(DΔ). Finally, for any knowledge radius r, we show a broadcasting algorithm
working in time O(D²Δ/r).
3.2.1 Preliminary results: knowledge radius 0
For knowledge radius 0 tight bounds on broadcasting time can be established: the mini-
mum broadcasting time in this case is Θ(e), where e is the number of edges in the network.
We first make the following observation (cf. [59]).
Proposition 3.2.1 In every broadcasting algorithm with knowledge radius 0 the source
message must traverse every edge at least once.
Proof: Consider a broadcasting algorithm A working on a network G in t rounds,
such that the source message does not traverse edge l = {x, y}. Let G' be the network
resulting from G by removing edge l and adding a new node z and edges {x, z} and {y, z}.
Consider the first t rounds of the run of algorithm A on the network G'. The actions of
A in these rounds and the local states of all nodes except z are identical when A runs on
G or on G'. Hence A must accomplish broadcasting on G' as well, which contradicts the
fact that z does not get the source message. □
The following result establishes a natural lower bound on broadcasting time. Its proof
is based on that of Theorem 4.6 from [52].
Theorem 3.2.1 Every broadcasting algorithm with knowledge radius 0 requires time at
least e for networks with e edges.
Proof: Consider a broadcasting algorithm A that works correctly on every network.
Suppose, for the purpose of contradiction, that there exists a network G = (V, E) and an
execution π of the algorithm A on G working in fewer than |E| rounds. By Proposition
3.2.1 the source message must traverse each edge of G at least once during execution π.
Hence there exists a round t and two different edges (u1, w1) and (u2, w2) such that the
source message is sent on each of them for the first time in round t.
Suppose (without loss of generality) that u_i has sent the source message to w_i over
the edge (u_i, w_i) in round t, for i = 1, 2. Observe that by the specifications of the one-
way model, all four nodes involved, namely u1, u2, w1 and w2, are necessarily distinct.
Consider the network G2 obtained from G by eliminating the edges (u1, w1) and (u2, w2),
and replacing them by a new node v', and four new edges (u1, v'), (v', w1), (u2, v') and
(v', w2). If algorithm A is invoked from the same source s on G2, then its execution π2 on
G2 will be identical to π up to round t − 1, and moreover, in round t a message will be
sent by the node u_i over the edge (u_i, v'), for i = 1, 2. This violates the specifications of
the one-way model, as it causes the node v' to receive two messages in the same round,
leading to a contradiction. □
In the classic depth-first search algorithm a token (the source message) visits all nodes
and traverses every edge twice. In this algorithm only one message is sent in each round,
and hence the specifications of the one-way model are respected. This is a broadcast-
ing algorithm working in 2e rounds and hence its execution time has the optimal order of
magnitude. In view of Theorem 3.2.1, we have the following result.
Theorem 3.2.2 The minimum broadcasting time with knowledge radius 0 is Θ(e), where
e is the number of edges in the network.
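The matching 2e upper bound is easy to simulate. The sketch below (function name hypothetical) moves the token depth-first, charging each edge two traversals (forward, and back when the token retreats or finds the endpoint already informed), so every node is reached in exactly 2e rounds:

```python
def dfs_broadcast_rounds(adj, source):
    """Token-based DFS broadcast: every edge is traversed twice (forward and back)."""
    visited = {source}
    used = set()
    rounds = 0

    def explore(u):
        nonlocal rounds
        for w in sorted(adj[u]):
            e = frozenset((u, w))
            if e in used:
                continue
            used.add(e)
            rounds += 2            # token goes u -> w and eventually returns w -> u
            if w not in visited:
                visited.add(w)
                explore(w)

    explore(source)
    return rounds, len(visited)
```

On a connected graph with e edges and n nodes this returns (2e, n), matching the Θ(e) bound of Theorem 3.2.2.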
3.2.2 Knowledge radius 1
In order to present our first algorithm we need the notion of a layer of a network. For
a natural number k, the kth layer of network G is the set of nodes at distance k from
the source. The idea of Algorithm Conquest-and-Feedback is to inform nodes of the
network layer by layer. After the (k−1)th layer is informed (conquered), every node of this
layer transmits to every other node of this layer information about its neighborhood. This
information travels through a partial tree constructed on nodes of the previous layers and
consumes most of the total execution time of the algorithm. As soon as this information is
exchanged among nodes of the (k−1)th layer (feedback), they proceed to relay the source
message to nodes of layer k. The knowledge of all adjacencies between nodes from layer
k − 1 and nodes from layer k enables transmitting the source message without collisions.
We now present a detailed description of the algorithm.
Algorithm Conquest-and-Feedback
All rounds are divided into consecutive segments of length Δ. Rounds in each segment
are numbered 1 to Δ. The set of segments is in turn partitioned into phases. We preserve
the invariant that after the kth phase all nodes of the kth layer know the source message.
The first phase consists of the first segment (i.e., it lasts Δ rounds). In consecutive
rounds of this segment the source informs all of its neighbors, in increasing order of their
labels. (If the degree of the source is smaller than Δ, the remaining rounds of the segment
are idle.)
Any phase k, for k > 1, consists of 2k − 1 segments (i.e., it lasts Δ(2k − 1) rounds).
Suppose by induction that after phase k − 1 all nodes of layer k − 1 have the source
message. Moreover suppose that a tree spanning all nodes of layers j < k is distributively
maintained: every node v of layer j remembers from which node P(v) of layer j − 1 it
received the source message for the first time, and remembers the round number r(v) ≤ Δ
in a segment in which this happened.
We now describe phase k of the algorithm. Its first 2(k − 1) segments are devoted to
exchanging information about neighborhoods among nodes of layer k − 1 (feedback). Every
such node transmits a message containing its own label and the labels of all its neighbors.
During the first k − 1 segments messages travel toward the source: one segment is devoted
to getting the message one step closer to the source. More precisely, a node v of layer j < k − 1
which got feedback messages in a given segment transmits their concatenation to P(v) in
round r(v) of the next segment. The definitions of r(v) and P(v) guarantee that collisions
are avoided. After these k − 1 segments the source has all feedback messages. From the
previous phase the source knows all labels of nodes in layer k − 2. Since neighbors of a
node in layer k − 1 can only belong to one of the layers k − 2, k − 1 or k, the source can
deduce from the information available to it the entire bipartite graph B_k whose node sets are
layers k − 1 and k and whose edges are all graph edges between these layers. The next k − 1
segments are devoted to broadcasting the message describing graph B_k to all nodes of
layer k − 1. Every node of layer j < k − 1 which already got this message relays it to
nodes of layer j + 1 during the next segment, using precisely the same schedule as it used
to broadcast the source message in phase j + 1. By the inductive assumption collisions
are avoided.
Hence after 2(k − 1) segments of phase k the graph Bk is known to all nodes of layer k − 1. The last segment of the phase is devoted to relaying the source message to all nodes of layer k. This is done as follows. Every node v of layer k − 1 assigns consecutive slots s = 1, …, δ, δ ≤ Δ, to each of its neighbors in layer k, in increasing order of their labels. Since Bk is known to all nodes of layer k − 1, all slot assignments are also known to all of them. Now transmissions are scheduled as follows. For any node v of layer k − 1 and any round r of the last segment, node v looks at its neighbor w in layer k to which
it assigned slot r. It looks at all neighbors of w in layer k − 1 and defines the set A(w) of those among them which assigned slot r to w. If the label of v is the smallest among all labels of nodes in A(w), node v transmits the source message to w in round r of the last segment; otherwise v remains silent in this round. This schedule avoids collisions and guarantees that all nodes of layer k get the source message by the end of the kth phase. Hence the invariant is preserved, which implies that broadcasting is completed after at most D phases.
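The slot rule of the last segment can be sketched as follows. This is an illustrative reconstruction, not code from the thesis; it assumes integer labels and that Bk is given as a dictionary mapping each node of layer k − 1 to the sorted list of its layer-k neighbors.

```python
# Sketch of the last-segment schedule of phase k of Conquest-and-Feedback.
# Bk[v] is the sorted list of layer-k neighbors of the layer-(k-1) node v.

def slot_of(v, w, Bk):
    """Slot (round within the segment) that v assigns to its neighbor w."""
    return Bk[v].index(w) + 1          # slots 1..delta, by increasing label

def transmitters(Bk, r):
    """Transmissions performed in round r of the last segment."""
    sends = []
    for v in Bk:
        targets = [w for w in Bk[v] if slot_of(v, w, Bk) == r]
        if not targets:
            continue
        w = targets[0]                 # at most one neighbor per slot
        # A(w): layer-(k-1) neighbors of w that assigned slot r to w
        A_w = [u for u in Bk if w in Bk[u] and slot_of(u, w, Bk) == r]
        if v == min(A_w):              # smallest label in A(w) transmits
            sends.append((v, w))
    return sends
```

Since every node of layer k − 1 knows Bk, each can evaluate A(w) locally, and for every receiver w at most one transmitter is chosen per round, which is exactly the collision-freedom argument above.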
Phase 1 lasts Δ rounds, and each phase k, for k > 1, lasts Δ(2k − 1) rounds. Since broadcasting is completed after D phases, its execution time is at most
Δ(1 + ∑_{k=2}^{D} (2k − 1)) ∈ O(D²Δ).
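Spelled out, the bound uses only the identity for the sum of the first D odd numbers:

```latex
\Delta\Bigl(1+\sum_{k=2}^{D}(2k-1)\Bigr)
  \;=\; \Delta\sum_{k=1}^{D}(2k-1)
  \;=\; \Delta D^{2} \;\in\; O(D^{2}\Delta).
```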
Hence we get the following.
Theorem 3.2.3 Algorithm Conquest-and-Feedback completes broadcasting in any network of diameter D and maximum degree Δ in time O(D²Δ).
For large values of D and Δ the following simple Algorithm Fast-DFS may be more efficient than Algorithm Conquest-and-Feedback. It is a DFS-based algorithm using the idea from [13]. The source message is considered as a token which visits all nodes of the graph. In every round only one message is transmitted, hence collisions are avoided. The token carries the list of previously visited nodes. At each node v the neighborhood of v is compared to this list. If there are yet unvisited neighbors, the token passes to the lowest labelled of them. Otherwise the token backtracks to the node from which v was visited for the first time. If there is no such node, i.e., if v is the source, the process terminates. In this way all nodes are visited, and the token traverses only edges of an implicitly defined DFS tree rooted at the source, each of these edges exactly twice. Avoiding sending the token on non-tree edges speeds up the process from time Θ(e) to Θ(n). Hence we get
Proposition 3.2.2 Algorithm Fast-DFS completes broadcasting in any n-node network
in time O(n).
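A minimal sketch of the token traversal (illustrative, not code from the thesis), with the network given as an adjacency dictionary:

```python
# Sketch of Algorithm Fast-DFS: the source message is a token that moves to
# the lowest-labelled unvisited neighbour, or backtracks along the edge of
# the implicit DFS tree on which the current node was first reached.

def fast_dfs_rounds(adj, source):
    """Return (set of visited nodes, number of rounds) of the traversal."""
    visited = {source}
    parent = {source: None}        # node from which each node was first visited
    v, rounds = source, 0
    while True:
        fresh = sorted(adj[v] - visited)
        if fresh:                  # pass the token to the lowest-labelled new node
            w = fresh[0]
            visited.add(w)
            parent[w] = v
        elif parent[v] is not None:
            w = parent[v]          # backtrack along the DFS tree
        else:
            return visited, rounds # back at the source: done
        v, rounds = w, rounds + 1
```

On an n-node graph the token makes exactly 2(n − 1) moves, one per round, traversing each DFS-tree edge exactly twice, matching the O(n) bound of Proposition 3.2.2.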
Since the diameter D may be unknown to nodes, it is impossible to predict which of the
two above algorithms is faster for an unknown network. However, simple interleaving of
the two algorithms guarantees broadcasting time of the order of the better of them in each
case. Define Algorithm Interleave which, for any network G, executes steps of Algorithm Conquest-and-Feedback in even rounds and steps of Algorithm Fast-DFS in odd rounds. Then we have
Theorem 3.2.4 Algorithm Interleave completes broadcasting in any n-node network of diameter D and maximum degree Δ in time O(min(n, D²Δ)).
The following lower bound shows that, for constant maximum degree Δ, the execution time of Algorithm Interleave is asymptotically optimal.
Theorem 3.2.5 Any broadcasting algorithm with knowledge radius 1 requires time Ω(min(n, D²)) in some constant-degree n-node networks of diameter D.
Proof: Fix parameters n and D < n. Since D is the diameter of a constant-degree network with n nodes, we must have D ∈ Ω(log n). Consider a complete binary tree rooted at the source, of height h ≤ D/3 and with k leaves a1, …, ak, where k ∈ Θ(n/D). It has 2k − 1 nodes. Assume for simplicity that D is even and let L = D/2 − h. Thus L ∈ Θ(D). Attach disjoint paths of length L (called threads) to all leaves. Denote by bi the other end of the thread attached to ai, and call this thread the ith thread. Again assume for simplicity that 2k − 1 + kL = n, and thus the resulting tree T has n nodes and diameter D. (It is easy to modify the construction in the general case.)
Next consider any nodes u and v belonging to distinct threads, respectively the ith and the jth, of T. Define the graph T(u, v) as follows: remove the part of the ith thread between u and bi (including bi), and the part of the jth thread between v and bj (including bj), and add a new node w, joining it to u and to v. Arrange the remaining nodes in a constant-degree tree attached to the source, so as to create an n-node graph of constant degree and diameter D.
We now consider the class of graphs consisting of the tree T and of all graphs T(u, v) defined above. We will show that any broadcasting algorithm with knowledge radius 1 which works correctly on this class requires time Ω(min(n, D²)) in the tree T. Fix a broadcasting algorithm A.
Since the algorithm must work correctly on T, the source message must reach all nodes bi, and consequently it must traverse all threads. For each thread define the front as the farthest node from the source in this thread that knows the source message. Call each
move of a front a unit of progress. Consider only the second half of each thread, the one farther from the source. Thus kL/2 units of progress must be made to traverse those parts of the threads.
Consider fronts u and v in the second halves of two distinct threads and suppose that these fronts move in the same round t. Observe that before this happens, the information which u has about its neighborhood must be transmitted to v, or vice versa. Otherwise the local states of u and v in round t are the same when the algorithm is run on T and on T(u, v). However, simultaneous transmission from u and v in T(u, v) results in a collision in their common neighbor w, and thus the assumptions of the model are violated. Since u and v are in the second halves of their respective threads, the distance between them is at least L; hence transmission of information from u to v requires at least L rounds after u becomes a front.
Units of progress are charged to rounds in which they are made in the following way. If at least two units of progress are made in a round, all of them are charged to this round. We call this the first way of charging. If only one unit of progress is made in a round, we charge this unit to this round and call it the second way of charging.
Partition all rounds into disjoint segments, each consisting of L consecutive rounds. Fix such a segment of rounds, and let t1 < ⋯ < ts be the rounds of this segment in which at least two units of progress are made. Let A_{t_i}, for i = 1, …, s, be the set of numbers of the threads in which progress is made in round t_i. Notice that, for any i ≤ s, the set (A_{t_1} ∪ ⋯ ∪ A_{t_{i−1}}) ∩ A_{t_i} can have at most one element. Indeed, if a, b ∈ (A_{t_1} ∪ ⋯ ∪ A_{t_{i−1}}) ∩ A_{t_i}, for a ≠ b, then the fronts u in thread a and v in thread b move simultaneously in round t_i, but neither information about the neighborhood of u could reach v nor information about the neighborhood of v could reach u, because this information could only be sent less than L rounds before t_i.
Since |(A_{t_1} ∪ ⋯ ∪ A_{t_{i−1}}) ∩ A_{t_i}| ≤ 1 for any i ≤ s, it follows that |A_{t_1}| + ⋯ + |A_{t_s}| ≤ k + s ≤ k + L. Hence at most k + L units of progress can be charged to rounds of a segment in the first way. Clearly at most L units of progress can be charged to rounds of a segment in the second way. Hence a total of at most k + 2L units of progress can be charged to rounds of each segment. Since kL/2 units of progress must be made to traverse the second halves of all threads, broadcasting requires at least kL/(2(k + 2L)) segments and thus at least kL²/(2(k + 2L)) rounds. If k ≤ L we have kL²/(2(k + 2L)) ≥ kL/6 ∈ Ω(n), and if k ≥ L we have kL²/(2(k + 2L)) ≥ L²/6 ∈ Ω(D²). Hence we always have the lower
bound Ω(min(n, D²)) on broadcasting time in the tree T. □
3.2.3 Knowledge radius 2
In this subsection we show the existence of a broadcasting algorithm with knowledge radius 2 working in O(DΔ log n) rounds on n-node networks of diameter D and maximum degree Δ. The proof is non-constructive in that it uses the existence of a family of sets obtained by a probabilistic argument.
Let v be any node of a network G. If the degree d of v is strictly less than Δ, add Δ − d "dummy" edges with one endpoint v, using a different new endpoint for each edge. The number of new nodes added this way is less than nΔ. Any node v of G fixes an arbitrary local enumeration (v, i), i = 1, …, Δ, of all directed edges starting at v. Node v is called the beginning of the edge (v, i) and the other endpoint of this edge is called its end. The set Ω = {(v, i) | v ∈ V(G), 1 ≤ i ≤ Δ} contains all directed edges of G together with all dummy edges. Notice that |Ω| = nΔ.
Suppose that node w is the end of edge (v, i). Denote by Γw the set of all edges with end w, and by R(v, i) the reverse of edge (v, i), i.e., the edge having beginning and end interchanged. Given an arbitrary set T of edges, let R(T) = {R(x) | x ∈ T}. Let [Ω]^k denote the set of all k-element subsets of Ω. Any sequence of random members of [Ω]^k will be called a k-list.
Definition 3.2.1 Consider an n-list L = (Q1, …, Qt), and let I be the set of all edges with beginning v, for a given node v. For any I′ ⊆ I, an element Qi of L is said to isolate an edge (v, l) in I′ if
Qi ∩ (I′ ∪ R(I) ∪ Γw ∪ R(Γw)) = {(v, l)},    (3.1)
where w is the end of (v, l).
We also say that an n-list L isolates an edge (v, l) in I when there exists an element Qi in L such that (3.1) holds. The following lemma is similar to Lemma 5.2.1 from [33].
Lemma 3.2.1 Let L = (Q1, …, QΔ) be an n-list. For every set I ∈ [Ω]^Δ of edges with common beginning v and every 1 ≤ i ≤ Δ, with probability at least e^{−5} there is an edge (v, l) ∈ I isolated by Qi but not by any Qk for 1 ≤ k < i.
Proof: The probability that Qi isolates at least one edge in I = {(v, 1), …, (v, Δ)} which is not isolated by any Qk for 1 ≤ k < i is at least
Δ · Pr(Qi isolates (v, 1)) · Pr(Qk does not isolate (v, 1), 1 ≤ k < i).
Since every Qi is a randomly chosen set of n elements among nΔ elements, and since |I| = Δ, we have
Pr(Qi isolates (v, 1)) = Pr((v, 1) ∈ Qi) · Pr(x ∉ Qi for all x ∈ (I ∪ R(I) ∪ Γw ∪ R(Γw)) \ {(v, 1)})
≥ (1/Δ)(1 − 1/Δ)^{2Δ−1}(1 − 1/Δ)^{2(Δ−2)} ≥ (1/Δ)(1 − 1/Δ)^{4(Δ−1)} ≥ 1/(Δe⁴),
Pr(Qk does not isolate (v, 1), 1 ≤ k < i) ≥ Pr((v, 1) ∉ Q1) · Pr((v, 1) ∉ Q2) ⋯ Pr((v, 1) ∉ Q_{i−1})
= (1 − 1/Δ)^{i−1} ≥ (1 − 1/Δ)^{Δ−1} ≥ 1/e.
Multiplying the three factors gives Δ · (1/(Δe⁴)) · (1/e) = e^{−5}. □
The next two lemmas follow respectively from Lemma 5.2.2 and Lemma 5.2.3 of [33], which are reformulations of results first obtained in [75].
Lemma 3.2.2 An n-list Q1, …, QΔ isolates at least Δ/e⁸ edges in any set I ∈ [Ω]^Δ of edges with common beginning v, with probability at least 1 − e^{−bΔ}, where b > 0.
Lemma 3.2.3 For Δ ≥ 2, there exists an n-list Q1, …, Qt of length t ∈ O(Δ log n) which isolates at least Δ/e⁸ edges in any set I ∈ [Ω]^Δ of edges with common beginning v.
Clearly a concatenation of two or more n-lists is itself an n-list. In the following, an n-list defined as a concatenation of n-lists L1, …, Lm will be denoted by L_{L1,…,Lm}. Notice that, given a set I of edges, an n-list L_{L1,…,Lm} defines a family L(I) = {I1, …, Im+1} of sets such that I1 = I and every Ij, for 2 ≤ j ≤ m + 1, is the subset of Ij−1 obtained by deleting from Ij−1 all the edges isolated by Lj−1 in Ij−1.
Definition 3.2.2 Given an n-list L_{L1,…,Lm} and a set I of edges, an element Qi ∈ [Ω]^n of L_{L1,…,Lm} is said to select (v, l) in I if Qi isolates (v, l) in Ij, for some Ij ∈ L(I).
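The isolation and selection conditions can be sketched as small predicates. This is an illustrative reconstruction, not code from the thesis: edges are represented as (beginning, end) pairs over a hypothetical universe `omega`, and the family L(I) is peeled after every single element of the list rather than after every sub-list, a simplification of the definition above.

```python
def R(T):
    """Reverse every edge in T (edges are (beginning, end) pairs)."""
    return {(w, v) for (v, w) in T}

def gamma(w, omega):
    """Gamma_w: all edges of the universe omega with end w."""
    return {e for e in omega if e[1] == w}

def isolates(Q, e, I_sub, I, omega):
    """Condition (3.1): Q meets I_sub, R(I), Gamma_w and R(Gamma_w) only in e."""
    w = e[1]
    forbidden = I_sub | R(I) | gamma(w, omega) | R(gamma(w, omega))
    return Q & forbidden == {e}

def selects(L, e, I, omega):
    """Definition 3.2.2, peeling I after every element of L (simplified)."""
    I_j = set(I)
    for Q in L:
        if isolates(Q, e, I_j, I, omega):
            return True
        # delete from I_j every edge isolated by the current element
        I_j -= {f for f in I_j if isolates(Q, f, I_j, I, omega)}
    return False
```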
The following theorem is a reformulation of Theorem 5.2.4 proved in [33], using Lemma
3.2.3.
Theorem 3.2.6 For Δ ≥ 2 there exists an n-list L_{L1,…,Lm} of length O(Δ log n) which selects all the edges in I, for every set I ∈ [Ω]^Δ of edges with common beginning v.
In the following algorithm we assume that all nodes have as input the same n-list L_{L1,…,Lm} of length l ∈ O(Δ log n) (with the appropriate selection property) whose existence is guaranteed by Theorem 3.2.6. (This is the only non-constructive ingredient of the algorithm. If the time of local computations is ignored, all nodes can locally find the same list using a predetermined deterministic exhaustive search.)
Algorithm Select-and-Transmit
The algorithm works in phases. The first phase lasts Δ rounds and each of the following phases lasts l rounds, where l ∈ O(Δ log n) is the length of the n-list. Each round r of a phase p ≥ 2 corresponds to the rth element Qr ∈ [Ω]^n of L_{L1,…,Lm}.
• In phase 1 the source sends the message to all of its neighbors.
• In round r of phase p, for p ≥ 2, any node v that got the source message for the first time in the previous phase p − 1 sends the message on edge (v, i) if and only if Qr selects (v, i) in I, where I is the set of all edges with beginning v. If (v, i) happens to be a dummy edge, v is idle in round r.
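The round structure of one phase can be sketched as follows; an illustrative reconstruction, not code from the thesis, with the selection predicate of Definition 3.2.2 passed in as a callback so that any implementation of it can be plugged in.

```python
def phase_transmissions(new_nodes, edges_from, n_list, selects):
    """Transmissions of one phase p >= 2 of Select-and-Transmit.

    new_nodes:  nodes that first got the source message in phase p - 1
    edges_from: maps a node v to the list I of its outgoing (non-dummy) edges
    n_list:     the common input list Q_1, ..., Q_l of edge sets
    selects:    predicate of Definition 3.2.2, supplied by the caller
    """
    schedule = []
    for Q in n_list:                   # round r uses the r-th element Q_r
        schedule.append([e for v in new_nodes
                           for e in edges_from[v]
                           if selects(Q, e, edges_from[v])])
    return schedule
```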
Theorem 3.2.7 Algorithm Select-and-Transmit completes broadcasting in any n-node network of diameter D and maximum degree Δ in time O(DΔ log n).
Proof: First observe that in order to decide if a given edge (v, i) is selected by a set Qr, node v needs to know only the part of the network at distance at most 2 from it; hence Algorithm Select-and-Transmit is indeed an algorithm with knowledge radius 2.
Let I be the set of all edges with beginning v. Since Theorem 3.2.6 guarantees that all elements of I are selected within l ∈ O(Δ log n) rounds, we conclude that by the end of phase p, any node v informed in phase p − 1 transmits the source message to all of its neighbors. Hence after D phases broadcasting is completed.
It remains to show that all transmissions respect the model specifications, i.e., that collisions are avoided. When node v sends the message on edge (v, i) in round r of a given phase, Qr selects (v, i) in I. By Definition 3.2.2, this means that there exists Ij ∈ L(I) such that Qr isolates (v, i) in Ij. Hence, if w is the end of (v, i), we have
Qr ∩ (Ij ∪ R(I) ∪ Γw ∪ R(Γw)) = {(v, i)}.
This implies that, apart from all edges in I \ Ij that have already been selected in some previous round, no other edge (v, k) ∈ I, for k ≠ i, is in Qr. Also, no edge with end v and no other edge with beginning or end w can be in Qr. Therefore none of these edges can be selected by the same set Qr which selects (v, i). Hence no transmission in round r can collide with the transmission on edge (v, i). □
We now present a lower bound on broadcasting time showing that Algorithm Select-and-Transmit is close to optimal for knowledge radius 2. In fact, our lower bound is valid for any constant knowledge radius r.
Theorem 3.2.8 Assume that DΔ ∈ O(n) and let r be a positive integer constant. Any broadcasting algorithm with knowledge radius r requires time Ω(DΔ) on some n-node tree of maximum degree Δ and diameter D.
Proof: Assume for simplicity that r divides D and that 1 + (D − r)(Δ − 1) = n. It is easy to modify the construction in the general case, using the assumption DΔ ∈ O(n). Let d = Δ − 1. Let T be a tree consisting of a root and d disjoint paths (branches) of length r attached to it. Let k = D/r − 1 and consider k copies T1, …, Tk of T. Let the source be the root of T1 and identify the root of Ti+1 with some leaf of Ti, for any i = 1, …, k − 1. The resulting tree T* has maximum degree d + 1 = Δ, diameter r + kr = D, and kdr + 1 = n nodes. We will show that any algorithm with knowledge radius r requires time Ω(DΔ) on some labelled tree isomorphic to T*.
Consider any broadcasting algorithm A. When the root vi of Ti gets the source message in round t, its local state is the same regardless of the leaf of Ti to which Ti+1 is attached. (This is due to the fact that the knowledge radius is equal to the depth of Ti.) Hence if Ti+1 is attached to the leaf in the branch of Ti corresponding to the last child of vi which gets the source message, then the root vi+1 of Ti+1 gets the source message at least d + r − 1 rounds after round t. Consequently broadcasting in T* takes time at least k(d + r − 1) ≥ kd ∈ Ω(DΔ). □
3.2.4 Larger knowledge radius
In this subsection we show that for larger knowledge radius the time of broadcasting can be significantly improved. Our first algorithm uses knowledge radius log* n + 3 and completes
broadcasting in time O(DΔ). It again has a non-constructive ingredient, similarly to Algorithm Select-and-Transmit. We use the following theorem, which is an easy corollary of Theorem 2.3.4 from the previous chapter.
Theorem 3.2.9 If nodes have knowledge radius log* n + 3, then a distributed O(Δ)-edge coloring of an n-node graph of maximum degree Δ can be achieved without any communication among nodes.
Algorithm Color-and-Transmit
All nodes of the network have as input a fixed distributed k-coloring of the edges, where k ∈ O(Δ), guaranteed by Theorem 3.2.9. More specifically, every node knows the colors of its incident edges. The algorithm works in phases. The first phase lasts Δ rounds and each of the following phases lasts k rounds. In phase 1 the source sends the message to all of its neighbors. In round r of phase p, for p ≥ 2, any node v that got the source message for the first time in the previous phase p − 1 sends the message on its incident edge of color r.
By the definition of a k-edge coloring, collisions are avoided. After D phases broadcasting is completed. Hence we get the following.
Theorem 3.2.10 Algorithm Color-and-Transmit completes broadcasting in any n-node network of diameter D and maximum degree Δ in time O(DΔ).
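A sketch (not from the thesis) of the phase schedule, assuming the k-edge-coloring is given as a map from undirected edges to colors 1..k:

```python
# Sketch of one phase p >= 2 of Color-and-Transmit: in round r, every node
# informed in the previous phase sends on its incident edge of color r.
# `adj` maps a node to the list of its neighbours; `color` maps the
# undirected edge frozenset({u, v}) to a color in 1..k.

def color_and_transmit_schedule(adj, color, k, new_nodes):
    rounds = []
    for r in range(1, k + 1):
        rounds.append([(v, u) for v in new_nodes for u in adj[v]
                       if color[frozenset((v, u))] == r])
    return rounds
```

Since edges of one color form a matching, the transmissions listed for any round are pairwise non-adjacent, which is the collision-avoidance argument above.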
We finally observe that for larger knowledge radius r, Algorithm Conquest-and-Feedback, described earlier in this chapter, can be modified in a straightforward way, allowing faster broadcasting. Instead of "conquering" layers one by one, giving "feedback" after each layer, the source message is broadcast to segments of r consecutive layers, using a predetermined tree spanning the nodes of these layers. Then all nodes of the last layer of the segment send back to the source the information about the part of the network at distance r from them, using the same spanning tree and schedules similar to those in the original algorithm. The source extends the current tree to a tree spanning the next segment of layers and transmits this entire information along this tree. Thus the next segment of r layers can be "conquered". A phase informing a segment of r layers and giving feedback on the next segment takes at most O(DΔ) rounds, and O(D/r) such phases are needed. This proves the following.
Proposition 3.2.3 For any positive integer r, there exists a broadcasting algorithm with knowledge radius r which completes broadcasting in any network of diameter D and maximum degree Δ in time O(D²Δ/r).
3.3 Deterministic broadcasting in the radio model
In this section we consider the radio model. Moreover, we assume that only very restrictive knowledge is available to nodes: each node knows only its own label.
In [28] the authors constructed a deterministic broadcasting algorithm working in time O(n^{11/6}) for arbitrary n-node radio networks of unknown topology and size. It should be noted that since the size of the network is unknown, broadcasting can be completed but no node need be aware of this fact. Thus the precise definition of the broadcasting time of an algorithm working in unknown networks (given in [28]) is the following. An algorithm accomplishes broadcasting in t rounds if all nodes know the source message after round t, and no messages are sent after round t.
This section contains an improvement of the broadcasting time O(n^{11/6}) from [28]. Using a stronger combinatorial tool, we show how to reduce deterministic broadcasting time in unknown radio networks to O(n^{5/3}(log n)^{1/3}). Further improvements of deterministic broadcasting time in unknown radio networks have since been obtained. In [29] an algorithm working in time O(n^{3/2}) was developed. Independently, an upper bound O(n^{3/2}√(log n)) was shown in [93] using a different method. Finally, the fastest currently known algorithm for this task is the broadcasting algorithm from [30], working in time O(n(log n)²).
3.3.1 Preliminaries
In this subsection we recall the results from [28] which we will use to improve deterministic broadcasting time in unknown radio networks.
Definition 3.3.1 A family F of subsets of U is said to be k-selective for the set U iff for any X ⊆ U with |X| ≤ k, there is a set Y ∈ F satisfying |X ∩ Y| = 1.
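The definition can be turned into a brute-force checker; a small illustrative sketch (not from the thesis), feasible only for tiny ground sets U:

```python
from itertools import combinations

def is_k_selective(F, U, k):
    """F is k-selective for U iff every X ⊆ U with 1 <= |X| <= k is hit
    exactly once (|X ∩ Y| = 1) by some member Y of F."""
    for size in range(1, k + 1):
        for X in combinations(U, size):
            if not any(len(set(X) & set(Y)) == 1 for Y in F):
                return False
    return True
```

For example, the family of all singletons of U is k-selective for every k ≤ |U|, while the single set U itself fails already for k = 2.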
Let [a] = {1, 2, …, a}, for any positive integer a. Using Vishkin's construction of a deterministic sample [103], the following result was proved in [28].
Lemma 3.3.1 For any positive integer m there exists a 2^{⌈m/6⌉}-selective family of size O(2^{5m/6}) on the set [2^m].
Using k-selective families, the following broadcasting algorithm was constructed in [28].
Theorem 3.3.1 [28] Let m* be such that 2^{m*−1} < n ≤ 2^{m*}. Suppose that Fm is a km-selective family on [2^m], for any positive integer m. Then it is possible to broadcast in any n-node graph in time
∑_{m=1}^{m*} (km · |Fm| + 2^m) · ⌈2^m/km⌉.
Using Lemma 3.3.1, Theorem 3.3.1 implies the existence of a broadcasting algorithm working in time O(n^{11/6}) in any n-node graph.
3.3.2 Smaller selective families and faster broadcasting
We show how smaller selective families can be constructed using stronger combinatorial tools, and consequently how to decrease broadcasting time using the algorithm from [28].
Definition 3.3.2 [33] A family C of subsets of [t] is called d-disjunct if for all distinct C0, C1, …, Cd ∈ C we have
C0 ⊄ C1 ∪ ⋯ ∪ Cd.
We use the following result, which follows from [46].
Theorem 3.3.2 For any positive integers n > d, there exists an integer t ∈ O(d² log n) and a d-disjunct family C of n subsets of [t].
Theorem 3.3.2 was proved in [46] non-constructively, but the following construction of such a family was later given in [72]. Fix positive integers t and d. Denote by [t]^k the family of all subsets of size k of [t].
Construction.
Let r = ⌈t/(16d²)⌉, k = 4dr, m = 4r, and let [t]^k be sorted in an arbitrary order. Choose the first element of [t]^k as C1. If C1, …, Ci−1 are already constructed, let Ci be the first set in [t]^k whose intersection with every set C1, …, Ci−1 has size smaller than m, if such a set exists.
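The greedy scan can be sketched directly. This is an illustrative reconstruction (not code from [72]), runnable only for toy parameters since it enumerates all k-subsets of [t]:

```python
from itertools import combinations
from math import ceil

def greedy_disjunct(t, d):
    """Scan [t]^k in lexicographic order; keep a candidate C whenever its
    intersection with every set kept so far has size smaller than m."""
    r = ceil(t / (16 * d * d))
    k, m = 4 * d * r, 4 * r
    family = []
    for cand in combinations(range(1, t + 1), k):
        C = frozenset(cand)
        if all(len(C & prev) < m for prev in family):
            family.append(C)
    return family
```

Since any d kept sets can cover at most d(m − 1) < k elements of another kept set, the output family is d-disjunct by construction.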
Suppose that the construction outputs a family C = {C1, …, Cl}. It is proved in [72] that l ≥ (2/3) · 3^{t/(16d²)} and that C is d-disjunct. As a consequence we get that for any fixed n > d, the above construction applied with t = ⌊16d²(1 − log₃ 2 + (log₃ 2) log n)⌋ ∈ Θ(d² log n) yields an n-element d-disjunct family of subsets of [t].
Using Theorem 3.3.2 we can now prove the following lemma.
Lemma 3.3.2 For any positive integers n > d, there exists a d-selective family of subsets of [n] of size t ∈ O(d² log n).
Proof: Let t = ⌊16d²(1 − log₃ 2 + (log₃ 2) log n)⌋ and consider an n-element d-disjunct family C = {C1, …, Cn} of subsets of [t]. For p = 1, …, t, define Fp = {i ≤ n : p ∈ Ci}. Then F = {F1, …, Ft} is a d-selective family of subsets of [n]. Indeed, consider any set A ⊆ [n] of size at most d. Let A = {i1, …, ir}, r ≤ d. Since C is d-disjunct, there exists p ∈ [t] such that p ∈ Ci1 and p ∉ Cij for all r ≥ j > 1. Hence i1 ∈ Fp and ij ∉ Fp for all r ≥ j > 1. Consequently |Fp ∩ A| = 1. □
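The transposition step in this proof is mechanical; a small illustrative sketch (not from the thesis):

```python
# From an n-element d-disjunct family C_1, ..., C_n of subsets of [t], build
# the sets F_p = {i : p in C_i}, one per point p of [t]; by the argument
# above, F_1, ..., F_t is a d-selective family on [n].

def selective_from_disjunct(C, t):
    return [{i for i, Ci in enumerate(C, start=1) if p in Ci}
            for p in range(1, t + 1)]
```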
Taking d = (2^m/m)^{1/3} in Lemma 3.3.2 we get
Corollary 3.3.1 For any positive integer m there exists a (2^m/m)^{1/3}-selective family of size O(2^{2m/3} · m^{1/3}) on the set [2^m].
Now Theorem 3.3.1 and Corollary 3.3.1 imply the announced result.
Theorem 3.3.3 Broadcasting in an arbitrary unknown n-node radio network can be completed in time O(n^{5/3}(log n)^{1/3}).
Chapter 4
Concurrent multicast in weighted networks.
4.1 Introduction
In this chapter we consider the problem of concurrent multicast, that is, the problem of
information dissemination from a set of source nodes to a set of destination nodes in a
weighted communication network.
Multicasting has been the focus of growing research interest: many future applications of computer networks, such as distance education, remote collaboration, teleconferencing, and many others, will rely on the capability of the network to provide multicasting services [55, 77]. Our model considers concurrent multicast, in which a group of nodes in the network needs to multicast to a common set of destinations.
Processors in a network cooperate to solve a problem, in our case to perform concurrent
multicast, by exchanging messages along communication channels. Networks are usually
modelled as connected graphs with processors represented as vertices and communication
channels represented as edges. For each channel, the cost of sending a message over that channel is measured by assigning a weight to the corresponding edge. Our goal is to give algorithms to efficiently perform concurrent multicast in the network.
The typical measure of the communication cost of an algorithm is the number of
messages sent across the network during its execution. This measure assumes that the
cost of sending a message along any channel is equal to one. However, it is more realistic to
include the edge weights in the communication cost of an algorithm. More specifically, we assume the cost of transmitting a message over a channel to be equal to the weight of the corresponding edge. This point of view was advocated in [12, 105], and several papers
have followed this line of research since then. The communication cost of an algorithm
is the sum of the costs of the edges used by the algorithm, each cost added as many
times as the corresponding edge is used for a message transmission by the algorithm. The
communication time of an algorithm is the interval of time necessary for the completion
of the algorithm itself, under the assumption that with each edge (i, j) is associated the travel time t(i, j) needed for a message from node i to reach its neighbor j. The
communication complexity of an algorithm is de�ned as the product of its communication
cost and its communication time [105].
4.1.1 Statement of the problem and summary of our results
We consider a communication network modelled by a graph H = (V, E), where the node set V represents the set of processors and the edge set E represents the set of communication channels. Each edge (i, j) in H is labelled by the communication cost c(i, j) > 0 of sending a message from node i to node j, where c(i, j) = c(j, i).
Concurrent Multicast Problem (CM): Let S and D be two arbitrary subsets of V. Nodes in S are the sources and nodes in D are the destinations. Each node a ∈ S holds a block of data B(a). The goal is to disseminate all these blocks so that each destination node b ∈ D gets all the blocks B(a), for all a ∈ S.
We are interested in protocols for the Concurrent Multicast Problem which have minimum communication cost, where the communication cost of a protocol is the sum of the costs of all message transmissions performed during its execution.
We first study the concurrent multicast problem under the classical assumption that all the blocks known to a node i at each time instant of the execution of the protocol can be freely concatenated and the resulting message can be transmitted to a node j with cost c(i, j). This assumption is reasonable when the combination of blocks results in a new message of the same size (for example, when the blocks are boolean values and each node in D has to know the AND of all blocks of the nodes in S [105]). It is not too hard to see that a protocol for the CM problem in this scenario can be obtained as follows: construct in H a Steiner tree T with terminal nodes S ∪ {v}, for some v ∈ V; by transmitting over the edges of T one can accumulate all the blocks of the source nodes into v; then, by using another Steiner tree T′ with terminal nodes D ∪ {v}, one can broadcast the information held by v to all nodes in D, thus completing the CM. It is somewhat surprising
to see that this two-phase protocol, accumulation plus broadcasting, is essentially the best one can do. The nontrivial proof of the optimality of the protocol is given in Sections 4.3 and 4.4. Since determining an optimal solution to the CM problem is in general NP-hard for arbitrary S and D (while the problem is solvable in polynomial time for S = D = V [105]), in Section 4.5.1 we give an approximate-cost polynomial-time distributed algorithm for the CM problem. Subsequently, in Section 4.6 we turn our attention to a different scenario, in which the assumption that the cost of the transmission of a message is independent of the number of blocks composing it no longer holds; therefore message transmissions must consist of one block of data at a time. Communication protocols which work by exchanging messages of limited size have recently received considerable attention (see for example [22, 19, 18, 61]). The CM problem remains NP-hard in this case; therefore we also provide polynomial-time approximate-cost solutions. In Section 4.5.2 we consider the on-line version of the CM problem. The on-line version specifies that the source and destination nodes be supplied one at a time and the existing solution be extended to connect the current sources and destinations before receiving a request to add/delete a node from the current source/destination set. We will prove that the characterization we have given for the optimal-cost solution to the CM problem allows us to derive an efficient solution also to the on-line version. Finally, in Section 4.7 we consider the communication time and the communication complexity of our algorithms.
4.1.2 Related work
In the case S = D = V the CM problem reduces to the gossiping problem, which arises in a large class of scientific computation problems [23]. In the case |S| = 1 and D ⊆ V the CM problem reduces to the multicasting problem [2, 17, 24, 104], and in the case D = V to the broadcasting problem; both problems have been well investigated because of their relevance in the context of parallel/distributed systems [101]. In particular, the broadcasting and gossiping problems have been investigated under a variety of communication models and have accumulated a large literature; we refer the reader to the survey papers [51, 67, 68]. In this section we limit ourselves to briefly discussing some works whose results are either strictly related to ours or can be seen as corollaries of the results contained in this chapter. In the case S = D = V we get the problem of gossiping in weighted networks, a problem first considered in weighted complete networks by Wolfson and Segall [105]. One of the main
results of [105] was to prove that the communication cost of an optimal gossiping algorithm is equal to twice the cost of a minimum spanning tree of the weighted complete graph. As a consequence of more general results (i.e., our characterization of optimal-cost instances of the CM problem given in Section 4.3), we are able to extend the above-quoted result of [105] to general weighted graphs, i.e., not necessarily complete ones. Gossiping in weighted complete networks in which blocks of data cannot be freely combined was studied in [61]. If messages must consist of exactly one block, then our result of Section 4.6 implies one of the results of [61], namely that the minimum cost of an instance is equal to |V| · (cost of a minimum spanning tree of H); again, in the present case H does not need to be the complete graph. Another problem strictly related to ours is the Set-to-Set Broadcasting problem [97], which asks for the minimum number call(S, D) of message transmissions needed to perform concurrent multicast from a set S to a set D in a complete unweighted graph. Our results imply a solution equivalent to the one given in [81, 82] for the so-called "telephone communication model", namely
call(S, D) = |S| + |D| − 1 if S ∩ D = ∅, and call(S, D) = |S| + |D| − 2 if S ∩ D ≠ ∅.
Other work related to the results of this chapter is contained in [17, 69].
4.2 Multi-digraphs associated to instances of concurrent multicast
We introduce here the notion of multi-digraph associated to an instance of a CM algorithm.
We will consider the concurrent multicast problem on a weighted communication graph
H = (V, E) from the source set S to the destination set D. The sequence of message transmissions (calls) of an instance of a concurrent multicast
algorithm will be represented by a labelled multi-digraph I = (V, A(I)) having as node
set the same set V of nodes of H and as arc set the multiset A(I), in which each arc (i, j)
represents a call from i to j. Arc labels represent the temporal order in which calls are
made and are denoted by ℓ(i, j) for all (i, j) ∈ A(I).
A path in I from node i to node j is called ascending if the sequence of labels of the arcs
on the path is strictly increasing when moving from i to j. Since a node b receives the
block of a source node a ∈ S if and only if the multi-digraph I contains an ascending path from a to
b, the following property holds.
Fact 1 A labelled multi-digraph I = (V, A(I)) is an instance of a concurrent multicast
algorithm from S to D if and only if I contains an ascending path from a to b, for each
source a ∈ S and destination b ∈ D.
An arc (i, j) ∈ A(I) has cost c(i, j), the cost of the corresponding call along the edge {i, j} in H. The cost of an instance (that is, the cost of the associated multi-digraph) I is then
the sum of the costs of all the arcs of I, each added as many times as its multiplicity.
Example 1 Let H = ({1, 2, 3, 4, 5, 6, 7}, E) be the weighted communication graph of Figure 1(a). Consider the source set S = {1, 2}, the destination set D = {4, 5, 6}, and the
instance consisting of the following calls:
At time 1: 1 sends B(1) to 3;
At time 2: 2 sends B(2) to 3;
At time 3: 3 sends (B(1), B(2)) to 4;
At time 4: 3 sends (B(1), B(2)) to 6, and 4 sends (B(1), B(2)) to 5.
The corresponding multi-digraph I is shown in Figure 1(b); each arc is labelled with the
time at which the corresponding call is made. We have cost(I) = 5.
Figure 1: (a) the weighted communication graph H; (b) the multi-digraph I of Example 1, with each arc labelled by the time at which the corresponding call is made.
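As a small illustration (not part of the original text; all function names are ours), Fact 1 and the cost of an instance can be checked mechanically on the multi-digraph of Example 1. The sketch below processes arcs in label order, so that `best[v]` always holds the smallest last label of an ascending path into v:

```python
# Arcs of the instance I of Example 1: (tail, head, label, cost).
# The unit arc costs are illustrative, chosen so that cost(I) = 5.
arcs = [(1, 3, 1, 1), (2, 3, 2, 1), (3, 4, 3, 1), (3, 6, 4, 1), (4, 5, 4, 1)]
S, D = {1, 2}, {4, 5, 6}

def has_ascending_path(arcs, a, b):
    """True iff the multi-digraph contains an ascending path from a to b."""
    # best[v] = smallest last label over ascending paths from a to v;
    # processing arcs by increasing label makes a single pass sufficient.
    best = {a: 0}
    for (i, j, lab, _) in sorted(arcs, key=lambda t: t[2]):
        if i in best and best[i] < lab:          # labels must strictly increase
            best[j] = min(best.get(j, float("inf")), lab)
    return b in best

def is_instance(arcs, S, D):
    """Fact 1: I is a CM instance iff every source reaches every destination."""
    return all(has_ascending_path(arcs, a, b) for a in S for b in D if a != b)

# Cost of I: each arc counted with its multiplicity.
cost_I = sum(c for (_, _, _, c) in arcs)
```

Running this on the arcs of Example 1 confirms that I is an instance and that cost(I) = 5.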
4.3 Characterization of a minimum cost instance
In this section we first derive a lower bound on the cost of an optimal solution to the
CM problem. Then we show that the given bound is tight, thus also obtaining a useful
characterization of a minimum cost instance of the CM problem.
Definition 1 Let I be an instance. A node i ∈ V is called complete at time t if for each
a ∈ S there exists in I an ascending path π from a to i such that ℓ(e) ≤ t for each arc e
on π.
In other words, a node is complete at time t if by time t it knows the blocks of all the
source nodes in S. Notice that if a node i is complete at time t and i calls node j at time
t′ > t, then i can send to j any block B(a), for a ∈ S, and thus make j complete too.
Given I = (V, A(I)) and A′ ⊆ A(I), we call subgraph of I induced by the arcs in A′ the
graph (V′, A′) with V′ = {i ∈ V | i has at least one arc of A′ incident on it}. We will denote by
• C_I = (V(C_I), A(C_I)) the subgraph of I induced by the subset of arcs
    A(C_I) = {(i, j) ∈ A(I) | there exists t < ℓ(i, j) s.t. i is complete at time t};
• C̄_I the subgraph induced by the subset of arcs A(C̄_I) = A(I) \ A(C_I).
Notice that C_I consists of all the arcs of I corresponding to calls made by complete
nodes.
Example 1 (continued). For the instance I in Figure 1(b), the subgraphs C̄_I and C_I are
given in Figure 2(a) and 2(b), respectively.
Figure 2: (a) the subgraph C̄_I of I (the arcs labelled 1 and 2); (b) the subgraph C_I (the arcs labelled 3, 4, and 4).
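The split of an instance into C_I and C̄_I can likewise be computed directly from the labels. The sketch below (illustrative names, ours) derives each node's completion time from per-source reachability and classifies the arcs of Example 1, reproducing the split shown in Figure 2:

```python
INF = float("inf")

# Arcs of Example 1 as (tail, head, label); S = {1, 2}.
arcs = [(1, 3, 1), (2, 3, 2), (3, 4, 3), (3, 6, 4), (4, 5, 4)]
S = {1, 2}

def reach_labels(arcs, a):
    """For each node v, the smallest last label of an ascending path a -> v."""
    best = {a: 0}
    for (i, j, lab) in sorted(arcs, key=lambda t: t[2]):
        if best.get(i, INF) < lab:
            best[j] = min(best.get(j, INF), lab)
    return best

def completion_times(arcs, S, nodes):
    """Node v is complete at time t iff every source's block reaches v by t."""
    per_source = {a: reach_labels(arcs, a) for a in S}
    return {v: max(per_source[a].get(v, INF) for a in S) for v in nodes}

nodes = {v for arc in arcs for v in arc[:2]} | S
complete = completion_times(arcs, S, nodes)

# An arc (i, j) with label l belongs to C_I iff i is complete at some t < l.
C = [(i, j, l) for (i, j, l) in arcs if complete[i] < l]
C_bar = [a for a in arcs if a not in C]
```

On Example 1 this yields C_I = {(3,4), (3,6), (4,5)} and C̄_I = {(1,3), (2,3)}, matching Figure 2.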
Lemma 4.3.1 If I is a minimum cost instance, then
1) the multi-digraph C_I is a forest;
2) the node set of each tree in C_I contains at least one node in D.
Proof. In order to prove that C_I is a forest, it is sufficient to show that at most one
arc of C_I enters each node v. Suppose there exists a node v such that at least two arcs
of C_I enter v. Then all arcs of C_I incoming on v but the one with smallest label, call it t, can
be omitted. Indeed, since v is complete at time t, all successive calls to v are redundant.
Hence there would exist an instance of smaller communication cost, contradicting the optimality
of I.
We now show 2). If there existed a tree T in C_I containing no destination node of D, then the arcs of T would lie on no ascending path from a to b, for a ∈ S and b ∈ D. Therefore, the calls corresponding to the arcs of T could be omitted. This contradicts the
optimality of I. □
We denote by R(I) the set of the roots of the trees in the forest C_I.
The following theorem is one of our main technical tools.
Theorem 1 There exists a minimum cost instance I with |R(I)| = 1.
Before proving Theorem 1 we derive its consequences. The following Theorem 2 allows
us to obtain the desired lower bound on the minimum cost of an instance.
Given the graph H = (V, E), let us denote by ST(X) the Steiner tree in H on the terminal
node set X, that is, the minimum cost tree T with E(T) ⊆ E and X ⊆ V(T) ⊆ V.
Theorem 2 Consider the communication graph H = (V, E) and the sets S, D ⊆ V.
Let I_min be a minimum cost instance for the CM problem from the source set S to the
destination set D. Then

    cost(I_min) ≥ min_{v∈V} {cost(ST(S ∪ {v})) + cost(ST(D ∪ {v}))}.   (4.1)
Proof. Consider the graph H and the sets S, D ⊆ V. Theorem 1 implies that there exists
a minimum cost instance I such that its subgraph C_I is a tree. Let r denote the root of
C_I. By definition, r is the first node to become complete in I, and each node b ∈ D (apart from
r, if r ∈ D) becomes complete by receiving a message along a path from r to b in the tree
C_I. Therefore C_I is a tree whose node set includes each node in D ∪ {r}, and it must hold that

    cost(C_I) ≥ cost(ST(D ∪ {r})).   (4.2)
Moreover, for each a ∈ S we have that either a = r or C̄_I contains an ascending path from a to r, in order for r to become complete. Therefore the cost of C̄_I cannot be less than that of
any tree whose node set includes S ∪ {r}, and

    cost(C̄_I) ≥ cost(ST(S ∪ {r})).   (4.3)

The above inequalities (4.2) and (4.3) imply

    cost(I) = cost(C_I) + cost(C̄_I) ≥ cost(ST(D ∪ {r})) + cost(ST(S ∪ {r})). □
We now show that the inequality in Theorem 2 holds with equality. The following
Theorem 3 establishes the minimum cost of an instance of the CM problem.
Theorem 3 Consider the communication graph H = (V, E) and the sets S, D ⊆ V. Let
I_min be a minimum cost instance for the concurrent multicast problem on S and D. Then

    cost(I_min) = min_{v∈V} {cost(ST(S ∪ {v})) + cost(ST(D ∪ {v}))}     if S ∩ D = ∅,
    cost(I_min) = min_{v∈S∩D} {cost(ST(S ∪ {v})) + cost(ST(D ∪ {v}))}   if S ∩ D ≠ ∅,   (4.4)

where ST(X) represents a Steiner tree of the communication graph H spanning all nodes
in X.
Proof. By Theorem 2, cost(I_min) is lower bounded by min_{v∈V} {cost(ST(S ∪ {v})) + cost(ST(D ∪ {v}))}. If we denote by r the node in V for which the above minimum is
attained, then

    cost(I_min) ≥ cost(ST(S ∪ {r})) + cost(ST(D ∪ {r})).   (4.5)

We now show that there exists an instance I of cost equal to cost(ST(S ∪ {r})) + cost(ST(D ∪ {r})).
Consider the Steiner tree ST(S ∪ {r}), and denote by T_1 the directed tree obtained from
ST(S ∪ {r}) by rooting it at r and directing all its edges toward the root r. Label each
arc of T_1 so that each directed path in T_1 is ascending, and let λ denote the maximum label
used in T_1.
Consider now the Steiner tree ST(D ∪ {r}), and denote by T_2 the directed tree obtained
from ST(D ∪ {r}) by rooting it at r and directing all its edges away from the root r. Label
each arc (i, j) of T_2 with a label ℓ(i, j) > λ so that each directed path in T_2 is ascending.
Consider then the multi-digraph I such that V(I) = V and A(I) = A(T_1) ∪ A(T_2). By
definition of T_1 and T_2, we get that I contains an ascending path (a, …, r, …, b), for each
a ∈ S and b ∈ D. Hence I is an instance of the CM problem and its cost is

    cost(I) = cost(T_1) + cost(T_2) = cost(ST(S ∪ {r})) + cost(ST(D ∪ {r})).

Since cost(I) ≥ cost(I_min), by (4.5) we have

    cost(I_min) = cost(ST(S ∪ {r})) + cost(ST(D ∪ {r})) = min_{v∈V} {cost(ST(S ∪ {v})) + cost(ST(D ∪ {v}))}.
Finally, it is easy to see that in case S ∩ D ≠ ∅, at least one node for which the
minimum is attained must be a node in S ∩ D. Hence the theorem holds. □
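On very small graphs, the right-hand side of (4.4) can be evaluated exactly by brute force, using the fact that the Steiner tree cost on terminals X equals the cheapest MST of H restricted to some superset of X. The sketch below (exponential time, purely illustrative; the star network and all names are ours) computes the optimal CM cost of Theorem 3 in the case S ∩ D = ∅:

```python
from itertools import combinations

# A small star network: center 'c', unit-weight edges to the leaves.
edges = {("a", "c"): 1, ("b", "c"): 1, ("d", "c"): 1, ("e", "c"): 1}
V = {"a", "b", "c", "d", "e"}
S, D = {"a", "b"}, {"d", "e"}

def w(u, v):
    return edges.get((u, v), edges.get((v, u)))

def mst_cost(W):
    """Cost of an MST of the subgraph induced by W (Prim); None if disconnected."""
    W = list(W)
    inside, total = {W[0]}, 0
    while len(inside) < len(W):
        cand = [(w(u, v), v) for u in inside for v in W
                if v not in inside and w(u, v) is not None]
        if not cand:
            return None          # induced subgraph is disconnected
        c, v = min(cand)
        inside.add(v)
        total += c
    return total

def steiner_cost(X):
    """Exact Steiner tree cost: cheapest MST over all supersets of X."""
    rest = list(V - set(X))
    best = float("inf")
    for k in range(len(rest) + 1):
        for extra in combinations(rest, k):
            c = mst_cost(set(X) | set(extra))
            if c is not None:
                best = min(best, c)
    return best

# Theorem 3, case S and D disjoint: optimal CM cost.
opt = min(steiner_cost(S | {v}) + steiner_cost(D | {v}) for v in V)
```

Here the minimum, 4, is attained at the centre v = c, with ST(S ∪ {c}) and ST(D ∪ {c}) each of cost 2.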
4.4 Proof of Theorem 1
In order to prove Theorem 1 we need some intermediate definitions and results.
Two roots r_1, r_2 ∈ R(I) are called mergeable if there exists an instance I′ with
cost(I′) ≤ cost(I), R(I′) ⊆ R(I), and |R(I′) ∩ {r_1, r_2}| = 1.
Given any multi-digraph G, let indeg_G(x) represent the number of different tails of
arcs entering x in G, that is, indeg_G(x) = |{y | (y, x) ∈ A(G)}|.
Definition 2 Let I be an instance, and let α = (x_1, …, x_n) be an ascending path in I. An
ascending path β = (y_1, …, y_m) is called an outlet of α if α and β are arc-disjoint, y_1 = x_i
for some 1 < i < n, and the path (x_1, …, x_i = y_1, …, y_m) is ascending.
If α′ = (x_i, …, x_n) has no outlets, then β is called the last outlet of α and α′ is called the
end part of α.
Lemma 4.4.1 Let I be a minimum cost instance with indeg_{C̄_I}(r) ≥ 2 for all r ∈ R(I).
Let r_1, r_2 ∈ R(I), with r_1 ≠ r_2, and suppose there exist u, v ∈ V and the ascending paths:
• α(u, r_1) from u to r_1;
• β(u, r_2) from u to r_2, such that it has no outlets leading to r_1;
• α(v, r_2) from v to r_2.
If cost(α(u, r_1)) ≤ cost(α(v, r_2)) and α(v, r_2) has no outlets, then the roots r_1 and r_2 are
mergeable.
Proof. Let I − α(v, r_2) be the graph obtained from I by deleting all the arcs on the path
α(v, r_2). By hypothesis, α(v, r_2) has no outlets. Therefore, in I − α(v, r_2) only the
nodes in the tree of C_I rooted at r_2 may miss ascending paths from some source in S. Thus, if we add to I − α(v, r_2) an ascending path from a to r_2, for all a ∈ S, the resulting graph will again be an instance. We now show how to do this so that the new instance
I′ still has minimum cost. The transformations are shown in Figure 3.
Figure 3: I′ is obtained from I by deleting α(v, r_2) and adding the path (x_m, x_{m−1}, …, x_2, x_1).
Notice that, since α(v, r_2) has no outlets, deleting the path α(v, r_2) does not destroy
any ascending path leading to the root r_1. Let r_1 be complete at time t. In order to
get the instance I′, we first add to I − α(v, r_2) a path γ from r_1 to r_2 such that γ
is ascending with all labels larger than t. This is done as follows. Let α(u, r_1) = (u =
x_1, …, x_{m−1}, x_m = r_1); we add the arc (x_m, x_{m−1}) labelled t + 1; then we add the arc
(x_{m−1}, x_{m−2}) labelled t + 2, and so on, until we add the arc (x_2, x_1) labelled t + m − 1.
In this way we have added the inverse path of α(u, r_1), which we denote by α^{-1}(u, r_1). By
concatenating the path α^{-1}(u, r_1) with the ascending path β(u, r_2), we get a path from r_1
to r_2, but it might not be ascending. In order to get the desired path γ, we add t + m − 1
to the label of every arc on β(u, r_2). Notice that the last relabelling could destroy some
ascending path. In order to preserve all the ascending paths, we also add t + m − 1 to each arc
on any outlet of β(u, r_2). We are sure that the last relabelling cannot postpone the time
at which r_1 is complete, because of the assumption that no outlet of β(u, r_2) leads to r_1. This gives the new instance I′ with

    cost(I′) = cost(I) − cost(α(v, r_2)) + cost(α^{-1}(u, r_1))
             = cost(I) − cost(α(v, r_2)) + cost(α(u, r_1))
             ≤ cost(I).

Because of the new ascending path γ, we have R(I′) ⊆ R(I) \ {r_2}, and the lemma holds. □
In the sequel, any quoted path is intended to be an ascending path.
Lemma 4.4.2 Let I be a minimum cost instance with indeg_{C̄_I}(r) ≥ 2 for all r ∈ R(I).
For each r_1, r_2 ∈ R(I), r_1 ≠ r_2, there exist u, v and the following distinct paths in I:
• α(u, r_1) from u to r_1;
• β(u, r_2) from u to r_2, such that it has no outlets leading to r_1;
• α(v, r_2) from v to r_2;
• β(v, r_1) from v to r_1, such that it has no outlets leading to r_2.
Proof. Consider r_1, r_2 ∈ R(I) and a ∈ S with r_1 ≠ a ≠ r_2. Notice that such a source a
exists: indeed, no minimum cost instance can have both r_1, r_2 ∈ S with r_1 ≠ r_2, if |S| ≥ 2.
Since r_1 and r_2 are roots of C_I, they must receive the block B(a) of a. Therefore there
exist two ascending paths, π = (a = x_1, …, x_m = r_1) and σ = (a = y_1, …, y_n = r_2). In
general the path σ could have some outlet that leads to r_1. Among these outlets, let α(u, r_1)
be the last one, i.e., let α(u, r_1) = (u = y_i = z_1, …, z_l = r_1), for some 1 < i < n, be
the outlet leading to r_1 such that the path β(u, r_2) = (u = y_i, …, y_n = r_2) has no outlets
that lead to r_1. Consider now any a′ ∈ S, where a′ and a need not be distinct. By
analogous reasoning, we can show that there exist a node v and the two paths α(v, r_2)
and β(v, r_1) such that β(v, r_1) has no outlets leading to r_2. Notice that, since indeg_{C̄_I}(r_1),
indeg_{C̄_I}(r_2) ≥ 2, it is always possible to find α(v, r_2) and β(v, r_1) such that they are
distinct from α(u, r_1) and β(u, r_2). □
We then have the following lemma.
Lemma 4.4.3 Let I be a minimum cost instance with |R(I)| ≥ 2 and indeg_{C̄_I}(r) ≥ 2, for
all r ∈ R(I). Then R(I) contains two mergeable roots.
Proof. Suppose on the contrary that there is no pair of mergeable roots in R(I).
Step 0. Let r_1, r_2 ∈ R(I). Consider u_1, u_2 ∈ V(I) and the paths α(u_1, r_1), β(u_1, r_2),
α(u_2, r_2) and β(u_2, r_1) given by Lemma 4.4.2. W.l.o.g. suppose that

    cost(α(u_1, r_1)) ≤ cost(α(u_2, r_2)).   (4.6)

Apply Lemma 4.4.1 to the paths α(u_1, r_1), β(u_1, r_2) and α(u_2, r_2): the hypothesis that r_1
and r_2 are not mergeable, together with (4.6), necessarily implies that α(u_2, r_2) has at least one outlet.
See Figure 4(a).
Step 1. This step is shown in Figure 4(b). Since α(u_2, r_2) has at least one outlet, there
exist a root r_3 ∈ R(I) and a node u_2′ on α(u_2, r_2) such that α(u_2′, r_3) = (u_2′, …, r_3) is the last outlet of α(u_2, r_2). Denote by β_2 the end part of α(u_2, r_2). Applying Lemma 4.4.2
to the roots r_2 and r_3, we can also find a node u_3 and the paths β(u_3, r_2) and α(u_3, r_3)
leading from u_3 to r_2 and r_3, respectively. Consider now the roots r_2, r_3 and the paths β_2,
β(u_3, r_2), and α(u_3, r_3). By Lemma 4.4.1, recalling that β_2 is without outlets and r_2, r_3
are not mergeable, it follows that

    cost(β_2) < cost(α(u_3, r_3)).   (4.7)

Applying Lemma 4.4.1 again, to the paths β_2, α(u_3, r_3) and α(u_2′, r_3): the hypothesis that
r_2 and r_3 are not mergeable, together with (4.7), necessarily implies that α(u_3, r_3) has at least one
outlet.
We can iterate the reasoning of Step 1 as follows.
Step i − 1. From Step i − 2 we know that there exist a node u_i and a path α(u_i, r_i) having
at least one outlet. Therefore, we can find a root r_{i+1} and a node u_i′ on α(u_i, r_i) such
that α(u_i′, r_{i+1}) = (u_i′, …, r_{i+1}) is the last outlet of α(u_i, r_i). Denote by β_i the end part
of α(u_i, r_i), that is, β_i = (u_i′, …, r_i). By following the same reasoning as in Step 1, we
can conclude that there exist a node u_{i+1} and a path α(u_{i+1}, r_{i+1}) that has at least one
outlet.
Figure 4: (a) the paths α(u_1, r_1), β(u_1, r_2), α(u_2, r_2), β(u_2, r_1) of Step 0; (b) the paths α(u_2′, r_3), α(u_3, r_3), β(u_3, r_2) of Step 1.
We execute Steps 1 to q − 1, where q is chosen as the smallest integer such that

    r_{q+1} = r_i for some 2 ≤ i < q.   (4.8)

Notice that q ≤ 2|R(I)| + 1. We now show that

    cost(β_2) > cost(β_3) > ⋯ > cost(β_{q−1}) > cost(β_q).   (4.9)

Given i, with 2 ≤ i < q, consider the roots r_i and r_{i+1} and the paths β_i, α(u_i′, r_{i+1}), and
β_{i+1} (as obtained in Steps i − 2 and i − 1). If we apply Lemma 4.4.1 to these paths,
recalling that β_i has no outlets, we can deduce that cost(β_i) > cost(β_{i+1}). Hence (4.9)
holds.
By the definition of q given in (4.8), we know that r_{q+1} = r_i for some 2 ≤ i < q. Let us
then apply Lemma 4.4.1 to the roots r_q and r_i and the paths β_q, β_i, and α(u_q′, r_{q+1} = r_i):
recalling that β_i has no outlets, we get that the inequality cost(β_i) < cost(β_q) must hold.
This contradicts (4.9).
Therefore, the assumption that R(I) does not contain a pair of mergeable roots leads to
a contradiction, and the lemma holds. □
We can now complete the proof of Theorem 1. We show that, given a minimum cost
instance I with |R(I)| > 1, there exists another instance I*, with |R(I*)| = 1, such that
cost(I*) = cost(I). We distinguish two cases.
Case 1: there exists r ∈ R(I) with indeg_{C̄_I}(r) = 1. We first notice that in such a case
r ∈ S. Indeed, assuming r ∈ V \ S and that r has only one incoming arc (x, r) ∈ A(C̄_I),
we necessarily have ℓ(y, x) < ℓ(x, r) for the arcs (y, x) through which the source blocks reach x, so that x is complete at a time smaller than
ℓ(x, r). Thus (x, r) ∈ A(C_I) and r is not a root in R(I).
Consider then r ∈ R(I) ∩ S with indeg_{C̄_I}(r) = 1. We now show that there exists an
instance I* such that R(I*) = {r} and cost(I*) = cost(I). Let (x, r) ∈ A(C̄_I) be the arc with label
ℓ(x, r) = t such that r is complete at time t. Since r is complete at time t = ℓ(x, r), we get
1) for all r′ ∈ S \ {r} there exists an ascending path γ(r′) = (r′, …, x, r), from r′ to r.
Moreover, we must have
2) there is no ascending path (r, …, y, x) from r to x with ℓ(y, x) < ℓ(x, r);
otherwise, considering all the paths in 1) and the path (r, …, y, x), we would have x
complete at time ℓ(y, x) < ℓ(x, r); then (x, r) ∈ A(C_I), which would imply x ∈ R(I) and
r ∉ R(I).
To get the instance I*, let us make the following modifications on the instance I:
• leave unchanged the labels of all the arcs on the paths γ(r′), for each r′ ∈ S \ {r};
• increase the labels of all the other arcs of I by the value t = ℓ(x, r).
In this way we obtain a multi-digraph I* which is again an instance. Indeed, the
paths γ(r′) in 1) make r complete at time t in I*. Also, since r ∈ S, we have that for
all q ∈ R(I) \ {r} there exists in I an ascending path (r, …, q). Because of the above
modifications, all these paths have labels larger than t in I*. Hence I* contains an ascending
path from r to every node q ∈ R(I) and therefore to every node b ∈ D. This implies that I* is an instance. Obviously, I* has the same cost as I; moreover, we also have
R(I*) = {r}.
Case 2: indeg_{C̄_I}(r) ≥ 2 for each r ∈ R(I). Lemma 4.4.3 implies that R(I) contains
two mergeable roots, that is, there exists an instance I′ with cost(I′) ≤ cost(I) and
|R(I′)| ≤ |R(I)| − 1.
The theorem follows by iteratively applying Case 2 until we get an instance I* such that
either |R(I*)| = 1 or I* satisfies Case 1. □
4.5 Algorithms for Concurrent Multicast
In this section we present algorithms for the concurrent multicast problem.
If |S| = 1 and D = V (or S = V and |D| = 1), the CM problem easily reduces to
the construction of a minimum spanning tree of the communication graph H. When H
is the complete graph on V and S = D = V, Wolfson and Segall [105] proved that the
problem is again equivalent to the construction of a minimum spanning tree of H. We
notice that our Theorem 3 proves that, for S = D = V, this result of [105] is true for any
communication graph H = (V, E). In general, for arbitrary S and D with S ≠ V or D ≠ V, by
the results of the previous Sections 4.3 and 4.4 and by the NP-completeness of determining
Steiner trees [104], we obtain that determining a CM minimum cost instance is in general
NP-hard.
4.5.1 Approximation Algorithms
We present here a distributed approximation algorithm for the CM problem. We assume,
as in other papers (see [105, 60, 61]), that each node knows the identity of all the other
nodes and the communication costs of the edges. The algorithm CM(H, S, D) given in Figure 5 is executed by each node.
The trees T_S and T_D are subgraphs of H with node sets such that S ∪ {r} ⊆ V(T_S) and
D ∪ {r} ⊆ V(T_D), for some node r; a more detailed description will be given later. The
trees T_S and T_D are identical at all nodes, given that the construction procedure is
identical at all nodes.
CM(H, S, D): executed at node x, given the graph H and the sets S and D
1. Construct the trees T_S and T_D, and root them at the (common) node r.
2. [A node in (V(T_S) ∩ V(T_D)) \ {r} executes both 2.1 and (afterwards) 2.3; node r executes 2.2.]
2.1. If in V(T_S) − {r}, wait until a message has been received from every son in T_S. Send to the parent in T_S
a message containing all blocks received plus the block B(x), if x ∈ S.
2.2. If equal to r, wait until a message has been received from every son in T_S. Send to each son in T_D a message
containing all blocks received plus the block B(r), if r ∈ S; that is, send all blocks
B(a), for each a ∈ S.
2.3. If in V(T_D) − {r}, wait until the message from the parent in T_D has been received. Send to each son in T_D
a message containing all blocks B(a), for each a ∈ S.
Figure 5
The algorithm is asynchronous and does not require nodes to know when the blocks of the
nodes in S are ready, nor the time messages take to travel between pairs of nodes. It is
easy to see that the algorithm terminates and that each destination node in D learns the blocks
of all the sources in S. We now consider its communication cost. Since the algorithm uses
each edge of T_S and T_D only once, we immediately get that its cost is

    cost(T_S) + cost(T_D).

Let ST_apx(X) denote the tree obtained by using an approximation algorithm for the
construction of the Steiner tree ST(X). Several efficient algorithms have been proposed
in the literature. The simplest algorithm [104] is greedy; it has complexity O(|V|²) and
approximation factor 2, that is, cost(ST_apx(X))/cost(ST(X)) ≤ 2, for any set X. The
polynomial algorithm with the best known approximation factor for Steiner trees in graphs
has been given in [70]; it has approximation factor 1.598. Having fixed an approximation algorithm, we can choose r so that

    cost(ST_apx(S ∪ {r})) + cost(ST_apx(D ∪ {r})) = min_{v∈V} {cost(ST_apx(S ∪ {v})) + cost(ST_apx(D ∪ {v}))}

and then choose the trees T_S and T_D used in the algorithm CM(H, S, D) as ST_apx(S ∪ {r}) and ST_apx(D ∪ {r}), respectively. Therefore, by using the best approximation algorithm
for the construction of the trees, we get that the cost of the algorithm CM(H, S, D) is

    cost(T_S) + cost(T_D) = min_{v∈V} {cost(ST_apx(S ∪ {v})) + cost(ST_apx(D ∪ {v}))}
                          ≤ cost(ST_apx(S ∪ {s})) + cost(ST_apx(D ∪ {s}))
                          ≤ 1.598 (cost(ST(S ∪ {s})) + cost(ST(D ∪ {s}))),

for any node s. By choosing s as the node that attains the minimum in the lower bound
(4.1), we get
Theorem 4 The ratio between the cost of CM(H, S, D) and the cost of a minimum cost
algorithm is upper bounded by 1.598.
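A standard way to realize ST_apx with approximation factor 2, in the spirit of the greedy algorithm cited as [104] though not necessarily identical to it, is to take an MST of the metric closure over the terminals and expand each closure edge into a shortest path. The sketch below uses our own names; the expanded union may contain cycles, and a final pruning pass (omitted) can only decrease the cost:

```python
import heapq
from itertools import combinations

def dijkstra(adj, src):
    """Shortest path distances and predecessors from src."""
    dist, prev = {src: 0}, {}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    return dist, prev

def steiner_2apx(adj, terminals):
    """Factor-2 Steiner sketch: MST of the metric closure on the terminals,
    with each chosen closure edge expanded into a shortest path of H."""
    terminals = list(terminals)
    sp = {t: dijkstra(adj, t) for t in terminals}
    # Kruskal on the complete closure graph over the terminals.
    closure = sorted((sp[a][0][b], a, b) for a, b in combinations(terminals, 2))
    parent = {t: t for t in terminals}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    tree_edges = set()
    for d, a, b in closure:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
            v = b                          # unfold the shortest a-b path
            while v != a:
                u = sp[a][1][v]
                tree_edges.add(frozenset((u, v)))
                v = u
    return tree_edges

def tree_cost(adj, tree_edges):
    return sum(adj[u][v] for u, v in (tuple(e) for e in tree_edges))
```

On a unit-weight star with centre c and terminals {a, b, d}, the expansion reuses the spoke edges and achieves the optimal cost 3.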
4.5.2 On-line algorithms
In this section we consider the dynamic concurrent multicast problem, in which the
sets of nodes to be connected vary over time. We will show that the characterization
we gave for the optimal cost solution to the concurrent multicast problem allows us to derive
efficient algorithms for the dynamic version as well.
A dynamic algorithm receives in input a sequence of requests r_i = (x_i, s_i, σ_i), for
i = 1, 2, …, where x_i is a node of H, the component s_i ∈ {S, D} specifies whether x_i is a source
or a destination node, and σ_i ∈ {add, remove} specifies whether the node x_i must be added to or
removed from the set s_i. As an example, (x, D, add) defines the operation of adding x to
the current set of destinations. The sets

    S_i = {a | there exists j ≤ i with r_j = (a, S, add) and r_ℓ ≠ (a, S, remove) for each j < ℓ ≤ i},
    D_i = {a | there exists j ≤ i with r_j = (a, D, add) and r_ℓ ≠ (a, D, remove) for each j < ℓ ≤ i}

are the source and destination sets on which we are required to perform CM after the
request r_i.
The algorithm will be the same as that given in Figure 5, but we will make a different choice
of the trees T_S and T_D, in order to have the possibility of dynamically and efficiently modifying
them according to the sequence of requests.
We first consider the case of no remove requests. W.l.o.g., assume that the first two
requests are r_1 = (a, S, add) and r_2 = (b, D, add), that is, S_2 = {a} and D_2 = {b}. We
simply connect a and b by a minimum weight path π in H; formally, we have T_{S_2} = ({a}, ∅) and T_{D_2}
coincides with the path π. In general, for the sake of efficiency, with the request r_i
we want to add the new node without modifying the existing trees [73]. If r_i requires
adding a′ to S_{i−1}, we connect a′ to the nodes in T_{S_{i−1}} by a shortest path in H from a′ to a
node in T_{S_{i−1}}. Similarly for r_i = (b′, D, add). Therefore, at each step we have the tree T_{S_i}
rooted at a which spans all nodes in S_i and a tree T_{D_i} rooted at a which spans all nodes
in D_i ∪ {a}. Using the results proved in [73] we can get

    cost(T_{S_i}) = O(log |S_i|) · cost(ST(S_i))   (4.10)

and

    cost(T_{D_i}) = O(log |D_i|) · cost(ST(D_i)) + cost(π).   (4.11)
Denote by CM(i) the algorithm of Figure 5 when using the trees T_{S_i} and T_{D_i}. By (4.10),
(4.11), and (4.1) we get the following result.
Theorem 5 Consider a sequence of add requests r_1, …, r_k, and let n_i = max{|S_i|, |D_i|}, for i = 1, …, k. The ratio between the cost of CM(i) and the cost of an optimal CM
algorithm on S_i and D_i is O(log n_i).
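The add-only maintenance rule behind Theorem 5, namely attaching each newly requested node to the current tree by a shortest path in H, can be sketched as follows (illustrative names; the O(log n) guarantee is the one quoted from [73], not re-proved here):

```python
import heapq

def shortest_path_to_set(adj, src, targets):
    """Cheapest path from src to any node of `targets` (Dijkstra)."""
    dist, prev = {src: 0}, {}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u in targets:                     # nearest target popped first
            path = [u]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return d, path[::-1]
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    return float("inf"), None

class GreedyOnlineTree:
    """Maintains a tree over the requested nodes, add requests only."""
    def __init__(self, adj, root):
        self.adj = adj
        self.nodes = {root}
        self.edges = set()
        self.cost = 0

    def add(self, x):
        if x in self.nodes:
            return
        d, path = shortest_path_to_set(self.adj, x, self.nodes)
        self.cost += d                        # attach x by a cheapest path
        for u, v in zip(path, path[1:]):
            self.edges.add(frozenset((u, v)))
        self.nodes.update(path)
```

For instance, on the unit-weight path 0-1-2-3, adding node 3 to a tree rooted at 0 routes through 1 and 2, after which adding 2 costs nothing.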
In case of remove requests, it is not possible to have a bounded performance ratio if
no rearrangements (changes in the structure) of the trees are allowed after requests [73].
Several papers have recently considered the problem of efficiently maintaining dynamic
Steiner trees, that is, trees subject to a limited number of rearrangements [2, 24, 73]. It is clear from
the above results that any algorithm for the dynamic Steiner tree problem can be applied
to dynamically maintain the trees T_{S_i} and T_{D_i}, obtaining equivalent results for our CM
problem.
4.6 Concurrent multicast without block concatenation
In this section we consider the concurrent multicast problem under the hypothesis that
each message transmission must consist of exactly one block B(a), for some a ∈ S. Under the hypothesis of this section we have the following lower bound.
Lemma 4.6.1 For any instance I, it holds that cost(I) ≥ Σ_{a∈S} cost(ST(D ∪ {a})).
Proof. Since each message can carry exactly one block, we can label each arc (x, y) of I also
with the source a ∈ S such that the corresponding message sent from x to y consists of the block
B(a) of a. For each a ∈ S we can then consider the subgraph I_a which consists of all the
arcs of the multi-digraph I with label equal to a. For each a ∈ S the subgraph I_a must
contain a path from a to each b ∈ D. This implies that cost(I_a) ≥ cost(ST(D ∪ {a})) and

    cost(I) ≥ Σ_{a∈S} cost(I_a) ≥ Σ_{a∈S} cost(ST(D ∪ {a})). □
Consider now the following algorithm. Again we assume that each node knows the
identity of all the other nodes and the sets S and D. The algorithm BLOCK-CM(H, S, D) given in Figure 6 is executed by each node. The trees T_a are identical at all nodes,
given that the construction procedure is identical at all nodes.
BLOCK-CM(H, S, D): executed at each node, given the graph H and the sets S and D
1. For each a ∈ S, construct a tree T_a spanning a and all nodes in D.
2. For each T_a: if the node is a, then send B(a) to all its neighbours in T_a; otherwise, wait until B(a) has been received
from one neighbour in T_a and resend it to each of the other neighbours (if any) in T_a.
Figure 6
We immediately get that the above algorithm is correct and that its cost is

    Σ_{a∈S} cost(T_a).   (4.12)

If each T_a were a Steiner tree on D ∪ {a}, by Lemma 4.6.1 we would get an optimal
cost algorithm. The NP-hardness of the Steiner tree problem [104] implies that the CM
problem without block concatenation is NP-hard as well.
Constructing T_a by the approximation algorithm for the Steiner tree ST(D ∪ {a}) given in [70], we get

    cost(T_a) < 1.598 · cost(ST(D ∪ {a})).   (4.13)

By (4.12) and (4.13) and from Lemma 4.6.1 we obtain
Theorem 6 The ratio between the cost of BLOCK-CM(H, S, D) and the cost of a minimum cost algorithm is upper bounded by 1.598.
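The forwarding step of BLOCK-CM over one tree T_a is a simple flood in which every tree edge carries exactly one transmission of B(a). The following sketch (our names, with the trees supplied precomputed rather than built by the approximation of [70]) simulates it and evaluates the cost (4.12):

```python
def flood(tree_adj, source):
    """Step 2 of BLOCK-CM on one tree: B(source) crosses each tree edge once;
    returns the set of reached nodes and the number of transmissions."""
    reached, frontier, calls = {source}, [source], 0
    while frontier:
        nxt = []
        for u in frontier:
            for v in tree_adj[u]:
                if v not in reached:
                    reached.add(v)
                    nxt.append(v)
                    calls += 1            # one transmission of B(source)
        frontier = nxt
    return reached, calls

def tree_edges(tree_adj):
    return {frozenset((u, v)) for u in tree_adj for v in tree_adj[u]}

def block_cm_cost(trees, weight, S, D):
    """Total cost (4.12): the sum over a in S of cost(T_a)."""
    total = 0
    for a in S:
        reached, _ = flood(trees[a], a)
        assert set(D) <= reached          # every destination receives B(a)
        total += sum(weight[e] for e in tree_edges(trees[a]))
    return total
```

For example, on a star with centre c, sources {a, b}, destination {d}, and unit weights, each of the two trees has cost 2, so the total cost is 4.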
It is clear that, since the algorithm is based on the use of approximate Steiner trees, all
the discussion of Section 4.5.2 on the dynamic implementation applies to this case as well.
4.7 Communication time and communication complexity
In this section we evaluate the time and communication complexity of the algorithms given
in Sections 4.5 and 4.6. Let I be an instance of concurrent multicast from S to D on a
graph H. Denote by τ_a the time needed for node a ∈ S to have its block ready, and let
τ = {τ_a : a ∈ S}. Moreover, denote by t(a, b) the travel time, i.e., the time needed for a
message from node a to reach its neighbour b, and let T = {t(a, b) : (a, b) ∈ E(H)}.
The communication time of I, time(I), represents the minimum time required to perform
the calls in the order specified by the temporal labels of the arcs of I, with respect to the
sets τ and T.
For each a ∈ S and b ∈ D, a ≠ b, we denote by t*(a, b) the travel time of a shortest path from a to b
with respect to the travel times in T. According to [105], the following lower bound on
the communication time of any CM instance I holds:

    time(I) ≥ max_{a∈S, b∈D} {τ_a + t*(a, b)}.   (4.14)
We now evaluate the communication time of the algorithm CM(H, S, D). Following the
same reasoning as in [105, Theorem 5], we can get the following result.
Theorem 7 Denote by I_min a concurrent multicast instance of minimum communication
time and by I the instance executed by the algorithm CM(H, S, D). Then time(I)/time(I_min) ≤ |V|.
Moreover, following [105], it is easy to show that the above upper bound is tight.
The communication complexity of an instance I is defined as comm(I) = cost(I) · time(I).
Theorem 8 Denote by I_min a concurrent multicast instance of minimum communication complexity and by I the instance executed by the algorithm CM(H, S, D). Then
comm(I)/comm(I_min) ≤ 1.598 · |V|.
Proof. By Theorem 4 and Theorem 7 we have

    comm(I)/comm(I_min) = (cost(I)/cost(I_min)) · (time(I)/time(I_min)) ≤ 1.598 · |V|. □
We now evaluate the communication time of the algorithm BLOCK-CM(H, S, D). Let I be an instance executed by the algorithm BLOCK-CM(H, S, D) and, for each a ∈ S, let T_a be the tree spanning a and all nodes in D used in the algorithm. For each pair of
nodes a ∈ S and b ∈ D, denote by π(a, b) = (a = i_0, i_1, …, i_h = b) the path from a to
b in T_a. Also let δ_{a,b} denote the time it takes for the block B(a) of node a ∈ S to reach
node b ∈ D using the path π(a, b). In other words, δ_{a,b} represents the total travel time of
B(a), taking into account, for l = 0, 1, …, h − 1, both the travel time t(i_l, i_{l+1}) of each arc
(i_l, i_{l+1}) on π(a, b) and the time that B(a) waits at node i_l on π(a, b) before being sent to
i_{l+1}. Therefore the communication time of the algorithm BLOCK-CM(H, S, D) is

    time(I) = max_{a∈S, b∈D} {τ_a + δ_{a,b}}.
Theorem 9 Denote by I_min a concurrent multicast instance of minimum communication
time and by I the instance executed by the algorithm BLOCK-CM(H, S, D). Then
time(I)/time(I_min) ≤ |V| + |S|.
Proof. Let s ∈ S and d ∈ D be the nodes such that time(I) = max_{a∈S, b∈D} {τ_a + δ_{a,b}} = τ_s + δ_{s,d}. Each node i_l on the path π(s, d) = (s = i_0, i_1, …, i_h = d) can send only one block at
a time to i_{l+1}. Therefore, when B(s) arrives at i_l, the link (i_l, i_{l+1}) can be busy because node
i_l has to send some other blocks to i_{l+1} before sending B(s). Let b_l denote the number
of blocks sent from i_l to i_{l+1} before sending the block B(s). The time elapsed from the
receipt of B(s) at i_l to the receipt of B(s) at i_{l+1} is b_l · t(i_l, i_{l+1}) + t(i_l, i_{l+1}). Hence,
denoting by t_max = max{t(i, j) ∈ T} the largest travel time,

    τ_s + δ_{s,d} = τ_s + b_0·t(i_0, i_1) + t(i_0, i_1) + ⋯ + b_{h−1}·t(i_{h−1}, i_h) + t(i_{h−1}, i_h)
                 ≤ τ_s + b_0·t_max + t_max + ⋯ + b_{h−1}·t_max + t_max ≤ τ_s + |S|·t_max + h·t_max
                 ≤ τ_s + |S|·t_max + (|V| − 1)·t_max.

The last inequality holds because the number h of hops of the path π(s, d) can be at most
|V| − 1. Recalling (4.14), we have

    time(I)/time(I_min) ≤ (τ_s + |S|·t_max + (|V| − 1)·t_max) / max_{a∈S, b∈D} {τ_a + t*(a, b)} ≤ 1 + |S| + |V| − 1 = |S| + |V|.

The last inequality holds since both τ_s and t_max are not larger than max_{a∈S, b∈D} {τ_a + t*(a, b)}. □
We show now that the above upper bound is tight. Consider a communication network modelled by a complete graph H = ({0, 1, ..., |V|−1}, E) with source set S = {0, 1, ..., |S|−1}, |S| ≤ |V|, having the following cost function:

cost(i,j) = 1 if j = i+1, and cost(i,j) = c otherwise,

for some constant c. It is clear that if c is large enough, the algorithm BLOCK-CM(H,S,D) can use only the edges of the path

0 - 1 - 2 - ··· - (|V|−2) - (|V|−1).

Suppose that t(i,j) = 1 for each (i,j) ∈ E and ρ_i = i for each i ∈ S. If |V|−1 ∈ D, it follows that for the instance I executed by the algorithm, the block B(0) waits in each node i ∈ S \ {0} exactly 1 time unit before being sent to i+1, i.e., B(0) arrives at node i at time 2i−1 for each i ∈ S. Therefore, we have time(I) = 2|S|−1 + |V|−|S| = |V| + |S| − 1, while if we minimize only the communication time, regardless of the communication cost, we have time(I_min) = 1.
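A toy discrete-event simulation of this example (my own sketch, not the thesis's construction) exhibits the Θ(|V| + |S|) behaviour. Under the particular scheduling convention chosen here, where a source forwards its own block before the blocks it relays, the last block reaches node |V|−1 at time |V|+|S|−2, within an additive constant of the |V|+|S| bound of Theorem 9.

```python
def simulate_path_blockcm(V, S):
    """Toy BLOCK-CM simulation on the path 0-1-...-(V-1) with t(i,j) = 1 and
    release times rho_i = i for the sources 0..S-1.  Each link forwards one
    block per time unit; a source node sends its own block first, then the
    relayed blocks in order of arrival.  Returns the time at which the last
    block reaches node V-1."""
    # arrival[b][v] = time at which block b becomes available at node v
    arrival = {b: {b: b} for b in range(S)}       # block b released at node b at time b
    for v in range(V - 1):                         # process the links left to right
        queue = sorted((b for b in arrival if v in arrival[b]),
                       key=lambda b: (b != v, arrival[b][v]))
        busy_until = 0
        for b in queue:
            start = max(arrival[b][v], busy_until)  # wait while the link is busy
            arrival[b][v + 1] = start + 1           # unit travel time
            busy_until = start + 1
    return max(arrival[b][V - 1] for b in range(S))
```

For instance, with |V| = 5 and |S| = 3 the simulation completes at time 6 = |V| + |S| − 2.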
Finally, we bound the communication complexity of the algorithm BLOCK-CM(H,S,D).

Theorem 10 Denote by I_min a concurrent multicast instance of minimum communication complexity and by I the instance executed by the algorithm BLOCK-CM(H,S,D). Then

comm(I)/comm(I_min) ≤ 1.598 · (|V| + |S|).

Proof. By Theorems 6 and 9 we get

comm(I)/comm(I_min) ≤ (cost(I)/cost(I_min)) · (time(I)/time(I_min)) ≤ 1.598 · (|V| + |S|). □
Chapter 5
Broadcasting in hypercubes and star graphs with
dynamic faults
5.1 Introduction
Broadcasting in an interconnection network in the presence of failures is an important issue in distributed computing. A recent survey by Pelc [92] presents a thorough discussion of a variety of fault-tolerant information diffusion problems in interconnection networks. In this chapter we consider the shouting communication mode, in which any node can inform all its neighbours in one time unit. Under this assumption it is immediate to see that broadcasting can be accomplished in a number of time units equal to the diameter of the network, and this bound is clearly optimal. Now, assume that at any time instant a number of message transmissions (calls) smaller than the edge-connectivity of the network can fail. The problem is to find an upper bound on the number of time units necessary to complete broadcasting under this additional assumption. In [54] the problem of estimating the broadcasting time in the n-dimensional hypercube was studied in this model; in particular, the authors proved that n + 2⌈log n⌉ + 6 time units are sufficient to perform broadcasting in the n-dimensional hypercube under the assumption that at any time unit at most n−1 message transmissions can fail. The same problem has been further investigated in [26].
Let D be the diameter of the network and let k (smaller than the edge-connectivity) be the maximum number of faulty calls per time unit. The authors of [26] proved:

• for fixed D, O(k^{D/2−1}) time units are sufficient to perform broadcasting, and there exist networks for which this bound is best possible;

• for fixed k, O(D^{k+1}) time units are sufficient, and there exist networks for which this bound is best possible.

Moreover, they proved that for multidimensional tori, O(D) time units are sufficient.

In this chapter we shall prove that for networks of practical interest it is possible to obtain much tighter results. We first consider the problem of broadcasting in the n-dimensional hypercube in the presence of up to n−1 transmission failures at any time instant. We improve on the results of [54] by showing that n + 7 time units are sufficient to accomplish broadcasting. We also consider the analogous problem for the star network. We prove that broadcasting in the star graph can be accomplished in only 11 time units more than would be necessary to broadcast in the absence of transmission failures. The star interconnection network has been proposed as a promising alternative to the hypercube in [7]. Since then, it has been the subject of a considerable amount of study showing, among other things, that the star graph exhibits superior performance with respect to the hypercube in several respects.
5.2 Broadcasting in the Hypercube
Let {0,1}^n be the set of all binary vectors of length n. The n-dimensional hypercube is the graph H_n = (V, E) with V = {0,1}^n and E = {(x,y) : x, y ∈ V, d_H(x,y) = 1}, where d_H(x,y) is the Hamming distance between x and y, that is, the number of components on which x and y differ. For any x ∈ V and integer i, 1 ≤ i ≤ n, let us denote by x^(i) the vertex of H_n which differs from x only in the i-th coordinate.
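These definitions are easy to check on small instances; the following sketch (identifier names are mine) builds H_n and the map x ↦ x^(i):

```python
from itertools import combinations, product

def hamming(x, y):
    """d_H(x, y): number of coordinates on which x and y differ."""
    return sum(a != b for a, b in zip(x, y))

def hypercube(n):
    """H_n = (V, E): V = {0,1}^n, with an edge between vectors at Hamming distance 1."""
    V = list(product((0, 1), repeat=n))
    E = [(x, y) for x, y in combinations(V, 2) if hamming(x, y) == 1]
    return V, E

def flip(x, i):
    """x^(i): the vertex differing from x only in the i-th coordinate (1-indexed)."""
    return x[:i - 1] + (1 - x[i - 1],) + x[i:]
```

For n = 3 this yields the expected 2^3 = 8 vertices and 3·2^2 = 12 edges.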
Let T(n) be the minimum possible number of time units necessary to broadcast in H_n under the assumptions of the shouting model and that at each time instant at most n−1 calls may fail. We need the following facts.

Fact 2 [54] If at time T at most n vertices of H_n are uninformed, then at time T+2 all vertices of H_n are informed.

Fact 3 [102] Between any two nodes x, y in H_n there are n edge-disjoint paths, of which d_H(x,y) have length d_H(x,y) and n − d_H(x,y) have length at most d_H(x,y) + 2.
The following theorem is the main result of this section.
Theorem 11

n + 2 ≤ T(n) ≤ n + 7.
Proof. We first prove the upper bound. Let k = ⌈n/2⌉. Any set of n−k indices I = {i_1, ..., i_{n−k}} ⊆ {1, ..., n} and n−k bits x_{i_1}, ..., x_{i_{n−k}} ∈ {0,1} determine a k-subcube of H_n whose 2^k vertices correspond to the 2^k binary vectors of length n that have fixed values x_{i_1}, ..., x_{i_{n−k}} in the positions i_1, ..., i_{n−k} and arbitrary values elsewhere. For any set of indices I = {i_1, ..., i_{n−k}}, let us denote by H_I the set of all 2^{n−k} vertex-disjoint k-subcubes of H_n obtained by fixing the values x_{i_1}, ..., x_{i_{n−k}} in the positions i_1, ..., i_{n−k} in all the 2^{n−k} possible ways. Therefore, any k-subcube in H_I is determined by a given assignment of values to the components x_{i_1}, ..., x_{i_{n−k}}.
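The family H_I can be enumerated explicitly. This sketch (my own helper, not from the thesis) confirms that H_I consists of 2^{n−k} pairwise vertex-disjoint k-subcubes covering {0,1}^n:

```python
from itertools import product

def subcube_family(n, I):
    """H_I: for each of the 2^{|I|} assignments of bits to the positions in I
    (1-indexed, |I| = n - k), the k-subcube of all vectors agreeing with it."""
    fixed_pos = sorted(I)
    free_pos = [p for p in range(1, n + 1) if p not in I]
    family = []
    for fixed in product((0, 1), repeat=len(fixed_pos)):
        cube = set()
        for free in product((0, 1), repeat=len(free_pos)):
            x = [None] * n
            for p, b in zip(fixed_pos, fixed):
                x[p - 1] = b
            for p, b in zip(free_pos, free):
                x[p - 1] = b
            cube.add(tuple(x))
        family.append(cube)
    return family
```

For n = 4 and I = {1, 2}, the family contains 2^2 = 4 disjoint 2-subcubes of 4 vertices each, whose union is all of {0,1}^4.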
Step 1. At time unit 2, at least n−k+1 k-subcubes of H_n have an informed vertex.

Let s = 00...0 be the originator of the broadcast. At least one neighbour of s receives the message at time unit 1. Without loss of generality, let us assume that the informed neighbour of s at time 1 is the vertex s' = s^(1) = 10...0. At time unit 2 at least n−1 vertices among s^(2), ..., s^(n), s'^(2), ..., s'^(n) are informed. Among such informed vertices let us choose arbitrary n−k−1 ones,

s^(ℓ_1), ..., s^(ℓ_i), s'^(ℓ_{i+1}), ..., s'^(ℓ_{n−k−1}).

Define P = {1} ∪ {ℓ_1, ..., ℓ_i, ℓ_{i+1}, ..., ℓ_{n−k−1}}. Notice that some of these indices may be equal. Let {m_1, ..., m_r} ⊆ {1, ..., n} \ P be such that the set Q = P ∪ {m_1, ..., m_r} has cardinality exactly n−k. Consider now H_Q, containing 2^{n−k} vertex-disjoint k-subcubes of H_n. Recall that each k-subcube in H_Q is obtained by fixing the coordinates indexed by the set Q with some (n−k)-tuple of bits. It is immediate to see that no pair of vertices among

s, s', s^(ℓ_1), ..., s^(ℓ_i), s'^(ℓ_{i+1}), ..., s'^(ℓ_{n−k−1})   (5.1)

has the same n−k bits in the positions with indices in Q. Therefore, each of the n−k+1 informed vertices in (5.1) belongs to a different k-subcube of H_n.
Step 2. At time unit k+3 at least one whole k-subcube of H_Q is informed.

Recall that at time unit 2 there are n−k+1 k-subcubes in H_Q, each of them containing an informed vertex. Let v_1, ..., v_{n−k+1} be such vertices. Suppose, by contradiction, that the above claim is false, that is, no k-subcube of H_Q is totally informed at time k+3. In particular, this means that at time unit k+3 it is possible to find n−k+1 uninformed vertices y_1, ..., y_{n−k+1}, where each y_i belongs to the same k-subcube as v_i, for i = 1, ..., n−k+1. There are k edge-disjoint paths between nodes v_i and y_i in the same k-subcube, of total length at most k² (see Fact 3). Therefore, the total number of edges the information has to cross from the v_i's in order to be sure that one of the y_i receives it is (n−k+1)k² − (n−k+1)k + 1. At time 2+t the information has crossed at least t((n−k+1)k − (n−1)) edges; to be sure that at least one of the y_i is informed, we must have t((n−k+1)k − (n−1)) ≥ (n−k+1)k² − (n−k+1)k + 1, from which we get that

t = ⌈((n−k+1)k² − (n−k+1)k + 1) / ((n−k+1)k − (n−1))⌉ = k + ⌈((k−2)k + 1) / ((n−k+1)k − (n−1))⌉ = k+1, for n ≥ 3,

time units suffice. Therefore, by time unit t+2 = k+3 one of the y_i's is surely informed, contradicting the fact that none of the y_i's is informed.
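The ceiling computation above can be checked mechanically. The sketch below (helper name mine) evaluates the exact expression and confirms that it equals k + 1 = ⌈n/2⌉ + 1 for all n ≥ 3:

```python
def step2_rounds(n):
    """t = ceil(((n-k+1)k^2 - (n-k+1)k + 1) / ((n-k+1)k - (n-1))), k = ceil(n/2)."""
    k = (n + 1) // 2                               # k = ceil(n/2) for integer n
    num = (n - k + 1) * k * k - (n - k + 1) * k + 1
    den = (n - k + 1) * k - (n - 1)
    return (num + den - 1) // den                  # integer ceiling of num/den
```

For example, n = 5 gives k = 3 and t = ⌈19/5⌉ = 4 = k + 1.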
Step 3. At time unit n+3, in each k-subcube of H_Q there are at most n−1 uninformed vertices.

The previous result says, in other terms, that at time unit k+3 there is an informed vertex in each of the 2^k (n−k)-subcubes in H_R, where R = {1, ..., n} \ Q. Fix a k-subcube H' in H_Q and observe that H' intersects any (n−k)-subcube in H_R in exactly one vertex. Let us now count the minimum number of faulty transmissions that must occur in an (n−k)-subcube H_{n−k} ∈ H_R to be sure that the only vertex v in common between H_{n−k} and H' is not informed at time unit n+3. From Fact 3 only two cases can occur: 1) all n−k edge-disjoint paths between the informed vertex in H_{n−k} and v have length at most n−k; 2) one path has length n−k+1 and the remaining n−k−1 have length at most n−k−1. In case 1), to be sure that v is not informed at time n+3, at least n−k faulty transmissions must have occurred in H_{n−k} between time units k+4 and n+3. In case 2), the number of faults must have been at least 2(n−k−1) ≥ n−k for all n ≥ 4. In all cases, to be sure that the common vertex between H_{n−k} and H' is not informed at time unit n+3, at least n−k faulty transmissions must have occurred in H_{n−k} between time units k+4 and n+3. Since the total number of faulty calls that may have occurred between times k+4 and n+3 is at most (n−1)(n−k), we can conclude that at most n−1 nodes in H' can still be uninformed at time n+3. The trivial cases n = 2 and n = 3 will be handled later on.
Step 4. At time unit n+7 all vertices in H_n are informed.

Let us consider an arbitrary k-subcube H_k ∈ H_Q and a node v in H_k that is uninformed at time unit n+3. First, we want to prove that at time unit n+4 at least one neighbour of v is informed. Let us consider only the uninformed v's in H_k that have all neighbours uninformed at time n+3, otherwise there is nothing to prove. Vertex v has (1/2)⌈n/2⌉(⌈n/2⌉ − 1) vertices at distance 2 in H_k; since we know from the previous step that the number of uninformed vertices in H_k is at most n−1, and we have assumed v and all its neighbours to be uninformed, the number of informed vertices at distance 2 from v is at least

(1/2)⌈n/2⌉(⌈n/2⌉ − 1) − ((n−1) − (⌈n/2⌉ + 1)) = (1/2)⌈n/2⌉² + (1/2)⌈n/2⌉ − n + 2.   (5.2)

Each of these informed vertices can call two neighbours of v; since the number of calls that can fail is n−1, we have that the number of surviving calls is at least 2((1/2)⌈n/2⌉² + (1/2)⌈n/2⌉ − n + 2) − (n−1), which is at least 1 for n ∉ {4, 6}. We can then conclude that at time n+4 at least one neighbour of v is informed. Therefore, we have proved that all uninformed vertices in H_n have at least one informed neighbour. Since the number of faulty calls in the successive time instant n+5 is at most n−1, we have that at most n−1 vertices will remain uninformed. Using Fact 2 we can conclude that at time instant n+7 all vertices of H_n are informed. To prove the theorem also in the cases n = 2, 3, 4, 6, we observe that from Proposition 2 of [54] one has T(n) ≤ 2n, therefore T(n) ≤ n+7 also in case n ∈ {2, 3, 4, 6}.

To prove the lower bound, let us consider two nodes of H_n, say x and y, such that d_H(x,y) = n, and consider two neighbours x^(i) and y^(j) of x and y, respectively. Let us assume that x^(i) is the originator of the broadcast. It is easy to see that if the set of failures is concentrated around x^(i), leaving fault-free only the transmission from x^(i) to x, and this holds during all time steps but the last one, where the set of failures is concentrated around y^(j), leaving fault-free only the transmission from y to y^(j), then at least n+2 time units are necessary. □
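The count (5.2) and the exceptional role of n = 4 and n = 6 can be verified numerically (helper name mine):

```python
def surviving_calls(n):
    """Lower bound on the calls toward v that survive the n-1 faults:
    2 * (count (5.2)) - (n - 1), with k = ceil(n/2)."""
    k = (n + 1) // 2
    informed_at_dist2 = k * (k - 1) // 2 - ((n - 1) - (k + 1))   # (5.2)
    return 2 * informed_at_dist2 - (n - 1)
```

The value is at least 1 for every n ≥ 2 except n = 4 and n = 6, exactly the cases handled separately via Proposition 2 of [54].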
5.3 Broadcasting in the Star Graph
Let Σ_n be the set (group) of all permutations of the symbols 1, 2, ..., n. Given u = u_1...u_n ∈ Σ_n and i ∈ {2, ..., n}, denote by u⟨i⟩ = u_i ... u_1 ... u_n the element of Σ_n obtained from u by exchanging the first symbol u_1 with the i-th symbol u_i. The n-star graph S_n has vertex set V(S_n) = Σ_n and edge set E(S_n) = {(u, u⟨i⟩) : u ∈ Σ_n, 2 ≤ i ≤ n}. S_n has n! vertices, is a Cayley graph, is regular of degree n−1, is both edge- and vertex-symmetric, has diameter d(S_n) equal to ⌊3(n−1)/2⌋, and has edge- and vertex-connectivity equal to n−1 (see [10, 9, 7, 27, 99]). We shall make use of the following result.
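The stated parameters of S_n are easy to confirm for small n. The sketch below (construction and names are mine) builds S_n from the definition and computes its diameter by breadth-first search from a single vertex, which suffices by vertex symmetry:

```python
from collections import deque
from itertools import permutations

def star_graph(n):
    """S_n: vertices are the permutations of 1..n; u is adjacent to u<i>, the
    permutation obtained by exchanging the first symbol with the i-th, 2 <= i <= n."""
    V = list(permutations(range(1, n + 1)))
    adj = {u: [u[i - 1:i] + u[1:i - 1] + u[:1] + u[i:] for i in range(2, n + 1)]
           for u in V}
    return V, adj

def eccentricity(adj, src):
    """Maximum BFS distance from src; by vertex symmetry, the diameter of S_n."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return max(dist.values())
```

For n = 4: 4! = 24 vertices, degree 3, and diameter ⌊3·3/2⌋ = 4; for n = 5 the diameter is ⌊3·4/2⌋ = 6.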
Fact 4 [27] For any two vertices u, v ∈ V(S_n), there exist n−1 edge-disjoint paths between u and v of total length at most (n−3)⌊3n/2⌋ + 2n.
Let T(n) be the minimum possible number of time units necessary to broadcast in S_n under the assumptions of the shouting model and that at each time instant at most n−2 calls may fail. T(n) is obviously lower bounded by the diameter of S_n. The following theorem is the main result of this section.

Theorem 12

d(S_n) + 2 ≤ T(n) ≤ d(S_n) + 11.

Proof. Given the n-star S_n and an integer i, 1 ≤ i ≤ n, let us denote by i_n the (n−1)-substar of S_n induced by all (n−1)! vertices of S_n having the symbol i in the n-th position. The n! nodes of the n-star graph S_n can be partitioned among the n different (n−1)-substars 1_n, 2_n, ..., n_n. Let Id = 12...n be the identity permutation. We assume that Id is the originator of the broadcast. We shall prove the theorem through a sequence of steps.
Step 1. At time instant 4 at least 2 + ⌈(n−2)/2⌉ (n−1)-substars have an informed vertex.

At time instant 1 at least one neighbour of Id receives the message from Id. Without loss of generality, let us assume that Id⟨n⟩ is such a node. For any (n−1)-substar i_n, 2 ≤ i ≤ n−1, there exist two disjoint paths of length 2

Id → Id⟨i⟩ → Id⟨i⟩⟨n⟩  and  Id⟨n⟩ → Id⟨n⟩⟨i⟩ → Id⟨n⟩⟨i⟩⟨n⟩   (5.3)

whose terminal vertices belong to i_n. Therefore, in order to prevent a generic (n−1)-substar i_n from having an informed vertex at time 4, it is necessary that in at least one time instant between 2 and 4 both paths in (5.3) be affected by faults. Since the number of faults in any time instant is at most n−2, we have that at time unit 4 at least ⌈(n−2)/2⌉ (n−1)-substars receive the information through the paths in (5.3). Considering also the (n−1)-substars to which the informed vertices Id and Id⟨n⟩ belong, the claim is proved.
Step 2. At time instant ⌊3(n−1)/2⌋ + 5 at least one whole (n−1)-substar of S_n is informed.

Consider the 2 + ⌈(n−2)/2⌉ (n−1)-substars that we know have an informed vertex at time 4. We want to prove that at least one of them is totally informed at time unit ⌊3(n−1)/2⌋ + 5. Suppose not, that is, it is possible to find 2(2 + ⌈(n−2)/2⌉) vertices x_i and y_i, i = 1, ..., 2 + ⌈(n−2)/2⌉, such that all the x_i's are informed at time 4, all the y_i's are uninformed at time ⌊3(n−1)/2⌋ + 5, and each pair {x_i, y_i} belongs to the same (n−1)-substar. We know that between each x_i and y_i there are n−2 edge-disjoint paths, of total length at most (n−4)⌊3(n−1)/2⌋ + 2(n−1) (see Fact 4). By a reasoning analogous to that of Step 2 of Theorem 11, it is easy to see that after

4 + ⌈((⌈(n−2)/2⌉ + 2)[(n−4)⌊3(n−1)/2⌋ + 2(n−1)] − (⌈(n−2)/2⌉ + 2)(n−2) + 1) / ((⌈(n−2)/2⌉ + 2)(n−2) − (n−2))⌉ ≤ ⌊3(n−1)/2⌋ + 5

time units at least one of the y_i will be informed. The obtained contradiction proves the claim.
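The displayed inequality can be checked mechanically for concrete n (helper name mine):

```python
def star_step2_bound(n):
    """LHS of the displayed inequality: 4 plus the ceiling of the fraction,
    with m = ceil((n-2)/2) + 2 pairs (x_i, y_i) and Fact 4's total path length."""
    m = (n - 1) // 2 + 2                                  # ceil((n-2)/2) + 2
    total_len = (n - 4) * (3 * (n - 1) // 2) + 2 * (n - 1)  # Fact 4 bound
    num = m * total_len - m * (n - 2) + 1
    den = m * (n - 2) - (n - 2)
    return 4 + (num + den - 1) // den                     # integer ceiling
```

For every n in a reasonable range, the value does not exceed ⌊3(n−1)/2⌋ + 5; for example, n = 15 gives 26 on both sides.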
Step 3. At time instant ⌊3(n−1)/2⌋ + 7 at most (n−1)(n−2) vertices of S_n are uninformed.

Without loss of generality, we assume that 1_n is the (n−1)-substar informed at time ⌊3(n−1)/2⌋ + 5. Given i, j ∈ {1, ..., n}, i ≠ j, let us denote by i∗j a generic permutation of the symbols 1, 2, ..., n that has symbol i in the first position and symbol j in the last position. Clearly, for any pair i and j there are (n−2)! permutations of this kind. To prove the claim, we first partition the node set of S_n into three disjoint sets: L_1, L_2, and L_3. Set L_1 contains all nodes of the (n−1)-substar 1_n. Set L_2 contains all permutations of kind 1∗2, 1∗3, ..., 1∗n. Set L_3 contains all permutations of kind (1∗i)⟨j⟩, for 2 ≤ i ≤ n and 2 ≤ j ≤ n−1. It is clear that L_1, L_2, and L_3 are pairwise disjoint. Moreover, |L_1| = (n−1)!, |L_2| = (n−1)(n−2)!, and |L_3| = (n−1)(n−2)(n−2)!; therefore |L_1| + |L_2| + |L_3| = n! and thus they constitute a partition of the node set of S_n. Let us now consider the edges among the sets L_1, L_2, and L_3. We first notice that each node 1∗i ∈ L_2 is adjacent to the node i∗1 ∈ L_1 = 1_n, for any i, 2 ≤ i ≤ n. Moreover, each node (1∗i)⟨j⟩ ∈ L_3 is adjacent to node 1∗i ∈ L_2, but no node in L_3 is adjacent to any node in L_1. Recalling that at time unit T = ⌊3(n−1)/2⌋ + 5 all vertices in L_1 = 1_n are informed and all nodes in L_2 are adjacent to vertices in L_1, we get that at time instant T+1 at most n−2 vertices in L_2 are uninformed. Since any vertex in L_2 is adjacent to n−2 vertices in L_3, we obtain that at time unit T+2 the total number of uninformed vertices in S_n is at most (n−2)² + (n−2) = (n−1)(n−2).
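Since the position of the symbol 1 determines the class (last position: L_1; first position: L_2; interior position: L_3), the partition and the three cardinalities can be verified by brute force for small n (sketch and names mine):

```python
from itertools import permutations

def star_partition(n):
    """Sizes of L1 (symbol 1 in the last position, the substar 1_n),
    L2 (symbol 1 first) and L3 (symbol 1 in an interior position)."""
    L1 = L2 = L3 = 0
    for u in permutations(range(1, n + 1)):
        if u[-1] == 1:
            L1 += 1
        elif u[0] == 1:
            L2 += 1
        else:
            L3 += 1
    return L1, L2, L3
```

For n = 5 this gives (24, 24, 72), i.e. (n−1)!, (n−1)(n−2)! and (n−1)(n−2)(n−2)!, summing to 5! = 120.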
Step 4. At time instant ⌊3(n−1)/2⌋ + 11 all vertices of S_n are informed.

Let us consider time unit T+2 and an uninformed vertex v of S_n. Vertex v has n + (n−1)(n−2) vertices at distance at most 2 (including v itself). Since we know that at time instant T+2 at most (n−1)(n−2) vertices of S_n are uninformed, we can conclude that there are at least n informed vertices at distance at most 2 from v; therefore, at time T+3 at least one neighbour of v is informed. Since any uninformed vertex v has at least one informed neighbour, we can say that at time T+4 a total of at most n−2 vertices in S_n will still be uninformed. We now prove that 2 additional time instants are sufficient to inform all nodes in S_n. We prove this last claim by contradiction. Suppose that at time T+6 there exists an uninformed vertex x. Since the number of faulty calls is at most n−2, we know that at time T+5 there exists a neighbour y of x that is also uninformed. The set of neighbours of x is disjoint from the set of neighbours of y (since S_n is bipartite). Therefore, the union of these two sets contains 2(n−1) elements. Since we have already proved that at time T+4 at most n−2 vertices were uninformed, we also have that at time T+4 at least 2(n−1) − (n−2) = n neighbours of either x or y are informed. This contradicts the fact that at time T+5 both x and y are uninformed.

To conclude the proof of the theorem, we just notice that in the cases n = 4 and n = 6 it is easy to prove directly that T(4) ≤ d(S_4) + 11 and T(6) ≤ d(S_6) + 11. The proof of the lower bound is similar to the one given in Theorem 11. □
Chapter 6
Summary and open problems
In Chapter 2, we have studied the problem of deterministic distributed coloring of an n-vertex graph with maximum degree Δ. We have given the first known deterministic distributed algorithm, relying only on partial knowledge accessible to vertices, which finds an O(Δ) vertex-coloring faster than in polylogarithmic time (precisely, the execution time is O(log_Δ(n/Δ))). Our methods also give an edge-coloring algorithm with the same number of colors and the same running time. In addition, we have shown how to obtain fast coloring algorithms in the one-way communication model.

Being one of the fundamental graph problems, graph coloring has applications in many algorithmic problems on communication networks (see Subsection 3.2.4 of Chapter 3 for an interesting application to broadcasting in the one-way model).

Many interesting problems remain open in this area. It is well known how to deterministically find a maximal independent set (MIS) in time log* n for graphs of bounded degree [62]. No polylog-time algorithm has been discovered for the unbounded-degree case, nor has anyone yet proved a superpolylogarithmic lower bound. Results for the unbounded-degree case can be found in [3, 88]. It would be a significant improvement over these studies to determine whether a polylog-time algorithm exists for unbounded degrees.

Our vertex-coloring algorithm achieves, as an intermediate step, an O(Δ log_Δ(n/Δ)) coloring in two time units. This is equivalent to saying that an O(Δ log_Δ(n/Δ)) coloring can be found in one time unit relying on knowledge radius two. While there is no room for further significant improvements for knowledge radius 1 (see [86, 100]), it is not clear what the situation is for knowledge radius 2; in particular, is our O(Δ log_Δ(n/Δ)) coloring the best we can achieve?
In Chapter 3, we have studied the impact of the amount of knowledge accessible to nodes on the broadcasting time. We have considered both the one-way model and the radio model. For the one-way model, we have considered different knowledge radii; moreover, we have assumed that, apart from the knowledge radius, each node also knows the maximum degree Δ and the number n of nodes of the network. In order to broadcast in this model, while for knowledge radius 0 there is nothing to do but to cross every edge of the graph, for knowledge radius 1 we have given a broadcasting algorithm working in time O(min(n, D²Δ)), and showed that for bounded maximum degree Δ this algorithm is asymptotically optimal. It would be interesting to know what the situation is for unbounded-degree graphs. As for knowledge radius 2, we have showed how to broadcast in time O(DΔ log n) and proved a lower bound Ω(DΔ) on broadcasting time, when DΔ ∈ O(n). This lower bound is valid for any constant knowledge radius. It would be interesting to find matching upper and lower bounds for every D and Δ.

Concerning the radio model, we have studied the broadcasting time in arbitrary n-node radio networks in which each node knows nothing more than its own label. Under this model, the authors of [28] constructed a deterministic broadcasting algorithm working in time O(n^{11/6}). In Section 3.3, we have presented an improved deterministic broadcasting algorithm requiring time O(n^{5/3}(log n)^{1/3}). The fastest currently known algorithm for this task is the broadcasting algorithm from [30], working in time O(n(log n)²); it is natural to ask whether the algorithm of [30] is the best possible.
In Chapter 4, we have considered the problem of concurrent multicast (CM) from a set of sources S to a set of destinations D in a network modelled as a graph H = (V,E) with edge costs. We have given a characterization of minimum-cost algorithms for CM. Precisely, we have shown that the following simple two-phase protocol is, surprisingly, the one achieving the minimum communication cost: construct in H a Steiner tree T with terminal nodes equal to the set S ∪ {v}, for some v ∈ V; by transmitting over the edges of T, one can accumulate all the blocks of the source nodes in v; then, by using another Steiner tree T' with terminal nodes equal to D ∪ {v}, one can broadcast the information held by v to all nodes in D, thus completing the CM. Since determining an optimal solution for the CM problem is in general NP-hard for arbitrary S and D (while the problem is solvable in polynomial time for S = D = V [105]), we have provided an approximate-cost polynomial-time distributed algorithm for the CM problem both in the case (a) when the cost of the transmission of a message is independent of the number of blocks composing it, and in the case (b) when the message transmissions must consist of one block of data at a time. We have also analyzed the completion time of our CM algorithms. We have proved that the completion time of our algorithms is a factor of |V| and of |V| + |S| away from the optimum in cases (a) and (b), respectively.

It remains an interesting open problem to give a characterization of optimal communication complexity (the product of communication cost and completion time).
References
[1] N. Alon, A. Bar-Noy, N. Linial and D. Peleg, A lower bound for radio broadcast, Journal of Computer and System Sciences 43 (1991), 290-298.
[2] E. Aharoni, R. Cohen, "Restricted Dynamic Steiner Trees for Scalable Multicast in Datagram Networks", IEEE/ACM Transactions on Networking, 6, (1998), 286-297.
[3] B. Awerbuch, A. Goldberg, M. Luby and S. Plotkin, Network decomposition and locality in distributed computation, in Proc. 30th IEEE Symp. on Foundations of Computer Science, 1989, pp. 364-369.
[4] B. Awerbuch, O. Goldreich, D. Peleg and R. Vainish, A tradeoff between information and communication in broadcast protocols, Journal of the ACM 37, (1990), 238-256.
[5] S. Akl, K. Qiu, and I. Stojmenović, "Fundamental Algorithms for the Star and Pancake Interconnection Networks with Applications to Computational Geometry", NETWORKS, 23, (1993), 215-225.
[6] S. Akl and K. Qiu, "A Novel Routing Scheme on the Star and Pancake Networks and its Applications", Parallel Computing 19, (1993), 95-101.
[7] S. B. Akers, D. Harel and B. Krishnamurthy, "The Star Graph: An Attractive Alternative to the n-cube", Proc. Intl. Conf. on Parallel Processing, 1987, 393-400.
[8] S. B. Akers and B. Krishnamurthy, "The Fault Tolerance of Star Graphs", 2nd Intl. Conf. on Supercomputing, San Francisco, May 1987, 270-276.
[9] S. B. Akers and B. Krishnamurthy, "On Group Graphs and their Fault-Tolerance", IEEE Trans. on Comp., Vol. 36, 7, (1987), 885-888.
[10] S. B. Akers and B. Krishnamurthy, "A Group Theoretic Model for Symmetric Interconnection Networks", IEEE Trans. on Comp. 38, 4, (1989), 555-566.
[11] S. Albers and M. R. Henzinger, Exploring unknown environments, Proc. 29th Symp. on Theory of Computing (1997), 416-425.
[12] B. Awerbuch, A. Baratz, D. Peleg, "Cost-Sensitive Analysis of Communication Protocols", Proceedings of 9th Annual ACM Symp. on Principles of Distributed Computing (PODC'90), (1990), 177-187.
[13] B. Awerbuch, A new distributed depth-first-search algorithm, Information Processing Letters 20 (1985), 147-150.
[14] P. Berman and C. Coulston, "On-line algorithms for Steiner Tree Problems", Proceedings of the Twenty-ninth Annual ACM Symposium on Theory of Computing STOC'97, (1997), 344-353.
[15] P. Berthome, A. Ferreira, and S. Perennes, "Decomposing Hierarchical Cayley Graphs, with Applications to Information Dissemination and Algorithm Design", in Proc. Fifth IEEE Symp. on Parallel and Distr. Comput., 1993, 720-723.
[16] R. Bar-Yehuda, O. Goldreich, and A. Itai, On the time complexity of broadcast in multi-hop radio networks: An exponential gap between determinism and randomization, Journal of Computer and System Sciences 45 (1992), 104-126.
[17] A. Bar-Noy, S. Guha, J. Naor, and B. Schieber, "Multicasting in Heterogeneous Networks", Proceedings of Thirtieth Annual ACM Symposium on Theory of Computing STOC '98, (1998), 448-453.
[18] J.-C. Bermond, L. Gargano, and S. Perennes, "Sequential Gossiping by Short Messages", Discr. Appl. Math., 86 (1998), 145-155.
[19] J.-C. Bermond, L. Gargano, A. Rescigno, and U. Vaccaro, "Fast Gossiping by Short Messages", SIAM J. on Computing, 27 (1998), 917-941.
[20] R. Bar-Yehuda, A. Israeli, and A. Itai, Multiple communication in multihop radio networks, SIAM J. on Computing 22 (1993), 875-887.
[21] D. Bruschi and M. Del Pinto, Lower bounds for the broadcast problem in mobile radio networks, Distr. Comp. 10 (1997), 129-135.
[22] A. Bagchi, E. F. Schmeichel, and S. L. Hakimi, "Parallel Information Dissemination by Packets", SIAM J. on Computing, 23 (1994), 355-372.
[23] D. P. Bertsekas and J. N. Tsitsiklis, Parallel and Distributed Computation: Numerical Methods, Prentice-Hall, Englewood Cliffs, NJ, 1989.
[24] F. Bauer, A. Varma, "Aries: a Rearrangeable Inexpensive Edge-Based On-Line Steiner Algorithm", Proceedings of INFOCOM'96, 361-368.
[25] A. Bar-Noy and S. Kipnis, Designing broadcasting algorithms in the postal model for message passing systems, Proc. 5th Ann. ACM Symp. on Par. Alg. and Arch. (1992), 11-22.
[26] B. S. Chlebus, K. Diks, and A. Pelc, "Broadcasting in Synchronous Networks with Dynamic Faults", NETWORKS, to appear.
[27] K. Day and A. Tripathi, "A Comparative Study of Topological Properties of Hypercubes and Star Graphs", IEEE Trans. on Parallel and Distr. Syst., 5, 1, (1994), 31-38.
[28] B. S. Chlebus, L. Gąsieniec, A. Gibbons, A. Pelc and W. Rytter, Deterministic broadcasting in unknown radio networks, Proc. 11th Ann. ACM-SIAM Symp. on Discrete Algorithms, SODA'2000, 861-870.
[29] B. S. Chlebus, L. Gąsieniec, A. Östlin and J. M. Robson, Deterministic radio broadcasting, Proc. 27th Int. Coll. on Automata, Languages and Programming (ICALP'2000), July 2000, Geneva, Switzerland, LNCS 1853, 717-728.
[30] M. Chrobak, L. Gąsieniec and W. Rytter, Fast broadcasting and gossiping in radio networks, Proc. 41st Symposium on Foundations of Computer Science (FOCS'2000), to appear.
[31] I. Chlamtac and S. Kutten, On broadcasting in radio networks - problem analysis and protocol design, IEEE Transactions on Communications 33 (1985), 1240-1246.
[32] R. Cole and U. Vishkin, Deterministic coin tossing and accelerating cascades: micro and macro techniques for designing parallel algorithms, in Proc. 18th Symposium on Theory of Computing, 1986, pp. 206-219.
[33] D. Z. Du and F. H. Hwang, Combinatorial group testing and its applications, World Scientific, Singapore, 1993.
[34] K. Diks, E. Kranakis, D. Krizanc and A. Pelc, The impact of knowledge on broadcasting time in radio networks, Proc. 7th Annual European Symposium on Algorithms, ESA'99, Prague, Czech Republic, July 1999, LNCS 1643, 41-52.
[35] M. Dietzfelbinger, S. Madhavapeddy, and I. H. Sudborough, "Three Disjoint Path Paradigms in Star Networks", Third IEEE Symp. on Parallel and Distr. Comput., 1991, 400-406.
[36] G. De Marco and A. Pelc, Fast distributed graph coloring with O(Δ) colors, to appear in Proc. 12th Ann. ACM-SIAM Symp. on Discrete Algorithms, SODA 2001.
[37] G. De Marco and A. Pelc, "Deterministic broadcasting time with partial knowledge of the network", to appear in Proceedings of 11th Annual International Symposium on Algorithms And Computation (ISAAC 2000).
[38] G. De Marco and A. Pelc, "Faster broadcasting in unknown radio networks", to appear in Information Processing Letters.
[39] G. De Marco and U. Vaccaro, "Broadcasting in Hypercubes and Star Graphs with Dynamic
Faults", Information Processing Letters, 66 (6) 1998, 321-326.
[40] G. De Marco and A. A. Rescigno, "Tight Bounds on Broadcasting with Dynamic Faults", in
Proceedings of Sixth Italian Conference On Computer Science, Prato (Italy), P. Degano, U.
Vaccaro and G. Pirillo (Eds.), pp. 52-64, World Scientific, 1998.
[41] G. De Marco and A. A. Rescigno, "Tighter bounds on broadcasting in torus networks in
presence of dynamic faults", Parallel Processing Letters, 10 (1) 2000, 39-49.
[42] G. De Marco, L. Gargano and U. Vaccaro, "Concurrent Multicast in Weighted Networks", to
appear in Theoretical Computer Science.
[43] G. De Marco, L. Gargano and U. Vaccaro, "Concurrent Multicast in Weighted Networks", in
Proceedings of Sixth Scandinavian Workshop on Algorithm Theory (SWAT'98), Stockholm,
Sweden, S. Arnborg and L. Ivansson (Eds.), Lecture Notes in Computer Science, vol. 1432,
pp. 127-136, Springer-Verlag, 1998.
[44] X. Deng and C. H. Papadimitriou, Exploring an unknown graph, Proc. 31st Symp. on Foun-
dations of Computer Science (1990), 356-361.
[45] S. Dobrev and I. Vrťo, "Optimal Broadcasting in Hypercubes with Dynamic Faults", Infor-
mation Processing Letters, 71 (2) 1999, 81-85.
[46] P. Erdős, P. Frankl and Z. Füredi, Families of finite sets in which no set is covered by the
union of r others, Israel J. Math., 51 (1985), 79-89.
[47] R.C. Entringer and P.J. Slater, Gossips and telegraphs, J. Franklin Institute 307 (1979), 353-
360.
[48] S. Even and B. Monien, On the number of rounds necessary to disseminate information, Proc.
1st ACM Symp. on Parallel Algorithms and Architectures, June 1989, 318-327.
[49] P. Fragopoulou and S.G. Akl, "Optimal Communication Algorithms on Star Graphs using
Spanning Tree Constructions", Journal of Parallel and Distributed Computing, 24 (1995),
55-71.
[50] P. Fragopoulou and S.G. Akl, "Edge-Disjoint Spanning Trees on the Star Network with Ap-
plications to Fault Tolerance", IEEE Transactions on Computers, 45 (1996).
[51] P. Fraigniaud and E. Lazard, Methods and problems of communication in usual networks,
Disc. Appl. Math. 53 (1994), 79-133.
[52] P. Fraigniaud, A. Pelc, D. Peleg and S. Perennes, Assigning labels in unknown anonymous
networks, Proc. 19th ACM Symp. on Principles of Distributed Computing (PODC'2000), July
2000, Portland, Oregon, U.S.A., to appear.
[53] U. Feige, D. Peleg, P. Raghavan and E. Upfal, Randomized broadcast in networks, Random
Structures and Algorithms 1 (1990), 447-460.
[54] P. Fraigniaud and C. Peyrat, "Broadcasting in a Hypercube when some Calls Fail", Information
Processing Letters, 39 (1991), 115-119.
[55] A. Frank, L. Wittie, and A. Bernstein, "Multicast Communication in Network Computers",
IEEE Software, 2, (1985), 49-61.
[56] M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of
NP-Completeness, W.H. Freeman and Company, (1979).
[57] L. Gargano, A.L. Liestman, J. Peters and D. Richards, "Reliable Broadcasting", Discrete
Applied Math., 53, (1994), 135-148.
[58] I. Gaber and Y. Mansour, Broadcast in radio networks, Proc. 6th Ann. ACM-SIAM Symp.
on Discrete Algorithms, SODA'95, 577-585.
[59] L. Gargano, A. Pelc, S. Perennes and U. Vaccaro, Efficient communication in unknown net-
works, Proc. 26th International Workshop on Graph-Theoretic Concepts in Computer Science
(WG'2000), June 2000, Konstanz, Germany, to appear.
[60] L. Gargano and A. A. Rescigno, "Communication Complexity of Fault-Tolerant Information
Diffusion", Theoretical Computer Science, 209 (1998), 195-211.
[61] L. Gargano, A. Rescigno, and U. Vaccaro, "Fault-Tolerant Hypercube Broadcasting via
Information Dispersal", NETWORKS, 23, (1993), 271-282.
[62] A. V. Goldberg, S. A. Plotkin and G. E. Shannon, Parallel Symmetry-Breaking in Sparse
Graphs, in Proc. 19th Symposium on Theory of Computing, 1987, pp. 315-324.
[63] L. Gąsieniec and A. Pelc, Broadcasting with a bounded fraction of faulty nodes, Journal of
Parallel and Distributed Computing 42 (1997), 11-20.
[64] R. Gallager, A Perspective on Multiaccess Channels, IEEE Trans. on Information Theory 31
(1985), 124-142.
[65] D.A. Grable and A. Panconesi, Nearly optimal distributed edge colouring in O(log log n)
rounds, in Random Structures & Algorithms, 1997, pp. 385-405.
[66] C. Gowrisankaran, "Broadcasting in Recursively Decomposable Cayley Graphs", Discr. Appl.
Math., 53, (1994), 171-182.
[67] S. M. Hedetniemi, S. T. Hedetniemi, and A. Liestman, "A Survey of Gossiping and Broad-
casting in Communication Networks", Networks, 18, (1988), 129-134.
[68] J. Hromkovič, R. Klasing, B. Monien, and R. Peine, Dissemination of Information in Inter-
connection Networks (Broadcasting and Gossiping), in: Ding-Zhu Du and D. Frank Hsu
(Eds.), Combinatorial Network Theory, Kluwer Academic Publishers, 1995, pp. 125-212.
[69] S. E. Hambrusch, A. A. Khokhar, and L. Yi, "Scalable S-To-P Broadcasting on Message-
Passing MPPs", IEEE Transactions on Parallel and Distributed Systems, 9(8), (1998).
[70] S. Hougardy and H. J. Prömel, "A 1.598 Approximation Algorithm for the Steiner Problem in
Graphs", Proceedings of Tenth ACM-SIAM Symp. on Discrete Algorithms (SODA'99), Jan.
1999, to appear.
[71] F. K. Hwang, D. S. Richards, and P. Winter, The Steiner Tree Problem, vol. 53, Annals of
Discrete Mathematics. North-Holland, The Netherlands, 1992.
[72] F. K. Hwang and V. T. Sós, Non-adaptive hypergeometric group testing, Studia Scient. Math.
Hungarica 22 (1987), 257-263.
[73] M. Imase and B.M. Waxman, "Dynamic Steiner Tree Problem", SIAM J. Discr. Math., 4
(1991), 369-384.
[74] D.W. Krumme, G. Cybenko and K.N. Venkataraman, Gossiping in minimal time, SIAM J.
Computing 21 (1992), 111-139.
[75] J. Komlós and A. G. Greenberg, An asymptotically nonadaptive algorithm for conflict resolu-
tion in multiple-access channels, IEEE Trans. on Information Theory, IT-31 n. 2 (1985), pp.
302-306.
[76] E. Kushilevitz and Y. Mansour, An Ω(D log(N/D)) lower bound for broadcast in radio net-
works, SIAM J. on Computing 27 (1998), 702-712.
[77] V. P. Kompella, J. C. Pasquale, and G. C. Polyzos, "Multicast Routing for Multimedia Com-
munication", IEEE/ACM Transactions on Networking, 3, (1993), 286-292.
[78] G. Kortsarz and D. Peleg, "Approximating shallow-light trees", Proceedings of Eighth ACM-
SIAM Symp. on Discrete Algorithms (SODA'97), Jan. 1997.
M. Karpinski and A. Zelikovsky, "New Approximation Algorithms for the Steiner Tree Problems", Journal of Combinatorial Optimization, 1 (1997), 47-65.
[79] D.W. Krumme, Fast gossiping for the hypercube, SIAM J. Computing 21 (1992), 365-380.
[80] S. Kutten and D. Peleg, Fault-local distributed mending, Proc. 14th ACM Symposium on
Principles of Distributed Computing (1995), 20-27.
[81] H.-M. Lee and G.J. Chang, "Set-to-Set Broadcasting in Communication Networks", Discr.
Appl. Math., 40 (1992), 411-421.
[82] Q. Li, Z. Zhang, and J. Xu, "A Very Short Proof of a Conjecture Concerning Set-to-Set
Broadcasting", NETWORKS, 23 (1993), 449-450.
[83] N. Linial, Distributive algorithms - global solutions from local data, in Proc. 28th IEEE Annual
Symposium on Foundations of Computer Science, 1987, pp. 331-335.
[84] N. Linial, Locality in distributed graph algorithms, SIAM J. Computing 21 (1992), 193-201.
[85] M. Luby, Removing randomness in parallel computation without a processor penalty, in Proc.
29th IEEE Symposium on Foundations of Computer Science, 1988, pp. 162-173.
[86] A. Mayer, M. Naor and L. Stockmeyer, Local computations on static and dynamic graphs, in
Proc. 3rd Israel Symposium on Theory of Computing and Systems, 1995.
[87] M.V. Marathe, R. Ravi, R. Sundaram, S.S. Ravi, D.J. Rosenkrantz, and H.B. Hunt III,
"Bicriteria network design problems", Proceedings of the 22nd International Colloquium on
Automata, Languages and Programming (ICALP'95), July 1995, 487-498.
[88] A. Panconesi and A. Srinivasan, Improved distributed algorithms for coloring and network
decomposition problems, in Proc. ACM Symposium on Theory of Computing, ACM, New
York, 1992, pp. 581-592.
[89] A. Panconesi and A. Srinivasan, On the complexity of distributed network decomposition, in
Journal of Algorithms, 20, 1996, pp. 356-374.
[90] P. Panaite and A. Pelc, Exploring unknown undirected graphs, Proc. 9th Ann. ACM-SIAM
Symposium on Discrete Algorithms (SODA'98), 316-322.
[91] M. Szegedy and S. Vishwanathan, Locality based graph coloring, in Proc. 25th ACM Symposium
on Theory of Computing, 1993, pp. 201-207.
[92] A. Pelc, "Fault Tolerant Broadcasting and Gossiping in Communication Networks", NET-
WORKS, 28, (1996), 143-156.
[93] D. Peleg, Deterministic radio broadcast with no topological knowledge, manuscript (2000).
[94] D. Peleg, "A Note on Optimal Time Broadcast in Faulty Hypercubes", Journal of Parallel
and Distributed Computing, 26, (1995), 132-135.
[95] K. Qiu, H. Meijer, and S. G. Akl, "Decomposing a Star Graph into Disjoint Cycles", Infor-
mation Processing Letters, 39, (1991), 125-129.
[96] K. Qiu, S. G. Akl, and I. Stojmenović, "Data Communication and Computational Geome-
try on the Star and Pancake Networks", 3rd IEEE Symposium on Parallel and Distributed
Computing, TX, 1991.
[97] D. Richards and A. Liestman, "Generalizations of Broadcasting and Gossiping", NET-
WORKS, 18 (1988), 125-138.
[98] P.J. Slater, E. Cockayne and S.T. Hedetniemi, Information dissemination in trees, SIAM J.
Comput. 10 (1981), 692-701.
[99] S. Sur and P.K. Srimani, "Topological Properties of the Star Graph", Computers Math. Ap-
plic., 25, (1993), 87-98.
[100] M. Szegedy and S. Vishwanathan, Locality based graph coloring, in Proc. 25th ACM Sympo-
sium on Theory of Computing, 1993, pp. 201-207.
[101] A. S. Tanenbaum, Computer Networks, Prentice Hall, Englewood Cliffs, N.J., 1981.
[102] Y. Saad and M. H. Schultz, "Topological Properties of the Hypercube", IEEE Transactions
on Computers, 37, (1988), 867-882.
[103] V. G. Vizing, On an estimate of the chromatic class of a p-graph, Diskret. Analiz. 3 (1964),
25-30.
[104] P. Winter, "Steiner Problems in Networks: a Survey", NETWORKS, 17 (1987), 129-167.
[105] O. Wolfson and A. Segall, "The Communication Complexity of Atomic Commitment and of
Gossiping", SIAM J. on Computing, 20 (1991), 423-450.