ELSEVIER Computer Communications 21 (1998) 227-242
The Space Division Multiple Access Protocol: asynchronous implementation and performance
Lewis Barnett
Department of Mathematics and Computer Science, University of Richmond, Richmond, VA 23173, USA
Received 26 September 1997; accepted 26 September 1997
Abstract
Implementation of communication protocols based on collision resolution algorithms on broadcast bus networks requires a level of synchronization among participating stations such that idle steps in the algorithm can be globally and unambiguously detected. We present performance results for two collision resolution algorithms, the Capetanakis tree protocol and space division multiple access, a new
algorithm, using a previously described asynchronous implementation technique. Two variants of space division multiple access are discussed. One variant improves upon the mechanism for implementing the necessary synchronization. The other relaxes the requirements for a station with a new packet joining the resolution algorithm. The performance effects of these variations are examined by simulating the behavior of the protocols. The variants are shown to improve the performance of the protocol under some traffic loads. In addition, one of the
variants is shown to improve on the fairness of the original algorithm over a variety of offered loads and maximum data rates. © 1998 Elsevier Science B.V.
Keywords: Local area network; MAC protocol; Collision resolution algorithm
1. Introduction
Collision resolution algorithms (CRAs) are an alternative method for broadcast bus medium access control (MAC) with several desirable properties. These algorithms use feedback from the channel to resolve collisions, reducing the average delay and variance of delay relative to other access mechanisms. Offsetting these desirable properties is the requirement that all stations need complete information on channel history. These algorithms have usually been discussed in terms of slotted transmission channels and fixed-length packet transmission.
Protocols based on CRAs operate by initially allowing all stations on the network with a packet ready for transmission to transmit in a slot. If a collision occurs, the stations are
divided into two groups according to some globally agreed upon criterion. The stations in the “first” group are allowed
to transmit in the next slot, and if a collision results, the
group is further subdivided and the resolution proceeds recursively. When the first group has been resolved, the second group is allowed to resolve. The time required to successfully transmit all of the packets involved in the original collision is referred to as a collision resolution epoch. Stations which become ready after the beginning of an epoch must wait for the epoch to end before attempting
transmission. Several different algorithms have been proposed in the literature based on different criteria for performing the subdivision. The Capetanakis tree protocols used a uniquely assigned or randomly generated address for each station [1,2]. The first-come first-served protocol uses the arrival time of packets within an enabled transmission "window" as the subdivision criterion [3,4]. More recently, interest in packetized voice and other time-constrained applications has led to the proposal of variants of the first-come first-served protocol which take transmission deadlines into account [5].
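The recursive subdivision described above can be sketched in a few lines. The following Python model is illustrative only: the slotted channel, the integer address space, and the `resolve` helper are assumptions made for exposition, not the asynchronous implementation the paper develops later.

```python
# Illustrative sketch of address-based tree collision resolution
# (Capetanakis-style). Assumes a slotted channel and unique integer
# station addresses; a hypothetical model, not the paper's protocol code.

def resolve(ready, lo, hi, slots=None):
    """Recursively resolve the set of ready addresses in [lo, hi).

    Returns the list of slot outcomes: 'idle', 'success', or 'collision'.
    """
    if slots is None:
        slots = []
    enabled = [a for a in ready if lo <= a < hi]
    if len(enabled) == 0:
        slots.append('idle')             # nobody transmits in this slot
    elif len(enabled) == 1:
        slots.append('success')          # one transmitter: packet goes through
    else:
        slots.append('collision')        # >1 transmitter: split and recurse
        mid = (lo + hi) // 2
        resolve(ready, lo, mid, slots)   # the "first" group resolves first
        resolve(ready, mid, hi, slots)   # then the "second" group
    return slots

# Stations 0, 1 and 6 (of an 8-address space) collide in the first slot.
outcomes = resolve({0, 1, 6}, 0, 8)
print(outcomes)
```

Note the `'idle'` slot for the empty subinterval [2, 4): detecting exactly these idle steps on an unslotted bus is the problem addressed in Section 2.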
Practical application of these techniques is limited by the necessity of synchronous operation of the stations participating in the protocol. The required level of synchronization is difficult to achieve in a bus-structured local area network (LAN) using the carrier sense multiple access with collision detection (CSMA/CD) technique [6]. For example, consider the situation where the operation of a CRA results in an algorithm step in which no stations have permission to transmit. As a result, no activity indicating the outcome of the algorithm step appears on the network. How do stations decide when this "idle step" is officially over so that they can proceed with the algorithm? This question has been investigated by Towsley [7] and Molle [8]. In both cases, asynchronous operation used knowledge of the maximum
propagation delay on the network to determine the end of
“steps” in the algorithm. This technique can be problematic when counting successive idle steps if there is a significant
difference in the clock rates of the participating stations. To
avoid this potential problem, Molloy presented a mechanism for signaling the end of idle steps in a collision resolution algorithm by causing an intentional collision which is
consistently observable by all stations on the network [9].
Variable length messages were considered by Wu and
Chang [10]. Their work describes a protocol where messages are divided into fixed-length packets, with the first-come first-served algorithm being used to arbitrate control of the bus for transmission of trains of such packets.
There has been a recent increase in interest in communication protocols based on CRAs. Control applications for vehicles are one potential application area. There has also been interest in CRA-based LAN protocols tailored for distributed real-time applications [11]. CRAs also seem promising in the area of time-constrained applications, such as packetized voice [5].
In this paper, we first describe a new CRA, the space
division multiple access protocol (SDMA). This protocol
uses Molloy’s method for signaling the end of idle steps and takes advantage of information about the position of
stations on the network in its operation. We then describe two variant implementation strategies for CRAs on broadcast bus networks. Through simulation, the performance of these algorithms is shown to be competitive with Ethernet in terms of overall network utilization and much better than Ethernet in terms of the variability of delay experienced by transmitting stations for the traditional 10 Mbps CSMA/CD bus. In addition, the protocols are shown to provide fairer access to the channel than Ethernet under a variety of traffic loads. Finally, the behavior of the protocol as data rates increase from 10 Mbps to 100 Mbps is examined.

The remainder of the paper is organized as follows. First, some of the difficulties of implementing CRAs in a broadcast bus network are discussed. Next, the operation of the SDMA protocol and its variants is described. Presentation of simulation results on the performance of SDMA and its variants under various traffic loads and network configurations follows. Finally, a summary and some conclusions are presented.
2. Asynchronous implementation of collision resolution
Consider the problem of establishing the level of synchronization required by CRAs on a typical CSMA/CD bus. Such a bus has no inherent slotting and no shared control signaling. A method for implementing collision resolution algorithms on such an asynchronous (unslotted) broadcast bus was presented by Molloy [9]. The problem with implementing most CRAs on an asynchronous bus is that it is difficult for stations to agree on exactly when an algorithm step ends if it produces no activity on the network. Since there are no slot boundaries on an asynchronous bus, some other mechanism must be employed to signal the end of such an idle step in the protocol. The method presented by Molloy requires that when a silent period equal in duration to twice the maximum propagation delay of the network is observed, some subset of the stations on the network must transmit. This subset must consist of at least two stations. This transmission is intended to produce a collision which signals to all stations on the network that an idle step has occurred and is now over. We refer to a collision of this type as an idle marking collision. This collision is visible to all stations on the network and acts as a signal to proceed with the next step of the algorithm. The most easily identified subset to participate in the idle marking collision is the set of all stations on the network. Choosing an alternative subset for this purpose is discussed in Section 3.2.
This mechanism allows CRAs to be implemented on unslotted broadcast bus networks without the addition of special hardware such as globally accessible clocks or separate control lines. There is therefore no restriction (other than those imposed by the CSMA/CD mechanism itself) on the lengths of packets. Slotted media also impose additional latency for the initial transmission attempt, since a station must wait for the next slot boundary before attempting transmission. This additional latency is eliminated in protocols using Molloy's method, improving the low-load performance of these algorithms.
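Molloy's idle-marking rule, as described above, reduces to a simple per-station decision. The sketch below is a hedged illustration: the propagation-delay constant and the `station_action` helper are invented for exposition, not taken from the paper.

```python
# Sketch of Molloy-style idle-step signaling as seen by one station.
# Times are in seconds; MAX_PROP is an assumed maximum one-way
# propagation delay, not a value from the paper.

MAX_PROP = 25.6e-6          # assumed maximum propagation delay
IDLE_WINDOW = 2 * MAX_PROP  # silence this long means the step was idle

def station_action(now, last_activity, in_marking_subset):
    """Decide what a station does after observing the channel.

    Once the channel has been silent for the full idle window, stations
    in the marking subset transmit to force the idle-marking collision;
    everyone else just advances to the next algorithm step.
    """
    if now - last_activity >= IDLE_WINDOW:
        return 'transmit-marker' if in_marking_subset else 'advance-step'
    return 'wait'

print(station_action(100e-6, 40e-6, True))    # long silence, in subset
print(station_action(100e-6, 40e-6, False))   # long silence, not in subset
print(station_action(100e-6, 95e-6, True))    # channel recently active
```

Because at least two stations are in the marking subset, the simultaneous `'transmit-marker'` actions collide, which is exactly the globally visible signal the protocol needs.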
3. The space division multiple access protocol
The SDMA protocol is a specialization of the class of
protocols which use a unique address to perform collision
resolution, such as the Capetanakis tree protocol. The basic operation of such protocols was described in Section 1.
Protocols based on CRAs typically do not require any correlation between the address of a station and the location of the station on the network. This simplifies adding new stations to a network; it is only necessary to find a unique address to assign to the new station. However, performance improvements can result from using information about the actual positions of stations on the network in the collision resolution process. The SDMA protocol exploits one such improvement. (Various unidirectional broadcast protocols such as Expressnet [12], DQDB [13] and Hymap [14] have made use of upstream/downstream positional information. These protocols are intended for use on folded bus or dual bus layouts.)
We shall describe the original SDMA protocol and two variants which address limitations in the original and in the asynchronous implementation technique.
3.1. The original protocol
The SDMA protocol is similar to the address-based
Capetanakis protocol with one important difference: the
address space for stations corresponds directly to the physical
length of the network. If the address of the station is interpreted as a location (e.g., distance from one end of the cable), then the collision clear time can be reduced with each address space subdivision. The collision clear time is the amount of time between the detection of a collision by a station and the time at which the station is guaranteed that all signals generated by stations involved in the collision have been received.
To see why this time can be reduced under the conditions described above, consider the situation when a collision
occurs in such a network. Let L be the length of the network. When a collision occurs, the protocol divides the stations into
two subsets, one subset containing addresses less than L/2
and the other subset containing stations whose address is
greater than or equal to L/2. The protocol first enables the
stations with addresses less than L/2 to transmit at the next
opportunity. Since addresses correspond to locations, this corresponds to a physical partitioning of the cable, with all
the stations in the enabled subset on the same half of the
network. This means that the maximum propagation delay
between enabled stations will be at most half of the full propagation delay of the network. So, should another collision occur, the collision itself will be of shorter duration owing to the proximity of the colliding stations (each detects the signal of the other and aborts transmission sooner), and the subsequent channel clear time will be half what it was for
the previous step. On sparsely populated cables, further gains can be made by keeping track of the unused station positions.
If an enabled range of addresses consists wholly of unused positions, a step can be skipped.
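The geometric halving described above can be made concrete with a small calculation. The cable length and propagation speed below are illustrative assumptions, not values from the paper.

```python
# Sketch: how the maximum propagation delay among enabled SDMA stations
# shrinks with each address-space subdivision. Network length and signal
# speed are assumed example values.

PROP_SPEED = 2e8        # signal speed in cable, m/s (assumed)
NETWORK_LEN = 1000.0    # total cable length L in metres (assumed)

def max_prop_delay(depth):
    """Maximum propagation delay between enabled stations after `depth`
    subdivisions: the enabled segment is L / 2**depth long."""
    return (NETWORK_LEN / 2 ** depth) / PROP_SPEED

for depth in range(4):
    segment = NETWORK_LEN / 2 ** depth
    print(f"depth {depth}: segment {segment:7.1f} m, "
          f"max delay {max_prop_delay(depth) * 1e6:.3f} us")
```

Since the collision clear time is bounded by this delay, each subdivision halves the worst-case cost of a further collision within the enabled segment.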
There are several restrictions imposed by the SDMA
scheme. First, the protocol is biased in favor of stations with low addresses. For example, consider a situation in
which a high-address station generates a new packet just after the beginning of a collision resolution epoch. Since
the station cannot join the current resolution round, it is possible for any station with a lower address to transmit a
packet in the current round and the next round before the station with the higher address is allowed to transmit one packet. Note that this scenario can occur in any CRA which
uses addresses as the criterion for subdivision. It is simply rendered more apparent by the correlation of addresses with
station locations in SDMA. Second, stations are required to monitor the channel at all times, as in the first-come first-served protocol. This makes it more difficult for new stations to be added to a network. Third, SDMA expects the network to be a single linear cable, a significant layout limitation. To mitigate this situation, the protocol can
be adapted to handle branching layouts by mapping addresses to create a logical linear cable with length equal to the sum of the lengths of the branches. This mapping must, of course, assign addresses such that stations on dif- ferent branches are not assigned the same address. While this addresses typical layouts incorporating repeaters, there is no direct analog to SDMA on the increasingly popular
10BaseT star layout. The notion of position would have to
be replaced by the distance of stations from the hub, and
although some performance gain could result from using
this distance as the subdivision metric, the improvement
would not be as great as in the bus layout. The restriction to a traditional bus layout means that SDMA suffers from
the same limitations as do CSMA/CD protocols in terms of
the distance/bandwidth product. As bandwidth increases,
the maximum distance of an SDMA network would have
to decrease to maintain a constant minimum packet size. These limitations restrict the range of applications for
which SDMA is appropriate. There are obvious difficulties
involved in relocating stations on a network, so the best use
of SDMA would be in applications where the locations of
stations are fixed. Such applications might involve networks
of computing elements, device controllers or intelligent sensors embedded in structures or vehicles.
3.2. SDMA variants
We now describe two variants of the SDMA protocol
which result in improvements in the overall performance
of the protocol. The first, SDMA with restricted topology (SDMA-RT), requires that the stations on the network be
evenly spaced with no gaps. This allows the size of the set of stations participating in an idle marking collision to be
reduced under some circumstances, resulting in better overall throughput. The second, SDMA with relaxed entry
(SDMA-RE), gains some performance improvement by allowing stations to join collision resolution epochs while
they are in progress, rather than forcing newly ready stations to wait for the end of the current epoch before joining the
algorithm. In the discussion that follows, we make use of
several new terms.
Definition 1: A ready station is a station which is currently attempting to transmit a packet.

Definition 2: The currently enabled segment of the network is the portion of the network from which transmissions are allowed during the current step of the algorithm.

Definition 3: An enabled station is a station whose address falls within the currently enabled segment.

Definition 4: A regularly spaced network is a network in which stations are placed at regular intervals with no gaps in the pattern.
3.2.1. SDMA with restricted topology
The original SDMA protocol uses the idle marking
collision technique with all stations on the network partici- pating in each idle marking collision. While it is possible to identify smaller subsets of stations to fulfil this role at each idle step, the subset must be chosen carefully. It is not sufficient to require that stations in the currently enabled segment of the network transmit, because this may result in an empty subset. To see how this could occur, consider
Fig. 1. A collision resulting in an "empty" enabled network segment.
Fig. 1. The horizontal line is the cable, with the numbered
tick marks representing stations. The shaded chevrons
depict the propagation of signals originating from some of the stations in space (the horizontal axis) and time (the
vertical axis, advancing downwards). Recall that the original SDMA protocol puts no restrictions on where a station
can be placed on the network. When the original collision
involving stations 1, 2, 3 and 4 occurs, the algorithm subdivides the network into two segments, labeled S1 and S2, with the stations in S1 enabled in the next step of the algorithm. These stations transmit and collide again, resulting in a further subdivision of S1 into S1,1 and S1,2. Stations 1 and 2 collide a third time, and are eventually resolved. When the resolution of the stations in S1,1 is completed, S1,2 is enabled. Since there are no stations in this segment of the network, no transmissions are attempted, resulting in an idle step in the algorithm. However, if only stations in the currently enabled network segment were required to participate in the idle marking collision, no such collision would be generated in this situation, and the protocol would fail, absent some fallback mechanism for dealing with such a situation.
It is possible to identify several candidate subsets to par- ticipate in the idle marking collision. We would like to keep the set as small and close together as possible to limit the duration of the idle marking collision. It would also be desirable to limit membership in the set to stations which are actively attempting to transmit. This set of stations may also be empty. This situation would occur if stations 3 and 4
in Fig. 1 were not currently trying to transmit. We have
layout, the currently enabled segment may be empty.
Given the nature of the SDMA algorithm, the smallest set
of stations that is guaranteed to be nonempty is the set of
stations from the "parent" segment of the current segment (segment S1 in the example of Fig. 1).
If we assume that the network is regularly spaced, then
we are able to use the smallest possible subset of stations for the idle marking collision, those in the currently enabled
segment of the network. Regular spacing implies that
every possible subinterval contains at least one station. If
we make no restriction on the number of stations, it is possible for the algorithm to enter a state in which the enabled interval consists of a single station which is not ready. In
such a state, the transmission by this station to mark the end
of the idle would not collide with any other transmission, producing a successful packet. Since the operation of the protocol in all cases is the same for either a recognized idle
step or a successful transmission, this situation is not of great concern. However, to avoid any potential confusion
about the destination of such a packet, we shall assume that
it is addressed to a nonexistent station. We refer to the variant of SDMA which uses this subset as SDMA with
restricted topology, or SDMA-RT.
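The subset-selection reasoning above can be sketched as follows. The position values, interval representation, and `marking_subset` helper are hypothetical illustrations, and the sketch ignores the single-ready-station corner case just discussed.

```python
# Sketch of idle-marking-subset selection for SDMA vs. SDMA-RT.
# Segments are modeled as half-open position intervals; the station
# positions are invented examples, not a configuration from the paper.

def marking_subset(positions, enabled, regularly_spaced):
    """Choose which stations transmit the idle-marking collision.

    With regular spacing (SDMA-RT) the enabled segment is guaranteed
    nonempty, so it alone can mark the idle step; otherwise all stations
    participate, as in the original SDMA.
    """
    lo, hi = enabled
    in_segment = [p for p in positions if lo <= p < hi]
    if regularly_spaced:
        return in_segment
    return list(positions)  # original SDMA: everyone participates

# Irregular layout: the enabled segment (2, 3) holds no station, so an
# enabled-only rule would yield an empty set; all stations must transmit.
irregular = [0.5, 1.0, 3.5, 5.0]
print(marking_subset(irregular, (2, 3), regularly_spaced=False))

# Regularly spaced layout: every unit interval holds a station, so the
# enabled segment alone suffices.
regular = [0.5, 1.5, 2.5, 3.5, 4.5, 5.5]
print(marking_subset(regular, (2, 3), regularly_spaced=True))
```

The smaller SDMA-RT subset keeps the marking stations physically close together, which shortens the idle marking collision itself.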
3.2.2. SDMA with relaxed entry
Most CRAs require stations which become ready (i.e.,
have a new packet to send) while a collision resolution
epoch is in progress to wait until the current epoch completes to make their first transmission attempt. The nature of the unfairness in the SDMA protocol noted in Section 4.2 is in part attributable to this strategy: stations with lower addresses resolve sooner and, thus, are more likely to become ready again before the beginning of the next epoch. We investigated the effects of removing this restriction on joining in-progress resolution epochs. For SDMA, allowing stations to join an epoch in progress does not cause any inconsistency in the algorithm. Since ready stations in
the successfully resolved portion of the network will not be
re-enabled before the end of the current epoch, new ready stations in this part of the network will see no difference between this arrangement and the original protocol; they will wait until the next resolution epoch to transmit. However, stations in the unresolved portion of the network will
participate in the current resolution and transmit much sooner than they would have under the original protocol. This arrangement would seem to favor the stations which were penalized under the original protocol, since the resolved segment of the network grows from low addresses to high addresses. Because segments that have completed resolution are never re-enabled during the same epoch, the protocol still guarantees that no station is allowed to transmit more than once in a given epoch. We refer to this variant of the protocol as SDMA with relaxed entry or SDMA-RE.
3.3. Simulation
Three SDMA variants, Ethernet and the Capetanakis tree
protocol are compared in this paper. The Capetanakis protocol was chosen because it is the CRA which SDMA most closely resembles. The protocols were simulated using a custom simulation package called Netsim, written by the author. Netsim uses the discrete event simulation method to simulate a finite network of stations. Modeling of collisions is very detailed, taking into account the positions of all stations involved in the collision and the effects of propagation delay on the detection time of the beginning and end of collisions at all stations on the network. This point is particularly important in accurately modeling SDMA, which depends on reducing the length of collision bursts for the performance improvement it achieves over similar algorithms. Netsim also provides very flexible specification of packet length and inter-arrival time distributions, allowing realistic traffic loads to be simulated. Netsim is described in more detail in [15].
The simulation parameters specified 32 regularly spaced
stations producing identical traffic loads. We consider three packet length distributions: fixed-length maximum-sized
packets (1526 bytes, which includes the packet preamble),
small fixed-length packets (76 bytes), and a discrete packet distribution abstracted from observations of the University of Richmond campus Ethernet which had a mean packet
length of 252 bytes. We present results in two categories: performance comparison of the protocols for 10 Mbps contention bus networks, and comparison of the protocols with capacity loads as the effective data rate of the network increases.
The discrete distribution is shown in Table 1. The measured network was a single segment of a larger campus
network, which connected 20 workstations from various vendors. One of the connected machines was a file server
(primarily used for faculty accounts) and another was a mail, Domain Name, and print server. The segment is connected to the building network with a DEC Bridge 90, and individual computers are connected via twisted pair to DEC Repeater 90Ts. Samples of 10 000 packets were made at 1 h intervals over a period of 24 h on a typical work day. The
Table 1
Discrete packet-length distribution
Bytes Percentage of packets
75 47.5
105 4.4
135 18.0
165 16.3
444 2.6
889 0.8
1065 3.4
1305 0.4
1515 6.6
average packet size observed is much smaller than that
reported in other studies. For example, Gusella reported
an average packet size of 578 bytes [16]. We attribute the
difference primarily to two factors. First, none of the computers on the measured segment were diskless, so interaction with the file server generally did not involve paging.
Second, it was discovered that the DEC networking equipment contributed a significant number of small packets used for management purposes to the traffic on the network.
These packets made up approximately 23% of the total
packets observed. The three packet-length distributions considered in
this work represent the best case for overall efficiency
(maximum-sized packets), the worst case (small packets)
and a realistic intermediate packet-length distribution. All
experiments used an exponential distribution of packet
inter-arrival times. The simulation concentrates on the behavior of the medium access mechanism, so no buffering in the stations is modeled.
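The traffic model just described (Table 1 packet lengths, exponential inter-arrivals) can be sampled as in this sketch. The mean inter-arrival time and the `next_packet` helper are assumed for illustration, not simulation parameters from the paper.

```python
# Sketch of the simulated traffic model: packet lengths drawn from the
# Table 1 discrete distribution, inter-arrival times exponential.
# The mean inter-arrival time below is an assumed example value.

import random

LENGTHS = [75, 105, 135, 165, 444, 889, 1065, 1305, 1515]    # bytes
WEIGHTS = [47.5, 4.4, 18.0, 16.3, 2.6, 0.8, 3.4, 0.4, 6.6]   # percent

def next_packet(rng, mean_interarrival):
    """Return (inter-arrival gap in seconds, packet length in bytes)."""
    gap = rng.expovariate(1.0 / mean_interarrival)
    length = rng.choices(LENGTHS, weights=WEIGHTS)[0]
    return gap, length

rng = random.Random(42)
samples = [next_packet(rng, 1e-3)[1] for _ in range(100_000)]
print(round(sum(samples) / len(samples)))   # sample mean, near 252 bytes
```

The weighted mean of the Table 1 distribution is roughly 252 bytes, matching the measured average reported above.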
The Capetanakis tree protocol results are for an asynchronous version of the protocol constructed using the idle marking collision technique described in Section 2. The simulation is essentially the same as the SDMA simulation, except that the collision duration is not halved with each subdivision of the interval of enabled addresses. This simulates the version of the Capetanakis protocol which uses unique addresses rather than coin flips to generate address bits. Since the Capetanakis protocol does not correlate addresses with station locations, no such correlation was modeled in the Capetanakis simulation used here. Both the SDMA and Capetanakis implementations incorporate the Massey improvement, whereby colliding stations that see an idle step in the left subinterval immediately subdivide rather than transmitting in the right subinterval and colliding again [17].
The quantities reported were produced by averaging the results of three runs for each set of parameters. At least
100 000 packets were transmitted in each run.
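The averaging over three replicated runs suggests t-based interval estimation for the significance statements made later in this section. The sketch below shows one way such 95% confidence intervals could be computed; the sample utilization values are invented placeholders, not measured results from the paper.

```python
# Sketch: 95% confidence interval for a mean over three independent
# simulation runs (Student's t with 2 degrees of freedom). The sample
# utilizations below are invented placeholders.

import math

T_975_DF2 = 4.3027  # two-sided 95% critical value, 2 degrees of freedom

def mean_ci95(samples):
    """Return (mean, half-width of the 95% confidence interval)."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)  # sample variance
    half = T_975_DF2 * math.sqrt(var / n)
    return mean, half

m, h = mean_ci95([0.823, 0.825, 0.824])
print(f"{m:.4f} +/- {h:.4f}")
```

Two protocols would then be called significantly different at a given load when their intervals do not overlap.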
4. Performance of SDMA
It was expected that the Ethernet protocol would prove superior to the SDMA protocol under light traffic conditions, the situation for which the Ethernet protocol was designed. It was further expected that the collision resolution strategy employed by the SDMA protocol would result
in higher overall throughput and lower overall delay under conditions where the network was more heavily loaded. It was also expected that SDMA’s performance would be
slightly better than an asynchronous implementation of the Capetanakis tree protocol, the CRA to which it is most similar, in both overall throughput and average delay.
SDMA-RT improves on the overall utilization of the original protocol by decreasing the overhead for idle marking collisions. It exhibits little or no difference from SDMA in terms of fairness of access. On the other hand, SDMA-RE
Fig. 2. Average utilization versus offered load for the packet mixture with exponentially distributed arrivals.
was expected to mitigate the location bias of SDMA by
allowing stations to eliminate the wait for the end of the current collision resolution epoch before joining the
algorithm. Simulation shows that this is in fact the case over an important range of offered loads.
We discuss the performance of the protocols in terms of
average utilization, average delay, variance of delay, and fairness (represented as the standard deviation of individual station throughput) in the following sections.
4.1. Performance on 10 Mbps networks
For comparisons of SDMA with the Capetanakis protocol
and Ethernet, we present only results for the packet mixture here. The advantages displayed by SDMA were also present to a lesser extent in the maximum-sized packet load and to a greater extent in the small packet load. The complete performance comparison for 10 Mbps networks is given in [18]. In the comparison of the three SDMA variants, we show results for all three traffic patterns.
4.1.1. Average network utilization
Fig. 2 shows the average network utilization as a percentage of the total capacity of the network versus the offered traffic load, also presented as a percentage of the capacity of the network. Offered load is the sum of the loads presented by each station running on an otherwise idle network and is a measure of the traffic that the stations are attempting to introduce onto the network. The figure shows that, contrary to expectations, under light loads (offered load < 1), SDMA does achieve higher utilization of the network.
This situation becomes more pronounced when the offered
load approaches and exceeds the capacity of the network. At an offered load level of 200% of capacity, SDMA achieves a
utilization of 82.4%, as compared with 81.4% for Capetanakis and 71.7% for Ethernet. The difference between SDMA and Capetanakis is significant at the 95% confidence level over the range of offered loads from approximately 80% to the highest load considered.

Figs. 3-5 show the average overall utilization of the network for the three SDMA variants versus the offered load
Fig. 3. Average utilization for SDMA variants with traffic consisting of 1526 byte packets.
Fig. 4. Average utilization for SDMA variants with traffic consisting of the packet mixture.
for the three traffic loads discussed in Section 3.3. In all
three cases, there is little to distinguish between the three protocols at low loads, since all follow the CSMA/CD discipline for initial access to the medium. For maximum-sized
packets there is little difference in overall utilization over
the entire range of loads considered. For this load, the transmission time of the large packets dominates other factors in determining the utilization of these three protocol variants. Results for the packet mixture and the small packet loads
show a larger distinction among the three protocols. When the load is less than the capacity of the network, both of the
variants show an advantage over the original SDMA in maximum utilization, with SDMA-RE demonstrating a
slight advantage over SDMA-RT. As the load increases, SDMA-RE maintains its advantage, while the utilization
of SDMA-RT approaches the level of SDMA. This reflects the fact that, as the network becomes busier and the number
of stations involved in each collision increases, the number of idle steps in the operation of the algorithm (from which
SDMA-RT gains its advantage) decreases. For the packet
mixture at the highest offered load considered (approximately 300% of capacity), the average utilization for SDMA was 83.1%, versus 83.5% for SDMA-RT and
84.1% for SDMA-RE. As was the case in the comparison
with the Capetanakis protocol in Section 4, the difference
between the protocols is significant at the 95% confidence level for the range of offered loads between 80% and the
maximum load considered. At a similar load for the small
packet traffic pattern, the utilization for SDMA was 60.8%, versus 61.2% for SDMA-RT and 63.5% for SDMA-RE.
4.1.2. Average delay
Fig. 6 shows the average queueing delay for packets in seconds versus average utilization. The average packet from the packet mixture takes approximately 200 µs to transmit. This number is reflected in the low-load values shown in this figure. The average delay for Ethernet is only slightly higher
than the transmission time at low offered load levels. This
behavior is matched by both of the CRAs, since they also transmit packets immediately when there is no contention.
Fig. 5. Average utilization for SDMA variants with traffic consisting of 76 byte packets.
Fig. 6. Average delay versus utilization for the packet mixture with exponential arrivals.

Fig. 8. Average delay for SDMA variants with traffic consisting of the packet mixture.
As the load increases and contention for the channel begins,
the two CRAs demonstrate a clear advantage over Ethernet, with SDMA experiencing slightly lower average delay than
the Capetanakis protocol. At an offered load of approxi-
mately 75%, Ethernet’s average delay is 2.13 ms, the Capetanakis protocol’s delay is 1.34 ms, and SDMA’s
delay is 1.24 ms. As the load increases to approximately the capacity of the network, the Ethernet delay is 3.71 ms versus 2.53 ms for Capetanakis and 2.37 ms for SDMA.
The difference between average delay for SDMA and Capetanakis is significant at the 95% confidence level
over the range of offered loads from approximately 50%
through the highest loads considered.
The picture of the relative performance of the three SDMA variants indicated in Section 4.1.1 is mirrored, as expected, by the average delay behavior of the three protocols, shown in Figs. 7-9. It is worth noting that although the curves appear to be asymptotic over the loads shown in these figures, plotting the delay against the offered load shows a steep increase over the region between 50% offered load and 200% offered load, after which the rate of increase moderates significantly.

Fig. 7. Average delay for SDMA variants with traffic consisting of 1526 byte packets.
Fig. 9. Average delay for SDMA variants with traffic consisting of 76 byte packets.
4.1.3. Variance of delay
Fig. 10 shows the variance of the packet delay versus the
network utilization for the three protocols. Lower variance of delay is a well-known advantage of CRAs over random backoff collision handling methods, as demonstrated by
these figures. At low loads where little contention occurs, there is little difference between the three protocols. As
contention begins, the variance of delay for Ethernet
increases much more rapidly than for either of the CRAs. Once the traffic becomes heavy the SDMA algorithm
resolves collisions among larger numbers of stations more efficiently than Ethernet. When packets are involved in such
collisions, the delay they experience is much more regular
than Ethernet packets transmitted under similar load
Fig. 10. Delay variance versus utilization for the packet mixture with exponential arrivals.
Fig. 12. Delay variance for SDMA variants with traffic consisting of the packet mixture.
conditions owing to the unpredictability of Ethernet’s bin-
ary exponential backoff algorithm. Though not evident from
this figure, the variance of delay for the two CRAs levels off and remains steady as offered load increases past the
capacity of the network.
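Ethernet's delay unpredictability comes directly from its backoff rule. A minimal sketch of IEEE 802.3 truncated binary exponential backoff (slot-time units; the function name is ours):

```python
import random

def beb_backoff_slots(attempt, rng=random):
    """After the attempt-th successive collision, an 802.3 station waits a
    uniform random number of slot times in [0, 2**min(attempt, 10) - 1];
    after 16 attempts the packet is discarded."""
    if attempt > 16:
        raise RuntimeError("excessive collisions: packet dropped")
    return rng.randrange(2 ** min(attempt, 10))

# The spread of possible waits doubles with every collision (up to 1024
# slots), which is what drives Ethernet's delay variance up under load.
```

A CRA, by contrast, bounds the wait by the length of the current resolution epoch, so its delay distribution stays much tighter.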
Figs. 11-13 show the variance of the delay for the SDMA variants versus offered load for the three traffic patterns. As for the previous metrics, little can be said about the relative behavior of the three variants under the load composed of
large packets. However, it is clear under the other two loads that the variance for SDMA-RE is much worse than the
variance for the other two variants. Fig. 12 also suggests
that SDMA-RT has slightly better variance of delay than SDMA over part of the offered load range.
4.2. Location bias of SDMA
The primary feature of the SDMA protocol is that it takes advantage of the actual positions of stations on the network
Fig. 11. Delay variance for SDMA variants with traffic consisting of 1526 byte packets.
Fig. 13. Delay variance for SDMA variants with traffic consisting of 76 byte packets.
to achieve an advantage in overall network performance over similar protocols which do not take advantage of
such information. It is therefore not surprising that stations
at different locations on the network experience different
levels of responsiveness. Similar effects have been observed
in other protocols for different types of network that make use of the location of stations as part of their protocol [19]. Simulation results indicate that the SDMA protocol gives a
significant performance advantage to those stations which are considered to have low addresses on the network, i.e.,
those which are closest to the “left hand” end of the cable - addresses are calculated by distance from this end of the
cable. Fig. 14 shows the average utilization and delay for each station for SDMA and Ethernet. (The Capetanakis tree
protocol is not included in this comparison because the simulation of the protocol did not decouple the station
addresses from the physical location of the station. The
Capetanakis protocol should have better behavior in terms
Fig. 14. Station utilization and delay for the packet mixture at three offered loads.
of fairness than SDMA.) Results are shown for offered loads of approximately 50%, 100% and 200% of capacity. The bias is demonstrated most strongly under heavy overloads,
and is not evident under the more normal 50% load. As Fig. 14(c) and Fig. 14(d) show, Ethernet also displays a location
bias under the same circumstances, although it is stations at
either end of the network which suffer. This effect was noted by Gonsalves and Tobagi [20].
To provide a direct comparison of the unfairness of the two protocols, Fig. 15 shows the standard deviation of uti- lization at individual stations versus offered load. The spread is much smaller for SDMA than for Ethernet for any load greater than 50%. At offered load approximately equal to the capacity of the network, the standard deviation for SDMA is 0.0013, versus 0.0021 for Ethernet.
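The unfairness metric plotted in Fig. 15 is simply the population standard deviation of the per-station utilizations, which could be computed as:

```python
def fairness_spread(per_station_util):
    """Population standard deviation of per-station utilization; a larger
    value means channel capacity is divided less evenly among stations."""
    n = len(per_station_util)
    mean = sum(per_station_util) / n
    var = sum((u - mean) ** 2 for u in per_station_util) / n
    return var ** 0.5

# Perfectly even sharing gives a spread of zero.
print(fairness_spread([0.25] * 32))   # 0.0
print(fairness_spread([0.02, 0.04]))  # ~0.01
```

By this measure a value of 0.0013 (SDMA at full load) indicates a much tighter grouping of station utilizations than Ethernet's 0.0021.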
Figs. 16-18 show the standard deviation of per station
utilization versus the offered load for the three SDMA variants. Recall that the motivation for the design of SDMA-RE was the location bias of the original SDMA protocol. These
figures show that the strategy adopted by SDMA-RE is only
partially successful. Over the range of loads from roughly
the point at which contention begins (which varies for the three loads) up to loads of approximately 200% of capacity, the spread of utilization among the hosts on the network is lower for SDMA-RE than for the other two variants. For any load in excess of 200% of capacity, SDMA-RE exhibits unfairness which is more severe than that of the original algorithm. The degree of unfairness for the SDMA-RT protocol parallels that of the SDMA protocol, but is consistently slightly lower.
To investigate further the nature of the unfairness introduced by the SDMA-RE variant, we show the per station
Fig. 15. Standard deviation of per station utilization versus offered load for the packet mixture with exponential arrivals.
Fig. 16. Protocol fairness: standard deviation of per station utilization for SDMA variants with traffic consisting of 1526 byte packets.
utilization in Fig. 19. This plot is for the large packet traffic load simulated at the crossover point of about 200% offered load. While there does not appear to be a consistent pattern such as the decrease in utilization from low-address to high-address stations exhibited by SDMA, it is clear from this figure that stations are not provided fair access to the network under these circumstances. The data indicate that stations which are rightmost in a collision resolution interval
Fig. 17. Protocol fairness: standard deviation of per station utilization for SDMA variants with traffic consisting of the packet mixture.
Fig. 18. Protocol fairness: standard deviation of per station utilization for SDMA variants with traffic consisting of 76 byte packets.
Fig. 19. Per station utilization of SDMA-RE for 1526 byte packets at a load of approximately 200% of network capacity.
receive a smaller share of the capacity of the network than other stations.

4.3. Performance at higher data rates

Given the trend towards higher data rates in LANs, we also considered the performance of the various protocols as data rates increased. CSMA protocols require certain adjustments as the data rate at which they operate increases in order to maintain the essential relationship between maximum propagation delay and minimum packet size. The two choices are to decrease the maximum extent of the network in proportion to the shrinking bit time, or to maintain the physical size of the network but increase the minimum allowed packet size so that the transmission time still matches the unchanged propagation time. The 100 Mbps versions of Ethernet took the former approach. For this reason, the results discussed here also decrease the length of the network to compensate for shorter bit times as the data rate increases.
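The constraint both options preserve can be sketched numerically. This is a simplified model (our own, ignoring Ethernet's interframe and repeater allowances) that assumes a signal propagation speed of 2 × 10^8 m/s:

```python
def min_packet_bits(cable_m, rate_bps, prop_speed_mps=2.0e8):
    """A transmission must outlast the round-trip propagation time so the
    sender can still detect a collision: at least 2 * (length / speed)
    seconds on the wire, converted to bits at the given data rate."""
    return 2 * (cable_m / prop_speed_mps) * rate_bps

# A tenfold rate increase paired with a tenfold shorter cable leaves the
# minimum packet size unchanged -- the approach taken by 100 Mbps Ethernet.
print(min_packet_bits(2500, 10e6))   # ~250 bits
print(min_packet_bits(250, 100e6))   # same minimum
```

Keeping the cable length fixed instead would force the minimum packet size to grow in proportion to the data rate.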
To isolate the effects of data rate increase, all other parameters were fixed to the extent possible. In particular,
the offered load was held as close as possible to 100% of capacity for the given data rate in all runs. Again, there were
two possible methods for maintaining the offered load level
as the data rate increased: increase the size of the packets
transmitted or increase the rate at which packets were trans- mitted while fixing the distribution of packet sizes. The
experiments reported here used the latter technique. Packet- size distribution depends primarily on the applications which make use of a network, not on the data rate of the network.
In summary, each data point presented in the following results reflects a change in three parameters: the maximum data rate of the network, the size of the network, and the rate at which stations generate packets. Other aspects of the simulation remained as described in Section 3.3.
4.3.1. Average network utilization

Figs. 20-22 show the average network utilization for
SDMA, the Capetanakis protocol and Ethernet for the
three packet-size distributions discussed in Section 3.3 as the maximum data rate of the network increases. The most striking aspect of these figures is the fact that the advantage
in throughput enjoyed by the collision resolution algorithms
at a maximum data rate of 10 Mbps is not sustained over the
range of maximum data rates simulated. In the case of
Fig. 20, the average throughput curves cross when the maximum data rate is approximately 30 Mbps, after which point
the average throughput is higher for Ethernet. This behavior is a result of the fact that, in order to maintain a constant
offered load of approximately 100% of the capacity of the
network, the rate at which packets were generated was
scaled in proportion to the increase in the data rate for each set of levels simulated. For example, at 10 Mbps
using 76 byte packets, SDMA successfully transmitted 10 273 packets per second, while Ethernet transmitted 8 211 packets per second. With the data rate increased to 50 Mbps, SDMA transmitted 28 633 packets per second and Ethernet transmitted 30 485 packets per second. Given that the collision resolution algorithms cause packets to
bunch up waiting for the beginning of the next collision
resolution epoch to begin transmitting, an increase in the packet transmission rate translates to an increase in the
number of ready stations at the beginning of each epoch, more collisions needed to resolve the epoch, and thus
longer per packet delays. At 50 Mbps, fewer than 11% of packets transmitted by SDMA experienced fewer than four collisions, while 74% of packets transmitted by Ethernet experienced fewer than four collisions. In fact, 42% of the packets transmitted by Ethernet encountered no contention at 50 Mbps, while fewer than 1% of the SDMA packets were transmitted successfully on their first attempt.
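The bunching effect can be illustrated with a toy model of address-interval splitting (our own sketch, not the paper's simulator): counting the channel slots a Capetanakis-style binary split needs to resolve n simultaneously ready stations shows the per-epoch cost growing quickly with n.

```python
import random

def slots_to_resolve(ready, lo, hi):
    """Channel slots (collisions + idles + successes) needed to resolve
    the ready stations with addresses in [lo, hi), halving the interval
    after every collision as in a Capetanakis-style tree algorithm."""
    inside = [a for a in ready if lo <= a < hi]
    if len(inside) <= 1:
        return 1                      # idle slot, or one successful packet
    mid = (lo + hi) // 2              # collision: split and resolve halves
    return 1 + slots_to_resolve(ready, lo, mid) + slots_to_resolve(ready, mid, hi)

random.seed(1)
for n in (2, 8, 24):                  # more ready stations -> more slots per packet
    stations = random.sample(range(32), n)
    print(n, slots_to_resolve(stations, 0, 32))
```

As the packet arrival rate scales up with the data rate, more stations are ready at each epoch boundary, so each successful packet pays for a larger share of collision and idle slots.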
Given that a traffic load consisting entirely of minimum-length packets is an endpoint which is not observed in
Fig. 20. Average network utilization versus maximum data rate, traffic consisting of 76 byte packets, network size decreased to maintain slot size, load scaled to maintain approximately 100% utilization.
Fig. 22. Average network utilization versus maximum data rate, traffic consisting of 1526 byte packets, network size decreased to maintain slot size, load scaled to maintain approximately 100% utilization.
Fig. 21. Average network utilization versus maximum data rate, traffic
consisting of the packet mixture, network size decreased to maintain slot
size, load scaled to maintain approximately 100% utilization.
practice, it is interesting to compare the results of Fig. 20
with Fig. 21. Here, the situation is similar, but the crossover point for the curves has shifted further out along the X axis, to a
maximum data rate of approximately 58 Mbps. So, for a more realistic mixture of packets, SDMA enjoys a perfor-
mance advantage over a wider range of data rates. It bears repeating that the packet mixture used in these simulations
has a mean packet size which is lower than other observed
mixtures (see Section 3.3). We would therefore expect that the crossover point for packet mixtures with larger mean
packet sizes would occur at even higher data rates. Fig. 22 lends weight to this hypothesis. This figure shows the aver-
age throughput for a traffic load composed of 1526 byte packets. Here, the curves do not cross over within the simu-
lated range of maximum data rates.
4.3.2. Average delay

Figs. 23-25 show the average delay for SDMA, the Capetanakis protocol and Ethernet for the three packet-size
Fig. 23. Average delay versus maximum data rate, traffic consisting of
76 byte packets, network size decreased to maintain slot size, load scaled
to maintain approximately 100% utilization.
Fig. 26. Delay variance versus maximum data rate, traffic consisting of
76 byte packets, network size decreased to maintain slot size, load scaled
to maintain approximately 100% utilization.
distributions discussed in Section 3.3 as the maximum data rate of the network increases. Average delay decreases as maximum data rate increases mainly because the actual transmission time of a packet is shorter at higher data rates. Because the size of the network is shrinking, collision duration is also shorter on average. As expected, the curves display the same crossover behavior as the throughput curves discussed in Section 4.3.1. As the data rate increases, the average delays for SDMA and the Capetanakis protocol converge. Results for the SDMA-RE protocol indicate that it maintains better average delay than Capetanakis over the data rates examined here. (These data were omitted from the figures to reduce clutter, since the results for the two variant protocols were consistent with those presented in Section 4 and provided no new insights.)
Fig. 24. Average delay versus maximum data rate, traffic consisting of the
packet mixture, network size decreased to maintain slot size, load scaled to
maintain approximately 100% utilization.
4.3.3. Delay variance
Figs. 26-28 show the delay variance for the three packet-size distributions as the maximum data rate of the network
Fig. 25. Average delay versus maximum data rate, traffic consisting of 1526 byte packets, network size decreased to maintain slot size, load scaled to maintain approximately 100% utilization.
Fig. 27. Delay variance versus maximum data rate, traffic consisting of the packet mixture, network size decreased to maintain slot size, load scaled to maintain approximately 100% utilization.
Fig. 28. Delay variance versus maximum data rate, traffic consisting of
1526 byte packets, network size decreased to maintain slot size, load scaled
to maintain approximately 100% utilization.
increases. The variance of delay for the CRAs decreases
slightly with increasing data rate, but there are no dramatic
changes. Ethernet, on the other hand, experiences a signifi-
cant decrease in the variability of delay as the data rate increases. As described in Section 4.3.2, this is probably due to the decrease in transmission time, the decrease in
network length and the related decrease in collision duration.
5. Conclusions
We have described three variants of the space division
multiple access protocol, a new collision resolution
algorithm. The performance of the original protocol and
the two variants was investigated through simulation. The
original protocol was compared with the simulated perfor- mance of Ethernet and an asynchronous version of the
Capetanakis tree protocol at a data rate of 10 Mbps under identical network conditions. SDMA was found to achieve a
higher average utilization and lower average delay under heavy load than either of the other protocols. The protocol
also achieved a significant improvement in the variance of delay versus Ethernet. SDMA’s performance advantage
over Ethernet holds in essentially any situation where con- tention occurs at the 10 Mbps data rate.
The original protocol was shown to be biased in favor of
stations with low addresses. However, comparison of indi-
vidual station performance under the same network layout and traffic conditions showed that this bias was less severe
than a similar location bias in the Ethernet protocol. We
conclude that the location bias, being no worse than that present in a widely deployed protocol, should not be a serious impediment to the use of SDMA.
The SDMA-RE variant was designed to counteract the location bias experienced by stations using the original SDMA protocol. The design was successful for a wide range of traffic loads in the load region most likely to be
of interest in real networks. The algorithm introduces its
own pattern of unfairness, but this unfairness manifests
itself at much higher offered loads than the problem with
the original SDMA. The technique used in the SDMA-RT variant could be
used to reduce the size of the set of stations participating in
the idle marking collision for any CRA which uses unique
addresses as the criterion for subdividing an interval. How-
ever, the performance benefit demonstrated for SDMA-RT
is a direct result of the correspondence between address and
physical location assumed by SDMA. On the other hand, the
“relaxed entry” technique is applicable to any collision
resolution algorithm which has the property that stations which have transmitted in the current collision resolution
epoch are never re-enabled within the same epoch.
The protocols were also compared at higher data rates
with offered load approximately equal to the capacity of
the network. In these experiments, SDMA was shown to have better performance than Ethernet in most interesting
situations. Ethernet had higher utilization for traffic loads
with a predominance of small packets at higher data rates.
Acknowledgements
I wish to thank Arthur Charlesworth, Joseph F. Kent and Brian Whetten for careful proofreading of this paper and
helpful comments regarding its content.
References
[1] J. Capetanakis, Generalized TDMA: the multi-accessing tree protocol, IEEE Transactions on Communications 27 (10) (1979) 1476-1484.
[2] J. Capetanakis, Tree algorithms for packet broadcast channels, IEEE Transactions on Information Theory 25 (5) (1979) 505-515.
[3] R.G. Gallager, Conflict resolution in random access broadcast networks, in: Proceedings of the AFOSR Workshop on Communication Theory Applications, September 1978, pp. 74-76.
[4] B.S. Tsybakov, V.A. Mikhailov, Random multiple packet access: part-and-try algorithm, Problemy Peredachi Informatsii 16 (1980) 65-79.
[5] S.S. Panwar, D. Towsley, Y. Armoni, Collision resolution algorithms for a time-constrained multiaccess channel, IEEE Transactions on Communications 41 (7) (1993) 1023-1026.
[6] R.M. Metcalfe, D. Boggs, Ethernet: distributed packet switching for local computer networks, Communications of the ACM 19 (7) (1976) 395-404.
[7] D. Towsley, G. Venkatesh, Window random access protocols for local computer networks, IEEE Transactions on Computers 31 (8) (1982) 715-722.
[8] M. Molle, Asynchronous multiple tree algorithms, in: Proceedings of the SIGCOMM '83 Symposium on Communications Architectures and Protocols, ACM, March 1983, pp. 241-218.
[9] M.K. Molloy, Collision resolution in an unslotted environment, Computer Networks 9 (3) (1985) 209-214.
[10] Y.-T. Wu, J.-F. Chang, Collision resolution for variable-length messages, IEEE Transactions on Communications 41 (9) (1993) 1281-1283.
[11] B. Whetten, S. Steinberg, D. Ferrari, The packet starvation effect in CSMA/CD LANs and a solution, in: Proceedings of the 19th Conference on Local Computer Networks, Minneapolis, MN, 2-5 October 1994, pp. 206-217.
[12] F.A. Tobagi, F. Borgonovo, L. Fratta, Expressnet: a high-performance integrated services local area network, IEEE Journal on Selected Areas in Communications 1 (1983) 898-912.
[13] Institute of Electrical and Electronics Engineers, IEEE Standards for Local and Metropolitan Area Networks: Distributed Queue Dual Bus (DQDB) Subnetwork of a Metropolitan Area Network (MAN), IEEE Std 802.6-1990, 1990.
[14] M. Rios, N.D. Georganas, A hybrid multiple-access protocol for data and voice-packet over local area networks, IEEE Transactions on Computers 34 (1) (1985) 90-94.
[15] B.L. Barnett III, An Ethernet performance simulator for undergraduate networking, in: Proceedings of the ACM SIGCSE Technical Symposium, February 1993, pp. 145-150.
[16] R. Gusella, A measurement study of diskless workstation traffic on an Ethernet, IEEE Transactions on Communications 38 (9) (1990) 1557-1568.
[17] J. Massey, Collision resolution algorithms and random access communication, Technical Report UCLA-ENG-8016, UCLA, 1980.
[18] L. Barnett, A topology-aware collision resolution algorithm, in: Proceedings of the 19th Conference on Local Computer Networks, Minneapolis, MN, 2-5 October 1994, pp. 196-205.
[19] H. Kaur, G. Campbell, DQDB - an access delay analysis, in: INFOCOM '90, 1990, pp. 630-635.
[20] T.A. Gonsalves, F.A. Tobagi, On the performance effects of station locations and access parameters in Ethernet networks, IEEE Transactions on Communications 36 (4) (1988) 441-449.