



Computer Networks and ISDN Systems 29 (1997) 583-593

Rate-based control schemes for ABR traffic - Design principles and performance comparison

Ram Krishnan ’ Motor&. Inc., 20 Cabot Boulevard. MS: M4- (5, Mansfield, MA 02048-1193. USA

Abstract

A new ATM service category, the Available Bit Rate service, has been introduced in the ATM Forum. It dynamically allocates available bandwidth to users by controlling the flow of user traffic with feedback. The Forum has ratified the rate-based flow control framework for the support of this new service. In this paper, we provide a recipe for designing rate-based feedback schemes, demonstrating the rich variety of available switch mechanisms. Each aspect of the feedback control loop mechanism is explored in detail and several available choices are investigated. Two example switch mechanisms are provided that illustrate the rate-based control design principles. The ability of these mechanisms to support the desired objectives of an ABR service is compared using a reference network configuration. Simulation results show that the rate-based framework allows a great degree of architectural flexibility in the design of switch mechanisms. However, the explicit rate based approach is more capable than single-bit feedback approaches in providing immediate access to available bandwidth in the presence of VBR sources. The rate-based framework provides switch vendors sufficient flexibility to choose a mechanism among several available options, based on their performance requirements and cost budgets. © 1997 Elsevier Science B.V.

Keywords: ATM; Available Bit Rate service; Traffic management; Rate-based feedback flow control; Resource management; ATM networks and systems; Routing and congestion control

1. Introduction

The hottest technology for the next decade of internetworking, Asynchronous Transfer Mode (ATM), will provide the standard protocol for 21st-century voice, video and data integration. ATM is a networking protocol with the potential to support applications with distinct tolerances for delay, jitter, and cell loss and distinct requirements for bandwidth or throughput. Different traffic classes have been defined to address the service requirements of diverse applications expected to be supported by ATM networks.


Constant Bit Rate and Variable Bit Rate services were intended to address applications such as circuit emulation, voice or entertainment-quality video with precisely defined requirements for throughputs, delays and delay variations. A third service class, referred to as Unspecified Bit Rate (UBR) and also called the Best-Effort traffic class, has been defined for applications that do not require any performance guarantees.

There was a clear need for another traffic class that provided support for applications that are unaware of their throughput requirements. Bursty LAN traffic is an example of such an application.



Such client-server based applications desire immediate bandwidth access and cannot be supported efficiently using either the CBR or the VBR service. They have no requirements for guaranteed bandwidth access; they contend for available bandwidth. Consequently, such connections should not be rejected for lack of bandwidth. The ATM Forum spearheaded the effort to develop a new service category, called the Available Bit Rate (ABR) service, that provides such support. ABR is intended to carry bursty flows of information that might be generated by LAN emulation or TCP file transfer traffic. Unlike CBR and VBR applications, these applications cannot declare precise service requirements to the network. Additionally, they cannot convey precise information regarding their traffic parameters to the network for policing purposes. But they have the ability to reduce their information transfer rate if the network requires them to do so. Likewise, they may wish to increase their rate if the network has sufficient bandwidth available. It is clear that data applications that require lossless service cannot afford to use the UBR service. Guarantees on cell losses are required to prevent the throughput collapse observed by Floyd and Romanow [9]. This implies the need for a mechanism that facilitates feedback from network switches to endsystems to enable ATM networks to support a "virtually lossless" ABR service. Hence, one of the objectives of the ABR service is to minimize cell loss at the expense of delay.

Feedback flow control mechanisms in packet networks have been studied extensively in the past [3]. The purpose of feedback in the context of the ABR service is to use the available bandwidth (after allocation to CBR and VBR sources) efficiently and to allocate it evenly among the active ABR connections. Other objectives include instantaneous access to bandwidth, which is required to offer dynamic ABR services. The ANSI Frame Relay standard proposed the notion of controlling the rate of a traffic source through a two-bit header in the Frame Relay packet [1]. This is a rate-based analogue of the DECnet protocol that adjusts window sizes. At fixed intervals, the algorithm makes a binary decision whether to increase the current window size by a fixed amount or to decrease it by an amount proportional to the current window size, based on the feedback it receives from

the network. This results in a linear increase or exponential decrease of the window size as a function of time. Congestion is signaled by the network in the forward direction of the connection through a single-bit congestion indicator in each packet. The destination determines the state of the network through these bits and signals it to the source via acknowledgment packets, achieving an end-to-end feedback control loop.
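A minimal sketch of this kind of window adjustment (additive increase when no congestion is signaled, decrease proportional to the current window otherwise) is given below. The parameter values and function name are illustrative assumptions, not values taken from the Frame Relay or DECnet specifications.

```python
# Illustrative sketch of the window adjustment described above: additive
# increase when the feedback says "not congested", a decrease proportional
# to the current window (i.e. exponential over time) when it says "congested".

def adjust_window(window, congested, increase=1.0, decrease_factor=0.875,
                  min_window=1.0, max_window=1000.0):
    """Return the new window size after one feedback interval."""
    if congested:
        # Decrease by an amount proportional to the current window size.
        window *= decrease_factor
    else:
        # Increase by a fixed amount.
        window += increase
    return max(min_window, min(window, max_window))
```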

Significant advances have been made in the ATM Forum in defining a closed-loop rate-based traffic control algorithm in support of the ABR service. In the next section, we review the rate-based proposal for ABR traffic; a detailed description of the rate-based framework is included in that section. Section 3 provides a recipe for designing rate-based feedback schemes, demonstrating the rich variety of available switch mechanisms. Each aspect of the feedback control loop mechanism is explored in detail and several available choices are investigated. Two example switch mechanisms are provided in Section 4 to illustrate the rate-based control design principles. The ability of these mechanisms to support the desired objectives of an ABR service is compared using a reference network configuration in Section 5. A simple EFCI mechanism and a per-VC queueing scheme are also simulated as a basis for reference.

2. Rate-based congestion control framework

The ATM Forum overwhelmingly endorsed a rate-based framework to support the ABR service, which encompasses a broad range of implementations characterised by varying degrees of cost and complexity. The framework specifies the following:

• End-to-end flow control is supported, with an option for intermediate switches or networks to segment the control loop.

• Switches can support ABR connections by either setting the EFCI bit or providing more detailed explicit feedback to endsystems.

• Mechanisms and control-information formats are defined to allow switches implementing a variety of feedback mechanisms to coexist within the same control loop and interoperate with the endsystems.


A precise definition of the SES (Source End System) and DES (Destination End System) behavior, the content and format of the ATM control cells called RM (Resource Management) cells, and a range of feasible switch mechanisms are specified in [10]. These mechanisms are characterised by different levels of complexity and achieve varying degrees of fairness. The wide range of options demonstrates the flexibility in the choice of switch mechanisms available within the rate-based framework.

2.1. Basic operation

Fig. 1 presents the key elements of a typical communication network implementing a feedback control scheme that dynamically regulates the flow of data transported through the network between SES and DES. The endsystems typically reside in the Network Interface Cards (NICs) at the extreme points of an ATM VC, which includes a forward (SES to DES) and a backward (DES to SES) path. A key feature of rate-based proposals is the ability of the SES to submit cells into the network at a variable, shaped or controlled rate.

Switching elements route ATM cells from SES to DES, providing the necessary resources such as port bandwidth and buffers. Contention for these limited resources leads to congestion, resulting in excessive delay or potential loss of ATM cells. A switch implements both a cell scheduling mechanism for port bandwidth management and a cell buffer management mechanism. Congestion detection in the switch is facilitated by monitoring the level of resource usage, which is then indicated to the endsystems via a congestion indication. Feedback from the network gives the endsystems the necessary information to respond to changes in the available bandwidth by appropriately modifying their rates, so that congestion is controlled and available bandwidth is used efficiently.

Fig. 1. Rate-based end-to-end feedback loop.

The SES sets up a connection with a call setup request for an ABR connection. During this signaling setup, values for a set of ABR-specific parameters are signaled by the SES and the network elements. Some of these parameters are requested by the source based on its requirements, with subsequent modification by the network (e.g., PCR (Peak Cell Rate), MCR (Minimum Cell Rate), etc.), while others could be set by the network, viz., those impacting the increase/decrease behavior such as AIR (Additive Increase Rate), Nrm and RDF (Rate Decrease Factor).

The rate at which an ABR SES is allowed to transmit is denoted by the Allowed Cell Rate (ACR), which is initially set to ICR (Initial Cell Rate). ACR is always bounded by MCR and PCR. An SES always starts a transmission by sending an RM cell followed by data cells. The SES continues to send an RM cell after every Nrm - 1 data cells, in order to provide the network an opportunity to convey state information back to the source. The source writes its current rate into the CCR (Current Cell Rate) field of the RM cell, and the rate at which it desires to transmit cells (usually the maximum rate, PCR) into the ER (Explicit Rate) field. As the RM cell traverses the network, the switches use the information in the CCR field to decide the allocation of bandwidth among competing ABR connections. Switches also have the option of reducing the value of the ER field or setting the CI (Congestion Indication) bit to 1. EFCI-capable-only switches ignore the content of the RM cell.

When the RM cell arrives at the destination, the latter reverses the DIR (direction) bit and turns the cell back towards the source. If the destination is congested and is unable to support the rate specified in the ER field, it reduces ER to a rate it can support. Additionally, if the destination observed a set EFCI bit in the most recent data cell received, it is required to set the turned-around RM cell's CI bit to indicate congestion. This is necessary to support a network comprising only EFCI switches. As the turned-around RM cells proceed through the network, explicit-rate-capable switches examine the ER field, and modify the value in the ER field if they cannot support that rate. Clearly, switches should not attempt to increase the ER value. Switches incapable of computing an explicit rate can set the CI field in the RM cell to notify endsystems of congestion.
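The turn-around behaviour just described can be summarised with the following sketch. The dictionary keys (dir, er, ci) and function names are illustrative only and do not reflect the exact RM cell field encodings of the ATM Forum specification.

```python
# Hedged sketch of the destination (DES) turn-around and backward switch
# processing described above; ER is only ever lowered, never raised.

def turn_around_rm(rm, dest_supported_rate, saw_efci_on_last_data_cell):
    """Destination reverses the RM cell and reflects its own congestion state."""
    rm["dir"] = "backward"                          # reverse the DIR bit
    rm["er"] = min(rm["er"], dest_supported_rate)   # reduce ER if the DES cannot support it
    if saw_efci_on_last_data_cell:
        rm["ci"] = 1                                # propagate EFCI marking as CI
    return rm

def backward_switch_processing(rm, switch_congested, switch_fair_rate=None):
    """Backward-path switch: ER-capable switches lower ER, others may only set CI."""
    if switch_fair_rate is not None:
        rm["er"] = min(rm["er"], switch_fair_rate)
    elif switch_congested:
        rm["ci"] = 1
    return rm
```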


When the RM cell arrives back at the SES, the SES resets its rate, ACR, based on the information carried in the RM cell. If the CI bit is not set, ACR is allowed to increase up to the value contained in the ER field (in increments of AIR), but never exceeds PCR. If the CI bit is set, ACR is decreased by the Rate Decrease Factor (RDF); ACR is further decreased to the value of ER contained in the RM cell, never dropping below MCR.
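The following sketch captures the source-side rate update described above. It is not the normative ATM Forum source pseudo-code; in particular, the way RDF is applied (here ACR is reduced by ACR/RDF) is one common convention and is an assumption.

```python
# Minimal sketch of the SES ACR update on receipt of a backward RM cell.

def update_acr(acr, rm_ci, rm_er, air, rdf, pcr, mcr):
    if rm_ci == 0:
        # Increase by AIR, but never beyond the ER field or PCR.
        acr = min(acr + air, rm_er, pcr)
    else:
        # Decrease using RDF (one convention: subtract ACR/RDF), then
        # further limit to the ER value carried in the RM cell.
        acr = acr - acr / rdf
        acr = min(acr, rm_er)
    return max(acr, mcr)        # ACR never drops below MCR
```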

3. Characterisation of feedback control schemes

A network element/switch can be classified along multiple dimensions according to its role in the feedback control loop mechanism. Some of these mechanism options are:

Congestion monitoring. A whole range of mechanisms exist for the switch to determine the onset of congestion. Some of these include:

(a) Simple queue-fill thresholding. The switch declares the onset of congestion if the instantaneous queue size exceeds a threshold, T_High. The normal "uncongested" state is detected when the queue size drops below another threshold, T_Low. Two such thresholds are defined for purposes of hysteresis.

(b) Queue input rate thresholding. With rate thresholding, the incoming cell rate into the queue is monitored over short intervals. If this rate exceeds a predefined threshold, congestion is declared by the switch. One possible implementation: if the count of cells arriving at the queue within a measurement interval exceeds a threshold, congestion can be declared. This mechanism implements congestion avoidance: by controlling the incoming rate, the onset of congestion can be prevented, enabling the system to be operated at the knee of the throughput-delay curve [4].

(c) Queue growth rate thresholding. Instead of indicating congestion by a queue-thresholding approach, the change in queue size can be monitored to signal the onset of congestion. In other words, the first difference of queue depth is used as the congestion indicator. This allows tight control of the queue size. Minimal sketches of these three options follow.
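The sketches below illustrate the three monitoring options. The threshold values and function names are illustrative assumptions; they are not parameters prescribed by the paper.

```python
# Minimal sketches of the three congestion-monitoring options above.

def queue_fill_congested(queue_len, previously_congested, t_high=100, t_low=50):
    """(a) Queue-fill thresholding with hysteresis between T_High and T_Low."""
    if queue_len > t_high:
        return True
    if queue_len < t_low:
        return False
    return previously_congested        # between thresholds: keep previous state

def input_rate_congested(cells_in_interval, cell_count_threshold):
    """(b) Input-rate thresholding over a fixed measurement interval."""
    return cells_in_interval > cell_count_threshold

def growth_rate_congested(queue_len, prev_queue_len, growth_threshold=0):
    """(c) Growth-rate thresholding: first difference of queue depth."""
    return (queue_len - prev_queue_len) > growth_threshold
```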

Congestion notification. Once the status of congestion is detected, there are several mechanisms available to the switch to convey the information to the end-systems.

(a) Single-bit feedback. In this scheme, the EFCI (Explicit Forward Congestion Indicator) bit in the ATM cell header is set by the switch during congestion. The end-systems use this one-bit information contained in each cell to increase/decrease their cell rates by an incremental amount. This basic mechanism imposes very small implementation burdens on the switches and requires minimal processing. However, a congested switch has no ability to mark VCs selectively. In particular, a switch will mark all VCs regardless of the individual VCs' rates. This gives rise to the "beat-down" problem: a VC traversing multiple hops is subject to being marked more often than those traversing fewer congested links (owing to the unsynchronised congestion waves at each of these switches), with the result that its bandwidth is driven down for unacceptably long periods of time. This undesirable effect is proportional to the number of traversed congested links. The limitation of non-selective binary marking mechanisms implies the need for some form of intelligent marking to achieve a fair distribution of available bandwidth. Such binary marking mechanisms have been employed in packet switching networks to facilitate feedback to endsystems. They can be used as a reference to estimate/quantify the improvement realised by the adoption of more complex approaches.

(b) Intelligent marking. In this approach, cells belonging to VCs are selectively marked based on their current cell rates, which are carried in the RM cells. In an example implementation, an estimate of the fair share of the bandwidth at a port is computed as a moving average over the rates carried in the RM cells [2]. Upon congestion, the switch sets the CI bit in cells belonging to only those VCs whose rates are above this fair share. VCs with rates below the fair share are not affected.

(c) Explicit rate indication. Single-bit binary feedback schemes were initially proposed for connectionless networks, which have no knowledge about individual flows. It was recognised that connection-oriented ATM networks are in a position to use information about individual connections to make more informed decisions. This has led to a lot of research on explicit rate schemes, wherein a source sets its rate based on the information contained in the RM cells [8].


Incremental rate schemes, which increase/decrease rates by increments (often proportional to the current cell rates), are more conservative than explicit rate schemes and, in most instances, less sensitive to changes in network parameters. Incorrect explicit rate computation may lead to large queue sizes and associated performance degradation. However, explicit rate schemes facilitate straightforward policing: the rates are explicitly specified in the returning RM cells. Such schemes are less sensitive to the choice of the initial cell rate. Moreover, explicit rate schemes facilitate rapid convergence, as opposed to incremental rate schemes that oscillate around the optimal operating point.

The more sophisticated approaches require various degrees of processing of the content of RM cells, and various amounts of memory to store the information needed, possibly on a per-connection basis, to determine the explicit rate. In this paper, we would like to quantify the performance advantages of intelligent bit-marking or explicit rate evaluation approaches with respect to efficiency and fairness in the steady-state allocation of bandwidth.

Queue service. The final dimension that we explore is the switch service mechanism at a port. Complex scheduling mechanisms add to the complexity and cost of the switch hardware while providing enhanced performance.

(a) FIFO per switch port. This is the most elementary service mechanism, wherein cells belonging to all VCs are queued at a single location and no information about individual VCs is maintained. Implementors favor this approach owing to its simplicity. As will be shown later, FIFO queueing suffers from serious performance degradation if used along with binary marking schemes.

(b) Per-VC accounting. In this approach, a table indexed by the VC labels of the ABR connections routed through the switch port is maintained. However, this method does not require per-VC queueing or scheduling. The accounting information can be used to selectively mark VCs based on per-VC queue occupancy. This approach enables the switch to dynamically monitor the number of active VCs and the cell rate of each VC. Reasonably fair allocations of bandwidth for each VC are possible with this approach.

(c) Per-VC queueing. Perfect isolation is possible with this service scheduling mechanism. It can be implemented using Fair Queueing or Weighted Round Robin per-VC scheduling at the ports. Having a separate queue for each of the VCs allows the delay and loss behavior of the connections to be isolated from each other; a single FIFO queue can lead to potentially undesirable interactions between competing flows. Additionally, it is possible to specify bounds on the end-to-end latency with per-VC queueing. While this may not be a requirement for the ABR service currently, it may still be desirable in the future for supporting weak real-time applications such as video teleconferencing. Switch implementation issues arising from per-VC queueing/scheduling are discussed in [13]. Such per-connection buffer management schemes can protect well-behaved users from malicious users, provide fair access to network resources over relatively small time intervals, and preserve the characteristics of traffic as it traverses the network. A simple round-robin sketch of this discipline follows below.
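The sketch below shows per-VC queueing with plain round-robin service, the reference discipline used later in the comparison. Class and method names are illustrative assumptions; a weighted variant (Weighted Round Robin or Fair Queueing) would visit queues in proportion to configured weights instead of one cell per turn.

```python
from collections import deque

class PerVCQueueing:
    """One FIFO queue per VC, served one cell at a time in round-robin order."""

    def __init__(self):
        self.queues = {}       # VC identifier -> deque of cells
        self.order = deque()   # round-robin visiting order

    def enqueue(self, vc, cell):
        if vc not in self.queues:
            self.queues[vc] = deque()
            self.order.append(vc)
        self.queues[vc].append(cell)

    def dequeue(self):
        """Serve at most one cell from the next backlogged VC, or None if idle."""
        for _ in range(len(self.order)):
            vc = self.order[0]
            self.order.rotate(-1)          # move this VC to the back of the order
            if self.queues[vc]:
                return vc, self.queues[vc].popleft()
        return None
```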

4. Some example mechanisms

In this section, we study some example switch mechanisms constructed by choosing schemes from each of the dimensions described in the previous section and combining them. This "mix and match" capability illustrates the rich multitude of mechanisms available within the rate-based framework. Switch vendors have the flexibility to pick from a wide range of available mechanisms based on cost and performance requirements. The simple EFCI and the per-VC queueing mechanisms provide a range against which other switch mechanisms can be compared. In the per-VC queueing mechanism, each connection has access to a separate queue and the link controller services the queues in a round-robin fashion. The simple EFCI mechanism has a single FIFO queue operating with a queue-thresholding congestion detection mechanism.

4.1. Example switch mechanism 1

This scheme is based on per-VC accounting of the instantaneous queue length of each active VC [6,7]. It provides intelligent 1-bit congestion notification without yielding a value for explicit rate feedback.


Congestion is detected with simple queue-fill thresholding.

In this scheme [7], each switch port maintains a register for each VC supported at that port that contains the number of cells belonging to that VC waiting to be served at that port. Let q[i] denote the number of cells waiting in the FIFO queue belonging to virtual connection i. The port also maintains a register containing the total queue length, Q, at that port. Congestion is indicated by marking either the EFCI or the CI bit on cells belonging to virtual connection i if and only if

(i) Q > T, where T is a global threshold, and
(ii) q[i] > t, where t is a local threshold.

The local threshold t could either be a settable parameter or can be made to depend on N, the number of virtual connections for which q[i] > 0, through the formula t = x * Q / N, where x is a programmable threshold.

4.2. Example switch mechanism 2

This scheme utilises an explicit rate scheme for congestion notification coupled with a FIFO service scheduling mechanism. Rate thresholding is employed to detect congestion in this example mechanism. Switches measure their input arrival rate and compare it with the link bandwidth. If the ratio of the input rate to the available link bandwidth exceeds a settable threshold, z, congestion is declared.

The switches compute a "fair" share for constrained sources passing through that node, and reduce the ER field in the returning RM cells to the fair share if the node is congested. Using exponentially weighted averaging, a mean allowed cell rate (MACR) is computed, and the fair share is set at a fraction of this average:

MACR = (1 - α) * MACR + α * CCR,
Fair Share = β * MACR.

Here, α is the exponential averaging factor and β is a multiplier (called the switch down-pressure factor). The suggested values of α and β are 1/16 and 7/8, respectively. For a more detailed description of the mechanism, see [2]. Another down-pressure factor can be used if the switch is "very" congested. Various refinements have been proposed to fine-tune this mechanism; however, for comparison purposes, we limit our attention to the basic scheme. Furthermore, other mechanisms to compute an explicit rate exist, as reported in [8].
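A minimal sketch of the averaging and ER-reduction steps is given below, using the suggested values α = 1/16 and β = 7/8. The congestion test itself (input rate versus link bandwidth against the threshold z) is left as a boolean input; the function names are assumptions for illustration.

```python
ALPHA = 1.0 / 16.0     # exponential averaging factor (suggested value)
BETA = 7.0 / 8.0       # switch down-pressure factor (suggested value)

def update_macr(macr, ccr):
    """Fold the CCR carried in a forward RM cell into the running MACR average."""
    return (1.0 - ALPHA) * macr + ALPHA * ccr

def process_backward_rm(er_field, macr, congested):
    """If the port is congested, reduce the ER field to the computed fair share."""
    if congested:
        fair_share = BETA * macr
        er_field = min(er_field, fair_share)
    return er_field
```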

All the switch algorithms, with the exception of switch mechanism 2, are single-bit feedback approaches (though some of them employ intelligent single-bit feedback). However, they clearly demonstrate that the rate-based framework does not impose a specific method for scheduling cells and managing buffers at switches supporting ABR traffic. This enables a wide range of possible architectures for switch vendors.

5. Simulation results

Source model. In our simulations, we assume a mixture of persistent and bursty sources. Persistent (also known as greedy) sources transmit cells whenever their rate allows them to do so. Bursty sources can be modeled by an ON-OFF traffic distribution. A typical ON-OFF source is either in active mode, when it is transmitting at a constant rate, or in idle mode, when it is silent [5]. A mean burst length of 22 cells was used for the bursty sources. The mean silence duration was appropriately chosen to achieve a desired utilisation of the source.
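The sketch below shows one way to generate such a source: the mean silence is derived from the target utilisation, and burst and silence durations are drawn from an exponential distribution (the choice of distribution is an assumption; the paper only specifies the means).

```python
import random

def mean_silence_cells(mean_burst=22, utilisation=0.01):
    # utilisation = mean_burst / (mean_burst + mean_silence)
    return mean_burst * (1.0 - utilisation) / utilisation

def on_off_source(mean_burst=22, utilisation=0.01, cycles=1000):
    """Yield (state, duration_in_cell_times) pairs for successive ON/OFF periods."""
    mean_silence = mean_silence_cells(mean_burst, utilisation)
    for _ in range(cycles):
        yield "ON", max(1, round(random.expovariate(1.0 / mean_burst)))
        yield "OFF", max(1, round(random.expovariate(1.0 / mean_silence)))
```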

Network model. We use the chain configuration shown in Fig. 2 to compare the different feedback flow-control schemes. The multi-hop configuration provides a means to evaluate the fairness properties of the various control schemes. As shown in the figure, the sets of connections labeled "a" and "b" traverse four switches and three backbone links en route to their destinations. All the other sets pass through two switches and a single backbone link. Each labeled set denotes an access link shared by N connections. All sources follow the reference behavior outlined in [10]. All connections are unidirectional. The bandwidths of the backbone links are as follows: BB1 = 77.5 Mbps (50% of 155 Mbps), BB2 = 131.75 Mbps (85% of 155 Mbps), BB3 = 155 Mbps.

Fig. 2. Chain network configuration used in the simulations.


The endstations have an access distance corresponding to 400 m. Propagation delay is assumed to be 5.0 μs/km. The inter-switch distance is assumed to be 400 m, corresponding to a LAN scenario.

We used a value of N = 3 in our simulations. The sets of connections corresponding to "b", "d", "f" and "h" have an average utilisation of 1% (bursty), while those corresponding to "a", "c", "e" and "g" attempt to fully utilise the bandwidth whenever it is available (persistent). Ideally, the low-utilisation (1%) sources should have their requirements satisfied, and the persistent sources should gain access to the link bandwidth when the former turn idle. From the figure and the backbone bandwidth values, it is clear that the sets "a" and "b" are bandwidth-limited at BB1. The transmission rate of set "e" is constrained at BB2, while the set "g" (which traverses BB3) accesses bandwidth made available after allocation to sets "a", "b", "e" and "f". The switches are assumed to be non-blocking.

The parameters for the simulation configuration are as specified in [11]:

PCR = 155 Mbps, MCR = PCR/1000,

ICR = PCR/20,

AIR = PCR/2900, RDF = 256, Nrm = 33,

Link distances = 400 m (for LAN),

Host distances = 400 m.

For our investigations, we compared the various switching mechanisms' ability to support the desired objectives of an ABR service: the ability to utilise bandwidth whenever it becomes available, to allocate the bandwidth fairly among competing ABR connections, and to provide a very low cell loss probability. These requirements imply that the following performance parameters need to be compared: link utilisation, fairness measure and maximum switch buffer occupancy. We use the maximum buffer occupancy numbers to determine the memory required to provide a lossless service. Average buffer occupancy numbers are irrelevant for the delay-tolerant ABR service, i.e., delay is not a QoS parameter. The max-min criterion for fairness applied to the example configuration dictates the following long-term transmission rates for the connections. Connections belonging to sets "a" and "c" receive 7.081% of PCR. Let us run through this example computation.

Total useful available bandwidth at BB1 = (32/33)(50% of PCR) = 48.485% of PCR (since Nrm = 33, one out of every 33 cells is an RM control cell). Bandwidth consumed by the low-rate sources "b" and "d" = 6% of PCR (each set represents N = 3 connections).

Therefore, the bandwidth available to a single persistent connection equals

42.485/6 = 7.081% of PCR.

Using similar arguments, the fair rates for the other connections can be written down as follows: connections belonging to set "e" are allocated 10.313% of PCR, while those of set "g" gain access to 11.93% of PCR.
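The fair-share arithmetic for sets "a" and "c" can be checked with a few lines; the variable names below are purely illustrative.

```python
# Quick check of the max-min fair-share computation at bottleneck BB1
# (all values as percentages of PCR).

useful_bb1 = (32.0 / 33.0) * 50.0                 # RM-cell overhead removed: 48.485
low_rate_load = 2 * 3 * 1.0                       # sets "b" and "d": 2 sets x 3 VCs x 1%
per_vc_share = (useful_bb1 - low_rate_load) / 6   # 6 persistent VCs in sets "a" and "c"
print(round(useful_bb1, 3), round(per_vc_share, 3))   # -> 48.485 7.081
```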

Table 1 specifies the bandwidths allocated by the various switching mechanisms, expressed as a percentage of the PCR. The backbone utilisation of the link interconnecting SW1 and SW2 is also shown. All the low-rate sources gain access to their share of bandwidth, viz., 1%. A global queue threshold of 100 cells has been used in all the queue-thresholding mechanisms. A value of 10 cells is used for the local threshold, t. For the rate-thresholding mechanism (switch mechanism 2), the arrival of every 30 cells initiates a new rate measurement, with z set to 0.95.

Clearly, the simulation results reveal that the per-VC queueing mechanism, wherein each connection has access to a separate queue and the link controller services the queues in a round-robin fashion, is the most "fair" switch mechanism.

Table 1

Switch mechanism      BW of "a"   BW of "c"   BW of "e"   BW of "g"   Perct. util.
Simple EFCI             4.780       8.977      10.390      14.17        94.5
Switch mechanism 1      6.403       7.544      10.437      12.42        95.7
Per-VC queueing         6.936       6.971      10.302      11.791       95.4
Switch mechanism 2      6.866       6.855       9.622      11.387       94.3


An individual per-VC queue is considered to be in a congested state if an incoming cell on that VC causes either the local threshold or the global threshold to be exceeded. The congestion state is indicated to the source by marking the CI bit in the backward RM cells. This mechanism is investigated to serve as a reference service discipline. A more complex implementation would involve computing an explicit rate per connection, leading to improved fairness and performance.

As expected, the simple EFCI switch mechanism suffers from the "beat-down" problem and significantly deviates from the fair allocation. The beat-down problem occurs when a connection traversing multiple switches experiences congestion continually (owing to the unsynchronised congestion waves at each of these switches) and is driven down to MCR for unacceptably long periods of time. This phenomenon never occurs across a single switch and rarely across two switches; a connection traversing more than two switches is susceptible to it. Switch mechanism 1, the per-VC accounting scheme, provides a more equitable distribution of bandwidth, achieved with a more complex intelligent "bit-setting" switch mechanism. Switch mechanism 2, the rate-measuring mechanism combined with an explicit-rate setting operation, compares very favorably with the per-VC queueing scheme with regard to fairness, without the need for per-VC queues and the associated complex scheduling operation in the switches. However, rate-limiting results in a slight loss in throughput owing to its operation in the congestion-avoidance regime. A higher value of z increases the backbone utilisation and the bandwidths allocated to the connections at the cost of increased queue lengths, as shown in Fig. 3. All the throughput numbers reported in this paper refer to "goodput", the fraction of data cells received at the destination. Since Nrm = 33, the maximum achievable goodput = 32/33 = 96.7%. Table 2 summarises the maximum queue occupancy levels and fairness measures for the switch mechanisms. We use the fairness index suggested in [8].

Fairness index = (Σ yᵢ)² / (n · Σ yᵢ²),

where the yᵢ are the normalised allocations: yᵢ = received allocation / optimal allocation. From the table, it is clear that switch mechanism 2 offers a higher fairness index than mechanism 1. It should be pointed out that the fairness indices are only a quantitative means to compare the fairness properties of competing schemes and should not be used to comment on the fairness allocation of a single mechanism taken in isolation, i.e., the index is not a linear measure. For example, it should not be inferred from the table that the simple EFCI mechanism is 95.37% fair. SW1 is the most congested switch and has the maximum queue occupancy (in cells). With the exception of switch mechanism 2, all the other schemes use incremental rate changes (as opposed to explicit rate changes) and hence need more buffers to ensure low cell loss. Switch mechanism 2 has reduced buffer requirements because (1) with explicit rate operation the network conveys its state more precisely to the endpoints, which therefore adapt to state changes more effectively, and (2) it uses a rate-limiting congestion avoidance mechanism.

Fig. 3. Switch mechanism 2.

Fig. 3 illustrates the impact of the ratio, z, on the maximum queue occupancy and the backbone utilisation.

Table 2

Switch mechanism      Maximum queue occupancy   Fairness index
Simple EFCI                     409                  95.37
Switch mechanism 1              343                  98.83
Per-VC queueing                 315                  99.99
Switch mechanism 2              133                  99.98


Fig. 4. Switch mechanism 1.

Clearly, increasing z increases the utilisation in the network at the cost of increased buffer sizes. Values in the range 0.95 to 1.0 offer a reasonable compromise. The same tradeoff exists in queue-thresholding policies as well: increasing the value of the queue thresholds (T_High and T_Low) increases the network throughput and the maximum buffer occupancies.

Fig. 4 depicts the role of local thresholds in providing fair access to bandwidth in the per-VC accounting and per-VC queueing schemes. Increasing the value of the local threshold (for a fixed global threshold) permits the shorter-hop VCs to gain unfair access to bandwidth resources. From the figure, it is also clear that the maximum buffer occupancy increases as a result. Clearly, large values of local thresholds reduce the effectiveness of per-VC accounting in isolating connections. Perfect fairness can be achieved by disabling the global threshold parameter. While fairness improves substantially, the lack of a global threshold causes a significant increase in the maximum buffer size.

Fig. 5 illustrates the effect of the measurement interval (in cells) on the performance of the rate-limiting mechanism. Long measurement intervals make it difficult to react quickly to overload situations (owing to large time constants), with the result that the buffer builds up before control actions are initiated. On the contrary, if the measurement interval is too small, transient overloads affect the performance of the system adversely. A value of around 50 cells results in a reasonably high backbone utilisation as well as reduced buffer sizes. The simulations were run for a value of z = 0.98.

Fig. 5. Switch mechanism 2.

Performance in a WAN scenario. The inter-switch distance in Fig. 2 is increased to 1000 km to reflect WAN distances, and the simulations are repeated with the following changes: persistent sources are assumed, T_High = 1000 cells, the local threshold t = 80 cells, ICR = 183 cells/sec and AIR = PCR/320. Table 3 summarises the observations.

Max-min fair allocations for the connections can be computed and are as follows: connections belonging to sets "a", "b", "c" and "d" = 4.167% PCR, to sets "e" and "f" = 5.833% PCR, and to sets "g" and "h" = 6.667% PCR. Fairness is considerably worse due to the large feedback delays inherent in WAN distances.

Table 3

Switch mechanism      BW of "a"   BW of "c"   BW of "e"   BW of "g"   Perct. util.
Simple EFCI                 *           *           *           *          *
Switch mechanism 1      1.311       5.805       4.827       9.279       85.7
Per-VC queueing         2.333       5.798       4.608       9.358       97.6
Switch mechanism 2      3.063       4.765       5.162       7.649       94.2


Table 4

Switch mechanism      Maximum queue occupancy   Fairness index
Switch mechanism 1             4434                   84.4
Per-VC queueing                2618                   82.4
Switch mechanism 2              918                   91.2

Table 5

Switch mechanism      Maximum queue occupancy   Bandwidth of "b" (as a % of PCR)
Simple EFCI                    2167                  1.21 (84.26%)
Switch mechanism 1             1870                  1.79 (87.9%)
Per-VC queueing                2382                  1.72 (87.28%)
Switch mechanism 2             1111                 11.19 (90%)

The simple EFCI mechanism results in very large queues for the above parameter set, and results are not shown for that case. A simple FIFO EFCI mechanism is clearly unsuited for WAN applications when high values of T_High are used to achieve reasonable link utilisations. From the table, it is clear that reasonably fair allocations occur only for switch mechanism 2. All the remaining single-bit feedback approaches favor short-hop connections. Therefore, explicit rate feedback approaches are especially critical for good performance in WAN scenarios. Table 4 summarises the maximum queue occupancy levels and fairness measures for the different switch mechanisms.

Impact of VBR sources. This is investigated (in a LAN environment where the interswitch distance equals 400 m) in the next set of simulations. Once again, the network configuration in Fig. 2 is used for the simulation, with all connections belonging to set "a" (which traverses the entire network and hence impacts all the switches) configured as open-loop VBR sources with bursty ON-OFF traffic characteristics. The VBR load is tuned to be 10% of PCR and the burst length of the VBR sources is assumed to be 100 cells. Persistent traffic generators are used to model the ABR sources. VBR traffic is always served with higher priority at the switch's scheduler. The parameters used in the previous LAN scenario experiment are employed here as well. A value of z = 0.8 is used for switch mechanism 2. The results are summarised in Table 5.

The second column in Table 5 depicts the total bandwidth received by all the connections belonging to set "b" (BB1 link utilisations are shown in parentheses). These connections contend with the VBR sources for switch and transmission link resources and would receive 13.33% of PCR in a fair allocation. Only the explicit rate approach comes close to supporting this allocation, while all the single-bit feedback algorithms, including the per-VC queueing scheme, starve these connections. This could be attributed to the slow response time characteristics of single-bit feedback, as opposed to the constant monitoring of the rate in the explicit rate-based approach and the immediate feedback of this rate in the RM cell. This is especially relevant in the presence of bursty VBR sources, when the available bandwidth to ABR sources fluctuates in a dynamic fashion and rapid feedback is necessary for the ABR sources to gain access to this bandwidth.

6. Conclusions

The ATM Forum also considered a hop-by-hop credit-based flow control proposal which allocated credits to the transmitter on a hop-by-hop basis [12]. Each credit allowed the scheduler (at the transmitter side) to transmit one cell. The receiver allocated credits only when it had the requisite free buffer space per VC. This mandates the use of a fair scheduling mechanism at the switches. Moreover, enough buffer space per VC needs to be reserved to sustain the maximum possible throughput. This requirement excludes the use of static buffer allocation policies in WAN scenarios. Dynamic allocation policies, which vary buffer allocations based on bandwidth demands, require a more complex protocol for effective operation.

The rate-based proposal requires neither hop-by-hop control nor a per-VC scheduling algorithm to achieve fairness and throughput efficiency, as shown in the previous section. It allows a great degree of architectural flexibility, as demonstrated by the wide variety of switch mechanisms surveyed in the paper. The simulation results have pointed out the various tradeoffs present in the switch mechanisms, throughput versus buffer occupancy being one among them. Switch vendors can make intelligent choices based on their performance requirements and cost budgets. Designing fair, efficient mechanisms using the rate-based framework, without incurring a lot of complexity in the scheduling and buffer management functions, is possible, as shown in the paper.



References

[1] DSS1 - Core aspects of frame protocol for the use with frame relay bearer service, American National Standards Institute, Telecommunications Committee (ANSI T1S1), 1990.

[2] K.-Y. Siu and H.-Y. Tzeng, Adaptive proportional rate control for ABR service in ATM networks, UC Irvine Tech. Rept. No. 1102, Electrical and Computer Engineering, 1994.

[3] A. Charny, An algorithm for rate allocation in a packet-switching network with feedback, Master's Thesis, MIT Laboratory for Computer Science, 1994.

[4] R. Jain, S. Kalyanaraman and R. Viswanathan, The OSU scheme for congestion avoidance using explicit rate indication, OSU Tech. Rept., 1994.

[5] K. Sriram and W. Whitt, Characterizing superposition arrival processes in packet multiplexers for voice and data, IEEE J. Selected Areas Commun. 4 (6) (1986) 833-846.

[6] R. Krishnan, A comparison of per-VC queueing and explicit rate-setting mechanisms, in: ATM Forum Technical Committee Meeting, Ottawa, 1994.

[7] F. Bonomi, K.W. Fendick and K. Meier-Hellstern, A comparative study of EPRCA compatible schemes for the support of fair ABR service through intelligent congestion indication, in: ATM Forum Technical Committee Meeting, Ottawa, 1994.

[8] R. Jain, Congestion control and traffic management in ATM networks: Recent advances and a survey, Computer Networks and ISDN Systems 28 (13) (1996) 1723-1738.

[9] S. Floyd and A. Romanow, Dynamics of TCP traffic over ATM networks, in: Proc. ACM SIGCOMM '94 (1994) 79-88.

[10] ATM user network interface specification, Version 4.0, Baseline text for traffic management group, The ATM Forum, 1995.

[11] F. Bonomi and K.W. Fendick, The rate-based flow control framework for the available bit rate ATM service, IEEE Network 9 (2) (1995) 25-39.

[12] H.T. Kung and R. Morris, Credit-based flow control for ATM networks, IEEE Network 9 (2) (1995) 40-48.

[13] K.K. Ramakrishnan and P. Newman, Integration of rate and credit schemes for ATM flow control, IEEE Network 9 (2) (1995) 49-56.

[14] K.K. Ramakrishnan and R. Jain, A binary feedback scheme for congestion avoidance in computer networks, ACM Trans. Comput. Systems 8 (2) (1990) 158-181.

Ram Krishnan received the Bachelor's degree in electrical engineering from the Indian Institute of Technology, Madras, India, in 1988, and the M.S. and Ph.D. degrees from the University of Southern California, Los Angeles, in May 1994, both in electrical engineering. He held a visiting summer position at AT&T Bell Laboratories in 1991. He has been working in the Networking Research Department at Motorola Information Systems Group, Mansfield, MA, since May 1994. His current research interests are in ATM switching, traffic management, and cable data network architectures and MAC protocols. He is presently involved in next-generation cable data products development. He represents Motorola in the ATM Forum traffic management group and the IEEE 802.14 data-over-cable standards body.