Chin-Ying Wang, Advisor: Sonia Fahmy
Department of Computer Sciences, Purdue University
http://www.cs.purdue.edu/homes/fahmy/
Multicast Congestion Control in the Internet: Fairness and Scalability
Sponsored by Tektronix and the Schlumberger Foundation technical merit award
Overview
What is Multicasting?, PGM, PGMCC, Feedback Aggregation, Fairness, Conclusions and Ongoing Work
What is Multicasting?
Multicasting allows information exchange among multiple senders and multiple receivers.
Popular applications include: audio/video conferencing, distributed games, distance learning, searching, server and database synchronization, and many more.
[Figure: a multicast group; marked nodes are group members]
How does Multicasting Work?
A single datagram is transmitted from the sending host.
This datagram is replicated at network routers and forwarded to interested receivers via multiple outgoing links.
With multicast connections, traffic and management overhead do not grow with the number of participants.
If reliability is required, receivers provide feedback to notify the sender whether the data was received.
[Figure: sender S transmits a datagram through routers to receivers R; receivers return ACK/NAK feedback along the reverse path]
The Feedback Implosion Problem
[Figure: ACK/NAK feedback from many receivers converges through the routers at the sender S, causing feedback implosion]
The Congestion Control Problem
How should the sender determine the sending rate?
[Figure: sender S reaches receivers over paths of 300, 500, 750, and 1000 Kb/s; at what rate should S send?]
Our Goals
To study the impact of feedback aggregation on a promising protocol, the PGMCC multicast congestion control protocol.
To evaluate PGMCC performance when competing with bursty traffic in a realistic Internet-like scenario.
Ultimately, to design more scalable and more fair multicast congestion control techniques.
Multicast Congestion Control
Single-rate schemes: the sender adapts to the slowest receiver.
TCP-like service: one window/rate for all the receivers.
Limitations:
• Underutilization on some links
• Selects the slowest receiver in the group (“crying baby syndrome”)
[Figure: the single-rate sender transmits at the slowest receiver's rate, 300 Kb/s, underutilizing the 500, 750, and 1000 Kb/s paths to receivers R1, R2, R3]
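The "crying baby" limitation above can be sketched in a couple of lines. This is a minimal illustration, not protocol code: a single-rate scheme cannot send faster than its slowest receiver.

```python
# Minimal sketch (illustrative, not PGMCC code): a single-rate multicast
# scheme must send no faster than its slowest receiver.

def single_rate(receiver_rates_kbps):
    """Return the sending rate a single-rate scheme would pick."""
    return min(receiver_rates_kbps)

# Rates from the example topology: one slow "crying baby" receiver
# drags the whole group down to 300 Kb/s.
rates = [500, 1000, 300, 750]
print(single_rate(rates))  # 300
```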
The PGM Multicast Protocol
PGM: Pragmatic General Multicast, a single-sender, multiple-receiver multicast protocol.
Reliability: NAK-based retransmission requests.
Scalability: feedback aggregation and selective repair forwarding; replicated NAKs from the same subtree are suppressed in each router.
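The NAK suppression idea can be sketched as follows. This is a hedged illustration of the mechanism, not the PGM implementation; class and method names are invented for clarity.

```python
# Sketch of PGM-style NAK suppression at a router: forward the first NAK
# seen for a lost sequence number, suppress duplicates from the same
# subtree until the repair arrives. Names are illustrative.

class NakSuppressor:
    def __init__(self):
        self.pending = set()  # sequence numbers already NAKed upstream

    def on_nak(self, seqno):
        """Return True if this NAK should be forwarded upstream."""
        if seqno in self.pending:
            return False          # duplicate from the subtree: suppress
        self.pending.add(seqno)
        return True               # first NAK for this loss: forward

    def on_repair(self, seqno):
        """RDATA arrived; clear suppression state for this sequence."""
        self.pending.discard(seqno)

r = NakSuppressor()
print(r.on_nak(42))   # True  - forwarded
print(r.on_nak(42))   # False - suppressed
r.on_repair(42)
print(r.on_nak(42))   # True  - a new loss after the repair
```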
PGM NAK/NCF Dialog
[Figure: a PGM receiver sends a NAK toward the sender; each PGM router on the path answers with an NCF (NAK confirmation) on its subnet, suppressing duplicate NAKs; the sender retransmits the lost ODATA as RDATA]
See [Miller1999] and the PGM RFC for more details.
PGMCC [Rizzo2000]
Uses a TCP throughput approximation to select the group representative, called the “acker”.
The acker is updated to receiver I when T(I) < c × T(J), where J is the current acker.
[Figure: receivers RI, RJ, RK on paths of 300 to 1000 Kb/s; a newly joined receiver I whose throughput T(I) < c × the current acker's throughput T(J) becomes the new acker]
PGMCC (cont’d)
Attempts to be TCP-friendly, i.e., on average no more aggressive than TCP.
ACKs are exchanged between the sender and the acker, with TCP-like increase and decrease.
The throughput of each receiver is computed as a function of fields in NAK packets: round-trip time (RTT) and packet loss.
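The acker selection rule above can be sketched numerically. PGMCC ranks receivers with a simplified TCP throughput model of the form T ∝ 1/(RTT·√p), computed from the RTT and loss fields carried in NAKs; the value of c below is illustrative, not the protocol constant.

```python
import math

# Sketch of PGMCC-style acker selection: rank receivers by a simplified
# TCP throughput model, T ~ 1 / (RTT * sqrt(p)); the constant c (< 1)
# adds hysteresis so the acker does not flap. Values are illustrative.

def throughput(rtt_s, loss_rate):
    return 1.0 / (rtt_s * math.sqrt(loss_rate))

def should_switch(candidate, acker, c=0.75):
    """Switch to receiver I when T(I) < c * T(J), J the current acker."""
    return throughput(*candidate) < c * throughput(*acker)

acker = (0.10, 0.01)   # current acker J: 100 ms RTT, 1% loss
slow = (0.10, 0.25)    # receiver I behind a 25%-loss path
print(should_switch(slow, acker))   # True: I becomes the new acker
```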
Feedback Aggregation: Experimental Topology
The ns-2 simulator is used. All links are 10 Mb/s with 50 ms delay.
Goal: to determine if there are unnecessary or missing acker switches due to feedback aggregation.
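The suspected failure mode can be sketched as follows. This is an illustrative toy, not the simulation code: when a router suppresses duplicate NAKs, the sender sees only one receiver's NAK per loss and may under-count the loss of the others, which is what can drive incorrect acker choices.

```python
from collections import Counter

# Toy sketch of how feedback aggregation can mislead acker selection:
# an aggregating router forwards only the first NAK per lost packet,
# so the sender never hears from equally lossy receivers.

def sender_view(naks_per_loss):
    """Each inner list holds the receivers that NAKed one lost packet;
    only the first NAK survives aggregation."""
    seen = Counter()
    for reporters in naks_per_loss:
        seen[reporters[0]] += 1      # duplicates suppressed by the router
    return seen

# Both PR2 and PR4 lose every packet, but PR2's NAK always wins the race:
losses = [["PR2", "PR4"]] * 5
print(sender_view(losses))  # Counter({'PR2': 5}) - PR4 looks loss-free
```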
[Figure: PGM sender PS connected through two routers to receivers PR1–PR4; two of the receiver paths have 25% and 20% loss]
Feedback Aggregation Experimental Result
PGMCC Fairness
Simulate PGMCC in a realistic scenario similar to the current Internet.
The objective is to determine whether PGMCC remains TCP-friendly in this scenario.
Different bottleneck link bandwidths are used in the simulation: highly congested network, medium congestion, and non-congested network.
General Fairness (GFC-2) Experimental Topology
[Figure: GFC-2 topology with routers router0–router6 joined by inter-router links Link0–Link5; 22 TCP sources S0–S21 and destinations D0–D21 are attached around the routers, along with PGM sender PS and receivers PR1–PR5]
22 source nodes (S*) and 22 destination nodes (D*).
A NewReno TCP connection runs between each pair of source and destination nodes.
One UDP flow sending Pareto traffic runs across Link4 with a 500 ms on/off interval.
All simulations were run for 900 seconds. The traced TCP connection runs from S4 to D4.
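The bursty cross-traffic above can be sketched as follows. This is an illustrative generator of Pareto on/off periods, not the ns-2 traffic source; the shape parameter and scaling are assumptions.

```python
import random

# Sketch of Pareto-distributed on/off periods for a bursty UDP source
# (mean duration ~500 ms). Shape and scaling are illustrative, not the
# ns-2 defaults.

def pareto_onoff_periods(n, mean_s=0.5, shape=1.5, seed=1):
    """Generate n alternating on/off durations in seconds."""
    rng = random.Random(seed)
    # Pareto mean = shape * scale / (shape - 1), so solve for scale:
    scale = mean_s * (shape - 1) / shape
    return [scale * rng.paretovariate(shape) for _ in range(n)]

periods = pareto_onoff_periods(6)
print(all(p > 0 for p in periods))  # True: all durations are positive
```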
Topology (cont’d)
Link bandwidth between each node and its router is 150 kbps with 1 ms delay.
Link bandwidths and delays between routers are:

                   Link0  Link1  Link2  Link3  Link4  Link5
Bandwidth (kbps)      50    100     50    150    150     50
Delay (ms)            20     10      5      5      5     10
Highly Congested Network
PGM has a higher throughput in the first 50 seconds
Afterwards, PGM has very low throughput due to time-outs
All simulation parameters are unchanged except the link bandwidth between routers, which is increased to 2.5× and 3.5× the bandwidth in the “highly congested” network.
The PGM flow outperforms TCP during initial acker switching.
TCP has higher throughput when the timeout interval at the PGM sender does not adapt to the increase in the acker's RTT.
Medium Congestion
Medium Congestion (cont’d)
Bandwidth = 2.5 × “Congested”
Medium Congestion (cont’d)
Bandwidth = 3.5 × “Congested”
All simulation parameters are unchanged except the link bandwidth between routers, which is increased to 10× and 80× the bandwidth in the highly congested network.
The PGM flow outperforms the TCP flow as the bandwidth increases.
Frequent acker switches cause the PGMCC sender's window to grow.
The RTT of the PGMCC acker is shorter than the TCP flow's RTT at many instances.
Non-congested Network
Non-congested Network (cont’d)
Bandwidth = 10 × “Congested”
Non-congested Network (cont’d)
Bandwidth = 80 × “Congested”
Main Results
Feedback aggregation results in incorrect acker selection with PGMCC; the problem is difficult to remedy without router assistance.
PGMCC fairness in realistic scenarios: initial acker switches cause the PGM flow to outperform the TCP flow due to the steep increase of the PGM sending window.
A TCP-like retransmission timeout is needed to avoid the PGM performance degradation caused by using a fixed timeout interval.
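The adaptive timeout the results call for can be sketched in the style of TCP's retransmission timer (RFC 6298): the RTO tracks a smoothed RTT plus a variance term instead of staying fixed as the acker's RTT grows. The gains below are TCP's standard values; their use in PGMCC is an assumption of this sketch.

```python
# Sketch of a TCP-like adaptive retransmission timeout (RFC 6298 style),
# as opposed to the fixed timeout interval that hurt PGM here. Applying
# this to the PGMCC sender is an illustrative suggestion, not its code.

def make_rto():
    srtt = rttvar = None
    def update(rtt):
        nonlocal srtt, rttvar
        if srtt is None:
            srtt, rttvar = rtt, rtt / 2.0        # first RTT sample
        else:
            rttvar = 0.75 * rttvar + 0.25 * abs(srtt - rtt)
            srtt = 0.875 * srtt + 0.125 * rtt
        return srtt + 4.0 * rttvar               # RTO; floor/ceiling omitted
    return update

rto = make_rto()
print(rto(0.1))   # first sample: RTO = srtt + 4 * rttvar, about 0.3 s
```

As the acker's RTT grows, successive calls raise the RTO, avoiding the spurious timeouts seen with a fixed interval.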
Ongoing Work
Conduct Internet experiments with various reliability semantics (e.g., unreliable and semi-reliable transmission) and examine their effect on PGMCC, especially on acker selection with insufficient NAKs
Exploit Internet tomography in multicast and geo-cast application-layer overlays [NOSSDAV2002, ICNP2002]