TRANSCRIPT
Wide Area Network Performance Analysis Methodology
Wenji Wu, Phil DeMar, Mark Bowden (Fermilab)
ESCC/Internet2 Joint Techs Workshop 2007
Topics
- Problems
- End-to-End Network Performance Analysis
  - TCP transfer throughput
  - TCP throughput is network-end-system limited
  - TCP throughput is network-limited
- Network Performance Analysis Methodology
  - Performance Analysis Network Architecture
  - Performance Analysis Steps
1. Problems
What, where, and how are the performance bottlenecks of network applications in wide area networks?
How can network/application performance be diagnosed quickly and efficiently?
[Figure: Network Application Performance Factors. An end-to-end path runs from the sending end system (Applications, CPU, MEM, Disks, Operating System, NIC) through the LAN, routers/switches (R/S), and WAN routers and links to the receiving end system. Key factors: on the network path, network delay, bandwidth, and packet drop rate; on the end systems, CPU speed, memory size, system load, disk I/O speed, operating system, R/W buffer size, disk cache size, and NIC speed.]
2.1 TCP transfer throughput
An end-to-end TCP connection can be separated into three parts: the sender, the network, and the receiver.
The TCP adaptive windowing scheme consists of a send window (WS), a congestion window (CWND), and a receive window (WR):
- Congestion control: congestion window
- Flow control: receive window
The overall end-to-end TCP throughput is determined by the sender, the network, and the receiver, which are modeled at the sender as WS, CWND, and WR respectively. Given the round-trip time RTT, the instantaneous TCP throughput at time t is:

Throughput(t) = min{WS(t), CWND(t), WR(t)} / RTT(t)
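The min-of-windows relation above can be sketched as a tiny Python model (an illustrative sketch, not from the slides; variable names follow the slide's notation):

```python
def tcp_throughput(w_s, cwnd, w_r, rtt):
    """Instantaneous TCP throughput model: the smallest of the three
    windows (bytes) divided by the round-trip time (seconds)."""
    return min(w_s, cwnd, w_r) / rtt

# Example: a 64 KB receive window on a 100 ms RTT path caps throughput
# at 64 KB / 0.1 s regardless of the other two windows.
bytes_per_sec = tcp_throughput(w_s=256 * 1024, cwnd=1_000_000,
                               w_r=64 * 1024, rtt=0.1)
```

Here the receive window is the binding constraint, which is the "network-end-system-limited" case discussed next.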
2.1 TCP transfer throughput (cont)
If any of the three windows is small, especially when such conditions last for a relatively long period of time, overall TCP throughput is seriously degraded.

The TCP throughput is network-end-system-limited over a duration T if:

∫₀ᵀ WS(t) dt ≪ ∫₀ᵀ CWND(t) dt, or ∫₀ᵀ WR(t) dt ≪ ∫₀ᵀ CWND(t) dt

The TCP throughput is network-limited over a duration T if:

∫₀ᵀ CWND(t) dt ≪ ∫₀ᵀ WS(t) dt, and ∫₀ᵀ CWND(t) dt ≪ ∫₀ᵀ WR(t) dt
2.2 TCP throughput is network-end-system limited
User/kernel space split:
- Network applications run in user space, in process context
- Protocol processing runs in the kernel, in interrupt context
Interrupt-driven operating system:
- Hardware interrupt -> software interrupt -> process
[Figure: (a) TCP Sender, (b) TCP Receiver. On the sender, the network application in user space writes via the socket into the kernel send buffer (size SB), which TCP drains at rate λS. On the receiver, TCP fills the kernel receive buffer (size RB) at rate λR, and the application drains it with socket reads.]
2.2 TCP throughput is network-end-system limited (cont)
Factors leading to a relatively small WS(t) or WR(t):
- Poorly designed network applications
- Performance-limited hardware: CPU, disk I/O subsystem, system buses, memory
- Heavily loaded network end systems
  - System interrupt loads too high (mitigations: interrupt coalescing, jumbo frames)
  - System process loads too high
- Poorly configured TCP protocol parameters
  - TCP send/receive buffer size
  - TCP window scaling in high-speed, long-distance networks
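The socket-buffer factor can be probed directly from an application. Below is a minimal Python sketch (illustrative, not from the slides): it requests larger TCP send/receive buffers and reads back what the kernel actually granted, since the kernel may clamp the request to its configured maxima (on Linux, net.core.wmem_max / net.core.rmem_max).

```python
import socket

# Hypothetical target for a high bandwidth-delay-product path.
BUF_SIZE = 4 * 1024 * 1024  # 4 MB

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BUF_SIZE)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BUF_SIZE)

# Always read the values back: the kernel may have clamped (or, on
# Linux, doubled) the requested size.
actual_snd = sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
actual_rcv = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
sock.close()
```

If the granted size is far below the path's bandwidth-delay product, WS or WR becomes the binding window in the throughput formula of Section 2.1.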
2.3 TCP throughput is network-limited
Two facts:
- The TCP sender tries to estimate the available bandwidth in the network, and represents it as CWND via congestion control algorithms.
- TCP assumes packet drops are caused by network congestion; any packet drop leads to a reduction in CWND.
Two determining factors for CWND:
- Congestion control algorithm
- Network conditions (packet drops)
2.3 TCP throughput is network-limited (cont)
TCP congestion control algorithms are evolving.
Standard TCP congestion control (Reno/NewReno):
- Slow start, congestion avoidance, retransmission timeouts, fast retransmit and fast recovery
- AIMD scheme for congestion avoidance
- Performs well in traditional networks
- Causes underutilization in high-speed, long-distance networks
High-speed TCP variants: FAST TCP, HTCP, HSTCP, BIC, and CUBIC:
- Modify the AIMD congestion avoidance scheme of standard TCP to be more aggressive
- Keep the same fast retransmit and fast recovery algorithms
- Solve the underutilization problem in high-speed, long-distance networks
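The AIMD behavior that underlies the underutilization problem can be sketched in a few lines of Python (an illustrative model of Reno-style congestion avoidance, not from the slides):

```python
def aimd(cwnd, mss, loss):
    """One congestion-avoidance step of Reno-style AIMD:
    additive increase of one MSS per RTT, multiplicative decrease
    (halving) on a loss indication."""
    if loss:
        return max(cwnd / 2, 2 * mss)  # never shrink below 2 segments
    return cwnd + mss

# Grow for 10 loss-free RTTs, then halve once on a single loss.
cwnd, mss = 10 * 1460, 1460
for _ in range(10):
    cwnd = aimd(cwnd, mss, loss=False)
cwnd_after_loss = aimd(cwnd, mss, loss=True)
```

On a high-speed, long-distance path the one-MSS-per-RTT increase takes thousands of RTTs to refill the pipe after each halving, which is exactly why the high-speed variants make the increase more aggressive.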
2.3 TCP throughput is network-limited (cont)
With high-speed TCP variants, it is mainly packet drops that lead to a relatively small CWND.
The following conditions can lead to packet drops:
- Network congestion
- Network infrastructure failures
- Network end systems
  - Packet drops in Layer 2 queues due to limited queue size
  - Packets dropped in the ring buffer due to system memory pressure
- Routing changes
  - When a route changes, the interaction of routing policies, iBGP, and the MRAI timer may lead to transient disconnectivity
- Packet reordering
  - Packet reordering causes duplicate ACKs at the sender. RFC 2581 suggests that a TCP sender consider three or more dupACKs an indication of packet loss, so with severe reordering TCP can misinterpret reordering as loss.
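The dupACK misinterpretation can be illustrated with a small Python sketch (not from the slides; a simplified model of the fast-retransmit trigger):

```python
def loss_indicated(acks, dupthresh=3):
    """Return True if any ACK number is repeated dupthresh more times
    after its first appearance (i.e. >= dupthresh duplicate ACKs),
    the signal fast retransmit treats as packet loss."""
    last, dup = None, 0
    for a in acks:
        if a == last:
            dup += 1
            if dup >= dupthresh:
                return True
        else:
            last, dup = a, 0
    return False

# One segment delivered late (reordered, not lost) still produces
# three dupACKs, so the sender infers a loss and halves CWND.
reordered = loss_indicated([100, 200, 200, 200, 200, 300])
```

This is why severe reordering on a load-balanced path degrades throughput even when no packets are actually dropped.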
2.3 TCP throughput is network-limited (cont)
The congestion window is manipulated in units of the Maximum Segment Size (MSS); a larger MSS yields higher TCP throughput.
A larger MSS is also more efficient for both the network and the network end systems.
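The MSS effect can be quantified with the well-known Mathis et al. steady-state model for Reno-style TCP (an addition for illustration, not from the slides): throughput ≈ (MSS / RTT) · (C / √p), with C ≈ 1.22 under periodic loss.

```python
import math

def mathis_throughput(mss, rtt, p):
    """Steady-state Reno throughput estimate (Mathis et al., 1997):
    rate ≈ (MSS / RTT) * (C / sqrt(p)), C ≈ 1.22 for periodic loss.
    mss in bytes, rtt in seconds, p = packet drop probability.
    Returns bytes per second."""
    return (mss / rtt) * (1.22 / math.sqrt(p))

# All else equal, throughput scales linearly with MSS, so a
# jumbo-frame MSS (~8960 bytes) predicts ~6x the rate of a
# standard Ethernet MSS (~1460 bytes).
r_std = mathis_throughput(1460, 0.05, 1e-4)
r_jumbo = mathis_throughput(8960, 0.05, 1e-4)
```

The linear dependence on MSS in this model is one way to see why jumbo frames help on lossy, long-RTT paths.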
Network Performance Analysis Methodology
End-to-end network/application performance problems are viewed as:
- Application-related problems (beyond the scope of any standardized problem analysis)
- Network end system problems
- Network path problems
Network performance analysis methodology:
- Analyze and appropriately tune the network end systems
- Analyze the network path, with remediation of detected problems where feasible
- If network end system and network path analysis do not uncover significant problems or concerns, conduct packet trace analysis; any performance bottlenecks will manifest themselves in the packet traces
3.1 Network Performance Analysis Network Architecture
[Figure: Diagnosis architecture. Each site's LAN hosts a network end system (NES), a network end system diagnosis server (NESDS), and a network path diagnosis server (NPDS); a packet trace diagnosis server (PTDS) attaches to the site's border router (BR). The end-to-end path runs from one site's NES across its LAN and border router, over the WAN, to the remote site's NES.]
3.1 Network Performance Analysis Network Architecture (cont)

Network end system diagnosis server:
- We use the Network Diagnostic Tool (NDT)
- Collects various TCP parameters on the network end systems and identifies configuration problems
- Identifies local network infrastructure problems such as faulty Ethernet connections, malfunctioning NICs, and Ethernet duplex mismatch
Network path diagnosis server:
- We use OWAMP applications to collect and diagnose one-way network path statistics:
  - The forward and reverse paths might not be symmetric
  - The forward and reverse path traffic loads are likely not symmetric
  - The forward and reverse paths might have different QoS schemes
- Other tools such as ping, traceroute, pathneck, iperf, and perfSONAR could also be used
3.1 Network Performance Analysis Network Architecture (cont)
Packet trace diagnosis server:
- Directly connected to the border router; can port-mirror any port on the border router
- tcpdump, used to record packet traces
- tcptrace, used to analyze the recorded packet traces
- xplot, used to examine the recorded traces visually
3.2 Network/Application Performance Analysis Steps

Step 1: Define the problem space
Step 2: Collect network end system information & network path characteristics
Step 3: Network end system diagnosis
Step 4: Network path performance analysis
- Do routes change frequently?
- Network congestion: is the delay variance large? Where is the bottleneck?
- Infrastructure failures: examine the counters one by one
- Packet reordering: load balancing? parallel processing?
Step 5: Evaluate packet trace patterns
Collection of network end system information

Table 1: Network End System Information

Hardware:
- CPU: CPU speed; number of CPUs
- Memory: memory size; memory latency
- Bus: maximum bus bandwidth
- Disk: maximum disk I/O bandwidth
- NIC: maximum bandwidth; interrupt coalescing supported? TCP offloading supported? jumbo frames supported?

Software:
- Operating system: type and version; 32-bit/64-bit? For Linux, identify the kernel version
- System loads: network application running context; maximum system background loads
- Network applications: traffic generation pattern; storage system involved?
- TCP parameters: send/receive (socket) buffer size; timestamp option enabled? window scaling option enabled? window scaling parameter; TCP reordering threshold; congestion control algorithm; total TCP memory size; maximum segment size; SACK enabled? D-SACK enabled? ECN enabled?
- NIC driver parameters: device driver send/receive queue size; TCP offloading enabled? interrupt coalescing enabled? jumbo frames enabled?
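On Linux, several of the TCP parameters in Table 1 can be collected programmatically from /proc/sys, the same values `sysctl` reports. A minimal sketch (Linux-specific, illustrative only):

```python
from pathlib import Path

# A few of the Table 1 TCP parameters and their Linux sysctl paths.
PARAMS = {
    "congestion control algorithm": "net/ipv4/tcp_congestion_control",
    "window scaling enabled":       "net/ipv4/tcp_window_scaling",
    "SACK enabled":                 "net/ipv4/tcp_sack",
    "timestamps enabled":           "net/ipv4/tcp_timestamps",
    "receive buffer min/def/max":   "net/ipv4/tcp_rmem",
}

def read_tcp_params(root="/proc/sys"):
    """Return {description: value} for whichever parameters exist
    on this system; non-Linux hosts simply get an empty dict."""
    out = {}
    for desc, rel in PARAMS.items():
        path = Path(root) / rel
        if path.exists():
            out[desc] = path.read_text().strip()
    return out

params = read_tcp_params()
```

Collecting these values on both end systems before Step 3 makes configuration mismatches (e.g. window scaling disabled on one side) immediately visible.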
Collection of network path characteristics

Network path characteristics:
- Round-trip time (ping)
- Sequence of routers along the path (traceroute)
- One-way delay and delay variance (owamp)
- One-way packet drop rate (owamp)
- Packet reordering (owamp)
- Currently achievable throughput (iperf)
- Bandwidth bottleneck location (pathneck)
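Once delay samples are collected (e.g. from owamp), the congestion check in Step 4 reduces to simple statistics. A minimal Python sketch with hypothetical sample values (illustrative, not from the slides):

```python
import statistics

def delay_summary(delays_ms):
    """Summarize one-way delay samples in milliseconds. A large delay
    variance relative to the minimum suggests queueing, i.e. network
    congestion along the path."""
    return {
        "min": min(delays_ms),
        "mean": statistics.mean(delays_ms),
        "variance": statistics.pvariance(delays_ms),
    }

# Hypothetical samples: a stable path vs. one with queueing spikes.
stable = delay_summary([20.1, 20.3, 20.2, 20.4])
congested = delay_summary([20.1, 45.0, 22.0, 80.0])
```

The minimum delay approximates the propagation component; the excess over the minimum, and its variance, approximate the queueing component.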
Conclusion
- Fermilab is working on developing a performance analysis methodology
- The objective is to put structure into troubleshooting network performance problems
- The project is in early stages of development
- We welcome collaboration & feedback
  - Biweekly Wide-Area-Working-Group (WAWG) meetings on alternate Friday mornings
  - Send email to [email protected]