  • Network end systems may not produce (at the sender) or consume (at the receiver) data efficiently in user space: the sender's TCP has little new data to send, and the receiver's TCP receive buffer fills up. New technologies such as multi-core CPUs and PCI-Express help.
  • Accurate bandwidth estimation is difficult: the network provides very little status information to network end systems, and network conditions change instantaneously.
  • When networks are congested, router/switch buffer overflows lead to packet drops. Infrastructure failures such as dirty fibers, malfunctioning line cards, and flawed Ethernet connections also drop packets.

Presentation Transcript

  • Wide Area Network Performance Analysis Methodology Wenji Wu, Phil DeMar, Mark Bowden Fermilab ESCC/Internet2 Joint Techs Workshop 2007 [email_address] , [email_address] , [email_address]
  • Topics
    • Problems
    • End-to-End Network Performance Analysis
      • TCP transfer throughput
      • TCP throughput is network-end-system limited
      • TCP throughput is network-limited
    • Network Performance Analysis Methodology
      • Performance Analysis Network Architecture
      • Performance Analysis Steps
  • 1. Problems
    • What and where are the performance bottlenecks of network applications in wide area networks, and how do they arise?
    • How can network/application performance problems be diagnosed quickly and efficiently?
  • 2. End-to-End Network/Application Performance Analysis
  • 2.1 TCP transfer throughput
    • An end-to-end TCP connection can be separated into the sender, the network, and the receiver.
    • The TCP adaptive windowing scheme consists of a send window (Ws), a congestion window (CWND), and a receive window (WR).
      • Congestion control: congestion window
      • Flow control: receive window
    • The overall end-to-end TCP throughput is decided by the sender, the network, and the receiver, which are modeled at the sender as Ws, CWND, and WR. Given the round-trip time RTT, the instantaneous TCP throughput at time t is:
      • Throughput(t) = min{Ws(t), CWND(t), WR(t)} / RTT(t)
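
A minimal sketch of the min-window model above, in Python (the function and variable names are mine, not from the slides):

```python
def tcp_throughput(w_s: float, cwnd: float, w_r: float, rtt: float) -> float:
    """Instantaneous TCP throughput (bytes/sec) per the min-window model:
    the smallest of the three windows is drained once per round-trip time."""
    return min(w_s, cwnd, w_r) / rtt

# Example: 1 MB send/receive windows, 256 KB congestion window, 100 ms RTT.
# CWND is the limiting window: ~2.6 MB/s (~21 Mb/s).
print(tcp_throughput(1_048_576, 262_144, 1_048_576, 0.1))
```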
  • 2.1 TCP transfer throughput (cont)
    • If any of the three windows is small, especially when such conditions last for a relatively long period of time, the overall TCP throughput would be seriously degraded.
      • The TCP throughput is network-end-system-limited for the duration T if, throughout T: min{Ws(t), WR(t)} ≤ CWND(t)
      • The TCP throughput is network-limited for the duration T if, throughout T: CWND(t) < min{Ws(t), WR(t)}
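
A sketch of the two conditions, assuming per-RTT samples of the three windows over the duration T (the sampling scheme is an assumption; the slides give only the conditions):

```python
def limiting_factor(samples):
    """Classify a duration T from (w_s, cwnd, w_r) window samples:
    end-system-limited if the send or receive window is the minimum
    throughout T, network-limited if CWND is the minimum throughout T."""
    if all(min(w_s, w_r) <= cwnd for (w_s, cwnd, w_r) in samples):
        return "network-end-system-limited"
    if all(cwnd < min(w_s, w_r) for (w_s, cwnd, w_r) in samples):
        return "network-limited"
    return "mixed"
```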
  • 2.2 TCP throughput is network-end-system limited
    • User/Kernel space split
      • Network application in user space, in process context
      • Protocol processing in kernel, in the interrupt context
    • Interrupt-driven operating system
      • Hardware interrupt -> Software interrupt -> process
  • 2.2 TCP throughput is network-end-system limited (cont)
    • Factors leading to relatively small windows Ws(t) & WR(t)
      • Poorly-designed network application
      • Performance-limited hardware
        • CPU, disk I/O subsystem, system buses, memory
      • Heavily-loaded network end systems
        • System interrupt loads are too high
          • Interrupt coalescing, Jumbo Frame
        • System process loads are too high
      • Poorly configured TCP protocol parameters
        • TCP send/receive buffer size (see the buffer-sizing sketch after this list)
        • TCP window scaling in high speed, long distance networks.
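
For the buffer-size item above, the usual rule of thumb is that the TCP send/receive buffers must cover the path's bandwidth-delay product (BDP). A minimal sketch, assuming a Linux end system (the /proc path is standard; the link numbers are illustrative):

```python
def bdp_bytes(bandwidth_bps: float, rtt_s: float) -> int:
    """Bandwidth-delay product: the buffer needed to keep the path full."""
    return int(bandwidth_bps * rtt_s / 8)

# Example: a 1 Gb/s path with 100 ms RTT needs ~12.5 MB of buffer.
needed = bdp_bytes(1e9, 0.100)

# Compare against the kernel's maximum TCP receive buffer ("min default max").
with open("/proc/sys/net/ipv4/tcp_rmem") as f:
    rmem_max = int(f.read().split()[2])

if rmem_max < needed:
    print(f"tcp_rmem max {rmem_max} < BDP {needed}: receive-window limited")
```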
  • 2.3 TCP throughput is network-limited
    • Two Facts:
      • The TCP sender tries to estimate the available bandwidth in the network, representing it as CWND via its congestion control algorithm.
      • TCP assumes packet drops are caused by network congestion; any packet drop leads to a reduction in CWND.
    • Two determining factors for CWND
      • Congestion control algorithm
      • Network Conditions (Packet drops)
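
The interplay of the two factors can be sketched with the standard AIMD update rule (a simplified per-RTT model that ignores slow start and timeouts):

```python
def aimd_step(cwnd: float, mss: int, loss: bool) -> float:
    """One RTT of Reno-style congestion avoidance: additive increase of
    one MSS per RTT, multiplicative decrease (halving) on a packet drop."""
    return cwnd / 2 if loss else cwnd + mss
```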
  • 2.3 TCP throughput is network-limited (cont)
    • TCP congestion control algorithm is evolving
      • Standard TCP congestion control (Reno/NewReno)
        • Slow start, congestion avoidance, retransmission timeouts, fast retransmit and fast recovery
        • AIMD scheme for congestion avoidance
          • Perform well in traditional networks
          • Cause under-utilization in high-speed, long-distance networks (see the worked example after this list)
      • High-speed TCP variants: FAST TCP, HTCP, HSTCP, BIC, and CUBIC
          • Modify the AIMD congestion avoidance scheme of standard TCP to be more aggressive
          • Keep the same fast retransmit and fast recovery algorithms
          • Solve the under-utilization problem in high-speed, long-distance networks
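
A worked example of the under-utilization problem, using the standard AIMD rule: after a single drop, the window is halved and regains one MSS per RTT, so recovery takes (CWND/2)/MSS round trips. The path numbers below are illustrative:

```python
# Reno recovery time after one loss on a 10 Gb/s, 100 ms RTT path (1460-byte MSS).
bdp = 10e9 * 0.100 / 8               # ~125 MB needed to fill the pipe
full_cwnd = bdp / 1460               # ~85,600 segments
rtts_to_recover = full_cwnd / 2      # one MSS regained per RTT after halving
print(rtts_to_recover * 0.100 / 60)  # ~71 minutes to refill the pipe per loss
```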
  • 2.3 TCP throughput is network-limited (cont)
    • With high-speed TCP variants, it is mainly packet drops that lead to a relatively small CWND
    • The following conditions could lead to packet drops
      • Network congestion.
      • Network infrastructure failures.
      • Network end systems.
        • Packet drops in Layer 2 queues due to limited queue size.
        • Packets dropped in the ring buffer due to system memory pressure.
      • Routing changes.
        • When a route changes, the interaction of routing policies, iBGP, and the MRAI timer may lead to transient disconnectivity.
      • Packet reordering.
        • Packet reordering causes duplicate ACKs at the sender. RFC 2581 suggests that a TCP sender consider three or more dupACKs as an indication of packet loss, so TCP might misinterpret severe packet reordering as packet loss (see the sketch after this list).
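
For the reordering item above, a minimal sketch of the RFC 2581 rule: a run of three or more duplicate ACKs triggers fast retransmit, so reordering deep enough to produce such a run looks like loss to the sender (the list-of-ACK-numbers input format is an assumption):

```python
def fast_retransmit_events(acks):
    """Count runs of >= 3 duplicate ACKs in a stream of cumulative
    ACK numbers, as seen by the TCP sender."""
    events, dup_run, last = 0, 0, None
    for ack in acks:
        if ack == last:
            dup_run += 1
            if dup_run == 3:      # third dupACK triggers fast retransmit
                events += 1
        else:
            dup_run, last = 0, ack
    return events

# A segment reordered 3+ places deep yields 3 dupACKs -> spurious retransmit.
print(fast_retransmit_events([100, 200, 200, 200, 200, 300]))  # -> 1
```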
  • 2.3 TCP throughput is network-limited (cont)
    • The congestion window is manipulated in units of the Maximum Segment Size (MSS); a larger MSS entails higher TCP throughput.
    • A larger MSS is also more efficient for both the network and the network end systems.
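
The MSS effect can be quantified with the well-known Mathis et al. steady-state model, Throughput ≈ (MSS/RTT) x (C/sqrt(p)) with C ≈ 1.22, which is linear in MSS (the model is my addition, not from the slides):

```python
import math

def mathis_throughput(mss: int, rtt_s: float, loss_rate: float) -> float:
    """Steady-state TCP throughput bound (bytes/sec), Mathis model."""
    return (mss / rtt_s) * 1.22 / math.sqrt(loss_rate)

# Same 100 ms path and 1e-6 loss rate, standard vs. jumbo-frame MSS:
print(mathis_throughput(1460, 0.1, 1e-6))  # ~17.8 MB/s
print(mathis_throughput(8960, 0.1, 1e-6))  # ~109 MB/s
```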
  • 3. Network Performance Analysis Methodology
  • Network Performance Analysis Methodology
    • End-to-end network/application performance problems are viewed as:
      • Application-related problems,
        • Beyond the scope of any standardized problem analysis
      • Network end system problems
      • Network path problems
    • Network performance analysis methodology
      • Analyze and appropriately tune the network end systems
      • Network path analysis, with remediation of detected problems where feasible
      • If network end system and network path analysis do not uncover significant problems or concerns, packet trace analysis will be conducted.
        • Any performance bottlenecks will manifest themselves in the packet traces.
  • 3.1 Network Performance Analysis Network Architecture
    • Network end system diagnosis server.
      • We use Network Diagnostic Tool (NDT).
        • Collect various TCP parameters from the network end systems and identify configuration problems
        • Identify local network infrastructure problems such as faulty Ethernet connections, malfunctioning NICs, and Ethernet duplex mismatches
    • Network path diagnosis server
      • We use OWAMP applications to collect and diagnose one-way network path statistics.
        • The forward and reverse paths might not be symmetric
        • The forward and reverse path traffic loads are likely not symmetric
        • The forward and reverse paths might have different QoS schemes
      • Other tools such as ping, traceroute, pathneck, iperf, and perfSONAR could also be used.
  • 3.1 Network Performance Analysis Network Architecture (cont)
    • Packet trace diagnosis server
      • Directly connected to the border router; can port-mirror any port on the border router
      • TCPDump, used to record packet traces (see the capture sketch after this list)
      • TCPTrace, used to analyze the recorded packet traces
      • Xplot, used to examine the recorded traces visually
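
A hedged sketch of driving the capture step from the trace server (tcpdump's -i/-s/-w flags and the host filter are standard; the interface name and target host are placeholders):

```python
import subprocess

# Record full frames on the mirrored port for later TCPTrace/Xplot analysis.
# "eth1" and the host below are placeholders for the port-mirror interface
# and the end system under study.
subprocess.run([
    "tcpdump", "-i", "eth1", "-s", "0",   # full snap length
    "-w", "trace.pcap",                   # raw trace for tcptrace
    "host", "end-system.example.org",
], check=True)
```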
  • 3.2 Network/Application Performance Analysis Steps
    • Step 1: Definition of the problem space
    • Step 2: Collection of network end system information & network path characteristics
    • Step 3: Network end system diagnosis
    • Step 4: Network path performance analysis
      • Route changes: frequent?
      • Network congestion: delay variance large? Bottleneck location?
      • Infrastructure failures: examine the counters one by one
      • Packet reordering: load balancing? Parallel processing?
    • Step 5: Evaluate packet trace pattern
  • Collection of network end system information
  • Collection of network path characteristics
    • Network path characteristics
      • Round-trip time (ping)
      • Sequence of routers along the paths (traceroute)
      • One-way delay, delay variance (owamp)
      • One-way packet drop rate (owamp)
      • Packet reordering (owamp)
      • Current achievable throughput (iperf)
      • Bandwidth bottleneck location (pathneck)
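
A minimal sketch of scripting the first two measurements above (ping's -c flag and traceroute are standard Unix tools; owamp, iperf, and pathneck runs would be wrapped the same way, and output parsing is left out since it varies by platform):

```python
import subprocess

def collect_path_characteristics(host: str) -> dict:
    """Gather raw ping and traceroute output for one target host."""
    ping = subprocess.run(["ping", "-c", "10", host],
                          capture_output=True, text=True)
    route = subprocess.run(["traceroute", host],
                           capture_output=True, text=True)
    return {"ping": ping.stdout, "traceroute": route.stdout}
```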
  • Traffic trace from Fermi to OEAW: What happened?
  • Traffic trace from Fermi to Brazil: What happened?
  • Conclusion
    • Fermilab is developing a performance analysis methodology
      • Objective is to put structure into troubleshooting network performance problems
      • Project is in early stages of development
    • We welcome collaboration & feedback
      • Biweekly Wide-Area-Working-Group (WAWG) meeting on alternate Friday mornings
        • Send email to WAWG@FNAL.GOV