  • I’m Matt Zekauskas and you’re not.
  • Before we really start, I want to say that we don’t have all the answers. I’m going to tell you about common problems, and tools and techniques we’ve found useful. You’ll learn more about specific tools over the course of the day. However, if you have a common problem, or a particularly difficult problem, we’d like to hear about it. In fact, we collect “war stories” for publication on our web site. In addition, if you have a tool or technique that we don’t talk about today, please do speak up during the day, or send us details after the workshop.
  • I’ll start off with a list of network problems that we find affect the performance of most applications. Packet loss slows TCP (bulk data transfer), and causes dropouts with voice and “jaggies” with video. Jitter, or variation in the rate at which packets arrive, can also cause TCP to slow down, or at least react to problems more slowly; excessive jitter in a real-time application can cause some packets to be treated as if they were lost, causing dropouts and video problems. If you have an interactive application, say remote control of a scanning-tunneling microscope, jitter makes it hard for humans to react: we can learn to deal with latency, but we can’t adjust for arbitrary changes in latency. An Ohio State study of H.323 codecs found that jitter caused more problems than loss (up to a point). Out-of-order packets can be viewed as extreme jitter; besides the problems already listed, many applications are not written well (for example, early MPEG-2 and HDTV codecs), and out-of-order packets can cause more problems for them than lost packets. Applications should be written to tolerate out-of-order packets (parallelism in the network generates them naturally, and they occur quite frequently), but for now, reducing reordering will improve application efficiency. Duplicate packets waste bandwidth, and in extreme cases can cause TCP to slow down or confuse real-time applications. Excessive latency makes interactive applications (video conferencing, remote instrument control) difficult, although humans can compensate to some extent. TCP’s control system also has more trouble as latency increases, since it reacts more slowly. In general, it is best to engineer paths so that latency is minimized. (A small sketch of quantifying jitter and reordering from packet timestamps follows below.)
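    As a concrete illustration (not part of the original talk), here is a minimal Python sketch that quantifies jitter and reordering from per-packet timestamps and sequence numbers, using the RFC 3550 smoothed-jitter estimator; the sample data and function names are hypothetical.

      # Hypothetical illustration: estimate jitter and reordering from probe packets.
      # Each record is (sequence_number, send_time_s, recv_time_s); the two clocks need
      # not agree for jitter, since only differences of transit times are used.

      def jitter_and_reordering(records):
          jitter = 0.0          # RFC 3550 smoothed interarrival jitter estimate
          reordered = 0         # packets arriving after a higher-numbered packet
          prev_transit = None
          highest_seq = None
          for seq, sent, received in records:      # records in order of arrival
              transit = received - sent
              if prev_transit is not None:
                  d = abs(transit - prev_transit)
                  jitter += (d - jitter) / 16.0    # exponentially weighted, per RFC 3550
              prev_transit = transit
              if highest_seq is not None and seq < highest_seq:
                  reordered += 1
              else:
                  highest_seq = seq
          return jitter, reordered

      if __name__ == "__main__":
          sample = [(1, 0.000, 0.035), (2, 0.010, 0.046), (4, 0.030, 0.064),
                    (3, 0.020, 0.066), (5, 0.040, 0.075)]
          j, r = jitter_and_reordering(sample)
          print("smoothed jitter: %.1f ms, reordered packets: %d" % (j * 1000, r))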
  • Let’s look at how “vanilla” Reno TCP (still the most commonly deployed TCP stack) reacts to losses, to see how important limiting loss along a path is. If a path is congested, that is obvious (at least once the problem link is found) because link utilization is high. However, losses can be caused for other reasons (which we will get to in a moment), and these “non-congestive” losses are especially hard to track down. Let’s say our goal is modest (for modern workstations): send 100 megabits per second coast-to-coast. With full-size Ethernet packets (1500 bytes for 100 Mbps interfaces) you need a packet-loss probability on the order of one in a million. That’s at most one loss every 83 seconds. How about gigabit Ethernet? Then the loss probability must be below roughly 10^-8 (one in a hundred million), or one loss every 497 seconds. The situation gets better if you can use so-called “jumbo frames”, or 9000-byte packets: the requirement relaxes back to roughly the 100-megabit case. That’s one reason to try to make high-performance paths “9000-byte clean”. The situation also gets better with some of the newer TCP algorithms (High-Speed TCP, BIC TCP, FAST TCP), which is why there’s a lot of research into new bulk-transfer control algorithms. But we would still like a way to find and remove non-congestive losses as much as possible. (The arithmetic behind these loss bounds is sketched below.)
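    A back-of-the-envelope check of those loss bounds: a minimal Python sketch using the Mathis et al. approximation (rate ≈ (MSS/RTT) · C/√p). The 70 ms RTT and the constant C ≈ 0.93 are my assumptions, so the output lands near, but not exactly on, the figures quoted from the Mathis reference.

      # Mathis/Semke/Mahdavi/Ott approximation: rate <= (MSS/RTT) * C / sqrt(p).
      # RTT (70 ms) and C (~0.93) are assumptions; the talk's exact figures
      # (10^-6, 83 s; 10^-8, 497 s) come from the referenced Mathis material
      # and depend on the constants chosen there.

      from math import sqrt

      def required_loss_probability(target_bps, mss_bytes, rtt_s, c=0.93):
          """Largest per-packet loss probability at which Reno TCP can sustain target_bps."""
          mss_bits = mss_bytes * 8
          return (c * mss_bits / (rtt_s * target_bps)) ** 2

      def seconds_between_losses(target_bps, mss_bytes, p):
          """At the target rate, how often (on average) a loss may occur."""
          packets_per_second = target_bps / (mss_bytes * 8)
          return 1.0 / (p * packets_per_second)

      for label, rate, mss in [("100 Mbps, 1500B", 100e6, 1500),
                               ("1 Gbps, 1500B", 1e9, 1500),
                               ("1 Gbps, 9000B jumbo", 1e9, 9000)]:
          p = required_loss_probability(rate, mss, rtt_s=0.070)
          print("%-20s p <= %.1e  (one loss per %.0f s)"
                % (label, p, seconds_between_losses(rate, mss, p)))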
  • OK, we talked about network problems. However, you should also know that the number-one reason for TCP not running at “full speed” is being starved for buffer space. Vendors ship TCP stacks with buffers tuned for the commercial Internet. If the buffer is too small, TCP, which uses a “sliding window” for flow control, must wait for packets to be acknowledged before it can advance the window and send more data. Essentially the sender is forced to stop and wait. You need to be able to buffer the number of bits you can send in one round-trip time at your desired speed. For example, with a 70 millisecond round-trip time (roughly transcontinental North America), to sustain one gigabit per second you need about 8.4 megabytes of buffer space. For 100 Mbps at the same distance you need about 855 kilobytes. Many stacks default to 64 kilobytes, which allows only about 7.4 Mbps over such a path. One word of caution: network kilobits, megabits, and gigabits are powers of ten, while memory kilobytes and megabytes are powers of two, a kilobyte being 1024 bytes (2^10) and a megabyte being 1,048,576 bytes (2^20). (The bandwidth-delay-product arithmetic is sketched below.)
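    The buffer sizing is just a bandwidth-delay-product calculation; here is a minimal Python sketch that reproduces the numbers above, keeping the decimal-bits versus binary-bytes distinction explicit.

      # Bandwidth-delay product: buffer needed so TCP never stops to wait for ACKs.
      # Network rates are decimal (1 Mbps = 10^6 bits/s); memory sizes are binary
      # (1 KB = 1024 bytes, 1 MB = 1,048,576 bytes), as noted above.

      def bdp_bytes(rate_bps, rtt_s):
          return rate_bps * rtt_s / 8.0        # bits in flight -> bytes of buffer

      def max_rate_bps(buffer_bytes, rtt_s):
          return buffer_bytes * 8.0 / rtt_s    # what a fixed buffer allows

      RTT = 0.070                              # ~transcontinental North America
      print("1 Gbps over 70 ms needs %.2f MB of buffer" % (bdp_bytes(1e9, RTT) / 2**20))
      print("100 Mbps over 70 ms needs %.1f KB" % (bdp_bytes(100e6, RTT) / 2**10))
      print("a 64 KB default window allows only %.1f Mbps" % (max_rate_bps(64 * 2**10, RTT) / 1e6))
      # Prints roughly 8.35 MB, 854.5 KB, and 7.5 Mbps -- the figures the talk
      # rounds to 8.4 MB, 855 KB, and 7.4 Mbps.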
  • I also want to mention that the same problem carries up into the applications themselves. We won’t be speaking more about this today, but for video and audio (streaming media), a lack of buffer space in the application (in our world, MPEG-2-based applications are especially bad) means the application is very sensitive to packet loss or reordering. Of course, if your application is interactive, then increased buffering can lead to lag in response, which is not desirable either. This generalizes to a broader problem of applications that are not robust to network changes or anomalies. Drops will occur. Reordering will occur. Even if only very occasionally. Even applications that would like to use TCP for bulk transfer can do things like not hand enough data to TCP to let it stream over long distances. One that was brought to light recently is scp; popular versions of scp do not provide large enough buffers for TCP to stream. (There is a pointer to a good version of scp on the TCP tuning page at PSC mentioned later.)
  • So what causes these problems? Here’s a laundry list of the “usual suspects”. First on the list, and most common, is bad host configuration. As we just mentioned, this is usually because operating systems ship tuned for the commercial Internet, and we have very different paths over the Internet2 infrastructure (in particular, the bandwidth-delay product is much greater). Second is duplex mismatch, usually due to autonegotiation failure, with one side believing it is full-duplex (can send and receive simultaneously) and the other side believing it is half-duplex (can only send or receive at any one time). This is a legacy of how the Ethernet standard has evolved, and it is the major cause of “non-congestive” packet losses. Wiring or fiber problems can also cause non-congestive packet losses. Bad equipment (anything from host interfaces that cannot run at full speed, to host, switch, router, or fiber equipment failure) can cause excessive delays, jitter, or non-congestive packet loss. Bad routing can cause excessive latency, or sometimes jitter when multiple paths of different lengths are used. Congestion causes varying delays and packet loss.
  • Here’s an example that is at least three years old now, but it still illustrates the difficulty of debugging these problems. Folks at JPL were trying to do a demo to the Goddard Space Flight Center near Washington, DC. Here you have real rocket scientists working on the problem. And they were smart… they made sure they were using Abilene. They had their hosts tuned well. Things worked fine locally… therefore the problem HAD to be in the backbone network, right? (Here the target was a 50 Mbps TCP flow.) We had test equipment inside Abilene, and were able to demonstrate 80 Mbps easily from Los Angeles to Washington, DC. We offered them the use of our equipment, but this caused them to go ask friends at NCAR in Colorado… and they found the problem was on the path from LA to Boulder. That set the local network folks off to check, and in the end the problem was a bad fiber connection in California. Having intermediate test points to segment the problem is crucial. Having periodic tests running on common paths (we have such tests in the backbone) lets you quickly refute “the problem is in the middle of the network” finger-pointing.
  • OK. This slide illustrates one way to attack a performance problem. First and foremost, if you are planning a demo or other event, do test ahead of time. If you have a concerned application community, this may mean periodic testing among points close to key equipment; for example, all of the VLBI sites may test among each other. It may also mean periodic testing within your network to points in Abilene, or to other campuses you talk to frequently. Now say you have a problem that the periodic testing did not pick up (there are just too many paths to test them all). The first question: do you have connectivity and reasonable latency? Ping will give you round-trip times, assuming it isn’t blocked along the way. We’ll describe a tool, owamp, that measures one-way delay, which lets you disambiguate problems that occur asymmetrically – asymmetric routing, asymmetric traffic queuing, or a dirty fiber (since each fiber carries light in only one direction). Are you seeing many losses with these low-rate tests? If so, there’s something terribly wrong. If the latency is not what you expect, there may be a routing problem. The best-known tool is traceroute, and you can use it to make sure the path looks reasonable: it should go through your campus, possibly through a gigapop, across Abilene and down to the other side in a reasonable fashion (not taking a scenic tour of the US, for example). Remember that you also have to test in the opposite direction; the Abilene router proxy and traceroute servers can help. Has the host been tuned? Is there potentially a duplex mismatch on one of the local Ethernet connections? Here, running NDT, also to be described today, can point out a series of common problems. NDT itself relies on Web100, which instruments the Linux kernel. You might consider installing a Web100 machine (or using machines with Web100 code); there are additional diagnostics you can run using the Web100-provided variables, and the kernel itself is better “out of the box”: it can automatically tune buffers on some TCP connections. If routing looks reasonable, and the host is reasonable, you may have a problem in the path. (Large losses in the low-rate tests also indicate path problems, assuming it isn’t a duplex mismatch, local congestion – perhaps a denial-of-service attack – or broken network hardware on the end system.) Iperf is a tool to run synthetic TCP streams (memory-to-memory) between two machines. Bwctl, which we will also talk about today, adds authentication and scheduling to iperf, and allows you to test to multiple points, including midpoints within Abilene. (A small scripted version of the first connectivity and routing checks is sketched below.)
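    As a first pass, the connectivity, latency, and routing questions can be scripted. Below is a minimal, hypothetical Python sketch (the host name is a placeholder) that shells out to ping and traceroute on a Unix-like system and flags obvious trouble; duplex, buffer, and path testing still need NDT, Web100, and iperf/BWCTL as described above.

      # Hypothetical first-pass check: connectivity, round-trip latency, and the
      # forward route. Assumes a Unix-like host with ping(8) and traceroute(8);
      # remember this only shows the forward path -- test the reverse direction too.

      import re
      import subprocess

      def ping_rtt_ms(host, count=10):
          """Return (loss_percent, avg_rtt_ms), or (None, None) if ping is blocked or all packets are lost."""
          out = subprocess.run(["ping", "-c", str(count), host],
                               capture_output=True, text=True).stdout
          loss = re.search(r"(\d+(?:\.\d+)?)% packet loss", out)
          rtt = re.search(r"= [\d.]+/([\d.]+)/", out)   # min/avg/max summary line
          if not loss or not rtt:
              return None, None
          return float(loss.group(1)), float(rtt.group(1))

      def forward_route(host):
          """Return traceroute output for a quick sanity check of the path."""
          return subprocess.run(["traceroute", host],
                                capture_output=True, text=True).stdout

      if __name__ == "__main__":
          target = "remote-host.example.edu"            # placeholder
          loss, rtt = ping_rtt_ms(target)
          if loss is None:
              print("ping blocked or failing; try owamp or check firewalls")
          elif loss > 0 or rtt > 100:                   # crude thresholds for a sketch
              print("loss %.0f%%, avg RTT %.1f ms: something is wrong at low rate" % (loss, rtt))
          else:
              print("connectivity OK (avg RTT %.1f ms); now check routing:" % rtt)
              print(forward_route(target))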
  • And for path problems, the best strategy is usually to “divide and conquer”: test to a midpoint and see which side the problem is on, then test to a midpoint on that side, and so on until you’ve exhausted your midpoints and localized the problem as much as you can. We’re working on tools to automate this process, but for now it’s manual. This picture shows testing to a set of points that you have access to. (A small sketch of this binary-search-style localization follows.)
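    In code, divide and conquer is just a binary search over an ordered list of test points. A minimal sketch, where run_test stands in for whatever test you actually run to a midpoint (bwctl/iperf or owamp) and returns True when the path to that point performs acceptably; the point names are hypothetical.

      # Divide and conquer along a path: given test points ordered from "near me"
      # to "far end", find the segment where performance breaks down.
      # run_test(point) is a stand-in for an actual bwctl/iperf/owamp test.

      def localize_fault(test_points, run_test):
          """Return (last_good_point, first_bad_point); assumes the near end tests
          good, the far end tests bad, and the fault lies on a single segment."""
          lo, hi = 0, len(test_points) - 1      # test_points[lo] good, [hi] bad
          while hi - lo > 1:
              mid = (lo + hi) // 2
              if run_test(test_points[mid]):
                  lo = mid                       # problem is beyond this midpoint
              else:
                  hi = mid                       # problem is on the near side
          return test_points[lo], test_points[hi]

      if __name__ == "__main__":
          # Hypothetical measurement points along a coast-to-coast path.
          points = ["campus-edge", "gigapop", "abilene-west", "abilene-east",
                    "remote-gigapop", "remote-campus"]
          broken_from = "abilene-east"           # pretend tests fail from here on
          good, bad = localize_fault(points,
                                     lambda p: points.index(p) < points.index(broken_from))
          print("problem is between %s and %s" % (good, bad))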
  • Let’s start with tools that check out the hosts, and the network connections near the hosts.
  • Very rudimentary (NOTE: Do not read any of the items on this page – flip through quickly)
  • I want to mention this one because it’s fresh. But we haven’t had time to extensively evaluate it. (NOTE: Do not read any of the items on this page – flip through quickly)
  • This is the one we’re actively developing. More on this later today. (NOTE: Do not read any of the items on this page – flip through quickly)
  • As an aside, I mentioned Web100 earlier in a bullet. Here’s what Web100 is, and why you might want to put it on systems you use, if you can. (NOTE: Do not read any of the items on this page – flip through quickly) KEEP THIS TEXT FOR FOLKS WHO DOWNLOAD THE SLIDES: It is a kernel modification, currently to Linux 2.6 series kernels. There is a TCP MIB draft in the IETF to try and standardize the export-TCP-state part of Web100, and we expect Microsoft and others to pick that up. (Microsoft already has some of the elements in recent Windows server versions)
  • Rather than the generic NDT tool, there are also specific tools for videoconferencing. Moderate-rate UDP is a substitute, but the H.323 Beacon from Ohio State (free) and ViDeNet Scout (uses licensed software) actually run the protocol and capture behavior. (NOTE: Do not read any of the items on this page – flip through quickly)
  • OK, what about tools to help us resolve path problems?
  • Here’s a brief description of OWAMP just to get you oriented. More on this tool later today. (NOTE: Do not read any of the items on this page – flip through quickly)
  • Likewise, a brief description of BWCTL. More on this tool later today, also. (NOTE: Do not read any of the items on this page – flip through quickly)
  • Finally, here are some pointers to other tools that are used. They are here more for reference than for detailed explanation now.
  • Here are some commercial tools that we know of. (NOTE: show this slide but don’t read through the items) Spirent makes testers that can rigorously evaluate routers (and paths) and work at line rate. NetIQ has little drones that you run from a command-and-control console; they can simulate some application behavior, and also have a capture-then-replay ability. Agilent makes testers like Spirent’s, and also has a product called FireHunter that is used by ISPs; it does things like pings, FTP fetches, and Web fetches, and can issue alerts when things go out of spec. Ixia makes boxes like Agilent and Spirent. Brix Networks is interesting because they make measurement points that you can deploy and then run tests like owamp among them, as well as other tests specifically designed to probe QoS parameters and limits. We already mentioned Apparent Networks.
  • Here are a couple of noncommercial tools, along with a pointer to a whole bunch more. (NOTE: show this slide but don’t read through the items) There is a direct pointer to iperf, which bwctl uses. Flowscan is a tool that processes netflow output and creates pretty aggregate graphs. There is another set of tools, called “flow-tools”, from Ohio State, that Abilene uses (note: Ohio State is an Internet2 technology evaluation center). SLAC has a Perl script that can be used with a web server to provide traceroutes. Les Cottrell and his group at SLAC also maintain a huge list of tools.
  • OK, let’s see what’s in Abilene.
  • Abilene does both active tests, and passive tests. Of particular interest is the router proxy. You can give mediated commands to the router to query state. This can be very useful. You can also issue traceroutes and pings from the Abilene routers.
  • Why does Abilene take all these measurements and publish the results? (NOTE: Do not read all of the items on this page – flip through quickly)
  • We currently have four machines at each router node. Here are their roles. (NOTE: show this slide but don’t read through the items) Add slide, probably: 1.4 GHz PIII, dual bank, whatever the chipset. Fairly slow, but the fastest we could get off the shelf with a 48 VDC supply when we were building Abilene.
  • Abilene uses BWCTL for throughput. (NOTE: show this slide but don’t read through the items)
  • Abilene uses OWAMP for latency. (NOTE: show this slide but don’t read through the items)
  • A word on Abilene’s utilization. (NOTE: Do not read any of the items on this page – flip through quickly)
  • And you’ve all seen the weathermap. This is an older version (no Chicago!).
  • There are lots of tools at the NOC page. Netflow data is currently at Ohio State. And we make weekly summaries to try to understand what traffic is passing over the network, and to watch for trends.
  • OK. So, we’ve talked about problems, diagnostic strategies, tools we can use during those diagnostic strategies, and what’s available within Abilene. I’d like to take a moment to revisit the end-to-end performance initiative vision, and talk a bit about what campuses can do.
  • Add separate slide for application communities? Not yet, so say that one of the important sets of end-to-end paths is the set that a particular application community cares about: nuclear physics sites, medical sites, astronomy sites.
  • But because there are many, many paths, there will have to be some ability to do tests on the fly. So make them available. In the long run, provide a tool that does most of it for you and just hands back the results. Probably skip: this is not limited to just the US; we work with folks in Europe, Asia, and elsewhere, so there has to be a way to interoperate. That’s one of the things Internet2 is working on now.
  • So, what can you do? A few simple things. Make utilization data available, at least at the edge of your campus. Monitor not only utilization, but also things that can cause losses: packet drop and error counters. Placing test points at the edge of your campus will let you test ad hoc from within your campus to the edge, and let you constantly monitor the campus connectivity you think is important. (There will be some more use cases later; one possibility is just making sure your university-to-university traffic goes over the high-performance network – as long as it’s up.) Thanks for listening to the overview. (A small SNMP polling sketch for the error counters follows.)
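    For the drop and error counters, a minimal sketch using the pysnmp library to read the standard IF-MIB error/discard counters; the router name, community string, and interface index are placeholders, and in practice you would feed these into whatever graphing or alerting you already run.

      # Minimal sketch: poll per-interface error and discard counters via SNMP
      # using pysnmp (pip install pysnmp). Hostname, community string, and the
      # interface index are placeholders -- substitute your border router's values.

      from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                                ContextData, ObjectType, ObjectIdentity, getCmd)

      def poll_counters(host, community, if_index):
          oids = [ObjectType(ObjectIdentity("IF-MIB", name, if_index))
                  for name in ("ifInErrors", "ifOutErrors", "ifInDiscards", "ifOutDiscards")]
          error_indication, error_status, _, var_binds = next(
              getCmd(SnmpEngine(), CommunityData(community, mpModel=1),  # SNMP v2c
                     UdpTransportTarget((host, 161)), ContextData(), *oids))
          if error_indication or error_status:
              raise RuntimeError(error_indication or error_status.prettyPrint())
          return [(name.prettyPrint(), int(value)) for name, value in var_binds]

      if __name__ == "__main__":
          for name, value in poll_counters("border-router.example.edu", "public", if_index=1):
              print(name, value)   # rising error/discard counts point at loss sources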
  • Probably want to move these to the end… interrupts the flow. References for debugging strategies and application design. Flip through these quickly, they are here so participants can look later.
  • Just because my name will probably come off the front if this becomes standard course material.

Transcript

  • 1. Finding Network Problems that Influence Applications: Measurement Tools Matt Zekauskas, matt@internet2.edu Georgia Performance Workshop DRAFT DRAFT for comment DRAFT DRAFT
  • 2. Outline
    • Problems, typical causes, diagnostic strategies
    • Tools: First mile, host issues
    • Tools: Path issues
    • Tools: Others to be aware of
    • Tools within Abilene
    • End-to-End Measurement Infrastructure
  • 3. We Would Like Your Help
    • What problems are you experiencing?
    • Have you used a good tool?
    • Give us the benefit of your experience: successful problem resolution!
  • 4. What Are The Problems? (1)
    • Packet loss
    • Jitter
    • Out-of-order packets (extreme jitter)
    • Duplicated packets
    • Excessive latency
      • Interactive applications
      • TCP’s control system
  • 5. For TCP
    • Eliminating loss is the goal
    • Non-congestive losses especially tricky
    • TCP: 100 Mbit Ethernet coast-to-coast:
      • Full-size packets… need 10^-6 P(loss) [Mathis]
      • Less than 1 loss every 83 seconds
    • http://www.psc.edu/~mathis/papers/JTechs200105/
    • GigE: 10^-8, 1 loss every 497 seconds
  • 6. What Are The Problems? (2)
    • TCP: lack of buffer space
      • Forces protocol into stop-and-wait
      • Number one TCP-related performance problem.
      • 70ms * 1Gbps = 70*10^6 bits, or 8.4MB
      • 70ms * 100Mbps = 855KB
      • Many stacks default to 64KB, or 7.4Mbps
  • 7. What Are The Problems? (3)
    • Video/Audio: lack of buffer space
      • Makes broadcast streams very sensitive to previous problems
    • Application behaviors
      • Stop-and-wait behavior; Can’t stream
      • Lack of robustness to network anomalies
  • 8. The Usual Suspects
    • Host configuration errors (TCP buffers)
    • Duplex mismatch (Ethernet)
    • Wiring/Fiber problem
    • Bad equipment
    • Bad routing
    • Congestion
      • “Real” traffic
      • Unnecessary traffic (broadcasts, multicast, denial of service attacks)
  • 9. JPL/Caltech – GSFC
    • The situation
      • Using Abilene
      • Tuned hosts
      • Things work locally
    • Therefore it MUST be Abilene
      • Tests show good flows router-router
      • Intermediate tests point towards CA
    • Bad fiber connection!
  • 10. Strategy
    • Most problems are local…
    • Test ahead of time!
    • Is there connectivity & reasonable latency? (ping -> OWAMP)
    • Is routing reasonable (traceroute)
    • Is host reasonable (NDT; Web100)
    • Is path reasonable (iperf -> BWCTL)
  • 11. One Technique: Problem Isolation via Divide and Conquer
  • 12. Outline
    • Problems, typical causes, diagnostic strategies
    • Tools: First mile, host issues
    • Tools: Path issues
    • Tools: Others to be aware of
    • Tools within Abilene
    • End-to-End Measurement Infrastructure
  • 13. Internet2 Detective
    • A simple “is there any hope” tool
      • Windows “tray” application
      • Red/green lights, am I on Internet2
      • Multicast available
      • IPv6 available
    • http://detective.internet2.edu/
  • 14. NLANR Performance Advisor
    • Geared for the naive user
    • Run at both ends, and see if a standard problem is detected.
    • Can also work with intermediate servers
    • http://dast.nlanr.net/Projects/Advisor
  • 15. NDT
    • Network Diagnostic Tool
    • Java applet
    • Connects to server in middle, runs tests, and evaluates heuristics looking for host and first mile problems.
    • Has detailed output.
    • You’ll see lots of detail later today.
    • A commercial tool that tests for TCP buffer problems: http://www.dslreports.com/tweaks/
  • 16. Host/OS Tuning: Web100
    • Goal: TCP stack, tuning not bottleneck
    • Large measurement component
      • TCP performance not what you expect? Ask TCP why!
        • Receiver bottleneck (out of receiver window)
        • Sender bottleneck (no data to send)
        • Path bottleneck (out of congestion window)
        • Path anomalies (duplicate, out of order, loss)
    • www.web100.org
  • 17. Reference Servers (Beacons)
    • H.323 conferencing
      • Goal: portable machines that tell you if system likely to work (and if not, why?)
      • Moderate-rate UDP of interest
      • E.g., H.323 Beacon http://www.osc.edu/oarnet/itecohio.net/beacon/
      • ViDeNet Scout, http://scout.video.unc.edu/
  • 18. Outline
    • Problems, typical causes, diagnostic strategies
    • Tools: First mile, host issues
    • Tools: Path issues
    • Tools: Others to be aware of
    • Tools within Abilene
    • End-to-End Measurement Infrastructure
  • 19. OWAMP – Latency/Loss
    • One-Way Active Measurement Protocol
    • Requires NTP-Synchronized clocks
    • Look for one-way latency, loss
    • Authentication and Scheduling
    • Again, lots more later today
  • 20. BWCTL -- Throughput
    • A tool for throughput testing that includes scheduling and authentication.
    • Currently uses iperf for actual tests.
    • Can assign users (or IP addresses) to classes, give classes different throughput limits or time limits.
    • Periodic and on-demand testing.
    • Lots more later today.
  • 21. Outline
    • Problems, typical causes, diagnostic strategies
    • Tools: First mile, host issues
    • Tools: Path issues
    • Tools: Others to be aware of
    • Tools within Abilene
    • End-to-End Measurement Infrastructure
  • 22. Some Commercial Tools
    • Caveat: only a partial list, give me more!
    • Spirent (nee Netcom/Adtech):
      • SmartBits: test at low & high rates, QoS; test components or end-to-end path
    • NetIQ: Chariot/Pegasus
    • Agilent (like SmartBits, and FireHunter)
    • Ixia (like SmartBits/Spirent)
    • Brix Networks (like AMP/Owamp, for ‘QoS’)
    • Apparent Networks: path debugger
  • 23. Some Noncommercial Tools
    • Iperf: dast.nlanr.net/Projects/iperf
      • See also http://www-itg.lbl.gov/nettest/
      • http://www-didc.lbl.gov/NCS/
    • Flowscan:
      • http://www.caida.org/tools/utilities/flowscan/
      • http://net.doit.wisc.edu/~plonka/FlowScan/
    • SLAC’s traceroute perl script:
      • http://www.slac.stanford.edu/comp/net/wan-mon/traceroute-srv.html
    • One large list:
      • http://www.slac.stanford.edu/xorg/nmtf/nmtf-tools.html
  • 24. Outline
    • Problems, typical causes, diagnostic strategies
    • Tools: First mile, host issues
    • Tools: Path issues
    • Tools: Others to be aware of
    • Tools within Abilene
    • End-to-End Measurement Infrastructure
  • 25. Abilene: Measurements from the Center
    • Active (latency, throughput)
      • Measurement within Abilene
      • Measurements to the edge
    • Passive
      • SNMP stats (esp. core Abilene links)
      • Variables via router proxy
      • Router configuration
      • Route state
      • Characterization of traffic
        • Netflow; OCxMON
  • 26. Goal
    • Abilene goal to be an exemplar
      • Measurements open
      • Tests possible to router nodes
      • Throughput tests routinely through backbone
      • … as well as existing utilization, etc.
      • The “Abilene Observatory” http://abilene.internet2.edu/observatory
  • 27. Abilene: Machines
    • GigE connected high-performance tester
      • bwctl, “nms1”, 9000 byte MTU
    • Latency tester
      • owamp, “nms4”, 100bT
    • Stats collection
      • SNMP, flow-stats, “nms3”, 100bT
    • Ad-hoc tests
      • NDT server, “nms2”, gigE, 1500 byte MTU
  • 28. Throughput
    • Take tests 1/hr, 20 seconds each
      • IPv4 TCP
      • IPv6 TCP (no discernable difference)
      • IPv4 UDP (on our platforms, flaky at 1G)
      • IPv6 UDP (ditto)
    • Others test to our nodes
    • Others test amongst themselves
    • Net result: 25% of traffic (NOT capacity) is measurement
  • 29. Latency
    • CDMA used to synchronize NTP
      • www.endruntechnologies.com
    • Test among all router node pairs
    • 10/sec
    • IPv4 and IPv6
    • Minimal sized packets
    • Poisson schedule
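    The “Poisson schedule” above means the gaps between probe packets are drawn from an exponential distribution (mean 0.1 s for 10 packets/second), so the sampling does not synchronize with periodic network behavior. A minimal Python sketch of generating such a schedule (illustrative only, not the owamp implementation):

      # A Poisson sampling schedule: exponentially distributed inter-packet gaps
      # with mean 1/rate, so probes don't lock onto periodic traffic patterns.
      import random

      def poisson_send_times(rate_per_s, duration_s, seed=None):
          rng = random.Random(seed)
          t, times = 0.0, []
          while True:
              t += rng.expovariate(rate_per_s)   # mean gap = 1/rate
              if t > duration_s:
                  return times
              times.append(t)

      if __name__ == "__main__":
          sched = poisson_send_times(rate_per_s=10, duration_s=2, seed=1)
          print(len(sched), "probes in 2 s; first few gaps:",
                [round(b - a, 3) for a, b in zip([0.0] + sched, sched)][:5])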
  • 30. Passive - Utilization
    • The Abilene NOC takes
      • Packets in,out
      • Bytes in,out
      • Drops/Errors
      • ..for all interfaces, publishes internal links & peering points (at 5 min intervals)
      • ..via SNMP polling – every 60 sec
    • http://loadrunner.uits.iu.edu/weathermaps/abilene/abilene.html
  • 31.  
  • 32. Abilene Pointers
    • http://www.abilene.iu.edu/
      • Monitoring
      • Tools
    • http://www.itec.oar.net/abilene-netflow
    • http://netflow.internet2.edu/weekly/ (summaries)
  • 33. Outline
    • Problems, typical causes, diagnostic strategies
    • Tools: First mile, host issues
    • Tools: Path issues
    • Tools: Others to be aware of
    • Tools within Abilene
    • End-to-End Measurement Infrastructure
  • 34. End-to-End Measurement Infrastructure Vision
    • Ongoing monitoring to test major elements, and end-to-end paths.
      • Elements: gigaPoP links, peering, …
      • Utilization
      • Delay
      • Loss
      • Occasional throughput
      • Multicast connectivity
  • 35. End-to-End Measurement Infrastructure Vision II
    • Many more end to end paths than can be monitored.
    • Diagnostic tools available on-demand (with authorization)
      • Show routes
      • Perform flow tests (perhaps app tests)
      • Parse/debug flows (a-la tcpdump or OCXmon with heuristic tools)
  • 36. What Campuses Can Do
    • Export SNMP data
      • I have an “Internet2 list”, can add you
      • Monitor loss as well as throughput
    • Performance test point at campus edge
      • Hopefully, the result of today’s workshop
      • Possibly also traceroute “looking glass”
      • Commercial (e.g., NetIQ) complements
      • We have a master list
  • 37. Strategy (references) (1)
    • See also
      • http://e2epi.internet2.edu/ Look at stories, documents, tools
      • http://e2epi.internet2.edu/ndt/ Pointer to the tool, and using it for debugging the last mile
  • 38. Strategy (references) (2)
      • http://www.psc.edu/networking/projects/tcptune/ How to tweak OS parameters (also scp pointer)
      • http://www.ncne.org/research/tcp/ TCP debugging the detailed way
      • http://dast.nlanr.net/Guides/WritingApps/ Tips for app writers
      • http://dast.nlanr.net/Guides/GettingStarted And some checking to do by hand & debugging.
  • 39. Acknowledgements
    • The original presentation by Matt Zekauskas, using ideas inspired by material from NLANR DAST, Matt Mathis, and others.
    • Copyright Internet2 2005, All Rights Reserved.
    • Your mileage may vary. Caveat Emptor. It’s a desert topping and a floor wax. They all do that. It’s a feature. It’s wafer-thin. Sleep is for the weak. Coffee won’t hurt you, look what it’s done for meeee…
  • 40. www.internet2.edu