The network is by design transparent, so it is hard to find out how it is working. The GGF Grid High Performance Networking group is trying to bring together network engineers, application writers and users by creating documents on "Top ten things network engineers wish grid programmers knew" and vice versa (http://www.csm.ornl.gov/ghpn/).

Understanding is hard: the Internet is immense and a moving target, traditional mathematical tools (e.g. Poisson distributions) do not work, and we are still looking for invariants and parsimonious models. See Vern Paxson's work, e.g. http://www.icir.org/vern/talks/vp-painfully-hard.UCB-mig.99.ps.gz

The top three networking problems, according to a paper by Claudia DeLuna of JPL, are Ethernet duplex mismatches, host configuration and bad media. A failure-cause breakdown for three Internet sites indicated that 51% of failures were caused by operator error ("Self-Repairing Computers", Scientific American, June 2003).

Reviewing the user-reported, long-lasting WAN problems seen at SLAC over the last two years (typically lasting days, i.e. excluding router reboots or reconfiguration timeouts), the biggest contributors (30%) were a combination of misconfigured routers (loose unicast RPF filters, wrong buffer sizes, poorly chosen backup routes), misconfigured switches (needing a reboot, a PVC incorrectly rate-limited) and firewalls (limiting throughput, resetting the TCP window-scaling option). Note that these are mainly engineering problems or bugs, as opposed to problems that need research before each one can be fixed individually. However, we do need to investigate how to accurately and automatically identify and report the location and cause of such problems to the end user.
It may not be realistic to try to solve this as shown. We need to divide and conquer, e.g. identify the AS responsible for the part of the path where the problem lies and report it to them with relevant supporting information. To do this we need tools to partition the problem along the path.
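The divide-and-conquer step can be sketched in code: given a traceroute-style hop list and an IP-to-AS mapping (both hypothetical here; in practice the mapping would come from whois or BGP data), group consecutive hops by origin AS so a problem report can be directed at the responsible network. A minimal Python sketch under those assumptions:

```python
def partition_by_as(hops, ip_to_asn):
    """Group consecutive traceroute hops that share an origin AS."""
    segments = []
    for ip in hops:
        asn = ip_to_asn.get(ip, "unknown")
        if segments and segments[-1][0] == asn:
            segments[-1][1].append(ip)   # same AS as previous hop
        else:
            segments.append((asn, [ip]))  # new AS segment begins
    return segments

# Hypothetical path and AS mapping (documentation-range IPs, made-up AS names)
hops = ["192.0.2.1", "192.0.2.254", "198.51.100.7", "203.0.113.9"]
ip_to_asn = {
    "192.0.2.1": "AS-SITE",
    "192.0.2.254": "AS-SITE",
    "198.51.100.7": "AS-TRANSIT",
    "203.0.113.9": "AS-DEST",
}

for asn, ips in partition_by_as(hops, ip_to_asn):
    print(asn, ips)
```

Once the path is partitioned this way, supporting evidence (loss, RTT per hop) can be attached to the segment whose AS is being notified.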
Web services are in their early days: they have a steep learning curve and the schemas are not mature.
Spreadsheet: cottrelliepmwizard.xls. Most users are unaware of the bottleneck bandwidth on the path.
Spreadsheet: cottrelliepmesnet-to-all-longterm.xls. The CERN data only goes back to Aug-01. It confirms that S.E. Europe and Russia are catching up, while India and Africa are falling behind.

PingER is arguably the most extensive set of measurements of the end-to-end performance of the Internet, going back almost ten years. Measurements are available from over 30 sites in 13 countries to sites in over 100 countries. We will use the PingER results to: demonstrate how Internet performance to the regions of the world has evolved over the last 9 years; identify regions that have poor connectivity, how far they are behind the developed world, and whether they are catching up or falling further behind; and illustrate the correlation between the UN Technology Achievement Index and Internet performance.

Ghana, Nigeria and Uganda are all reached via satellite links with 800-1100 ms RTTs. The losses to Ghana and Nigeria are 8-12%, while to Uganda they are 1-3%. The routes are different: from SLAC, Ghana is reached via ESnet-Worldcom-UUNET, Nigeria via CalREN-Qwest-Telianet-New Skies satellite, and Uganda via ESnet-Level3-Intelsat. For both Ghana and Nigeria there are no losses (for 100 pings) until the last hop, where over 40 of 100 packets were lost. For Uganda the losses (3 in 100 packets) also occur at the last hop.

Worksheets: for trends: \Zwinsan2ccottrelliepmesnet-to-all-longterm.xls; for Africa: \Zwinsan2ccottrelliepmafrica.xls
Slow start over a 200 ms RTT path takes about 8 s to reach 10 Gbps, and about 6 s to reach 1 Gbps. A BER of 1/10^8 is not that high; for example the "SURA Optical Network Cookbook" (see http://www1.sura.org/3000/opcook.pdf) suggests that a BER of 1/10^9 is typical.
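The slow-start figures can be sanity-checked with a back-of-the-envelope model: the congestion window grows from one segment up to the bandwidth-delay product, multiplying by roughly 1.5 per RTT when delayed ACKs are in effect. The sketch below uses assumed values (1460-byte MSS, growth factor 1.5, both my assumptions rather than figures from the talk); the exact number of seconds depends on the ACK behaviour assumed, but the result comes out at the same several-second order of magnitude as the figures above.

```python
import math

def slow_start_time(rtt_s, bw_bps, mss_bytes=1460, growth=1.5):
    """Seconds for slow start to grow the window from one segment
    to the bandwidth-delay product. growth=1.5 per RTT approximates
    the effect of delayed ACKs (an assumption, not from the talk)."""
    bdp_segments = bw_bps * rtt_s / (8 * mss_bytes)  # window needed, in segments
    rtts = math.log(bdp_segments) / math.log(growth)  # round trips to grow to it
    return rtts * rtt_s

print("1 Gbps :", round(slow_start_time(0.2, 1e9), 1), "s")
print("10 Gbps:", round(slow_start_time(0.2, 10e9), 1), "s")
```

Note the weak (logarithmic) dependence on bandwidth: going from 1 to 10 Gbps adds only a handful of extra round trips.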
Anonymization to address privacy concerns can remove much of the usefulness of the data. Sampling can introduce biases.
WAN Monitoring Issues. Prepared by Les Cottrell, SLAC, for the NASA/LSN Workshop on Optical Network Testbeds, NASA Ames, August 9-11, 2004. www.slac.stanford.edu/grp/scs/net/talk03/jet-aug04.ppt. Partially funded by the DOE/MICS Field Work Proposal on Internet End-to-end Performance Monitoring (IEPM); also supported by IUPAP.
The Problem <ul><li>Distributed systems are very hard </li></ul><ul><ul><li>A distributed system is one in which I can't get my work done because a computer I've never heard of has failed. (Butler Lampson) </li></ul></ul><ul><li>The network is deliberately transparent </li></ul><ul><li>The bottlenecks can be in any of the following components: </li></ul><ul><ul><li>the applications </li></ul></ul><ul><ul><li>the OS </li></ul></ul><ul><ul><li>the disks, NICs, bus, memory, etc. on the sender or receiver </li></ul></ul><ul><ul><li>the network switches and routers, and so on </li></ul></ul><ul><li>Problems may not be logical </li></ul><ul><ul><li>Most problems are operator errors, misconfigurations, or bugs </li></ul></ul><ul><li>When building distributed systems, we often observe unexpectedly low performance </li></ul><ul><ul><ul><li>the reasons for which are usually not obvious </li></ul></ul></ul><ul><li>Just when you think you’ve cracked it, in steps security </li></ul>
E2E Monitoring Goals <ul><li>Solving the E2E performance problem is the critical problem for the user </li></ul><ul><ul><li>Improve e2e throughput for data-intensive apps in high-speed WANs </li></ul></ul><ul><ul><li>Provide the ability to do performance analysis & fault detection in a Grid computing environment </li></ul></ul><ul><ul><li>Provide accurate, detailed, & adaptive monitoring of all distributed components, including the network </li></ul></ul>
Anatomy of a Problem (from an Internet2 E2E presentation by Russ Hobby): how do you solve a problem along a path? The path crosses many administrative domains: System Administrator, LAN Administrator, Campus Networking, Gigapop, Backbone, Gigapop, Campus Networking, LAN Administrator, System Administrator. The Applications Developer at one end says "Hey, this is not working right!", while each group along the path answers: "The computer is working OK", "The network is lightly loaded", "All the lights are green", "We don't see anything wrong", "Looks fine", "Others are getting in OK", "Not our problem", "Talk to the other guys", "Everything is AOK", "No other complaints".
Needs <ul><li>Measurement tools to quickly, accurately and automatically identify problems </li></ul><ul><ul><li>Automatically take action to investigate and gather information, with on-demand measurements </li></ul></ul><ul><li>Tools need to scale to 10Gbps and beyond </li></ul><ul><li>Standard ways to discover, request and report the results of measurements </li></ul><ul><ul><li>GGF/NMWG schemas </li></ul></ul><ul><ul><li>Share information with people and apps across a federation of measurement infrastructures </li></ul></ul>
Achieving throughput <ul><li>User can’t achieve throughput available (Wizard gap) </li></ul><ul><li>Big step just to know what is achievable </li></ul>
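One common way to estimate "what is achievable" for standard TCP is the Mathis rule of thumb, throughput ≲ MSS / (RTT × √loss). The sketch below uses illustrative parameter values (1460-byte MSS, 200 ms RTT, 10⁻⁴ loss), not measurements from this talk:

```python
import math

def mathis_throughput_bps(mss_bytes, rtt_s, loss_rate):
    """Rule-of-thumb ceiling for standard TCP: MSS / (RTT * sqrt(p)),
    returned in bits per second."""
    return 8 * mss_bytes / (rtt_s * math.sqrt(loss_rate))

# Illustrative numbers: even modest loss caps a long-RTT path at a few Mbit/s
bw = mathis_throughput_bps(1460, 0.2, 1e-4)
print(round(bw / 1e6, 2), "Mbit/s")
```

The formula makes the wizard gap concrete: to sustain multi-Gbit/s over a 200 ms path, the loss rate must be pushed extraordinarily low, or the stack tuned beyond what default configurations provide.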
User throughput: S.E. Europe and Russia are catching up; Latin America, the Middle East and China are keeping up; India and Africa are falling behind. C. Asia, Russia, S.E. Europe, L. America, M. East and China are 4-5 yrs behind; India and Africa are 7 yrs behind. Important for policy makers.
Hi-perf Challenges <ul><li>Packet loss is hard to measure by ping </li></ul><ul><ul><li>For 10% accuracy on a BER of 1/10^8, takes ~ 1 day at 1/sec </li></ul></ul><ul><ul><li>Ping loss ≠ TCP loss </li></ul></ul><ul><li>Iperf/GridFTP throughput at 10Gbits/s </li></ul><ul><ul><li>To measure the stable (congestion avoidance) state for 90% of the test takes ~ 60 secs ~ 75GBytes </li></ul></ul><ul><ul><li>Requires scheduling, which implies authentication etc. </li></ul></ul><ul><li>Packet pair dispersion can use only a few tens or hundreds of packets, however: </li></ul><ul><ul><li>Timing granularity in the host is hard (sub-μsec) </li></ul></ul><ul><ul><li>NICs may buffer (e.g. coalesce interrupts, or TCP offload) so need info from the NIC or before it </li></ul></ul><ul><li>Security: blocked ports, firewalls, keys vs. one-time passwords, varying policies, Kerberos vs. ssh, etc. </li></ul>
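The measurement-cost bullets above are simple arithmetic, sketched here: a Poisson-counting argument says roughly 1/err² loss events must be observed for a given relative error, and a throughput test moves bandwidth × duration bytes. The loss rate used below (0.1%) is illustrative; converting a raw BER into a packet loss rate also depends on the packet size, which this sketch does not attempt.

```python
def pings_for_loss_accuracy(loss_rate, rel_err=0.1):
    """Probes needed so the measured loss rate has ~rel_err relative
    error: about 1/rel_err**2 loss events must actually be observed."""
    events_needed = 1.0 / rel_err**2
    return events_needed / loss_rate

def throughput_test_bytes(bw_bps, duration_s):
    """Data moved by a throughput test running at full rate."""
    return bw_bps * duration_s / 8

print(pings_for_loss_accuracy(1e-3), "pings for 10% accuracy at 0.1% loss")
print(throughput_test_bytes(10e9, 60) / 1e9, "GB for a 60 s test at 10 Gbps")
```

The second function reproduces the 75 GB figure in the slide; the first shows why low loss rates make active loss measurement at 1 probe/sec painfully slow.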
Passive measurements <ul><li>Security & privacy concerns </li></ul><ul><ul><li>SNMP access to routers </li></ul></ul><ul><ul><li>Sniffers see all traffic </li></ul></ul><ul><li>Keeping up with capturing and analysis </li></ul><ul><ul><li>Only headers, sampling </li></ul></ul><ul><li>Vast amounts of data, needs excellent data-mining tools </li></ul><ul><li>Gives utilization, retries </li></ul>
Optical <ul><li>Could be whole new playing field, today’s tools no longer applicable: </li></ul><ul><ul><li>No jitter (so packet pair dispersion no use) </li></ul></ul><ul><ul><li>Instrumented TCP stacks a la Web100 may not be relevant </li></ul></ul><ul><ul><li>Layer 1 & 2 switches make traceroute less useful </li></ul></ul><ul><ul><li>Losses so low, ping not viable to measure </li></ul></ul><ul><ul><li>High speeds make some current techniques fail or more difficult (timing, amounts of data etc.) </li></ul></ul>