  • Explain the contents of the presentation. The first section will cover general networking concepts inherent in all computer systems. The next section will focus on MPE/iX networking performance issues and concepts. Finish up with a look at MPE/iX system performance and how it relates to MPE networking.
  • What is performance? Performance is an obscure concept, and it varies from person to person. Bandwidth is the amount of data a networking link can send on a consistent basis. This is affected by the power of the system, the speed of the networking link and the complexity of the network itself. Response time is the time it takes for the user to receive data back from a command sent to a system. This is affected mostly by the complexity of the network and its efficiency in processing data. There is a possible secondary effect if the target system is bogged down for some reason. System performance can affect the network if the system is bogged down and can’t service requests from the network. General networking will look at basic networking performance concepts that are independent of the system. System-specific networking performance encompasses the issues that are affected by the system’s interface with the network.
  • Performance is affected by many different factors. A main factor is the complexity of the network. A simple network setup has few places where performance can be impacted. In this example, the potential problem points are the system, the DTC or the terminal/printer. Each is easy to pinpoint and fix if there is a problem.
  • This network is a high level model of Jeff’s home office setup. It is much more complex, with many more points of failure. Not only are there a system and an end user to contend with, but there are a minimum of 5 problem points in between: a VPN server, the INTERNET and all its connections and routing mechanisms, a server at Jeff’s ISP, Jeff’s home router and Jeff’s hub. Any one of these could have problems that affect the system performance.
  • Complex, far reaching networks require the use of many different types of networking hardware. Three common components, from simplest to most complex, are hubs, switches and routers. Explain each component.
  • What are some common tools to check the integrity of a network? Most common is the PING utility. PING executes a simple check to see if the target system/device is active and responding. An affirmative response and the amount of time it took to respond are the only data returned.
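As a rough illustration of the kind of check described above, here is a small Python sketch that wraps a system ping command on a modern Unix-like workstation (not MPE/iX itself). The -c count flag and the host name from the slide example are assumptions:

```python
# Rough equivalent of the manual PING check: send a few probes and report
# reachability plus average round-trip time.  Assumes a Unix-style ping
# binary that accepts -c for the probe count.
import re
import subprocess

def ping_check(host: str, count: int = 4) -> None:
    result = subprocess.run(
        ["ping", "-c", str(count), host],
        capture_output=True, text=True, timeout=60,
    )
    # Per-reply times are reported as "time=1.2 ms" (or "time<1 ms")
    times = [float(t) for t in re.findall(r"time[=<]([\d.]+)", result.stdout)]
    if times:
        print(f"{host}: {len(times)}/{count} replies, "
              f"avg {sum(times) / len(times):.1f} ms")
    else:
        print(f"{host}: no replies (down, unreachable, or ICMP blocked)")

ping_check("nack.cup.hp.com")   # host taken from the slide example
```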
  • Another common and more complex tool is TRACEROUTE. TRACEROUTE returns the complete path from source to destination and all intermediate stops in between. This is done by finding the incrementing steps on the path, starting with 1 step and increasing until the final destination is reached or the maximum number of hops is reached, whichever happens first. Three attempts are made at each stop, and the amount of time for each attempt is returned. This can show where a segment on a complex network may be having problems.
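A minimal Python sketch of the increasing-TTL idea behind TRACEROUTE, for a Unix-like host. It assumes root privileges for the raw ICMP receive socket and uses the classic UDP-probe approach, not any particular vendor's implementation:

```python
# Each probe is sent with a TTL one larger than the last; the router that
# drops the packet answers with an ICMP "time exceeded" that reveals its
# address.  One probe per hop here (real traceroute sends three).
import socket

def trace(host: str, max_hops: int = 30, port: int = 33434) -> None:
    dest = socket.gethostbyname(host)
    for ttl in range(1, max_hops + 1):
        recv = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
        send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
        recv.settimeout(3.0)
        send.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
        send.sendto(b"", (dest, port))
        try:
            _, addr = recv.recvfrom(512)   # ICMP time-exceeded or port-unreachable
            hop = addr[0]
        except socket.timeout:
            hop = "*"
        finally:
            send.close()
            recv.close()
        print(f"{ttl:2d}  {hop}")
        if hop == dest:                    # reached the final destination
            break

trace("cup.hp.com")                        # target taken from the slide example
```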
  • This is an example of a more complex tracing from a system in California to an HP system in Atlanta.
  • This is yet another complex tracing from California to HP in Bristol England. A study of 3000 pings/traceroutes to connections worldwide showed an average response time from source to destination and back of 150 ms.
  • Routers are the most complex elements of large networks. They are sophisticated computer systems with an operating system that focuses on finding the most efficient path for networking traffic, using internal algorithms to balance speed with line utilization. If possible, connection data is balanced across multiple paths to efficiently utilize networking bandwidth. Many routers contain tools that allow the administrator to monitor internal resource usage in order to keep the router from getting bogged down.
  • Since routers are intelligent computer systems, data is often stored and forwarded to other connections with the use of internal data buffers. The more memory the better, but also the more expensive the router. Higher performance will also cost more, so networking performance needs to be balanced against cost of ownership. The example here was taken from an internal HP router problem from almost a year ago. The middle buffer pool has quite a few misses, showing that buffers were not available. This causes packet drops. If possible, simple limit increases will minimize these misses.
  • Switches don’t have the capability of doing sophisticated routing, but there are some configurations that can affect performance. Make sure all connections into the switch have the proper duplex level set. A mix in duplex mode will lead to potential collisions, packet loss and delays. Auto-negotiation settings vary widely in effectiveness, depending on the switch vendor. If auto-negotiation is suspect, hard-wired duplex and speed settings may be needed. Some switches also allow the linking of multiple paths into a logical trunk. This gives much better performance, but is limited to connections with other devices that support the same trunk architecture.
  • Simple hubs are generally not configurable. Hubs by definition are half-duplex, so devices connected to the hub will need to be communicating at half duplex. If not, packet loss will occur.
  • There are many other possibilities on the software side. HTTP 1.0 doesn’t use persistent connections; HTTP 1.1 does, which eliminates the need for connection startup and teardown. Some networking devices support large data frames known as jumbo frames, which utilize networking bandwidth more efficiently, especially on high-speed gigabit networks. The systems themselves need to be able to keep the networking pipes full. System I/O bus architectures, processor speed, memory utilization and interrupt handling all affect the ability to pump data onto the network. If networking throughput is low compared to network capability and the network seems to be behaving well on its own, a close inspection of the system setup may be in order. Some potential remedies: upgrade to a faster system with a faster bus, add more memory, or, if an MP system is being used, see if it is possible to architect interrupts to be handled on a dedicated processor.
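To make the HTTP 1.0 vs. 1.1 point concrete, here is a small Python sketch that reuses one persistent HTTP/1.1 connection for several requests instead of paying connection startup and teardown each time. The host and paths are placeholders:

```python
# http.client speaks HTTP/1.1 by default, so all three requests below ride
# on one persistent TCP connection.
import http.client

conn = http.client.HTTPConnection("www.example.com", timeout=10)
try:
    for path in ("/", "/index.html", "/status"):
        conn.request("GET", path)
        resp = conn.getresponse()
        resp.read()          # drain the body so the connection can be reused
        print(path, resp.status)
finally:
    conn.close()
```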
  • MPE networking consists of several abstract layers. The F intrinsics are the file intrinsics: FOPEN, FCLOSE, FREAD, FWRITE, etc. All networking links are shared by multiple stacks.
  • MPE 100BT and VG links are designed for full duplex use. VG was the better standard and architecture but lost out due to market forces (the VHS vs. Beta example). Full duplex can be affected by mismatched connections or by application design. An application that isn’t designed for parallel execution of inbound and outbound data will not take full advantage of full duplex capabilities. An MP system can better utilize full duplex by allowing processors to handle concurrent inbound and outbound data.
  • ACC link performance is completely related to the link connectivity being used. Phone speeds and satellite technologies are approaching slow LAN speeds. When using the ACC, it is best to design it as a single access point into the system and not as a general purpose system-to-system connectivity mechanism. It is too slow to have multiple systems accessing each other via an ACC connection.
  • MPE/iX transports include AFCP (Avesta Flow Control Protocol). This is a proprietary protocol that is used to talk to DTCs. It is not routable but is highly efficient for terminal I/O traffic. Different networking setups could require a change in the AFCP retransmission timers. These can be changed in NMMGR with the TUNE DTC option. Normal mode is the default and is used in the majority of cases. Short retransmission mode is used if there is heavy packet loss and data needs to be retransmitted quickly in order to assure good system performance. Long retransmission mode is used for larger networks that may take longer to acknowledge packets; this eliminates the potential for extra packets on the network. Variable timer mode is for when the network is very busy and there are potential collisions among sent packets; the retransmission timer is selected randomly between a lower and an upper limit so as to maximize the chance for successful retransmission. The 1.2 and 2.1 modes are self-explanatory and redundant for current releases of the OS.
  • TCP/IP also has a set of parameters that can be altered for performance, depending on the networking performance and setup. For networks that are losing packets, the retransmission interval can be lowered. If the network is large and far flung, the retransmission interval can be raised. Similar adjustments can be made to the other parameters: the Wait for Response interval is lowered on networks losing packets and raised for long networks.
  • When using BSD sockets, the sending application requires buffer usage. Bigger buffers are better, but at the expense of wasted memory. 1 KB buffers seem to work best in terms of the optimal balance between application efficiency and efficiency in the network. The key is that larger buffers are only usable if the application can use them; 1 KB buffers will do no good if an application needs to consistently do 1-byte sends. It is also important to note that applications that start up and tear down sockets regularly and often will encounter a large performance hit. It is best to keep a pool of sockets, if possible, to make connections to target systems.
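A minimal sketch of the buffering advice above, using plain Python sockets rather than the MPE/iX BSD sockets implementation: small application records are coalesced into roughly 1 KB sends. The host, port and record source are placeholders:

```python
# Instead of many tiny send() calls, application records are coalesced into
# roughly 1 KB writes before hitting the network.
import socket

BUF_SIZE = 1024

def send_records(host: str, port: int, records) -> None:
    """records is any iterable of small byte strings."""
    with socket.create_connection((host, port)) as sock:
        pending = bytearray()
        for rec in records:
            pending += rec
            while len(pending) >= BUF_SIZE:
                sock.sendall(pending[:BUF_SIZE])   # one ~1 KB write on the wire
                del pending[:BUF_SIZE]
        if pending:
            sock.sendall(pending)                  # flush the remainder
```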
  • NetIPC is similar in behavior to BSD sockets, but it is HP proprietary. 1 KB buffers also work well there. It is also important whether the application can use fixed-length buffers for sends and receives. If not, an extra send/receive needs to be done that carries the length of the data to be transferred. This causes extra delay in using the network, a response time delay and an unnecessary set of packets on the network. As with sockets, startup and teardown is expensive; trade this off against managing multiple connection resources.
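Since NetIPC itself is HP proprietary, the following Python socket sketch only illustrates the underlying framing idea: with an agreed fixed record size the receiver reads in one step, while variable-length records need an extra length header to cross the network first. The record size and helper names are assumptions:

```python
import socket
import struct

REC_SIZE = 1024                      # agreed fixed record size (assumed)

def recv_exact(sock: socket.socket, n: int) -> bytes:
    buf = bytearray()
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection")
        buf += chunk
    return bytes(buf)

def recv_fixed(sock: socket.socket) -> bytes:
    # Fixed-length case: the size is known up front, one receive path.
    return recv_exact(sock, REC_SIZE)

def recv_variable(sock: socket.socket) -> bytes:
    # Variable-length case: a 4-byte length header must arrive before the data.
    (length,) = struct.unpack("!I", recv_exact(sock, 4))
    return recv_exact(sock, length)
```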
  • Telnet is open-standard terminal connectivity. It uses inefficient character mode data transfer with delays in character echoing, most noticeable on widespread networks. In block mode application cases (VPLUS), data is sent in one large chunk and performance and response are comparable with VT/DTC connections. For CI-type commands where echoing is involved, Advanced Telnet has an advantage by offloading the echoing to the client Telnet system. To use Advanced Telnet, an MPE/iX patch is needed (available for 6.0, 6.5, 7.0 and 7.5) along with a Telnet client that supports Advanced Telnet. Currently only QCTERM supports it.
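A back-of-the-envelope sketch (not a measurement) of why remote character echo hurts on a wide-area network, using the 150 ms average round trip quoted earlier; the 40-character command length is an arbitrary example:

```python
# With remote echo every keystroke costs one round trip, while a block mode
# screen travels in a single exchange.
RTT_MS = 150          # average WAN round trip quoted earlier in the talk
COMMAND_LEN = 40      # arbitrary example command length

char_mode_ms = COMMAND_LEN * RTT_MS    # one echo round trip per character typed
block_mode_ms = 1 * RTT_MS             # the whole line in one round trip

print(f"character mode: ~{char_mode_ms} ms of echo delay, "
      f"block mode: ~{block_mode_ms} ms")
```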
  • DTC terminal I/O is efficient by design but requires expensive hardware and a proprietary networking stack that doesn’t mix with TCP/IP based solutions. There is a program on MPE/iX, DTSTUNEB, that allows adjusting the data buffer allocation for TIO. All data buffers come from a common pool, and a greedy device could use all the buffers. Limits are placed on each device as to how many data buffers it can use, and the global data buffer pool is adjustable. The change is made in the NMCONFIG file and requires a DTC shutdown/restart to take effect.
  • How can system performance be affected by the networking stacks? Data structures are used by all networking connections. Data buffers are used for sending and receiving data over the network. Timers are used in multiple places for retransmission of packets and keeping connections up. Timers can be expensive, and managing many timers can inflict delays on the system. The system manager needs to truly understand networking needs and eliminate redundant, unneeded connections. On a smaller system, busy connections can exhaust resources. In some cases, creating “dummy” devices can expand the resource pool to keep active connections from starving.
  • Newer MPE systems can spend a lot of time servicing interrupts on a busy networking system. Older networking link adapters were smart and could do more work, which offloaded work from the system; however, these adapters were expensive. Cheaper links are more cost efficient, but “dumb” links require the system to do more of the work. If there is a lot of networking traffic, the system can get bogged down handling networking interrupts. Solutions include a faster CPU or potentially more memory. Also, forcing interrupts onto a dedicated CPU can help with efficiency on an MP system.
  • Different types of connectivity can also affect networking performance. DTC connections are the most efficient but are limited in application type and networking scope. VT is also efficient but requires a dedicated process to handle VT-specific connections for each user. This extra process can sometimes push system limits, depending on the type of system and the number of connections.
  • Telnet is the least efficient because of its open standards support. Block mode on Telnet is comparable to VT, at about a 90% efficiency level. CI-type character commands are worse and bring the efficiency comparison down to about 70%.

Transcript

  • 1. Network Performance: An MPE/iX Overview Jeff Bandle HP MPE/iX Networking Architect
  • 2. CONTENTS
    • General Networking
      • Common networking terms
      • Networking concepts independent of MPE/iX
    • MPE/iX Specific Networking
      • Overview of MPE/iX networking stacks
      • Ideas for performance changes on MPE/iX
    • System Performance
      • How MPE/iX networking performance affects the system
  • 3. INTRODUCTION
    • What is performance?
      • Bandwidth
      • Response time
      • System
    • General Networking vs. System Specific Networking
  • 4. GENERAL NETWORKING
    • Network setup complexity is a factor
      • Simple Network – Fewer layers to propagate data
    [Diagram: e3000 -> DTC -> Terminal/Printer]
  • 5. GENERAL NETWORKING
    • Network setup complexity
      • Complex network – More layers/hardware delays data propagation
      • Study of “pings” to 3000 international sites – 150 ms avg.
    [Diagram: e3000 -> VPN Server -> THE INTERNET -> ISP Server -> Home Router -> Hub -> Work PC]
  • 6. GENERAL NETWORKING
    • Use of Routers, Switches and Hubs
      • Hub - A hub is a small, simple, inexpensive device that joins multiple computers together at a low-level network protocol layer.
      • Switch - A switch is a small device that joins multiple computers together at a low-level network protocol layer. Technically, switches operate at layer two (Data Link Layer) of the OSI model.
      • Router - A router is a physical device that joins multiple networks together. Technically, a router is a "layer 3 gateway," meaning that it connects networks (as gateways do), and that it operates at the network layer of the OSI model.
  • 7. GENERAL NETWORKING
    • Common tools to check complex networking
      • Ping
        • ping [-oprv] [-i address] [-t ttl] host packet-size [[-n] count]
        • : ping nack.cup.hp.com
        • PING nack.cup.hp.com: 64 byte packets
        • 64 bytes from 15.13.195.50: icmp_seq=0. time=1. ms
        • 64 bytes from 15.13.195.50: icmp_seq=1. time=1. ms
        • 64 bytes from 15.13.195.50: icmp_seq=2. time=1. ms
        • 64 bytes from 15.13.195.50: icmp_seq=3. time=1. ms
        • 64 bytes from 15.13.195.50: icmp_seq=4. time=1. ms
        • 64 bytes from 15.13.195.50: icmp_seq=5. time=1. ms
        • 64 bytes from 15.13.195.50: icmp_seq=6. time=1. ms
        • 64 bytes from 15.13.195.50: icmp_seq=7. time=1. ms
  • 8. GENERAL NETWORKING
    • Common tools to check complex networking
      • Traceroute
        • traceroute [-dnrv] [-w wait] [-m max_ttl] [-p port#] [-q nqueries] [-s src_addr] host [data size]
        • traceroute to cup.hp.com (15.75.208.53), 30 hops max, 20 byte packets
        • 1 cup47amethyst-oae-gw2.cup.hp.com (15.244.72.1) 1 ms 1 ms 1 ms
        • 2 hpda.cup.hp.com (15.75.208.53) 1 ms 1 ms 1 ms
  • 9. GENERAL NETWORKING
    • Traceroute (cont)
    • traceroute to atl.hp.com (15.45.88.30), 30 hops max, 20 byte packets
    • 1 cup47amethyst-oae-gw2.cup.hp.com (15.244.72.1) 1 ms 1 ms 1 ms
    • 2 cup44-gw.cup.hp.com (15.13.177.65) 1 ms 1 ms 1 ms
    • 3 cupgwb01-legs1.cup.hp.com (15.61.211.71) 1 ms 1 ms 1 ms
    • 4 palgwb02-p7-4.americas.hp.net (15.243.170.45) 2 ms 1 ms 1 ms
    • 5 atlgwb02-p6-1.americas.hp.net (15.235.138.17) 60 ms 60 ms 60 ms
    • 6 atlgwb03-vbb102.americas.hp.net (15.227.140.7) 60 ms 60 ms 60 ms
    • 7 atldcrfc5.tio.atl.hp.com (15.41.16.205) 61 ms 60 ms 60 ms
    • 8 i3107at1.atl.hp.com (15.45.88.34) 60 ms 60 ms 60 ms
  • 10. GENERAL NETWORKING
    • Traceroute (cont)
    • traceroute to www-dev.bri.hp.com (15.144.120.100), 30 hops max, 20 byte packets
    • 1 cup47amethyst-oae-gw2.cup.hp.com (15.244.72.1) 1 ms 1 ms 1 ms
    • 2 cup44-gw.cup.hp.com (15.13.177.65) 1 ms 1 ms 1 ms
    • 3 cupgwb01-legs1.cup.hp.com (15.61.211.71) 1 ms 1 ms 1 ms
    • 4 palgwb02-p7-4.americas.hp.net (15.243.170.45) 2 ms 2 ms 1 ms
    • 5 atlgwb02-p6-1.americas.hp.net (15.235.138.17) 60 ms 60 ms 61 ms
    • 6 15.227.138.42 (15.227.138.42) 183 ms 204 ms 183 ms
    • 7 bragwb02.europe.hp.net (15.203.204.2) 183 ms 184 ms 184 ms
    • 8 15.203.202.18 (15.203.202.18) 188 ms 227 ms 188 ms
    • 9 15.144.16.4 (15.144.16.4) 189 ms 188 ms 189 ms
    • 10 www-dev.bri.hp.com (15.144.120.100) 189 ms 188 ms 188 ms
  • 11. GENERAL NETWORKING
    • Hardware Potential Performance Changes
      • Routers –
        • Use router tools to analyze networking traffic
        • Readjust traffic loads to balance across different connections (if possible)
        • Use tools to verify memory usage is not being compromised for connections
  • 12. GENERAL NETWORKING
    • Hardware Potential Performance Changes
      • Routers –
        • Since routers have intelligence inside of them, data is stored in buffers
        • Common performance problems related to buffer allocation
        • Middle buffers, 600 bytes (total 150, permanent 25):
          147 in free list (10 min, 150 max allowed)
          61351931 hits, 137912 misses, 51605 trims, 51730 created
          91652 failures (0 no memory)
          • permanent : take the number of total buffers in a pool and add about 20%.
          • min-free : set min-free to about 20-30% of the permanent number of allocated buffers in the pool.
          • max-free : set max-free to something greater than the sum of permanents and minimums
          • buffer middle permanent 180
          • buffer middle min-free 50
          • buffer middle max-free 235
          • Adjust for traffic burst
            • Slow traffic – Min free goes up
            • Fast traffic – Permanent goes up
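As a small illustration of working with this kind of output, here is a Python sketch that pulls the hit and miss counters out of a "show buffers"-style report like the one above and flags pools with a noticeable miss rate. The wording matched by the regular expression and the 0.1% threshold are assumptions:

```python
import re

REPORT = """\
Middle buffers, 600 bytes (total 150, permanent 25):
     147 in free list (10 min, 150 max allowed)
     61351931 hits, 137912 misses, 51605 trims, 51730 created
     91652 failures (0 no memory)
"""

# Find each pool's hit/miss counters and compute its miss rate.
for pool, hits, misses in re.findall(
        r"(\w+) buffers.*?(\d+) hits, (\d+) misses", REPORT, re.S):
    hits, misses = int(hits), int(misses)
    rate = misses / (hits + misses)
    note = "  <- consider raising permanent/min-free" if rate > 0.001 else ""
    print(f"{pool} pool: miss rate {rate:.3%}{note}")
```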
  • 13. GENERAL NETWORKING
    • Hardware Potential Performance Changes
      • Switches
        • Dependent on type and brand, changeable parameters vary
          • Change speed (10/100/1000 mbps) to match other devices
          • Change Duplex level (Half/Full to relieve conflicts)
          • Autonegotiation isn’t fully foolproof (if possible, nail down port parameters)
          • Link multiple ports together in a trunk (not all switches)
            • Limited to direct connections with peer switch
  • 14. GENERAL NETWORKING
    • Hardware Potential Performance Changes
      • Hubs
        • Hubs usually don’t have parameters that can be changed for performance
        • If they are bundled with a switch, use switch information to make changes
        • Most hubs, by default, are half-duplex in operation
          • Need to validate that connections into the hub are half duplex
  • 15. GENERAL NETWORKING
    • Other Potential Issues
      • Difference in software standards
        • HTTP 1.0 vs. HTTP 1.1 – Persistent connections
        • Large data frames – Not standard in all hardware
      • Systems need to work to keep “pipes” full
        • Introduction of Fibre Channel is starting to push 100BT
    • Other places for tips and tricks
      • www.web100.org - Pointers to tools for performance analysis
      • www.compnetworking.about.com – High level info on networks
      • www.practicallynetworked.com - SOHO networking information
  • 16. MPE/iX SPECIFIC NETWORKING
    • MPE/iX Networking Stacks Made of Multiple Layers
    F Intrinsics Sockets/NetIPC APIs ADCP AFCP Telnet TCP/IP/UDP Network Links
  • 17. MPE/iX SPECIFIC NETWORKING
    • MPE/iX Networking Stack – Links
      • 100BT/100VG – Full Duplex vs. Half Duplex
        • Full Duplex allows for send and receive traffic at the same time
        • 100VG had some advantages but lost out on the marketing side (VHS vs. Beta)
        • Full Duplex can be affected by connections
        • Full Duplex can be affected by application design
        • MP systems also affect Full Duplex behavior
  • 18. MPE/iX SPECIFIC NETWORKING
    • MPE/iX Networking Stack – Links
      • ACC – WAN Link
        • Speeds limited by connection medium
          • Phone speeds and satellite technologies – 2 mbps possible
        • Best used as an access point into a network, not as interconnect between systems.
  • 19. MPE/iX SPECIFIC NETWORKING
    • MPE/iX Networking Stack – Transports
        • AFCP – Used to communicate with DTC device – HP Proprietary
          • Configuration within NMMGR to change parameters
          • After selecting DTC to configure, select TUNE DTC option
          • Set 1: Normal timer mode
          • Set 2: Short retransmission timer mode
          • Set 3: Long retransmission timer mode
          • Set 4: Variable timer mode
          • Set 5: MPE XL Release 1.2 timer mode
          • Set 6: MPE XL Release 2.1 timer mode
  • 20. MPE/iX SPECIFIC NETWORKING
    • MPE/iX Networking Stack – Transports
        • TCP/IP – Used to communicate with open standards based devices
          • Configuration with NMMGR
          • Within the NS->UNGUIDED CONFIG->NETXPORT->GPROT->TC
          • [1024] Maximum Number of Connections
          • [2] Retransmission Interval Lower Bound (Secs)
          • [180] Maximum Time to Wait For Remote Response (Sec)
          • [4] Initial Retransmission Interval (Secs)
          • [4] Maximum Retransmissions per Packet
          • [600 ] Connection Assurance Interval (Secs)
          • [4 ] Maximum Connection Assurance Retransmissions
  • 21. MPE/iX SPECIFIC NETWORKING
    • MPE/iX Networking Stack – APIs
      • Sockets – Standards based networking connectivity interface
        • Sending data requires use of data buffers
        • Tradeoff between efficiency in application and efficiency in networking
        • Studies seem to point to 1k byte buffers being optimal balance
        • Only works if application can package data.
        • Connection startup/teardown is expensive – AVOID IF POSSIBLE
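A minimal sketch of the "avoid startup/teardown" advice above: keep a small pool of already-open sockets to the target system and reuse them per request. The pool size, host/port and the simple one-request/one-reply exchange are assumptions:

```python
import queue
import socket

class SocketPool:
    def __init__(self, host: str, port: int, size: int = 4) -> None:
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(socket.create_connection((host, port)))

    def exchange(self, payload: bytes) -> bytes:
        sock = self._pool.get()           # borrow an already-open connection
        try:
            sock.sendall(payload)
            return sock.recv(4096)        # assumes one reply per request
        finally:
            self._pool.put(sock)          # return it instead of closing it

    def close(self) -> None:
        while not self._pool.empty():
            self._pool.get().close()
```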
  • 22. MPE/iX SPECIFIC NETWORKING
    • MPE/iX Networking Stack – APIs
      • NetIPC – HP Proprietary networking connectivity interface
        • Similar to open standards sockets
        • 1k byte buffers are optimal if application allows
        • Fixed-length data blocks remove the need to negotiate buffer length
          • Eliminates an extra IPCRECEIVE call to get the length of the data
        • Connection startup/teardown is expensive – AVOID IF POSSIBLE
  • 23. MPE/iX SPECIFIC NETWORKING
    • MPE/iX Networking Stack – Services
      • Telnet – Open standards terminal connectivity
        • Based on very inefficient 1-character transfer mode
        • Most common complaint is character echo response
        • Block mode response is comparable to VT/DTC
        • Character echo improved with Advanced Telnet functionality
          • Requires terminal emulator that supports it.
          • QCTERM as an example
  • 24. MPE/iX SPECIFIC NETWORKING
    • MPE/iX Networking Stack – Services
      • DTC TIO/ADCP – HP Proprietary terminal connectivity
        • Efficient block mode data transfer
        • Higher cost due to needing DTCs and special applications
        • DTSTUNEB can be used to adjust buffer parameters
          • WARNING – Do so at your own risk.
          • Change total number of data buffers created - # per ldev
          • Change maximum number of data buffers useable per ldev – 24 is default
  • 25. MPE/iX SYSTEM PERFORMANCE to NETWORKING
    • Networking connections use resources
      • Data structures for each socket/NetIPC connection
      • Data buffers for each DTC/Telnet connection
      • Timer structures used by all layers
      • Busy connections on small systems can exhaust resources
        • “Fake” system by creating more “dummy” devices
  • 26. MPE/iX SYSTEM PERFORMANCE to NETWORKING
    • System is very busy servicing interrupts
      • Tradeoff between “smart” cards and “dumb” cards
        • Network adapters could do more work
        • Newer cards are cheaper, but system needs to do processing
        • High LAN traffic situations see this more often as a problem
        • Solution is to get more CPU
        • Efficiencies have been introduced into MPE/iX stacks
  • 27. MPE/iX SYSTEM PERFORMANCE to NETWORKING
    • Connectivity mix can affect system performance
      • VT vs. DTC vs. Telnet
        • DTC is most efficient
          • Handles data away from the system
          • Very few data transfers per I/O request
        • VT is also efficient because it is HP proprietary
          • Has limits because of sitting on TCP/IP stack
          • Requires driver applications on sending and receiving systems
  • 28. MPE/iX SYSTEM PERFORMANCE to NETWORKING
    • Connectivity mix can affect system performance (cont)
        • Telnet is least efficient because of need to support open standards
          • Block mode applications (VPLUS) comparable to VT
            • Telnet is 90% as efficient as VT in block mode
          • CI commands most overhead for Telnet – 1 character at a time response
            • Telnet is 70% as efficient as VT in character mode
  • 29. MPE/iX SYSTEM PERFORMANCE to NETWORKING
    • Check for application type with regards to I/O
      • Block mode access vs. character mode access
      • Internal studies show that frame size is either:
        • Very small - < 140 bytes
        • Max value – 1500 bytes
        • Nothing in between
      • If many character mode applications are being used, the system network will bog down
      • Move to block mode alternative, higher CPU speeds or offload to other systems
  • 30. MPE/iX SYSTEM PERFORMANCE to NETWORKING
    • Check for application type with regards to I/O (cont)
      • Check to see how networking connections are being made
      • Multiple starts/shutdowns for connection are EXPENSIVE
      • On a small 918-class system, a 15-user test FAFFed the system
        • Higher CPU
        • Different connectivity methods
        • More memory
  • 31. WRAPUP
    • If you suspect networking performance problems, what can you do?
      • Characterize problem – can’t connect, lost packets, system is bogging down
      • Understand where heaviest use is coming from
        • Single application use – Can application/parms be tweaked to ease performance pressure?
        • Multiple users rapidly connecting to the system – Can users be directed to connect by differing methods?
        • Network is experiencing problems – Isolate segment that is causing problem
          • Check router, switch for potential problems
  • 32.