Explain the contents of the presentation. The first section will cover general networking concepts inherent in all computer systems. The next section will focus on MPE/iX networking performance issues and concepts. We finish up with a look at MPE/iX system performance and how it relates to MPE networking.
What is performance? Performance is a subjective concept and it varies from person to person. Bandwidth is the amount of data a networking link can carry on a consistent basis. It is affected by the power of the system, the speed of the networking link and the complexity of the network itself. Response time is the time it takes for the user to receive data back from a command sent to a system. It is affected mostly by the complexity of the network and its efficiency at processing data. There is a possible secondary effect if the target system is bogged down for some reason. System performance can affect the network if the system is bogged down and can't service requests from the network. General networking will look at basic networking performance concepts that are independent of the system. System-specific networking performance encompasses the issues that are affected by the system's interface with the network.
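The relationship between bandwidth and response time can be made concrete with a rough back-of-the-envelope model (a sketch, not a measurement; the one-round-trip assumption and the example numbers are illustrative only):

```python
def response_time(payload_bytes, bandwidth_bps, round_trip_latency_s):
    """Rough response-time model: one round trip of network latency
    plus the time to push the payload through the link."""
    transfer_s = (payload_bytes * 8) / bandwidth_bps
    return round_trip_latency_s + transfer_s

# 100 KB reply over a 10 Mbit/s link with 150 ms round-trip latency:
# latency dominates small replies; bandwidth dominates large ones.
t = response_time(100_000, 10_000_000, 0.150)
print(f"{t:.3f} s")
```

This is why a "fast" link can still feel slow to an interactive user: for small exchanges the fixed latency term swamps the bandwidth term.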
Performance is affected by many different factors. A main factor is the complexity of the network. A simple network setup has few places where performance can be impacted. In this example, the potential problem points are the system, the DTC and the terminal/printer. Each is easy to pinpoint and fix if there is a problem.
This network is a high-level model of Jeff's home office setup. It is much more complex, with many more points of failure. Not only are there a system and an end user to contend with, but there are a minimum of 5 problem points in between: a VPN server, the INTERNET and all its connections and routing mechanisms, a server at Jeff's ISP, Jeff's home router and Jeff's hub. Any one of these could have problems that affect overall performance.
Complex, far-reaching networks require the use of many different types of networking hardware. Three common components, from simplest to most complex, are hubs, switches and routers. Explain each component.
What are some common tools to check the integrity of a network? The most common is the PING utility. PING executes a simple check to see whether the target system/device is active and responding. An affirmative response and the amount of time it took to respond are the only data returned.
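The idea behind PING can be sketched in a few lines. Real ping uses ICMP, which needs raw-socket privileges, so this self-contained sketch approximates it by timing a TCP connect to a listener we create ourselves (the host/port here are illustrative, not part of any real setup):

```python
import socket
import time

def tcp_probe(host, port, timeout=2.0):
    """Rough reachability check: measure the round-trip time of a
    TCP connect as a stand-in for an ICMP echo request/reply."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True, (time.monotonic() - start) * 1000.0  # ms
    except OSError:
        return False, None

# Probe a local listener we control so the example is self-contained.
server = socket.socket()
server.bind(("127.0.0.1", 0))   # OS picks a free port
server.listen(1)
ok, rtt_ms = tcp_probe("127.0.0.1", server.getsockname()[1])
print(ok, rtt_ms)
server.close()
```

As with real PING, the only information returned is "did it answer" and "how long did it take".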
Another common and more complex tool is TRACEROUTE. TRACEROUTE returns the complete path from source to destination and all intermediate stops in between. It does this by probing incrementally along the path, starting with a hop limit of 1 and increasing until the final destination is reached or the maximum number of hops is reached, whichever happens first. Three probes are attempted at each stop and the amount of time for each attempt is returned. This can show where a segment on a complex network may be having problems.
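The incrementing-TTL control loop described above can be modeled without raw sockets. In this sketch, `probe(ttl, attempt)` stands in for sending a real packet with that TTL; the fake path and addresses are made up for illustration:

```python
def trace(probe, destination, max_hops=30, attempts=3):
    """Model of traceroute's control loop: probe with TTL 1, 2, ...
    until the destination answers or max_hops is exceeded.
    probe(ttl, attempt) returns the replying address or None on timeout."""
    path = []
    for ttl in range(1, max_hops + 1):
        replies = [probe(ttl, a) for a in range(attempts)]  # 3 tries per hop
        hop = next((r for r in replies if r is not None), None)
        path.append(hop or "*")          # "*" marks a silent hop
        if hop == destination:
            return path
    return path

# A fake 4-hop path standing in for real network probes.
fake_path = ["10.0.0.1", "192.0.2.1", "198.51.100.9", "203.0.113.5"]
def fake_probe(ttl, attempt):
    return fake_path[ttl - 1] if ttl <= len(fake_path) else None

print(trace(fake_probe, "203.0.113.5"))
```

A hop that consistently shows "*" or sharply higher times is the segment to investigate.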
This is an example of a more complex tracing from a system in California to an HP system in Atlanta.
This is yet another complex tracing from California to HP in Bristol England. A study of 3000 pings/traceroutes to connections worldwide showed an average response time from source to destination and back of 150 ms.
Routers are the most complex elements of large networks. They are sophisticated computer systems with an operating system that focuses on finding the most efficient path for networking traffic, using internal algorithms to balance speed with line utilization. If possible, connection data is balanced across multiple paths to efficiently utilize networking bandwidth. Many routers contain tools that allow the administrator to monitor internal resource usage in order to keep the router from getting bogged down.
Since routers are intelligent computer systems, data is often stored and forwarded to other connections with the use of internal data buffers. The more memory the better, but also the more expensive the router. Higher performance will also cost more, so networking performance needs to be balanced against cost of ownership. The example here was taken from an internal HP router problem from almost a year ago. The middle buffer pool has quite a few misses, showing that buffers were not available. This causes packet drops. If possible, simple limit increases will minimize these misses.
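The hit/miss accounting in a router buffer pool can be illustrated with a toy model (this is a sketch of the general concept, not any vendor's actual implementation):

```python
class BufferPool:
    """Toy model of a router buffer pool: a fixed number of buffers,
    counting hits (buffer available) and misses (pool exhausted,
    which on a real router typically means a dropped packet)."""
    def __init__(self, limit):
        self.limit = limit
        self.in_use = 0
        self.hits = 0
        self.misses = 0

    def acquire(self):
        if self.in_use < self.limit:
            self.in_use += 1
            self.hits += 1
            return True
        self.misses += 1          # no buffer free: packet dropped
        return False

    def release(self):
        self.in_use -= 1

pool = BufferPool(limit=4)
burst = [pool.acquire() for _ in range(6)]   # 6 packets arrive in a burst
print(pool.hits, pool.misses)                # 4 hits, 2 misses
```

Raising the pool limit (more memory) directly reduces the miss count, which is exactly the "simple limit increase" remedy mentioned above.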
Switches don't have the capability of doing sophisticated routing, but there are some configurations that can affect performance. Make sure all connections into the switch have the proper duplex level set. A mix of duplex modes will lead to potential collisions, packet loss and delays. Auto-negotiation settings vary widely in effectiveness, depending on the switch vendor. If auto-negotiation is suspect, hard-wired duplex and speed settings may be needed. Some switches also allow the linking of multiple paths into a logical trunk. This gives much better performance, but connections are limited to other devices that support the same trunk architecture.
Simple hubs are not configurable. Hubs by definition are half duplex, so devices connected to the hub will need to be communicating at half duplex. If not, packet loss will occur.
There are many other possibilities on the software side. HTTP 1.0 doesn't use persistent connections; HTTP 1.1 does, which eliminates the need for connection startup and teardown on every request. Some networking devices support large data frames known as jumbo frames, which utilize networking bandwidth more efficiently, especially on high-speed, gigabit networks. The systems themselves need to be able to keep the networking pipes full. System I/O bus architectures, processor speed, memory utilization and interrupt handling all affect the ability to pump data onto the network. If networking throughput is low compared to network capability and the network seems to be behaving well on its own, a close inspection of the system setup may be in order. Some potential remedies? Upgrade to a faster system with a faster bus. Add more memory. If an MP system is being used, see if it is possible to architect interrupts to be handled on a dedicated processor.
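The persistent-connection benefit can be quantified with a simple cost model (the "one extra round trip per new TCP connection" assumption is a rough approximation; teardown cost is ignored for simplicity):

```python
def round_trips(requests, persistent):
    """Rough model: each request/response costs 1 round trip;
    a non-persistent connection (HTTP 1.0 style) adds ~1 extra
    round trip per request for the TCP connection setup."""
    per_request = 1 if persistent else 2
    return requests * per_request

n = 20  # e.g. a page with 20 embedded objects
print("HTTP/1.0 style:", round_trips(n, persistent=False))
print("HTTP/1.1 style:", round_trips(n, persistent=True))
```

Halving the round trips matters most on high-latency paths, where each round trip is expensive.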
MPE networking consists of several abstract layers. The F intrinsics are the file intrinsics: FOPEN, FCLOSE, FREAD, FWRITE, etc. All networking links are shared by multiple stacks.
The MPE 100 BT and VG links were designed for full duplex use. VG was a better standard and architecture but lost out due to market forces (the VHS vs. Beta example). Full duplex can be defeated by mismatched connections or by application design. An application that isn't designed for parallel execution of inbound and outbound data will not take full advantage of full duplex capabilities. An MP system can better utilize full duplex by allowing processors to handle concurrent inbound and outbound data.
ACC link performance is completely related to the link connectivity being used. Phone speeds and satellite technologies are approaching slow LAN speeds. When using the ACC, it is best to design it as a single point into the system and not as a general-purpose system-to-system connectivity mechanism. It is too slow to have multiple systems accessing each other via an ACC connection.
MPE/iX transports include AFCP (Avesta Flow Control Protocol). This is a proprietary protocol used to talk to DTCs. It is not routable but is highly efficient for terminal I/O traffic. Different networking setups could require a change in the AFCP retransmission timers. These can be changed in NMMGR with the TUNE DTC option. Normal mode is the default and is used in the majority of cases. Short retransmission mode is used if there is heavy packet loss and data needs to be retransmitted quickly in order to assure good system performance. Long retransmission mode is used for larger networks that may take longer to acknowledge packets; this eliminates the potential for extra packets on the network. Variable timer mode is for when the network is very busy and there are potential collisions among sent packets. The retransmission interval is selected randomly between a lower and upper limit so as to maximize the chance of a successful retransmission. The 1.2 and 2.0 modes are self-explanatory and redundant on current releases of the OS.
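The variable timer idea, picking a retransmission interval at random between a lower and upper limit so colliding senders don't retransmit in lockstep, can be sketched like this (the limits shown are illustrative, not AFCP's actual values):

```python
import random

def variable_retransmit_timer(lower_ms, upper_ms):
    """Sketch of a variable-timer retransmission scheme: choose the
    interval uniformly at random within [lower_ms, upper_ms] so that
    two senders whose packets collided are unlikely to collide again."""
    return random.uniform(lower_ms, upper_ms)

# Five retransmitting senders each pick their own interval.
timers = [variable_retransmit_timer(100, 400) for _ in range(5)]
print(timers)
```

With a fixed timer, senders that collided once would all retransmit at the same instant and collide again; randomizing breaks that synchronization.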
TCP/IP also has a set of parameters that can be altered for performance, depending on the networking performance and setup. For networks that are losing packets, the retransmission interval can be lowered. If the network is large and far flung, the retransmission interval can be raised. Similar adjustments can be made to the other parameters. The Wait for Response interval is lowered on networks losing packets and raised for long networks.
When using BSD sockets, the sending application controls buffer usage. Bigger buffers are better, but at the expense of wasted memory. 1 KB buffers seem to work best in terms of the optimal balance between application efficiency and efficiency on the network. The key is that larger buffers are only usable if the application can actually fill them: 1 KB buffers will do no good if an application needs to consistently do 1-byte sends. It is also important to note that applications that start up and tear down sockets regularly and often will incur a large performance hit. It is best to keep a pool of sockets, if possible, to make connections to target systems.
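The buffer-size effect comes down to how many send calls are needed to move the same data; each call carries fixed per-call and per-packet overhead. A minimal sketch:

```python
def sends_needed(total_bytes, buffer_bytes):
    """Number of send() calls needed to move total_bytes when the
    application fills a buffer of buffer_bytes per call. Fewer calls
    means less per-call overhead on the system and the network."""
    return -(-total_bytes // buffer_bytes)   # ceiling division

payload = 64_000
print(sends_needed(payload, 1))      # 1-byte sends: one call per byte
print(sends_needed(payload, 1024))   # 1 KB buffers: ~1000x fewer calls
```

This is also why the 1 KB figure only helps if the application can batch its data: an interactive program that genuinely produces one byte at a time still pays the per-call cost regardless of the buffer size it allocates.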
NetIPC is similar in behavior to BSD sockets, but it is HP proprietary. 1 KB buffers also work well there. It is also important for the application to use fixed-length buffers for sends and receives if it can. If not, an extra send/receive needs to be done that carries the length of the data to be transferred. This causes extra delay in using the network, a response time delay and an unnecessary set of packets on the network. As with sockets, startup and teardown are expensive. Trade this off against managing multiple connection resources.
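The "extra send carrying the data length" pattern for variable-length transfers is essentially length-prefixed framing. A generic sketch (this illustrates the concept, not NetIPC's actual wire format):

```python
import struct

def frame(payload: bytes) -> bytes:
    """Variable-length message: prefix the data with a fixed 4-byte
    big-endian length header, the moral equivalent of the extra
    send/receive that carries the transfer length."""
    return struct.pack("!I", len(payload)) + payload

def unframe(wire: bytes) -> bytes:
    """Read the 4-byte length header, then extract exactly that
    many bytes of payload."""
    (length,) = struct.unpack("!I", wire[:4])
    return wire[4:4 + length]

msg = b"hello, NetIPC"
assert unframe(frame(msg)) == msg
print(len(frame(msg)))   # header adds 4 bytes to every message
```

With fixed-length buffers the receiver already knows how much to read, so the header (and the extra exchange it represents) disappears entirely.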
Telnet is open standard terminal connectivity. Character mode data transfer is inefficient, with delays in character echoing that are most noticeable in widespread networks. In block mode application cases (VPLUS), data is sent in one large chunk and performance and response are comparable with VT/DTC connections. For CI type commands where echoing is involved, Advanced Telnet has an advantage by offloading the echoing to the client Telnet system. To use Advanced Telnet, an MPE/iX patch is required (available for 6.0, 6.5, 7.0 and 7.5) along with a Telnet client that supports Advanced Telnet. Currently only QCTERM supports it.
DTC terminal I/O is efficient by design but requires expensive hardware and a proprietary networking stack, and it doesn't mix with TCP/IP based solutions. There is a program on MPE/iX, DTSTUNEB, that allows the adjusting of data buffer allocation for terminal I/O. All data buffers come from a common pool, and a greedy device could use all the buffers. Limits are placed on each device as to how many data buffers it may use, and the global data buffer pool is adjustable. The change is made in the NMCONFIG file and requires a DTC shutdown/restart to take effect.
How can system performance be affected by the networking stacks? Data structures are used by all networking connections. Data buffers are used for sending and receiving data over the network. Timers are used in multiple places for retransmission of packets and for keeping connections up. Timers can be expensive, and managing many timers can inflict delays on the system. The system manager needs to truly understand networking needs and eliminate redundant, unneeded connections. On a smaller system, a few busy connections can exhaust resources. In some cases, creating "dummy" devices can expand the resource pool to keep active connections from starving.
Newer MPE systems can spend a lot of time servicing interrupts on a busy networking system. Older networking link adapters were smart and could do more work themselves, which offloaded work from the system. However, these adapters were expensive. Cheaper links were more cost efficient, but "dumb" links require the system to do more of the work. If there is a lot of networking traffic, the system can get bogged down handling networking interrupts. Solutions include a faster CPU or potentially more memory. Also, forcing interrupts onto a dedicated CPU can help with efficiency on an MP system.
Different types of connectivity can also affect networking performance. DTC connections are the most efficient but limited in application type and networking scope. VT is also efficient but requires a dedicated process to handle VT-specific connections for each user. This extra process can sometimes push system limits, depending on the type of system and the number of connections.
Telnet is the least efficient because of its open standards support. Block mode on Telnet is comparable to VT, with about a 90% efficiency level. CI type character commands are worse and bring the efficiency comparison down to about 70%.
Network Performance: An MPE/iX Overview Jeff Bandle HP MPE/iX Networking Architect
Hub - A hub is a small, simple, inexpensive device that joins multiple computers together at a low-level network protocol layer. Technically, hubs operate at layer one (Physical Layer) of the OSI model.
Switch - A switch is a small device that joins multiple computers together at a low-level network protocol layer. Technically, switches operate at layer two (Data Link Layer) of the OSI model.
Router - A router is a physical device that joins multiple networks together. Technically, a router is a "layer 3 gateway," meaning that it connects networks (as gateways do), and that it operates at the network layer of the OSI model.