
Ch24 system administration




  1. 1. Performance Analysis Chapter 24
  2. 2. Chapter Goals <ul><li>Understand the basic terminology of performance monitoring and analysis. </li></ul><ul><li>Understand proper methods of monitoring a system’s performance. </li></ul><ul><li>Knowledge of the tools that allow you to monitor system performance. </li></ul><ul><li>Understand how to analyze the data provided by the monitoring tools. </li></ul><ul><li>Understand how to apply the data to improve system performance. </li></ul><ul><li>Understand what to tune, and why to tune it. </li></ul>
  3. 3. General Performance Tuning Rules <ul><li>Right-size the system to start with. </li></ul><ul><ul><li>You do not want to start with an overtaxed system with the intention of providing a turbo-charged service. UNIX is very demanding on hardware. UNIX generally provides each process with (the illusion of) unlimited resources. This often leads to problems when system resources are taxed. Windows operating systems and applications often understate system requirements. The OS and/or applications will operate in a sparse environment, but the performance is often abysmal. </li></ul></ul>
  4. 4. General Performance Tuning Rules <ul><li>Determine the hardware requirements of specific types of servers. </li></ul><ul><ul><li>Generally, e-mail and web servers require high-throughput network links, and medium to large memory capacity. Mail servers typically require significantly more disk space than web servers. Database servers typically require large amounts of memory, high capacity, high-speed disk systems, and significant processing elements. Timeshare systems require significant processing elements, and large amounts of memory. </li></ul></ul>
  5. 5. General Performance Tuning Rules <ul><li>Monitor critical systems from day one in order to get a baseline of what “normal” job mixes and performance levels are for each system. </li></ul><ul><li>Before making changes to a system configuration, make sure user jobs are not causing problems. </li></ul><ul><ul><li>Check for rabbit jobs, users running too many jobs, or jobs of an inappropriate size on the system. </li></ul></ul><ul><li>A performance problem may be temporary, so you need to think through any changes before you implement them. </li></ul><ul><ul><li>You might also want to discuss proposed changes with other system administrators as a sanity check. </li></ul></ul>
  6. 6. General Performance Tuning Rules <ul><li>Once you are ready to make changes, take a scientific approach to implementing them. </li></ul><ul><ul><li>You want to ensure that the impact of each change is independently measurable. </li></ul></ul><ul><ul><li>You also want to make sure you have a goal in mind, at which point you stop tuning and move on to other projects. </li></ul></ul><ul><li>Before you begin making changes to the system, consider the following. </li></ul><ul><ul><li>Always know exactly what you are trying to achieve. </li></ul></ul><ul><ul><li>Measure the current system performance before making any changes. </li></ul></ul><ul><ul><li>Make one change at a time. </li></ul></ul>
  7. 7. Change Rules <ul><ul><li>Once you do make a change, make sure to monitor the altered system for a long enough period to know how the system performs under various conditions (light load, heavy load, I/O load, swapping). </li></ul></ul><ul><ul><li>Do not be afraid to back out of a change if it appears to be causing problems. </li></ul></ul><ul><ul><ul><li>When you back a change out, go back to the system configuration immediately previous to the “bad” configuration. Do not try to back out one change and insert another change at the same time. </li></ul></ul></ul><ul><ul><li>Take copious notes. </li></ul></ul><ul><ul><ul><li>These are often handy when you upgrade the OS, and have to start the entire process over. </li></ul></ul></ul>
  8. 8. Resource Rules <ul><li>Install as much memory as you can afford. </li></ul><ul><li>Disk systems can also have a substantial impact on system performance. </li></ul><ul><li>Network adapters are well-known bottlenecks. </li></ul><ul><li>Eliminate unused drivers, daemons, and processes on the system. </li></ul><ul><li>Know and understand the resources required by the applications you are running. </li></ul>
  9. 9. Terminology <ul><li>Bandwidth: </li></ul><ul><ul><li>The amount of a resource available. If a highway contains four lanes (two in each direction), each car holds four people, and the maximum speed limit allows 6 cars per second to pass over a line across the road, the “bandwidth” of the road is 24 people per second. Increasing the number of lanes will increase the bandwidth. </li></ul></ul><ul><li>Throughput: </li></ul><ul><ul><li>Percentage of the bandwidth you are actually getting. Continuing with the road example, if the cars only hold one person, the protocol is inefficient (not making use of the available capacity). If traffic is backed up due to an accident and only one or two cars per second can pass the line, the system is congested, and the throughput is impacted. Likewise, if there is a toll booth on the road, the system experiences delays (latency) related to the operation of the toll booth. </li></ul></ul>
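The road analogy above can be sketched as simple arithmetic; the numbers are the ones given on the slide:

```python
# Illustrative numbers from the slide's road analogy.
CARS_PER_SECOND = 6   # maximum cars crossing the line per second
SEATS_PER_CAR = 4     # each car holds four people

# Bandwidth: the total capacity of the resource.
bandwidth = CARS_PER_SECOND * SEATS_PER_CAR   # 24 people per second

# Throughput: what you actually get. With only one person per car
# (an inefficient "protocol"), the same 6 cars/second deliver far less.
actual = CARS_PER_SECOND * 1                  # 6 people per second

print(bandwidth)            # 24
print(actual / bandwidth)   # 0.25 -> using 25% of the capacity
```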
  10. 10. Terminology <ul><li>Utilization: </li></ul><ul><ul><li>How much of the resource was used. It is possible to use 100% of the resource, and yet have 0% throughput (consider a traffic jam at rush hour). </li></ul></ul><ul><li>Latency: </li></ul><ul><ul><li>How long it takes for something to happen. In the case of the road example, how long does it take to pay the toll? </li></ul></ul><ul><li>Response time: </li></ul><ul><ul><li>How long the user thinks it takes for something to occur. </li></ul></ul><ul><li>Knee: </li></ul><ul><ul><li>Point at which throughput starts to drop off as load increases. </li></ul></ul>
  11. 11. Terminology <ul><li>Benchmark: </li></ul><ul><ul><li>Set of statistics that (hopefully) shows the true bandwidth and/or throughput of a system. </li></ul></ul><ul><li>Baseline : </li></ul><ul><ul><li>Set of statistics that shows the performance of a system over a long period of time. </li></ul></ul><ul><ul><li>Instantaneous data about the system’s performance is rarely useful for tuning the system. But long-term data is not very useful either, as peaks and valleys in the performance graph tend to disappear over time. </li></ul></ul><ul><ul><li>You need to know the long-term performance characteristics, as well as the “spikes” caused by short-lived processes. A good way to obtain long-term (and short-term) information is to run the vmstat command every five seconds for a 24-hour period. Collect the data points, reduce/graph these data points, and study the results. </li></ul></ul>
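The collect-and-reduce step described above might be sketched as follows; the sample lines and column positions are invented for illustration, since vmstat output varies by UNIX flavor:

```python
# Hypothetical vmstat samples collected every five seconds (column layout
# differs between systems; these lines and positions are illustrative).
samples = [
    "0 0 0 43544 12120  0  1  2  0  0  0  0  1  2  97",
    "1 0 0 43544 11800  0  3  8  0  0  0  0  5  9  86",
    "2 0 0 43544 10900  0  9 14  0  0  0  0 12 18  70",
]

# Reduce the data points: here, average the last column (idle % in this
# made-up layout) to get one point for the long-term baseline graph.
rows = [list(map(int, line.split())) for line in samples]
avg_idle = sum(r[-1] for r in rows) / len(rows)
print(round(avg_idle, 1))   # (97 + 86 + 70) / 3 -> 84.3
```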
  12. 12. Windows Monitoring <ul><li>Task Manager </li></ul><ul><li>The Cygwin package allows the administrator to build and install several UNIX tools to monitor system performance. </li></ul><ul><li>For sites that do not use the Cygwin toolkit, there are several third-party native Windows tools that might be useful when you need to monitor system performance. Among these tools are: </li></ul><ul><ul><li>Intel VTune </li></ul></ul><ul><ul><li>SysInternals </li></ul></ul>
  13. 13. UNIX Monitoring <ul><li>ps </li></ul><ul><li>top </li></ul><ul><li>vmstat </li></ul><ul><li>iostat </li></ul><ul><li>nfsstat </li></ul><ul><li>netstat </li></ul><ul><li>mpstat </li></ul><ul><li>accounting </li></ul>
  14. 14. UNIX Monitoring <ul><li>Most versions of UNIX ship with an accounting package that can monitor the system performance, and record information about commands used. </li></ul><ul><ul><li>Many sites run the detailed system accounting package in order to bill departments/users for the use of the computing resources they consume. </li></ul></ul><ul><ul><li>The accounting packages can also be very useful tools for tracking system performance. </li></ul></ul><ul><ul><li>Although the accounting information is generally most useful as a post-mortem tool (after the process has completed), it is sometimes possible to gather semi real-time information from the system accounting utilities. </li></ul></ul><ul><ul><li>System auditing packages can give a lot of information about the use of the system, but these packages also add considerable load to the system. </li></ul></ul><ul><ul><ul><li>Process accounting utilities will generally add 5% of overhead to the system load, and auditing utilities can add (up to) 20% of overhead load to the system. </li></ul></ul></ul>
  15. 15. Accounting <ul><li>Why run accounting? </li></ul><ul><ul><li>Bill for resources used. </li></ul></ul><ul><ul><ul><li>CPU time used </li></ul></ul></ul><ul><ul><ul><li>Memory used </li></ul></ul></ul><ul><ul><ul><li>Disk space used </li></ul></ul></ul><ul><ul><ul><li>Printer page accounting </li></ul></ul></ul><ul><ul><li>Detailed job flow accounting (Banks/Insurance/Stock trading) </li></ul></ul><ul><ul><ul><li>Keep track of every keystroke </li></ul></ul></ul><ul><ul><ul><li>Keep track of every transaction </li></ul></ul></ul><ul><ul><li>Security </li></ul></ul><ul><ul><ul><li>Track every network connection </li></ul></ul></ul><ul><ul><ul><li>Track every local login </li></ul></ul></ul><ul><ul><ul><li>Track every keystroke </li></ul></ul></ul>
  16. 16. Accounting <ul><li>Two types of accounting </li></ul><ul><ul><li>Process accounting </li></ul></ul><ul><ul><ul><li>Track what commands are used </li></ul></ul></ul><ul><ul><ul><li>Track what system calls are issued </li></ul></ul></ul><ul><ul><ul><li>Track what libraries are used </li></ul></ul></ul><ul><ul><ul><li>Good for security (audit trail) </li></ul></ul></ul><ul><ul><ul><li>Good when multiple users have access to system </li></ul></ul></ul><ul><ul><ul><li>Good way to track what utilities and applications are being used, and who is using them. </li></ul></ul></ul>
  17. 17. Accounting <ul><ul><li>Detailed accounting </li></ul></ul><ul><ul><ul><li>Track every I/O operation </li></ul></ul></ul><ul><ul><ul><ul><li>Disk </li></ul></ul></ul></ul><ul><ul><ul><ul><li>Tape </li></ul></ul></ul></ul><ul><ul><ul><ul><li>tty </li></ul></ul></ul></ul><ul><ul><ul><ul><li>Network </li></ul></ul></ul></ul><ul><ul><ul><ul><li>Video </li></ul></ul></ul></ul><ul><ul><ul><ul><li>Audio </li></ul></ul></ul></ul><ul><ul><ul><li>Primarily used for billing </li></ul></ul></ul>
  18. 18. Accounting <ul><li>Charging for computer use </li></ul><ul><ul><li>Almost unheard of in academia (today). </li></ul></ul><ul><ul><ul><li>Some Universities charge research groups for CPU time. </li></ul></ul></ul><ul><ul><ul><li>Some Universities charge for printer supplies. </li></ul></ul></ul><ul><ul><ul><li>Some Universities charge for disk space and backups. </li></ul></ul></ul><ul><ul><li>Most companies that run accounting have a central computing facility. </li></ul></ul><ul><ul><ul><li>Subsidiaries buy computing time from the central group. </li></ul></ul></ul><ul><ul><ul><li>Accounting is used to pay for support, supplies, … </li></ul></ul></ul>
  19. 19. Accounting <ul><li>Why avoid accounting? </li></ul><ul><ul><li>Log files are huge </li></ul></ul><ul><ul><ul><li>Must have disk space for them. </li></ul></ul></ul><ul><ul><ul><ul><li>15 minutes of detailed accounting on a system with one user generated a 20 MB log file! </li></ul></ul></ul></ul><ul><ul><ul><ul><li>15 minutes of process accounting on a system with one user generated a 10 MB log file! </li></ul></ul></ul></ul><ul><ul><ul><li>Must have (and bill) cpu time for accounting. </li></ul></ul></ul><ul><ul><ul><ul><li>Accounting can require a lot of CPU/disk resources </li></ul></ul></ul></ul><ul><ul><ul><ul><li>Who will pay for the CPU/disk resources used by accounting </li></ul></ul></ul></ul><ul><ul><ul><li>Must decide what information to keep, and what to pitch. </li></ul></ul></ul>
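Extrapolating the slide's measurements shows why log space is the first objection; a quick back-of-the-envelope sketch:

```python
# The slide reports 20 MB of detailed-accounting logs in 15 minutes
# for a system with a single user. Extrapolate to a full day.
mb_per_15min = 20
per_day = mb_per_15min * 4 * 24   # 4 quarter-hours/hour, 24 hours/day
print(per_day)                    # 1920 MB/day -> nearly 2 GB per user
```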
  20. 20. Accounting <ul><li>What can accounting track? </li></ul><ul><ul><li>Some of the common things to track: </li></ul></ul><ul><ul><ul><li>CPU time </li></ul></ul></ul><ul><ul><ul><li>Memory usage </li></ul></ul></ul><ul><ul><ul><li>Disk usage </li></ul></ul></ul><ul><ul><ul><li>I/O usage </li></ul></ul></ul><ul><ul><ul><li>Connect time </li></ul></ul></ul><ul><ul><ul><li>Dial-up/Dial-out usage </li></ul></ul></ul><ul><ul><ul><li>Printer accounting </li></ul></ul></ul>
  21. 21. Accounting <ul><li>Solaris </li></ul><ul><ul><li>Auditing </li></ul></ul><ul><ul><ul><li>Perform audit trail accounting </li></ul></ul></ul><ul><ul><ul><li>Relies on the Basic Security Module (BSM). </li></ul></ul></ul><ul><ul><ul><li>Can monitor TONS of stuff. </li></ul></ul></ul><ul><ul><ul><ul><li>Processes </li></ul></ul></ul></ul><ul><ul><ul><ul><li>Function/subroutine calls </li></ul></ul></ul></ul><ul><ul><ul><ul><li>System calls </li></ul></ul></ul></ul><ul><ul><ul><ul><li>ioctls </li></ul></ul></ul></ul><ul><ul><ul><ul><li>Libraries loaded </li></ul></ul></ul></ul><ul><ul><ul><ul><li>File operations (open, close, read, write, create, remove) </li></ul></ul></ul></ul><ul><ul><ul><ul><li>File system operations (stat, chmod, chown, …) </li></ul></ul></ul></ul><ul><ul><ul><ul><li>Can configure to monitor successful/unsuccessful operations </li></ul></ul></ul></ul><ul><ul><ul><ul><li>Can monitor on a per user basis </li></ul></ul></ul></ul>
  22. 22. Accounting <ul><li>Solaris </li></ul><ul><ul><ul><li>Audit binaries </li></ul></ul></ul><ul><ul><ul><ul><li>auditconfig </li></ul></ul></ul></ul><ul><ul><ul><ul><li>auditd – the audit daemon </li></ul></ul></ul></ul><ul><ul><ul><ul><li>praudit – print audit information </li></ul></ul></ul></ul><ul><ul><ul><ul><li>auditon – turn on auditing </li></ul></ul></ul></ul>
  23. 23. Accounting <ul><li>Solaris </li></ul><ul><ul><li>Audit files </li></ul></ul><ul><ul><ul><li>Control Files in /etc/security </li></ul></ul></ul><ul><ul><ul><ul><li>audit_class </li></ul></ul></ul></ul><ul><ul><ul><ul><li>audit_control </li></ul></ul></ul></ul><ul><ul><ul><ul><li>audit_data </li></ul></ul></ul></ul><ul><ul><ul><ul><li>audit_event </li></ul></ul></ul></ul><ul><ul><ul><ul><li>audit_startup </li></ul></ul></ul></ul><ul><ul><ul><ul><li>audit_user </li></ul></ul></ul></ul><ul><ul><ul><ul><li>audit_warn </li></ul></ul></ul></ul><ul><ul><ul><ul><li>device_allocate </li></ul></ul></ul></ul><ul><ul><ul><ul><li>device_maps </li></ul></ul></ul></ul>
  24. 24. Accounting <ul><li>Solaris </li></ul><ul><ul><li>Audit Files </li></ul></ul><ul><ul><ul><li>Data Files in /var/audit </li></ul></ul></ul><ul><ul><ul><ul><li>YYYYMMDDHHMMSS.YYYYMMDDHHMMSS.hostname </li></ul></ul></ul></ul><ul><ul><ul><ul><li>YYYYMMDDHHMMSS.not_terminated.hostname </li></ul></ul></ul></ul>
  25. 25. Accounting <ul><li>Solaris </li></ul><ul><ul><li>Accounting </li></ul></ul><ul><ul><ul><li>Daily Accounting </li></ul></ul></ul><ul><ul><ul><li>Connect Accounting </li></ul></ul></ul><ul><ul><ul><li>Process Accounting </li></ul></ul></ul><ul><ul><ul><li>Disk Accounting </li></ul></ul></ul><ul><ul><ul><li>Calculating User Fees </li></ul></ul></ul>
  26. 26. Accounting <ul><li>Solaris </li></ul><ul><ul><li>Accounting </li></ul></ul><ul><ul><ul><li>/usr/lib/acct/acctdisk </li></ul></ul></ul><ul><ul><ul><li>/usr/lib/acct/acctdusg </li></ul></ul></ul><ul><ul><ul><li>/usr/lib/acct/accton </li></ul></ul></ul><ul><ul><ul><li>/usr/lib/acct/acctwtmp </li></ul></ul></ul><ul><ul><ul><li>/usr/lib/acct/closewtmp </li></ul></ul></ul><ul><ul><ul><li>/usr/lib/acct/utmp2wtmp </li></ul></ul></ul>
  27. 27. Accounting <ul><li>Solaris </li></ul><ul><ul><li>Accounting binaries </li></ul></ul><ul><ul><ul><li>acctcom – search/print accounting files </li></ul></ul></ul><ul><ul><ul><li>acctcms – generate command accounting from logs </li></ul></ul></ul><ul><ul><ul><li>acctcon – generate connect-time accounting from login records </li></ul></ul></ul><ul><ul><ul><li>acctmerg – merge multiple accounting files into a report </li></ul></ul></ul><ul><ul><ul><li>acctprc – programs to generate process accounting logs </li></ul></ul></ul><ul><ul><ul><li>fwtmp – manipulate connect accounting records </li></ul></ul></ul><ul><ul><ul><li>runacct – run daily accounting summary </li></ul></ul></ul>
  28. 28. Accounting <ul><li>Solaris </li></ul><ul><ul><li>Accounting </li></ul></ul><ul><ul><ul><li>Data Files </li></ul></ul></ul><ul><ul><ul><ul><li>/var/adm/pacct </li></ul></ul></ul></ul><ul><ul><ul><ul><li>/var/adm/acct/fiscal </li></ul></ul></ul></ul><ul><ul><ul><ul><li>/var/adm/acct/nite </li></ul></ul></ul></ul><ul><ul><ul><ul><li>/var/adm/acct/sum </li></ul></ul></ul></ul>
  29. 29. Performance Analysis <ul><li>User interface researchers report that the average user perceives a system to be slow when response times are longer than 0.7 seconds! </li></ul>
  30. 30. Performance Analysis <ul><ul><li>CPU time – </li></ul></ul><ul><ul><ul><li>How long does the user’s job take to complete? </li></ul></ul></ul><ul><ul><ul><ul><li>Is the job time critical? </li></ul></ul></ul></ul><ul><ul><ul><li>What other jobs are running? </li></ul></ul></ul><ul><ul><ul><ul><li>Context switches are costly. </li></ul></ul></ul></ul><ul><ul><ul><ul><li>Must share CPU cycles with other processes </li></ul></ul></ul></ul><ul><ul><ul><li>What is the system load average? </li></ul></ul></ul><ul><ul><li>Memory speed – </li></ul></ul><ul><ul><ul><li>Does the job need to be loaded into memory? </li></ul></ul></ul><ul><ul><ul><li>How quickly can memory be filled with pertinent information? </li></ul></ul></ul><ul><ul><ul><li>Is the job swapped out? </li></ul></ul></ul><ul><ul><ul><ul><li>Swapping brings the disk system into the picture. </li></ul></ul></ul></ul><ul><ul><ul><ul><li>Swapping invalidates the cache for this job. </li></ul></ul></ul></ul><ul><ul><ul><ul><li>Swapping is easy to eliminate/minimize! </li></ul></ul></ul></ul><ul><ul><ul><li>Does the job fit into cache? </li></ul></ul></ul>
  31. 31. Performance Analysis <ul><ul><li>Disk I/O bandwidth – </li></ul></ul><ul><ul><ul><li>Bus Speed </li></ul></ul></ul><ul><ul><ul><li>Controller width/speed </li></ul></ul></ul><ul><ul><ul><li>How fast can information be pulled off of disk? </li></ul></ul></ul><ul><ul><ul><ul><li>SCSI vs IDE vs RAID </li></ul></ul></ul></ul><ul><ul><ul><ul><li>Rotational latency </li></ul></ul></ul></ul><ul><ul><ul><ul><li>Caching in controller/drive </li></ul></ul></ul></ul><ul><ul><ul><li>Disk system speed will have an effect on memory speed (swapping). </li></ul></ul></ul><ul><ul><li>Network I/O bandwidth – </li></ul></ul><ul><ul><ul><li>Are files stored on a network file system? </li></ul></ul></ul><ul><ul><ul><li>Does network file system do any caching? </li></ul></ul></ul><ul><ul><ul><li>Shared/switched media? </li></ul></ul></ul><ul><ul><ul><li>Full/half duplex? </li></ul></ul></ul>
  32. 32. Performance Analysis <ul><li>CPU bound jobs are difficult to measure. </li></ul><ul><ul><li>Use ps and top to see what is running. </li></ul></ul><ul><ul><li>Use uptime to determine load averages </li></ul></ul><ul><ul><ul><li>1 minute average is good for “spiky” load problems </li></ul></ul></ul><ul><ul><ul><li>5 minute average is good metric to monitor for “normal” activity </li></ul></ul></ul><ul><ul><ul><li>15 minute average is good indicator of overload conditions </li></ul></ul></ul><ul><ul><li>Use sar to determine the system cpu states. </li></ul></ul><ul><ul><ul><li>System accounting can track amount of time each CPU spends working on idle/system/user jobs. </li></ul></ul></ul><ul><ul><li>Use mpstat to determine what multi-processor systems are doing. </li></ul></ul><ul><ul><ul><li>One busy processor and one idle processor is probably “normal” operation. </li></ul></ul></ul><ul><ul><li>Use vmstat and iostat to determine percentage of time system is running user/kernel processes. </li></ul></ul><ul><ul><ul><li>Less detail than sar, but good general information. </li></ul></ul></ul>
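Pulling the three load averages out of uptime output might look like this; the sample line is invented, and the exact field layout varies between systems:

```python
# A hypothetical uptime line (format differs slightly across systems).
line = " 10:15am  up 12 days,  3:02,  4 users,  load average: 2.10, 1.45, 0.80"

# Everything after "load average:" is the three comma-separated averages.
fields = line.split("load average:")[1].split(",")
one, five, fifteen = (float(f) for f in fields)

# 1-minute average catches "spiky" loads; 5-minute is the metric to watch
# for normal activity; a high 15-minute average indicates sustained overload.
print(one, five, fifteen)   # 2.1 1.45 0.8
```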
  33. 33. Performance Analysis <ul><li>How can you improve CPU performance? </li></ul><ul><ul><li>More cpu(s) </li></ul></ul><ul><ul><li>Faster cpu(s) </li></ul></ul><ul><ul><li>Lock jobs to specific cpu(s) </li></ul></ul><ul><ul><li>Lock cpu(s) to specific tasks </li></ul></ul>
  34. 34. Performance Analysis <ul><li>Before you can diagnose performance problems, you must have a good idea of what is reasonable for your system. </li></ul><ul><ul><li>Monitor the system and develop a fingerprint of typical job mixes, load average, memory use, disk use, network throughput, number of users, swapping, and job size. </li></ul></ul><ul><ul><li>If something happens to the performance, use these metrics to determine what has changed. </li></ul></ul><ul><ul><ul><li>Did jobs get larger? </li></ul></ul></ul><ul><ul><ul><li>More disk or network I/O? </li></ul></ul></ul><ul><ul><ul><li>Less free memory? </li></ul></ul></ul><ul><ul><ul><li>More swapping? </li></ul></ul></ul><ul><ul><ul><li>More users? </li></ul></ul></ul><ul><ul><ul><li>More jobs? </li></ul></ul></ul>
  35. 35. CPU Performance <ul><li>In general, the output of the top , vmstat , w , and other utilities that show processor-state statistics can tell you a lot about the performance of the CPU subsystem. </li></ul><ul><ul><li>If the CPU is in user mode more than 90% of the time, with little or no idle time, it is executing application code. </li></ul></ul><ul><ul><ul><li>This is probably what you want it to do, but too many user jobs running concurrently may be detrimental to any one job getting any work done. </li></ul></ul></ul><ul><ul><li>If the CPU is in system mode more than 30% of the time, it is executing system code (probably I/O, or other system calls). </li></ul></ul><ul><ul><ul><li>Context switches are a symptom of high I/O activity (if the interrupt rate is also high). </li></ul></ul></ul><ul><ul><ul><li>If seen in conjunction with high system call activity, it is a sign of poor application code (for example, a loop that repeatedly opens, reads, and closes a file). </li></ul></ul></ul><ul><ul><li>If the CPU is idle more than 10% of the time, the system is waiting on I/O (disk/network). </li></ul></ul><ul><ul><ul><li>This could be a symptom of poor application code (no internal buffering) or overloaded disk/network subsystems. </li></ul></ul></ul>
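The rules of thumb above can be sketched as a rough classifier; the thresholds are the ones given in the text, not hard limits:

```python
def cpu_state(user, system, idle):
    """Rough CPU-subsystem diagnosis using the text's rules of thumb.
    Arguments are percentages of total CPU time; boundaries are heuristics."""
    if user > 90 and idle <= 10:
        return "application-bound"            # executing application code
    if system > 30:
        return "system-call/I/O heavy"        # executing system code
    if idle > 10:
        return "waiting on I/O (disk/network)"
    return "mixed"

print(cpu_state(95, 3, 2))    # application-bound
print(cpu_state(50, 40, 10))  # system-call/I/O heavy
print(cpu_state(40, 10, 50))  # waiting on I/O (disk/network)
```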
  36. 36. CPU Performance <ul><ul><li>If the system exhibits a high rate of context switches, the system is displaying symptoms of a number of possible problems. </li></ul></ul><ul><ul><ul><li>Context switches occur when one job yields the processor to another job. </li></ul></ul></ul><ul><ul><ul><li>This may occur because the scheduler time slice expired for the running job, because the running job required input/output or because a system interrupt occurred. </li></ul></ul></ul><ul><ul><li>If the number of context switches is high, and the interrupt rate is high, the system is probably performing I/O. </li></ul></ul><ul><ul><ul><li>If the number of context switches is high, and the system call rate is high, the problem is likely the result of bad application coding practices. </li></ul></ul></ul><ul><ul><ul><li>Such practices include a program loop that repeatedly performs the sequence “open a file, read from the file, close the file.” </li></ul></ul></ul>
  37. 37. CPU Performance <ul><li>If the system exhibits a high trap rate and few system calls, the system is probably experiencing page faults, experiencing memory errors, or attempting to execute unimplemented instructions. </li></ul><ul><ul><li>Some chips do not contain instructions to perform certain mathematical operations. </li></ul></ul><ul><ul><li>On such systems, the CPU generates a trap, and the OS uses software routines to perform the operation. </li></ul></ul><ul><ul><ul><li>An example of this situation occurs when you attempt to run a SPARC V8 binary on a SPARC V7 system. </li></ul></ul></ul><ul><ul><ul><li>The SPARC V7 system contains no integer multiply/divide hardware. SPARC V8 systems contain hardware multiply/divide instructions, so compiling a program on the V8 architecture embeds these instructions in the program. </li></ul></ul></ul><ul><ul><ul><li>When this same program is run on a V7 system, the OS has to trap the instructions, call a software routine to perform the calculation, and then return to the running program with the answer. </li></ul></ul></ul>
  38. 38. Performance Analysis <ul><li>Memory is a critical system resource. </li></ul><ul><ul><li>Unix is very good at finding/hoarding memory for disk/network buffers. </li></ul></ul><ul><ul><ul><li>Unix buffering scheme </li></ul></ul></ul><ul><ul><ul><ul><li>At boot time, size memory. </li></ul></ul></ul></ul><ul><ul><ul><ul><ul><li>Kernel takes all memory and hoards it </li></ul></ul></ul></ul></ul><ul><ul><ul><ul><ul><li>As jobs start, kernel begrudgingly gives some memory back to them. </li></ul></ul></ul></ul></ul><ul><ul><ul><ul><li>In some versions of UNIX: </li></ul></ul></ul></ul><ul><ul><ul><ul><ul><li>Disk buffers are allocated on file system (disk partition) basis </li></ul></ul></ul></ul></ul><ul><ul><ul><ul><ul><li>Network buffers are allocated on a per-interface basis. </li></ul></ul></ul></ul></ul>
  39. 39. Performance Analysis <ul><li>Memory is a critical system resource. </li></ul><ul><ul><li>Before upgrading the cluster systems, OIT looked at the memory question: </li></ul></ul><ul><ul><ul><li>With 64 Meg of memory, jobs took X minutes to run. </li></ul></ul></ul><ul><ul><ul><li>With 128 Meg of memory, the same jobs took X/3 minutes to run. </li></ul></ul></ul><ul><ul><ul><li>With 256 Meg of memory, the same job did not run any faster, but you could run multiple instances of the same job with no degradation in performance. </li></ul></ul></ul><ul><ul><li>Memory is cheap. Buy lots! </li></ul></ul>
  40. 40. Performance Analysis <ul><li>Monitoring memory use. </li></ul><ul><ul><ul><li>Use pstat -s to look at swap information on BSD systems. </li></ul></ul></ul><ul><ul><ul><li>Use swap -l to look at swap on System V systems. </li></ul></ul></ul><ul><ul><ul><li>Use sar -r to look at swap information </li></ul></ul></ul><ul><ul><ul><li>Use vmstat to look at memory statistics. </li></ul></ul></ul><ul><ul><ul><li>Use top to monitor job sizes and swap information. </li></ul></ul></ul><ul><ul><ul><li>If there is any sign of swapping </li></ul></ul></ul><ul><ul><ul><ul><li>Memory is cheap! Buy Lots! </li></ul></ul></ul></ul><ul><ul><ul><li>Can adjust reclaim rate, and other memory system parameters, but it is usually more profitable to add memory. </li></ul></ul></ul>
  41. 41. Memory Performance <ul><li>Unlike CPU tuning, memory tuning is a bit more objective. Quantifying CPU performance can be somewhat elusive, but quantifying memory usage is usually pretty straightforward. </li></ul><ul><li>Job Size </li></ul><ul><ul><li>An easy diagnostic for memory problems is to add up the size of all jobs running on the system, and compare this to the size of the system’s physical memory. </li></ul></ul><ul><ul><li>If the size of the jobs is grossly out of proportion to the size of the system memory, you need to do something to change this situation. </li></ul></ul><ul><ul><ul><li>You could use a scheduler that uses job size as one of the criteria for allowing a job to run, remove some processes from the system (for example migrate some applications to another server), or add memory to lessen the disparity in the requested versus available memory. </li></ul></ul></ul>
  42. 42. Memory Performance <ul><li>Swapping/Paging </li></ul><ul><ul><li>Under BSD operating systems, the amount of virtual memory is equal to the swap space allocated on the system disks plus the size of the shared text segments in memory. </li></ul></ul><ul><ul><ul><li>The BSD VM system required that you allocate swap space equal to or greater than the size of memory. Many BSD environments recommended that you allocate swap space equal to 4x the size of real memory. </li></ul></ul></ul><ul><ul><li>Under System V UNIX kernels, the total amount of virtual memory is equal to the size of the swap space plus the size of memory, minus a small amount of “overhead” space. </li></ul></ul><ul><ul><ul><li>The system does not begin to swap until the job memory requirements exceed the size of the system memory. </li></ul></ul></ul>
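A sketch of the two virtual-memory sizing models described above; the shared-text and overhead figures are made-up placeholders, not values from the text:

```python
# Sizes in MB; shared_text and overhead are illustrative placeholders.
ram = 256
swap = 512

# BSD-style: VM = swap space + shared text segments in memory.
shared_text = 16
vm_bsd = swap + shared_text

# System V-style: VM = swap space + physical memory - small overhead,
# so swapping does not begin until jobs exceed physical memory.
overhead = 8
vm_sysv = swap + ram - overhead

print(vm_bsd, vm_sysv)   # 528 760
```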
  43. 43. Memory Performance <ul><li>You can estimate the system’s virtual memory requirements on BSD systems by looking at the output of the top and/or ps commands. </li></ul><ul><ul><li>If you add up the values in the RSS columns (resident set size), you can get an idea of the real memory usage on the system. </li></ul></ul><ul><li>Adding up the values in the SZ column gives you an estimation of the VM requirements for the system. </li></ul><ul><ul><li>If the total of all SZ values increases over time (with the same jobs running), one or more applications probably have memory leaks. </li></ul></ul><ul><ul><li>The system will eventually run out of swap space, and hang or crash. </li></ul></ul><ul><li>Some kernels allow you to modify the page scan/reclaim process. </li></ul><ul><ul><li>This allows you to alter how long a page stays in real memory before it is swapped or paged out. </li></ul></ul><ul><ul><li>Such modifications are tricky, and should only be performed if you know what you are doing. </li></ul></ul>
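Summing the RSS and SZ columns might be sketched as follows; the process table here is invented sample data, and real ps column order and units differ between variants:

```python
# Hypothetical rows pulled from ps output: (PID, RSS, SZ, COMMAND),
# values in KB. Real ps output needs parsing and unit checking first.
ps_rows = [
    (101, 12000, 34000, "httpd"),
    (102,  8000, 20000, "sendmail"),
    (103,  4500,  9000, "sshd"),
]

total_rss = sum(rss for _, rss, _, _ in ps_rows)  # ~real memory in use
total_sz  = sum(sz for _, _, sz, _ in ps_rows)    # ~VM requirement

# If total_sz keeps growing with the same jobs running, suspect a leak.
print(total_rss, total_sz)   # 24500 63000
```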
  44. 44. Memory Performance <ul><li>If you see that the scan rate ( sr column in vmstat output) value is roughly equal to the free rate ( fr column in vmstat output), the system is releasing pages as quickly as they are scanned. </li></ul><ul><ul><li>If you tune the memory scan parameters to increase the period between when the page is scanned and when it is paged out (allow pages to stay in memory for a longer period), the VM system performance may improve. </li></ul></ul><ul><ul><li>On the other hand, if the sr value is greater than the fr value, decreasing the period between scan and paging time may improve VM system performance. </li></ul></ul>
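The sr/fr rule of thumb above might be expressed as a small helper; the "roughly equal" tolerance is my assumption, not a figure from the text:

```python
def scan_tuning_hint(sr, fr):
    """Tuning direction from vmstat's scan rate (sr) and free rate (fr),
    per the text's rule of thumb. The 10% equality tolerance is assumed."""
    if sr > fr:
        # Scanning faster than freeing: page out sooner.
        return "decrease scan-to-pageout period"
    if abs(sr - fr) <= max(1, 0.1 * fr):
        # Pages released as quickly as scanned: let pages linger longer.
        return "increase scan-to-pageout period"
    return "no change suggested"

print(scan_tuning_hint(200, 100))  # decrease scan-to-pageout period
print(scan_tuning_hint(100, 100))  # increase scan-to-pageout period
```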
  45. 45. Memory Performance <ul><li>VM Symptoms </li></ul><ul><ul><li>The following indicators may be useful when tuning the VM system. </li></ul></ul><ul><ul><ul><li>Paging activity may be an indicator of file system activity. </li></ul></ul></ul><ul><ul><ul><li>Swapping activity is usually an indicator of large memory processes thrashing. </li></ul></ul></ul><ul><ul><ul><li>Attaches and reclaim activity is often a symptom of a program in a loop performing a “file open, read, and close” operation. </li></ul></ul></ul><ul><ul><ul><li>If the output of netstat –s shows a high error rate, the system may be kernel memory starved. This often leads to dropped packets, and memory allocation (malloc) failures. </li></ul></ul></ul>
  46. 46. Memory Performance <ul><li>Shared Memory </li></ul><ul><ul><li>Large database applications often want to use shared memory for communications among the many modules that make up the database package. </li></ul></ul><ul><ul><ul><li>By sharing the memory, the application can avoid copying chunks of data from one routine to another, therefore improving system performance and maximizing the utilization of system resources. </li></ul></ul></ul><ul><ul><ul><li>This generally works fine, until the application requests more shared memory than the system has available. </li></ul></ul></ul><ul><ul><ul><li>When this situation occurs, system performance will often nosedive. </li></ul></ul></ul><ul><ul><li>Under Solaris, the /usr/bin/ipcs command may be used to monitor the status of the shared memory, and semaphore system. </li></ul></ul>
  47. 47. Memory Performance <ul><li>mmap </li></ul><ul><ul><li>If an application is running from a local file system, you might want to look into using the mmap system call to map open files into the process address space. </li></ul></ul><ul><ul><ul><li>The use of mmap replaces the open, malloc, and read cycles with a much more efficient operation for read-only data. </li></ul></ul></ul><ul><ul><ul><li>When the application is using network file systems, this might actually cause a degradation of system performance. </li></ul></ul></ul><ul><ul><ul><li>Using the cachefs file system with NFS will improve this situation, as this allows the system to page to a local disk instead of through the network to an NFS disk. </li></ul></ul></ul>
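For illustration, here is a generic memory-mapped read using Python's mmap module; this sketches the idea of mapping a file instead of open/malloc/read copying, not the Solaris mmap(2) interface itself:

```python
import mmap
import os
import tempfile

# Create a small local file to stand in for read-only application data.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"read-only payload")
    path = f.name

# Map the file into the process address space; pages are faulted in on
# demand rather than copied into a read() buffer.
with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
        data = m[:9]

print(data)   # b'read-only'
os.remove(path)
```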
48. 48. Performance Analysis <ul><li>How can you improve the memory system? </li></ul><ul><ul><li>Add memory </li></ul></ul><ul><ul><ul><li>It’s cheap </li></ul></ul></ul><ul><ul><li>Use limits </li></ul></ul><ul><ul><ul><li>They’re ugly </li></ul></ul></ul><ul><ul><ul><li>The payoff is not (usually) very good. </li></ul></ul></ul>
49. 49. Performance Analysis <ul><li>Disk I/O is one of the most critical factors in system performance. </li></ul><ul><ul><li>Most file access goes through the disk I/O system. </li></ul></ul><ul><ul><ul><li>Multiple “hot” file systems on one disk will be a problem. </li></ul></ul></ul><ul><ul><ul><li>Slow disks will be a problem. </li></ul></ul></ul><ul><ul><ul><li>Narrow controllers will be a problem. </li></ul></ul></ul><ul><ul><ul><li>Partitioning of disks will have an effect on buffering. </li></ul></ul></ul><ul><ul><ul><li>Disk geometry will have an effect on buffering. </li></ul></ul></ul><ul><ul><li>Swapping/paging goes through the disk I/O system. </li></ul></ul><ul><ul><ul><li>Split swap space over multiple spindles to increase interleave. </li></ul></ul></ul><ul><ul><ul><li>If swapping: Buy More Memory (It’s cheap). </li></ul></ul></ul><ul><ul><li>Use iostat to look at the disk I/O system. </li></ul></ul>
  50. 50. Disk Performance <ul><li>Swapping </li></ul><ul><ul><li>In general, if a system is swapping this is a symptom that it does not have enough physical memory. </li></ul></ul><ul><ul><ul><li>Add memory to the system to minimize the swapping/paging activity before continuing. </li></ul></ul></ul><ul><ul><ul><li>You might also consider migrating some of the load to other systems in order to minimize contention for existing resources. </li></ul></ul></ul><ul><ul><li>If the system contains the maximum memory, and the system is still swapping, there are some things you can do to improve the performance of the swapping/paging subsystem. </li></ul></ul><ul><ul><ul><li>First, try to split the swap partitions across several disk drives and (if possible) disk controllers. </li></ul></ul></ul><ul><ul><ul><li>Most current operating systems can interleave swap writes across several disks to improve performance. </li></ul></ul></ul><ul><ul><ul><li>Adding controllers to the swap system can increase the bandwidth of the swap subsystem immensely. </li></ul></ul></ul>
  51. 51. Disk Performance <ul><li>Read/Modify/Write </li></ul><ul><ul><li>One major problem for disk systems is the read/modify/write sequence of operations. This sequence is typical of updates to a file (read the file into memory, modify the file in memory, and then write the file out to disk). This sequence is a problem for (at least) the following reasons. </li></ul></ul><ul><ul><ul><li>There is a delay between the read and the write, so the heads have probably been repositioned to perform some other operation. </li></ul></ul></ul><ul><ul><ul><li>The file size may change, requiring the new file to be written to non-contiguous sectors/cylinders on the disk. This causes more head movement when the file has to be written back to the disk. </li></ul></ul></ul>
52. 52. Disk Performance <ul><ul><li>It may seem simple to avoid or minimize such operations, but consider the following: </li></ul></ul><ul><ul><ul><li>A typical “make” operation might read in 50 include files. The compiler might create 400 object files for a large make operation. </li></ul></ul></ul><ul><ul><ul><li>File system accesses require an inode lookup, a file system stat, a direct block read, an indirect block read, and a double-indirect block read to access a file. When the file is written to disk, the same operations are required. </li></ul></ul></ul>
53. 53. Disk Performance <ul><ul><ul><li>When a database application needs to perform a data insert operation, it needs to write the data to disk. </li></ul></ul></ul><ul><ul><ul><ul><li>It also needs to write the transaction to a log and read/modify/write an index. </li></ul></ul></ul></ul><ul><ul><ul><ul><li>Databases typically exhibit 50% updates, 20% inserts, and 30% lookups. </li></ul></ul></ul></ul><ul><ul><ul><ul><li>This can lead to 200 (or more) I/O operations per second on medium-size databases! </li></ul></ul></ul></ul><ul><ul><ul><ul><li>Trying to store such a database on a single disk is sure to fail. </li></ul></ul></ul></ul><ul><ul><ul><ul><li>You would probably get by with a four-drive RAID for such applications, but a six- or eight-drive stripe would be a better bet for high performance. </li></ul></ul></ul></ul>
54. 54. Disk Performance <ul><li>File Servers </li></ul><ul><ul><li>File servers should be dedicated to one task: storing and retrieving files from system disks. </li></ul></ul><ul><ul><ul><li>Although this might seem like a simple task, lack of planning when creating and populating file systems may cause severe performance problems. </li></ul></ul></ul><ul><ul><li>If a file server seems sluggish, use the iostat , vmstat , and other commands available on the system to monitor the disk subsystem. </li></ul></ul><ul><ul><ul><li>You need to determine which disks are experiencing large numbers of transfers and/or large amounts of information read/written to disk. </li></ul></ul></ul>
  55. 55. Disk Performance <ul><li>Unbalanced Disks </li></ul><ul><ul><li>Monitor the disk activity on an overloaded system to determine what file systems are being accessed most often. </li></ul></ul><ul><ul><ul><li>If most of the disk activity is centered on one disk drive, while other disk drives sit idle, you probably have an unbalanced disk system. </li></ul></ul></ul><ul><ul><ul><li>A typical disk drive can handle (roughly) 50 I/O operations a second. </li></ul></ul></ul><ul><ul><ul><li>If you are trying to perform 100 I/O operations/second to a single disk drive, system performance will suffer! </li></ul></ul></ul><ul><ul><li>If you see signs that one disk is being heavily accessed, while other disks sit idle, you might consider moving file systems in order to spread high-activity file systems across multiple disks. </li></ul></ul><ul><ul><ul><li>Place one high-activity file system on a disk drive with one or more low activity file systems. This minimizes head/arm movement, and improves the utilization of the on-drive and on-controller caches. </li></ul></ul></ul>
  56. 56. Disk Performance <ul><li>Unbalanced Disks </li></ul><ul><ul><li>Too many hot file systems on a single disk drive/stripe is another typical problem. </li></ul></ul><ul><ul><ul><li>The tendency is to use all of the space available on the disk drives. Many times the administrator will partition a large disk into two (or more) file systems, and load files on all of the partitions. </li></ul></ul></ul><ul><ul><ul><li>However, when all of the file systems begin to experience high volumes of access requests the disk head-positioner and the bandwidth of the disk drive become bottlenecks. </li></ul></ul></ul><ul><ul><ul><li>It is usually better to waste some disk space and leave partitions empty than to place multiple active file systems on a drive. </li></ul></ul></ul><ul><ul><ul><li>If you must do so, try to place inactive or read-only file systems on one partition, with an active read/write partition on another partition. </li></ul></ul></ul>
  57. 57. Disk Performance <ul><ul><li>Another way to disperse file system load is to break up large multifunction file systems into smaller, more easily dispersed chunks. </li></ul></ul><ul><ul><ul><li>For example, the UNIX /usr file system often contains the system/site source code, system binaries, window system binaries, and print and mail spool files. </li></ul></ul></ul><ul><ul><ul><li>By breaking the /usr file system into several smaller file systems, the sysadmin can disperse the load across the disk subsystem. </li></ul></ul></ul><ul><ul><ul><li>Some of the more typical ways to break /usr into smaller chunks include making separate partitions for /usr/bin , /usr/lib , /usr/openwin , /usr/local , and /usr/spool . </li></ul></ul></ul>
  58. 58. Disk Performance <ul><li>RAID </li></ul><ul><ul><li>Some believe that by default RAID provides better performance than Single Large Expensive Disks (SLEDs). </li></ul></ul><ul><ul><li>Others believe that RAID is only useful if you want a redundant, fault-tolerant disk system. </li></ul></ul><ul><ul><ul><li>In reality, RAID can provide both of these capabilities. </li></ul></ul></ul><ul><ul><ul><li>However, a poorly configured RAID can also cause system performance and reliability degradation. </li></ul></ul></ul>
  59. 59. Disk Performance <ul><li>RAID </li></ul><ul><ul><li>Due to RAID’s flexibility and complexity, RAID subsystems present some tough challenges when it comes to performance monitoring and tuning. </li></ul></ul><ul><ul><ul><li>Most operating levels of RAID have well-known performance characteristics. </li></ul></ul></ul><ul><ul><ul><li>Design your file systems such that high-performance file systems are housed on RAID volumes that provide the best performance (typically RAID 0). </li></ul></ul></ul><ul><ul><ul><li>For improved reliability, RAID level 1, 4, or 5 would be a better choice. </li></ul></ul></ul><ul><ul><ul><li>However, even within the RAID levels there are some general guidelines to keep in mind while designing RAID volumes. </li></ul></ul></ul>
  60. 60. Disk Performance <ul><li>Disk Stripes </li></ul><ul><ul><li>One of the prime considerations for tuning RAID disk systems is “stripe size.” </li></ul></ul><ul><ul><ul><li>RAID allows you to “gang” several disks to form a “logical” disk drive. </li></ul></ul></ul><ul><ul><ul><li>These logical drives allow you to attain better throughput, and large-capacity file systems. </li></ul></ul></ul><ul><ul><ul><li>However, you need to be careful when you design these file systems. </li></ul></ul></ul>
  61. 61. Disk Performance <ul><li>Disk Stripes </li></ul><ul><ul><li>The basic unit of storage on a standard disk is the disk sector (typically 512 bytes). </li></ul></ul><ul><ul><ul><li>On a RAID disk system, the basic unit of storage is referred to as the block size, which in reality is the sector size. </li></ul></ul></ul><ul><ul><ul><li>However, RAID disks allow you to have multiple disks ganged such that you stripe the data across all disks. </li></ul></ul></ul><ul><ul><ul><li>The number of disks in a RAID file system is referred to as the interleave factor, whereas the “stripe size” is the block size multiplied by the interleave factor. </li></ul></ul></ul><ul><ul><ul><li>You typically want the size of an access to be a multiple of the stripe size. </li></ul></ul></ul>
62. 62. Disk Performance <ul><li>Sequential I/O Optimizations </li></ul><ul><ul><li>When using RAID, each disk I/O request will access every drive in the stripe in parallel. </li></ul></ul><ul><ul><ul><li>The block size of the RAID stripe is equal to the access size divided by the interleave factor. </li></ul></ul></ul><ul><ul><ul><ul><li>For example, a file server that contains a four-drive RAID array that allows 64-kilobyte file system accesses would be best served by reading/writing 16-kilobyte chunks of data to/from each drive in the array in parallel. </li></ul></ul></ul></ul><ul><ul><ul><ul><li>A file server with an eight-disk stripe that allowed 8-kilobyte file system accesses should be tuned to read/write 1 kilobyte to/from each disk in the stripe. </li></ul></ul></ul></ul><ul><ul><ul><ul><li>Such setups (RAID with a four- to eight-drive interleave) can provide a 3x to 4x improvement in I/O throughput compared to a single disk system. </li></ul></ul></ul></ul>
  63. 63. Disk Performance <ul><li>Random I/O Optimizations </li></ul><ul><ul><li>When using RAID for random I/O operations, you want each request to hit a different disk in the array. </li></ul></ul><ul><ul><ul><li>You want to force the I/O to be scattered across the available disk resources. </li></ul></ul></ul><ul><ul><ul><li>In this case you want to tune the system such that the block size is equal to the access size. </li></ul></ul></ul><ul><ul><ul><ul><li>For example, a file server that allows 8-kilobyte file accesses across a six-disk RAID stripe should employ a 48-kilobyte stripe size, whereas a database server that allowed 2-kilobyte accesses across a four-drive RAID array should employ an 8-kilobyte stripe size. </li></ul></ul></ul></ul>
64. 64. Disk Performance <ul><li>File System Optimizations </li></ul><ul><ul><li>The way an OS manages memory may also impact the performance of the I/O subsystem. </li></ul></ul><ul><ul><ul><li>For example, the BSD kernel allocates a fixed portion of memory as a buffer pool for file system I/O, whereas System V kernels use main memory (the page cache) for file system I/O. </li></ul></ul></ul><ul><ul><ul><li>Under System V, all file system input/output operations result in page-in/page-out memory transactions! </li></ul></ul></ul><ul><ul><ul><ul><li>This page-based approach is much more efficient than the BSD buffer-pool model. </li></ul></ul></ul></ul><ul><ul><ul><ul><li>You can tune the BSD kernel to use up to 50% of system memory as buffer pool to improve file system performance. </li></ul></ul></ul></ul>
  65. 65. Disk Performance <ul><li>Disk-based File Systems </li></ul><ul><ul><li>File systems stored on local disks are referred to as disk-based file systems (as opposed to network file systems, or memory-based file systems). </li></ul></ul><ul><ul><li>There are several items related to disk-based file systems the administrator might want to tune to improve the performance of the system. </li></ul></ul>
  66. 66. Disk Performance <ul><li>Zone Sectoring </li></ul><ul><ul><li>Most modern disk drives employ zone-sectoring technology. </li></ul></ul><ul><ul><ul><li>This means that the drive has a larger number of storage sectors on the outer cylinders than it has on the inner cylinders; hence, the outer cylinders provide “more dense” storage than the inner cylinders. </li></ul></ul></ul><ul><ul><ul><li>As the platter rotates, more sectors are under the read/write heads (per revolution) on the high-density cylinders than on the low-density cylinders. </li></ul></ul></ul><ul><ul><ul><li>In many cases, two thirds of the disk’s storage space is on the outer (high-density) cylinders. </li></ul></ul></ul><ul><ul><ul><li>This implies that you can attain higher performance if you just use the outer two-thirds of the disk drive, and “waste” one-third of the drive’s storage capacity. </li></ul></ul></ul>
  67. 67. Disk Performance <ul><li>Zone Sectoring </li></ul><ul><ul><li>File systems should be sized with this constraint in mind. </li></ul></ul><ul><ul><ul><li>While wasting one-third of the storage capacity seems counterproductive, in reality system performance will be much better if you waste some space. </li></ul></ul></ul><ul><li>Free Space </li></ul><ul><ul><li>Most modern file systems do not perform well when they are more than 90% filled. </li></ul></ul><ul><ul><ul><li>When the file system gets full, the system has to work harder to locate geographically “close” sectors on which to store the file. </li></ul></ul></ul><ul><ul><ul><li>Fragmentation becomes a performance penalty, and read/modify/write operations become extremely painful, as the disk heads may have to traverse several cylinders to retrieve and then rewrite the file. </li></ul></ul></ul>
  68. 68. Disk Performance <ul><ul><li>On user partitions (where the user’s files are stored) you can use the quota system to ensure that you never fill the file system to more than 90% capacity. </li></ul></ul><ul><ul><ul><li>This entails calculating how much space each user can have, and checking that you do not allow more total quota space than 90% of the total partition size. </li></ul></ul></ul><ul><ul><ul><li>This can be a tedious process. </li></ul></ul></ul><ul><ul><ul><li>More commonly, the sysadmin watches the file system, and if it approaches 90% full moves one or two of the space hogs to another partition. </li></ul></ul></ul>
69. 69. Disk Performance <ul><li>Linux Ext3 Performance Options </li></ul><ul><ul><li>The Ext2 file system has a reputation for being a rock-solid file system. The Ext3 file system builds on this base by adding journaling features. </li></ul></ul><ul><ul><ul><li>Ext3 allows you to choose from one of three journaling modes at file system mount time: data=writeback , data=ordered , and data=journal . </li></ul></ul></ul><ul><ul><ul><li>To specify a journal mode, you can add the appropriate string ( data=journal ) to the options section of your /etc/fstab , or specify the -o data=journal command-line option when calling mount from the command line. </li></ul></ul></ul><ul><ul><ul><li>To specify the data journaling method used for root file systems ( data=ordered is the default), you can use a special kernel boot option called rootflags . </li></ul></ul></ul><ul><ul><ul><li>To force the root file system into full data journaling mode, add rootflags=data=journal to the boot options. </li></ul></ul></ul>
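For instance, an /etc/fstab entry selecting full data journaling might look like the following; the device name and mount point are placeholders, not values from the text.

```
# device     mount point   type   options                 dump pass
/dev/hda5    /data         ext3   defaults,data=journal   1    2
```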
  70. 70. Disk Performance <ul><li>data=writeback Mode </li></ul><ul><ul><li>In data=writeback mode, Ext3 does not do any form of data journaling at all. </li></ul></ul><ul><ul><li>In this mode, Ext3 provides journaling similar to that found in XFS file systems; that is, only the metadata is actually journaled. </li></ul></ul><ul><ul><li>This could allow recently modified files to become corrupt in the event of an unexpected crash or reboot. </li></ul></ul><ul><ul><li>Despite this drawback, data=writeback mode should give the best performance under most conditions. </li></ul></ul>
  71. 71. Disk Performance <ul><li>data=ordered Mode </li></ul><ul><ul><li>In data=ordered mode, Ext3 only journals metadata, but it logically groups metadata and data blocks into a single unit called a transaction. </li></ul></ul><ul><ul><ul><li>When it is time to write the new metadata out to disk, the associated data blocks are written first. </li></ul></ul></ul><ul><ul><ul><li>The data=ordered mode solves the corruption problem found in data=writeback mode, and does so without requiring full data journaling. </li></ul></ul></ul><ul><ul><ul><li>In general, data=ordered Ext3 file systems perform slightly slower than data=writeback file systems, but significantly faster than their full data journaling counterparts. </li></ul></ul></ul>
  72. 72. Disk Performance <ul><li>data=ordered Mode </li></ul><ul><ul><li>When appending data to files, data=ordered mode provides all of the integrity guarantees offered by Ext3’s full data journaling mode. </li></ul></ul><ul><ul><ul><li>However, if part of a file is being overwritten and the system crashes, it is possible that the region being written will contain a combination of original blocks interspersed with updated blocks. </li></ul></ul></ul><ul><ul><ul><li>This can happen because data=ordered provides no guarantees as to which blocks are overwritten first, and therefore you cannot assume that just because overwritten block x was updated that overwritten block x-1 was updated as well. </li></ul></ul></ul><ul><ul><ul><li>Instead, data=ordered leaves the write ordering up to the hard drive’s write cache. </li></ul></ul></ul><ul><ul><li>In general, this limitation does not end up negatively impacting system integrity very often, in that file appends are usually much more common than file overwrites. </li></ul></ul><ul><ul><ul><li>For this reason, data=ordered mode is a good higher-performance replacement for full data journaling. </li></ul></ul></ul>
  73. 73. Disk Performance <ul><li>data=journal Mode </li></ul><ul><ul><li>The Ext3 data=journal mode provides full data and metadata journaling. </li></ul></ul><ul><ul><ul><li>All new data is written to the journal first, and then to the disk. </li></ul></ul></ul><ul><ul><ul><li>In the event of a crash, the journal can be replayed, bringing both data and metadata into a consistent state. </li></ul></ul></ul><ul><ul><ul><li>Theoretically, data=journal mode is the slowest journaling mode, in that data gets written to disk twice rather than once. </li></ul></ul></ul><ul><ul><ul><li>However, it turns out that in certain situations data=journal mode can be blazingly fast. </li></ul></ul></ul><ul><ul><li>Ext3’s data=journal mode is incredibly well suited to situations in which data needs to be read from and written to disk at the same time. </li></ul></ul><ul><ul><ul><li>Therefore, Ext3’s data=journal mode, assumed to be the slowest of all Ext3 modes in nearly all conditions, actually turns out to have a major performance advantage in busy environments for which interactive I/O performance needs to be maximized. </li></ul></ul></ul>
  74. 74. Disk Performance <ul><li>TIP: On busy (Linux) NFS servers, the server may experience a huge storm of disk-write activity every 30 seconds when the kernel forces a sync operation. The following command will cause the system to run kupdate every 0.6 seconds rather than every 5 seconds. In addition, the command will cause the kernel to flush a dirty buffer after 3 seconds, rather than after the default of 30 seconds. </li></ul><ul><ul><li>echo 40 0 0 0 60 300 60 0 0 > /proc/sys/vm/bdflush </li></ul></ul>
  75. 75. Disk Performance <ul><li>BSD Disk System Performance </li></ul><ul><ul><li>The Berkeley file system, when used on BSD-derived operating systems, also provides some methods for improving the performance of the file system. </li></ul></ul><ul><ul><ul><li>For file servers with memory to spare, it is possible to increase BUFCACHEPERCENT . </li></ul></ul></ul><ul><ul><ul><li>That is, it is possible to increase the percentage of system RAM used as file system buffer space. </li></ul></ul></ul><ul><ul><ul><li>To increase BUFCACHEPERCENT , add a line to the kernel configuration similar to the following. </li></ul></ul></ul><ul><ul><ul><ul><li>option BUFCACHEPERCENT=30 </li></ul></ul></ul></ul><ul><ul><ul><li>You can set the BUFCACHEPERCENT value as low as 5% (the default) or as high as 50%. </li></ul></ul></ul>
  76. 76. Disk Performance <ul><li>BSD Disk System Performance </li></ul><ul><ul><li>Another method that can be used to speed up the file system is softupdates . </li></ul></ul><ul><ul><ul><li>One of the slowest operations in the traditional BSD file system is updating metainfo , which happens when applications create or delete files and directories. </li></ul></ul></ul><ul><ul><ul><li>softupdates attempts to update metainfo in RAM instead of writing to the hard disk for every metainfo update. </li></ul></ul></ul><ul><ul><ul><li>An effect of this is that the metainfo on disk should always be complete, although not always up to date. </li></ul></ul></ul>
77. 77. Disk Performance <ul><li>Network File Systems </li></ul><ul><ul><li>Network file systems are “at the mercy” of two bottlenecks: the disk system on the server and the network link between the server and the client. </li></ul></ul><ul><ul><ul><li>One way to improve performance of NFS file systems is to use the TCP protocol for transport instead of the UDP protocol. </li></ul></ul></ul><ul><ul><ul><li>Some operating systems (Solaris, among others) have already made TCP their default transport for this reason. </li></ul></ul></ul><ul><ul><li>Another way to improve the performance of NFS is to increase the size of the data chunks sent and received. </li></ul></ul><ul><ul><ul><li>In NFS v2, the chunk size is 8 kilobytes; in NFS v3, the chunk size is up to 32 kilobytes. </li></ul></ul></ul>
  78. 78. Disk Performance <ul><li>cachefs </li></ul><ul><ul><li>Another way to improve NFS performance is to use a client-side cache. </li></ul></ul><ul><ul><ul><li>By default, the NFS file system does not provide client-side caching. </li></ul></ul></ul><ul><ul><ul><li>Allocating memory or disk space as a cache for network files can improve the performance of the client system at the cost of local disk space or memory for jobs. </li></ul></ul></ul><ul><ul><ul><li>cachefs provides huge improvements for read-only (or “read-mostly”) file systems. A good size for the cachefs is 100 to 200 megabytes. </li></ul></ul></ul>
79. 79. Performance Analysis <ul><li>Network I/O can be very critical in a heavily networked environment. </li></ul><ul><ul><li>NFS/AFS performance relies on the network performance. </li></ul></ul><ul><ul><ul><li>NFS is extremely dependent (no cache) </li></ul></ul></ul><ul><ul><ul><ul><li>Large transfer size (8 KB in v2, 32 KB in v3) </li></ul></ul></ul></ul><ul><ul><ul><ul><li>UDP implementation </li></ul></ul></ul></ul><ul><ul><ul><li>AFS is less dependent (has disk cache) </li></ul></ul></ul><ul><ul><ul><li>Web servers are very network sensitive </li></ul></ul></ul><ul><ul><ul><ul><li>Lots of small transfers (input) </li></ul></ul></ul></ul><ul><ul><ul><ul><li>Lots of larger transfers (output) </li></ul></ul></ul></ul><ul><ul><ul><ul><li>Also disk system dependent. </li></ul></ul></ul></ul><ul><ul><ul><li>Use netstat to view network statistics. </li></ul></ul></ul><ul><ul><ul><li>Use nfsstat to look at NFS statistics. </li></ul></ul></ul>
  80. 80. Performance Analysis <ul><li>How can you improve network I/O? </li></ul><ul><ul><li>Change network connections to switched technology. </li></ul></ul><ul><ul><ul><li>Upgrade half duplex to full duplex </li></ul></ul></ul><ul><ul><li>Change to a faster network technology </li></ul></ul><ul><ul><ul><li>Upgrade 10 Mbit to 100 Mbit Ethernet. </li></ul></ul></ul><ul><ul><li>Fewer hosts on the network. </li></ul></ul><ul><ul><ul><li>Less traffic on critical links == more headroom. </li></ul></ul></ul><ul><ul><li>Faster network core. </li></ul></ul><ul><ul><li>Off-site caching </li></ul></ul><ul><ul><ul><li>Particularly useful for web services </li></ul></ul></ul><ul><ul><li>Create dense request packets </li></ul></ul><ul><ul><li>Use keepalives </li></ul></ul>
  81. 81. Network Performance <ul><li>Trunking </li></ul><ul><ul><li>Network adapters are engineered to provide specific bandwidth to the system. </li></ul></ul><ul><ul><ul><li>A 10-megabit Ethernet adapter will typically provide 6 to 10 megabits per second of bandwidth when operating in half-duplex mode. </li></ul></ul></ul><ul><ul><ul><li>You can improve the performance of the network subsystem by configuring the adapter to operate in full duplex mode. </li></ul></ul></ul><ul><ul><ul><li>But what do you do if you need 500 megabits of throughput from a database server to the corporate web server? </li></ul></ul></ul><ul><ul><li>Many vendors allow you to cluster multiple network interfaces to provide improved performance. </li></ul></ul><ul><ul><ul><li>By clustering interfaces, you can also provide redundancy, as the system will continue to operate, albeit at lower performance levels, if one interface fails. </li></ul></ul></ul><ul><ul><ul><li>An example of trunking is the Sun Multipath package. </li></ul></ul></ul>
  82. 82. Network Performance <ul><li>Trunking </li></ul><ul><ul><li>NOTE: Some applications seem to have problems working with systems using network trunking. At least one popular utility that allows UNIX servers to operate as Apple file/print servers experiences difficulties (and horrible performance) when used in a trunk environment! </li></ul></ul><ul><li>Collisions </li></ul><ul><ul><li>Anytime a packet is involved in a collision, network performance suffers. </li></ul></ul><ul><ul><ul><li>The damaged packet(s) will need to be retransmitted, adding load to an already overburdened network. </li></ul></ul></ul><ul><ul><ul><li>Consider transitioning all connections to switched hardware. </li></ul></ul></ul><ul><ul><ul><li>The switched hardware may still experience collisions, but usually at much lower rates than shared-mode hardware. </li></ul></ul></ul>
  83. 83. Network Performance <ul><li>TCP_NODELAY </li></ul><ul><ul><li>Under most circumstances, TCP sends data when it is “handed” to the TCP stack. </li></ul></ul><ul><ul><ul><li>When outstanding data has not yet been acknowledged, TCP gathers small amounts of output to be sent in a single packet once an acknowledgment has been received. </li></ul></ul></ul><ul><ul><ul><li>For a small number of clients, such as windowing systems that send a stream of mouse events that receive no replies, this process may cause significant delays. </li></ul></ul></ul><ul><ul><ul><li>To circumvent this problem, TCP provides a socket-level option, TCP_NODELAY , which may be used to tune the operation of the TCP stack in regard to these delays. </li></ul></ul></ul><ul><ul><ul><li>Enabling TCP_NODELAY can improve the performance of certain network communications. </li></ul></ul></ul>
  84. 84. Network Performance <ul><li>HIWAT/LOWAT </li></ul><ul><ul><li>Most operating systems make certain assumptions about the type of network connection likely to be encountered. </li></ul></ul><ul><ul><ul><li>These assumptions are used to set the size and number of network buffers that can be used to hold inbound and outbound packets. </li></ul></ul></ul><ul><ul><li>Systems with several network interfaces, or systems that are connected to very high-speed networks, may realize an improvement in network performance by increasing the number of buffers available to the network stack. </li></ul></ul><ul><ul><ul><li>This is typically accomplished by tuning variables in the TCP stack. </li></ul></ul></ul><ul><ul><ul><ul><li>Under Solaris and HP-UX (among others), you can tune the number of buffers by setting the hiwat and lowat variables via the ndd command. </li></ul></ul></ul></ul>
85. 85. Common Sense <ul><ul><li>Don’t overload the system. </li></ul></ul><ul><ul><ul><li>Unix does not deal well when presented with overload conditions. The same is true for NT. </li></ul></ul></ul><ul><ul><ul><ul><li>ALWAYS keep at least 10% free space on disk partitions. </li></ul></ul></ul></ul><ul><ul><ul><ul><li>Keep 35% free bandwidth on network links. </li></ul></ul></ul></ul><ul><ul><ul><ul><li>Eliminate swapping! </li></ul></ul></ul></ul><ul><ul><ul><ul><li>Try to keep 30 to 50% free cycles on CPUs. </li></ul></ul></ul></ul><ul><ul><li>Don’t run accounting/quotas if your goal is peak performance. </li></ul></ul><ul><ul><li>Run system scripts late at night (or other off-hours). </li></ul></ul><ul><ul><li>Don’t run backups during peak hours. </li></ul></ul>
86. 86. Common Sense <ul><ul><li>Watch for runaway jobs. </li></ul></ul><ul><ul><ul><li>Users hate “killer” programs, but they do have their place! </li></ul></ul></ul><ul><ul><li>Watch for hardware problems that might aggravate performance problems. </li></ul></ul><ul><ul><ul><li>Disk errors will cause the disk system to retry - this makes the system slower! </li></ul></ul></ul><ul><ul><ul><li>Network errors require retransmission of packets – this makes the system slower! </li></ul></ul></ul><ul><ul><ul><li>Slow (or speed-mismatched) memory DIMMs may cause the system to stall (wait states) – this makes the system slower! </li></ul></ul></ul><ul><ul><li>Try to run “native mode” programs. </li></ul></ul><ul><ul><ul><li>Binary compatibility mode is slow. </li></ul></ul></ul><ul><ul><ul><li>VMware and other “virtual environments” can be very slow. </li></ul></ul></ul><ul><ul><ul><li>Interpreted languages can be slow (sh code vs. C code, compiled Java vs. interpreted Java). </li></ul></ul></ul>
  87. 87. Common Sense <ul><li>Watch for stupid programmer tricks </li></ul><ul><ul><ul><li>Walking backward through arrays defeats cache </li></ul></ul></ul><ul><ul><ul><ul><li>Try to optimize loops such that critical data resides in cache. </li></ul></ul></ul></ul><ul><ul><ul><ul><li>Sparse matrix operations can be avoided! </li></ul></ul></ul></ul><ul><ul><ul><li>Single character reads/writes defeat buffering </li></ul></ul></ul><ul><ul><ul><ul><li>User should read large blocks into a buffer in their code, then work from this buffer. </li></ul></ul></ul></ul><ul><ul><ul><li>File open/file close operations are slow </li></ul></ul></ul><ul><ul><ul><li>Rabbit jobs </li></ul></ul></ul><ul><ul><ul><li>Background processes </li></ul></ul></ul><ul><ul><ul><li>Zombie processes </li></ul></ul></ul>
  88. 88. Summary <ul><li>System performance analysis and tuning is an iterative process. </li></ul><ul><li>The sysadmin must use scientific methodology when attempting to tune a system’s performance. </li></ul>