Solitaire Interglobal Whitepaper: DB2 Performance on IBM System p® and System x®

Introduction

One of the most difficult architectural component decisions in today's information management environment is the choice of database management technology. By far the largest market share in this type of software is held by the Oracle®, Microsoft SQL Server® and IBM DB2 products. In a recent release of an ongoing study by the Solitaire Interglobal Ltd. (SIL) research team, the production behaviors of Oracle and DB2 on the IBM® System p® platform, and of Oracle, SQL Server and DB2 on the IBM® System x® platform, were analyzed and compared. This release is a scheduled part of the continuing Operational Characterization Master Study (OPMS) that has been conducted over the last 19 years by the SIL research team. OPMS is used in support of industry standards and performance certification worldwide.

The characteristics of the different DBMS products have been distilled from a review of more than 2,354,000 data points covering over 4,100 closely watched production comparisons, with more being collected each day. The relative advantages translate into hard cost savings for each customer that can substantially affect the bottom-line cost of ownership. The real-world effect on business is considerable. This is most achievable when a good fit between the DBMS and hardware platform is employed. Allowing the strengths of the components to mingle is one of the best ways to support the customer in their continuing quest for lower costs and improved user satisfaction.

During this study, the main behavioral characteristics of both hardware and DBMS were examined closely. These traits affect the overt capacity and reliability of the combined architecture. The resultant behavior has to be examined within this conceptual framework to be clearly understood and to prepare that understanding for projection to other situations. Although the raw performance of a subject system is an important metric, the translation of that performance into business terms is more germane to today's market. In this venue, the issue of relative costs can be a more significant metric than base performance. This measure encompasses a myriad of other factors, including reliability, staffing levels, vendor service responsiveness and time-to-market effects.

Solitaire Interglobal Ltd., One Douglas Avenue, Suite 200, Elgin IL 60120. Phone (847) 931-9100, FAX (847) 931-7938, www.sil-usa.com
Benchmarks versus Production Behavior

The SIL research activity focuses on the capture of actual customer production information, rather than the construction and observation of benchmark tests. One of the main reasons for this strategy is that benchmarks tend to be artificial constructs, which miss some (or all) of the important behavioral combinations. Although every effort can be devoted to making a comprehensive test bed, the sheer number of combinations can be defeating.

This problem was succinctly illustrated by a corollary study performed by SIL. In this study, 250 benchmark evaluation efforts were tracked against the number of unique combinations of system activities that occurred within 18 calendar months. All of these benchmarks were set up to fully define the activities of the subject systems. These activities were further weighted with the performance characteristics and timings that made a significant impact on the operations of the testing organization. The tracked number of activities that were carefully defined to the test system is shown below. This is identified by organization number, which was assigned in the order that the organization was incorporated into the study, and therefore indicates no priority or relative precedence among the scenarios.

[Chart: Benchmark-defined Activities. Test count (0-250) by client ID (1-250).]
The actual number of unique activity combinations that were identified during the subsequent 18-month testing period varied significantly from those defined by the test bed. While not making any assertion concerning the relative importance of the different activity combinations, the gap between the defined test activities and the naturally occurring ones is large enough to cast doubt on the comprehensiveness of the benchmarks.

[Chart: Activity Comparison, Benchmark versus Actual. Activity count (0-350) by client ID (1-250); series: benchmark activities, actual activities.]

While the relative importance is difficult to judge across the overall scope of activities in this comparison, it is possible to look at the performance sensitivity of both the test-defined and undefined activities. In this analysis, the activities shown are those that were not defined in the test bed and that showed a resultant performance load exceeding the maximum threshold delineated by the benchmark test. Since each one of these excessive activities represents a significant risk to the operational system, the large number shown in this analysis identifies a notable area of exposure.
[Chart: Activity Sensitivity, Actual Impact over Tested. Untested sensitive activities; problem count (0-60) by client ID (1-250).]

Another reason for the SIL approach is that benchmark tests most frequently span too small a cross-section of actual activity to produce a significant difference in the recorded observations. Also, the combinations of values that are used to test and set definitive boundaries tend to occur only within the benchmark, and are seldom found in actual production. Therefore, the correlation of the benchmark to actual customer production operations is very small.

The approach taken by SIL uses a compilation and correlation of operational production behavior, using real systems and real business activities. Some of these systems have provided detailed side-by-side comparison runs of Oracle, SQL Server and DB2, while others have contributed to a pool of systems running with similar volumetrics. For the purposes of this investigation, over 650 AIX® and 3,450 Intel-based production systems were observed, recorded and analyzed to substantiate the findings.
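The coverage-gap analysis described above is mechanical enough to sketch in a few lines. The following Python fragment is a minimal illustration only, not SIL's actual tooling; the activity names, load values and threshold are all hypothetical.

```python
# Minimal sketch of the benchmark coverage-gap analysis described above.
# All activity names, loads and the threshold are hypothetical; SIL's
# actual instrumentation is not public.

benchmark_defined = {"card_verification", "card_approval", "posting_payment"}

observed_peak_load = {          # activity -> peak production load (relative)
    "card_verification": 0.7,
    "card_approval": 0.9,
    "posting_payment": 0.8,
    "fraud_analysis": 1.6,      # never exercised by the benchmark
    "interest_recalc": 1.2,     # never exercised by the benchmark
}

benchmark_max_load = 1.0        # maximum threshold delineated by the benchmark

untested = set(observed_peak_load) - benchmark_defined
risky = sorted(a for a in untested if observed_peak_load[a] > benchmark_max_load)

print(f"benchmark covered {len(benchmark_defined)} of {len(observed_peak_load)} activities")
print(f"untested activities exceeding the benchmark threshold: {risky}")
```

Activities that fall in the `risky` set correspond to the exposure area identified in the sensitivity chart: production work the benchmark never exercised, running above the load the benchmark certified.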
DBMS Characteristics

The basic memory handling is considerably different among the DBMS products, with unique patterns of memory usage and reclamation significantly affecting the required memory levels, refresh patterns and overall workload. Each of these areas of differentiation can be clearly seen in the details of the comparative tests. While each test is unique, a substantive trend can be seen in the distinctive handling of memory for each DBMS.

Hardware Characteristics

The IBM equipment has definitive characteristics of its own to contribute to the overall behavioral mixture. These traits are actually a combination of the operating system and the physical hardware that the operating system addresses. For the System p running AIX, the salient characteristics are:

• Width of the I/O channels
• High level of start I/Os that the AIX operating system supports
• Efficiency of the memory-handling algorithms
• Network port addressing efficiencies

Obviously, the same hardware running a different operating system will demonstrate different combined qualities. These marks of the hardware and operating system provide an interesting amalgamation with DBMS activity. The I/O trait is a dual-dimensioned factor. The first dimension is the number of bytes that can be sent along the necessary platform pathways, while the second dimension defines the sheer number of I/O activities that can be initiated in a single second on one platform processor. The conduit restriction of the I/O channels is a function of both of these factors; a worked example follows below.

A similar synergy can be seen in the combination of the System x platform and the DBMS product, although the differences from other Intel-based servers are not as pronounced. Here the contributing hardware characteristics are the breadth of the I/O channel on the System x and the efficiency of the memory algorithms.
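As a rough illustration of how the two I/O dimensions combine, effective channel throughput is capped by whichever limit is reached first: the byte width of the pathway or the start-I/O rate. The figures below are invented for illustration and do not describe any specific IBM model.

```python
# Illustrative only: how channel width and start-I/O rate jointly cap
# throughput. The figures are invented, not specifications.

def effective_throughput(channel_mb_per_s: float,
                         start_ios_per_s: float,
                         avg_io_kb: float) -> float:
    """Effective MB/s is the lesser of the byte limit and the I/O-rate limit."""
    io_rate_limit = start_ios_per_s * avg_io_kb / 1024.0  # MB/s from I/O count
    return min(channel_mb_per_s, io_rate_limit)

# Small transfers exhaust the start-I/O budget before the channel width:
print(effective_throughput(2000.0, 150_000, 4.0))    # ~585.9 MB/s, I/O-bound
# Large transfers saturate the channel width instead:
print(effective_throughput(2000.0, 150_000, 64.0))   # 2000.0 MB/s, width-bound
```

This is why a DBMS whose buffering issues fewer, larger I/Os can benefit disproportionately on a platform with a high start-I/O ceiling.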
Study Scope

The evaluation domain included platforms from IBM and other vendors. This broad scope was used in order to crystallize an understanding of the impact of DBMS and platform synergy. Once the domain was collected, the relative degree of difference in performance on each platform set was compared to provide the net effect of the respective combinations.

These comparisons were run in one of two general topologies: side-by-side and correlated. The side-by-side methodology allows the system options to have separately dispatched activities. In this architecture, none of the executing platforms is dependent on the execution or completion of any of the others. Due to this lack of dependency, no allowance was made for serial latency in the analyzed timings.

The second architecture (referred to as "correlated" in this document) identifies one executing system as the parent or original operation. Each transaction or execution is first sent to that system. Then, depending on the type of spawning mechanism, the same request can be sent on immediately to the next system, and the next, and so on. In the test cases, only the DSS system used the correlated form of the parallel testing mechanism. Therefore, the DSS performance was adjusted to eliminate the dependency figures and latencies.

The following diagram illustrates the difference in execution architecture between the two methods.

[Diagram: Dispatching Architectures, contrasting side-by-side and correlated execution.]

The details of the different test comparisons were intriguing and provide good insight into the relative efficiencies of the possible system deployments.
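The two dispatching topologies can be mimicked in a few lines of code. The sketch below is a simplified model under stated assumptions (identical request streams, and a fixed per-hop forwarding latency in the correlated case); it is not SIL's harness, and the system names and timings are invented.

```python
# Simplified model of the two dispatching topologies described above.
# Assumptions (not from the study): identical request streams, and a fixed
# per-hop forwarding latency in the correlated topology.

from dataclasses import dataclass

@dataclass
class System:
    name: str
    service_ms: float  # stand-in for the platform's real service time

    def execute(self, request: str) -> float:
        return self.service_ms

def side_by_side(request: str, systems: list) -> dict:
    """Each platform receives the request independently; no serial latency."""
    return {s.name: s.execute(request) for s in systems}

def correlated(request: str, parent: System, children: list,
               hop_latency_ms: float = 1.0) -> dict:
    """The parent executes first; the request is forwarded down the chain.
    Accumulated forwarding latency is subtracted from each child's timing,
    mirroring the adjustment applied to the DSS figures in the study."""
    results = {parent.name: parent.execute(request)}
    accumulated = 0.0
    for child in children:
        accumulated += hop_latency_ms
        results[child.name] = child.execute(request) - accumulated
    return results

a, b, c = System("p-DB2", 4.0), System("p-Oracle", 5.5), System("x-DB2", 6.0)
print(side_by_side("query-17", [a, b, c]))
print(correlated("query-17", a, [b, c]))
```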
System p Comparisons

The projects (and their method of comparison) that were examined for the System p platforms are:

• Credit card processes – side-by-side
• E-mail posting – side-by-side
• Web usage – side-by-side
• Customer support (CRMS) activities – side-by-side
• Decision Support System (DSS) – side-by-side
• Transactions – correlated

Many of the systems were run either in full or partial parallel for comparison purposes, while operating in full production. All of the System p machines tested were in the p5xx model range and used the p6 architecture. The number of separate organizations, along with the expanded venues that were run at those locations, is shown in the following table.

Study Structure – System p Comparison Components

System Type      Core Systems    Comparison Option Count
Credit Card                41                         62
Email                     121                        173
Web Usage                 306                      1,193
CRMS                       27                         70
DSS                        41                        223
Transaction               117                        456
Total                     653                      2,177

Therefore, the total UNIX®/AIX comparison environment was comprised of 2,177 separate comparison options on 653 core systems. The individual projects were comprised of normal business activity in the subject areas. The activity was tracked by the contributing business action, including the number of expected rows per query, etc. A short list of the individual actions can be seen in Appendix A for each general project type.

Performance

The first area of examination is the base performance of the relative systems. A detailed comparison of the client tests, showing Oracle and DB2 performance characteristics, is represented by the system-specific graphs below.
[Chart: Credit Card Authorization Activities. TPS (0-40,000) by activity ID (1-27); DB2 versus Oracle.]

[Chart: Email Activities. TPS (0-35,000) by activity ID (1-10); DB2 versus Oracle.]
[Chart: Web Activities. TPS (0-60,000) by activity ID (1-12); DB2 versus Oracle.]

[Chart: CRMS. TPS (0-40,000) by activity ID (1-10); DB2 versus Oracle.]
[Chart: Transaction Processing Activities. TPS (0-60,000) by activity ID (1-63); DB2 versus Oracle.]

[Chart: DSS Activities. TPM (0-1,000) by activity ID (1-88); DB2 versus Oracle.]

The operational characteristics and efficiencies of the DB2 and Oracle comparisons show that the DB2 DBMS demonstrates a good range of differentiation when hosted on IBM platforms.
The most critical factor in the transaction efficiency appears to rest on two elements within the DB2 architecture: memory handling and buffering. The efficiency of the DB2 memory handling shows faster throughput, although the variations in response are more widely spread than those of Oracle with its shared pool allocation. Even if the shared pool dynamics are removed, the optimized buffering mechanisms for storage and retrieval provide clear advantages in efficiency.

The amount of divergence varies widely, based on application, platform, etc. When DB2 is hosted on IBM System p equipment, the range of improved performance is heavily clustered, with a synergistic alignment of memory-handling optimizations in both DBMS and hardware. This provides a more sharply defined demarcation for answering the DBMS uncertainty. The summation of the differences in performance can be seen in the following chart.

[Chart: Average TPS Summation – pSeries. Average TPS (0-40,000) for DB2 versus Oracle across CCA, e-mail, web usage, CRMS and transaction processing application types.]

The DSS comparison does not lend itself to the transaction-per-second metric, and so its comparison was done on a different basis and is summarized in the chart below.
[Chart: Average Query Per Minute Summation – pSeries. Average QPM (0-600) for DB2 versus Oracle on the DSS application type.]

The improved efficiency of the DB2 product on the System p is significantly higher than on the overall equipment population domain, ranging from 15.1% through 37.0% on the transaction-based metrics and averaging 58.6% better on the query-based metric. Once again this can be traced to the efficiencies in memory handling and buffering. Even more significant is the small mean variance for each of the application type comparisons. The data values representing DBMS execution on all platform types varied by approximately 22.8 points; for the System p and DB2 combination, the variance was less than 3.6 points. This is significant in that it increases the predictability of performance behavior. Since the dependability of the projections is indicated by how closely the observations are clustered, this tight correlation is important.

Business Measures

Many businesses consider the total cost and risk of ownership to be key metrics when evaluating purchase options. Some of the contributory metrics that go into building this picture are reliability, staffing, total costs, and time-to-market effects. These factors were measured during the same time period that the performance metrics were gathered for all of the subject tests. A summary of these findings is presented here, so that a more comprehensive understanding of the real benefits of the platform choice can be obtained.

Reliability

The dependability of the implementation is a combination of the individual reliability of each component, along with the quality and effectiveness of the actual implementation. As such, both the planned and unplanned outages affect the overall usability of the total system. The charts below show the number of outages that were recorded during the testing period, as well as the total number of minutes that those outages consumed.
[Chart: Normalized Outages (All). Outage count (0-14) by month (1-12); DB2 versus Oracle.]

The number of outages has been normalized for a 200-platform operation, with both planned and unplanned outages included. One of the most contributory factors is the need for reorganization and redeployment. Each of these outages takes valuable access time away from the corporate resources. The following chart shows the total number of minutes of unavailability that those outages represent.

[Chart: Normalized Monthly Outages (all). Outage time in minutes (0-2,000) by month (1-12); DB2 versus Oracle.]

When the root causes of reliability are examined, the biggest reported differences can be found in the amount and count of planned downtime occurrences. DB2 was heavily favored in this regard, with customers consistently reporting easier movement and allocation of resources without any interruption of availability. With fewer and shorter outages for normal operational activities, the overall availability and reliability of the DBMS shows some clear differentiation. The secondary contribution to the reliability was the need for fewer upgrades, which require significant outages and schedule interrupts.
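The fleet-size normalization used for these charts is a simple proportional scaling. A minimal sketch, assuming outage counts scale linearly with the number of platforms (the raw figures below are invented):

```python
# Minimal sketch of fleet-size normalization, assuming outage counts scale
# linearly with platform count. The raw figures are invented.

def normalize(raw_outages: int, observed_platforms: int,
              reference_platforms: int = 200) -> float:
    return raw_outages * reference_platforms / observed_platforms

# An operation of 340 platforms reporting 17 outages in a month maps onto
# the 200-platform basis used in the charts as:
print(normalize(17, 340))  # 10.0 outages per 200 platforms
```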
The timing and periodicity of the tracked outages is also significant, in that it displays some interesting patterns for support. Even in this relatively restricted sampling, the interval of operational support has a clearly higher frequency for the Oracle installations than for the DB2 implementations. This is supported by subjective customer feedback on the occurrence of operational activities for DBAs and storage personnel.

Staffing

The number and skill category of the staff required to support the various implementations is another way of looking at the business issues for a deployment. The chart below shows the relative staffing for a normalized operations group. This comparison can be used to understand the challenges of staffing and overall budget costs for the two DBMS implementations on IBM equipment. These staffing figures were collected from the actual operation groups measured in the other metric collection efforts, and cover organizations in North and South America, Asia, Europe, Antarctica and Australia. These organizations have reported staffing for 24x7 coverage, rather than single shift. Since a wide variety of working environments have been included in the analysis base, all the reporting has been normalized using a unit of measure called FTE (full-time equivalent), which assumes a standard 2,080-hour work year.

[Chart: Relative Staffing by Discipline. FTE (0-8.0) by skill category; DB2 versus Oracle. The skill categories match the rows of the table below.]
The source table that was used to construct this graph has also been included.

Staff Discipline                                     DB2    Oracle
Account management                                   0.1       0.1
Application management                               0.9       1.4
Backup and archiving                                 0.1       0.7
Business recovery services                           0.2       0.3
Database management and administration               1.5       4.3
Disk and file management                             0.9       1.2
HW and network configuration / re-configuration     0.3       0.3
HW deployment                                        0.3       0.3
Operations                                           4.7       7.2
OS support                                           0.3       0.3
Planning and process management                      0.7       1.0
Performance tuning                                   0.7       2.1
Repository management                                0.2       0.2
Security and virus protection                        0.1       0.1
Service desk                                         3.2       5.7
Software deployment                                  0.3       0.5
Storage capacity planning                            0.3       0.7
Systems research, planning and product management    0.1       0.1
Traffic management & planning                        0.9       0.9
User administration                                  0.2       0.5
Total staffing level                                16.0      27.9

Note that even though the number of staff to support this small section of machines shows a small variation, given FTE rates that run in excess of $96,250 in most operations, the 11.9 FTE difference accumulates quickly. This cost and risk component is extremely sensitive to the degree and quality of integration and any autonomic features. The substantial strides made in this area in recent releases have significantly affected the necessary staffing levels.
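Spelled out, the arithmetic behind that remark is straightforward; a minimal check using only the figures quoted above:

```python
# Annualized cost of the staffing difference, using only the figures quoted
# in the table and text above.
fte_delta = 27.9 - 16.0       # Oracle total FTE minus DB2 total FTE = 11.9
fte_rate = 96_250             # reported minimum FTE rate (USD per year)
print(fte_delta * fte_rate)   # 1_145_375.0 USD per year
```

At the quoted floor rate, the staffing delta alone is roughly a $1.15M annual cost item for this normalized operations group.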
One of the staffing areas that shows a large amount of difference is the service desk, or help center. This difference can be traced to a substantial variance in overall calls, and an equivalent reduction in the amount of time that is required to settle each call. Since many of the calls to the reported help desks were due to latency issues, uneven performance results and resource unavailability, the differences in DBMS reliability translate into significant support cost savings. Not only are fewer calls initiated (25.4% less), but a higher percentage of calls are resolved in Tier I analysis (31.2%).

Several significant contributing factors are present in the different resolution levels. Primarily, more support calls are resolved in Tier I because fewer spawned actions have to be performed. A spawned support action can be a further request to operations to cancel a job, restart a database instance or perform some other technically restricted service request. A lower number of calls initiated by hung jobs and overloaded database queues translates into faster resolution percentages on the support calls. The other significant contributor to the higher portion of Tier I resolutions appears to be increased transparency to database performance. While the number of performance-initiated calls has been reported as slightly higher for the DB2 DBMS, a significant portion of those calls resolve themselves during the call itself. This results in a shorter handling time per call, ranging from 17.3-45.7%. All of these factors help to form a clear differentiation in staffing.

Time-to-Market Effects

Understanding relative time-to-market metrics tends to be difficult, since it is very rare that organizations actually conduct concurrent development of similar systems. In addition, rapidly changing application development methods and tools can cause comparisons to be done between significantly different development streams, with highly skewed results. In order to avoid such a misleading situation, a set of highly concurrent and paired development efforts was analyzed carefully to understand the factors of application development and speed as correlated to a DBMS. These 25 development efforts highlight some interesting characteristics and trends.

All of the contributory factors, such as staffing, reliability, etc., radically affect the speed with which a company can move a business concept from inception to market. This nimbleness is a key element of increasing market share and continued corporate viability. While the performance metrics were gathered on the production systems, additional measurements were also collected to track the amount of time that the systems took to move from the initial conception to the full production implementation. The results of this effort are summarized in the following chart.

[Chart: Time to Market – Days. Staff days (0-1,600) by tracked system pair ID (1-25); DB2 versus Oracle.]

The 25 systems tracked for this portion of the study were paired based on either simultaneous comparative development, or function point equivalents and application type. The comparison is intended to be evocative and not quantitative, since other critical success factors can enter into this picture. However, there are some interesting contributors to the difference in the time-to-market interval. These are primarily the lower number of schedule delays and a substantially lower staff time to provision, where provisioning is defined as the creation of the resources necessary to operate a system (e.g., storage space).
The significant reduction in provisioning time has been reported as 8.2-26.7%. The reasons for schedule delays appear to be layered, but any time additional funding has to be obtained, the schedules tend to expand. Additional data points show that the tools available within the DBMS play a big role in the speed of implementation. The broad DBA performance and monitoring tools were cited in 24 of the 25 cases as a significant efficiency factor, and the historical performance analysis was identified in 19 of the situations as a major assistance in operational tuning. Hence, any reliance on the speed of implementation should take this source of efficiency into account.

Service-Oriented Architecture

One area of growing interest in the industry is the implementation of service-oriented architecture (SOA). In general, SOA operates under the tenets of:

o Reuse
o Granularity
o Modularity
o Composability
o Componentization
o Interoperability
o Compliance to standards (common and industry)
o Service categorization, including
  - Provisioning and delivery
  - Monitoring and tracking

These are the general principles. From an observed characteristic perspective, the result of these principles can be quantified by higher utilization of the resources, faster provisioning and a lower defect rate. For most organizations, success in SOA implementation can be defined as more than a 25% increase in resource utilization and a 30% or more increase in the speed of provisioning. For the purposes of this study that metric has been used; a minimal coded form of the test appears below.

While the advantages in cost containment and flexibility with SOA are widely accepted, the successful implementation of this architecture is significantly affected by the underlying DBMS and hardware components. If the success rate of SOA implementation is viewed against the DBMS, a noticeable difference is apparent. The success rate has been normalized on a basis of 100 implementations, to avoid skewing the results.
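The study's success criterion stated above can be expressed directly as code. A minimal sketch, with hypothetical before/after measurements for a single implementation:

```python
# The study's SOA success criterion, stated as code. The before/after
# measurements passed in below are hypothetical.

def soa_success(util_before: float, util_after: float,
                prov_days_before: float, prov_days_after: float) -> bool:
    """Success: >25% utilization gain and >=30% faster provisioning."""
    util_gain = (util_after - util_before) / util_before
    prov_speedup = (prov_days_before - prov_days_after) / prov_days_before
    return util_gain > 0.25 and prov_speedup >= 0.30

print(soa_success(0.40, 0.55, 20.0, 12.0))  # True: +37.5% util, 40% faster
print(soa_success(0.40, 0.46, 20.0, 18.0))  # False: fails both thresholds
```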
[Chart: SOA Implementation Success. Success percentage (0-100%) by DBMS; DB2 versus Oracle.]

All of these factors are significant when an organization selects a platform. To pay attention simply to performance is to ignore significant data and to make decisions with willfully truncated information.

System x Comparisons

As with the examination of the System p platforms, the System x platforms were investigated for raw performance and a myriad of other factors, including reliability, time-to-market effects, staffing levels and total costs. Microsoft SQL Server® was also included in this venue, due to its substantial market position in the Intel-based platform arena.

Performance

The examination of performance on the System x was restricted to the models normally used for general workload, such as the System x Model x3950 and blades such as the HS21XM. This examination was conducted within the available production environments and centered on those application workloads that are typically designated for System x, such as client ERP activity, customer support presentation layers, order entry, etc.

The delineation between the DBMS products is not as clearly defined on the System x platforms as on the System p. Primarily due to the smaller range of platform characteristics in the production domain for Intel-based systems, the System x shows a smaller, but still definitive, advantage with the DB2 product over Oracle and MS SQL Server. The projects examined under this category include platforms running Microsoft Windows or Linux operating systems. The projects (and their method of comparison) that were examined for the System x platforms are:
• ERP – side-by-side
• CRMS – side-by-side
• Web usage – side-by-side
• Order entry – side-by-side
• Retail and distribution – side-by-side
• Transactions – correlated

Many of the systems were run either in full or partial parallel for comparison purposes, in full production. The number of separate organizations, along with the expanded venues that were run at those locations, is shown in the following table.

Study Structure – System x Comparison Components

System Type                Core Systems    Comparison Option Count
ERP                                  53                         70
CRMS                                384                        783
Order Entry                         243                        877
Retail and Distribution              97                        217
Web Usage                         2,034                     15,886
Transaction                         647                      2,646
Total                             3,458                     20,480

Therefore, this total comparison environment was comprised of 20,480 separate comparison options on 3,458 core systems. The individual projects were comprised of normal activity in the subject areas. The activity was tracked by the contributing business action, including the number of expected rows per query, etc. A short list of the individual actions can be seen in Appendix A.

A detailed comparison of the client tests, showing Oracle, MS SQL Server and DB2 performance characteristics, is represented by the following system-specific graphs.

[Chart: ERP Activities. TPS (0-8,000) by activity ID (1-11); DB2 versus MS SQL Server versus Oracle.]
[Chart: CRMS Activities. TPS (0-9,000) by activity ID (1-10); DB2 versus MS SQL Server versus Oracle.]

[Chart: Web Activities. TPS (0-20,000) by activity ID (1-12); DB2 versus MS SQL Server versus Oracle.]
[Chart: Order Entry Activities. TPS (0-14,000) by activity ID (1-36); DB2 versus MS SQL Server versus Oracle.]

[Chart: Retail and Distribution Activities. TPS (0-10,000) by activity ID (1-10); DB2 versus MS SQL Server versus Oracle.]
[Chart: Transaction Processing Activities. TPS (0-18,000) by activity ID (1-62); DB2 versus MS SQL Server versus Oracle.]

The amount of divergence between the performance of DB2, MS SQL Server and Oracle on System x is not as wide as in the System p and UNIX arena, but it still varies with application, platform, etc. The summation of the differences in performance can be seen in the following chart.

[Chart: Average TPS Summary. Average TPS (0-16,000) for Oracle, MS SQL Server and DB2 across ERP, web, CRMS, order entry, retail and distribution, and transaction processing.]
The improved efficiency of the DB2 product on the System x is notably higher than on the overall equipment population domain, ranging from 26.6% through 81.4% on all of the transaction-based metrics, compared to Oracle. The comparison with MS SQL Server shows increased efficiency of 20.1% through 56.8%.

Business Measures

As in the UNIX environment, the total cost and risk of ownership remain the primary focus when evaluating purchase options. The contributory metrics that go into building this picture are still reliability, staffing, total costs and time-to-market effects. These factors were measured during the same time period that the performance metrics were gathered for all of the subject tests for the Intel-based test group. As in the UNIX section, a summary of these findings is presented here.

Reliability

The dependability of the implementation is a combination of the individual reliability of each component, along with the quality and effectiveness of the actual implementation. As such, both the planned and unplanned outages affect the overall usability of the total system. The charts below show the number of outages that were recorded during the testing period, as well as the total number of minutes that those outages consumed.

[Chart: Normalized Monthly Outages (all). Outage count (0-25) by month (1-12); DB2 versus MS SQL Server versus Oracle.]

The number of outages has been normalized for a 75-platform operation, with both planned and unplanned outages included.
Since the normal learning curve may skew the results, the first three months of any project incorporating a previously unknown operational architecture have also been excluded. Among the remaining factors, one of the most prevalent contributors is the need for database reorganization and redeployment. Each of these outages takes valuable access time away from the corporate resources. The following chart shows the total number of minutes of unavailability that those outages represent.

[Chart: Normalized Monthly Outage Time (all). Minutes (0-450) by month (1-12); DB2 versus MS SQL Server versus Oracle.]

The architectural characteristics of the different DBMS products result in widely varied patterns of tuning and "burn-in", where procedures and methods are established for a particular implementation. It is notable that while the number of outages for DB2 is higher during the initial quarter-year, the overall outage time is significantly lower than would be expected, with lower per-outage time averages, as can be seen in the following chart.

[Chart: Average Time per Outage. Time in minutes (0-30) by month (1-12); DB2 versus MS SQL Server versus Oracle.]
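The per-outage average and the burn-in exclusion are simple to reproduce. A minimal sketch, with invented monthly figures standing in for the recorded data:

```python
# Average time per outage with the three-month burn-in excluded, as described
# above. The monthly figures are invented for illustration.

outage_counts  = [22, 18, 15, 9, 8, 7, 7, 6, 7, 6, 6, 5]          # per month
outage_minutes = [310, 240, 200, 120, 105, 95, 90, 80, 92, 78, 75, 66]

BURN_IN = 3  # first quarter-year excluded from the steady-state view
steady_counts = outage_counts[BURN_IN:]
steady_minutes = outage_minutes[BURN_IN:]

avg_per_outage = sum(steady_minutes) / sum(steady_counts)
print(f"steady-state average: {avg_per_outage:.1f} minutes per outage")
```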
Staffing

Another important perspective on the business issues for any deployment is the number and skills of the supporting staff. These factors represent a significant cost contribution and are indicative of the complexity and risk of the environment. The chart below shows the relative staffing for a normalized operations group. This comparison can be used to understand the challenges of staffing and overall budget costs for the three DBMS implementations on IBM equipment. These staffing figures were collected from the actual operation groups measured in the other metric collection efforts, and cover organizations in North and South America, Asia, Europe, Antarctica and Australia.

[Chart: Relative Staffing by Discipline. FTE (0-8.0) by skill category; DB2 versus MS SQL Server versus Oracle. The skill categories match the rows of the table below.]

The source table that was used to construct this graph has also been included.

Staff Discipline Detail

Staff Discipline                                     DB2    Oracle    MS SQL Server
Account management                                   0.1       0.1              0.2
Application management                               0.5       0.5              0.7
Backup and archiving                                 0.1       0.2              0.3
Business recovery services                           0.1       0.2              0.2
Database management and administration               1.3       2.0              2.1
Disk and file management                             0.5       0.8              0.6
HW and network configuration / re-configuration     0.5       0.5              0.5
HW deployment                                        0.1       0.1              0.1
Operations                                           3.0       3.8              3.5
OS support                                           0.5       0.5              0.5
Planning and process management                      0.2       0.3              0.2
Performance tuning                                   0.3       1.0              0.7
Repository management                                0.1       0.3              0.3
Security and virus protection                        0.3       0.3              0.3
Service desk                                         1.0       1.3              1.0
Software deployment                                  0.4       0.4              0.4
Storage capacity planning                            0.1       0.1              0.2
Systems research, planning and product management    0.1       0.1              0.1
Traffic management & planning                        0.4       0.6              0.4
User administration                                  0.1       0.2              0.1
Total staffing level                                 9.7      13.3             12.4

Note that even though the number of staff to support this small section of machines shows a small variation, given FTE rates that run in excess of $68,500 in most operations, the FTE difference accumulates quickly. Once again, integration and any autonomic features affect the necessary staffing levels.

One of the staffing areas that shows a difference is the service desk, or help center. This difference can be traced to a substantial variance in overall calls, and an equivalent reduction in the amount of time that is required to settle each call. Since many of the calls to the reported help desks were due to latency issues, uneven performance results and resource unavailability, the differences in DBMS reliability translate into significant support cost savings. Not only are fewer calls initiated (9.1% less), but a higher percentage of calls are resolved in Tier I analysis (12.0%). This results in a shorter handling time per call, ranging from 3.2-7.6%.

Time-to-Market Effects

All of the contributory factors, such as staffing, reliability, etc., radically affect the speed with which a company can move a business concept from inception to market. This nimbleness is a key element of increasing market share and continued corporate viability. In addition to the production performance metrics, data was also collected on various aspects of the project lifecycle. Relevant portions of the examination are summarized in the following chart.
[Chart: Time-to-Market – Average. Staff days (0-1,200) by implementation class (small, medium, large, very large); DB2 versus MS SQL Server versus Oracle.]

Note that the implementation interval is broken down by the relative size of the project. This has been done to avoid obscuring important details. The systems tracked for this portion of the study were associated based on either simultaneous comparative development, or function point equivalents and application type. The comparison is intended to be evocative and not quantitative. The average development days to market are significant indications within each application classification.

Conclusions

After extensive review of more than 2,354,000 data points covering over 4,100 closely watched production comparisons, the advantage of running DB2 on IBM System p and System x equipment is strongly supported. The advantages of the synergy between DBMS and platform translate into hard cost savings for each customer, and substantially affect the bottom-line cost of ownership. The resulting cost containment stems from advantages in performance, reliability, staffing and agility. By demonstrating higher performance levels and consistent reliability, the implementing customer can expect lower staffing levels and faster time-to-market. This in turn positions the organization for better strategic competition, profit levels and market response.

This is most achievable when a good fit between the DBMS and hardware platform is employed. Allowing the strengths of the System x and System p platforms to mingle with the strengths of the DB2 product is one of the best ways to support the customer in their continuing quest for lower costs and improved user satisfaction.
Attributions

AIX, DB2, IBM, eServer, System p, and System x are trademarks or registered trademarks of International Business Machines Corporation in the United States of America and other countries. Microsoft, Windows, SQL Server and the Windows logo are trademarks of Microsoft Corporation in the United States of America. UNIX is a registered trademark in the United States of America and other countries, licensed exclusively through The Open Group. Oracle is a registered trademark in the United States of America and other countries of ORACLE Corporation. Other company, product and service names may be trademarks or service marks of others.
Appendix A: Test Cases

UNIX Test Cases

A short list of the individual actions can be seen below for each general project type. In all of the situations, the components of the system type were carefully recorded, with overall performance and capacity characteristics analyzed closely. The findings from this analysis were then scrutinized to show root cause and the unique performance footprint that each DBMS and platform creates.

Tracked System Activity: Credit Card Processes
1. card verification
2. card approval
3. card rejection
4. merchant verification
5. merchant approval
6. merchant rejection
7. purchase amount approval
8. purchase amount rejection
9. posting purchase to cardholder account
10. posting purchase to merchant account
11. posting returns to cardholder account
12. posting returns to merchant account
13. posting cardholder payments
14. posting merchant payments
15. card limit adjustment
16. over limit notification to cardholder
17. merchant limit adjustment
18. periodic billing to cardholder
19. billing adjustments to cardholder
20. special request bill printing
21. periodic accounting to merchant
22. accounting adjustments to merchant
23. fraud analysis
24. clearinghouse batch transmission
25. cash advance posting
26. interest recalculation
27. retroactive posting calculations

A total of 41 core systems were tracked with this type of large-scale application. The subject organizations covered a mixture of full financial service companies, others that are solely focused on this market sector, and retail organizations that also offer this type of customer product. Fourteen of these organizations ran full parallel efforts on different equipment and DBMS that allowed closely detailed comparisons. The remainder ran partial comparisons that provided significant input.
Tracked System Activity: Email Posting
1. send message
2. retrieve message
3. open message
4. delete message from active
5. erase message from account
6. forward message
7. add new address
8. delete address
9. amend address
10. save draft copy of message

A total of 121 core systems were tracked with this type of application. The subject organizations covered a mixture of industries and market sectors. Sixty of these organizations ran full parallel efforts on different equipment and DBMS that allowed closely detailed comparisons. The remainder ran partial comparisons that provided significant input.

Tracked System Activity: Web Usage
1. logon
2. logoff
3. follow link
4. display image
5. access menu
6. post message
7. display post
8. update information field
9. cancel session
10. search via engine
11. search direct
12. view online AV content

A total of 306 core systems were tracked with this type of application. The subject organizations covered a mixture of industries and market sectors. Seventy-two of these organizations ran full parallel efforts on different equipment and DBMS that allowed closely detailed comparisons. The remainder ran partial comparisons that provided significant input.

Tracked System Activity: Customer Support
1. log new support call
2. update support call data
3. log completion of support call
4. lookup customer name
5. lookup customer address
6. lookup customer program information
7. lookup customer bill or invoice
8. update customer name
9. update customer address
10. update customer program information

A total of 27 core systems were tracked with this type of application. The subject organizations covered a mixture of industries and market sectors. Eleven of these organizations ran full parallel efforts on different equipment and DBMS that allowed closely detailed comparisons. The remainder ran partial comparisons that provided significant input.

Tracked System Activity: DSS
1. receive data
2. clean data
3. resolve data discrepancies
4. verify user
5. insert one row of data into single small table
6. insert one row of data into single medium table
7. insert one row of data into single large table
8. insert one row of data into each of 3-5 small tables
9. insert one row of data into each of 3-5 medium tables
10. insert one row of data into each of 3-5 large tables
11. insert one row of data into each of 3-5 combination tables
12. insert ten rows of data into single small table
13. insert ten rows of data into single medium table
14. insert ten rows of data into single large table
15. insert ten rows of data into each of 3-5 small tables
16. insert ten rows of data into each of 3-5 medium tables
17. insert ten rows of data into each of 3-5 large tables
18. insert ten rows of data into each of 3-5 combination tables
19. insert 100 rows of data into single small table
20. insert 100 rows of data into single medium table
21. insert 100 rows of data into single large table
22. insert 100 rows of data into each of 3-5 small tables
23. insert 100 rows of data into each of 3-5 medium tables
24. insert 100 rows of data into each of 3-5 large tables
25. insert 100 rows of data into each of 3-5 combination tables
26. insert 1000 rows of data into single small table
27. insert 1000 rows of data into single medium table
28. insert 1000 rows of data into single large table
29. insert 1000 rows of data into each of 3-5 small tables
30. insert 1000 rows of data into each of 3-5 medium tables
31. insert 1000 rows of data into each of 3-5 large tables
32. insert 1000 rows of data into each of 3-5 combination tables
33. roll-up data for 3-5 small tables from daily to weekly
34. roll-up data for 3-5 small tables from daily to calendar month
35. roll-up data for 3-5 small tables from daily to fiscal month
36. roll-up data for 3-5 small tables from daily to calendar quarter
37. roll-up data for 3-5 small tables from daily to fiscal quarter
38. roll-up data for 3-5 small tables from daily to calendar year
39. roll-up data for 3-5 small tables from daily to fiscal year
40. roll-up data for 3-5 medium tables from daily to weekly
41. roll-up data for 3-5 medium tables from daily to calendar month
42. roll-up data for 3-5 medium tables from daily to fiscal month
43. roll-up data for 3-5 medium tables from daily to calendar quarter
44. roll-up data for 3-5 medium tables from daily to fiscal quarter
45. roll-up data for 3-5 medium tables from daily to calendar year
46. roll-up data for 3-5 medium tables from daily to fiscal year
47. roll-up data for 3-5 large tables from daily to weekly
48. roll-up data for 3-5 large tables from daily to calendar month
49. roll-up data for 3-5 large tables from daily to fiscal month
50. roll-up data for 3-5 large tables from daily to calendar quarter
51. roll-up data for 3-5 large tables from daily to fiscal quarter
52. roll-up data for 3-5 large tables from daily to calendar year
53. roll-up data for 3-5 large tables from daily to fiscal year
54. roll-up data for 3-5 combination tables from daily to weekly
55. roll-up data for 3-5 combination tables from daily to calendar month
56. roll-up data for 3-5 combination tables from daily to fiscal month
57. roll-up data for 3-5 combination tables from daily to calendar quarter
58. roll-up data for 3-5 combination tables from daily to fiscal quarter
59. roll-up data for 3-5 combination tables from daily to calendar year
60. roll-up data for 3-5 combination tables from daily to fiscal year
61. retrieve one row of data from single small table
62. retrieve one row of data from single medium table
63. retrieve one row of data from single large table
64. retrieve one row of data from each of 3-5 small tables
65. retrieve one row of data from each of 3-5 medium tables
66. retrieve one row of data from each of 3-5 large tables
67. retrieve one row of data from each of 3-5 combination tables
68. retrieve ten rows of data from single small table
69. retrieve ten rows of data from single medium table
70. retrieve ten rows of data from single large table
71. retrieve ten rows of data from each of 3-5 small tables
72. retrieve ten rows of data from each of 3-5 medium tables
73. retrieve ten rows of data from each of 3-5 large tables
74. retrieve ten rows of data from each of 3-5 combination tables
75. retrieve 100 rows of data from single small table
76. retrieve 100 rows of data from single medium table
77. retrieve 100 rows of data from single large table
78. retrieve 100 rows of data from each of 3-5 small tables
79. retrieve 100 rows of data from each of 3-5 medium tables
80. retrieve 100 rows of data from each of 3-5 large tables
81. retrieve 100 rows of data from each of 3-5 combination tables
82. retrieve 1000 rows of data from single small table
83. retrieve 1000 rows of data from single medium table
84. retrieve 1000 rows of data from single large table
85. retrieve 1000 rows of data from each of 3-5 small tables
86. retrieve 1000 rows of data from each of 3-5 medium tables
87. retrieve 1000 rows of data from each of 3-5 large tables
88. retrieve 1000 rows of data from each of 3-5 combination tables

A total of 41 core systems were tracked with this type of application. The subject organizations covered a mixture of industries and market sectors. Sixteen of these organizations ran full parallel efforts on different equipment and DBMS that allowed closely detailed comparisons. The remainder ran partial comparisons that provided significant input.

Tracked System Activity: Transactions
1. insert one row of data into single small table
2. insert one row of data into single medium table
3. insert one row of data into single large table
4. insert one row of data into each of 3-5 small tables
5. insert one row of data into each of 3-5 medium tables
6. insert one row of data into each of 3-5 large tables
7. insert one row of data into each of 3-5 combination tables
8. insert ten rows of data into single small table
9. insert ten rows of data into single medium table
10. insert ten rows of data into single large table
11. insert ten rows of data into each of 3-5 small tables
12. insert ten rows of data into each of 3-5 medium tables
13. insert ten rows of data into each of 3-5 large tables
14. insert ten rows of data into each of 3-5 combination tables
15. insert 100 rows of data into single small table
16. insert 100 rows of data into single medium table
17. insert 100 rows of data into single large table
18. insert 100 rows of data into each of 3-5 small tables
19. insert 100 rows of data into each of 3-5 medium tables
20. insert 100 rows of data into each of 3-5 large tables
21. insert 100 rows of data into each of 3-5 combination tables
22. read data in single small table
23. read data in single medium table
24. read data in single large table
25. read data in 3-5 small tables
26. read data in 3-5 medium tables
27. read data in 3-5 large tables
28. read data in 3-5 combination tables
29. update one row of data in each of 3-5 small tables
30. update one row of data in each of 3-5 medium tables
31. update one row of data in each of 3-5 large tables
32. update one row of data in each of 3-5 combination tables
33. update ten rows of data in single small table
34. update ten rows of data in single medium table
35. update ten rows of data in single large table
36. update ten rows of data in each of 3-5 small tables
37. update ten rows of data in each of 3-5 medium tables
38. update ten rows of data in each of 3-5 large tables
39. update ten rows of data in each of 3-5 combination tables
40. update 100 rows of data in single small table
41. update 100 rows of data in single medium table
42. update 100 rows of data in single large table
43. update 100 rows of data in each of 3-5 small tables
44. update 100 rows of data in each of 3-5 medium tables
45. update 100 rows of data in each of 3-5 large tables
46. update 100 rows of data in each of 3-5 combination tables
47. delete one row of data from each of 3-5 small tables
48. delete one row of data from each of 3-5 medium tables
49. delete one row of data from each of 3-5 large tables
50. delete one row of data from each of 3-5 combination tables
51. delete ten rows of data from single small table
52. delete ten rows of data from single medium table
53. delete ten rows of data from single large table
54. delete ten rows of data from each of 3-5 small tables
55. delete ten rows of data from each of 3-5 medium tables
56. delete ten rows of data from each of 3-5 large tables
57. delete ten rows of data from each of 3-5 combination tables
58. delete 100 rows of data from single small table
59. delete 100 rows of data from single medium table
60. delete 100 rows of data from single large table
61. delete 100 rows of data from each of 3-5 small tables
62. delete 100 rows of data from each of 3-5 medium tables
63. delete 100 rows of data from each of 3-5 large tables
64. delete 100 rows of data from each of 3-5 combination tables

A total of 117 core systems were tracked with this type of application. The subject organizations covered a mixture of industries and market sectors. Fifty-one of these organizations ran full parallel efforts on different equipment and DBMS that allowed closely detailed comparisons. The remainder ran partial comparisons that provided significant input.

Intel-based Test Cases

Tracked System Activity: ERP
1. refresh screen
2. move from one screen to another
3. add a customer
4. update customer data
5. terminate a customer
6. add a vendor
7. update vendor data
8. terminate a vendor
9. add an employee
Intel-based Test Cases

Tracked System Activity: ERP
1. refresh screen
2. move from one screen to another
3. add a customer
4. update customer data
5. terminate a customer
6. add a vendor
7. update vendor data
8. terminate a vendor
9. add an employee
10. update employee data
11. terminate an employee

A total of 53 core systems were tracked with this type of large-scale application. The subject organizations covered a mixture of full financial-service companies, organizations focused solely on this market sector, and retail organizations that also offer this type of customer product. Twelve of these organizations ran full parallel efforts on different equipment and DBMS combinations, which allowed closely detailed comparisons. The remainder ran partial comparisons that provided significant input.

Tracked System Activity: Customer Support
1. log new support call
2. update support call data
3. log completion of support call
4. lookup customer name
5. lookup customer address
6. lookup customer program information
7. lookup customer bill or invoice
8. update customer name
9. update customer address
10. update customer program information

A total of 384 core systems were tracked with this type of application. The subject organizations covered a mixture of industries and market sectors. Eighty-six of these organizations ran full parallel efforts on different equipment and DBMS combinations, which allowed closely detailed comparisons. The remainder ran partial comparisons that provided significant input.

Tracked System Activity: Retail and Distribution
1. check inventory
2. apply payment
3. calculate discount
4. consolidate shipments
5. create backorder
6. delete suspended transaction
7. search for alternative inventory location
8. search for product by keyword
9. update buy list
10. validate credit

A total of 97 core systems were tracked with this type of application. The subject organizations covered a mixture of industries and market sectors. Twenty-two of these organizations ran full parallel efforts on different equipment and DBMS combinations, which allowed closely detailed comparisons. The remainder ran partial comparisons that provided significant input.
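As one example from the retail and distribution list, the "search for product by keyword" activity (case 8) might reduce to a single parameterized query of the shape sketched below. The product table, its columns, and the LIKE-based matching are assumptions for illustration only; production systems in the study may have used very different queries.

```python
def search_product_by_keyword(conn, keyword):
    """Hypothetical shape of retail case 8; the "product" table and its
    columns are placeholders, using any DB-API 2.0 connection."""
    cur = conn.cursor()
    # Case-insensitive substring match; parameter binding avoids
    # injection and lets the DBMS reuse the prepared statement.
    cur.execute(
        "SELECT product_id, description FROM product "
        "WHERE LOWER(description) LIKE ?",
        ("%" + keyword.lower() + "%",))
    return cur.fetchall()

# hits = search_product_by_keyword(conn, "widget")
```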
Tracked System Activity: Order Entry
1. retrieve customer data
2. verify customer credit standing
3. add ship-to address
4. retrieve product data
5. insert one detail order line
6. insert ten detail order lines
7. insert 100 detail order lines
8. update one detail order line
9. update ten detail order lines
10. update 100 detail order lines
11. insert one detail return line
12. insert ten detail return lines
13. insert 100 detail return lines
14. update one detail return line
15. update ten detail return lines
16. update 100 detail return lines
17. verify in-stock products
18. lookup product location
19. purge one row of order header information
20. purge ten rows of order header information
21. purge 100 rows of order header information
22. purge one row of order detail information
23. purge ten rows of order detail information
24. purge 100 rows of order detail information

A total of 243 core systems were tracked with this type of application. The subject organizations covered a mixture of industries and market sectors. Fifty-six of these organizations ran full parallel efforts on different equipment and DBMS combinations, which allowed closely detailed comparisons. The remainder ran partial comparisons that provided significant input.

Tracked System Activity: Web Usage
1. logon
2. logoff
3. follow link
4. display image
5. access menu
6. post message
7. display post
8. update information field
9. cancel session
10. search via engine
11. search direct
12. view online AV content

A total of 2,034 core systems were tracked with this type of application. The subject organizations covered a mixture of industries and market sectors. One hundred thirty-nine of these organizations ran full parallel efforts on different equipment and DBMS combinations, which allowed closely detailed comparisons. The remainder ran partial comparisons that provided significant input.
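For the web-usage activities, the unit of measurement is a user-visible action rather than a database statement. A minimal client-side sketch of how an action such as "follow link" (case 3) could be timed is shown below; the URL is a placeholder, and nothing here describes the study's actual instrumentation.

```python
import time
import urllib.request

def follow_link(url):
    """Fetch a page and return elapsed wall-clock seconds (web case 3)."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()  # pull the full body so transfer time is included
    return time.perf_counter() - start

# elapsed = follow_link("http://example.com/catalog")
```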
Tracked System Activity: Transactions
1. insert one row of data into single small table
2. insert one row of data into single medium table
3. insert one row of data into each of 3-5 small tables
4. insert one row of data into each of 3-5 medium tables
5. insert one row of data into each of 3-5 combination tables
6. insert ten rows of data into single small table
7. insert ten rows of data into single medium table
8. insert ten rows of data into each of 3-5 small tables
9. insert ten rows of data into each of 3-5 medium tables
10. insert ten rows of data into each of 3-5 combination tables
11. insert 100 rows of data into single small table
12. insert 100 rows of data into single medium table
13. insert 100 rows of data into each of 3-5 small tables
14. insert 100 rows of data into each of 3-5 medium tables
15. insert 100 rows of data into each of 3-5 combination tables
16. read data in single small table
17. read data in single medium table
18. read data in 3-5 small tables
19. read data in 3-5 medium tables
20. read data in 3-5 combination tables
21. update one row of data in each of 3-5 small tables
22. update one row of data in each of 3-5 medium tables
23. update one row of data in each of 3-5 combination tables
24. update ten rows of data in single small table
25. update ten rows of data in single medium table
26. update ten rows of data in each of 3-5 small tables
27. update ten rows of data in each of 3-5 medium tables
28. update ten rows of data in each of 3-5 combination tables
29. update 100 rows of data in single small table
30. update 100 rows of data in single medium table
31. update 100 rows of data in each of 3-5 small tables
32. update 100 rows of data in each of 3-5 medium tables
33. update 100 rows of data in each of 3-5 combination tables
34. delete one row of data from each of 3-5 small tables
35. delete one row of data from each of 3-5 medium tables
36. delete one row of data from each of 3-5 combination tables
37. delete ten rows of data from single small table
38. delete ten rows of data from single medium table
39. delete ten rows of data from each of 3-5 small tables
40. delete ten rows of data from each of 3-5 medium tables
41. delete ten rows of data from each of 3-5 combination tables
42. delete 100 rows of data from single small table
43. delete 100 rows of data from single medium table
44. delete 100 rows of data from each of 3-5 small tables
45. delete 100 rows of data from each of 3-5 medium tables
46. delete 100 rows of data from each of 3-5 large tables
47. delete 100 rows of data from each of 3-5 combination tables

A total of 647 core systems were tracked with this type of application. The subject organizations covered a mixture of industries and market sectors. One hundred sixty-two of these organizations ran full parallel efforts on different equipment and DBMS combinations, which allowed closely detailed comparisons. The remainder ran partial comparisons that provided significant input.
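Where organizations ran full parallel efforts, the same activity was exercised on more than one equipment and DBMS combination. A minimal sketch of that comparison pattern follows; the connection objects, activity callables, and repetition count are illustrative assumptions, and nothing here reflects SIL's actual measurement tooling.

```python
import time

def mean_seconds(activity, conn, repetitions=10):
    """Run one tracked activity repeatedly; return mean elapsed seconds."""
    total = 0.0
    for _ in range(repetitions):
        start = time.perf_counter()
        activity(conn)  # e.g., a callable wrapping one numbered test case
        total += time.perf_counter() - start
    return total / repetitions

def compare_platforms(activity, conn_a, conn_b,
                      label_a="platform A", label_b="platform B"):
    """Time the same activity on two connections, side by side."""
    for label, conn in ((label_a, conn_a), (label_b, conn_b)):
        print("%-12s %.4f s mean" % (label, mean_seconds(activity, conn)))
```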