Case Study for TPC-C Informix
     Innovator-C benchmarks
Eric Vercelletto         Session F12
Begooden IT Consulting   4/24/2012 4:45 PM
Agenda
•   Introduction
•   Reasons for performing this operation
•   Define the battlefield: Stress test or Benchmark?
•   Scenario of the operation
•   What benefits can you take from it




Introduction
• Begooden IT Consulting is an IBM ISV company, mainly
  focused on Informix technology services.
• It was created in 2010, 24 years after I started
  implementing and using Informix solutions.
• Our 15+ years of experience within Informix Software
  France and Portugal helped us acquire in-depth product
  knowledge as well as solid field experience.
• Our services include Informix implementation auditing,
  performance tuning, issue management …
• We also happen to be the Querix reseller for France and
  French-speaking countries (except Canada and Louisiana).
• The company is based in Combrit, Finistère, France.

Agenda
•   Introduction
•   Reasons for performing this operation
•   Define the battlefield: Stress test or Benchmark?
•   Scenario of this operation
•   What benefits can you take from it




Reasons for this test: the starting point
• The starting point was an article about Innovator-C
  discussing how and why this edition was the best choice
  among the free-of-charge RDBMSs.
• IBM qualified Innovator-C as fit for “departmental
  applications”.
• What is the real meaning of “departmental application”?
• What is the real processing capacity of Innovator-C?




Why choose Innovator-C?
• Innovator-C is a free, real production license: it has
  zero impact on the budget.
• Launched in 2010 against all odds, it has not really
  gained much credibility in the user community.
• A close look at the specs shows a very complete product,
  although popular opinion rarely qualifies it as an
  enterprise-class product.
• Stupid to promote a free product? Maybe, but we
  should look beyond that and understand its real
  potential.

Innovator-C limitations
• Innovator-C can use a maximum of 4 CPU VPs
• Innovator-C can use a maximum of 2 GB of shared memory
  (SHMEM) on one server
• PDQPRIORITY is not usable
• Enterprise Replication is restricted to 2 root nodes
• HDR is limited to 1 secondary instance
• Shared Disk Secondary is not available
• Remote Standalone Secondary is not available


Innovator-C strengths
•   No disk/chunk size and number limitations
•   No limitations on hardware (CPU/memory)
•   You have Enterprise Replication
•   You have HDR
•   You have a number of very useful datablades
•   You have an internal scheduler
•   You have the storage provisioning feature



Innovator-C: the smart way
• You get a rock-solid, long-lasting DBMS.

• Once configured, the required DBA time is generally far
  below that of other DBMS vendors.
• No need to change DBMS brand/vendor if your
  system reaches the upper limits: just upgrade the edition.
• Consolidate your project with firm steps, and grow
  with no pain and no bad surprises.


300 users on Innovator-C? You bet!
• Informix is fast, but we have forgotten how much.
• Informix scales well, but we are not sure how well.
• Is Innovator-C a sub-product of Growth or Ultimate?
• What started as questions quickly turned into predictions
  and bets.
• The only way to be sure of the answers: run a
  stress test and a benchmark.



The right way to achieve answers
• The most representative OLTP application: the TPC-C
  benchmark.
• “Unfortunately”, we have had no visibility into how
  Informix performs on TPC-C since 1997.
• Running an official TPC benchmark is extremely expensive:
  let’s take a free one!
• We have no budget for enormous servers with 1000+
  expensive disk bays: let’s use a US$1,200 configuration.
• Greedy and proud of it, but we want performance
  for our buck!

By the way, what are the questions?
We have two questions:
• How many TPC-C users can work in good conditions on
  our cheap server?
• What is the cost per TPC-C transaction delivered by
  Innovator-C on this server?
• Question #1 will be handled through a stress test using the
  “unofficial” but close-to-genuine TPC-C application.
• Question #2 will be handled by the same TPC-C
  benchmark, kindly made available to us by a Spanish
  university.

Agenda
•   Introduction
•   Reasons for performing this operation
•   Define the battlefield: Stress test or Benchmark?
•   Scenario of this operation
•   What benefits can you take from it




Set the battlefield: Benchmark Vs Stress Test
               the Benchmark
 • A benchmark is used to determine the performance level of
   an infrastructure under defined conditions.
 • The TPC Council is the official body/referee in charge of
   this task for hardware and DBMS vendors.
 • The standard applications have strict requirements on data
   schema, data capacity, data integrity and query scenarios.
 • Results are expressed in transactions per minute (tpmC, for
   instance), infrastructure cost per tpm and, more recently,
   energy consumed per tpm (cf. the Smarter Planet initiative).
 • It will answer our question: “What is the cost of one
   transaction on Innovator-C?”


Purpose and specs of the TPC-C Benchmark
 • TPC-C simulates an above-average OLTP application.
 • The database schema features 9 tables, with integrity constraints
   and indexes.
 • Table data cardinality is clearly defined.
 • Data integrity must be respected; a transaction recovery can be
   requested at any time, for instance after a system crash.
 • It executes 5 different transactions mixing simple selects, inserts,
   updates and deletes with somewhat complex, non-indexed queries.
 • 4 of the 5 transactions serve to generate workload on the DBMS,
   while only one transaction is measured: “New Order”.
 • For a test to be valid, 90% or more of the transactions must
   complete within 5 seconds for the simple transactions and within
   20 seconds for the complex ones (a small check of this rule is
   sketched below).


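Note: the validity rule above boils down to a percentile check on the measured
response times. Below is a minimal sketch, in plain C, of how such a check
could be coded; the sample response times, the 5-second threshold and the
crude percentile formula are illustrative only, not the TPC-C UVA code.

/* Hedged sketch, not part of TPC-C UVA: given a set of response times
 * (in seconds) for one transaction type, check the TPC-C style validity
 * rule that at least 90% of them finish within the threshold, and report
 * the 90th-percentile response time. Sample data is made up. */
#include <stdio.h>
#include <stdlib.h>

static int cmp_double(const void *a, const void *b)
{
    double da = *(const double *)a, db = *(const double *)b;
    return (da > db) - (da < db);
}

int main(void)
{
    /* Illustrative response times only (seconds) */
    double rt[] = { 0.8, 1.2, 2.9, 3.3, 1.7, 4.1, 0.5, 2.2, 6.4, 1.1 };
    size_t n = sizeof rt / sizeof rt[0];
    double threshold = 5.0;          /* 5 s for "simple" transactions */

    qsort(rt, n, sizeof rt[0], cmp_double);

    size_t within = 0;
    for (size_t i = 0; i < n; i++)
        if (rt[i] <= threshold)
            within++;

    double pct_ok = 100.0 * within / n;
    double p90 = rt[(size_t)(0.9 * (n - 1))];   /* crude 90th percentile */

    printf("well done: %.1f%%  90th percentile: %.3f s  -> %s\n",
           pct_ok, p90, pct_ok >= 90.0 ? "PASS" : "FAIL");
    return 0;
}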
Set the battlefield: Benchmark Vs Stress Test
                the Stress Test
 • Also known as a load test, the stress test is part of the
   proper implementation of a testing process.
 • It is used to understand at which point the infrastructure
   stops meeting the performance requirements.
 • The target application is run with as many users as
   necessary to put the infrastructure under heavy load.
 • When response times fall below acceptable levels, we have
   reached the inflection point.
 • It will answer our question: “How many users can
   Innovator-C sustain on a low-end quad-core server?”

Agenda
•   Introduction
•   Reasons for performing this operation
•   Define the battlefield: Stress test or Benchmark?
•   Scenario of this operation
•   What benefits can you take from it




Description of the Infrastructure
Database software: 11g Enterprise Edition
Total cost: n.nnn.nnn US$, + DBMS licenses

Database cluster: 6 servers with the following specs:
• CPU: 48 x Intel Core i7 990X @ 3.46 GHz (6 cores)
• RAM: 512 GB DDR3
• Disks: 1200 x SCSI 500 GB @ 20,000 rpm
• OS: mainframe++

Clients: 4 servers with the following specs:
• CPU: 2 x Intel Core i7 990X @ 3.46 GHz
• RAM: 256 GB
• Disks: 4 x SATA 3 500 GB @ 15,000 rpm
Client software: tpc-c
Transaction monitor: Tuxedo
Oops… the real description

Database server: 1 server with the following specs:
• CPU: 1 x Intel Quad Q9400 (4 x 2.66 GHz)
• RAM: 16 GB
• Disks: 4 x SATA 2 500 GB @ 7,200 rpm
• OS: Linux Fedora 14
Total cost: US$1,200, including Informix licenses

Clients: sorry, the budget is closed, use the same box!
Client software: tpc-c, developed in Informix ESQL/C
Transaction monitor: embedded in the tpc-c application

Database software: IBM Informix Innovator-C Edition 11.70 FC4

Your database server?
Description of TPC-C UVA for Informix
• An open-source TPC-C implementation developed in embedded C for
  another open-source RDBMS by students of the University of Valladolid,
  Spain, under the supervision of Prof. Diego Llanos.
• A self-running bundle including database build, data load, benchmark
  run and results analysis.
• Satisfies all the TPC-C rules and requirements, but does not run as
  fast as the official TPC-C implementations.
• Features a custom transaction manager that handles the queries
  requested by the client processes. The dialogue between the TM and the
  clients uses shared-memory messages and semaphores (see the sketch below).
• Adapted to Informix ESQL/C, with some Informix-specific enhancements
  added; it still needs work to deliver full performance.
• Returns a tpmC-UVA result, which is not acceptable as an official tpmC
  granted by the TPC Council (but we saved a lot of money…).


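For readers who want to picture the client / transaction-manager dialogue
mentioned above: a minimal sketch of that style of IPC is shown below
(System V shared memory plus semaphores). It is an illustration only, with
made-up message strings and almost no error handling; it is not the TPC-C
UVA source.

/* Hedged sketch -- NOT the TPC-C UVA code, just a minimal illustration of
 * the kind of client/transaction-manager dialogue it uses: a request is
 * placed in System V shared memory and the peer is woken up through a
 * semaphore. Error handling is kept to a bare minimum. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/sem.h>
#include <sys/wait.h>

#define SEM_REQ 0   /* client -> TM : "a request is ready" */
#define SEM_REP 1   /* TM -> client : "the reply is ready" */

union semun { int val; struct semid_ds *buf; unsigned short *array; };

struct msg { char text[128]; };

static void sem_post1(int semid, int which)
{
    struct sembuf op = { which, +1, 0 };
    semop(semid, &op, 1);
}

static void sem_wait1(int semid, int which)
{
    struct sembuf op = { which, -1, 0 };
    semop(semid, &op, 1);
}

int main(void)
{
    int shmid = shmget(IPC_PRIVATE, sizeof(struct msg), IPC_CREAT | 0600);
    int semid = semget(IPC_PRIVATE, 2, IPC_CREAT | 0600);
    union semun zero = { 0 };
    struct msg *m = shmat(shmid, NULL, 0);

    semctl(semid, SEM_REQ, SETVAL, zero);    /* start both semaphores at 0 */
    semctl(semid, SEM_REP, SETVAL, zero);

    if (fork() == 0) {                       /* --- client process --- */
        strcpy(m->text, "NEW-ORDER w_id=1 d_id=3");
        sem_post1(semid, SEM_REQ);           /* hand the request to the TM */
        sem_wait1(semid, SEM_REP);           /* wait for the TM's answer   */
        printf("client got reply: %s\n", m->text);
        _exit(0);
    }

    /* --- transaction manager process --- */
    sem_wait1(semid, SEM_REQ);               /* wait for a request         */
    printf("TM received: %s\n", m->text);    /* the real TM runs SQL here  */
    strcpy(m->text, "COMMITTED, rt=0.012s");
    sem_post1(semid, SEM_REP);               /* wake the waiting client    */

    wait(NULL);
    shmdt(m);
    shmctl(shmid, IPC_RMID, NULL);
    semctl(semid, 0, IPC_RMID);
    return 0;
}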
The Stress Test (getting set)
• Objective: determine how many user terminals can run
  tpc-c with Innovator-C installed on a cheap server.
• The tpc-c test must PASS!
• For accuracy, think times and keying times will be set to
  regular user speed.
• No cheating: checkpoints will be executed as they should
  be in a production environment (15-minute interval).
• No cheating: we will not use prepared statements.
• No cheating: we will not use RAM disks.
• Shared memory is limited to 2 GB.
• The number of CPU VPs is limited to 4 (an illustrative
  onconfig excerpt is shown below).


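For reference, the caps and the checkpoint interval above map to a few
standard onconfig parameters. The excerpt below is illustrative only: the
parameter names are the real Informix ones, but apart from the 4 VPs, the
2 GB cap and the 15-minute interval stated above, the values are assumed,
not taken from the actual test configuration.

# Limit the engine to 4 CPU virtual processors
VPCLASS cpu,num=4,noage
# Cap total shared memory at 2 GB (value in KB)
SHMTOTAL 2097152
# Initial virtual shared-memory segment, in KB (illustrative value)
SHMVIRTSIZE 500000
# Interval-driven checkpoints every 15 minutes (value in seconds);
# CKPTINTVL is only honored when RTO_SERVER_RESTART is disabled
RTO_SERVER_RESTART 0
CKPTINTVL 900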
The Stress Test: 1st run
• Crazy or brave? First run at 50 warehouses / 10 terminals per
  warehouse (i.e. 500 user terminals).
• Think time: from 5 to 12 seconds, depending on the transaction type.
• Keying time: from 2 to 18 seconds, depending on the transaction type
  (an emulation sketch is shown below).
• Warm-up time: 30 min
• Measurement time: 60 min
• tpmC obtained: 590.208 tpmC-uva
• Result: PASSED
• Basic statistics, user time: avg 38%, min 31%, max 49%
• Basic statistics, I/O wait: avg 11%, min 6%, max 25%

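As an aside, the think and keying times above are what the terminal
emulator spends sleeping between screens. A minimal sketch of how a client
loop could apply them is shown below; the uniform random draw and the
function names are assumptions, not the TPC-C UVA implementation.

/* Hedged sketch: how a terminal emulator *might* apply per-transaction
 * keying and think times like those listed above. The ranges come from
 * the slide; everything else is illustrative. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

/* Draw a whole number of seconds in [lo, hi] */
static unsigned pick_seconds(unsigned lo, unsigned hi)
{
    return lo + (unsigned)(rand() % (hi - lo + 1));
}

static void run_one_cycle(const char *txn, unsigned key_lo, unsigned key_hi,
                          unsigned think_lo, unsigned think_hi)
{
    unsigned keying = pick_seconds(key_lo, key_hi);
    unsigned think  = pick_seconds(think_lo, think_hi);

    sleep(keying);                  /* operator "types" the screen          */
    printf("submitting %s transaction...\n", txn);  /* real code calls SQL  */
    sleep(think);                   /* operator "reads" the result          */
}

int main(void)
{
    srand((unsigned)time(NULL));
    /* Keying 2-18 s and think 5-12 s, per the ranges on this slide */
    run_one_cycle("NEW-ORDER", 2, 18, 5, 12);
    return 0;
}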
The Stress Test: 2nd run

• Good guess on the first run; increase to 55 warehouses /
  10 terminals per warehouse (i.e. 550 terminals).
• Warm-up time: 45 min
• Measurement time: 240 min
• tpmC obtained: 610.728 tpmC-uva
• Result: PASSED
• Overall assessment: I/O wait starts to negatively impact
  system performance, with long peaks at 40%.


The Stress Test: 3rd run

• Increase to 60 warehouses / 10 terminals per warehouse
  (i.e. 600 terminals).
• Warm-up time: 45 min
• Measurement time: 240 min
• tpmC obtained: 497.567 tpmC-uva
• Result: FAILED
• Overall assessment: I/O wait is too heavy and almost stops
  transactions during checkpoints.


The Stress Test: run #2 detailed results
•   Global result: COMPUTED THROUGHPUT: 610.728 tpmC-uva using 55 warehouses.
    252680 Transactions committed.

•   NEW-ORDER TRANSACTIONS:
    109931 Transactions within measurement time (130117 Total).
    Percentage of "well done" transactions: 94.221%
    Response time (min/med/max/90th): 0.008 / 3.327 / 107.466 / 2.920
•   PAYMENT TRANSACTIONS:
    109824 Transactions within measurement time (130300 Total).
    Percentage of "well done" transactions: 95.213%
    Response time (min/med/max/90th): 0.001 / 2.664 / 107.702 / 2.800
•   ORDER-STATUS TRANSACTIONS:
    10963 Transactions within measurement time (13012 Total).
    Percentage of "well done" transactions: 95.457%
    Response time (min/med/max/90th): 0.007 / 2.545 / 105.946 / 2.840
•   DELIVERY TRANSACTIONS:
    10982 Transactions within measurement time (13042 Total).
    Percentage of "well done" transactions: 96.767%
    Response time (min/med/max/90th): 0.000 / 1.114 / 99.241 / 0.080
    Percentage of execution time < 80s : 99.727%
    Execution time min/avg/max: 0.023/2.518/101.781
•   STOCK-LEVEL TRANSACTIONS:
    10980 Transactions within measurement time (13025 Total).
    Percentage of "well done" transactions: 97.304%
    Response time (min/med/max/90th): 0.003 / 2.630 / 98.372 / 2.720




The Stress Test: run #2 in 2 charts




[Charts: “TPC-C Informix Innovator-C: System Stats Vs Informix Stats”.
Percentage over the measurement timeline for CPU user%, CPU system%,
CPU waitIO%, IFMX read_cache%, IFMX write_cache% and IFMX Xactions nbr/10.]
The benchmark (getting set)
• Objective: obtain the highest possible tpmC-uva figure.
• The tpc-c test must PASS!
• For maximum speed, think times and keying times will be
  set to 0.
• No cheating: checkpoints will still be executed (5-minute interval).
• No cheating: we will not use prepared statements.
• No cheating: we will not use RAM disks.
• Shared memory is still limited to 2 GB.
• The number of CPU VPs is limited to 4.

The benchmark: step 1

• Objective: obtain the highest possible tpmC-uva
  figure with 1 warehouse / 1 CPU VP (maximum application
  speed).
• No checkpoint will occur during the measurement time; all
  reads and writes stay in cache.
• Warm-up time: 10 min
• Measurement time: 20 min
• Result obtained: 2433 tpmC-uva


The benchmark: step 2

• Objective: obtain the highest possible tpmC-uva
  figure with 2 warehouses / 1 CPU VP (maximum application
  speed).
• No checkpoint will occur during the measurement time; all
  reads and writes stay in cache.
• Warm-up time: 10 min
• Measurement time: 20 min
• Result obtained: 2500 tpmC-uva
• The system remains “idleissimo”.

The benchmark: step 3

• Objective: show how brilliantly Innovator-C scales by
  running 15 warehouses / 150 user terminals.
• Checkpoints will occur every 5 min.
• Use 3 CPU VPs, keeping one core for the application.
• Warm-up time: 20 min
• Measurement time: 40 min
• Result obtained: 2315 tpmC-uva with 150 user terminals.
• That works out to roughly 0.52 US$ per tpmC (1,200 US$ / 2,315 tpmC)!


The Benchmark step 3: detailed results
•   COMPUTED THROUGHPUT: 2314.950 tpmC-uva using 15 warehouses.
    212940 Transactions committed. (A quick check of this figure follows the list.)
•   NEW-ORDER TRANSACTIONS:
    92598 Transactions within measurement time (123190 Total).
    Percentage of "well done" transactions: 91.416%
    Response time (min/med/max/90th): 0.033 / 1.750 / 22.180 / 3.800
•   PAYMENT TRANSACTIONS:
    92581 Transactions within measurement time (123227 Total).
    Percentage of "well done" transactions: 92.579%
    Response time (min/med/max/90th): 0.023 / 1.676 / 22.951 / 3.680
•   ORDER-STATUS TRANSACTIONS:
    9261 Transactions within measurement time (12326 Total).
    Percentage of "well done" transactions: 92.117%
    Response time (min/med/max/90th): 0.140 / 1.619 / 18.158 / 3.520
•   DELIVERY TRANSACTIONS:
    9249 Transactions within measurement time (12328 Total).
    Percentage of "well done" transactions: 94.972%
    Response time (min/med/max/90th): 0.000 / 0.852 / 15.346 / 2.640
    Percentage of execution time < 80s : 100.000%
    Execution time min/avg/max: 0.500/1.472/43.267
•   STOCK-LEVEL TRANSACTIONS:
    9251 Transactions within measurement time (12320 Total).
    Percentage of "well done" transactions: 99.503%
    Response time (min/med/max/90th): 0.230 / 1.608 / 13.933 / 3.520




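A quick sanity check on the step 3 figure: tpmC only counts the New-Order
transactions completed during the measurement interval, so 92,598 New-Order
transactions over the 40-minute measurement window give
92,598 / 40 = 2,314.95 tpmC-uva, which matches the computed throughput
reported above.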
The benchmark, step # 3 in 2 charts
[Chart: “System Stats Vs Informix Stats”. Percentage over the measurement
timeline for CPU user%, CPU system%, CPU waitio%, IFMX read_cache,
IFMX write_cache and IFMX Xactions/100.]
Agenda
•   Introduction
•   Reasons for performing this operation
•   Define the battlefield: Stress test or Benchmark?
•   Scenario of this operation
•   What benefits can you take from it




Lessons learned
• Even with the application running on the same server, the
  system shows an incredible rate of idleness.
• The performance obtained with 1 warehouse / 1 CPU VP is
  similar to the figure obtained with 15 warehouses / 3
  CPU VPs. Nice scalability curve.
• 15 warehouses can still give a 100% read-cache rate for tpc-c
  on Innovator-C. Above 15 warehouses the read-cache rate drops,
  and the overall performance is no longer optimal.
• You now understand what “departmental application”
  means and know what Innovator-C is capable of.
• You don’t have to pay millions to get performance!
Benefits of a self-bundled benchmark
• A great tool to calibrate your Informix instance together
  with your hardware.
• A great tool to help you tune and optimize your OLTP
  installation without involving hundreds of users.
• Gain confidence and accurate knowledge of what the
  IBM Informix editions are capable of.
• Once some still-necessary enhancements are completed,
  we can help you install it, use it and analyze the results on
  your own implementation for tuning purposes.
Credits / Special thanks
• Ronan Chatain, the guy surfing the big wave, who
  kindly let me use this picture.
• Erwan Crouan, the photographer who shot the guy on the
  big wave 7 miles from home, and who owns the picture.
• Professor Diego Llanos, of the Universidad de Valladolid,
  Spain, who let me use and modify his tpc-c
  benchmark source code.
• Jean-Georges and Stuart, who believed in this project
  and helped a lot.
• Vladimir, from IBM, who was of precious help.
Questions?!?




Case Study for TPC-C Informix
  Innovator-C benchmarks
          Eric Vercelletto
   eric.vercelletto@begooden-it.com
