
Georgia Tech: Performance Engineering - Queuing Theory and Predictive Modeling


This is one lecture in a semester-long course, 'CS4803EPR', that I put together and taught at Georgia Tech, entitled "Enterprise Computing Performance Engineering".
----
• Performance Engineering Overview - Part 2
• Queuing Theory Overview
• Early life-cycle performance modeling
  • Simple Distributed System Model
  • Sequence Diagrams


Georgia Tech: Performance Engineering - Queuing Theory and Predictive Modeling

  1. Performance Engineering Overview 2 - Enterprise Computing Performance - Brian Wilson - CS 4803 EPR
  2. Lecture Overview: Performance Engineering Overview - Part 2; Queuing Theory Overview; Early life-cycle performance modeling (Simple Distributed System Model; Sequence Diagrams).
  3. Queuing Theory Simplified: A brief introduction to queuing theory, as it applies to computing performance.
  4. What is Queuing Theory? A collection of mathematical models of various queuing systems that take inputs based on probability or assumption, and that provide quantitative parameters describing the system's performance.
  5. Introduction: A series of mathematical formulae; calculates event probabilities; predicts capacity.
  6. What's Queuing Theory? The theoretical study of waiting lines, expressed in mathematical terms. [Diagram: input → queue → server → output; residence time = wait time + service time.]
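A minimal sketch of that decomposition for a single FIFO queue; the arrival times and service demands below are made-up illustration values, not data from the lecture:

```python
# Hypothetical jobs arriving at a single-server FIFO queue.
arrivals = [0.0, 1.0, 1.5, 4.0]   # job arrival times
service  = [2.0, 1.0, 2.0, 1.0]   # service demand of each job

server_free_at = 0.0
for arrive, s in zip(arrivals, service):
    start = max(arrive, server_free_at)   # a job waits until the server is free
    wait = start - arrive                 # time spent waiting in the queue
    residence = wait + s                  # residence time = wait time + service time
    server_free_at = start + s
    print(f"arrive={arrive:4.1f}  wait={wait:4.1f}  service={s:4.1f}  residence={residence:4.1f}")
```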
  7. Types of Queues: Markovian (exponential distribution); Deterministic (constant arrival rates); General (arbitrary or random distribution of arrival rates).
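A brief sketch, using only Python's standard library, of how inter-arrival times for those three queue types might be sampled; the arrival rate of 2 jobs per time unit is an arbitrary assumption:

```python
import random

A = 2.0  # assumed arrival rate: 2 jobs per time unit

# Markovian: exponentially distributed inter-arrival times, mean 1/A
markovian = [random.expovariate(A) for _ in range(5)]

# Deterministic: constant inter-arrival times, exactly 1/A apart
deterministic = [1.0 / A for _ in range(5)]

# General: any arbitrary distribution, e.g. uniform around the same mean
general = [random.uniform(0.0, 2.0 / A) for _ in range(5)]

print(markovian, deterministic, general, sep="\n")
```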
  8. Queuing Disciplines: The representation of the way the queue is organized (rules for inserting and removing customers to/from the queue): 1) FIFO (First In, First Out), also called FCFS (First Come, First Served) - an orderly queue. 2) LIFO (Last In, First Out), also called LCFS (Last Come, First Served) - a stack. 3) SIRO (Serve In Random Order) - common in distributed/web systems. 4) Priority Queue, which may be viewed as a number of queues for the various priorities. 5) Many other, more complex queuing methods that typically change a customer's position in the queue according to the time already spent in the queue, the expected service duration, and/or priority; typical for computer multi-access systems.
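A short sketch of the first four disciplines using Python's standard containers; the job names and priority values are made up for illustration:

```python
from collections import deque
import heapq, random

jobs = ["job1", "job2", "job3", "job4"]

# 1) FIFO / FCFS: remove jobs from the front of the queue
fifo = deque(jobs)
print("FIFO:", [fifo.popleft() for _ in range(len(jobs))])

# 2) LIFO / LCFS: a stack, remove the most recently added job
lifo = list(jobs)
print("LIFO:", [lifo.pop() for _ in range(len(jobs))])

# 3) SIRO: serve in random order
siro = list(jobs)
random.shuffle(siro)
print("SIRO:", siro)

# 4) Priority queue: the lowest priority number is served first
prio = [(3, "job1"), (1, "job2"), (2, "job3"), (1, "job4")]
heapq.heapify(prio)
print("Priority:", [heapq.heappop(prio)[1] for _ in range(len(prio))])
```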
  9. What's a Bottleneck? If the production rate, on average over time, exceeds the consumption rate, you have a performance bottleneck. What's Job Flow Balance? Throughput equals the arrival rate: on average, jobs complete as fast as they arrive.
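A quick numeric sketch of why a sustained production rate above the consumption rate is a bottleneck: the backlog grows without bound. The rates below are assumptions chosen only to make the arithmetic visible.

```python
A  = 12.0   # assumed production (arrival) rate: 12 jobs per second
Ts = 0.1    # assumed service time per job: 0.1 s, so consumption rate = 10 jobs/s

backlog = 0.0
for second in range(1, 6):
    backlog += A - (1.0 / Ts)   # each second, 12 jobs arrive but only 10 complete
    print(f"after {second}s backlog = {backlog:.0f} jobs")  # grows by 2 every second
```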
  10. QT Assumptions: For any queuing theory to work on paper, averages must be assumed for all quantities; real-time (instantaneous) data points cannot be calculated without mechanical means.
  11. Formulae Notation: A = arrival (production) rate (usually noted λ); Ts = service (consumption) time, the average time it takes to service one message in the queue; Tq = average time a message spends in the queue (i.e., a drop of water in the funnel); T = Ts + Tq (average response time); 1 = 100% service capacity, or 1 time unit; 1/Ts = service rate; 1/A = average job inter-arrival time (the average amount of time between job arrivals).
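A small worked example of that notation; the values of A and Ts are assumptions, not figures from the course:

```python
A  = 8.0    # arrival (production) rate, jobs per second (assumed)
Ts = 0.1    # average service (consumption) time per job, seconds (assumed)

service_rate       = 1.0 / Ts   # jobs the server can complete per second
inter_arrival_time = 1.0 / A    # average time between job arrivals
utilization        = A * Ts     # fraction of the 100% service capacity in use

print(f"service rate      = {service_rate:.1f} jobs/s")
print(f"avg inter-arrival = {inter_arrival_time:.3f} s")
print(f"utilization       = {utilization:.0%}")   # 80% here; must stay below 100%
```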
  12. Kendall Notation: Queuing systems are described with three parameters. Parameter 1: M = Markovian inter-arrival rates, D = deterministic inter-arrival rates. Parameter 2: M = Markovian service rates, G = general service rates, D = deterministic service times. Parameter 3: the number of servers. Examples: M/M/1, D/D/2, M/G/3.
  13. Example: The M/M/1 System. [Diagram: jobs enter a queue and are served by a single exponential server before departing.]
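A sketch of the textbook steady-state M/M/1 results, using the same assumed rates as the notation example above (A = 8 jobs/s, Ts = 0.1 s, so the exponential service rate is 10 jobs/s):

```python
A  = 8.0    # arrival rate (lambda), jobs/s -- assumed
mu = 10.0   # exponential service rate, jobs/s -- assumed (1/Ts)

rho = A / mu                    # utilization; the system is stable only if rho < 1
T   = 1.0 / (mu - A)            # average response (residence) time
Tq  = rho / (mu - A)            # average time waiting in the queue
N   = rho / (1.0 - rho)         # average number of jobs in the system
Q   = rho ** 2 / (1.0 - rho)    # average number of jobs waiting in the queue

print(f"rho={rho:.2f}  T={T:.3f}s  Tq={Tq:.3f}s  N={N:.2f}  Q={Q:.2f}")
```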
  14. Little's Law 1: Sometimes called "Little's Theorem." "The length of a queue is the product of the message arrival rate multiplied by the time messages stay in the queue." Notated: Q = ATq.
  15. Little's Law 2: Alternate definition: "The average number of jobs in the system (N) is equal to the product of the average arrival rate and the average response time." Notated: N = AT.
  16. Little's "LAW": A great way to remember it: if we notate the Length of the queue as L, the Arrival rate as A, and the Waiting (residence) time as W, then we can say: L = AW.
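Continuing the assumed M/M/1 numbers from the earlier example, a quick check that both forms of Little's Law hold:

```python
A  = 8.0    # arrival rate, jobs/s (assumed, as before)
Tq = 0.4    # average time waiting in the queue, s (from the M/M/1 example)
T  = 0.5    # average response time, s (from the M/M/1 example)

Q = A * Tq   # queue-length form:  Q = A * Tq  -> 3.2 jobs waiting
N = A * T    # system form (L = AW): N = A * T -> 4.0 jobs in the system

print(f"Q = {Q:.1f} jobs waiting in the queue")
print(f"N = {N:.1f} jobs in the system")
```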
  17. Web Server Queuing Model. [Diagram.]
  18. Review Questions: 1) A system is said to be stable when? 2) A > (1/Ts) means? Explain. 3) If the average inter-arrival times (1/A) are unpredictable (no correlation to a known or trend number), the arrival rate exhibits what type of distribution? 4) What's another name for a memoryless state? 5) Such queuing networks are called ____ if new jobs arrive from outside the network and may eventually depart from the network. 6) __________ Theory describes discrete, yet rare events where arrival rates are randomly distributed, yet can be averaged over a given period of time.
  19. Resources: A really good website for queuing tools and techniques: http://www.me.utexas.edu/~jensen/ORMM/index.html . Queuing Theory Terminology: http://www.me.utexas.edu/~jensen/ORMM/models/unit/queue/subunits/terminology/index.html
  20. Early Life-cycle Performance Modeling: A brief overview.
  21. Sequence Diagram Example. [Diagram.]
  22. Expanded Sequence. [Diagram.]
  23. Distributed System Model. [Diagram.]
  24. Resource Requirements (see page 38): Add requirements, in terms of time, for resources such as CPU, Disk, and NetDelay for each step of each scenario.
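A minimal sketch of that kind of early life-cycle estimate: per-step resource demands for one scenario, summed into a predicted no-contention response time. All step names and numbers below are hypothetical illustration values.

```python
# Hypothetical per-step resource demands (seconds) for one scenario.
scenario = [
    {"step": "validate request", "CPU": 0.002, "Disk": 0.000, "NetDelay": 0.010},
    {"step": "query database",   "CPU": 0.005, "Disk": 0.020, "NetDelay": 0.015},
    {"step": "render response",  "CPU": 0.008, "Disk": 0.000, "NetDelay": 0.010},
]

totals = {}
for step in scenario:
    for resource, demand in step.items():
        if resource != "step":
            totals[resource] = totals.get(resource, 0.0) + demand

print("Per-resource demand:", totals)
print("Predicted response time (no contention):", round(sum(totals.values()), 3), "s")
```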
  25. Performance Prediction Tools: Many new UML tools for modeling; very time intensive; many assumptions; must be done before design finalization; saves time and money in the long run.
