
  1. Building Dynamic Models of Service Compositions With Simulation of Provision Resources
     Dragan Ivanović¹, Martin Treiber², Manuel Carro¹, Schahram Dustdar²
     ¹Universidad Politécnica de Madrid   ²Technische Universität Wien
     S-Cube Industrial Dissemination Workshop, Thales, Paris, 24/2/2012
  2. SOA Context
     Service orchestrations: centralized control flow, services combined to achieve more complex tasks.
       Exposed as services to the outside world (composability).
       Cross organizational boundaries (loose coupling).
       Usually designed around the notion of business processes: events, asynchronous exchanges, stateful conversations.
       Often expressed in a specialized language: BPEL, YAWL, etc.
     Service-Level Agreements (SLAs): constraints that provider and client are expected to comply with.
       Providers often implement some type of elasticity (of provision resources) to ensure the SLA and meet demand under different load/request-rate scenarios.
       From the client perspective: attributing perceived QoS to logic, components, and/or infrastructure resources is difficult.
  3. Motivation
     SLA offering based on the provider's assessment of the expected behavior of the provision system for a given orchestration: for different input scenarios, with appropriate resource scaling policies.
     Goal of this work: predict the dynamic (time-dependent) behavior of service provision with QoS attributes, taking into account the characteristics of the particular composition being served.
       Given an inflow of requests (data) ⇒ infer how it maps into the outflow.
       Taking the workflow structure into account ⇒ information on behavior/bottlenecks of orchestration components.
       Flows are defined on continuous time domains ⇒ using real (dense) functions of time gives dynamic system simulation.
  4. Overview of the Approach
     Inputs: workflow definition (YAWL, BPEL, abstract exec.), monitoring data, provision resource model.
     Pipeline: workflow definition ⇒ PT-net model ⇒ dynamic workflow model ⇒ simulation.
     Outputs: QoS estimates, what-if analysis, policy testing, sensitivity analysis, etc.
  5. Modeling an Orchestration: Petri Nets
     Petri nets as intermediate representation:
       Used with several front-end languages.
       Weak constraints on net shape.
     Places (circles) stand for conditions in workflow execution, including start and finish.
     Tokens (dots) denote running instances; they mark places.
     Transitions (squares) stand for workflow activities and fire when all input places (preconditions) are marked (stochastic choice).
     A step through a Petri net fires all transitions that can fire and moves tokens to new places.
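The step rule above can be sketched in a few lines of Python. The net encoding is hypothetical (each transition maps to its lists of input and output places; a marking maps places to token counts), and the sketch ignores conflicts between simultaneously enabled transitions that share input tokens:

```python
def step(net, marking):
    """Fire every enabled transition once: a transition is enabled when
    all of its input places hold at least one token; firing consumes one
    token per input place and produces one per output place."""
    enabled = [t for t, (ins, outs) in net.items()
               if all(marking.get(p, 0) > 0 for p in ins)]
    new = dict(marking)
    for t in enabled:
        ins, outs = net[t]
        for p in ins:
            new[p] -= 1                      # consume preconditions
        for p in outs:
            new[p] = new.get(p, 0) + 1       # mark postconditions
    return new

# A single activity T moving an instance from start to finish:
net = {"T": (["start"], ["finish"])}
marking = step(net, {"start": 1})            # {'start': 0, 'finish': 1}
```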
  6. Places and Transitions as Flows and Stocks
     PT-nets oversimplify behavior: transition times are not taken into account (only discrete steps). In reality:
       Transitions (activities) take some non-zero time.
       Tokens (orchestration instances) stay in places for a negligible time.
     To recover these dynamics, we use continuous-time models.
     A stock S with initial value S0, inflow fi and outflow fo obeys:
       S(0) = S0,   dS/dt = fi − fo   ⇔   S(t) = S0 + ∫₀ᵗ (fi − fo) dt
     Transitions → stocks [number of instances of the activity].
     Places → flows [number of instances transiting per second].
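The stock equation can be integrated numerically; a minimal forward-Euler sketch (the constant inflow/outflow rates are illustrative, not from the slides):

```python
def simulate_stock(S0, fi, fo, dt=0.01, T=10.0):
    """Forward-Euler integration of dS/dt = fi(t) - fo(t), S(0) = S0."""
    S = S0
    steps = int(round(T / dt))
    for k in range(steps):
        t = k * dt
        S += (fi(t) - fo(t)) * dt    # accumulate net flow over dt
    return S

# Constant inflow of 2 instances/s and outflow of 1 instance/s:
# the stock grows at 1 unit/s, so S(10) = 10.
S = simulate_stock(0.0, lambda t: 2.0, lambda t: 1.0)
```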
  7. Continuous-Time (CT) Model Derivation
     The model derivation process is mechanical and can be fully automated.
     ODE scheme for a transition m (Fig. 7):
       m_p(t): a new stock, for each input place p ∈ •m
       q_t = argmin_{p ∈ •m} { m_p(t) },   m(t) = m_{q_t}(t)
       d/dt m_p(t) = p(t)·w_pm − o_m(t),   for all p ∈ •m
       o_m(t) = m(t)/T_m            if T_m > 0
       o_m(t) = q_t(t)·w_{q_t m}    if T_m = 0
     ODE scheme for a place p (Fig. 8):
       p(t) = Σ_{m ∈ •p} o_m(t)
     Non-input places aggregate outputs from one or more activities.
     The scheme is slightly more complex for transitions with several input places. With a single
     input place, the transition continuously accumulates tokens from the input place and
     discharges them either instantaneously (T_m = 0) or through o_m(t) (T_m > 0). With more than
     one input place, q_t ∈ •m denotes the place with the smallest number of accumulated tokens at
     time t; because a transition must collect a token from all of its input places to fire, the
     smallest stock m_{q_t}(t) dictates m(t).
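For a transition with a single input place and T_m > 0, the scheme above reduces to a first-order lag; a small Euler sketch (the step inflow and the value of T_m are illustrative assumptions):

```python
def transition_response(p, w=1.0, Tm=2.0, dt=0.001, T=20.0):
    """Integrate dm/dt = p(t)*w - o_m(t) with o_m(t) = m(t)/Tm
    (single input place, average execution time Tm > 0)."""
    m, out = 0.0, []
    for k in range(int(round(T / dt))):
        t = k * dt
        om = m / Tm                  # outflow o_m(t) = m(t)/Tm
        m += (p(t) * w - om) * dt    # dm/dt = p(t)*w - o_m(t)
        out.append(om)
    return out

# Unit step inflow: the stock settles at m = Tm (two running
# instances here) and the outflow approaches the inflow rate of 1/s.
out = transition_response(lambda t: 1.0)
```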
  8. Composability
     Modeling asynchronous message exchange between services is easy: we "open" the transition
     boxes that represent component services and connect their CT models (Fig. 10: asynchronous
     messaging scheme, with send sA and receive rA linking the steps As, Ae, Ar).
     Accounting for failure: introduce failure probabilities φm (a transition m succeeds with
     probability 1 − φm) and route the failing φm fraction to a single failure sink Φ (Fig. 11).
     Both schemes can be built into the model automatically when translating from an
     orchestration language with known formal semantics.
  9. Example: Data Cleansing Service
     Workflow (Fig. 1): a Formatter Service prepares the input; after an AND-split, the data
     checking and logging branches run in parallel; the Search Service is invoked, and on
     multiple hits a Check Service filters them; if the precision of the search result is not
     satisfying (e.g., low precision), a re-calibration of the search service is done manually
     and the Search Service is invoked again; hits go through the Add Service to the Storage
     Service; in parallel, a Logging Service logs service invocations (execution times and
     precision).
     Figure 2 shows this workflow as a PT-net: services appear as transitions (A ... H),
     additional transitions are introduced for the AND-split/join (J), and places (pin, p1 ... p9,
     pout) represent the conditions between them.
     Excerpt of the derived CT model:
       d/dt A(t) = pin(t) − p1(t),          p1(t) = A(t)/TA
       p2(t) = p1(t),   p3(t) = p1(t) + oF(t)
       d/dt B(t) = p2(t) − p8(t),           p8(t) = B(t)/TB
       d/dt C(t) = p3(t) − p4(t),           p4(t) = C(t)/TC
       d/dt E(t) = p4(t)·w4mh − p5(t),      p5(t) = E(t)/TE
       d/dt F(t) = p5(t)·w5p − oF(t),       oF(t) = F(t)/TF
       p6(t) = p4(t)·w4nh + p5(t)·w5nh
       d/dt G(t) = p6(t) − oG(t),           oG(t) = G(t)/TG
       p7(t) = p4(t)·w4h + p5(t)·w5h + oG(t)
       AND-join J:  d/dt Jp8(t) = p8(t) − p9(t),   d/dt Jp7(t) = p7(t) − p9(t),
                    p9(t) = p8(t) if Jp8(t) ≤ Jp7(t), else p7(t)
       d/dt H(t) = p9(t) − pout(t),         pout(t) = H(t)/TH
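As a sketch of how such a derived ODE system is simulated, here is an Euler integration of just two chained stocks from this model: d/dt A = pin − p1 with p1 = A/TA, and d/dt C = p3 − p4 with p4 = C/TC, approximating p3 by p1 (i.e., ignoring the re-entrant oF flow). The timings TA and TC are illustrative assumptions, not values from the slides:

```python
def cleansing_fragment(pin=1.0, TA=1.0, TC=2.0, dt=0.001, T=30.0):
    """Euler integration of two chained stocks of the data-cleansing
    model: d/dt A = pin - p1 (p1 = A/TA), d/dt C = p3 - p4
    (p4 = C/TC), with p3 approximated by p1."""
    A = C = 0.0
    for _ in range(int(round(T / dt))):
        p1 = A / TA              # Formatter outflow
        p4 = C / TC              # Search-service outflow
        A += (pin - p1) * dt
        C += (p1 - p4) * dt
    return C / TC

# Under a constant request inflow of 1/s the chain settles so that
# the downstream outflow matches the inflow rate.
rate = cleansing_fragment()
```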
  10. Preliminary Validation of the CT Model
      CT models can be used to produce the distribution of expected running times by feeding
      them with a single unit spike (a Dirac pulse).
      Assessed by comparing discrete simulations with predictions by CT models on a range of
      simple workflow patterns.
      The data cleansing example was simulated and compared with the CT model prediction:
      reasonable correspondence between medians (Table 1: statistical parameters of the
      service substitutes; Fig. 13: distribution of execution times).
      Calibration of the model using experimental data is key.
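The "unit spike" experiment can be sketched as follows: approximate the Dirac pulse by depositing one whole instance in a stock at t = 0 and record the outflow, which then traces the density of completion times (the stage time Tm = 2 s is an illustrative value):

```python
def impulse_response(Tm=2.0, dt=0.001, T=20.0):
    """Outflow of a single stock after a unit spike at t = 0; the
    outflow curve o(t) = m(t)/Tm is the completion-time density."""
    m, density = 1.0, []
    for _ in range(int(round(T / dt))):
        om = m / Tm
        m -= om * dt                 # the instance gradually drains out
        density.append(om)
    return density

# The mean of the recovered density is close to Tm, as expected
# for a single first-order stage.
d = impulse_response()
dt = 0.001
mean_time = sum(o * k * dt for k, o in enumerate(d)) * dt
```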
  11. Simulation Model for Several Orchestrations
      Several orchestrations can interact through the use of common resources.
      Plug the CT model(s) of the orchestration(s) into a simulation framework: incoming
      requests accumulate in a stock R; rejects leave at rate e = R/Te; the admitted inflow
      pin = β · R/Tin feeds the orchestration CT model, whose outflows are successful
      finishes (pout ⇒ S) and failures (pfail ⇒ F).
      The resource model aggregates values from the orchestration CT model to model provision
      infrastructure capacity and load.
      The blocking factor (β) quenches the request inflow when capacities are exceeded.
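The request-admission stage described above can be sketched as follows. β is held constant here for illustration; in the full model it would come from the resource model's capacity check:

```python
def request_stage(i=1.0, Te=10.0, Tin=1.0, beta=1.0, dt=0.01, T=50.0):
    """Requests accumulate in stock R and leave either as rejects
    (rate e = R/Te) or as admitted inflow (pin = beta * R/Tin)."""
    R = E = S_in = 0.0
    for _ in range(int(round(T / dt))):
        e = R / Te               # reject outflow
        pin = beta * R / Tin     # admitted inflow to the CT model
        R += (i - e - pin) * dt
        E += e * dt              # accumulated rejects
        S_in += pin * dt         # accumulated admitted requests
    return E, S_in

# With beta = 1 the reject fraction is (1/Te) / (1/Te + 1/Tin) = 1/11;
# with beta = 0 (capacity exhausted) nothing is admitted at all.
E, S = request_stage()
```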
  12. Simulation Model (Cont.)
      (Same request/CT-model/resource scheme as on the previous slide.)
      Basic accounting for rejects (E), successes (S) and failures (F).
      Provides for separation of concerns: separate development of
        the orchestration model (resource independent), and
        the resource model (orchestration independent).
  13. Interactive Simulation
      The simulation model is fed to an (existing) dynamic simulation tool:
        Simulating concurrently served orchestrations under different input regimes.
        QoS regarding running time and reliability "for free".
        Others (like the cost of component services/resources) derivable.
      Typical simulations: what-if, goal-seek, sensitivity.
      Simulation results for compositions/QoS metrics ⇒ potential inputs for BP/KPI simulation
      in service network tools (e.g., SNAPT of UoC).
      Currently working on a simulation tool better tailored for services.
  14. Relation to Monitoring and Adaptation
      Prediction from the simulation ⇒ input for proactive adaptation.
        Related to other S-Cube QoS prediction techniques.
        Simulation gives the expected time frame for failure occurrences.
      Continuous monitoring: keep the model in step with actual behavior.
        E.g., using the CEP-based ESPER framework for choreographies (USTUTT).
        Revising model parameters with empirical evidence.
      Together: dashboards to provide continuous feedback to human controllers.
        E.g., provisioning multi-service applications under varying request rates in the work
        of Chi-Hung Chi (Tsinghua University).
  15. Conclusions
      Dynamic models of service orchestrations with provision resources: a handy tool to
        test scenarios,
        assess expected QoS,
        design and validate infrastructure management policies.
      Working on integration with tools for automatic derivation of PT-net models from
      executable orchestration specs; also aiming at modeling elasticity in cloud
      infrastructures.
      Challenge: cover other, more advanced and data-dependent service composition
      descriptions, e.g.:
        abstract and executable process specifications (e.g., BPEL),
        colored Petri nets,
        other formalisms.
  16. Building Dynamic Models of Service Compositions With Simulation of Provision Resources
      Dragan Ivanović¹, Martin Treiber², Manuel Carro¹, Schahram Dustdar²
      ¹Universidad Politécnica de Madrid   ²Technische Universität Wien
      S-Cube Industrial Dissemination Workshop, Thales, Paris, 24/2/2012