
PEnDAR: software v&v for complex systems


The PEnDAR project investigated the application of stochastic engineering techniques to verification and validation of complex and cyber-physical systems


  1. © Predictable Network Solutions Ltd 2017, www.pnsol.com
     PEnDAR: Cost/performance-driven V&V for distributed/cyber-physical systems
     Presentation for the Software Validation and Verification for Complex Systems Workshop, May 2017
  2. Goals
     • Consider how to enable validation & verification of cost and performance
       • for distributed and hierarchical systems
       • using developments of well-tried tools
       • supporting both initial and ongoing incremental development
     • Provide early visibility of cost/performance hazards
       • avoiding costly failures
       • maximising the chances of successful in-budget delivery of acceptable end-user outcomes
     [Logos: PEnDAR project partners and supporters]
  3. Focus on performance: functional correctness is not enough
  4. Nature of performance
     • In an ‘ideal world’, systems would always respond instantaneously
       • and without exceptions/failures/errors
     • In practice this doesn’t happen
       • there is always some delay and some chance of failure: some impairment
     • Thus performance is a privation
       • the absence of impairment, like ‘darkness’ or ‘silence’
     • Quantity also matters
       • require a certain rate or volume of responses with a given bound on impairment
  5.–10. Sources of impairment (diagram, built up incrementally over slides 5–10)
     • Causality: information takes time, whether communicated or computed
     • Synchronisation
       • explicit: communication, data dependency
       • implicit: resource sharing, either exclusive/discrete (locks etc., long timescales) or statistical/continuous (CPU, interface, …, short timescales)
     • Imperfection: discrete (exceptions, failures) or statistical (resource exhaustion)
     • Successive overlays label the formalisms that capture these sources: process algebra, then stochastic process algebra, and finally the ∆Q framework once imperfection is included
  11. Measure of performance: ∆Q
     • ∆Q is a measure of the ‘quality impairment’ of an outcome
       • the extent of deviation from ‘instantaneous and infallible’
       • nothing in the real world is perfect, so ∆Q always exists
     • ∆Q is conserved
       • a delayed outcome can’t be ‘undelayed’; a failed outcome can’t be ‘unfailed’
     • ∆Q can be traded
       • e.g. accept more delay in return for more certainty of completion
     • ∆Q has an algebra
       • can manipulate it mathematically
  12. Representation of ∆Q
     • ∆Q can be represented with an improper random variable
       • combines continuous and discrete probabilities
       • thus encompasses normal behaviour and exceptions/failures in one model
     • ∆Q is composable
       • supports hierarchical V&V
     [Chart: cumulative probability vs response time. Tangible mass encodes the distribution of response time; intangible mass encodes the probability of exception/failure]
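One way to make the ‘improper random variable’ idea concrete is the Python sketch below. This is illustrative only, not code from the project's toolchain: the class name, the empirical representation, and the Monte-Carlo composition are our own assumptions. The tangible mass is a bag of observed delays, the intangible mass a failure probability, and sequential composition (one outcome, then another) is approximated by sampling.

```python
import random

class DeltaQ:
    """∆Q as an improper random variable, represented empirically
    (illustrative sketch; not the project's actual tooling)."""
    def __init__(self, delays, failure_prob):
        self.delays = sorted(delays)      # tangible mass: observed delays
        self.failure_prob = failure_prob  # intangible mass: P(no outcome)

    def cdf(self, t):
        """P(outcome delivered within time t); tops out below 1.0
        whenever failure_prob > 0 -- the CDF is 'improper'."""
        done = sum(1 for d in self.delays if d <= t)
        return (1.0 - self.failure_prob) * done / len(self.delays)

    def then(self, other, n=10000, rng=random):
        """Sequential composition: total delay is the sum of the two
        stages' delays; either stage failing fails the whole outcome."""
        fail = 1.0 - (1.0 - self.failure_prob) * (1.0 - other.failure_prob)
        samples = [rng.choice(self.delays) + rng.choice(other.delays)
                   for _ in range(n)]
        return DeltaQ(samples, fail)

# Two hypothetical stages, each with a 1% chance of failure
a = DeltaQ([1.0, 2.0, 3.0], 0.01)
b = DeltaQ([0.5, 1.5], 0.01)
ab = a.then(b)   # ∆Q of the composed outcome
```

Composition is what makes this useful for hierarchical V&V: the ∆Q of a pipeline of subsystems can be computed from the ∆Q of each stage.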
  13. Outline of a coherent methodology
  14. Performance/resource analysis
     [Diagram: subsystems connected via ∆Q, drawing on shared resources; Quantitative Timeliness Agreement]
     Starting with a functional decomposition:
     • Take each subsystem in isolation
       • analyse performance, modelling the remainder of the system as ∆Q
       • quantify resource consumption (which may be dependent on the ∆Q)
     • Examine resource sharing
       • within the system: quantify resource costs
       • between systems: quantify opportunity cost
     • Successive refinements
       • consider couplings; iterate to a fixed point
  15. Quantifying intent
     • The key challenge is to establish quantified intentions
       • for outcomes/resources/costs
     • Then a variety of mathematical techniques can be applied
       • queuing theory
       • large-deviation theory
       • ∆Q algebra
     • This is “only rocket science”
       • not brain surgery!
       • set of tools already developed
  16. Quantifying timeliness
     • Outcome requirement: suppose we had a specification for how long it’s acceptable to wait for a system outcome:
       • 50% of responses within 5 seconds
       • 95% of responses within 10 seconds
       • 99.9% of responses within 15 seconds
       • 0.1% failure rate
     • This can be represented by an improper CDF
     [Chart: the requirement as a step curve of cumulative probability vs response time]
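The slide's requirement can be written down directly as an improper step-function CDF. A minimal Python sketch (our own encoding, not the project's notation):

```python
# The outcome requirement from the slide, as (deadline in seconds,
# minimum cumulative probability) pairs. The curve tops out at 0.999
# because 0.1% of outcomes are permitted to fail outright.
REQUIREMENT = [
    (5.0,  0.50),   # 50% of responses within 5 s
    (10.0, 0.95),   # 95% within 10 s
    (15.0, 0.999),  # 99.9% within 15 s
]

def required_cdf(t):
    """Required probability of completion by time t. Improper: it
    never reaches 1.0, reflecting the allowed failure rate."""
    p = 0.0
    for deadline, prob in REQUIREMENT:
        if t >= deadline:
            p = prob
    return p
```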
  17. Meeting a timeliness requirement
     • Suppose the black line shows the delivered CDF
       • from measurement, simulation or analysis
     • This is everywhere above and to the left of the requirement curve
       • this means that the timeliness requirement is satisfied
     • If not, there is a performance hazard
     [Chart: delivered CDF plotted against the requirement curve]
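The ‘everywhere above and to the left’ check can be sketched in a few lines of Python. This is an illustrative construction with hypothetical data, not project tooling; note that for a step-function requirement it suffices to check the delivered CDF at each requirement point.

```python
import random

def empirical_cdf(delays, failure_prob):
    """Improper empirical CDF built from measured delays plus an
    observed (or assumed) failure rate."""
    delays = sorted(delays)
    def cdf(t):
        done = sum(1 for d in delays if d <= t)
        return (1.0 - failure_prob) * done / len(delays)
    return cdf

def meets(cdf, requirement):
    """True iff the delivered CDF lies on or above every requirement
    point; otherwise there is a performance hazard."""
    return all(cdf(deadline) >= prob for deadline, prob in requirement)

requirement = [(5.0, 0.50), (10.0, 0.95), (15.0, 0.999)]

# Hypothetical measurements: 1000 responses, all under 6 s, none failed.
random.seed(0)
measured = [random.uniform(0.0, 6.0) for _ in range(1000)]
ok = meets(empirical_cdf(measured, failure_prob=0.0), requirement)
```

Because this is a pointwise comparison of curves, the same check works whether the delivered CDF comes from measurement, simulation or analysis.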
  18. Validating performance requirements
     1. Decompose the performance requirement following system structure
        • using engineering judgment/best practice/cosmic constraints
        • creates initial subsystem requirements
     2. Validate the decomposition by re-combining via the behaviour
        • formally and automatically checkable; can be part of continuous integration
        • captures interactions and couplings
     • Necessary and sufficient:
       • IF all subsystems function correctly and integrate properly
       • AND all subsystems satisfy their performance requirements
       • THEN the overall system will meet its performance requirement
     • Apply this hierarchically until
       • behaviour is trivially provable, OR
       • there is a complete set of testable subsystem verification/acceptance criteria
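The re-combination step can be sketched for the simplest behaviour, two subsystems in sequence. This is our own illustrative construction with hypothetical budgets, not the project's tooling. The key observation: for a step-function requirement, the worst delay distribution a compliant subsystem can deliver puts all its probability mass exactly at the deadlines, so convolving those worst cases and checking the result against the overall requirement shows whether a decomposition ‘adds up’.

```python
from itertools import product

def worst_case(requirement):
    """Worst distribution compliant with a step requirement: point
    masses at the deadlines, as [(delay, probability)] pairs."""
    masses, prev = [], 0.0
    for deadline, prob in requirement:
        masses.append((deadline, prob - prev))
        prev = prob
    return masses

def convolve(dist_a, dist_b):
    """Delay distribution of the sequential composition: A then B."""
    out = {}
    for (da, pa), (db, pb) in product(dist_a, dist_b):
        out[da + db] = out.get(da + db, 0.0) + pa * pb
    return sorted(out.items())

def satisfies(dist, requirement):
    """Does this distribution meet every point of a step requirement?"""
    return all(sum(p for d, p in dist if d <= deadline) >= prob - 1e-9
               for deadline, prob in requirement)

overall = [(5.0, 0.50), (10.0, 0.95)]   # hypothetical system requirement
budget  = [(2.0, 0.80), (4.0, 0.999)]   # tentative per-subsystem budget
valid = satisfies(convolve(worst_case(budget), worst_case(budget)), overall)
```

A decomposition that looks plausible can fail this check (e.g. giving each subsystem half the overall percentile budget), which is exactly the kind of performance hazard the method is meant to surface before integration.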
  19. Capturing interactions
  20.–26. [Mind-map diagram, built up incrementally over slides 20–26, relating Outcomes Delivered and Resources Consumed to the interaction factors: Variability; Exception/failure (externally created, mitigation, system impact, propagation); Scale (distance, number, time, space, density); Schedulability; Capacity]
  27. Summary
     • System performance validation consists of:
       • analysing the interaction of system behaviour and subsystem performance requirements
       • showing that this ‘adds up’ to meet the quantified requirements (QTA)
     • System performance verification consists of showing that subsystems meet their QTAs
       • by analysis and/or measurement of the ∆Q of the subsystems’ observable behaviour
     • This provides acceptance and/or contractual criteria for third-party subsystems or services
     • Substantially reduces performance integration risks and hence re-work
  28. Project findings
  29. Project methodology
     • Investigate system-of-system use cases
       • extract key aspects by talking to multiple stakeholders inside VF
     • Consider barriers, costs and benefits of applying performance V&V
     • Consider application of performance V&V within established methodologies
       • automotive etc.
       • application with the TVS toolchain
     • Run an industry focus group to explore issues and validate findings
       • questionnaire, webinars, interviews
  30. Key questions
     1. Can the well-tried tools be adapted for use outside a consultancy model?
        • Yes – performance validation and verification is practical in appropriate contexts
     2. Can these tools be applied within existing V&V methodologies for automotive etc.?
        • Yes – current approaches can thereby be extended to include V&V of system performance
     3. Do the tools have application beyond V&V?
        • Yes – they can be used earlier in the SDLC to deliver significant benefits, but there are organisational/process barriers to overcome
  31. Interaction with the System Development Life Cycle
     • Support stages of the SDLC
       • Design: feasibility analysis; hierarchical decomposition; subsystem acceptance criteria
       • Verification: checking delivery of quantified outcomes; evaluating resource usage; re-verification during system lifetime
       • Validation: quantification of performance criteria; checking coverage and consistency
     • Quantify hazards
       • failure to meet outcome requirements: physical constraints; schedulability constraints; supply-chain constraints
       • failure to meet resource constraints: scaling; correlations
  32. Benefits
     • Avoid infeasible developments
       • ‘fail early’; prune blind alleys
     • Address scalability early
       • including real-world constraints; avoid the ‘heavy tail’
     • See the whole risk landscape
       • not just the ‘first problem’
     • Be able to write a safety case
       • even for shared-resource distributed systems
     • Can handle subsystems with undocumented characteristics
       • mitigate this with in-life measurement of ∆Q driven by the validation
       • forewarned is forearmed
       • inexpensive, focused data rather than a wide-angle ‘big data’ approach
     • Can integrate with current V&V toolchains
       • e.g. automotive
  33. Exploitation: barriers
     • Need to ‘quantify intent’
       • may meet resistance
     • Effort in adopting new tools and techniques
       • processes and procedures may change
     • Upfront work needed before “real” development starts
       • may not fit management expectations/metrics of ‘progress’
     • V&V process models that leave integration to the end
       • block the opportunity to validate the performance decomposition at the outset
  34. Exploitation: opportunities
     • Create progress metrics
       • capturing successive risk reduction
       • supporting management and engineering
     • Assist customers concerned about:
       • risks of bad performance: to reputation, to insurability, to safety
       • costs of verifying and scaling up a prototype: need a performance model
     • Paper submitted to an IEEE Design & Test special issue
       • awaiting decision
  35. Thank you! If you would like further details or want to discuss potential applications, please contact us at: info@pnsol.com
