Hypothesis Based Testing: Power + Speed.

STAG Software presented this webinar on Feb 16, 2012, 2:00 to 3:00 pm EST.

The objective of this webinar is to present a different approach to delivering “Clean Software”, one that enables testers to leverage their intellect and thereby unleash the power to generate speed.

Transcript

  • 1. Hypothesis Based Testing: Power + Speed. T Ashok, Founder & CEO, STAG Software; architect of HBT. in.linkedin.com/in/AshokSTAG | @ash_thiru. Webinar: 16 Feb 2012, 1400-1500 EST.
  • 2. Tools and process have been significant drivers in speeding up testing, and the adoption of lean principles is seen as enabling focus and continuous evolution. Yet our appetite for speed is not satiated. So where do we go from here? Leveraging intellect is the key to more power and speed. Hypothesis Based Testing (HBT) is a personal, scientific test methodology that leverages one's intellect, enabling sharp goal focus and providing tools for scientific thinking, to rapidly assess the cleanliness of software.
  • 3. The “Tsunami” effect: four successive waves of improvement in testing. (1) Optimize the process. (2) Speed up execution. (3) Reuse assets. (4) Leverage intellect. The fourth wave is what delivers Power + Speed.
  • 4. The “Tsunami” effect, annotated with HBT artifacts: the fourth wave, leveraging intellect, yields Quality Levels, Potential Defect Types, Cleanliness criteria, complete test cases, a Cleanliness Index, and a validation suite. Power + Speed.
  • 5. Clarity of Purpose. Requirements, features, the system, and user stories exist to satisfy a clear goal. That clear goal carries Cleanliness criteria, and the associated Potential Defect Types determine both “What to Test” and “Test for What”.
  • 6. Expectations = Cleanliness Criteria. Expectations are “properties of the system”: they are delivered by needs (requirements), via features that display behavior, constructed from materials in accordance with a structure, in a given environment.
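
A minimal sketch of this model as data (the field names are ours, not STAG's), so that cleanliness criteria can later be derived from an expectation's properties:

```python
from dataclasses import dataclass

# Illustrative encoding of slide 6: an expectation is delivered by a need,
# via features that display behavior, built from materials according to a
# structure, in a given environment.
@dataclass
class Expectation:
    need: str              # the requirement being satisfied
    features: list[str]    # features that deliver the need
    behavior: str          # observable behavior of those features
    structure: str         # how the pieces are arranged
    materials: list[str]   # components the system is built from
    environment: str       # where the expectation must hold

exp = Expectation(
    need="serve clean drinking water",
    features=["filtration", "UV treatment"],
    behavior="output meets potability limits",
    structure="three-stage inline filter",
    materials=["carbon filter", "UV lamp"],
    environment="household tap pressure",
)
```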
  • 7. Quality Growth: a nine-staged filter. Each level has an objective and the class of issues it filters:
    L1 Input cleanliness: that inputs are handled well (input data handling).
    L2 Input interface cleanliness: that the user interface is clean (UI issues).
    L3 Structural integrity: that the internal structure is robust (internal structural issues).
    L4 Behavior correctness: that the functional behavior is correct (functionality).
    L5 Flow correctness: that end-to-end flows work correctly (business flow conditions, linkages).
    L6 Environment cleanliness: that it does not mess up the environment (resource leaks, compatibility).
    L7 Attributes met: that the stated attributes are met (performance, security, volume, load, ...).
    L8 Clean deployment: that it deploys well in the real environment (compatibility, migration).
    L9 End user value: that user expectations are met (user flows, experience).
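
The staged filter lends itself to a simple gating check: a build has reached level N only if every level below N has also passed. A sketch (the level names are transcribed from the slide; the gating function is our illustration):

```python
from enum import IntEnum

# The nine HBT quality levels of slide 7, in ascending order.
class QualityLevel(IntEnum):
    INPUT_CLEANLINESS = 1
    INPUT_INTERFACE_CLEANLINESS = 2
    STRUCTURAL_INTEGRITY = 3
    BEHAVIOR_CORRECTNESS = 4
    FLOW_CORRECTNESS = 5
    ENVIRONMENT_CLEANLINESS = 6
    ATTRIBUTES_MET = 7
    CLEAN_DEPLOYMENT = 8
    END_USER_VALUE = 9

def highest_clean_level(passed: set[QualityLevel]) -> int:
    """Highest level reached with no failed level beneath it."""
    reached = 0
    for level in QualityLevel:      # iterates in ascending order
        if level not in passed:
            break                   # the filter stops at the first gap
        reached = int(level)
    return reached

# A build that passes L1-L4 but not L5 has only reached level 4,
# even if some higher-level tests happen to pass.
assert highest_clean_level({QualityLevel.INPUT_CLEANLINESS,
                            QualityLevel.INPUT_INTERFACE_CLEANLINESS,
                            QualityLevel.STRUCTURAL_INTEGRITY,
                            QualityLevel.BEHAVIOR_CORRECTNESS,
                            QualityLevel.ATTRIBUTES_MET}) == 4
```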
  • 8. HBT: a personal, scientific test methodology, powered by STEM™, a defect detection technology. SIX stages of DOING (S6), powered by EIGHT disciplines of THINKING (D8).
    Stages: S1 Understand expectations; S2 Understand context; S3 Formulate hypothesis; S4 Devise proof; S5 Tooling support; S6 Assess & Analyze.
    Disciplines: D1 Business value understanding; D2 Defect hypothesis; D3 Strategy & Planning; D4 Test design; D5 Tooling; D6 Visibility; D7 Execution & Reporting; D8 Analysis & Management.
    Power + Speed.
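
The slide pairs the stages with the disciplines only as "powered by", without spelling out the wiring, so the sketch below simply encodes the two lists as lookup tables for use in the later examples:

```python
# Labels copied from slide 8.
STAGES_OF_DOING = {
    "S1": "Understand expectations",
    "S2": "Understand context",
    "S3": "Formulate hypothesis",
    "S4": "Devise proof",
    "S5": "Tooling support",
    "S6": "Assess & Analyze",
}

DISCIPLINES_OF_THINKING = {
    "D1": "Business value understanding",
    "D2": "Defect hypothesis",
    "D3": "Strategy & Planning",
    "D4": "Test design",
    "D5": "Tooling",
    "D6": "Visibility",
    "D7": "Execution & Reporting",
    "D8": "Analysis & Management",
}
```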
  • 9. STEM core concepts by discipline (32 core concepts in all):
    D1 Business value understanding: Landscaping, Viewpoints, Reductionist principle, Interaction matrix, Operational profiling, Attribute analysis, GQM.
    D2 Defect hypothesis: EFF model (Error-Fault-Failure), Defect centricity principle, Negative thinking, Orthogonality principle, Defect typing.
    D3 Test strategy & planning: Orthogonality principle, Tooling needs assessment, Defect centered AB, Quality growth principle, Techniques landscape, Process landscape.
    D4 Test design: Reductionist principle, Input granularity principle, Box model, Behavior-Stimuli approach, Techniques landscape, Complexity assessment, Operational profiling.
    D5 Tooling: Automation complexity assessment, Minimal babysitting principle, Separation of concerns, Tooling needs analysis.
    D6 Visibility: GQM, Quality quantification.
    D7 Execution & Reporting: Contextual awareness, Defect rating principle.
    D8 Analysis & Management: Gating principle, Cycle scoping model.
  • 10. Hypothesis Based Testing (HBT) at a glance: S1 and S2 capture expectations and set Cleanliness criteria; S3 hypothesizes Potential defect types; S4 plans staged & purposeful detection and designs complete test cases; S5 delivers sensible automation; S6 applies goal directed measures.
  • 11. S1, S2: a clear baseline. Set a clear goal for quality. Example: “clean water” implies (1) colorless, (2) no suspended particles, (3) no bacteria, (4) odorless. What information (properties) can be used to identify this? The marketplace, customers, and end users; requirements (flows), usage, and deployment; features and attributes; the stage of development and interactions; environment and architecture; behavior and structure.
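
Criteria like these become useful once each is a checkable predicate. A sketch that sticks with the clean-water example (the thresholds and field names are invented for illustration):

```python
from typing import Callable

# A water sample, e.g. {"color": 0.0, "particles_ppm": 0.1,
#                       "cfu_per_ml": 0, "odor_index": 0}
Sample = dict[str, float]

CRITERIA: dict[str, Callable[[Sample], bool]] = {
    "colorless":              lambda s: s["color"] == 0.0,
    "no suspended particles": lambda s: s["particles_ppm"] < 0.5,
    "no bacteria":            lambda s: s["cfu_per_ml"] == 0,
    "odorless":               lambda s: s["odor_index"] == 0,
}

def assess(sample: Sample) -> dict[str, bool]:
    """Evaluate every cleanliness criterion; clean only if all hold."""
    return {name: check(sample) for name, check in CRITERIA.items()}

verdict = assess({"color": 0.0, "particles_ppm": 0.1,
                  "cfu_per_ml": 0, "odor_index": 0})
assert all(verdict.values())   # this sample is "clean water"
```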
  • 12. S3: a goal-focused approach to cleanliness. Identify the potential defect types that can impede cleanliness. Example: data validation, timeouts, resource leakage, calculation, storage, presentation, transactional, and so on. The scientific approach to hypothesizing defects is to look at FIVE aspects (Data, Logic, Structure, Environment, Usage) from THREE views (Error injection, Fault proneness, Failure), using the STEM core concepts Negative thinking (for aspects) and the EFF model (for views). “A Holmes-ian way of looking at properties of elements.”
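
The five-aspects-by-three-views grid can be enumerated mechanically; each of its fifteen cells is a prompt for hypothesizing defect types. A sketch:

```python
from itertools import product

# The defect-hypothesis grid of slide 12.
ASPECTS = ["Data", "Logic", "Structure", "Environment", "Usage"]
VIEWS = ["Error injection", "Fault proneness", "Failure"]

prompts = [f"What defects could the '{a}' aspect show from the '{v}' view?"
           for a, v in product(ASPECTS, VIEWS)]
assert len(prompts) == 15   # 5 aspects x 3 views
```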
  • 13. S4: Levels, Types & Techniques: the STRATEGY. There are NINE levels to cleanliness, from L1 Input cleanliness up through L9 End user value. Each potential defect type (PDT1, PDT2, ...) is pinned to the quality level where it is best caught, and is addressed by a test type (TT1, TT2, ...) applied via test techniques (T1-T4): “fractional distillation of the bug mixture”.
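
One way to hold such a strategy in code is a table pinning each potential defect type to a level, test type, and technique. The concrete rows below are invented examples, not from the deck:

```python
# (potential defect type, quality level, test type, technique) -- hypothetical rows.
STRATEGY = [
    ("PDT1: malformed input accepted", 1, "TT1 input validation", "boundary value analysis"),
    ("PDT2: UI control misbehaves",    2, "TT2 interface checks", "GUI checklist"),
    ("PDT3: wrong calculation",        4, "TT3 functional tests", "decision tables"),
    ("PDT4: broken business flow",     5, "TT4 end-to-end flows", "scenario testing"),
    ("PDT5: degrades under load",      7, "TT5 attribute tests",  "load testing"),
]

def plan_for_level(level: int) -> list[tuple]:
    """Which defect types should this level's test cycle be hunting?"""
    return [row for row in STRATEGY if row[1] == level]

assert len(plan_for_level(4)) == 1   # only PDT3 is distilled out at L4
```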
  • 14. S4: Countable test cases & fault coverage. Use the STEM core concepts (Box model, Behavior-Stimuli approach, Techniques landscape, Coverage evaluation) to model behavior, create behavior scenarios, and create stimuli (test cases). Irrespective of who designs them, the number of scenarios/cases shall be the same: COUNTABLE. Requirements & fault traceability means the test cases for a given requirement shall be able to detect specific types of defects (e.g., requirement R1 yields scenario TS1 with test cases TC1-3, targeting defect type PDT1). This guarantees test adequacy: FAULT COVERAGE. Guarantee implies that the means to the end is rational and provable. A sketch of the countability idea follows.
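
Assuming a decision-table style behavior model (the conditions below are invented), the scenario count is fixed by the model rather than by whoever designs the tests:

```python
from itertools import product

# Behavior modeled as conditions, each with a finite set of classes.
CONDITIONS = {
    "account_state": ["active", "locked", "closed"],
    "amount":        ["within limit", "over limit"],
    "channel":       ["web", "mobile"],
}

scenarios = list(product(*CONDITIONS.values()))

# Anyone who models these conditions the same way derives the same
# 3 x 2 x 2 = 12 scenarios: the count is a property of the model.
assert len(scenarios) == 12
```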
  • 15. HBT Test Case Architecture. Test cases are organized by quality level, sub-ordered by item (features/modules, ...), segregated by type, ranked by importance/priority, sub-divided into conformance (+) and robustness (-), classified by early (smoke)/late-stage evaluation, tagged by evaluation frequency, linked by optimal execution order, and classified by execution mode (manual/automated). A well-architected set of test cases is like effective bait that can 'attract' defects in the system. It is equally important that they are well organized, to enable execution optimization and to carry the right information for easy automation. A sketch of one way to encode this organization follows.
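
A minimal sketch, with one field per organizing dimension named on the slide (the encoding and the sort key are our assumptions):

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    case_id: str
    level: int          # quality level L1..L9
    item: str           # feature/module the case belongs to
    test_type: str
    priority: int       # 1 = most important
    conformance: bool   # True = conformance (+), False = robustness (-)
    stage: str          # "smoke" (early) or "late"
    frequency: str      # e.g. "every build", "weekly"
    mode: str           # "manual" or "automated"

def execution_order(suite: list[TestCase]) -> list[TestCase]:
    """Lower levels first, smoke before late-stage, then by priority."""
    return sorted(suite, key=lambda t: (t.level, t.stage != "smoke", t.priority))
```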
  • 16. S5: Focused scenarios + a good automation architecture. Level-based test scenarios yield shorter scripts that are more flexible to change and easier to maintain. The scenarios span the nine quality levels, from L1 Input cleanliness through L9 End user value, as sketched below.
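
A sketch of why level-scoped scenarios stay short: each script exercises one concern at one level, so a change at L2 does not ripple into L5 scripts. The system-under-test functions here are stand-ins:

```python
def accepts_amount(text: str) -> bool:      # stand-in for the system under test
    return text.isdigit() and int(text) > 0

def simple_interest(p: float, rate: float, years: int) -> float:
    return p * rate * years                 # stand-in for the system under test

def l1_rejects_negative_amount():           # L1: input cleanliness
    assert not accepts_amount("-1")

def l4_interest_is_correct():               # L4: behavior correctness
    assert simple_interest(1000, 0.05, 1) == 50

LEVEL_SUITES = {1: [l1_rejects_negative_amount], 4: [l4_interest_is_correct]}

def run_level(level: int) -> None:
    for case in LEVEL_SUITES.get(level, []):
        case()                              # raises AssertionError on failure

run_level(1)
run_level(4)
```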
  • 17. S6: the “Cleanliness Index” improves visibility. Results of test types and potential defect types roll up by quality level into a stage quality report: each requirement (R1, R2, ...) is scored against each cleanliness criterion (CC1, CC2, ...) as Met, Partially met, or Not met.
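
A possible roll-up for such a report: per-requirement pass fractions against each criterion, summarized as Met / Partially met / Not met (the data is invented, shaped like the slide's grid):

```python
# results[requirement][criterion] -> fraction of its tests that pass
RESULTS = {
    "R1": {"CC1": 1.0, "CC2": 1.0},
    "R2": {"CC1": 0.0, "CC2": 0.1},
    "R3": {"CC1": 1.0, "CC2": 0.6},
}

def status(fraction: float) -> str:
    if fraction == 1.0:
        return "Met"
    return "Partially met" if fraction > 0.0 else "Not met"

report = {req: {cc: status(frac) for cc, frac in row.items()}
          for req, row in RESULTS.items()}

assert report["R1"] == {"CC1": "Met", "CC2": "Met"}
assert report["R2"]["CC1"] == "Not met"
```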
  • 18. HBT: a case study. Two teams tested the same system, one using HBT, the other a conventional approach.

    Test case counts (HBT vs. conventional):
    Module   HBT    Normal   Increase
    M1       100    28       257%
    M2       85     52       63%
    M3       95     66       44%
    M4       132    72       83%
    M5       127    28       354%
    M6       855    116      637%
    TOTAL    1394   362      285%

    Test case split under HBT (positive vs. negative):
    Module   Total   Positive   Negative
    M1       100     59         41
    M2       85      68         17
    M3       95      67         28
    M4       132     112        20
    M5       127     85         42
    M6       855     749        106
    TOTAL    1394    1140       254

    Nearly a 3x increase in the number of test cases raises the probability of a higher defect yield, and a 2x improvement in negative cases raises the probability of a better one.

    Defects found: 32 with HBT (20 major, 12 minor) vs. 16 with the conventional approach, a 100% increase. Of the 32, a few were residual defects, one critical enough to corrupt the entire data.

    Effort (person-hours) for test analysis & design: 30 with HBT vs. 20. Front-loading the effort lowered the support cost. A quick check of the arithmetic follows.
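
As a sanity check, the case study's "Increase" column is consistent with (HBT - Normal) / Normal:

```python
def pct_increase(hbt: int, normal: int) -> int:
    return round((hbt - normal) / normal * 100)

assert pct_increase(1394, 362) == 285   # TOTAL row: "285%", nearly 3x
assert pct_increase(855, 116) == 637    # M6 row: "637%"
assert pct_increase(32, 16) == 100      # defects: "100%" more with HBT
```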
  • 19. HBT results:
    - 50%-1000% reduction in post-release defects
    - Re-architecting test assets increases test coverage by 250%
    - 30% reduction in defect leakage from the early stages
    - Smart automation: 3x reduction in time
    - Deskilling: less experienced staff do better, ramp up faster, and cost less
    - “Holes” found and fixed at the requirements stage
    - Test assessment accelerates integration and de-risks deployment
  • 20. Summarizing: the four waves again, (1) optimize the process, (2) speed up execution, (3) reuse assets, (4) leverage intellect, with HBT supplying Quality Levels, Potential Defect Types, Cleanliness criteria, complete test cases, a Cleanliness Index, and a validation suite. Power + Speed.
  • 21. Thank you! Follow us @stagsoft. Check out our blog at www.stagsoftware.com/blog