Experiences with Semi-Scripted Exploratory Testing. Simon Morley, Agile Testing Days 2011, 15 Nov 2011
About Me: I work as a Test Coordinator with many different teams and testers. I jump in as a team leader from time to time, and even do some testing! Context: Multimedia IP networks, to "telecom grade".
Overview Preamble... Case story Background / Problem Description Approaches Observations & Lessons
Preamble My usage of... Scripted Exploratory Semi-Scripted
Semi-Scripted & Exploratory?
What? Experiences with combining the following approaches: Scripted Testing, Scripted Testing vs Scripted Execution, Exploratory Testing
Scripted Execution vs Scripted Testing?
Another Example: Login to PC. Step 1: Enter user name. Step 2: Enter password. Result: Successfully logged in. But what if the password has expired? What if it was a guest account (no password)?
Semi-Scripted? “Walking in the woods” Using some pre-defined set-up as an enabler to an exploratory session The pre-defined set-up should not exclude observation and feedback. Walking with someone else is valuable
 
Why “Walking in the woods”? Seasons change -> set-up/config changes. Repeating walks can actually help reveal new information (due to conditions changing). Terrain changes -> systems “change” due to use. Prolonged usage of systems builds up different problems/conditions “under the surface”.
Case Story Background Feature Walkthroughs Test Ideas & Feasibility Test Execution Some Success Indicators Challenges and Lessons
Background
Experience Context Background factors – challenging... Short deadline Trial Feature Complex Environment Traditional Approach
Environment (diagram): the network element under test (SUT) connected to surrounding NEs and simulated NEs (Sim NE); parts of the environment unchanged, parts changed.
Result?
Additional terms of reference Catch-up effort Gather information for feature assessment “Free hand” Possibilities...
Approach Possibilities
Initial Feature Analysis
Feature Walkthrough
Test Feasibility
Learning
Feature Walkthrough All were 'up to speed' before the discussion All the team plus external experience Discussion – Q&A - Brainstorming Whiteboard is centre-stage Good toolsmiths make a difference!
Feature Walkthrough -> Brainstorming for risks
Brainstorming #1
Brainstorming #2
First real testing of the feature!
Test Ideas
Test Ideas -> Test Feasibility
Learnings Gather the right people to look at the problem. Allow time for reflection.
Estimation Notes Main estimation figures were: Who was needed and for how long Equipment need & availability (feedback) Estimation was not based on test case ideas
Test Execution
Semi-Scripted? (diagram): the same input applied to the NE under test (SUT) while configuration and provisioned data change across the surrounding NEs and simulated NEs (Sim NE).
Test Execution
Test Execution
Test Execution
Test Execution
Test Execution
No “test idea constraints”, combined with tester thinking & reflection from execution, feeds back into better test ideas.
When you're not interested in numbers, then testers are no longer interested in talking about them!
Results from execution: Found issues not found elsewhere. Results used to give better information on the feature when going to trial. Start of a change in mindset.
Some Success Indicators
A Success Indicator... A test report changes from “Run X test cases, where Y failed and Z passed” And becomes more like...
“Feature A tested, there are some interaction issues with feature B and, by implication, risks for features C and D. We have not explicitly tested feature C. Recommend some further investigation with feature A in combination with features B and C. As feature D is not to be used in trial, a priority decision is needed on the inclusion of D into scope. If feature D is to be put into general usage then further test and investigation is recommended.”
Pilot activity spread to end-to-end teams with greater engagement and end-value appreciation. Issues in the product affecting end-user usage & interactions are found -> adding value to the product reporting.
Observation & Analysis
What's happening here? Testing has been framed as an investigative activity! Scripting is re-framed as a tool rather than a goal for testing!
Note on labels The testing was never labelled as one form or another The frames in which it was presented/discussed changed Questions to trigger the activity changed Move away from valuing test cases Move to relating the value in PO (Product Owner) terms
Sometimes labels get in the way! Framing a choice differently -> transition!
Change? How? This is a transition rather than a big bang change. Why? Adaptive, small steps, fail fast & improve! Time was key – pull the team in a direction, but allow them to see the benefit from the change -> then they control the speed of change and so it happens quicker! The team is challenged to think about the activity – and with the right support & encouragement they generally do think more!
Factors particular to this case “Free hand” No expectation on change of mindset First pilot was in a separated test team.
To some external observers... maybe not so different: a product with some level of testing is delivered. But, from the team perspective: the value of the testing and ideas was debated & discussed early. All in the team & the PO value this earlier feedback. Ideas formed, tried out, re-evaluated.
Challenges Getting past test case counting Relating value in test ideas to end-user value Working with good test storytelling is important Whole team involved in initial analysis Prototype early – good toolsmiths are important
Recap Big Challenges Used the challenges as a means to start the transition Part of the transition used scripting as an enabler But also to trigger a thinking / questioning mindset -> throughout (workshop -> test analysis/feasibility -> execution)
Lessons
Lessons #1 Feature Walkthrough: First real testing of the feature happens early Gather the right people to look at the problem. Allow time for reflection. Test Ideas: No “test idea constraints”. Scripts are an enabler, not the goal.
Lessons #2 Execution No “test idea constraints”, combined with tester thinking & reflection from execution, feeds back into better test ideas. When you're not interested in numbers, then testers are no longer interested in talking about them!
Lessons #3 General Challenge the team (testers) to think and they generally will think! Allow the team to see the benefits of change (transition) – then they will begin to drive the speed of change. Frame testing differently Frame scripts as tools and enablers rather than fountains of knowledge!
Lesson #4 It takes practice to get your testing experience from: To:
Questions?
Thank You! Blog: http://testers-headache.blogspot.com Twitter: @YorkyAbroad
Attributions http://www.flickr.com/photos/kalavinka/ http://www.flickr.com/photos/kalavinka/4617897952/ http://www.flickr.com/photos/tesla314/2666463779/ http://www.flickr.com/photos/txmx-2/6054712319/ http://www.flickr.com/photos/28385889@N07/3058090229/in/photostream/ http://www.flickr.com/photos/somegeekintn/3810233454/ http://www.flickr.com/photos/sashomasho/260136080/

Editor's Notes

  • #3 These experiences are based on bullets 2&3, and through bullet 1 spreading the word to other teams.
  • #8 Marionette: Following the instructions of the puppeteer. But suppose there is some danger in front of him that the puppeteer doesn't see? He can decide to divert or modify his action, and even when to continue following directions. So, it's not blindly following instructions...
  • #10 Using some pre-defined set-up as an enabler to an exploratory session, i.e. there can be a certain amount of pre-meditated design and thought before you start exploring The pre-defined set-up and exploration should not exclude observation and feedback. Walking with someone else is valuable (just like testing with someone else adds a new/different dimension)
  • #12 Note, you need a way to determine if/when to carry on. Script usage could also be called script-supported testing.
  • #14 Large organisation. Some features are for trial purposes. Many specifications & protocols. Many features and interactions. Perpetual question: "How much detail is enough?" Prototype in a conservative test team set-up. But the new thinking has followed into end-to-end team set-ups.
  • #17 “Erfinden sie ein problem” = “they invent a problem”
  • #20 Say which bits will be zoomed-in....
  • #25 Trial, demo & live? A trial might have different performance requirements to live usage. Other aspects not required might be expansion/scalability needs or robustness and negative testing.
  • #30 This is a 'weeding-out' activity also. Input to 'silent evidence' here is: not possible due to tool or time constraints.
  • #33 Some traditional aspects kept; some new aspects/ideas introduced.
  • #36 Note, you can't just make things ambiguous and expect thinking and reflection to happen. The thinking and questioning must be encouraged as a part of the activity. The ambiguity stops ideas getting 'boxed in'.
  • #40 Don't let test cases "constrain" (box in) ideas – create them with "enough" ambiguity so that you're not constrained, boxed in, nor restricted to a comfort zone, and eventually you end up "thinking outside your comfort zone/box".
  • #48 Testing has been framed as an investigative activity: try this, what happens? Grey areas? Look closer. Contrast with "this is the spec -> test to that". Moves away from reliance on only confirmatory testing.
  • #49 By changing the frames of reference, the labelling of the activity was avoided. Sometimes labels get in the way. Framing a problem differently is much more subtle -> acceptable.
  • #54 I think of test cases as test idea constraints – because they tend to lock in ideas about what a test case produces -> it takes on much more value than it should... Getting past test case counting. Relating value in test ideas to end-user value (emphasized with PO/team involvement early). Working with good test storytelling is important. Whole team involved in initial analysis (reduces handovers + creates a feeling of involvement). Prototype early – good toolsmiths are important. Fail fast – throw away ideas before they cost too much!
  • #55 The thinking and reflection about the test ideas, feeding back to improve them -> these are attributes of what I call good testing and attributes that align with many aspects of good agile testing. Big challenge – absence of team set-up, WoW & processes (it was also an aid not to have defined WoW & processes!). Reflect on semi-script & exploratory...
  • #61 Q&A session