New Model Testing:
A New Test Process and Tool
@paul_gerrard
Paul Gerrard
paul@gerrardconsulting.com
gerrardconsulting.com
My Goals in this webinar
• Share some R & D with you
• Get you thinking about your test process and
opportunities to improve
• Show you a brief demo of a tool we’re
working on
• Invite collaboration.
Agenda (my journey)
• Test Process
• Will Robots Replace Testers?
• Tool Architecture
• A Demonstration
Testing Styles, Approaches
“Structured/Waterfall/Staged” Testing
• Systematic
• Transparent, Documented
• Reviewable
• Auditable
• Repeatable, measurable
• Automatable
• Inflexible, not responsive
• Obsolescent/inaccurate
documentation
• Prone to biases, inattention
• Outdated process
• Expensive, Inefficient
• Unimaginative, boring
“Exploratory” Testing
• Agile (with a small ‘a’)
• Improvised, imaginative
• Flexible and responsive to change
• Faster, cheaper
• More effective
• Personally enjoyable
• Not repeatable
• Not easily automated
• Little or no documentation
• Hard to manage
• Hard to scale
• Opaque
• Not auditable, measurable.
Some of these are perceptions – which vary
Different strokes for different folks
• I’ve never believed that these styles are
‘opposites’, rather they differ in emphasis
• Like the Agile manifesto, it’s more a set of
values/preferences that drive behaviour:
– Differences in scale and timing
– Planned v improvisational
– Heavy v light documentation
– Process and governance v agility and freedom
– Etc. etc.
• Is there a fundamental thought process that
underpins all testing?
Forget Logistics
(for the time being)
Document or not?
Automated or manual?
Agile v waterfall?
This business or that business?
This technology v that technology?
ALL Testing is
Exploratory
We explore sources of knowledge ...
... to build test models ...
... that inform our testing.
Judgement, exploring and testing
[Diagram: Exploring (sources) creates models; Judgement decides whether our model(s) are adequate; if they are, Testing (the system) uses the models; if not, we return to Exploring.]
We explore sources of knowledge to build test models that inform our testing
BTW – Do Developers think/explore the same way? I think so.
New Model Testing
My BBC talk: http://www.bbc.co.uk/academy/technology/article/art20150522113029398
29 page paper: http://dev.sp.qa/download/newModel
The New Model
• Makes no assumption about logistics, context
• It is not a process model with entry/exit criteria,
procedures, deliverables etc.
• All models are wrong, but I believe this is useful
– An attempt to understand our thought processes
– Our brains can work on several processes
simultaneously
– Can help us better understand information flows, feedback and review processes, automation etc.
– Focuses attention on (test) model-making.
An Obvious Question
Is it feasible to combine the best of
structured and exploratory testing
and create a new test approach?
Will Robots Replace
Testers?
Some research
There is a paper at:
https://tkbase.com/resources/viewResource/14
A recent study*…
• Over the next two decades, 47% of jobs in the
US may be under threat
• It ranks 702 occupations in order of their
probability of computerisation
– Telemarketers: 99% likely
– Recreational therapists: 0.28% likely
– Computer programmers: 48% likely
• Something significant is going on out there
• If programmers have a 50/50 chance of being
replaced by robots, we should think seriously about
how the same might happen to testers.
* “The future of employment: how susceptible are jobs to computerisation?”
http://www.oxfordmartin.ox.ac.uk/downloads/academic/The_Future_of_Employment.pdf
Some systems-related occupations
Occupation (rank out of 702): probability of computerisation
• Computer and Information Research Scientists (69): 1.5%
• Network and Computer Systems Administrators (109): 3.0%
• Computer and Information Systems Managers (118): 3.5%
• Information Security Analysts, Web Developers, and Computer Network Architects (208): 21%
• Computer Occupations, All Other (212): 22%
• Computer Programmers (293): 48%
• Computer Support Specialists (359): 65%
• Computer Operators (428): 78%
• Inspectors, Testers, Sorters, Samplers and Weighers (670): 98%
Test Automation =
Mechanical Tools
What we REALLY need are
THINKING TOOLS
The term Test Automation misleads
• It misleads as a label because the whole of
testing cannot be automated
• The label is bad, but the scope of Test
Automation is what I call ‘Applying’ in the
New Model
Testers need Thinking Tools
• There are ten testing activities in the New Model
– Test automation tools only support one: ‘Applying’
• The remaining nine activities (information
gathering, analysis, modelling, challenging, test
design and so on) are not well supported
• All require some level of thinking and skills
• Checking is possible when a system and its
purpose are well understood and trusted
• Test automation tools are simple in principle…
… compared to the rest of the test process.
Four quadrant model of intelligent
test tools
[Quadrant diagram: tools plotted by Ability to Investigate (towards control, imagination, discernment, foresight) against Ability to Capture Knowledge (towards models, visualisations, relationships, transformations). Examples plotted: pencil and paper, sketching tools; text editors, screen shots; note takers; mind maps; UML/CASE tools.]
TERMINATOR
TESTER
Not Yet!
Cervaya™
Tool Architecture
A nine-month bot journey
(But it’s been a twenty-year testing journey so far)
The vision thing
• I want a bot partner/pair that supports
exploratory testing
• You know my view (model)
of testing already
• Can we use the explore v test paradigm in a bot
that allows you to:
– Explore, take notes and model
– Record ideas, risks, tests, outcomes and bugs
– Generate reports and documentation as a consequence, rather than requiring you to write them
• Codename: Cervaya (cervaya.com is a holding page for now)
Schematic
[Architecture schematic:
• Schema Server: Schema Manager (web site) and Schema Repository (web service) – schemas are administered through the web and downloaded by clients
• Cervaya Bot Client: Robot Engine, Speech Recognition Interface and Command Line Interface – performs robot actions through services
• Cervaya Server: Actions (web services) and Application Web Site – application reporting, monitoring, control and management
• Many app servers – one (or many) per schema; labelled stores: robotschemas, cervaya]
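To make the schematic's data flows concrete, here is a minimal sketch of a bot client downloading a schema and invoking a robot action over HTTP. The URLs, endpoint paths and field names are assumptions for illustration only, not Cervaya's published interfaces.

```python
import requests  # well-known third-party HTTP client

SCHEMA_REPOSITORY = "https://schemas.example.com"  # hypothetical Schema Repository URL
CERVAYA_SERVER = "https://cervaya.example.com"     # hypothetical Cervaya Server URL

def download_schema(schema_name: str) -> dict:
    """Fetch a bot schema from the repository (the endpoint path is illustrative)."""
    response = requests.get(f"{SCHEMA_REPOSITORY}/schemas/{schema_name}")
    response.raise_for_status()
    return response.json()

def perform_action(action: str, payload: dict) -> dict:
    """Ask the server to perform a robot action via its action web services (illustrative)."""
    response = requests.post(f"{CERVAYA_SERVER}/actions/{action}", json=payload)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    schema = download_schema("exploratory-session")
    result = perform_action("start-session", {"charter": "survey the login feature"})
    print(schema.get("states"), result.get("status"))
```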
The target user is…
• Not really a freestyle exploratory tester working
in a start-up or small product company
• Testers working in:
– Regulated, high-integrity, safety critical
– High documentation, high accountability
– Environments where testers are constrained by their
processes
– Testers struggling to apply ‘trad’ methods in an Agile,
Digital, DevOps environment
• Why can’t we dictate a detailed test plan for a
bot to document, analyse, visualise?
Bot Schema – State Model+
Simplified Schema
[State schema: Home, Session, Navigation, Modelling, Notes, Observation, Test.
• Home – always start here, ready to commence the session
• Session – everything is done inside a session
• Navigation/Modelling – places, features, forms and fields comprise the model
• Notes/Observation – notes and observations are related to places, features, forms or fields; notes can be ideas, concerns, risks, questions etc.
• Test – tests focus on forms and fields (but also end-to-end scenarios)]
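One way to read the simplified schema is as a small state machine: the bot starts at Home, everything else happens inside a Session, and navigation, modelling, note-taking, observation and testing are states reached from there. The sketch below is an interpretation under that assumption; the state names come from the slide, the transition rule is guessed.

```python
from enum import Enum, auto

class State(Enum):
    HOME = auto()         # always start here, ready to commence the session
    SESSION = auto()      # everything else is done inside a session
    NAVIGATION = auto()   # moving between places and features
    MODELLING = auto()    # places, features, forms and fields comprise the model
    NOTES = auto()        # ideas, concerns, risks, questions etc.
    OBSERVATION = auto()  # observations related to places, features, forms or fields
    TEST = auto()         # tests focus on forms and fields (and end-to-end scenarios)

IN_SESSION = {State.NAVIGATION, State.MODELLING, State.NOTES, State.OBSERVATION, State.TEST}

def can_move(current: State, target: State) -> bool:
    """Assumed transition rule: HOME only opens a SESSION; inside a session you can
    move freely between the in-session states, or return to HOME to end the session."""
    if current is State.HOME:
        return target is State.SESSION
    return target in IN_SESSION or target in (State.HOME, State.SESSION)

assert can_move(State.HOME, State.SESSION)
assert not can_move(State.HOME, State.TEST)    # tests only happen inside a session
assert can_move(State.MODELLING, State.NOTES)  # free movement while in session
```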
Demo
A partial prototype
The location hierarchy
• Explorers create maps (or should do)
• The system map is a hierarchy of locations as
follows:
• Application
  – Places
    • Features
      – Forms
        » Fields
• All other content is located with respect to some
level in this hierarchy.
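The hierarchy above maps naturally onto a nested data structure. A minimal sketch, with class and field names that are illustrative rather than the tool's actual model:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Field:
    name: str

@dataclass
class Form:
    name: str
    fields: List[Field] = field(default_factory=list)

@dataclass
class Feature:
    name: str
    forms: List[Form] = field(default_factory=list)

@dataclass
class Place:
    name: str
    features: List[Feature] = field(default_factory=list)

@dataclass
class Application:
    name: str
    places: List[Place] = field(default_factory=list)

# Notes, observations and tests would each reference one of these locations,
# so all other content is anchored somewhere in this map.
app = Application("shop", places=[
    Place("checkout", features=[
        Feature("payment", forms=[
            Form("card details", fields=[Field("card number"), Field("expiry")]),
        ]),
    ]),
])
```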
Demo agenda
• Cervaya
– apps, versions and charters
• Speechbot
• Thinkbot
• Cervaya
– Explorer
– Session Status/Log
– Creating and running tests
– Viewing the model
• D3 and system maps/data visualisations.
A New Test Process
Exploration support
• Frustration:
– testers are unimaginative, working by rote
– constant pressure to cut costs
• Productivity of exploratory test approaches is proven
• Testers want to explore, but the need for control and
documentation constrains them
• Testers need tools that can capture plans and tester activity in real time
• The next generation will be led by tools that support
the exploration of sources of knowledge.
• These tools might use a “Surveying” metaphor.
[Diagram – Staged Process: big, up-front test planning; development; system testing; re-test; regression test; automation.
Diagram – From Staged to Continuous: scoping exploration sessions; interactive testing sessions; clarifications; changes; refinements; continuous integration; continuous dev, test and delivery.]
Test Process
• Cervaya can be used to create test plans
– Initially based on documentation
– Evolving plan based on chartered sessions
• Testing in the small – chartered testing
• Testing in the large – Cervaya has the building
blocks for end to end tests
• Could it generate the documentation required in regulated industries? Why not?
Real-Time test management
• The activity of the test team can be seen in
real-time
• Testers can see each other’s activity and liaise
when necessary
• Developers can also see progress and
requests for more information very quickly
– E.g. selected queries, test failures could be posted
to Slack.
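As a sketch of the Slack idea: Slack incoming webhooks accept a simple JSON payload, so a test failure from the session log can be posted in a few lines. The webhook URL and message format here are placeholders.

```python
import requests

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder URL

def notify_failure(test_name: str, version: str, tester: str) -> None:
    """Post a test failure notification to a Slack channel via an incoming webhook."""
    message = f":x: Test failed: {test_name} (build {version}), reported by {tester}"
    requests.post(SLACK_WEBHOOK_URL, json={"text": message}, timeout=10)

notify_failure("checkout / card details / invalid card number", "1.4.2", "paul")
```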
From Staged to Continuous Testing
• In Waterfall projects, testing works in stages
– High-level then low-level test plans, test procedures, execution, re-testing, regression testing
• If testers are shifted left (into the development teams) they can:
– Model features and forms as they are created by the developers
– The low-level test plan emerges and stays accurate as it iterates over time
– Longer/end-to-end tests – not yet available but will be soon
• Tests and test run histories are managed against versions (and platforms/environments soon)
• The aim is to export tests for automation as soon as practical, with a feedback mechanism from the automation tools.
The best of both worlds?
“Structured/Waterfall/Staged” Testing
• Systematic
• Transparent, Documented
• Reviewable
• Auditable
• Repeatable, measurable
• Automatable
“Exploratory” Testing
• Agile (with a small ‘a’)
• Improvised, imaginative
• Flexible and responsive to
change
• Faster, cheaper
• More effective
• Personally enjoyable
I’m looking for
Collaboration
Want to know more?
Email paul@gerrardconsulting.com
Q & maybe some A
A new test process?
• The “tester as surveyor” affects the relationship of
testing to development
• A new style of testing process emerges:
– No more documentation created in a knowledge vacuum
– Iterative, incremental knowledge acquisition and capture
process closely aligned with the delivery of features
• Could this be an Agile test process at last?
• At least: it fits the increasingly popular Continuous
Delivery, DevOps development approaches.
System Surveying
• A System Survey incorporates features and captures the architecture of the system from a user perspective
– Testers pair with developers and survey features
– The knowledge required to build systems emerges over time
– So does the design of the system
– So should the models and documentation produced by testers
• Surveys that evolve the System Model/Map are shared
• The basis of component and system tests is the System
Model itself
• No need for extensive scripts or test procedures!
• The information required for scripting is in the model.
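To illustrate the point that the information required for scripting is already in the model: given a location hierarchy in the Application, Places, Features, Forms, Fields shape, a short walk over the model can emit one test skeleton per form. This is a thought experiment under assumed data shapes, not a description of a Cervaya feature.

```python
# A toy model in the Application -> Places -> Features -> Forms -> Fields shape
model = {
    "application": "shop",
    "places": [{
        "name": "checkout",
        "features": [{
            "name": "payment",
            "forms": [{"name": "card details", "fields": ["card number", "expiry"]}],
        }],
    }],
}

def generate_test_skeletons(model: dict) -> list:
    """Emit one test skeleton per form by walking the location hierarchy."""
    skeletons = []
    for place in model["places"]:
        for feature in place["features"]:
            for form in feature["forms"]:
                fields = ", ".join(form["fields"])
                skeletons.append(
                    f"def test_{form['name'].replace(' ', '_')}():\n"
                    f"    # navigate: {place['name']} -> {feature['name']} -> {form['name']}\n"
                    f"    # exercise fields: {fields}\n"
                    "    ...\n"
                )
    return skeletons

print("\n".join(generate_test_skeletons(model)))
```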
A process that suits automation
• Test process comprises a sequence of parallel actions
– Sequence: survey, model refinement then testing
– Parallel: small subsets of functionality selected for surveys
– These processes are both iterative and incremental as learning
proceeds
• Scalable: if you survey it, you can test it
• Automate: What you can survey and test, you can
probably automate
• “Humans make the early maps; tools will follow the
trails we make.”
• We don’t need Machine Learning to do this:
– Simple tools make suggestions that better inform and enrich
exploration and testing.
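As a sketch of the kind of non-ML suggestion meant here: compare the model against the run history and flag forms that have no recorded tests yet. The data shapes are assumptions for illustration.

```python
def suggest_next(model: dict, runs: list) -> list:
    """Flag forms in the model that have no recorded test runs yet."""
    tested_forms = {run["form"] for run in runs}
    suggestions = []
    for place in model["places"]:
        for feature in place["features"]:
            for form in feature["forms"]:
                if form["name"] not in tested_forms:
                    suggestions.append(
                        f"No tests recorded yet for "
                        f"{place['name']} / {feature['name']} / {form['name']}"
                    )
    return suggestions

model = {"places": [{"name": "checkout", "features": [{"name": "payment", "forms": [
    {"name": "card details"}, {"name": "billing address"}]}]}]}
runs = [{"form": "card details", "status": "pass"}]
print(suggest_next(model, runs))  # suggests looking at 'billing address' next
```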
All exploration and testing is
driven by charters
• Charters represent the low-level plans for all
sessions
• Charters can be reused for multiple sessions,
if required
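A minimal sketch of what a charter record might hold so it can be reused across sessions; the fields are assumptions, not Cervaya's schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Charter:
    """A low-level plan for one or more sessions (fields are illustrative)."""
    title: str
    mission: str
    app_version: str
    session_ids: List[str] = field(default_factory=list)  # reused across sessions if required

charter = Charter(
    title="Survey the checkout",
    mission="Model the payment forms and probe card validation",
    app_version="1.4.2",
)
charter.session_ids += ["session-001", "session-002"]  # the same charter, two sessions
```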
Services
• Can Cervaya handle web services as objects
to test?
• In principle yes – use your imagination:
– Map the form/field information to service/fields
– Service tests map directly to form tests
• But we need to create an option that doesn’t
require you to use your imagination.
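A sketch of the form-to-service mapping suggested above: the fields of a form become the fields of a request payload, so a form test translates directly into a service test. The endpoint and field names are invented for illustration.

```python
import requests

# A form test expressed as field values, as it might come out of the model
form_test = {
    "form": "card details",
    "fields": {"card number": "4111111111111111", "expiry": "12/29"},
    "expected_status": 200,
}

def run_as_service_test(base_url: str, test: dict) -> bool:
    """Map the form's fields onto a service payload and check the response status."""
    payload = {name.replace(" ", "_"): value for name, value in test["fields"].items()}
    response = requests.post(f"{base_url}/payments", json=payload)  # hypothetical endpoint
    return response.status_code == test["expected_status"]

# run_as_service_test("https://api.example.com", form_test)
```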
Sessions and logging
• Sessions can be as short or as long as you like
• Testers typically work in 60-120 minute bursts
– 90 minutes is typical
• Every action in a session is logged against the session (locations selected or created, notes taken, tests created and run)
• The activity of all testers is logged and can be
scrutinised for audit purposes.
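A sketch of the kind of append-only session log implied here, with every action recorded against its session; the entry fields are assumptions.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class LogEntry:
    session_id: str
    action: str    # e.g. "select_location", "create_location", "take_note", "run_test"
    target: str    # the place/feature/form/field the action relates to
    detail: str = ""
    timestamp: str = ""

def log_action(log: list, session_id: str, action: str, target: str, detail: str = "") -> None:
    """Append an entry: every action in a session is recorded against that session."""
    log.append(LogEntry(session_id, action, target, detail,
                        timestamp=datetime.now(timezone.utc).isoformat()))

session_log = []
log_action(session_log, "session-001", "create_location", "checkout/payment/card details")
log_action(session_log, "session-001", "take_note", "card details",
           "what about 19-digit card numbers?")
print(json.dumps([asdict(entry) for entry in session_log], indent=2))  # audit-friendly record
```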
Running a Test
• When you run a test
– You are presented with the test details (navigation, input
values and expected outcomes)
– You can log a test status, the outcome (if it differs from the expected outcome) and an interpretation
– You can also log a screen shot or other file
• The test run log records the details above and tags the run with the app version/build defined in the charter
• You can (if required) run the same test multiple times
in the same session and against the same version.
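A sketch of what a test run record might capture, tagged with the version/build from the charter; field names are illustrative, not the tool's actual schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TestRun:
    """One execution of a test; field names are illustrative, not the tool's schema."""
    test_id: str
    app_version: str           # tagged with the version/build defined in the charter
    status: str                # e.g. "pass", "fail", "blocked"
    actual_outcome: str = ""   # recorded when it differs from the expected outcome
    interpretation: str = ""   # the tester's reading of what the outcome means
    attachment: Optional[str] = None  # path to a screen shot or other file

runs = [
    TestRun("card-details-invalid-number", "1.4.2", "fail",
            actual_outcome="form accepted a 13-digit card number",
            interpretation="length validation appears to be missing",
            attachment="screens/card-details-001.png"),
    # the same test run again in the same session, against the same version
    TestRun("card-details-invalid-number", "1.4.2", "pass"),
]
print(sum(1 for run in runs if run.status == "fail"), "failure(s) recorded")
```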
Challenges, opportunities
• Is it feasible to generate a high- or low-level test plan from Surveyor?
• Environments/platforms
• Integrations
– Chatbot – Slack etc.
– Internal IM
– Stories
– Test automation a la
BDD – both ways
• Visualisations
– Coverage
– Reporting
– Impact analysis
• Other benefits
– Instant replays
– Heat map, floor plan
– Tester paths
– Real time notifications
– Real time test(er)
management
New Model Testing:
A New Test Process and Tool
@paul_gerrard
Paul Gerrard
paul@gerrardconsulting.com
gerrardconsulting.com