Test Analysis & Design – good practices
Raluca Gagea
October 2013
Presented at TDT Iasi, 17 October 2013
Slide notes:
  • First of all, my name is Raluca Gagea and it is a pleasure to be here today and share the lessons I have learnt over time, most often by hitting my head against the wall. If you had time to glance over the first slide, I thought we would approach this topic a bit differently, focusing more on reality and less on the theoretical side, and especially on what we most often do wrong and how we can improve the activities carried out during this phase.
  • To better separate the theoretical part from the practical one, from the suggestions and good practices, and from the examples, I used a colour code, starting from the idea that regardless of the experience of each person in this room, I am confident there is at least one starting point for everyone to make things work better. That may happen by consolidating some theoretical knowledge for some, by understanding the practical side for others, or simply by confirming that there are others who consider good practices the things some of you already apply. So I hope each of you leaves with at least one thing you are comfortable with; from my point of view that is also the main objective of this session.
  • Looking at the fundamental test process, we know that the planning activities are the ones that kick off each development cycle of a product. This is also the moment when the testing objectives are identified, without which we cannot move on to the other phases. The testing objectives are in fact a prioritised list of objectives related to validating and verifying the product we are developing/testing. So we have objectives related to validating that our product supports exactly the business functionalities requested by the client and that it follows certain standards, but also objectives related to verifying that our product works correctly from the point of view of all the layers involved. We use this list of objectives to monitor and measure progress, and also to check whether the testing activities are consistent with the project objectives. This is also where the control activities start, which monitor progress and constantly compare the results obtained with what was planned. Next comes the analysis and design phase, in which the identified test objectives are transformed into tangible things, such as the testing structure that will support our activities. The output of this analysis and design phase is then used in the implementation and execution phases to actually start composing the tests and the test suites. I will not dwell on evaluating exit criteria, on reporting, or on the closure activities of the test process. The only thing I want to point out here is that as we make our bed, so we will sleep: a testing structure that is organised efficiently will help us during execution, when evaluating the exit criteria of one phase or another, in reporting, and so on. Today we will talk about analysis and design.
  • As I said earlier, the analysis and design phase means moving from objectives to tangible things. It means analysing everything that lands in our hands in order to extract the test information, and then designing the structure that will support our testing activities.
  • With the testing objectives in mind, I would like us to go together through a few vocabulary notions, for the simple reason of having the same level of understanding of the terms. It has often happened to me to talk with various people about the same subject but in different languages, so today I would prefer we use the same language. I have tried to introduce the terms as they appear in the testing process.
  • To make the practical side of things easier to look at, I took an example.
  • When we talk about requirements, we have to think of the several types of requirements we might have for a product we are about to implement / test. Starting with the functional ones, which are the most common and describe how the system should work in terms of business flows and functional flows, but not only. Non-functional requirements (when they are clearly specified) usually indicate various performance indicators the system has to reach; most of the time, though, they describe how the system must operate. For us, requirements also mean the architecture, structure and design details of the system that may impact us at some point. That means knowing which physical and logical layers our application has, how they impact the testing process, whether they fall under our responsibility, whether there are other components or systems our application has to integrate with, and so on. A very important thing is to delimit the scope of the testing process based on the identified objectives. Finding the scope actually means identifying beforehand the documents that are part of the test basis and then extracting the activities that will go through the testing process. Of course, we may have all these documents available while our responsibility is only to test the database. All these documents can go into our test basis if the testing activity depends on them; if the documents mean nothing to us, it is better not to include them, to keep things simple. If we deal with several documents, it is good to reference them precisely in the Test Strategy / Test Plan: their location, their version, and where each of them will be used. Along with identifying the test basis, especially when we deal with several documents, we should also raise the question of identifying the test oracle. This applies not only when we have several documents, but also when we have an old system that has to be transposed into a new one. We then ask ourselves: what is the expected result against which I should validate the system? If we have different values in the documents, which of them is the expected one? And so on.
  • I would like us to find the answer to this question together. We will have a test basis anywhere from a draft document to too many documents in which we get tangled. When can we define the testing objectives? It is best to have at least an idea about them before moving on to identifying the test objects, the test items and the related test conditions; it is, however, mandatory to know them when we start writing test cases. When do we identify a test suite? When do we identify the techniques for selecting test cases?
  • This is what I have learnt, in terms of process, for when I receive some requirements. Let's assume a few documents come in, in various states (draft, reviewed, approved, etc.). The first question I ask myself is whether those requirements can be tested. If yes, I start analysing them and happily add them to the test basis. What happens, though, if the requirements cannot be tested for various reasons…? I make sure to raise this problem and not throw them away. I continue analysing them, because at least at static-analysis level they can be tested. Then I keep them, maybe I can recycle them later, so I certainly do not throw them away.
  • We start from the fact that we have some requirements and we have identified a test oracle. At this point we can begin analysing them. One important thing to mention is that this is also when we start shaping the test strategy, be it requirements-based or risk-based, and the design of our test suites and, implicitly, of the test cases will differ accordingly. A few things to analyse that are closely tied to their design: the first is identifying the critical functionalities, critical from the business or the product point of view. For example, if we have a minor component on which all the major components depend, then that component becomes critical. The other case is when a component / functionality that is minor from our point of view is critical from the client's point of view. Once we identify these, we can think about what kind of test suites we will need, whether they have to be oriented towards end-to-end functionalities, towards test types such as smoke or regression, and so on. Once identified, they have to be prioritised. When we do that, we also have to consider situations in which the priority can change for various reasons, and we have to analyse the efforts involved. We will have an example later where we will see that the structure of the requirements documents inevitably influences the structure of the suites and even of the test cases. The second thing is identifying the types of testing we will need, starting from the analysis of the requirements. Now we also ask ourselves whether we will need to create separate suites depending on the test types involved, and this is also when we should think about whether we will need different styles of designing and writing the test cases. A third thing is identifying the automation of some test suites: whether we will need several such suites, whether we need to group them by functionality/component, etc., and of course what type of design and writing style we will use for those test cases. This is also a good moment to think about the requirements that will have to be covered by test cases and how we will be able to… These are a few examples, certainly not the only ones, of why we should think about the structure of our test suites from the very first interaction with the requirements, even if they are not in their final form.
  • We are now at the moment when we have some requirements, we asked ourselves the necessary questions at the beginning, and now we are practically trying to design the test cases so that we can then start writing them. The question of the test management tool in which we have to store them comes up. There is a series of criteria we have to analyse when we decide to use a certain tool and, especially, when we decide to use a certain structure for our suites and test cases. A first criterion is the delivery model. If we deal with an iterative project, we will want to reuse some test cases at some point, so versioning and the possibility of keeping those versions in the tool would help us; it would certainly also help to build a regression suite that we improve over the testing cycles. A second criterion is the test levels needed, and I mean unit or component, integration, system or acceptance. If we are responsible for all of them, we will certainly want to group the test cases by the test level they belong to, so that each testing phase is properly delimited. A third criterion can be the number of testing cycles. This helps us decide what our hierarchy of suites and test cases will look like, so that when the number of execution cycles is large, we can easily find information about the other cycles. Very importantly, it is good to obtain from the very beginning an estimate of the level of change in the system. Of course, this is also influenced by the delivery model. If we deal with a project with a high degree of change, we will have to think twice before creating 1000 test cases for things that could be organised differently, so that the relevant information does not disappear. A very important axis in this whole process is time. If we are in a time-boxed, perhaps underestimated, project, we have to be very careful when we decide on the testing structure in the test management tool. In such situations, an inefficient structure can have a negative impact on the whole team, which, instead of focusing on execution, wastes time on test case management. Last but not least, a very important criterion is risk. If we deal with a risk-based strategy, then we should group the test cases by priority and increase their level of detail.
  • A few examples of structures depending on the delivery model.
  • A few examples of structures depending on the delivery model.
  • A few examples of structures depending on the delivery model.
  • A few examples of structures depending on the structure of the system.
  • A few examples of structures depending on the structure of the system.
  • Test analysis & design good practices@TDT Iasi 17Oct2013

    1. Test Analysis & Design – good practices. Raluca Gagea, October 2013. … minor. From here to test analysis and test cases design is just a small step, but one that needs special attention, experience, intuition, creativity and many good practices. I won't "teach" you how to do it best; I'll share with you some tips that helped me during testing, and we'll try to cover: some vocab stuff we usually don't use or use wrongly; specific activities and their benefits; ways to measure progress and report the benefits; test case writing/designing styles – advantages and disadvantages.
    2. There is an A for everything. Theoretical side of things: definitions, vocab, ISTQB. Practical (realistic) side of things: activities and their benefits, good practices, things to remember, lessons to learn, examples.
    3. Fundamental Test Process: Planning, Analysis, Design, Implementation, Execution, Evaluating Exit Criteria and Reporting, and Closure, with Control running throughout.
    4. Test Analysis & Design. Test Analysis & Design is the activity where general testing objectives are transformed into tangible test conditions and test designs. Test Analysis: the process of looking at something that can be used to derive test information. Test Design: the process of identifying the associated high-level test cases for a test item.
    5. Test Analysis & Design Vocab (diagram). Driven by the Test Objectives: Test Basis documents (business use cases, scenarios, functional specifications, non-functional specifications, emails) → Test Objects (Test Object 1 … n) → Test Items (Test Item 1 … n) → test conditions (test condition 1 … n) → test case design techniques → test cases (test case 1 … n) → effective test cases (effective test case 1 … n), together with the input & output test data and the test suite.
    6. Test Analysis & Design Vocab – example. Test Object: a Subscription Form (User Name, Age, City, Postal Code, Submit). Test Items: the individual form fields. Test Conditions (input constraints): User Name must be between 6 and 12 characters long, must start with a letter and include only digits; Age must be a number greater than or equal to 18 and less than 65; City must be one of Ottawa, Toronto, Montreal or Halifax. (A sketch of these conditions as executable checks appears after this transcript.)
    7. Test Analysis & Design – Why? Review the test basis: examine the specifications, evaluate testability. Benefits: prevents defects from appearing in the code; by execution time, all the requirements are translated into testable items; only relevant documents are selected; gaps and ambiguities in the specifications are identified, because we are trying to identify precisely what happens at each point in the system.
    8. Test Analysis & Design – Why? Analyse the test items: identify test conditions. This gives us a high-level list of what we are interested in testing, and we can start identifying the type of generic test data we might need.
    9. Test Analysis & Design – Why? Design the tests: use test design techniques to produce test cases. The high-risk areas will be covered by tests before the actual execution phase starts.
    10. Test Analysis & Design – Why? Identify test data. At execution time, the test cases will be executed using data as close as possible to the data in production.
    11. Test Analysis & Design – Why? Design the environment set-up: identify any infrastructure and tools (Test Environment Availability). At execution time, everything we need to carry out our work is in place.
    12. Test Analysis & Design – Why? Create traceability between the Test Basis and the Test Cases. At every moment, we can calculate the requirements testing coverage.
    13. Requirements. A requirement is a singular documented physical and functional need that a particular product or service must be or perform. Types that feed the Test Basis and the Test Oracle: functional reqs, non-functional reqs, architectural reqs, design reqs, structural reqs, constraint reqs.
    14. When can we really have them all in place?
    15. Lessons I've Learnt – First Requirements (decision flow). Some documents / pieces of requirements are coming in: are they addressing reqs or additional info? Are they for me? Are they testable? Info: add it to the KT pack if relevant. Reqs: if testable (YES), add them to the Test Basis; if not (NO), perform static analysis and add them to the Test Basis if relevant.
    16. Lessons I've Learnt – First test cases "design". Analyze the test basis and the test oracle. Identify critical functionalities: which kind of test suites will be needed (e.g. smoke, regression, per functionality, per component)? How can we prioritize them, and what's the impact in terms of effort when changing the priority (review test cases, execute them, track them)? Identify the needed testing types: do we need separate test suites for each testing type (manual vs automation, functional vs non-functional, etc.)? Do we need different design and writing styles for these test cases? Identify automation needs: create separate test suites for various purposes and choose an appropriate design and writing style for the automated test cases. Identify ways to keep traceability.
    17. Lessons I've Learnt – Testing Structure. The testing structure is shaped by: the delivery model, the estimated level of change, the test levels, time, the number of testing cycles, and risk.
    18. Testing Structures – some examples. Traditional Waterfall: release cycles are typically several weeks to several months long, and usually have multiple phases of testing (Functional, System test, Performance, User Acceptance Test, etc.). Organize by Test Phase, or organize by Functionality or System and then by Phase.
    19. Testing Structures – some examples. Agile: products are released in shorter, frequent release cycles, each consisting of multiple Sprints in which one or more User Stories are targeted for development completion. Organize by Sprints and then by User Stories or functionality, when you have a large number of releases with shorter Sprints.
    20. Testing Structures – some examples. Agile: products are released in shorter, frequent release cycles, each consisting of multiple Sprints in which one or more User Stories are targeted for development completion. Organize by moving Sprints to project level as releases, when you have larger release cycles with many Sprints.
    21. Testing Structures – some examples. Testing of a System of Systems: large, complex platforms that may contain multiple systems or sub-systems, each with its own development and QA tracks; there may be a need to track testing progress on a per-system basis, followed by system-wide testing. Organize releases as projects, followed by system-level testing. This is useful when different teams are…
    22. Testing Structures – some examples. Testing of a System of Systems: large, complex platforms that may contain multiple systems or sub-systems, each with its own development and QA tracks; there may be a need to track testing progress on a per-system basis, followed by system-wide testing. Organize releases under the same project, followed by systems. This is useful when the platform has a larger number of frequent releases.
    23. Test Cases Design – some good practices. Why do we write test cases? Test cases are more than some sentences used to test various flows. They are our way to prove the level of confidence in what we deliver, by measuring the requirements coverage and their status at every point in the development process (if the requirements are covered by…).
    24. Test Cases Design – some good practices. Write Test Cases before the implementation of the requirements. Write Test Cases for all the requirements. Test Cases should map precisely to the requirements and not be an enhancement to the requirement.
    25. Test Cases Design – some good practices. Use the same naming convention for all the test cases in a project. Create unique names for your test cases (use "TC" + identifier + title). Use the "Action – Target – Scenario" method to formulate the title.
    26. Test Cases Design – some good practices. Test Case Title = Action – Target – Scenario. Action: a verb that describes what you are doing (create, delete, ensure, edit, open, populate, login). Target: the focus of your test (screen, object, entity, program). Scenario: the rest of what your test is about, and how you distinguish multiple test cases for the same Action and Target.
    27. Test Cases Design – some good practices. Action – Target – Scenario examples: Create – Task – title is not supplied; Create – Task – title is the maximum allowable length. (A sketch of a helper that builds such titles appears after this transcript.)
    28. Test Cases Design – some good practices. Write a detailed description of every step of execution. Define one single action per execution step. Write clear and precise expected results.
    29. Test Cases Design – some good practices. The Expected Result states: "Verify if error message is displayed." Issue: while executing the Test Case, what if the error message says "Please provide postcode", while it should say "Your postcode is invalid"? Solution: the Expected Result states: "Verify that the error message about an invalid postcode is displayed." (A sketch contrasting the weak and the precise check appears after this transcript.)
    30. Test Cases Design – some good practices. Each Test Case checks only one testing idea, but two or more expected results are totally acceptable if there is a need to perform several verifications for that testing idea. Testing idea: "payment can be performed by MasterCard credit card." Expected results: 1) in the DB, cc_transaction table, MasterCard column, value 1 is registered; 2) the credit card balance is reduced by the amount equal to the amount of the payment. (A sketch appears after this transcript.)
    31. Test Cases Design – some good practices. Expected results should meet the test case purpose. Additional steps should be specified separately.
    32. Test Cases Design – some good practices. Example:
       TC01.01 – Verify User is able to create keyword (wrong).
       Execution Steps: 1. Login as Customer. 2. Navigate to Create Keyword page. 3. Complete all keyword fields with valid data; submit data. 4. Navigate to Keywords List page; verify that the created keyword is present in the keywords list.
       Expected Results: 1. Successful login; Customer page is displayed. 2. Create Keyword page is displayed. 3. Info message is displayed that the keyword is successfully created. 4. Newly created keyword is displayed in the keywords list.
       Wrong – the Login and Navigate steps are not required, as the purpose of the test is to verify that the user is able to successfully create keywords; login and the display of the Keywords list page should be verified in separate Test Cases.
       TC01.01 – Verify User is able to create keyword (better).
       Execution Steps: 1. Complete keyword fields with valid data; submit data. 2. Navigate to Keywords list page.
       Expected Results: 1. Info message is displayed that the keyword is successfully created. 2. Newly created keyword is displayed in the keywords list.
       (A sketch of pushing preconditions into a fixture appears after this transcript.)
    33. Test Cases Design – important attributes:
       Effective – have a high probability of detecting defects.
       Self cleaning – return the system under test / test environment to a clean state after execution.
       Repeatable – the result of the test execution is always the same, no matter how many times it has been executed.
       Traceable – each test case can be traced back unambiguously to a particular requirement / use case / feature under test.
       Nonredundant – low redundancy; the same functionality / test steps are not repeated in different test cases.
       Clear structured and well defined flow of events – neither too simple nor too complex; short rather than lengthy; limit to 15 execution steps.
       Complete – cover all the features/functionalities that have to be tested, both positive and negative scenarios.
       Clear – written in simple language, so that any person can understand the scope of each test case; no spelling mistakes.
       Detailed – contain detailed steps and expected results; no missing steps; exact expected results.
       Accurate, Evolvable, Practical and maintainable, Short and simple language.
    34. Test Cases: Cascading vs Independent design. Cascading style: Test Cases build on each other; they are simpler and smaller; the output of one Test Case becomes the input of the next Test Case; arranging Test Cases in the right order saves time during test execution. Independent style: each Test Case is self-contained and does not rely on any other Test Cases; any number of Test Cases can be executed in any order; they are larger and more complex; harder to… (A sketch contrasting the two styles appears after this transcript.)
    35. Test Cases: High-Level vs Low-Level writing style. High-level style: Test Cases defining what to test in general terms, without specific values for input data and expected results; less time to write; greater flexibility in execution. Low-level style: Test Cases with specific values defined for both input and expected results; repetitive; can be executed even by a tester who is just learning the application; easier to…
    36. Test Cases Design – some good practices. Test cases must evolve during the entire software development lifecycle.
    37. Test Cases Design – some good practices. Due to changes in requirements, design or implementation, test cases often become obsolete and out-of-date. Given the pressure of having to complete the testing, testers continue their tasks without ever revisiting the test cases. The problem is that if the test cases become outdated, the initial work of creating these tests is wasted, and additional manual tests executed without having a test case in place cannot be repeated. As requirements change, the testers must adjust test cases accordingly. Test cases must be modified to accommodate the additional information obtained from other phases. Each test case modified upon a change request should have in its description a record that describes the change (email, meeting minutes, use case ID). As defects are found and corrected, test cases must be updated.
    38. Test Analysis & Design – Metrics & Measurements. What cannot be measured cannot be managed. Percentage of requirements or quality (product) risks covered by test conditions; percentage of test conditions covered by test cases; number of defects found during test analysis and design. (A sketch of a simple coverage calculation appears after this transcript.)
    39. Thanks for attending this session! Questions? Thoughts? Debates?
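
Example sketches (Python). First, the slide-6 input constraints expressed as executable checks: a minimal pytest sketch, assuming a hypothetical validate_subscription() stand-in for the real form logic. The function name and the sample values are not from the presentation; only the constraints quoted on the slide are encoded, read literally (a leading letter followed by digits).

    import re

    import pytest

    CITIES = {"Ottawa", "Toronto", "Montreal", "Halifax"}

    def validate_subscription(user_name: str, age: int, city: str) -> bool:
        # Hypothetical stand-in for the Subscription Form logic; it encodes only
        # the slide-6 test conditions: 6-12 characters, a letter followed by
        # digits, 18 <= age < 65, city in the allowed list.
        name_ok = bool(re.fullmatch(r"[A-Za-z][0-9]{5,11}", user_name))
        return name_ok and 18 <= age < 65 and city in CITIES

    @pytest.mark.parametrize(
        "user_name, age, city, expected",
        [
            ("A12345", 18, "Toronto", True),    # lower age boundary
            ("A12345", 64, "Halifax", True),    # highest valid age
            ("A12345", 65, "Montreal", False),  # age must be less than 65
            ("A1234", 30, "Ottawa", False),     # user name shorter than 6 characters
            ("123456", 30, "Ottawa", False),    # user name must start with a letter
            ("A12345", 30, "Iasi", False),      # city outside the allowed list
        ],
    )
    def test_subscription_conditions(user_name, age, city, expected):
        assert validate_subscription(user_name, age, city) is expected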
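
The slide-25/26 naming convention ("TC" + identifier + title, with the title built as Action – Target – Scenario) can be captured in a tiny helper. This is only a sketch; the function name and the identifier format are assumptions, not part of the presentation.

    def test_case_name(identifier: str, action: str, target: str, scenario: str) -> str:
        # Builds names such as "TC01.01 Create - Task - title is not supplied".
        return f"TC{identifier} {action} - {target} - {scenario}"

    if __name__ == "__main__":
        # The two slide-27 examples, rendered with the helper.
        print(test_case_name("01.01", "Create", "Task", "title is not supplied"))
        print(test_case_name("01.02", "Create", "Task", "title is the maximum allowable length"))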
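
The slide-29 point about precise expected results, shown as two assertions. get_error_message() is a hypothetical stand-in for the system under test; only the two messages quoted on the slide are reused.

    def get_error_message(postcode: str) -> str:
        # Hypothetical stand-in: returns the validation message for a postcode.
        if not postcode:
            return "Please provide postcode"
        if not postcode.isdigit():
            return "Your postcode is invalid"
        return ""

    def test_error_message_weak():
        # "Verify if error message is displayed" - passes even if the wrong message shows up.
        assert get_error_message("ABC") != ""

    def test_error_message_precise():
        # "Verify that the error message about an invalid postcode is displayed."
        assert get_error_message("ABC") == "Your postcode is invalid"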
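
The slide-30 MasterCard example as one test with two expected results. FakeCardSystem, its fields and the amounts are invented for illustration; only the two verifications named on the slide are checked.

    from dataclasses import dataclass, field

    @dataclass
    class FakeCardSystem:
        # Invented stand-in for the payment system and its cc_transaction table.
        balance: float
        cc_transaction: dict = field(default_factory=dict)

        def pay_with_mastercard(self, amount: float) -> None:
            self.cc_transaction["MasterCard"] = 1
            self.balance -= amount

    def test_payment_by_mastercard():
        # One testing idea: "payment can be performed by MasterCard credit card."
        system = FakeCardSystem(balance=100.0)
        system.pay_with_mastercard(40.0)
        # Expected result 1: value 1 registered in the MasterCard column.
        assert system.cc_transaction["MasterCard"] == 1
        # Expected result 2: balance reduced by the amount of the payment.
        assert system.balance == 60.0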
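
The slide-32 advice (keep Login/Navigate out of the keyword-creation check) sketched with a pytest fixture. KeywordApp and its methods are hypothetical placeholders for the application under test.

    import pytest

    class KeywordApp:
        # Hypothetical in-memory stand-in for the application under test.
        def __init__(self):
            self.keywords = []
            self.logged_in = False

        def login(self, user: str) -> None:
            self.logged_in = True

        def create_keyword(self, keyword: str) -> str:
            self.keywords.append(keyword)
            return "keyword successfully created"

    @pytest.fixture
    def keyword_page():
        # Precondition handled outside the test; login itself is verified in its own test case.
        app = KeywordApp()
        app.login("Customer")
        return app

    def test_create_keyword(keyword_page):
        # TC01.01 Create - Keyword - valid data: only the creation is verified here.
        message = keyword_page.create_keyword("discount")
        assert message == "keyword successfully created"
        assert "discount" in keyword_page.keywords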
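
The slide-34 cascading vs independent contrast, with an invented in-memory task list. In the cascading pair the second test relies on the first having run (and on file order); the independent test sets up everything it needs itself.

    tasks = []  # shared state reused by the cascading pair below

    def test_create_task_cascading():
        tasks.append("Task A")                  # output handed to the next test case
        assert tasks == ["Task A"]

    def test_edit_task_cascading():
        tasks[0] = "Task A (edited)"            # relies on test_create_task_cascading
        assert tasks[0] == "Task A (edited)"

    def test_edit_task_independent():
        own_tasks = ["Task A"]                  # self-contained set-up, runs in any order
        own_tasks[0] = "Task A (edited)"
        assert own_tasks[0] == "Task A (edited)"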
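
The slide-12/38 traceability metric (percentage of requirements covered by test cases), as a small sketch. The requirement and test case IDs are invented.

    def requirements_coverage(requirements, traceability):
        # traceability maps a requirement ID to the test case IDs that cover it.
        covered = sum(1 for req in requirements if traceability.get(req))
        return 100.0 * covered / len(requirements)

    if __name__ == "__main__":
        reqs = ["REQ-1", "REQ-2", "REQ-3", "REQ-4"]
        trace = {"REQ-1": ["TC01.01"], "REQ-2": ["TC01.02", "TC02.01"], "REQ-3": []}
        print(f"{requirements_coverage(reqs, trace):.0f}% of requirements covered")  # prints 50%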
