Software :
A software is a set of programs that take inputs and provide outputs. There are two types of software:
1) Software Application : software developed for a specific customer's requirements.
2) Software Product : software developed depending on overall requirements in the market. Interested customers purchase licenses of a software product.

Software Bidding :
A proposal to develop new software is called Software Bidding. In software application development, the proposal comes from a specific customer. In product development, our organization raises its own proposal.

Kick-off Meeting :
A CEO-category person conducts a meeting with high-level management and selects a Project Manager to handle the new software development process.

PIN (Project Initiation Note) Document :
The selected Project Manager (PM) prepares this document to estimate the required people, technologies, time and resources. He/she submits the report to the CEO. The CEO conducts a review to give the green signal to the Project Manager.

SDLC (Software Development Life Cycle) : (Waterfall Model)
Requirements Gathering
↓
Analysis & Planning
↓
Designing
↓
Coding
↓
Testing
↓
Release & Maintenance
In the above SDLC process, only a single stage of testing is available, and that testing is conducted by the developers. For this reason, organizations concentrate on multiple stages of testing and separate testing teams to achieve quality.

Software Quality :
→ Meets customer requirements (Functionality)
→ Meets customer expectations (Usability, Performance)
→ Cost to purchase license
→ Time to release

Software Quality Assurance (SQA) :
Monitoring and measuring the strength of the development process is called Software Quality Assurance / Verification.

Software Quality Control (SQC) :
The validation of the product with respect to customer requirements is called Software Quality Control / Validation / Testing.

"V" Model :
'V' stands for Verification & Validation. This model defines the development process together with the testing stages. It is an extension of the SDLC model.

Verification (left arm)               Validation (right arm)
Requirements Gathering & Review  ...  User Acceptance Testing
Analysis & Planning with Review  ...  System Testing
High Level Design & Review       ...  Integration Testing (Programs Testing)
Low Level Design & Review        ...  Unit Testing (Program Testing)
                    Coding (base of the 'V')
In the above 'V' Model, the reviews are called verification methods and the testing levels are called validations. In small and medium scale organizations, management maintains a separate testing team for System Testing only, to decrease project cost, because System Testing is the bottleneck stage in the software development process.

I) Reviews in Analysis :
In general, the software development process starts with requirements gathering: from a specific customer in application development, and from model customers in product development. After gathering requirements, the responsible Business Analyst prepares the BRS (Business Requirements Specification) document. This document is also known as the User Requirements Specification or Customer Requirements Specification. The Business Analyst then sits with the Project Manager to develop the SRS and the Project Plan. The Software Requirements Specification consists of the functional requirements to be developed and the system requirements to be used.

Example :
BRS (What?) — Functional Requirement : 2 inputs, 1 output, '+' is the addition operation
SRS (How?) — System Requirement : 'C' language

After completing the BRS & SRS preparation, the corresponding Business Analyst conducts a review to estimate the completeness and correctness of those documents:
→ Are they correct requirements?
→ Are they complete requirements?
→ Are they achievable requirements?
→ Are they reasonable (time) requirements?
→ Are they testable requirements?
II) Reviews in Design :
After completion of successful analysis and review, the design-category people prepare the HLD and LLDs (High Level Design & Low Level Designs). The High Level Design specifies the overall architecture of the software. It is also known as the System Design or Architectural Design.

Example (root-to-leaf module tree) :
LOGIN → Mailing / Chatting → LOGOUT

The internal structure of every functionality or module is specified by the Low Level Design documents. These are also known as Structural Designs or Component Designs.

Example (LOGIN component flow) :
User enters User ID & Password → LOGIN checks the database → if invalid, re-login; if valid, next window

The HLD is a system-level design and an LLD is a component- or module-level design, so one software design consists of one HLD and multiple LLDs. The corresponding designers conduct a review on these documents for completeness and correctness:
→ Are they understandable designs?
→ Are they correct designs?
→ Are they complete designs?
→ Are they followable designs?
III) Unit Testing :
After completion of successful designs and reviews, the corresponding programmers start coding to construct the software physically. In this stage the programmers write programs and test each program using White Box / Glass Box / Open Box testing techniques:
→ Basic Paths Coverage
→ Control Structure Coverage
→ Program Technique Coverage
→ Mutation Coverage

(A) Basic Paths Coverage :
Programmers use this technique to estimate the execution of a program. In this technique, the programmer executes a program more than once to cover all areas of that program in execution.

(B) Control Structure Coverage :
After successful basic paths coverage, the corresponding programmer concentrates on the correctness of that program's execution in terms of inputs, process and outputs.

(C) Program Technique Coverage :
After successful basic paths & control structure coverage, the corresponding programmer calculates the execution speed of the program. If the execution speed is not acceptable, the programmer performs changes in the program structure without disturbing the functionality. In this coverage, programmers use third-party software such as monitors and profilers to calculate the execution speed of the program.
Note :
Monitors are used in VB.NET.
Profilers are used in Java.
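Below is a minimal sketch of (A) basic paths coverage, using a hypothetical Java method (discount() and its values are illustrative, not from the material): the method has two independent paths through its if/else, so at least two executions are needed to cover all areas in execution.

// BasicPathDemo.java -- hypothetical example, not from the original text.
// discount() has two independent paths (amount >= 1000 and amount < 1000),
// so at least two executions are needed to cover both.
public class BasicPathDemo {

    static double discount(double amount) {
        if (amount >= 1000) {        // Path 1: large order
            return amount * 0.90;    // 10% discount
        } else {                     // Path 2: small order
            return amount;           // no discount
        }
    }

    public static void main(String[] args) {
        // One execution per basic path:
        System.out.println(discount(1500)); // covers Path 1 -> 1350.0
        System.out.println(discount(500));  // covers Path 2 -> 500.0
    }
}

Control structure coverage would then check each execution's inputs, process and outputs for correctness, e.g. that discount(1500) really returns 1350.0.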
(D) Mutation Coverage :
Mutation means a change in a program. Programmers perform changes in programs to estimate the completeness and correctness of that program's testing:
→ Test passed → change made → test passed again : incomplete testing (the tests did not notice the change)
→ Test passed → change made → test failed : complete testing (the tests caught the change)

Basic Paths Coverage, Control Structure Coverage and Program Technique Coverage are applied on a program to test it. Mutation Coverage is applied on the program's testing, to estimate the completeness and correctness of that testing.

IV) Integration Testing :
After completion of the dependent programs' development and unit testing, the programmers interconnect them to form a complete system / software. This testing is also known as Interface Testing. There are four approaches to integrate programs and test them.

A) Top-Down Approach :-
In this approach, the programmers interconnect the main program and some of the sub-programs. In place of the remaining sub-programs (still under construction), the programmers use temporary programs called "stubs".

    Main
      ├─ Sub1
      ├─ Sub2
      └─ STUB (in place of a sub-program under construction)
B) Bottom-Up Approach :-
In this approach, the programmers interconnect the sub-programs without coming from the main program (still under construction). A temporary calling program, called a "driver", is used in its place.

    Driver (in place of Main, under construction)
      ├─ Sub1
      └─ Sub2

C) Hybrid Approach :-
It is a combined approach of the Top-Down & Bottom-Up approaches, using both stubs and drivers as needed. It is also known as the Sandwich Approach.

    Main
      └─ Driver (under construction)
           ├─ Sub1
           └─ Sub2 / Sub3

D) System Approach :-
The integration of programs after completion of 100% of the coding is called the System Approach or Big Bang Approach.
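A minimal sketch of the stub and driver ideas from the approaches above, with hypothetical Java names (add(), multiplyStub() and mainProgram() are illustrative only): a stub temporarily stands in for an unfinished sub-program during top-down integration, and a driver temporarily calls a finished sub-program during bottom-up integration.

// StubDriverDemo.java -- hypothetical sketch, names are illustrative.
public class StubDriverDemo {

    // Finished sub-program.
    static int add(int a, int b) { return a + b; }

    // Stub: temporary stand-in for a sub-program still under construction,
    // so the main program can be integration-tested top-down.
    static int multiplyStub(int a, int b) {
        return 0; // dummy value until the real multiply() is coded
    }

    // Main program under test, wired to the finished sub-program and the stub.
    static int mainProgram(int a, int b) {
        return add(a, b) + multiplyStub(a, b);
    }

    public static void main(String[] args) {
        // Driver: temporary caller used bottom-up to test a finished
        // sub-program before the real main program exists.
        System.out.println("driver calls add(2,3): " + add(2, 3));

        // Top-down check of the main program through the stub.
        System.out.println("mainProgram(2,3): " + mainProgram(2, 3));
    }
}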
V) System Testing :
After completion of successful integration testing, the development team releases a software build to the separate testing team in our organization. System Testing is classified into three sub-stages:
1. Usability Testing
2. Functional Testing
3. Non-Functional Testing

1. Usability Testing :
In general, test execution starts with usability testing. During this test, the testing team concentrates on the "user friendliness of the software build". There are two sub-levels in usability testing.
a) User Interface Testing :
→ Ease of use (understandable screens)
→ Look & feel (attractive screens)
→ Speed in interface (short navigations in screens)
b) Manuals Support Testing :
In this test, the testing team verifies the Help of that software.

Case Study :
Receive the S/w build from developers after integration testing
↓
User Interface Testing (Usability Testing)
↓
Functional Testing
↓
Non-Functional Testing
↓
Manuals Support Testing (Usability Testing)
2. Functional Testing :
It is a mandatory testing level in System Testing. During this test, the testing team concentrates on the correctness of customer requirements in the S/w build. This testing is classified into the sub-tests below.
a) Control Flow Testing :- The changes in the properties of objects in an application / S/w build with respect to mouse and keyboard operations.
b) Error Handling Testing :- The prevention of wrong operations with meaningful error messages.
c) Input Domain Coverage :- Whether our S/w build is taking valid types and sizes of inputs or not.
d) Manipulations Coverage :- Whether our S/w build is providing the customer-expected outputs or not.
e) Database Testing :- The impact of front-end screen operations on the back-end database content.
f) Sanitation Testing :- Finding extra functionality with respect to the customer requirements.

Case Study :-
Software Build = Screens (front end) + Database (back end)
Front end : Control Flow, Error Handling, Input Domain, Manipulations, Sanitation
Back end : Database Testing
Together these form Functional / Black Box Testing.
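As a hedged illustration of sub-tests (b) Error Handling and (c) Input Domain Coverage, the sketch below uses a hypothetical submitQuantity() method (the 1-10 range and the messages are assumptions, not from the material) and feeds it one representative input per class: valid, out of range, wrong type and blank.

// ErrorHandlingDemo.java -- hypothetical sketch of Error Handling and
// Input Domain checks on a quantity field (valid range assumed 1..10).
public class ErrorHandlingDemo {

    // Hypothetical validation under test: rejects wrong type/size of input
    // with a meaningful message instead of failing silently.
    static String submitQuantity(String input) {
        int qty;
        try {
            qty = Integer.parseInt(input);
        } catch (NumberFormatException e) {
            return "Error: quantity must be a number";
        }
        if (qty < 1 || qty > 10) {
            return "Error: quantity must be between 1 and 10";
        }
        return "Accepted: " + qty;
    }

    public static void main(String[] args) {
        // Input domain coverage: valid value, out-of-range value,
        // wrong type, blank field.
        String[] inputs = { "5", "11", "abc", "" };
        for (String in : inputs) {
            System.out.println("'" + in + "' -> " + submitQuantity(in));
        }
    }
}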
3. Non-Functional Testing :
It is an optional level in System Testing. This level is expensive and complex to conduct. During this test, the testing team concentrates on the extra characteristics of the software.
a) Reliability Testing :- Also known as Recovery Testing. During this test, the testing team validates whether our S/w build changes from an abnormal state back to the normal state or not.
b) Compatibility Testing :- Also known as Portability Testing. During this test, the testing team concentrates on whether our S/w build runs on the customer-expected platforms or not. Platform means operating system, browser, compilers and other system software.
c) Configuration Testing :- Also known as Hardware Compatibility Testing. During this test, the testing team concentrates on whether our S/w build supports hardware devices of different technologies or not. Ex :- different technology printers, networks, etc.
d) Inter-System Testing :- Also known as End-to-End Testing or Interoperability Testing. During this test, the testing team concentrates on whether our S/w build co-exists with other software applications, sharing common resources, or not.

Case Study :-
Compatibility Testing : S/w Build → Operating System
Configuration Testing : S/w Build → H/w Device (Ex : printers)
Inter-System Testing : S/w Build → Other S/w Build
e) Data Volume Testing :-
During this test, the testing team inserts model data into our application build to estimate the peak limit of data it can manage.
Ex : M.S. Access technology software can manage about a 2 GB database, SQL Server about 6-7 GB, and Oracle technology about 10-12 GB as a maximum.

f) Installation Testing :-
The S/w build is installed on a customer-expected configuration system (customer-expected size of RAM, HDD, processor, OS, supporting software, etc.), and the testing team verifies:
→ Setup program execution to start the installation
→ Easy interface during installation
→ Occupied disk space after installation

g) Load Testing :-
Load means the number of concurrent users using our S/w build at a time. During this test, the testing team executes our S/w build under the customer-expected configuration and customer-expected load, to estimate the speed of processing (performance). (A minimal sketch appears after item (i) below.)

Client 1, Client 2, ... Client N → Server (S/w build process)

h) Stress Testing :-
The execution of our S/w build under the customer-expected configuration and more than the customer-expected load, to estimate the peak limit of load, is called Stress Testing.

i) Endurance Testing :-
The execution of our S/w build under the customer-expected configuration and customer-expected load, to estimate continuity in processing, is called Endurance Testing.
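The load-testing sketch referenced under (g): N threads act as concurrent clients against a stand-in serverRequest() (the 20-user load and the 50 ms of work per request are assumptions); a real load test would drive the actual build or server instead.

// LoadTestDemo.java -- hypothetical sketch of load testing: N concurrent
// "clients" hit a shared operation and the total processing time is measured.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class LoadTestDemo {

    // Stand-in for a server request (assumption: ~50 ms of work).
    static void serverRequest() throws InterruptedException {
        Thread.sleep(50);
    }

    public static void main(String[] args) throws InterruptedException {
        int concurrentUsers = 20; // customer-expected load (assumed)
        ExecutorService pool = Executors.newFixedThreadPool(concurrentUsers);

        long start = System.currentTimeMillis();
        for (int i = 0; i < concurrentUsers; i++) {
            pool.submit(() -> {
                try { serverRequest(); } catch (InterruptedException ignored) { }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);

        long elapsed = System.currentTimeMillis() - start;
        System.out.println(concurrentUsers + " concurrent users served in "
                + elapsed + " ms");
    }
}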
j) Security Testing :-
It is also known as Penetration Testing. During this test, the testing team concentrates on three factors.
Authorization : The S/w build allows valid users and prevents invalid users. Ex : login with password, PIN, digital signatures, fingerprints, eye retina, scratch cards, etc.
Access Control : The permissions of valid users to access functionality in the build. Ex : Admin vs. User.
Encryption / Decryption : The code conversion in between the client and server processes. The client's request is encrypted into cipher text, travels to the server, and is decrypted there; the response travels back the same way. (A sketch of this factor appears at the end of this topic.)

k) Localization and Internationalization Testing :-
This testing is applicable to multi-language software. This type of software allows characters of multiple user languages, e.g. English, Spanish, French, etc. In localization testing, the Test Engineer provides multiple languages' characters as inputs to the S/w build. In internationalization testing, the Test Engineer provides common-language (English) characters as input; in this scenario, third-party tools translate the common-language characters into the other languages' characters.
Note : Java Unicode is a better technology to develop multi-language software.

l) Parallel Testing :-
It is also known as Competitive / Comparative Testing. During this test, the testing team compares our S/w build with an old version of the same software, or with a similar product in the market, to estimate competitiveness.
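The sketch referenced under the Encryption / Decryption factor above: it uses the standard javax.crypto AES API to turn a plain-text request into cipher text and back, the conversion a security test would expect to see between client and server. Key handling is deliberately simplified for illustration.

// EncryptionDemo.java -- minimal sketch of encryption/decryption between
// client and server; the payload is illustrative and key handling is
// simplified (a real system would manage and exchange keys securely).
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.util.Base64;

public class EncryptionDemo {
    public static void main(String[] args) throws Exception {
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();

        // Client side: encrypt the request into cipher text.
        Cipher enc = Cipher.getInstance("AES");
        enc.init(Cipher.ENCRYPT_MODE, key);
        byte[] cipherText = enc.doFinal("account=1234".getBytes("UTF-8"));
        System.out.println("Cipher text: "
                + Base64.getEncoder().encodeToString(cipherText));

        // Server side: decrypt back to the original request.
        Cipher dec = Cipher.getInstance("AES");
        dec.init(Cipher.DECRYPT_MODE, key);
        System.out.println("Decrypted  : "
                + new String(dec.doFinal(cipherText), "UTF-8"));
    }
}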
VI) User Acceptance Testing :
After completion of successful System Testing, the Project Manager concentrates on UAT to collect feedback from real customers or model customers. There are two ways in User Acceptance Testing.

α Alpha Testing :
→ For S/w applications
→ By real customers, with the involvement of developers and testers
→ In the development site

β Beta Testing :
→ For S/w products
→ By model customers
→ In the model customer's site

VII) Release Testing :
After completion of UAT and the resulting modifications, the Project Manager forms a Release Team or On-Site Team to release the application to the real customer, or to release the product to the license-purchased customers. This release team or on-site team consists of a few programmers, a few testers and a few hardware engineers, with a team lead. This team observes the factors below at the customer site:
1) Complete installation
2) Overall functionality
3) Input device handling (keyboard, mouse, etc.)
4) Output device handling (monitor, printer, etc.)
5) Secondary storage device handling (floppy, pen drive, etc.)
6) OS error handling
7) Co-existence with other software at the customer site
Checking the above factors at the customer site is also known as Port Testing / Deployment Testing. After a successful release, the release team conducts training sessions for the customer-site people and then comes back to our organization.
VIII) Maintenance :
During utilization of the software, the customer-site people send Software Change Requests (SCRs) to our organization. These requests are received by a special team in our organization called the Change Control Board (CCB). This team consists of a few programmers, a few testers and a few hardware engineers, along with the Project Manager.

S/w Change Request (handled by the CCB) :
→ Enhancement : impact analysis → perform S/w changes → test S/w changes
→ Missed Defect : impact analysis → perform S/w changes → test S/w changes → improve the testing process & people's capability

Case Study :-
Testing Stage             | Deliverable to be Tested       | Responsibility                    | Testing Techniques
Reviews in Analysis       | BRS & SRS                      | BA                                | Walkthrough, Inspections & Peer Reviews
Reviews in Design         | HLD & LLDs                     | Designers                         | Walkthrough, Inspections & Peer Reviews
Unit Testing              | Programs                       | Programmers                       | White Box Testing techniques
Integration Testing       | Interfaces in between programs | Programmers                       | Top-Down, Bottom-Up, Hybrid, System
System Testing            | S/w Build                      | Test Engineers / Quality Control Engineers | Usability, Functional / Black Box, Non-Functional Testing
User Acceptance Testing   | S/w Build                      | Real customers / model customers  | α-Testing, β-Testing
Release Testing           | S/w Build                      | Release Team                      | S/w release factors (7 factors in VII)
Maintenance-Level Testing | S/w Changes                    | CCB                               | Regression Testing
Walkthrough :- Studying a document to estimate its completeness and correctness.
Inspection :- Searching for issues in a document is called Inspection.
Peer Review :- Comparing the document with another, similar document.

Challenges in Software Testing :
In general, every testing team plans formal testing. Due to some challenges in testing, the testing teams resort to ad-hoc or informal testing. There are five styles of ad-hoc testing.
a) Monkey / Chimpanzee Testing :- Due to lack of time, the testing team conducts testing on the main activities of the software only. This style of testing is called Monkey Testing.
b) Buddy Testing :- Due to lack of time, project management combines one programmer and one tester as a "buddy". These teams conduct development & testing in parallel.
c) Exploratory Testing :- It is also known as Artistic Testing. Due to lack of documentation, the Test Engineers depend on past experience, discussions with others, video conferences with customer-site people, Internet browsing and surfing similar software to understand the customer requirements. This style of testing is called Exploratory Testing.
d) Pair Testing :- Due to lack of knowledge, senior Test Engineers are grouped with junior Test Engineers to share their knowledge. This style of testing is called Pair Testing.
e) Bebugging :- To estimate the effectiveness of the Test Engineers, the development people add defects to the code. This informal way is called Bebugging or Defect Feeding / Seeding.
System Testing Process :
Test Initiation → Test Planning → Test Design → Test Execution → Test Closure
(Test Reporting runs alongside Test Execution.)

Development Vs System Testing :
Development track :
S/w Bidding → Kick-off Meeting → PIN Document → Requirements Gathering (BRS) → Analysis & Planning (SRS & Project Plan) → S/w Design & Review (HLD, LLDs) → Coding → Unit Testing (White Box techniques) → Integration → Integration Testing → Initial Build → ... → User Acceptance Testing → Release & Maintenance
Testing track (in parallel, from design onwards) :
System Test Initiation → System Test Planning → Test Design → System Test Execution (with Test Reporting) → System Test Closure
I) System Test Initiation :
In general, the System Testing process starts with System Test Initiation by the Project Manager or Test Manager. They develop the Test Strategy or Test Methodology document. This document defines the reasonable tests to be applied in the current project.

SRS (input) → Test Initiation by the Project Manager / Test Manager → Test Strategy (output)

Components in the Test Strategy :
The Test Strategy document consists of the components below to define the test approach to be followed by the team in the current project.
1. Scope & Objective :- The purpose of testing in the current project.
2. Business Issues :- The budget allocation for testing in the current project.
Ex : of 100% project cost, 64% goes to development & maintenance and 36% to system testing.
3. Roles & Responsibilities :- The names of the jobs in the testing team and the responsibility of each job in the current project.
4. Communication & Status Reporting :- The required negotiations in between the various jobs in the testing team.
5. Test Responsibility Matrix (TRM) :- The list of reasonable tests to be applied in the current project.
Ex.
Testing Topic         | Yes/No | Comment
UI Testing            | Yes    | -
Manuals Testing       | Yes    | -
Functional Testing    | Yes    | -
Load Testing          | No     | Lack of resources
Stress Testing        | No     | Lack of resources
Endurance Testing     | No     | Lack of resources
Compatibility Testing | Yes    | -
Inter-System Testing  | No     | No need with respect to requirements
...etc.               | ...    | ...

6. Test Automation & Testing Tools :- The purpose of automation testing in the current project and the testing tools available in our organization.
7. Defect Reporting & Tracking :- The required negotiation in between the testing team and the development team to report and solve defects.
8. Change & Configuration Management :- The maintenance of deliverables in testing for future reference.
9. Risks & Assumptions :- The expected list of risks and the solutions to overcome them.
10. Testing Measurements & Metrics :- The list of measurements and metrics to estimate the test status.
11. Training Plan :- The required number of training sessions for the testing team to understand the customer requirements.
II) Test Planning :
After completion of the Test Strategy document preparation, the Test Lead-category people concentrate on Test Plan document preparation.

Inputs : SRS, HLD & LLDs, Project Plan, Test Strategy
Steps : Testing Team Formation → Identify Risks → Prepare Detailed Test Plans → Review Plans → Test Plans (output)

Testing Team Formation :
In general, Test Planning starts with testing team formation. In this stage, the Test Lead depends on the factors below:
→ Project size (number of function points)
→ Number of testers available on the bench
→ Test duration with respect to the project plan
→ Available test environment resources (Ex. testing tools, ...)

Case Study :
Type of Project                   | Developers : Testers
ERP, Client/Server, Website       | 3 : 1
System S/w Application            | 1 : 1
Machine Critical                  | 1 : 7

Identify Risks :
After completion of testing team formation, the Test Lead concentrates on team-level risk analysis.
Ex :-
Risk 1 : Lack of time
Risk 2 : Lack of resources
Risk 3 : Lack of documentation
Risk 4 : Delays in delivery
Risk 5 : Lack of development process seriousness
Risk 6 : Lack of communication
Prepare Detailed Test Plans :
After completion of testing team formation and risk analysis, the Test Lead concentrates on test plan document preparation in the IEEE 829 format (IEEE : Institute of Electrical and Electronics Engineers).
Format :
1. Test Plan ID : A unique number or name for future reference about the project.
2. Introduction : About the project.
What to test :
3. Test Items : The names of the modules or functionalities in the project.
4. Features to be Tested : The names of the functionalities to be tested.
5. Features not to be Tested : The names of already-tested modules, if available.
How to test :
6. Test Approach : The list of tests selected by the P.M.
7. Test Environment : The required hardware & software to use in testing.
8. Entry Criteria : Test cases designed, test environment established, S/w build received from the developers.
9. Suspension Criteria :
→ Test environment abandoned
→ Show-stopper in the build (build not working)
→ Too many pending defects
10. Exit Criteria :
→ All modules in the build covered
→ Test duration exceeded
→ All major defects solved
11. Test Deliverables : The list of testing documents to be prepared by the test engineers during testing (test scenarios, test cases, automation programs, test log, defect reports and week-end reports).
Whom to test :
12. Staff and Training Needs : The names of the selected test engineers & the required training sessions to understand the customer requirements.
13. Responsibilities : Work allocation to the above selected test engineers (all responsible tests on specified modules, or specified testing on all modules).
When to test :
14. Schedule : The dates & times to conduct testing.
15. Risks & Assumptions : The previously analyzed risks and the solutions to overcome them.
16. Approvals : The signatures of the Test Lead & Project Manager.
Review Test Plan :
After completion of the test plan document preparation, the Test Lead conducts a review meeting to estimate the completeness and correctness of that plan document.
→ Requirements / modules / features / functionalities coverage
→ Testing topics coverage
→ Risk-oriented coverage
Note : After completion of Test Planning and before starting Test Design, the Business Analyst and Test Lead conduct training sessions for the selected test engineers on the customer requirements in the project. Some organizations invite domain experts / subject experts from outside for these training sessions.

III) Test Design :
After completion of the required training sessions on customer requirements, the corresponding test engineers concentrate on Test Design to prepare test scenarios and test cases. A test scenario specifies "what" to test. A test case specifies "how" to test, including a detailed procedure. In other words, test cases are derived from test scenarios. There are four methods in test design:
1. Functional Specification Based Test Case Design (for Functional Testing)
2. Use Cases Based Test Case Design (for Functional Testing)
3. User Interface Based Test Case Design (for Usability Testing)
4. Functional & System Specification Based Test Case Design (for Non-Functional Testing)

1. Functional Specification Based Test Case Design :
Test engineers use this method to prepare test scenarios and cases for Functional Testing. In this approach, the test engineers prepare scenarios and cases depending on the functional specifications in the SRS.

BRS → SRS (functional specifications) → HLD → LLDs → Coding → S/w Build
The functional specifications feed Test Design (test scenarios → test cases), which feeds System Test Execution on the S/w build.
Approach :
Step 1 :- Collect the functional specifications related to your responsible areas.
Step 2 :- Take one specification and read it to gather the entry point, required inputs, normal flow, expected outputs, alternative flows, exit point and exception rules.
Step 3 :- Prepare test scenarios depending on the above gathered information.
Step 4 :- Review those test scenarios and implement them as test cases.
Step 5 :- Go to Step 2 until all responsible functional specifications are studied.

Functional Specification – 1 :-
A login process allows a User ID & Password for authorized users. The User ID object takes alphanumerics in lower case, 4 to 16 characters long. The Password object takes alphabets in lower case, 4 to 8 characters long. Prepare test scenarios.

Test Scenario 1 :- Verify the User ID object
Boundary Value Analysis (BVA) (Size) :
Min = 4 char. → Pass        Max = 16 char. → Pass
Min-1 = 3 char. → Fail      Max-1 = 15 char. → Pass
Min+1 = 5 char. → Pass      Max+1 = 17 char. → Fail
Equivalence Class Partition (ECP) (Type) :
Valid : a-z, 0-9
Invalid : A-Z, special characters, blank field

Test Scenario 2 :- Verify the Password object
Boundary Value Analysis (BVA) (Size) :
Min = 4 char. → Pass        Max = 8 char. → Pass
Min-1 = 3 char. → Fail      Max-1 = 7 char. → Pass
Min+1 = 5 char. → Pass      Max+1 = 9 char. → Fail
Equivalence Class Partition (ECP) (Type) :
Valid : a-z
Invalid : 0-9, A-Z, special characters, blank field
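A minimal sketch of how the BVA and ECP selections above can be executed in Java against the User ID rule of Functional Specification 1 (isValidUserId() is a hypothetical stand-in for the build's validation; String.repeat needs Java 11+):

// BvaEcpDemo.java -- hypothetical sketch of BVA and ECP for the User ID
// field of Functional Specification 1 (alphanumeric lower case, 4-16 chars).
public class BvaEcpDemo {

    // Validation rule under test, derived from the specification.
    static boolean isValidUserId(String id) {
        return id.matches("[a-z0-9]{4,16}");
    }

    public static void main(String[] args) {
        // BVA on size: min-1, min, min+1, max-1, max, max+1.
        int[] sizes = { 3, 4, 5, 15, 16, 17 };
        for (int n : sizes) {
            String id = "a".repeat(n);
            System.out.println(n + " chars -> "
                    + (isValidUserId(id) ? "Pass" : "Fail"));
        }

        // ECP on type: one representative per equivalence class.
        String[] samples = { "user1", "USER1", "user@1", "" };
        for (String s : samples) {
            System.out.println("'" + s + "' -> "
                    + (isValidUserId(s) ? "Valid" : "Invalid"));
        }
    }
}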
Test Scenario 3 :- Verify the login operation
Decision Table :
User ID      | Password    | Expected O/p
Valid value  | Valid value | Next window
Valid value  | Invalid     | Error message
Invalid      | Valid       | Error message
Valid        | Blank field | Error message
Blank        | Valid       | Error message

Note : Exhaustive testing is not possible. Due to this reason, the testing team conducts optimal testing using black box testing techniques like BVA, ECP, decision tables, regular expressions, etc.

Functional Specification – 2 :-
In an insurance application, users apply for different types of insurance policies. If a user selects Type-A insurance, our system asks for the age of that user. The age value should be greater than 16 years and less than 80 years. Prepare test scenarios.

Test Scenario 1 :- Verify Type-A selection
Test Scenario 2 :- Verify focus to Age when Type-A insurance is selected
Test Scenario 3 :- Verify the Age value
Boundary Value Analysis (BVA) (Range) :
Min = 17 → Pass         Max = 79 → Pass
Min-1 = 16 → Fail       Max-1 = 78 → Pass
Min+1 = 18 → Pass       Max+1 = 80 → Fail
Equivalence Class Partition (ECP) (Type) :
Valid : 0-9
Invalid : a-z, A-Z, special characters, blank field

Functional Specification – 3 :-
In a shopping application, users place purchase orders for different types of items. The purchase order allows the user to select an Item No. and to enter a Qty. up to 10. The purchase order returns the Total Amount along with the one-item price. Prepare test scenarios.
Test Scenario 1 :- Verify Item No. selection
Test Scenario 2 :- Verify the Qty. value
Boundary Value Analysis (BVA) (Range) :
Min = 1 → Pass         Max = 10 → Pass
Min-1 = 0 → Fail       Max-1 = 9 → Pass
Min+1 = 2 → Pass       Max+1 = 11 → Fail
Equivalence Class Partition (ECP) (Type) :
Valid : 0-9
Invalid : a-z, A-Z, special characters, blank field
Test Scenario 3 :- Verify the Total Amount, given Total = Qty. × item price

Functional Specification – 4 :-
A door opens when a person comes in front of the door, and that door closes when that person goes inside. Prepare test scenarios.

Test Scenario 1 :- Verify door open
Person  | Door   | Criteria
Present | Opened | Pass
Present | Closed | Fail
Absent  | Opened | Fail
Absent  | Closed | Pass

Test Scenario 2 :- Verify door close
Person | Door   | Criteria
Inside | Closed | Pass
Inside | Opened | Fail

Test Scenario 3 :- Verify the door operation when a person is standing in the middle of the doorway.

Functional Specification – 5 :-
In an e-banking application, customers connect to the bank server through a login process. This login allows the customer to fill in the fields below.
Password : 6-digit number
Prefix : 3-digit number that does not start with 0 or 1
Suffix : 6-digit alphanumeric
Area Code : 3-digit number, but it is optional
Command : Cheque Deposit, Money Transfer, Mini Statement and Bills Paid
Prepare test scenarios.

Test Scenario 1 :- Verify the Password value
Boundary Value Analysis (BVA) (Size) :
Min = Max = 6 digits → Pass; 5 digits → Fail; 7 digits → Fail
Equivalence Class Partition (ECP) (Type) :
Valid : 0-9
Invalid : a-z, A-Z, special characters, blank field

Test Scenario 2 :- Verify the Prefix
Boundary Value Analysis (BVA) (Size) :
Min = Max = 3 digits → Pass; 2 digits → Fail; 4 digits → Fail
Equivalence Class Partition (ECP) (Type) :
Valid : [2-9][0-9][0-9]
Invalid : a-z, A-Z, special characters, blank field

Test Scenario 3 :- Verify the Suffix
Boundary Value Analysis (BVA) (Size) :
Min = Max = 6 positions → Pass; 5 positions → Fail; 7 positions → Fail
Equivalence Class Partition (ECP) (Type) :
Valid : 0-9, a-z, A-Z
Invalid : special characters, blank field

Test Scenario 4 :- Verify the Area Code
Boundary Value Analysis (BVA) (Size) :
Min = Max = 3 digits → Pass; 2 digits → Fail; 4 digits → Fail
Equivalence Class Partition (ECP) (Type) :
Valid : 0-9, blank field (optional)
Invalid : a-z, A-Z, special characters
Test Scenario 5 :- Verify command selection (Cheque Deposit, Money Transfer, Mini Statement and Bills Paid)
Test Scenario 6 :- Verify the login operation to connect to the bank server
Remaining Fields | Area Code   | Expected O/p
All are valid    | Valid       | Next window
All are valid    | Blank field | Next window
All are valid    | Invalid     | Error message
Any one invalid  | Valid/Blank | Error message
Any one blank    | Valid/Blank | Error message

Functional Specification – 6 :-
In a library management system, readers apply for an Identity No. To get this number, the reader fills in the fields below.
Reader Name : alphabets in lower case with Init Cap, as a single word
House Name : alphabets in lower case, as a single word
PIN Code : related to the India Postal Department
City Name : alphabets in upper case, as a single word
Phone No. : related to India subscribers, and optional
Prepare test scenarios.

Test Scenario 1 :- Verify the Reader Name
Boundary Value Analysis (BVA) (Size) :
Min = 1 char. → Pass        Max = 256 char. → Pass
Min-1 = 0 char. → Fail      Max-1 = 255 char. → Pass
Min+1 = 2 char. → Pass      Max+1 = 257 char. → Fail
(In any front-end development technology, the default maximum for a text field is 256 characters.)
Equivalence Class Partition (ECP) (Type) :
Valid : [A-Z][a-z]*
Invalid : 0-9, special characters, blank field

Test Scenario 2 :- Verify the House Name
Boundary Value Analysis (BVA) (Size) :
Min = 1 char. → Pass        Max = 256 char. → Pass
Min-1 = 0 char. → Fail      Max-1 = 255 char. → Pass
Min+1 = 2 char. → Pass      Max+1 = 257 char. → Fail
Equivalence Class Partition (ECP) (Type) :
Valid : [a-z]*
Invalid : A-Z, 0-9, special characters, blank field

Test Scenario 3 :- Verify the PIN Code
Boundary Value Analysis (BVA) (Size) :
Min = Max = 6 digits → Pass; 5 digits → Fail; 7 digits → Fail
Equivalence Class Partition (ECP) (Type) :
Valid : [1-9][0-9][0-9][0-9][0-9][0-9]
Invalid : a-z, A-Z, special characters, blank field

Test Scenario 4 :- Verify the City Name
Boundary Value Analysis (BVA) (Size) :
Min = 1 char. → Pass        Max = 256 char. → Pass
Min-1 = 0 char. → Fail      Max-1 = 255 char. → Pass
Min+1 = 2 char. → Pass      Max+1 = 257 char. → Fail
Equivalence Class Partition (ECP) (Type) :
Valid : [A-Z]*
Invalid : a-z, 0-9, special characters, blank field

Test Scenario 5 :- Verify the Phone Number
Boundary Value Analysis (BVA) (Size) :
Min = 10 digits → Pass      Max = 12 digits → Pass
Min-1 = 9 digits → Fail     Max+1 = 13 digits → Fail
Min+1 = 11 digits → Pass
Equivalence Class Partition (ECP) (Type) :
Valid : 0-9, blank field (optional)
Invalid : A-Z, a-z, special characters
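The ECP columns above use regular expressions; the short sketch below shows how such patterns classify inputs (the sample values are illustrative only):

// RegexEcpDemo.java -- sketch showing how the regular expressions used in
// the ECP tables above classify inputs as valid or invalid.
public class RegexEcpDemo {
    public static void main(String[] args) {
        // Reader Name: lower case alphabets with Init Cap, single word.
        System.out.println("Ravi   -> " + "Ravi".matches("[A-Z][a-z]*"));      // true
        System.out.println("ravi   -> " + "ravi".matches("[A-Z][a-z]*"));      // false

        // PIN Code: 6 digits, first digit 1-9 (India Postal Department).
        System.out.println("500001 -> " + "500001".matches("[1-9][0-9]{5}"));  // true
        System.out.println("050001 -> " + "050001".matches("[1-9][0-9]{5}"));  // false

        // Phone No.: 10 to 12 digits, optional (blank is also accepted).
        String phone = "9876543210";
        System.out.println(phone + " -> "
                + (phone.isEmpty() || phone.matches("[0-9]{10,12}")));          // true
    }
}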
Test Scenario 6 :- Verify reader registration
Decision Table :
Remaining Fields | Telephone Number | Expected O/p
All are valid    | Valid            | Identity No.
All are valid    | Blank field      | Identity No.
All are valid    | Invalid          | Error msg.
Any one invalid  | Valid / Blank    | Error msg.
Any one blank    | Valid / Blank    | Error msg.

Functional Specification – 7 :- A computer shutdown operation
Test Scenario 1 : Verify shutdown option selection using the Shut Down menu option
Test Scenario 2 : Verify shutdown option selection using Alt+F4
Test Scenario 3 : Verify shutdown option selection using Ctrl+Alt+Del
Test Scenario 4 : Verify shutdown operation success
Test Scenario 5 : Verify shutdown operation using the Run command
Test Scenario 6 : Verify shutdown operation when a process is running
Test Scenario 7 : Verify shutdown operation using the power-off button

Functional Specification – 8 :- Money withdrawal from an ATM, with all rules and regulations
Test Scenario 1 : Verify card insertion
Test Scenario 2 : Verify card insertion at a wrong angle
Test Scenario 3 : Verify cancel after card insertion
Test Scenario 4 : Verify language selection
Test Scenario 5 : Verify cancel after selection of language
Test Scenario 6 : Verify PIN entry
Test Scenario 7 : Verify operation with a wrong PIN
Test Scenario 8 : Verify operation when a wrong PIN is entered 3 times consecutively
Test Scenario 9 : Verify cancel after entering the PIN
Test Scenario 10 : Verify account type selection
Test Scenario 11 : Verify operation when a wrong account type is selected with respect to the inserted card
Test Scenario 12 : Verify cancel after account type selection
Test Scenario 13 : Verify withdrawal option selection
Test Scenario 14 : Verify cancel after selection of withdrawal
Test Scenario 15 : Verify amount entry
Test Scenario 16 : Verify operation with a wrong denomination in the amount
Test Scenario 17 : Verify withdrawal operation success (correct amount, right receipt, able to take the card back)
Test Scenario 18 : Verify withdrawal operation with more than the possible balance
Test Scenario 19 : Verify withdrawal operation with more than the day limit
Test Scenario 20 : Verify withdrawal operation with a network problem
Test Scenario 21 : Verify withdrawal operation with lack of amount in the ATM
Test Scenario 22 : Verify withdrawal operation with the number of transactions per day exceeded
Test Scenario 23 : Verify withdrawal operation with another bank's card
Test Scenario 24 : Verify withdrawal operation with a stolen card
2. Use Cases Based Test Case Design :
It is an alternative method to Functional Specification Based Test Case Design. In this method, the test engineers depend on use cases instead of functional specifications to prepare test scenarios and test cases.

BRS → SRS (functional specifications) → HLD → LLDs → Coding (UT & IT) → S/w Build
The BA + Test Lead derive use cases from the functional specifications; the use cases feed Test Design (test scenarios → test cases), which feeds System Test Execution on the S/w build.

From the above diagram, the Business Analyst and Test Lead-category people develop use cases depending on the corresponding functional specifications in the SRS. Every use case is an implemented form of a functional specification.

Use Case Format :-
1. Use Case ID : A unique number or name for future reference.
2. Use Case Description : The summary of the corresponding functionality.
3. Required Inputs : The required inputs for the corresponding functionality.
4. Precondition : The necessary condition to satisfy before operating the corresponding functionality.
5. Events List : A step-by-step procedure of events / tasks with expected outputs or outcomes.
6. Activity Flow Diagram : A pictorial / diagrammatic representation of the corresponding functionality.
7. Post Condition : The necessary tasks to do after the corresponding functionality.
8. Alternative Events List : Alternative procedures to perform this functionality, if available.
9. Prototype : A screenshot related to the corresponding functionality.
10. Related Use Cases : The names of other use cases related to the corresponding functionality.

Approach :
Step 1 : Collect the use cases of your responsible areas.
Step 2 : Take one use case and study it.
Step 3 : Identify the entry point, required inputs, normal flow, expected outputs, exit point, alternative flows and exception rules.
Step 4 : Prepare test scenarios depending on the above identified information.
Step 5 : Review those scenarios and implement them as test cases.
Step 6 : Go to Step 2 until all responsible use cases are studied.

Use Case 1 :
1. Use Case ID : UC_Login
2. Use Case Description : The login operation is an authorization.
3. Required Inputs : The User ID is alphabets in lower case, 4-16 characters long. The Password is alphanumeric in lower case, 4-8 characters long.
4. Precondition : New user registration, to get a valid User ID & Password.
5. Events List :
Events / Tasks : Enter the User ID and Password values and then click the OK button.
Expected O/p or Outcome : Next window for a valid user, and an "invalid data" error msg. for an invalid user.
6. Activity Flow Diagram :
User enters User ID & Password → LOGIN checks the database → if invalid, error msg. and re-login; if valid, next window
7. Post Condition : The logout operation is mandatory after a successful login.
8. Alternative Events List : None
9. Prototype : (screenshot of the login window)
10. Related Use Cases : UC_NewUser, UC_Logout
Test Scenario 1 :- Check the User ID
Boundary Value Analysis (BVA) (Size) :
Min = 4 char. → Pass        Max = 16 char. → Pass
Min-1 = 3 char. → Fail      Max-1 = 15 char. → Pass
Min+1 = 5 char. → Pass      Max+1 = 17 char. → Fail
Equivalence Class Partition (ECP) (Type) :
Valid : a-z
Invalid : A-Z, 0-9, special characters, blank field

Test Scenario 2 :- Check the Password
Boundary Value Analysis (BVA) (Size) :
Min = 4 char. → Pass        Max = 8 char. → Pass
Min-1 = 3 char. → Fail      Max-1 = 7 char. → Pass
Min+1 = 5 char. → Pass      Max+1 = 9 char. → Fail
Equivalence Class Partition (ECP) (Type) :
Valid : a-z, 0-9
Invalid : A-Z, special characters, blank field

Test Scenario 3 :- Check the OK button click
User ID     | Password    | Expected Output
Valid       | Valid       | Next window
Valid       | Invalid     | Invalid data error msg.
Invalid     | Valid       | Invalid data error msg.
Valid       | Blank field | Invalid data error msg.
Blank field | Valid       | Invalid data error msg.

Test Scenario 4 :- Check the Cancel button
Event                                     | Expected Output
Click Cancel after opening login          | Login window closed
Click Cancel after entering the User ID   | Login window closed
Click Cancel after entering the Password  | Login window closed

Test Scenario 5 :- Check the Minimize icon
Test Scenario 6 :- Check the Maximize icon
Test Scenario 7 :- Check the Close icon
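A hedged sketch of executing Test Scenario 3's decision table as a table-driven test (login() is a hypothetical stand-in for the build, and "testuser"/"pass1" are assumed valid credentials, not from the material):

// LoginDecisionTableDemo.java -- hypothetical sketch of running the
// "Check OK Button Click" decision table above as a table-driven test.
public class LoginDecisionTableDemo {

    // Stand-in for the login under test (assumed valid credentials).
    static String login(String userId, String password) {
        if ("testuser".equals(userId) && "pass1".equals(password)) {
            return "Next Window";
        }
        return "Invalid Data Error Msg.";
    }

    public static void main(String[] args) {
        // Each row: user ID, password, expected output from the table.
        String[][] rows = {
            { "testuser", "pass1", "Next Window" },
            { "testuser", "wrong", "Invalid Data Error Msg." },
            { "baduser",  "pass1", "Invalid Data Error Msg." },
            { "testuser", "",      "Invalid Data Error Msg." },
            { "",         "pass1", "Invalid Data Error Msg." },
        };
        for (String[] row : rows) {
            String actual = login(row[0], row[1]);
            System.out.println(row[0] + " / " + row[1] + " -> "
                    + (actual.equals(row[2]) ? "Pass" : "Fail"));
        }
    }
}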
Use Case 2 :
1. Use Case ID : UC_Book_Issue
2. Use Case Description : Issue a book to a valid user.
3. Required Inputs : The User ID is in the format mm_yy_xxxx (xxxx : 4 digits). The Book ID is in the format BOOK_xxxx.
4. Precondition : New user registration, to get a valid User ID.
5. Events List :
Events / Tasks : Enter the User ID and then click the "Go" button.
Expected O/p or Outcome : Focus moves to Book ID for a valid user, and an "invalid user" error msg. appears for an invalid user.
Events / Tasks : Enter the Book ID and click the "Go" button.
Expected O/p or Outcome : A "book issued" message for an available book, and an "unavailable book" message for an unavailable Book ID.
6. Activity Flow Diagram :
User enters User ID → BOOK ISSUE checks the database → if invalid user, re-enter; if valid, enter Book ID → BOOK ISSUE checks the database → if unavailable book, re-enter; if valid, "Book Issued"
7. Post Condition : Receive the issued book from the computer operator.
8. Alternative Events List : None
9. Prototype : Book Issue window with two fields, each with a "Go" button: User ID [Go], Book ID [Go]
10. Related Use Cases : UC_NewUser, UC_BookFeeding

Test Scenario 1 :- Verify the User ID
Boundary Value Analysis (BVA) (Size) :
Min = Max = 10-position value → Pass; 9-position value → Fail; 11-position value → Fail
Equivalence Class Partition (ECP) (Type) :
Valid : [0][1-9][_][0-9][0-9][_][0-9][0-9][0-9][0-9] (months 01-09) or [1][0-2][_][0-9][0-9][_][0-9][0-9][0-9][0-9] (months 10-12)
Invalid : a-z, A-Z, special characters except _, blank field

Test Scenario 2 :- Verify the "Go" button click for the User ID
User ID       | Expected O/p after clicking "Go"
Valid value   | Focus to Book ID
Invalid value | "Invalid user" error message
Blank field   | "Invalid user" error message

Test Scenario 3 :- Verify the Book ID
Boundary Value Analysis (BVA) (Size) :
Min = Max = 9-position value → Pass; 8-position value → Fail; 10-position value → Fail
Equivalence Class Partition (ECP) (Type) :
Valid : [B][O][O][K][_][0-9][0-9][0-9][0-9]
Invalid : a-z, A-Z except B, O, K, special characters except _, blank field
Test Scenario 4 :- Verify the "Go" button click for the Book ID
Book ID         | Expected O/p after clicking "Go"
Valid Book ID   | "Book issued" msg.
Invalid Book ID | "Unavailable book" message
Blank field     | "Unavailable book" message

Test Scenario 5 :- Verify the Minimize icon
Test Scenario 6 :- Verify the Maximize icon
Test Scenario 7 :- Verify the Close icon

3. User Interface Based Test Case Design :
Functional Specification Based Test Case Design and Use Cases Based Test Case Design are used to prepare test scenarios and cases for Functional Testing. User Interface Based Test Case Design is used by the test engineers to prepare test scenarios and cases for Usability Testing.

BRS → SRS (UI requirements) → HLD → LLDs → Coding (UT & IT) → S/w Build
The UI requirements feed Test Design (test scenarios → test cases), which feeds System Test Execution on the S/w build.

In this method, the test engineers depend on the user interface requirements in the SRS. In general, the test engineers write common test scenarios for Usability Testing, which are applicable to any type of application screen.

Test Scenario 1 :- Verify the spelling in every screen
Test Scenario 2 :- Verify the meaningfulness of error messages
Test Scenario 3 :- Verify the Init Cap of labels in every screen
Test Scenario 4 :- Verify color uniqueness throughout the screens
Test Scenario 5 :- Verify font or style uniqueness throughout the screens
Test Scenario 6 :- Verify size uniqueness throughout the screens
Test Scenario 7 :- Verify the alignment of objects in every screen
Test Scenario 8 :- Verify line-spacing uniqueness throughout the screens
Test Scenario 9 :- Verify the tool tips of icons in every screen
Test Scenario 10 :- Verify the default object in every screen
Test Scenario 11 :- Verify uniform background colors of objects in every screen
Test Scenario 12 :- Verify scroll bars when the screen size is greater than the desktop
Test Scenario 13 :- Verify keyboard accessibility of every object in every screen
Test Scenario 14 :- Verify abbreviations & shortcuts in screens
Test Scenario 15 :- Verify multiple-data object positions in every screen (Ex : list box, menu, table, etc.)
Test Scenario 16 :- Verify help messages (Manuals Support Testing)
Test Scenario 17 :- Verify functionally grouped objects in every screen
Test Scenario 18 :- Verify the borders of functionally grouped objects in every screen
Test Scenario 19 :- Verify the labels of objects with respect to functionality
Test Scenario 20 :- Verify window labels with respect to functionality

4. Functional and System Specification Based Test Case Design :
After completion of test scenario selection for Functional and Usability Testing, the test engineers concentrate on test scenario selection for Non-Functional Testing, depending on the functional and system specifications in the SRS. Functional specifications describe the required functionalities of the software, and system specifications describe the required environment to be used.
BRS → SRS (functional specifications + system specifications) → HLD & LLDs → Coding (UT & IT) → S/w Build
The functional + system specifications feed Test Design (test scenarios → test cases), which feeds System Test Execution on the S/w build.

Example Test Scenarios for Compatibility Testing :
Test Scenario 1 : Verify login on Windows NT with the customer-expected configuration
Test Scenario 2 : Verify login on Windows 2000 with the customer-expected configuration
Test Scenario 3 : Verify login on Windows Vista with the customer-expected configuration
And more...

Example Test Scenarios for Performance Testing :
Test Scenario 1 : Verify login under the customer-expected load and configuration
Test Scenario 2 : Verify login under more than the customer-expected load
And more...

Example Test Scenarios for Installation Testing :
Test Scenario 1 : Verify the setup program to start the installation
Test Scenario 2 : Verify interface easiness during installation
Test Scenario 3 : Verify occupied disk space after installation
And more...

Test Case Format :
After completion of test scenario selection for the responsible areas in terms of Functional, Usability and Non-Functional Testing, the test engineers implement the scenarios as test cases. Test engineers use the IEEE (Institute of Electrical and Electronics Engineers) 829 test case format.
1. Test Case ID : A unique number / name for future reference.
2. Test Case Name : The corresponding test scenario.
3. Feature to be Tested : The name of the corresponding module or functionality.
4. Test Suite ID : The unique number or name of a test batch; this case is a member of that batch.
5. Priority : The importance of this test case (P0 priority for functional test cases, P1 priority for non-functional test cases and P2 priority for usability test cases).
6. Test Environment : The required hardware and software to execute this test.
7. Test Effort : Person-hours (Ex. 20 min is an average test execution time).
8. Test Duration : The date and time to execute this test.
9. Test Setup : The necessary tasks to do before starting this test execution.
10. Test Procedure / Data Matrix :
Test Procedure columns : Step No. | Action / Task | Required Event I/p | Expected O/p (filled during test design) | Actual O/p | Result | Defect ID | Comments (filled during test execution)
Data Matrix columns : I/p Object | ECP (Type) Valid | ECP (Type) Invalid | BVA (Range / Size) Min | BVA (Range / Size) Max
11. Test Case Pass / Fail Criteria : The final result of this test case after execution.

Note 1 : In general, the test engineers are not interested in filling all fields in the test case format, due to lack of time and the similarity in the field values of test cases.
Note 2 : The test engineers use the test procedure for operational test cases and the data matrix for input-object test cases.

Functional Specification :
In a banking application, valid employees create fixed deposit operations with depositor-provided information. In this fixed deposit operation, the employees fill in the fields below.
Depositor Name : alphabets in lower case with Init Cap, allows multiple words in the name
Amount : 1500 to 1,00,000
Time : up to 12 months
Interest : numeric with one decimal
Bank rule : if the Time > 10 months, then the Interest must be > 10%.
Prepare test scenarios and test cases.

Test Scenario 1 : Verify the Depositor Name
Test Scenario 2 : Verify the Amount
Test Scenario 3 : Verify the Time
Test Scenario 4 : Verify the Interest
Test Scenario 5 : Verify the fixed deposit operation
Test Scenario 6 : Verify the fixed deposit operation with the bank rule

Test Case Documents :
Test Case 1 :-
1. Test Case ID : TC_FD_Ravi_24thMay_1
2. Test Case Name : Verify the Depositor Name
3. Test Suite ID : TS_FD
4. Priority : P0
5. Test Setup : The Depositor Name object is taking inputs.
6. Data Matrix :
I/p Object : Depositor Name
ECP (Type) Valid : ([A-Z][a-z]*)*
ECP (Type) Invalid : 0-9, special characters, blank field
BVA (Size) Min : 1 char.    Max : 256 char.

Test Case 2 :-
1. Test Case ID : TC_FD_Ravi_24thMay_2
2. Test Case Name : Verify the Amount
3. Test Suite ID : TS_FD
4. Priority : P0
5. Test Setup : The Amount object is taking inputs.
6. Data Matrix :
I/p Object : Amount
ECP (Type) Valid : 0-9
ECP (Type) Invalid : a-z, A-Z, special characters, blank field
BVA (Range) Min : 1500    Max : 100000

Test Case 3 :-
1. Test Case ID : TC_FD_Ravi_24thMay_3
2. Test Case Name : Verify the Time
3. Test Suite ID : TS_FD
4. Priority : P0
5. Test Setup : The Time object is taking inputs.
6. Data Matrix :
I/p Object : Time
ECP (Type) Valid : 0-9
ECP (Type) Invalid : a-z, A-Z, special characters, blank field
BVA (Range) Min : 1 month    Max : 12 months

Test Case 4 :-
1. Test Case ID : TC_FD_Ravi_24thMay_4
2. Test Case Name : Verify the Interest
3. Test Suite ID : TS_FD
4. Priority : P0
5. Test Setup : The Interest object is taking inputs.
6. Data Matrix :
I/p Object : Interest
ECP (Type) Valid : 0-9 with one decimal (e.g. 10.5)
ECP (Type) Invalid : a-z, A-Z, special characters, blank field
BVA (Range) Min : 0.1    Max : 100
Test Case 5 :-
1. Test Case ID : TC_FD_Ravi_24thMay_5
2. Test Case Name : Verify the fixed deposit operation
3. Test Suite ID : TS_FD
4. Priority : P0
5. Test Setup : Valid values are available in hand.
6. Test Procedure :
Step No. | Action                        | Required I/p        | Expected O/p
1        | Connect to the bank server    | Valid Emp ID        | Menu appears
2        | Select the "FD" option        | None                | Fixed deposit form opened
3        | Fill the fields and click OK  | All are valid       | Acknowledgement
         |                               | Any one invalid     | Error msg.
         |                               | Any one blank field | Error msg.

Test Case 6 :-
1. Test Case ID : TC_FD_Ravi_24thMay_6
2. Test Case Name : Verify the fixed deposit operation with the bank rule
3. Test Suite ID : TS_FD
4. Priority : P0
5. Test Setup : Valid values are available in hand.
6. Test Procedure :
Step No. | Action                        | Required I/p                                        | Expected O/p
1        | Connect to the bank server    | Valid Emp ID                                        | Menu appears
2        | Select the "FD" option        | None                                                | Fixed deposit form opened
3        | Fill the fields and click OK  | Valid Name, Amount; Time > 10 with Interest > 10    | Acknowledgement
         |                               | Valid Name, Amount; Time > 10 with Interest <= 10   | Error msg.
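A minimal sketch of the bank rule exercised by Test Case 6 (submit() is a hypothetical stand-in for the fixed deposit operation; only the time/interest rule is modeled):

// FixedDepositRuleDemo.java -- hypothetical sketch of Test Case 6's bank
// rule: if time > 10 months then interest must be greater than 10%.
public class FixedDepositRuleDemo {

    // Rule under test, derived from the specification.
    static String submit(int timeMonths, double interest) {
        if (timeMonths > 10 && interest <= 10.0) {
            return "Error Msg.";
        }
        return "Acknowledgement";
    }

    public static void main(String[] args) {
        // Rows from Test Case 6's procedure: time > 10 with interest > 10
        // is acknowledged; time > 10 with interest <= 10 is rejected.
        System.out.println(submit(11, 10.5)); // Acknowledgement
        System.out.println(submit(11, 9.5));  // Error Msg.
        System.out.println(submit(6, 8.0));   // Acknowledgement (rule not triggered)
    }
}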
As in the above test cases, the test engineers implement test scenarios as test cases. Every test case is a combination of the corresponding test scenario and the required details to apply that test on the S/w build.

Test Cases Selection Review :
After completion of test scenario and test case writing, the Test Lead & test engineers conduct a review meeting to estimate the completeness and correctness of those documents. In this review, the testing team depends on the coverages below:
□ Requirements-oriented coverage (modules)
□ Testing-topic-oriented coverage (UT, FT, NFT)

IV. Test Execution :-
After completion of test design and review, the testing team concentrates on the issues below:
□ Formal meeting with developers
□ Test environment establishment
□ Levels of test execution

□ Formal Meeting :-
In general, the test execution process starts with a formal meeting in between the testing team & development team representatives. In this meeting, the corresponding representatives concentrate on build version control and defect tracking. For build version control, the development team modifies the S/w build coding to resolve defects and releases the modified build with a unique version number. This version numbering system is understandable to the test engineers, to distinguish the old build from the modified build. For this version controlling, the developers also use version control tools (Ex :- VSS (Visual SourceSafe)).
To report mismatches to the development team, the test engineers first report each mismatch to the Defect Tracking Team (DTT):
Test Lead + Project Manager + Project Lead + Business Analyst → DTT
□ Test Environment Establishment :-
After completion of the formal meeting, the testing team concentrates on test environment establishment with all the required hardware and software.

Server (configuration repository) ←→ Development Environment, Project Management, Test Environment
FTP : File Transfer Protocol (single location)
TCP/IP : Transmission Control Protocol / Internet Protocol (different locations)

□ Levels of Test Execution :-
Development releases the Initial Build → Level-0 (Sanity)
Stable Build → Level-1 (Comprehensive) → Defect Report → Defect Fixing → Modified Build → Level-2 (Regression) → Level-3 (Final Regression)
Case Study :-
Initial Build
↓
Sanity Testing (Level-0)
↓
Stable Build
↓
Comprehensive Testing (Level-1)
↓
Defect Detection
↓
Modified Build
↓
Regression Testing (Level-2)
↓
Defect Closing
↓
Master Build
↓
Final Regression (Level-3)
↓
Golden Build (able to release)

□ Levels of Test Execution Vs Test Cases :-
Level-0 → Some P0 (functional) test cases
Level-1 → All P0, P1 & P2 test cases
Level-2 → Selected P0, P1 & P2 test cases with respect to the modifications
Level-3 → Selected P0, P1 & P2 test cases with respect to defect density

□ Level-0 Sanity Testing :-
After downloading the initial build from the configuration repository on the server, the testing team concentrates on Level-0 sanity testing to estimate the testability of that software. Testability means: understandable, operable, observable, controllable, consistent, simple, maintainable and automatable. If the initial build is not stable, the testing team sends that build back to the developers. If the build is stable, the test engineers concentrate on Level-1 test execution to detect defects. This Level-0 testing is also known as Sanity Testing / Smoke Testing / Testability Testing / Tester Acceptance Testing / Build Verification Testing / Octangle Testing.
□ Level-1 Comprehensive / Real Testing :-
In Level-1 test execution, the test engineers execute all test cases as batches. Every test batch consists of a set of dependent test cases; in these batches, the end state of one test is the base state of the next test. Test batches are also known as test suites, test sets, test builds or test chains.

Receive the stable build from developers → make test cases into batches → select a batch → select a test case → take a step in the case → compare expected with actual in the build → if equal, go to the next step / next case / next batch; if not equal, defect reporting.

From the above diagram, the test engineers continue test execution batch by batch, and case by case within every batch. If a test case step's expected value is not equal to the actual value, the test engineer concentrates on defect reporting; if possible, they continue test execution as well. In Level-1 test execution, the test engineers prepare a Test Log document to specify the test results.

Test Log Document Format :-
Test Case ID | Result (Pass / Fail) | Defect ID | Executed By | Executed On | Comments

There are three types of test results:
→ Passed : all expected values are equal to the actual values
→ Failed : any one expected value is not equal to the actual value
→ Blocked : test execution postponed due to incorrect parent functionality
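A sketch combining the batch-by-batch execution loop and the test log above (actualOutput() stands in for the build under test; the three cases and their expectations are invented for illustration):

// TestLogDemo.java -- sketch of the Level-1 execution loop: run the cases
// of a batch in order, compare expected with actual, and record a
// Passed / Failed result per case in a simple test log.
import java.util.LinkedHashMap;
import java.util.Map;

public class TestLogDemo {

    // Stand-in for the build under test (assumed behavior).
    static String actualOutput(String input) {
        return input.equals("valid") ? "Next Window" : "Error Msg.";
    }

    public static void main(String[] args) {
        // One test batch: case ID -> { input, expected output }.
        Map<String, String[]> batch = new LinkedHashMap<>();
        batch.put("TC_01", new String[] { "valid",   "Next Window" });
        batch.put("TC_02", new String[] { "invalid", "Error Msg." });
        batch.put("TC_03", new String[] { "valid",   "Error Msg." }); // will fail

        for (Map.Entry<String, String[]> tc : batch.entrySet()) {
            String actual = actualOutput(tc.getValue()[0]);
            String result = actual.equals(tc.getValue()[1]) ? "Passed" : "Failed";
            System.out.println(tc.getKey() + " | " + result
                    + (result.equals("Failed") ? " -> report defect" : ""));
        }
    }
}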
V. Defect Reporting & Tracking :-
During Level-1 test execution, some test case expected values are not equal to the actual values. These mismatches are called Defects / Issues / Bugs / Flaws.

Defect Report :-
1. Defect ID : A unique number or name.
2. Description : A summary of the mismatch between the tester-expected value and the build's actual value.
3. Build Version ID : The version number of the current build (the test engineers detected this defect in that build).
4. Feature : The name of the module or functionality (the test engineers detected this defect in that module).
5. Test Case ID : The ID of the failed test case (the test engineers detected this defect in that case's execution).
6. Reproducible :
Yes → the defect appears every time in test execution
No → the defect appears rarely in test execution
7. If Yes, attach the procedure.
8. If No, attach the procedure and screenshots.
9. Severity : The seriousness of the defect in terms of functionality.
High / Critical :- Not able to continue testing without resolving it.
Medium / Major :- Able to continue testing, but compulsory / mandatory to resolve.
Low / Minor :- Able to continue; may or may not be resolved.
10. Priority : The importance of solving the defect in terms of customer interest (High / Medium / Low).
11. Detected By : The name of the test engineer.
12. Detected On : The date of detection and submission.
13. Status :
New : reporting for the first time
Reopen : re-reporting
14. Assigned To : Reported to the Defect Tracking Team.
15. Suggested Fix : A suggestion to solve that defect (optional).
Defect Reporting Process :
→ The test engineer reports the defect to the DTT with status "New".
→ The DTT analyzes that defect.
→ If the defect is not accepted, its status is changed to "Rejected".
→ If accepted, the DTT categorizes the defect and changes its status to "Open":
   - Test-data-related defect → assigned to the testing team
   - Test-procedure-related defect → assigned to the testing team
   - H/w or infrastructure-related defect → assigned to the hardware team
   - Code-related defect → assigned to the development team

Case Study :-
Test engineer → report defect → Defect Tracking Team → assigned to the Project Lead + programmers (code-related defect)
Test engineer → report defect → Defect Tracking Team → assigned to BA + TL + TE (test case procedure & test data-related defect)
Test engineer → report defect → Defect Tracking Team → assigned to the H/w team (H/w or environment-related defect)
Defect Life Cycle or Bug Life Cycle :
New → Assigned (or Rejected / Deferred) → Open → Fixed → Closed (or Reopen → Open → ...)
(A state-machine sketch of this cycle appears at the end of this topic.)

New : Reported for the first time.
Assigned : Accepted by the DTT.
Rejected : Not accepted by the DTT.
Deferred : Accepted, but not interesting to solve due to low severity and low priority.
Open : The responsible team is ready to resolve it.
Fixed : The responsible team resolved the defect.
Reopen : The defect was not correctly solved and is re-reported.
Closed : The defect was correctly solved and confirmed through regression testing.

Test Data Related Defect Fixing :
If our reported defect is accepted by the Defect Tracking Team (DTT) and they decide that the defect is a test-data-related mismatch, the responsible testing team concentrates on correct data collection (CDC), without any conceptual gap, with the help of the BA and TL. The test engineers then re-execute the previously failed test on the same build with the correct test data. This test repetition is called Retesting or Confirmation Testing.

Failed test case → defect reporting → data-related defect → collect correct data → repeat the test case on the same build with the correct data (Retesting / Confirmation Testing)
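The state-machine sketch of the defect life cycle promised above; the transition table is one reasonable reading of the diagram (Map.of needs Java 9+):

// DefectLifeCycleDemo.java -- sketch of the defect life cycle as a simple
// state machine; transitions mirror one reading of the diagram above.
import java.util.EnumSet;
import java.util.Map;

public class DefectLifeCycleDemo {

    enum Status { NEW, ASSIGNED, REJECTED, DEFERRED, OPEN, FIXED, REOPEN, CLOSED }

    // Allowed transitions from each state.
    static final Map<Status, EnumSet<Status>> TRANSITIONS = Map.of(
        Status.NEW,      EnumSet.of(Status.ASSIGNED, Status.REJECTED, Status.DEFERRED),
        Status.ASSIGNED, EnumSet.of(Status.OPEN),
        Status.OPEN,     EnumSet.of(Status.FIXED),
        Status.FIXED,    EnumSet.of(Status.CLOSED, Status.REOPEN),
        Status.REOPEN,   EnumSet.of(Status.OPEN),
        Status.REJECTED, EnumSet.noneOf(Status.class),
        Status.DEFERRED, EnumSet.noneOf(Status.class),
        Status.CLOSED,   EnumSet.noneOf(Status.class)
    );

    static boolean canMove(Status from, Status to) {
        return TRANSITIONS.get(from).contains(to);
    }

    public static void main(String[] args) {
        System.out.println(canMove(Status.NEW, Status.ASSIGNED)); // true
        System.out.println(canMove(Status.FIXED, Status.REOPEN)); // true
        System.out.println(canMove(Status.CLOSED, Status.OPEN));  // false
    }
}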
Test Script or Procedure Related Defect Fixing :-
	If a reported defect is accepted by the DTT as a test-procedure-related defect, the responsible testing team prepares a correct procedure for that test case with the help of the TL and BA.

	Testing the build
	↓
	Failed test case (incorrect test procedure) → report to DTT → (accepted as a procedure-related defect)
	↓
	Correct test procedure prepared by the Test Engineers
	↓
	Repeat the test case on the build
	↓
	Retesting / Confirmation Testing

Infrastructure Related Defect Fixing :-
	If a reported defect is accepted by the DTT as an environment-related, infrastructure-related or hardware-related defect, the responsible hardware team re-establishes the correct test environment.

	Testing the build
	↓
	Failed test case → report to DTT → (accepted as an environment-related defect)
	↓
	Test environment re-established by the H/W team
	↓
	Repeat the test case on the build in the modified environment
	↓
	Retesting / Confirmation Testing
Code Related Defect Fixing :-
	If a reported defect is accepted as a code-related defect, the responsible programmers / developers perform changes in the build's coding to resolve that defect.

	PL updates the status of the defect to "Open"
	↓
	Impact analysis by the programmers
	↓
	Selected coding areas reviewed by the PL
	↓
	Changes required in project documents?  Yes → changes made by the concerned person (BA / Designers) and reviewed
	↓ No
	Changes in the coding by the programmers
	↓
	Unit-test the modified coding and make the modified build
	↓
	PL changes the defect status to "Fixed" and releases the modified build with a unique version number and a Release Note

	After receiving the modified build from the development team, the testing team concentrates on Retesting and Regression Testing: the previously failed test and the previously passed related tests are re-executed on the modified build. If any re-executed test fails again, the defect is reported to the DTT once more.
	The Test Engineer re-executes the previously failed test on the modified build to confirm the defect fixing; this is called Retesting or Confirmation Testing. To identify side effects of the defect-fixing modifications, the Test Engineer re-executes previously passed related tests on that modified build; this is called Regression Testing. A sketch of the two test sets appears below.
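The split between the two test sets can be sketched as follows. Here test_log rows are (case_id, result) pairs as in the Level-1 sketch, and related_cases stands in for the tester's judgement of which passed cases touch the modified area; both names are illustrative.

def tests_after_fix(fixed_case_id, test_log, related_cases):
    """Retesting: re-run the previously failed case to confirm the fix.
    Regression: re-run previously passed related cases for side effects."""
    retest = [cid for cid, result in test_log if cid == fixed_case_id]
    regression = [cid for cid, result in test_log
                  if result == "Passed" and cid in related_cases]
    return retest, regression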
Level-2 Regression Testing :-
	Take the modified build and its Release Note, and identify the severity of the fixed defect in that modified build:

	High   → All P0, All P1 and carefully selected P2 test cases
	Medium → All P0, carefully selected P1 and some P2 test cases
	Low    → Some P0, some P1 and some P2 test cases

	These cases are repeated on the modified build to detect side effects with respect to the modifications specified in the Release Note.

Case 1 :- If the severity of the defect fixed by the development team is High, the Test Engineers repeat All P0, All P1 and carefully selected P2 test cases on the modified build w.r.t. the modifications specified in the Release Note.
Case 2 :- If the severity of the fixed defect is Medium, the Test Engineers repeat All P0, carefully selected P1 and some P2 test cases on the modified build w.r.t. the modifications specified in the Release Note.
Case 3 :- If the severity of the fixed defect is Low, the Test Engineers repeat some P0, some P1 and some P2 test cases on the modified build w.r.t. the modifications specified in the Release Note.
Case 4 :- If the development team releases a modified build w.r.t. changes in customer requirements, the Test Engineers re-execute All P0, All P1 and carefully selected P2 test cases on the modified build w.r.t. those requirement changes. In this case the Test Engineers also perform changes in the Test Scenarios and Test Cases w.r.t. the changed customer requirements. The four cases are sketched in code below.
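A sketch of the four cases. carefully_select and some are placeholders for the Test Lead's risk-based judgement, not fixed rules, and all names are illustrative.

def select_regression_cases(trigger, cases_by_priority):
    """Pick the Level-2 regression set for a modified build (Cases 1-4)."""
    p0, p1, p2 = (cases_by_priority[p] for p in ("P0", "P1", "P2"))
    if trigger in ("high", "requirement-change"):      # Cases 1 and 4
        return p0 + p1 + carefully_select(p2)
    if trigger == "medium":                            # Case 2
        return p0 + carefully_select(p1) + some(p2)
    return some(p0) + some(p1) + some(p2)              # Case 3: low severity

def carefully_select(cases):
    return cases[: max(1, len(cases) // 2)]    # placeholder heuristic

def some(cases):
    return cases[: max(1, len(cases) // 4)]    # placeholder heuristic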
VI. Test Closure :-
	After completing all reasonable tests and closing the detected defects, the Test Lead conducts a review meeting to decide whether to stop testing. In this review the TL analyses the factors below with the involvement of the Test Engineers.
1. Coverage Analysis :-
	→ Requirements-oriented coverage (module-wise)
	→ Testing-topic-related coverage (Usability, Functional, Non-Functional)
2. Defect Density Calculation (a code sketch of this calculation follows the Level-3 flow below) :-
	Ex :	Module / Requirement | Defect %
		A                    | 20%
		B                    | 20%
		C                    | 40%  (needs Regression Testing)
		D                    | 20%
		Total                | 100%
3. Analysis of Deferred Defects : whether the deferred defects can really be postponed or not.

Level-3 Final Regression Testing :-
	After a successful Test Closure review, the testing team concentrates on Level-3 or Final Regression Testing.

	Identify the high-defect-density modules
	↓
	Estimate the effort in person-hours ("Golden Hour" estimation)
	↓
	Plan the regression
	↓
	Final Regression Testing
	↓
	Defect reporting, if required
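The defect-density calculation is simple enough to sketch. Here feature is the module name from the defect report sketch earlier, and the 30% threshold is only an example cut-off for flagging modules such as C above.

from collections import Counter

def defect_density(defects, threshold=0.30):
    """Share of defects per module; flag the modules that need
    Final Regression Testing (module C at 40% in the example)."""
    counts = Counter(d.feature for d in defects)
    total = sum(counts.values()) or 1
    density = {module: n / total for module, n in counts.items()}
    flagged = [m for m, share in density.items() if share >= threshold]
    return density, flagged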
VII. User Acceptance Testing (UAT) :-
	After completion of Final Regression Testing, the project management concentrates on User Acceptance Testing to collect feedback from real customers / model customers. There are two ways of conducting User Acceptance Testing: Alpha Testing and Beta Testing.

VIII. Sign Off :-
	After successful completion of User Acceptance Testing and the resulting modifications, the Test Lead prepares the Final Test Summary Report and relieves the corresponding Test Engineers from the project. The Final Test Summary Report is a combination of the documents below:
	→ Test Strategy / Methodology
	→ Test Plan(s)
	→ Test Scenarios
	→ Test Cases
	→ Test Logs
	→ Defect Reports
	→ Requirements Traceability Matrix

Requirements Traceability Matrix Format :-

	Requirement ID | Test Case ID | Result (Pass / Fail) | Detected Defect ID | Status (Closed / Deferred) | Comments

It is a mapping between requirements and defects via test cases, sketched in code below.
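A sketch of that mapping. Each row follows the matrix format above, and the function collects, per requirement, the defects reached through its test cases; the function name is illustrative.

def trace_requirements(matrix_rows):
    """matrix_rows: (requirement_id, test_case_id, result, defect_id, status).
    Returns {requirement_id: [(test_case_id, defect_id, status), ...]}."""
    trace = {}
    for req_id, case_id, result, defect_id, status in matrix_rows:
        trace.setdefault(req_id, [])
        if defect_id:   # a defect detected by this requirement's test case
            trace[req_id].append((case_id, defect_id, status))
    return trace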
Case Study (5 Months of Testing Process) :-

	Deliverable                             | Responsibility                     | Duration
	Test Strategy                           | PM / TM                            | 4-5 days
	Test Planning                           | Test Lead                          | 4-5 days
	Requirements Training to Test Engineers | BA + Domain / Subject Experts      | 5-10 days
	Test Scenarios & Review                 | Test Engineer                      | 5-10 days
	Test Cases Implementation & Review      | Test Engineer                      | 10-15 days
	Build + Level-0 (Sanity Testing)        | Test Engineer                      | 2-3 days
	** Test Automation                      | Test Engineer                      | 10-15 days
	Level-1 and Level-2 Test Execution      | Test Engineer                      | 30-40 days
	Ongoing Defect Reporting                | Test Engineer                      | same day
	Status Reporting                        | Test Lead                          | twice weekly
	Test Closure & Level-3                  | Test Lead & Test Engineer          | 5-10 days
	User Acceptance Testing                 | Real / Model Customers, in front   | 3-5 days
	                                        | of Developers and Testers          |
	Sign Off                                | Test Lead                          | 1-2 days

	** Note : Test Automation is optional.

W-Model :-
	Development side : Requirements Analysis → S/W Design → Coding + Unit Testing → Integration Testing → Build
	System Testing side (Manual and Test Automation) :
	→ Non-Functional Testing (N.F.T) : Load Runner & JMeter
	→ Functional Testing (F.T) : Win Runner / QTP / Robot / Silk
	→ Usability Testing : no tools in the market

	From the above W-Model, testing tools are available for Functional Testing and for some of the Non-Functional Testing, including Endurance Testing and Data Volume Testing. The remaining Non-Functional tests and Usability Testing are conducted by the Test Engineers manually.
Win Runner 8.0 :-
→ Developed by Mercury Interactive and taken over by Hewlett-Packard (HP)
→ A Functional Testing tool
→ This version was released in January 2005
→ Supports VB, .Net, Java, PowerBuilder, HTML, Delphi, VC++, D2K and Siebel technology software for Functional Testing
→ To support SAP, PeopleSoft, XML, Multimedia and Oracle Applications ("ERPs") in addition to the above technologies, test teams use Quick Test Professional (QTP)
→ Win Runner runs on Windows only; X-Runner is used for Unix / Linux

Win Runner Test Process :-
	Receive a stable build from the developers after Sanity Testing
	↓
	Identify the Functional Test Cases (priority P0) to automate (English + manual)
	↓
	Create automation programs (TSL) for those Functional Test Cases
	↓
	Run the programs on the software build to detect defects
	↓
	Test reporting, if required

	Following this approach, the Test Engineers convert the manual Functional Test Cases into Test Script Language (TSL) programs. TSL is a "C"-like language.

Add-in Manager :-
	This window lists all the technologies Win Runner supports under the current license. The Test Engineers select the current project's technology from that list.

Welcome Screen :-
	After Win Runner launches successfully, the Welcome Screen appears on the desktop. The screen consists of three options:
	→ Create a New Test
	→ Open an Existing Test
	→ A Quick Preview of Win Runner
Win Runner Icons :-
	→ Start Recording
	→ Stop Recording
	→ Run from Top
	→ Run from Arrow
	→ Pause (stop run)

Win Runner Test Automation Frameworks :-
	Win Runner 8.0 allows you to convert manual Functional Test Cases into Test Script Language (TSL) programs in four ways:
	→ Record and Playback Framework
	→ Data Driven Framework
	→ Keyword Driven Framework
	→ Hybrid Framework

I. Record & Playback Framework :-
	In this framework the Test Engineers convert manual test cases into automation programs with a two-step procedure:
	A. Recording Operations
	B. Inserting Check Points

A. Recording Operations :-
	While creating a test automation program, the Test Engineers record the software build's operations. There are two recording modes: Context Sensitive Mode and Analog Mode.
	In Context Sensitive Mode, the tool records mouse and keyboard operations with respect to the objects and windows in the build. To select this mode the Test Engineers use the options below:
	→ Click the "Start Recording" icon once
	→ Test Menu → Record - Context Sensitive option
	To record mouse pointer movements with respect to desktop co-ordinates, the Test Engineers use Analog Mode in Win Runner. To select this mode we can use the options below: