Testing tools
    • Software: Software is a set of programs that take inputs and provide outputs. There are two types:
      1) Software Application — software developed for a specific customer's requirements.
      2) Software Product — software developed for the overall requirements of the market; interested customers purchase licenses of the product.

      Software Bidding: A proposal to develop new software is called software bidding. In application development the proposal comes from the specific customer; in product development our organization raises its own proposal.

      Kick-off Meeting: A CEO-category person conducts a meeting with high-level management and selects a Project Manager to handle the new software development process.

      PIN (Project Initiation Note) Document: The selected Project Manager (PM) prepares this document to estimate the required people, technologies, time, and resources, and submits the report to the CEO. The CEO conducts a review to give the green signal to the Project Manager.

      SDLC (Software Development Life Cycle) — Waterfall Model:
        Requirements Gathering → Analysis & Planning → Designing → Coding → Testing → Release & Maintenance
    • In the above SDLC process, only a single stage of testing is available, and that testing is conducted by the developers. For this reason, organizations concentrate on multiple stages of testing and separate testing teams to achieve quality.

      Software Quality:
      → Meets customer requirements (functionality)
      → Meets customer expectations (usability, performance)
      → Reasonable cost to purchase the license
      → Reasonable time to release

      Software Quality Assurance (SQA): Monitoring and measuring the strength of the development process is called Software Quality Assurance, or Verification.

      Software Quality Control (SQC): Validating the product with respect to customer requirements is called Software Quality Control, Validation, or Testing.

      "V" Model: 'V' stands for Verification & Validation. This model defines the development process together with testing stages; it is an extension of the SDLC model.

        Verification                               Validation
        Requirements Gathering & Review       ↔    User Acceptance Testing
        Analysis & Planning with Review       ↔    System Testing
        High-Level Design & Review            ↔    Integration Testing (programs testing)
        Low-Level Design & Review             ↔    Unit Testing (program testing)
                              Coding
    • In the above 'V' Model, the reviews are called Verification methods and the testing levels are called Validations. In small and medium-scale organizations, management maintains a separate testing team for System Testing only, to decrease project cost, because System Testing is the bottleneck stage in the software development process.

      I) Reviews in Analysis: In general, the software development process starts with requirements gathering — from the specific customer in application development and from model customers in product development. After gathering requirements, the responsible Business Analyst prepares the BRS (Business Requirements Specification) document, also known as the User Requirements Specification or Customer Requirements Specification. The Business Analyst then sits with the Project Manager to develop the SRS and the Project Plan. The Software Requirements Specification consists of the functional requirements to be developed and the system requirements to be used.

      Example:
        BRS (What?)   Functional Requirement: 2 inputs, 1 output, '+' is the addition operation
        SRS (How?)    System Requirement: 'C' language

      After completing the BRS and SRS, the corresponding Business Analyst conducts a review to estimate the completeness and correctness of the documents:
      → Are they correct requirements?
      → Are they complete requirements?
      → Are they achievable requirements?
      → Are they reasonable (time) requirements?
      → Are they testable requirements?
    • II) Reviews in Design: After successful analysis and review, the design-category people prepare the HLD and LLDs (High-Level Design and Low-Level Designs). The High-Level Design specifies the overall architecture of the software; it is also known as the System Design or Architectural Design.

      Example (root-to-leaf architecture):
        Root
        ├─ LOGIN
        ├─ Mailing
        ├─ Chatting
        └─ LOGOUT   (leaves)

      The internal structure of every functionality or module is specified by the Low-Level Design documents, also known as Structural Design or Component Design.

      Example (LOGIN component flow):
        User enters User ID & Password → database check
          Invalid → re-login
          Valid   → next window

      The HLD is a system-level design and an LLD is a component (module) level design, so one software design consists of one HLD and multiple LLDs. The corresponding designers conduct a review on these documents for completeness and correctness:
      → Are they understandable designs?
      → Are they correct designs?
      → Are they complete designs?
      → Are they followable designs?
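The LLD login flow described above can be sketched in code. This is a minimal illustration only; the user store and function names are invented for the example, not taken from the document.

```python
# Sketch of the LLD login flow: check user id & password against a store;
# invalid -> re-login, valid -> next window.
USERS = {"reader1": "secret"}  # invented backing store for the example

def login(user_id, password):
    """Return 'next window' for valid credentials, 're-login' otherwise."""
    if USERS.get(user_id) == password:
        return "next window"
    return "re-login"

assert login("reader1", "secret") == "next window"
assert login("reader1", "wrong") == "re-login"
```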
    • III) Unit Testing: After successful designs and reviews, the corresponding programmers start coding to construct the software physically. In this stage the programmers write programs and test each program using White Box (also called Glass Box or Open Box) testing techniques:
      → Basic Paths Coverage
      → Control Structure Coverage
      → Program Technique Coverage
      → Mutation Coverage

      (A) Basic Paths Coverage: The programmers use this technique to estimate the execution of a program. The programmer executes a program more than once to cover all areas of that program in execution.

      (B) Control Structure Coverage: After successful basic paths coverage, the corresponding programmer concentrates on the correctness of the program's execution in terms of inputs, process, and outputs.

      (C) Program Technique Coverage: After successful basic paths and control structure coverage, the corresponding programmer calculates the execution speed of the program. If the speed is not acceptable, the programmer changes the program's structure without disturbing its functionality. In this coverage the programmers use third-party software such as monitors and profilers to calculate the execution speed of the program.
      Note: monitors are used in VB/.NET; profilers are used in Java.
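Basic paths and control structure coverage can be illustrated with a small sketch (the `grade` function and its rules are invented here, not from the document): executing the program several times so that every branch runs at least once, while also checking that inputs produce the correct outputs.

```python
# A function with two decisions gives three execution paths; one test
# per path covers the basic paths, and the expected values check the
# control structure (inputs -> process -> outputs).
def grade(score):
    """Classify an exam score; invented example rules."""
    if score < 0 or score > 100:
        raise ValueError("score out of range")
    if score >= 60:
        return "pass"
    return "fail"

assert grade(75) == "pass"   # path 1: valid input, score >= 60
assert grade(30) == "fail"   # path 2: valid input, score < 60
try:
    grade(150)               # path 3: invalid input
except ValueError:
    pass
```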
    • (D) Mutation Coverage: Mutation means a change in a program. Programmers perform changes in programs to estimate the completeness and correctness of that program's testing:

        Test → Change → Repeat Test
          Test still passes after the change → incomplete testing
          Test fails after the change        → complete testing

      Basic Paths, Control Structure, and Program Technique coverage are applied to a program to test it; Mutation Coverage is applied to the program's testing to estimate the completeness and correctness of that testing.

      IV) Integration Testing: After the development and unit testing of the dependent programs, the programmers interconnect them to form a complete system/software. This testing is also known as Interface Testing. There are four approaches to integrating programs and testing:

      A) Top-Down Approach: The programmers interconnect the main program and some of the sub-programs. In place of the remaining (under-construction) sub-programs, they use temporary programs called "stubs".

        Main
        ├─ Sub1
        ├─ Sub2
        └─ STUB (stand-in for an under-construction sub-program)
    • B) Bottom-Up Approach: The programmers interconnect the sub-programs without the main program. In place of the under-construction main program, they use a temporary program called a "driver".

        Driver (stand-in for the under-construction Main)
        ├─ Sub1
        └─ Sub2

      C) Hybrid Approach: A combined approach of Top-Down and Bottom-Up, also known as the Sandwich Approach. Drivers stand in for under-construction upper-level programs while lower-level sub-programs (Sub1, Sub2, Sub3) are integrated underneath them.

      D) System Approach: Integrating the programs only after 100% of the coding is complete is called the System Approach or Big Bang Approach.
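The stub and driver ideas above can be sketched as follows. All names (`checkout`, `discount_stub`, `tax`, `tax_driver`) are invented for this illustration and are not from the document.

```python
# Top-down: the main program is ready but a sub-program is not,
# so a stub returns a fixed dummy value in its place.
def discount_stub(order_total):
    """Temporary stand-in for the unfinished discount module."""
    return 0.0

def checkout(order_total, discount_fn=discount_stub):
    return order_total - discount_fn(order_total)

assert checkout(100.0) == 100.0   # main program tested with the stub

# Bottom-up: the sub-program is ready but the main program is not,
# so a driver calls it in place of the missing main program.
def tax(amount):
    return round(amount * 0.18, 2)

def tax_driver():
    """Temporary caller standing in for the unfinished main program."""
    return tax(200.0)

assert tax_driver() == 36.0
```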
    • V) System Testing: After successful Integration Testing, the development team releases a software build to the separate testing team in the organization. System Testing is classified into three sub-stages:
      1. Usability Testing
      2. Functional Testing
      3. Non-Functional Testing

      1. Usability Testing: In general, test execution starts with Usability Testing. During this test the testing team concentrates on the user-friendliness of the software build. There are two sub-levels:
      a) User Interface Testing:
        → ease of use (understandable screens)
        → look & feel (attractive screens)
        → speed in interface (short navigations in screens)
      b) Manuals Support Testing: the testing team verifies the Help of the software.

      Case Study:
        Receive the software build from the developers after Integration Testing
        ↓ User Interface Testing   (Usability Testing)
        ↓ Functional Testing
        ↓ Non-Functional Testing
        ↓ Manuals Testing          (Usability Testing)
    • 2. Functional Testing: This is a mandatory level in System Testing. During this test the testing team concentrates on the correctness of the customer requirements in the software build. It is classified into the sub-tests below:
      a) Control Flow Testing: the changes in the properties of objects in the application/build with respect to mouse and keyboard operations.
      b) Error Handling Testing: the prevention of wrong operations with meaningful messages.
      c) Input Domain Coverage: whether the build takes valid types and sizes of inputs or not.
      d) Manipulations Coverage: whether the build provides the customer-expected outputs or not.
      e) Database Testing: the impact of front-end screen operations on the back-end database content.
      f) Sanitation Testing: finding extra functionality with respect to the customer requirements.

      Case Study: a software build consists of screens (front end) and a database (back end). Control Flow, Error Handling, Input Domain, Manipulations, Database, and Sanitation Testing together make up Functional (Black Box) Testing.
    • 3. Non-Functional Testing: This is an optional level in System Testing; it is expensive and complex to conduct. During this test the testing team concentrates on extra characteristics of the software.
      a) Reliability Testing: also known as Recovery Testing. The testing team validates whether the build changes from an abnormal state back to the normal state or not.
      b) Compatibility Testing: also known as Portability Testing. The testing team checks whether the build runs on the customer-expected platforms or not. "Platform" means operating system, browser, compilers, and other system software.
      c) Configuration Testing: also known as Hardware Compatibility Testing. The testing team checks whether the build supports hardware devices of different technologies or not (e.g., different-technology printers, networks, etc.).
      d) Inter-System Testing: also known as End-to-End or Interoperability Testing. The testing team checks whether the build can coexist with other software applications to share common resources or not.

      Case Study:
        Compatibility Testing:  S/w build → operating system
        Configuration Testing:  S/w build → hardware device (e.g., printers)
        Inter-System Testing:   S/w build → other software builds
    • e) Data Volume Testing: The testing team inserts model data into the application build to estimate the peak limit of data.
      Ex: MS Access technology software manages about a 2 GB database as a maximum, SQL Server about 6-7 GB, and Oracle technology about 10-12 GB.
      f) Installation Testing: The build is installed on a customer-expected configuration system (customer-expected RAM, HDD, processor, OS, supported software, etc.), and the team checks:
      → setup program execution to start the installation
      → an easy interface during installation
      → occupied disk space after installation
      g) Load Testing: Load means the number of concurrent users using the build at a time. The testing team executes the build under the customer-expected configuration and customer-expected load to estimate the speed of processing (performance). (Clients 1..N → server running the build.)
      h) Stress Testing: Executing the build under the customer-expected configuration and more than the customer-expected load, to estimate the peak load limit.
      i) Endurance Testing: Executing the build under the customer-expected configuration and customer-expected load to estimate continuity in processing.
      j) Security Testing:
    • It is also known as Penetration Testing. During this test the testing team concentrates on three factors.
      Authorization: the build allows valid users and prevents invalid users. Ex: login with password, PIN, digital signatures, fingerprints, eye retina, scratch cards, etc.
      Access Control: the permissions of valid users to access functionality in the build. Ex: admin vs. user.
      Encryption/Decryption: the code conversion between the client and server processes — the request is encrypted to cipher text at the client and decrypted at the server; the response is encrypted at the server and decrypted at the client.

      k) Localization and Internationalization Testing: Applicable to multi-language software, which accepts characters of multiple user languages (e.g., English, Spanish, French, etc.). In Localization Testing the test engineer provides multiple-language characters as inputs to the build. In Internationalization Testing the test engineer provides common-language (English) characters as input, and third-party tools translate the common-language characters into the other languages.
      Note: Java Unicode is a good technology for developing multi-language software.

      l) Parallel Testing: Also known as Competitive/Comparative Testing. The testing team compares the build with an old version of the same software, or with similar products in the market, to estimate its competitiveness.

      VI) User Acceptance Testing:
    • After successful System Testing, the Project Manager concentrates on UAT to collect feedback from real customers or model customers. There are two ways of conducting User Acceptance Testing:

        α (Alpha) Testing                               β (Beta) Testing
        → for software applications                     → for software products
        → by real customers, with the involvement       → by model customers
          of developers and testers
        → in the development site                       → in a model customer's site

      VII) Release Testing: After UAT and the resulting modifications, the Project Manager forms a release team (on-site team) to release the application to the real customer, or the product to the license-purchased customers. This team consists of a few programmers, a few testers, and a few hardware engineers with a team lead. The team observes the factors below at the customer site:
      1) Complete installation
      2) Overall functionality
      3) Input device handling (keyboard, mouse, etc.)
      4) Output device handling (monitor, printer, etc.)
      5) Secondary storage device handling (floppy, pen drive, etc.)
      6) OS error handling
      7) Coexistence with other software at the customer site
      Checking the above factors at the customer site is also known as Port Testing or Deployment Testing. After a successful release, the release team conducts training sessions for the customer-site people and then returns to the organization.
    • VIII) Maintenance: During the utilization of the software, the customer-site people send Software Change Requests (SCR) to the organization. These requests are received by a special team called the Change Control Board (CCB), which consists of a few programmers, a few testers, and a few hardware engineers along with the Project Manager.

        Software Change Request
        ├─ Enhancement:    impact analysis → perform s/w changes → test s/w changes (conducted by the CCB)
        └─ Missed defect:  impact analysis → perform s/w changes → test s/w changes → improve the testing process and people's capability

      Case Study:

        Testing Stage          | Deliverable to be Tested         | Responsibility                   | Testing Techniques
        Reviews in Analysis    | BRS & SRS                        | BA                               | Walkthrough, inspections & peer reviews
        Reviews in Design      | HLD & LLDs                       | Designers                        | Walkthrough, inspections & peer reviews
        Unit Testing           | Programs                         | Programmers                      | White box testing techniques
        Integration Testing    | Interfaces between programs      | Programmers                      | Top-down, bottom-up, hybrid, system
        System Testing         | S/w build                        | Test engineers / QC engineers    | Usability, functional (black box), non-functional testing
        User Acceptance Test   | S/w build                        | Real customers / model customers | α-testing, β-testing
        Release Testing        | S/w build                        | Release team                     | S/w release factors (7 factors in VII)
        Maintenance Testing    | S/w changes                      | CCB                              | Regression testing
    • Walkthrough: studying a document to estimate its completeness and correctness.
      Inspection: searching for an issue in a document.
      Peer Review: comparing a document with other, similar documents.

      Challenges in Software Testing: In general, every testing team plans formal testing. Due to certain challenges, testing teams sometimes conduct ad-hoc (informal) testing instead. There are five styles of ad-hoc testing:
      a) Monkey/Chimpanzee Testing: due to lack of time, the testing team conducts testing only on the main activities of the software.
      b) Buddy Testing: due to lack of time, project management pairs one programmer and one tester as buddies; these teams conduct development and testing in parallel.
      c) Exploratory Testing: also known as Artistic Testing. Due to lack of documentation, the test engineers depend on past experience, discussions with others, video conferences with customer-site people, internet browsing, and surfing similar software to understand the customer requirements.
      d) Pair Testing: due to lack of knowledge, senior test engineers are grouped with junior test engineers to share their knowledge.
      e) Bebugging: to estimate the efforts of the test engineers, the development people add defects to the code. This informal practice is called Bebugging, or Defect Feeding/Seeding.
    • System Testing Process:
        Test Initiation → Test Planning → Test Design → Test Execution → Test Closure
        (with Test Reporting alongside execution)

      Development vs. System Testing:
        S/w Bidding
        ↓ Kick-off Meeting
        ↓ PIN Document
        ↓ Requirements Gathering (BRS)
        ↓ Analysis & Planning (SRS & Project Plan)  ..............  System Test Initiation
        ↓ S/w Design & Review (HLD, LLDs)  .......................  System Test Planning
        ↓ Coding → Unit Testing (white box techniques)  ..........  Test Design
        ↓ Integration → Integration Testing → initial build  .....  System Test Execution (with Test Reporting)
        ↓ ........................................................  System Test Closure
        ↓ User Acceptance Test
        ↓ Release & Maintenance
    • I) System Test Initiation: In general, the System Testing process starts with test initiation by the Project Manager or Test Manager, who develops the Test Strategy (Test Methodology) document. This document defines the reasonable tests to be applied in the current project.

        Input: SRS  →  Test Initiation (Project Manager / Test Manager)  →  Output: Test Strategy

      Components in the Test Strategy: The Test Strategy document consists of the components below, which define the test approach to be followed by the team in the current project.
      1. Scope & Objective: the purpose of testing in the current project.
      2. Business Issues: the budget allocation for testing in the current project. Ex: of 100% project cost, 64% for development and maintenance and 36% for system testing.
      3. Roles & Responsibilities: the names of the jobs in the testing team and the responsibility of each job in the current project.
      4. Communication & Status Reporting: the required negotiations between the various jobs in the testing team.
    • 5. Test Responsibility Matrix (TRM): the list of reasonable tests to be applied in the current project. Ex:

        Testing Topic         | Yes/No | Comment
        UI Testing            | Yes    | -
        Manuals Testing       | Yes    | -
        Functional Testing    | Yes    | -
        Load Testing          | No     | lack of resources
        Stress Testing        | No     | lack of resources
        Endurance Testing     | No     | lack of resources
        Compatibility Testing | Yes    | -
        Inter-System Testing  | No     | no need with respect to requirements
        ...etc.               | ...    | ...

      6. Test Automation & Testing Tools: the purpose of automation testing in the current project and the testing tools available in the organization.
      7. Defect Reporting & Tracking: the required negotiation between the testing team and the development team to report and solve defects.
      8. Change & Configuration Management: the maintenance of testing deliverables for future reference.
      9. Risks & Assumptions: the expected list of risks and the solutions to overcome them.
      10. Testing Measurements & Metrics: the list of measurements and metrics used to estimate test status.
      11. Training Plan: the number of training sessions required for the testing team to understand the customer requirements.
    • II) Test Planning: After the Test Strategy document is prepared, the test-lead-category people concentrate on preparing the Test Plan documents.

        Inputs: SRS, HLD & LLDs, Project Plan, Test Strategy
        Steps: form the testing team → identify risks → prepare detailed test plans → review the plans
        Output: Test Plans

      Testing Team Formation: In general, test planning starts with forming the testing team. In this stage the Test Lead depends on the factors below:
      → project size (number of functional points)
      → number of testers available on the bench
      → test duration with respect to the project plan
      → available test environment resources (e.g., testing tools)

      Case Study (developers : testers):
        ERP, client/server, website       3 : 1
        System software application       1 : 1
        Machine-critical software         1 : 7

      Identify Risks: After the testing team is formed, the Test Lead concentrates on team-level risk analysis. Ex:
        Risk 1: lack of time
        Risk 2: lack of resources
        Risk 3: lack of documentation
        Risk 4: delays in delivery
        Risk 5: lack of seriousness about the development process
        Risk 6: lack of communication
    • Prepare Detailed Test Plans: After the testing team formation and risk analysis, the Test Lead concentrates on preparing the test plan document in the IEEE 829 format (IEEE: Institute of Electrical and Electronics Engineers).

      Format:
      1. Test Plan ID: a unique number or name for future reference to the project.
      2. Introduction: about the project.
      What to test:
      3. Test Items: the names of the modules or functionalities in the project.
      4. Features to be Tested: the names of the functionalities to be tested.
      5. Features not to be Tested: the names of already-tested modules, if available.
      How to test:
      6. Test Approach: the list of tests selected by the Project Manager.
      7. Test Environment: the hardware and software required for testing.
      8. Entry Criteria: test cases designed, test environment established, software build received from the developers.
      9. Suspension Criteria:
         → test environment abandoned
         → show-stopper in the build (build not working)
         → too many pending defects
      10. Exit Criteria:
         → all modules in the build covered
         → test duration exceeded
         → all major defects solved
      11. Test Deliverables: the list of testing documents to be prepared by the test engineers (test scenarios, test cases, automation programs, test log, defect reports, and weekend reports).
      Whom to test with:
      12. Staff and Training Needs: the names of the selected test engineers and the training sessions required to understand the customer requirements.
      13. Responsibilities: work allocation to the selected test engineers (all responsible tests on specified modules, or specified tests on all modules).
      When to test:
      14. Schedule: the dates and times for conducting testing.
      15. Risks & Assumptions: the previously analyzed risks and the solutions to overcome them.
      16. Approvals: the signatures of the Test Lead and Project Manager.
    • Review Test Plan: After the test plan document is prepared, the Test Lead conducts a review meeting to estimate its completeness and correctness:
      → requirements / modules / features / functionalities coverage
      → testing topics coverage
      → risk-oriented coverage
      Note: After test planning and before starting test design, the Business Analyst and Test Lead conduct training sessions for the selected test engineers on the customer requirements in the project. Some organizations invite outside domain (subject) experts for these training sessions.

      III) Test Design: After the required training sessions on customer requirements, the corresponding test engineers concentrate on test design to prepare test scenarios and test cases. A test scenario specifies "what" to test; a test case specifies "how" to test, including a detailed procedure. Thus test cases are derived from test scenarios. There are four methods in test design:
      1. Functional Specification Based Test Case Design (for functional testing)
      2. Use Cases Based Test Case Design (for functional testing)
      3. User Interface Based Test Case Design (for usability testing)
      4. Functional & System Specification Based Test Case Design (for non-functional testing)

      1. Functional Specification Based Test Case Design: Test engineers use this method to prepare test scenarios and cases for functional testing, depending on the functional specifications in the SRS.

        BRS → SRS (functional specifications) → HLD → LLDs → Coding → S/w Build
        From the SRS: Test Design → Test Scenarios → Test Cases → System Test Execution (on the build)
    • Approach:
      Step 1: Collect the functional specifications related to the responsible areas.
      Step 2: Take one specification and read it to gather the entry point, required inputs, normal flow, expected outputs, alternative flows, exit point, and exceptional rules.
      Step 3: Prepare test scenarios depending on the information gathered above.
      Step 4: Review those test scenarios and implement them as test cases.
      Step 5: Go to Step 2 until all responsible functional specifications are studied.

      Functional Specification – 1: A login process allows a User ID and Password for authorized users. The User ID object takes lowercase alphanumerics, 4 to 16 characters long. The Password object takes lowercase alphabets, 4 to 8 characters long. Prepare the test scenarios.

      Test Scenario 1: Verify the User ID object
      Boundary Value Analysis (BVA) (size):
        Min   = 4 chars  → pass      Max   = 16 chars → pass
        Min-1 = 3 chars  → fail      Max-1 = 15 chars → pass
        Min+1 = 5 chars  → pass      Max+1 = 17 chars → fail
      Equivalence Class Partition (ECP) (type):
        Valid: a-z, 0-9
        Invalid: A-Z, special characters, blank field

      Test Scenario 2: Verify the Password object
      BVA (size):
        Min   = 4 chars  → pass      Max   = 8 chars → pass
        Min-1 = 3 chars  → fail      Max-1 = 7 chars → pass
        Min+1 = 5 chars  → pass      Max+1 = 9 chars → fail
      ECP (type):
        Valid: a-z
        Invalid: 0-9, A-Z, special characters, blank field
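The six classic BVA values for a size rule like "4 to 16 characters" can be generated mechanically. This is a sketch only; `bva_lengths` and `valid_userid` are invented names, and the validity rule is the User ID rule from the specification above.

```python
def bva_lengths(lo, hi):
    """The six classic BVA lengths: min-1, min, min+1, max-1, max, max+1."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def valid_userid(s):
    """User ID rule from the spec: lowercase alphanumeric, 4-16 chars."""
    return 4 <= len(s) <= 16 and s.isalnum() and s == s.lower()

# Each boundary length is tried; only lengths inside [4, 16] should pass.
for n in bva_lengths(4, 16):
    candidate = "a" * n
    expected = 4 <= n <= 16
    assert valid_userid(candidate) == expected
```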
    • Test Scenario 3: Verify the login operation
      Decision Table:
        User ID      | Password     | Expected O/p
        valid value  | valid value  | next window
        valid value  | invalid      | error message
        invalid      | valid        | error message
        valid        | blank field  | error message
        blank field  | valid        | error message
      Note: Exhaustive testing is not possible; for this reason the testing team conducts optimal testing using black box testing techniques like BVA, ECP, decision tables, regular expressions, etc.

      Functional Specification – 2: In an insurance application, users apply for different types of insurance policies. If a user selects Type-A insurance, the system asks for the user's age. The age value should be greater than 16 years and less than 80 years. Prepare the test scenarios.

      Test Scenario 1: Verify Type-A selection
      Test Scenario 2: Verify that focus moves to Age when Type-A insurance is selected
      Test Scenario 3: Verify the Age value
      BVA (range):
        Min   = 17 → pass      Max   = 79 → pass
        Min-1 = 16 → fail      Max-1 = 78 → pass
        Min+1 = 18 → pass      Max+1 = 80 → fail
      ECP (type):
        Valid: 0-9
        Invalid: a-z, A-Z, special characters, blank field

      Functional Specification – 3: In a shopping application, users place purchase orders for different types of items. The purchase order allows the user to select an Item No. and to enter a Qty. up to 10. The purchase order returns the Total Amount along with the one-item price. Prepare the test scenarios.
    • Test Scenario 1: Verify the Item No. selection
      Test Scenario 2: Verify the Qty. value
      BVA (range):
        Min   = 1 → pass      Max   = 10 → pass
        Min-1 = 0 → fail      Max-1 = 9  → pass
        Min+1 = 2 → pass      Max+1 = 11 → fail
      ECP (type):
        Valid: 0-9
        Invalid: a-z, A-Z, special characters, blank field
      Test Scenario 3: Verify the Total Amount, given Qty. × item price

      Functional Specification – 4: A door opens when a person comes in front of the door, and the door closes when that person goes inside. Prepare the test scenarios.
      Test Scenario 1: Verify the door opening
        Person  | Door   | Criteria
        present | opened | pass
        present | closed | fail
        absent  | opened | fail
        absent  | closed | pass
      Test Scenario 2: Verify the door closing
        Person | Door   | Criteria
        inside | closed | pass
        inside | opened | fail
      Test Scenario 3: Verify the door operation when a person is standing in the middle of the doorway.

      Functional Specification – 5: In an e-banking application, customers connect to the bank server through a login process. The login allows the customer to fill in the fields below:
        Password: 6-digit number
        Prefix:   3-digit number that does not start with 0 or 1
        Suffix:   6-digit alphanumeric
    • Area Code: 3-digit number, but optional
      Command: Cheque Deposit, Money Transfer, Mini Statement, and Bills Paid
      Prepare the test scenarios.

      Test Scenario 1: Verify the Password value
      BVA (size): Min = Max = 6 digits → pass; 5 digits → fail; 7 digits → fail
      ECP (type): Valid: 0-9. Invalid: a-z, A-Z, special characters, blank field.

      Test Scenario 2: Verify the Prefix
      BVA (size): Min = Max = 3 digits → pass; 2 digits → fail; 4 digits → fail
      ECP (type): Valid: [2-9][0-9][0-9]. Invalid: a-z, A-Z, special characters, blank field.

      Test Scenario 3: Verify the Suffix
      BVA (size): Min = Max = 6 digits → pass; 5 digits → fail; 7 digits → fail
      ECP (type): Valid: 0-9, a-z, A-Z. Invalid: special characters, blank field.

      Test Scenario 4: Verify the Area Code
      BVA (size): Min = Max = 3 digits → pass; 2 digits → fail; 4 digits → fail
      ECP (type): Valid: 0-9, blank field. Invalid: a-z, A-Z, special characters.
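ECP classes written as patterns, such as `[2-9][0-9][0-9]` for the Prefix above, can be checked directly with regular expressions. A sketch (the pattern is from the scenario above; the test values are invented):

```python
import re

# The Prefix ECP class: exactly 3 digits, not starting with 0 or 1.
PREFIX = re.compile(r"^[2-9][0-9][0-9]$")

assert PREFIX.match("200")        # valid
assert PREFIX.match("999")        # valid
assert not PREFIX.match("100")    # starts with 1 -> invalid
assert not PREFIX.match("045")    # starts with 0 -> invalid
assert not PREFIX.match("20")     # too short
assert not PREFIX.match("2a0")    # non-digit
assert not PREFIX.match("")       # blank field
```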
    • Test Scenario 5: Verify the command selection (Cheque Deposit, Money Transfer, Mini Statement, and Bills Paid)
      Test Scenario 6: Verify the login operation to connect to the bank server
      Decision Table:
        Remaining Fields | Area Code    | Expected O/p
        all valid        | valid        | next window
        all valid        | blank field  | next window
        all valid        | invalid      | error message
        any one invalid  | valid/blank  | error message
        any one blank    | valid/blank  | error message

      Functional Specification – 6: In a library management system, readers apply for an Identity No. To get this number, the reader fills in the fields below:
        Reader Name: lowercase alphabets with an initial capital, as a single word
        House Name:  lowercase alphabets, as a single word
        PIN Code:    related to the India Postal Department
        City Name:   uppercase alphabets, as a single word
        Phone No.:   related to Indian subscribers, and optional
      Prepare the test scenarios.

      Test Scenario 1: Verify the Reader Name
      BVA (size):
        Min   = 1 char   → pass      Max   = 256 chars → pass
        Min-1 = 0 chars  → fail      Max-1 = 255 chars → pass
        Min+1 = 2 chars  → pass      Max+1 = 257 chars → fail
      (In any front-end development tool, the default maximum for a text field is 256 characters.)
      ECP (type):
        Valid: [A-Z][a-z]*
        Invalid: 0-9, special characters, blank field

      Test Scenario 2: Verify the House Name
      BVA (size):
        Min   = 1 char   → pass      Max   = 256 chars → pass
        Min-1 = 0 chars  → fail      Max-1 = 255 chars → pass
        Min+1 = 2 chars  → pass      Max+1 = 257 chars → fail
ECP (Type) : Valid : [a-z]*; Invalid : A-Z, 0-9, Special Characters, Blank Field

Test Scenario 3 :- Verify PIN Code
BVA (Size) : Min = Max = 6 Digits → Pass; 5 Digits → Fail; 7 Digits → Fail
ECP (Type) : Valid : [1-9][0-9][0-9][0-9][0-9][0-9]; Invalid : a-z, A-Z, Special Characters, Blank Field

Test Scenario 4 :- Verify City Name
BVA (Size) : Min = 1 Char → Pass; Min-1 = 0 Char → Fail; Min+1 = 2 Char → Pass; Max = 256 Char → Pass; Max-1 = 255 Char → Pass; Max+1 = 257 Char → Fail
ECP (Type) : Valid : [A-Z]*; Invalid : a-z, 0-9, Special Characters, Blank Field

Test Scenario 5 :- Verify Phone Number
BVA (Size) : Min = 10 Digits → Pass; Min-1 = 9 Digits → Fail; Min+1 = 11 Digits → Pass; Max = 12 Digits → Pass; Max+1 = 13 Digits → Fail
ECP (Type) : Valid : 0-9, Blank Field; Invalid : A-Z, a-z, Special Characters
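The size (BVA) and type (ECP) checks above translate directly into code. The sketch below is illustrative, not from the original material: `bva_lengths` generates the boundary lengths to try, and `is_valid_pin` applies the ECP class for the PIN Code field (`[1-9][0-9][0-9][0-9][0-9][0-9]`) as a regular expression.

```python
import re

def bva_lengths(min_len, max_len):
    """Boundary Value Analysis on size: lengths expected to pass and to fail."""
    passes = {min_len, max_len}
    if max_len > min_len:                       # Min+1 / Max-1 apply only to a range
        passes |= {min_len + 1, max_len - 1}
    fails = {l for l in (min_len - 1, max_len + 1) if l >= 0}
    return sorted(passes), sorted(fails)

# ECP on type for the PIN Code field: first digit 1-9, then five digits 0-9.
PIN_CODE = re.compile(r"[1-9][0-9]{5}")

def is_valid_pin(value: str) -> bool:
    return PIN_CODE.fullmatch(value) is not None
```

For a fixed-size field, `bva_lengths(6, 6)` yields `([6], [5, 7])`, matching the "6 Digits → Pass; 5 Digits → Fail; 7 Digits → Fail" rule; for the Phone Number range it would be `bva_lengths(10, 12)`.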
Test Scenario 6 :- Verify Reader Registration
Decision Table :
Remaining Fields | Telephone Number | Expected O/p
All are valid | Valid | Identity No.
All are valid | Blank Field | Identity No.
All are valid | Invalid | Error Msg.
Any one Invalid | Valid / Blank | Error Msg.
Any one Blank Field | Valid / Blank | Error Msg.

Functional Specification – 7 :- A Computer Shut Down Operation
Test Scenario 1 : Verify Shut Down option selection using Shut Down
Test Scenario 2 : Verify Shut Down option selection using Alt+F4
Test Scenario 3 : Verify Shut Down option selection using Ctrl+Alt+Del
Test Scenario 4 : Verify Shut Down operation success
Test Scenario 5 : Verify Shut Down operation using the Run command
Test Scenario 6 : Verify Shut Down operation when a process is running
Test Scenario 7 : Verify Shut Down operation using the Power Off button

Functional Specification – 8 :- Money Withdrawal from an ATM with all rules and regulations
Test Scenario 1 : Verify card insertion
Test Scenario 2 : Verify card insertion at a wrong angle
Test Scenario 3 : Verify Cancel after card insertion
Test Scenario 4 : Verify language selection
Test Scenario 5 : Verify Cancel after selection of language
Test Scenario 6 : Verify PIN entry
Test Scenario 7 : Verify operation with a wrong PIN
Test Scenario 8 : Verify operation when you enter a wrong PIN 3 times consecutively
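A decision table like the Reader Registration table above is just a lookup over the input conditions. The following Python sketch (field names assumed for illustration) encodes that rule so each table row becomes one assertion in a test:

```python
def registration_outcome(remaining_fields: str, phone: str) -> str:
    """Decision table for Reader Registration.
    remaining_fields: 'valid' or 'invalid'; phone: 'valid', 'blank' or 'invalid'.
    The phone number is optional, so a blank value is acceptable."""
    if remaining_fields == "valid" and phone in ("valid", "blank"):
        return "Identity No."
    return "Error Msg."
```

Each row of the decision table can then be checked mechanically, e.g. `registration_outcome("valid", "blank")` should give `"Identity No."`.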
Test Scenario 9 : Verify Cancel after entering the PIN
Test Scenario 10 : Verify account type selection
Test Scenario 11 : Verify operation when you select a wrong account type with respect to the inserted card
Test Scenario 12 : Verify Cancel after account type selection
Test Scenario 13 : Verify withdrawal option selection
Test Scenario 14 : Verify Cancel after selection of withdrawal
Test Scenario 15 : Verify amount entry
Test Scenario 16 : Verify operation with a wrong denomination in the amount
Test Scenario 17 : Verify withdrawal operation success (correct amount, right receipt, able to take the card back)
Test Scenario 18 : Verify withdrawal operation with an amount greater than the possible balance
Test Scenario 19 : Verify withdrawal operation with an amount greater than the day limit
Test Scenario 20 : Verify withdrawal operation with a network problem
Test Scenario 21 : Verify withdrawal with lack of amount in the ATM
Test Scenario 22 : Verify withdrawal operation with the number of transactions per day exceeded
Test Scenario 23 : Verify withdrawal operation with another bank's card
Test Scenario 24 : Verify withdrawal operation with a stolen card
2. Use Cases Based Test Case Design :
It is an alternative to Functional Specification Based Test Case Design. In this method the Test Engineers depend on Use Cases instead of Functional Specifications to prepare Test Scenarios and Test Cases.

BRS → SRS (Functional Specifications) → HLD → LLDs → Coding (UT & IT) → S/w Build → System Test Execution
Use Cases (developed by BA + Test Lead from the Functional Specifications) → Test Scenarios → Test Cases

In the above flow, the Business Analyst and Test Lead category people develop use cases from the corresponding functional specifications in the SRS. Every use case is an implemented form of a functional specification.

Use Case Format :-
1. Use Case ID : Unique number or name for future reference
2. Use Case Description : The summary of the corresponding functionality
3. Required Inputs : The required inputs for the corresponding functionality
4. Precondition : The necessary condition to follow before operating the corresponding functionality
5. Events List : A step-by-step procedure of events/tasks with expected outputs
6. Activity Flow Diagram : A pictorial/diagrammatic representation of the corresponding functionality
7. Post Condition : Necessary tasks to do after the corresponding functionality
8. Alternative Events List : Alternative procedures for this functionality, if available
9. Prototype : A screenshot related to the corresponding functionality
10. Related Use Cases : The names of other use cases related to the corresponding functionality

Approach :
Step 1 : Collect use cases of responsible areas
Step 2 : Take one use case and study it
Step 3 : Identify the entry point, required inputs, normal flow, expected outputs, exit point, alternative flows and exception rules
Step 4 : Prepare Test Scenarios depending on the identified information
Step 5 : Review those scenarios and implement them as Test Cases
Step 6 : Go to Step 2 until all responsible use cases are studied

Use Case 1 :
1. Use Case ID : UC_Login
2. Use Case Description : Login operation for authorization
3. Required Inputs : User ID in lower-case alphabets, 4-16 characters long. Password alphanumeric in lower case, 4-8 characters long.
4. Precondition : New User Registration to get a valid User ID & Password
5. Events List : Enter User ID and Password values, then click the OK button → Next window for a valid user; "invalid data" error msg. for an invalid user
6. Activity Flow Diagram : User enters User ID & Password → LOGIN validates against the Data Base → Valid : Next Window; Invalid : Error Msg. and Re-Login
7. Post Condition : Logout operation is mandatory after successful Login
8. Alternative Events List : None
9. Prototype : (screenshot)
10. Related Use Cases : UC_New User, UC_Logout
Test Scenario 1 :- Check User ID
BVA (Size) : Min = 4 Char → Pass; Min-1 = 3 Char → Fail; Min+1 = 5 Char → Pass; Max = 16 Char → Pass; Max-1 = 15 Char → Pass; Max+1 = 17 Char → Fail
ECP (Type) : Valid : a-z; Invalid : A-Z, 0-9, Special Characters, Blank Field

Test Scenario 2 :- Check Password
BVA (Size) : Min = 4 Char → Pass; Min-1 = 3 Char → Fail; Min+1 = 5 Char → Pass; Max = 8 Char → Pass; Max-1 = 7 Char → Pass; Max+1 = 9 Char → Fail
ECP (Type) : Valid : a-z, 0-9; Invalid : A-Z, Special Characters, Blank Field

Test Scenario 3 :- Check OK Button Click
User ID | Password | Expected Output
Valid | Valid | Next Window
Valid | Invalid | Invalid Data Error Msg.
Invalid | Valid | Invalid Data Error Msg.
Value | Blank Field | Invalid Data Error Msg.
Blank Field | Value | Invalid Data Error Msg.

Test Scenario 4 :- Check Cancel Button
Event | Expected Output
Click Cancel after opening login | Login Window Closed
Click Cancel after entering User ID | Login Window Closed
Click Cancel after entering Password | Login Window Closed

Test Scenario 5 :- Check Minimize Icon
Test Scenario 6 :- Check Maximize Icon
Test Scenario 7 :- Check Close Icon
Use Case 2 :
1. Use Case ID : UC_Book_Issue
2. Use Case Description : Issue a book to a valid user
3. Required Inputs : User ID in the format mm_yy_xxxx (xxxx = 4 digits); Book ID in the format BOOK_xxxx
4. Precondition : New User Registration to get a valid User ID
5. Events List :
Enter User ID and click the "Go" button → Focus to Book ID for a valid user; "Invalid User" error msg. for an invalid user
Enter Book ID and click the "Go" button → "Book Issued" message for an available book; "Unavailable Book" message for an unavailable Book ID
6. Activity Flow Diagram : User enters User ID → BOOK ISSUE validates against the Data Base → Invalid : Re-Login; Valid : enter Book ID → validate against the Data Base → Valid : "Book Issued"; Unavailable : "Unavailable Book" message
7. Post Condition : Receive the issued book from the Computer Operator
8. Alternative Events List : None
9. Prototype : "Book Issue" window with "User ID [____] Go" and "Book ID [____] Go"
10. Related Use Cases : UC_New User, UC_Book Feeding

Test Scenario 1 :- Verify User ID
BVA (Size) : Min = Max = 10-position value → Pass; 9-position value → Fail; 11-position value → Fail
ECP (Type) : Valid : [0][1-9][_][0-9][0-9][_][0-9][0-9][0-9][0-9] or [1][0-2][_][0-9][0-9][_][0-9][0-9][0-9][0-9]; Invalid : a-z, A-Z, digits in wrong positions, Special Characters except _, Blank Field

Test Scenario 2 :- Verify "Go" button click
User ID | Expected O/p after clicking "Go"
Valid Value | Focus to Book ID
Invalid Value | "Invalid User" Error Message
Blank Field | "Invalid User" Error Message

Test Scenario 3 :- Verify Book ID
BVA (Size) : Min = Max = 8-position value → Pass; 7-position value → Fail; 9-position value → Fail
ECP (Type) : Valid : [B][O][O][K][_][0-9][0-9][0-9][0-9]; Invalid : a-z, A-Z except B, O, K, Special Characters except _, Blank Field
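The positional ECP classes above are, in effect, regular expressions. A sketch of how they could be automated, assuming the underscore separators shown in the valid classes (the exact separator varies between the use case and the ECP table):

```python
import re

# User ID: mm_yy_xxxx, month 01-12 -- the 10-position value of Test Scenario 1.
USER_ID = re.compile(r"(0[1-9]|1[0-2])_[0-9]{2}_[0-9]{4}")

# Book ID: BOOK_xxxx -- the 8-position value of Test Scenario 3.
BOOK_ID = re.compile(r"BOOK_[0-9]{4}")

def valid_user_id(value: str) -> bool:
    return USER_ID.fullmatch(value) is not None

def valid_book_id(value: str) -> bool:
    return BOOK_ID.fullmatch(value) is not None
```

A month of 13, a missing digit, or lower-case "book" all fall into the invalid equivalence classes and are rejected.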
Test Scenario 4 :- Verify "Go" Click
Book ID | Expected O/p after clicking "Go"
Valid Book ID | "Book Issued" Msg.
Invalid Book ID | "Unavailable Book" Message
Blank Field | "Unavailable Book" Message

Test Scenario 5 :- Verify Minimize Icon
Test Scenario 6 :- Verify Maximize Icon
Test Scenario 7 :- Verify Close Icon

3. User Interface Based Test Design :
Functional Specification Based and Use Case Based Test Design are used to prepare Test Scenarios and Test Cases for Functional Testing. User Interface Based Test Design is used by Test Engineers to prepare Test Scenarios and Test Cases for Usability Testing.

BRS → SRS (UI Requirements) → HLD → LLDs → Coding (UT & IT) → S/w Build → System Test Execution
UI Requirements → Test Scenarios → Test Cases

In this method the Test Engineers depend on the User Interface Requirements in the SRS. In general, Test Engineers write common Test Scenarios for Usability Testing, which are applicable to any type of application.

Test Scenario 1 :- Verify spelling in every screen
Test Scenario 2 :- Verify the meaning of error messages
Test Scenario 3 :- Verify initial capitals of labels in every screen
Test Scenario 4 :- Verify color uniqueness throughout the screens
Test Scenario 5 :- Verify font or style uniqueness throughout the screens
Test Scenario 6 :- Verify size uniqueness throughout the screens
Test Scenario 7 :- Verify alignment of objects in every screen
Test Scenario 8 :- Verify line-spacing uniqueness throughout the screens
Test Scenario 9 :- Verify tool tips of icons in every screen
Test Scenario 10 :- Verify the default object in every screen
Test Scenario 11 :- Verify uniform background colors of objects in every screen
Test Scenario 12 :- Verify scroll bars when the screen size is greater than the desktop
Test Scenario 13 :- Verify keyboard access to every object in every screen
Test Scenario 14 :- Verify abbreviations & shortcuts in screens
Test Scenario 15 :- Verify multiple-data object positions in every screen (Ex : List Box, Menu, Table, etc.)
Test Scenario 16 :- Verify help messages (Manual Support Testing)
Test Scenario 17 :- Verify functionally grouped objects in every screen
Test Scenario 18 :- Verify borders of functionally grouped objects in every screen
Test Scenario 19 :- Verify labels of objects with respect to functionality
Test Scenario 20 :- Verify window labels with respect to functionality

4. Functional and System Specification Based Test Design :
After completing Test Scenario selection for Functional and Usability Testing, the Test Engineers concentrate on Test Scenario selection for Non-Functional Testing, depending on the Functional and System Specifications in the SRS. Functional Specifications describe the required functionalities of the software, and System Specifications describe the required environment.
BRS → SRS (Functional Specifications + System Specifications) → HLD & LLDs → Coding (UT & IT) → S/w Build → System Test Execution
Functional + System Specifications → Test Scenarios → Test Cases

Example Test Scenarios for Compatibility Testing :
Test Scenario 1 : Verify Login on Win NT with the customer-expected configuration
Test Scenario 2 : Verify Login on Win 2000 with the customer-expected configuration
Test Scenario 3 : Verify Login on Win Vista with the customer-expected configuration
And more…

Example Test Scenarios for Performance Testing :
Test Scenario 1 : Verify Login under the customer-expected load and configuration
Test Scenario 2 : Verify Login under more than the customer-expected configuration
And more…

Example Test Scenarios for Installation Testing :
Test Scenario 1 : Verify the setup program to start installation
Test Scenario 2 : Verify interface easiness during installation
Test Scenario 3 : Verify occupied disk space after installation
And more…

Test Case Format :
After completing Test Scenario selection for their responsible areas in terms of Functional, Usability and Non-Functional Testing, the Test Engineers implement them as Test Cases, using the IEEE (Institute of Electrical & Electronics Engineers) 829 Test Case Format.
1. Test Case ID : Unique number/name for future reference
2. Test Case Name : The corresponding Test Scenario
3. Feature to be Tested : The name of the corresponding module or functionality
4. Test Suite ID : The unique number or name of the Test Batch this case belongs to
5. Priority : The importance of this Test Case (P0 for Functional, P1 for Non-Functional, P2 for Usability Test Cases)
6. Test Environment : The required hardware and software to execute this test
7. Test Effort : Person-hours (Ex. 20 min is an average test execution time)
8. Test Duration : The date and time to execute this test
9. Test Setup : The necessary tasks to do before starting this test execution
10. Test Procedure / Data Matrix :
Test Procedure : Step No. | Action/Task | Required I/p | Expected O/p | Actual O/p | Result | Defect ID | Comments (the first four columns are filled during Test Design, the remaining during Test Execution)
Data Matrix : I/p Object | ECP (Type) Valid / Invalid | BVA (Range / Size) Min / Max
11. Test Case Pass / Fail Criteria : The final result of this Test Case after execution

Note 1 : In general, test engineers do not fill all fields in the Test Case Format, due to lack of time and the similarity of field values across Test Cases.
Note 2 : Test engineers use a test procedure for operational test cases and a data matrix for input-object test cases.

Functional Specification :
In a banking application, valid employees create fixed deposits with depositor-provided information. In this fixed deposit operation, the employees fill in the fields below.
Depositor Name : Alphabets in lower case with initial capital; allows multiple words in the name
Amount : 1500 to 1,00,000
Time : Up to 12 months
Interest : Numeric with one decimal
If Time > 10 months, then Interest > 10% as per bank rules

Prepare Test Scenarios and Test Cases :
Test Scenario 1 : Verify Depositor Name
Test Scenario 2 : Verify Amount
Test Scenario 3 : Verify Time
Test Scenario 4 : Verify Interest
Test Scenario 5 : Verify Fixed Deposit Operation
Test Scenario 6 : Verify Fixed Deposit Operation with Bank Rule

Test Case Documents :
Test Case 1 :-
1. Test Case ID : TC_FD_Ravi_24th May_1
2. Test Case Name : Verify Depositor Name
3. Test Suite ID : TS_FD
4. Priority : P0
5. Test Setup : Depositor Name object is taking inputs
6. Data Matrix : I/p Object : Depositor Name; ECP Valid : ([A-Z][a-z]*)*; ECP Invalid : 0-9, Spl. Char, Blank Field; BVA (Size) Min : 1 Char, Max : 256 Char

Test Case 2 :-
1. Test Case ID : TC_FD_Ravi_24th May_2
2. Test Case Name : Verify Amount
3. Test Suite ID : TS_FD
4. Priority : P0
5. Test Setup : Amount object is taking inputs
6. Data Matrix : I/p Object : Amount; ECP Valid : 0-9; ECP Invalid : a-z, A-Z, Spl. Char, Blank Field; BVA (Range) Min : 1500, Max : 100000

Test Case 3 :-
1. Test Case ID : TC_FD_Ravi_24th May_3
2. Test Case Name : Verify Time
3. Test Suite ID : TS_FD
4. Priority : P0
5. Test Setup : Time object is taking inputs
6. Data Matrix : I/p Object : Time; ECP Valid : 0-9; ECP Invalid : a-z, A-Z, Spl. Char, Blank Field; BVA (Range) Min : 1 Month, Max : 12 Months

Test Case 4 :-
1. Test Case ID : TC_FD_Ravi_24th May_4
2. Test Case Name : Verify Interest
3. Test Suite ID : TS_FD
4. Priority : P0
5. Test Setup : Interest object is taking inputs
6. Data Matrix : I/p Object : Interest; ECP Valid : 0-9 with one decimal; ECP Invalid : a-z, A-Z, Spl. Char, Blank Field; BVA (Range) Min : 0.1, Max : 100
Test Case 5 :-
1. Test Case ID : TC_FD_Ravi_24th May_5
2. Test Case Name : Verify Fixed Deposit Operation
3. Test Suite ID : TS_FD
4. Priority : P0
5. Test Setup : Valid values are available in hand
6. Test Procedure :
Step 1 : Connect to the Bank Server | Valid Emp Id | Menu Appears
Step 2 : Select the "FD" option | None | Fixed Deposit Form Opened
Step 3 : Fill fields and click OK | All are valid | Acknowledgement
Step 3 : Fill fields and click OK | Any one invalid | Error Msg.
Step 3 : Fill fields and click OK | Any one Blank Field | Error Msg.

Test Case 6 :-
1. Test Case ID : TC_FD_Ravi_24th May_6
2. Test Case Name : Verify Fixed Deposit Operation with Bank Rule
3. Test Suite ID : TS_FD
4. Priority : P0
5. Test Setup : Valid values are available in hand
6. Test Procedure :
Step 1 : Connect to the Bank Server | Valid Emp Id | Menu Appears
Step 2 : Select the "FD" option | None | Fixed Deposit Form Opened
Step 3 : Fill fields and click OK | Valid Name, Amount, Time > 10 with Interest > 10 | Acknowledgement
Step 3 : Fill fields and click OK | Valid Name, Amount, Time > 10 with Interest <= 10 | Error Msg.
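Test Cases 1-6 above all share the same IEEE 829 field subset, so they can be held as structured records. A minimal Python sketch (the class and field names are illustrative, not part of the IEEE 829 standard itself):

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class TestCase:
    """Subset of the IEEE 829 test case fields used in Test Cases 1-6 above."""
    case_id: str
    name: str
    suite_id: str
    priority: str        # P0 functional, P1 non-functional, P2 usability
    setup: str = ""
    # Test procedure rows: (action, required input, expected output)
    procedure: List[Tuple[str, str, str]] = field(default_factory=list)

fd = TestCase(
    "TC_FD_Ravi_24th May_5", "Verify Fixed Deposit Operation", "TS_FD", "P0",
    setup="Valid values are available in hand",
    procedure=[
        ("Connect to the Bank Server", "Valid Emp Id", "Menu Appears"),
        ("Select the 'FD' option", "None", "Fixed Deposit Form Opened"),
        ("Fill fields and click OK", "All are valid", "Acknowledgement"),
    ])
```

Keeping the procedure as (action, input, expected) triples mirrors the test-design half of the procedure table; actual output, result and defect ID would be filled in at execution time.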
As in the above example, the Test Engineers implement Test Scenarios as Test Cases. Every Test Case is a combination of the corresponding Test Scenario and the details required to apply that test on the S/w Build.

Test Case Selection Review :
After completing Test Scenario and Test Case writing, the Test Lead & Test Engineers conduct a review meeting to estimate the completeness and correctness of those documents. In this review the Testing Team depends on the below coverages.
□ Requirements Oriented Coverage (Modules)
□ Testing Topic Oriented Coverage (UT, FT, NFT)

IV. Test Execution :-
After completing Test Design and Review, the Testing Team concentrates on the below issues.
□ Formal meeting with developers
□ Test Environment establishment
□ Levels of Test Execution

□ Formal Meeting :-
In general, the Test Execution process starts with a formal meeting between Testing Team & Development Team representatives. In this meeting the representatives concentrate on Build Version Control and Defect Tracking. Under build version control, the Development Team modifies the S/w Build coding to resolve defects and releases the modified build with a unique version number. This version numbering lets Test Engineers distinguish the old build from the modified build. For version controlling, developers also use version control tools (Ex : VSS (Visual SourceSafe)). To report mismatches to the Development Team, the Test Engineers report each mismatch to the Defect Tracking Team (DTT) first.
Test Lead + Project Manager + Project Lead + Business Analyst → DTT
□ Test Environment Establishment :-
After the formal meeting, the Testing Team concentrates on Test Environment establishment with all required hardware and software. The Development Environment, Project Management and Test Environment connect to the server's Configuration Repository via FTP (single location) or TCP/IP (different locations).
FTP : File Transfer Protocol
TCP/IP : Transmission Control Protocol / Internet Protocol

□ Levels of Test Execution :-
Development releases the Initial Build → Level-0 (Sanity) → Stable Build → Level-1 (Comprehensive) → Defect Report → Defect Fixing → Modified Build → Level-2 (Regression) → Level-3 (Final Regression)
Case Study :-
Initial Build → Sanity Testing (Level-0) → Stable Build → Comprehensive Testing (Level-1) → Defect Detection → Modified Build → Regression Testing (Level-2) → Defect Closing → Master Build → Final Regression (Level-3) → Golden Build (able to release)

□ Levels of Test Execution Vs Test Cases :-
Level-0 → Some P0 (Functional) Test Cases
Level-1 → All P0, P1 & P2 Test Cases
Level-2 → Selected P0, P1 & P2 Test Cases with respect to modifications
Level-3 → Selected P0, P1 & P2 Test Cases with respect to defect density

□ Level-0 Sanity Testing :-
After downloading the Initial Build from the Configuration Repository on the server, the Testing Team concentrates on Level-0 Sanity Testing to estimate the testability of that software. Testability means understandable, operatable, observable, controllable, consistent, simple, maintainable and automatable. If the Initial Build is not stable, the Testing Team sends it back to the developers. If the build is stable, the Test Engineers concentrate on Level-1 Test Execution to detect defects. Level-0 testing is also known as Sanity Testing / Smoke Testing / Testability Testing / Tester Acceptance Testing / Build Verification Testing / Octangle Testing.
□ Level-1 Comprehensive / Real Testing :-
In Level-1 Test Execution, the Test Engineers execute all Test Cases as batches. Every test batch consists of a set of dependent Test Cases; the end state of one test is the base state of the next. Test batches are also known as Test Suites, Test Sets, Test Builds or Test Chains.

Flow : Receive the stable build from developers → make Test Cases into batches → select a batch → select a Test Case → take a step in the case → compare the expected value with the actual value in the build → if equal, continue with the next step, case and batch; if not, report the defect.

From the above flow, the Test Engineers continue Test Execution batch by batch, and case by case within every batch. If a Test Case step's expected value is not equal to the actual value, the Test Engineer concentrates on Defect Reporting. If possible, they also continue Test Execution. In Level-1 test execution, the Test Engineers prepare a Test Log document to record test results.

Test Log Document Format :-
Test Case ID | Result (Pass / Fail) | Defect ID | Executed By | Executed On | Comments

There are three types of test results :
→ Passed : all expected values are equal to the actual values
→ Failed : any one expected value is not equal to the actual value
→ Blocked : test execution postponed due to incorrect parent functionality
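The test log format and the three result types above can be sketched as a small record type. This is an illustrative sketch; the column names follow the format table, not any particular tool:

```python
from enum import Enum

class Result(Enum):
    PASSED = "Passed"     # all expected values equal the actual values
    FAILED = "Failed"     # any one expected value differs from the actual value
    BLOCKED = "Blocked"   # postponed: parent functionality is incorrect

def log_entry(case_id, result, defect_id=None, executed_by="", executed_on=""):
    """One row of the Test Log document format above."""
    return {
        "Test Case ID": case_id,
        "Result": result.value,
        "Defect ID": defect_id,   # filled only for failed cases
        "Executed By": executed_by,
        "Executed On": executed_on,
    }
```

A failed execution carries its defect ID, while a passed one leaves that column empty, matching how the log feeds Defect Reporting in the next section.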
V. Defect Reporting & Tracking :-
During Level-1 Test Execution, some Test Case expected values are not equal to the actual values. These mismatches are called Defects / Issues / Bugs / Flaws.

Defect Report :-
1. Defect ID : Unique number or name
2. Description : Summary of the mismatch between the tester-expected value and the build's actual value
3. Build Version ID : The version number of the current build (in which the Test Engineers detected this defect)
4. Feature : The name of the module or functionality (in which this defect was detected)
5. Test Case ID : The ID of the failed Test Case (during whose execution this defect was detected)
6. Reproducible : Yes → the defect appears every time in test execution; No → the defect appears rarely
7. If Yes, attach the procedure
8. If No, attach the procedure and screenshots
9. Severity : The seriousness of the defect in terms of functionality
High / Critical :- Not able to continue testing without resolving it
Medium / Major :- Able to continue testing, but mandatory to resolve
Low / Minor :- Able to continue; may or may not be resolved
10. Priority : The importance of solving the defect in terms of customer interest (High / Medium / Low)
11. Detected By : The name of the Test Engineer
12. Detected On : The date of detection and submission
13. Status : New (reported for the first time) / Re-Open (re-reported)
14. Assigned To : Reported to the Defect Tracking Team
15. Suggested Fix : A suggestion to solve the defect (optional)
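The defect report fields above, in particular the severity/priority distinction, can be captured as a record. A hypothetical sketch (field names chosen to mirror the format; no real defect tracker's schema is implied):

```python
from dataclasses import dataclass

@dataclass
class DefectReport:
    """Key fields of the defect report format above (illustrative subset)."""
    defect_id: str
    description: str
    build_version: str
    severity: str       # High / Medium / Low -- seriousness w.r.t. functionality
    priority: str       # High / Medium / Low -- urgency w.r.t. customer interest
    reproducible: bool  # True: attach procedure; False: also attach screenshots
    status: str = "New" # a new report always starts in the "New" state

d = DefectReport("D_01", "OK click gives no error for blank password",
                 "Build_1.2", severity="High", priority="Medium",
                 reproducible=True)
```

Note that severity and priority are independent axes: a defect can block testing (high severity) yet be of low customer interest, or vice versa.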
Defect Reporting Process :
→ The Test Engineer reports the defect to the DTT with status "New".
→ The DTT analyzes the defect. If it is not accepted, its status changes to "Rejected".
→ If accepted, the defect is categorized and its status changes to "Open".
→ Test-data related and test-procedure related defects are assigned to the Testing Team.
→ Hardware or infrastructure related defects are assigned to the Hardware Team.
→ Code related defects are assigned to the Development Team.

Case Study :-
Code related defect : Test Engineer → DTT (Project Lead + Programmers) → assigned to the Development Team
Test case procedure & test data related defect : Test Engineer → DTT → assigned to BA + TL + TE
Hardware or environment related defect : Test Engineer → DTT → assigned to the H/w or Infrastructure Team
Defect Life Cycle or Bug Life Cycle :
New → Assigned → Open → Fixed → Closed
(New can also go to Reject or Deferred; Fixed can go to Reopen.)

New : Reported for the first time
Assigned : Accepted by the DTT
Reject : Not accepted by the DTT
Deferred : Accepted, but not taken up for solving due to low severity and low priority
Open : The responsible team is ready to resolve it
Fixed : The responsible team resolved the defect
Reopen : The defect was not correctly solved and is re-reported
Closed : The defect is correctly solved and confirmed through Regression Testing

Test Data Related Defect Fixing :
If our reported defect is accepted by the Defect Tracking Team (DTT) and categorized as a test-data related mismatch, the responsible testing team concentrates on Correct Data Collection (CDC), without conceptual gaps, with the help of the BA and TL. The Test Engineers then re-execute the previously failed test on the same build with the correct test data. This test repetition is called Retesting or Confirmation Testing.
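The status flow of the defect life cycle above can be modeled as a small state machine; a tracker should reject a status change that skips a state. A sketch (the transition set is my reading of the diagram above, not a tool's built-in workflow):

```python
# Allowed status transitions in the defect life cycle described above.
TRANSITIONS = {
    "New":      {"Assigned", "Reject", "Deferred"},
    "Assigned": {"Open"},
    "Open":     {"Fixed"},
    "Fixed":    {"Closed", "Reopen"},
    "Reopen":   {"Open"},           # a reopened defect goes back for fixing
    # "Reject", "Deferred" and "Closed" are terminal here
}

def can_move(current: str, nxt: str) -> bool:
    """True if the life cycle permits moving a defect from current to nxt."""
    return nxt in TRANSITIONS.get(current, set())
```

For example, a defect may go Fixed → Reopen when retesting shows the fix is wrong, but a Closed defect cannot silently become Open again.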
Test Script or Procedure Related Defect Fixing :
If our reported defect is accepted as a test-procedure related defect by the DTT, the responsible testing team prepares the correct procedure for that Test Case with the help of the TL and BA, and the Test Engineers repeat the test on the same build with the corrected procedure (Retesting / Confirmation Testing).

Infrastructure Related Defect Fixing :
If our reported defect is accepted by the DTT as an environment, infrastructure or hardware related defect, the responsible hardware team re-establishes the correct test environment, and the test is repeated in the modified environment (Retesting / Confirmation Testing).
Code Related Defect Fixing :-
If our reported defect is accepted as a code related defect, the responsible programmers/developers make changes in the build coding to resolve it.
→ The PL updates the defect status to "Open" and performs impact analysis.
→ The selected coding areas are reviewed by the programmers with the PL.
→ If changes are required in documents (BA/Design), the concerned person (BA/Designers & Project Lead) reviews and changes them.
→ The programmers change the coding and unit-test the modified build.
→ The PL changes the defect status to "Fixed" and releases the modified build with a unique version number and a release note.

After receiving the modified build from the Development Team, the Testing Team concentrates on Retesting & Regression Testing. The test engineer re-executes the previously failed test on the modified build to confirm the defect fixing; this is called Retesting or Confirmation Testing.
To identify side effects of the defect-fixing modifications in the modified build, the test engineers re-execute previously passed related tests on that modified build; this is called Regression Testing.

Level-2 Regression Testing :
Take the modified build and release note, and identify the severity of the fixed defect in that modified build.

Case 1 :- If the severity of the defect fixed by the development team is High, the Test Engineers repeat All P0, All P1 and Carefully Selected P2 Test Cases on the modified build with respect to the modifications specified in the release note.
Case 2 :- If the fixed defect severity is Medium, the Test Engineers repeat All P0, Carefully Selected P1 and Some P2 Test Cases on the modified build with respect to the modifications specified in the release note.
Case 3 :- If the fixed defect severity is Low, the Test Engineers repeat Some P0, Some P1 and Some P2 Test Cases on the modified build with respect to the modifications specified in the release note.
Case 4 :- If the development team releases a modified build because of changes in customer requirements, the Test Engineers re-execute All P0, All P1 and Carefully Selected P2 Test Cases with respect to the changed requirements. In this case the Test Engineers also change the Test Scenarios and Test Cases to match the changed requirements.

VI. Test Closure :-
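The selection rule in Cases 1-3 above is a simple mapping from fixed-defect severity to the slice of each priority bucket to repeat. A sketch of that rule (labels copied from the three cases; "carefully selected" and "some" would be judgment calls in practice):

```python
def regression_suite(fixed_defect_severity: str):
    """Level-2 regression test-case selection per the three cases above."""
    rules = {
        "High":   ("All P0", "All P1", "Carefully selected P2"),
        "Medium": ("All P0", "Carefully selected P1", "Some P2"),
        "Low":    ("Some P0", "Some P1", "Some P2"),
    }
    return rules[fixed_defect_severity]
```

Case 4 (requirement changes) reuses the High-severity selection but against the changed requirements rather than a release note, with the test cases themselves updated first.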
After completing all reasonable tests and closing the detected defects, the Test Lead conducts a review meeting to stop testing. In this review the TL analyzes the below factors with the involvement of the Test Engineers.

1. Coverage Analysis :-
→ Requirements Oriented Coverage (Module)
→ Testing Topic Oriented Coverage (Usability, Functional, Non-Functional)

2. Defect Density Calculation :
Ex : Module A 20%, Module B 20%, Module C 40% (needs Regression Testing), Module D 20% — Total 100%

3. Analysis of Deferred Defects : Whether the deferred defects can remain postponed or not.

Level-3 Final Regression Testing :
After a successful Test Closure review, the Testing Team concentrates on Level-3 or Final Regression Testing : identify high defect density modules → estimate effort (person/hour) → plan regression → perform regression testing → defect reporting if required → Golden Build.

VII. User Acceptance Testing (UAT) :
After completing Final Regression Testing, Project Management concentrates on User Acceptance Testing to collect feedback from real customers / model customers. There are two ways of User Acceptance Testing : Alpha Testing and Beta Testing.

VIII. Sign Off :
After successful User Acceptance Testing and the resulting modifications, the Test Lead prepares the Final Test Summary Report and relieves the corresponding Test Engineers from the project. The Final Test Summary Report is a combination of the below documents.
→ Test Strategy / Methodology
→ Test Plan(s)
→ Test Scenarios
→ Test Cases
→ Test Logs
→ Defect Reports
→ Requirements Traceability Matrix :
Requirement ID | Test Case ID | Result (Pass / Fail) | Detected Defect ID | Status (Closed / Deferred) | Comments
It is a mapping between requirements and defects via test cases.

Case Study (5 Months of Testing Process) :-
Deliverable | Responsibility | Duration
Test Strategy | PM / TM | 4-5 days
Test Planning | Test Lead | 4-5 days
Requirements Training to Test Engineers | BA + Domain / Subject Experts | 5-10 days
Test Scenarios & Review | Test Engineer | 5-10 days
Test Cases Implementation & Review | Test Engineer | 10-15 days
Build + Level-0 (Sanity Testing) | Test Engineer | 2-3 days
** Test Automation | Test Engineer | 10-15 days
Level-1 and Level-2 Test Execution | Test Engineer | 30-40 days
Defect Reporting | Test Engineer | Ongoing (same day)
Status Reporting | Test Lead | Twice weekly
Test Closure & Level-3 | Test Lead & Test Engineer | 5-10 days
User Acceptance Testing | Real / Model Customers, in front of Developers and Testers | 3-5 days
Sign Off | Test Lead | 1-2 days

W-Model (System Testing : manual vs. test automation) :
→ Non-Functional Testing : LoadRunner & JMeter
→ Functional Testing : WinRunner / QTP / Robot / SilkTest
→ Usability Testing : No tools in the market
(Development side : Requirements Analysis, S/w Design, Coding + Unit Testing, Integration Testing, Build. Note : Test Automation is optional.)

From the above W-Model, testing tools are available for Functional Testing and some Non-Functional Testing, including Endurance Testing and Data Volume Testing. The remaining Non-Functional tests and Usability Testing are conducted by Test Engineers manually.
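The Requirements Traceability Matrix listed under Sign Off maps requirements to defects via test cases. A minimal sketch of that mapping (the row values are invented examples for illustration):

```python
# Requirements Traceability Matrix rows: requirement -> test case -> defect.
rtm = [
    {"Requirement ID": "REQ_01", "Test Case ID": "TC_01",
     "Result": "Fail", "Defect ID": "D_07", "Status": "Closed"},
    {"Requirement ID": "REQ_01", "Test Case ID": "TC_02",
     "Result": "Pass", "Defect ID": None, "Status": None},
    {"Requirement ID": "REQ_02", "Test Case ID": "TC_03",
     "Result": "Fail", "Defect ID": "D_09", "Status": "Deferred"},
]

def defects_for(requirement_id: str):
    """All defects traced back to one requirement through its test cases."""
    return [r["Defect ID"] for r in rtm
            if r["Requirement ID"] == requirement_id and r["Defect ID"]]
```

This makes the sign-off question answerable per requirement: every defect returned here should be Closed (or consciously Deferred) before release.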
Win Runner 8.0 :
   Developed by Mercury Interactive and taken over by Hewlett-Packard (HP)
   Functional Testing Tool
   This Version released in January 2005
   Supports VB, .Net, Java, Power Builder, HTML, Delphi, VC++, D2K and Siebel Technology Software for Functional Testing.
   To Support SAP, People Soft, XML, Multimedia and Oracle Applications (“ERPs”) including the above technologies, Test Teams are using Quick Test Professional (QTP)
   Win Runner runs on Windows only; X-Runner for Unix / Linux

Win Runner Test Process :

   Receive Stable Build From Developers after Sanity Testing
   ↓
   Identify Functional Test Cases (Priority P0) to Automate (English + Manual)
   ↓
   Create Automation Programs (TSL) for those Functional Test Cases
   ↓
   Run Programs on S/w Build to detect defects
   ↓
   Test Reporting if required

        From the above approach, the Test Engineers are converting Manual Functional Test Cases into Test Script Language (TSL) programs. TSL is a “C” like language.

Add-in Manager :
        This window lists out all Win Runner supporting technologies with respect to license. Test Engineers are selecting the current project technology in that list.

Welcome Screen :
        After successful Win Runner launching, the Welcome Screen is coming on the Desktop. The screen consists of 3 options like
→ Create a New Test
→ Open an Existing Test
→ A Quick Preview of Win Runner
Win Runner Icons :
→ Start Recording
→ Run From Top
→ Run From Arrow
→ Stop Recording
→ Pause (Stop Run)

Win Runner Test Automation Frame Works :
        The Win Runner 8.0 is allowing you to convert our Manual Functional Test Cases into Test Script Language (TSL) programs in 4 ways
→ Record and Playback Frame Work
→ Data Driven Frame Work
→ Keyword Driven Frame Work
→ Hybrid Frame Work

I. Record & Playback Frame Work :
        In this frame work the Test Engineers are converting manual test cases into automation programs with Two Steps of procedure.
A. Recording Operations
B. Inserting Check Points

A. Recording Operations :-
        In Test Automation program creation, the Test Engineers are recording S/w Build operations. There are two modes in recording, such as Context Sensitive Mode and Analog Mode.
        In Context Sensitive Mode, the tool is recording Mouse and Keyboard operations with respect to objects and windows in build. To select this mode the Test Engineers are using below options.
   Click “Start Recording” icon Once
   Test Menu → Record Context Sensitive Option
        To record mouse pointer movements with respect to desktop co-ordinates, Test Engineers are using Analog Mode in Win Runner. To select this mode we can use below options.
    • Click “Start Recording” icon Twice Test Menu → Record AnalogEx :- Digital Signatures, Graphs Drawing and Image Movements.“F2” is a short cut key to change from one mode to another mode.Note :- In Analog Mode the Win Runner is Recording Mouse Pointer Movements withrespect to Desktop Co-ordinate. Due to this reason the Test Engineers are not changingcorresponding window position and monitor resolution.B. Inserting Check Point : After recording build operations, the Test Engineers are inserting check pointswith respect to expectations. Every check point is comparing Test Engineer givenExpected Value and Build Actual Value. There are Four check points in Win Runner. GUI (Graphical User Interface) Check Point Bitmap Check Point Database Check Point Text Check Point GUI (Graphical User Interface) Check Point : To verify properties of Objects, we can use this check point. It consists of 3 suboptions. i. For Single Property ii. For Object / Window iii. For Multiple Objecti. For Single Property :- To verify one property of one object we can use this option.Ex.-1 :Test Procedure :- Step Action Required I/p Expected O/p No. Open an order in Flight Order No. as Delete Order button 1 Reservation Window Valid “enabled”
    • Build :- Flight Reservation WindowAutomation Program :- set_window (“Flight Reservation”,1); menu_select_item (“File;Open Order….”); set_window (“Open Order”,1); button_set (“Order No.”, ON); edit_set (“Edit”, “1”); button_press (“OK”); set_window (“Flight Reservation”,1); button_check_info (“Delete Order”, “enabled”, 1);Ex.-2 :Test Procedure :- Step Action Required I/p Expected O/p No. Open an order in Flight Order No. as Insert Order button 1 Reservation Window Valid “disabled”Build :- Flight Reservation WindowAutomation Program :- set_window (“Flight Reservation”,1); menu_select_item (“File;Open Order….”); set_window (“Open Order”,1); button_set (“Order No.”, ON); edit_set (“Edit”, “1”); button_press (“OK”); set_window (“Flight Reservation”,1); button_check_info (“Insert Order”, “enabled”, 0);Note :- TSL is case sensitive language and it is taking # symbol for comments.
    • Ex.-3 :Test Procedure :- Step Action Required I/p Expected O/p No. Open an existing order in Flight Valid Order Update Order button 1 Reservation Window No. “disabled”Build :- Flight Reservation WindowAutomation Program :- set_window (“Flight Reservation”,1); menu_select_item (“File;Open Order….”); set_window (“Open Order”,1); button_set (“Order No.”, ON); edit_set (“Edit”, “1”); button_press (“OK”); set_window (“Flight Reservation”,1); button_check_info (“Update Order”, “enabled”, 0);Note :- No need to “Stop Recording” before “Inserting Check Point”Case Study :-1) Manual : Click a button TSL : button_press (“Button Name”);2) Manual : Select a Menu Option TSL : menu_select_item (“Menu Name; Option Name”);3) Manual : Fill a Text Box TSL : edit_set (“Edit Box Name”, “Given Text”);4) Manual : Select a Radio Button TSL : button_set (“Radiobutton Name”, ON);5) Manual : Check Box Selection TSL : button_set (“Check Box Name”, ON/OFF);
6) Manual : Select an Item in List Box
   TSL : list_select_item (“List Box Name”, “Selected Item”);
7) Manual : Fill a Password Box
   TSL : password_edit_set (“Password Object”, “encrypted value”);
8) Manual : Activate a window
   TSL : win_active (“Window Name”);
9) Manual : Auto Focus to window through an object operation
   TSL : set_window (“Window Name”, time);

Ex.-4 :
Test Procedure :-
   Step No.   Action                       Required I/p               Expected O/p
   1          Enter User ID and Password   Valid User ID & Password   “OK” button “enabled”

Build :-
   Login
     User ID
     Password
     OK

Automation Program :-
   set_window (“Login”, Time);
   edit_set (“User ID”, “Valid Value”);
   password_edit_set (“Password”, “Encrypted Value”);
   button_check_info (“OK”, “enabled”, 1);
Ex.-5 :
Test Procedure :-
   Step No.   Action                    Required I/p   Expected O/p
   1          Focus to Student window   None           “Submit” button “disabled”
   2          Select Roll No.           None           “Submit” button “disabled”
   3          Enter Student Name        Valid Name     “Submit” button “enabled”

Build :-
   Student
     Roll No.
     Name
     Submit

Automation Program :-
   win_active (“Student”);
   button_check_info (“Submit”, “enabled”, 0);
   list_select_item (“Roll No.”, “Selected Item”);
   button_check_info (“Submit”, “enabled”, 0);
   edit_set (“Name”, “Valid Value”);
   button_check_info (“Submit”, “enabled”, 1);

Ex.-6 :
Test Procedure :-
   Step No.   Action                          Required I/p   Expected O/p
   1          Focus to Employee window        None           “Accept” button “disabled”
   2          Select Emp. No.                 None           “Accept” button “disabled”
   3          Enter Employee Name             Valid Name     “Accept” button “disabled”
   4          Select Gender (Male / Female)   None           “Accept” button “enabled”
Build :-
   Employee
     Emp. No.
     Name
     ○ Male  ○ Female
     Accept

Automation Program :-
   win_active (“Employee”);
   button_check_info (“Accept”, “enabled”, 0);
   list_select_item (“Emp. No.”, “Selected Item”);
   button_check_info (“Accept”, “enabled”, 0);
   edit_set (“Name”, “Valid Value”);
   button_check_info (“Accept”, “enabled”, 0);
   button_set (“Button Name (Male/Female)”, ON);
   button_check_info (“Accept”, “enabled”, 1);

Ex.-7 :
Test Procedure :-
   Step No.   Action                               Required I/p      Expected O/p
   1          Focus to Flight Reservation Window   None              “Update” button “disabled”
   2          Open an Existing Order               Valid Order No.   “Update” button “disabled”
   3          Perform a Change in that Order       Valid Change      “Update” button “enabled”

Build :- Flight Reservation

Automation Program :-
   win_active (“Flight Reservation”);
   button_check_info (“Update”, “enabled”, 0);
   menu_select_item (“File; Open Order…”);
   set_window (“Open Order”,1);
   button_set (“Order No”,ON);
   edit_set (“Edit”,1);
   button_press (“OK”);
   button_check_info (“Update”, “enabled”,0);
   edit_set (“Name”, “Ravi Kiran”);
   button_check_info (“Update”, “enabled”,1);

Ex.-8 :
Test Procedure :-
   Step No.   Action                        Required I/p      Expected O/p
   1          Focus to Flight Reservation   None              Date of Flight object focused
   2          Open an Existing Order        Valid Order No.   Date of Flight object focused

Build :- Flight Reservation

Automation Program :-
   win_active (“Flight Reservation”);
   obj_check_info (“Date of Flight Object”, “focused”,1);
   set_window (“Flight Reservation”,1);
   menu_select_item (“File; Open Order…”);
   set_window (“Open Order”,1);
   button_set (“Order No”,ON);
   edit_set (“Edit”,1);
   button_press (“OK”);
   obj_check_info (“Date of Flight Object”, “focused”,1);

ii. For object / window :-
        To verify more than one property of one object we can use this option.

Ex.-1 :
Test Procedure :-
   No.   Action                                                  Req. I/p          Expected O/p
   1     Focus to Flight Reservation and Open an Existing Order  Valid Order No.   Tickets Object value is Numeric and value in between 1-10

Build :- Flight Reservation

Automation Program :-
   win_active (“Flight Reservation”);
   set_window (“Flight Reservation”,1);
   menu_select_item (“File; Open Order…”);
   set_window (“Open Order”,1);
   button_set (“Order No”,ON);
   edit_set (“Edit”,1);
   button_press (“OK”);
   set_window (“Flight Reservation”,1);
   obj_check_gui (“Tickets:”, “list1.ckl”, “gui1”,1);
# list1.ckl consists of Range and Regular Expression properties
# gui1 consists of 1-10 and [0-9]* as expected values
iii. For multiple objects :-
        We can use this option to verify more than one property of more than one object.

Ex.-1 :
Test Procedure :-
   Step No.   Action                               Required I/p      Expected O/p
   1          Focus to Flight Reservation Window   None              Insert Order, Delete Order and Update Order buttons are disabled
   2          Open an Existing Order               Valid Order No.   Insert Order and Update Order buttons are disabled and Delete Order button is Enabled
   3          Perform a Change in that Order       Valid Change      Insert Order is Disabled, Update Order and Delete Order Enabled

Build :- Flight Reservation

Automation Program :-
   win_active (“Flight Reservation”);
   win_check_gui (“Flight Reservation”, “list.ckl”, “gui1”,1);   # Check Point
   set_window (“Flight Reservation”,1);
   menu_select_item (“File; Open Order…”);
   set_window (“Open Order”,1);
   button_set (“Order No”,ON);
   edit_set (“Edit”,1);
   button_press (“OK”);
   win_check_gui (“Flight Reservation”, “list.ckl”, “gui2”,1);   # Check Point
   set_window (“Flight Reservation”,1);
   edit_set (“Tickets:”, “3”);
   win_check_gui (“Flight Reservation”, “list.ckl”, “gui3”,1);   # Check Point

Note : To save Test Creation and Execution Time, the Test Engineers are inserting the “For Multiple Objects” Check Point. The “For Multiple Objects” option is applicable on Multiple Objects in the same window.

Case Study-1 :
   obj_check_info ( )   for single property
   obj_check_gui ( )    for object / window
   win_check_gui ( )    for multiple objects
Case Study-2 :

   Object Type           Testable Properties
   Push Button           Enabled, Focused
   Radio Button          Enabled, Status (ON / OFF)
   Check Box             Enabled, Status (ON / OFF)
   List / Combo Box      Enabled, Value, Count
   Menu                  Enabled, Count
   Text Box / Edit Box   Enabled, Value, Focused, Range, Regular Expression, Date Format, Time Format, …
   Table Grid            Columns Count, Rows Count, Cell Content

Case Study-3 :

   BRS
   ↓
   SRS (Functional Specifications) → Use Cases → Test Scenarios → Test Cases → Automation Programs
   ↓
   HLD & LLDs
   ↓
   Coding (UT & IT)
   ↓
   Build → Manual Testing → Automation Testing

Bitmap Check Point :
        We can use this check point to compare images. This check point is supporting Static Images only.
        To support dynamic images comparison, like movies, the Test Engineers are using Manual Testing (or) QTP Tool.

Ex.-1 :
Test Procedure :-
   Step No.   Action                                                                             Required I/p   Expected O/p
   1          Focus to Flight Reservation Build Old Version and select About Option in Help Menu   None           The Old Version Logo is equal to Flight Reservation Build New Version Logo
Build :- Flight Reservation

Automation Program :-
   win_active (“Flight Reservation”);
   set_window (“Flight Reservation”,1);
   menu_select_item (“Help; About….”);
   set_window (“About Flight Reservation System”,1);
   obj_check_bitmap (“Button”, “Img1”, 1);   # Check Point

Ex.-2 :
Test Procedure :-
   Step No.   Action                                                                  Required I/p   Expected O/p
   1          Focus to Flight Reservation Window and select Analysis menu, Graphs option   None           Graph opened for existing data
   2          Open an Existing Order and perform a change in No. of Tickets            Valid Change   Existing Graph changed with respect to changes in No. of Tickets

Build :- Flight Reservation

Automation Program :-
   win_active (“Flight Reservation”);
   set_window (“Flight Reservation”,1);
   menu_select_item (“Analysis; Graphs…”);
   set_window (“Graphs”,1);
   obj_check_bitmap (“Gs_Drawing”, “Img1”,1, 158,26,178,154);   # Screen area Check Point
   win_close (“Graph”);
   set_window (“Flight Reservation”,1);
   menu_select_item (“File; Open Order…”);
   set_window (“Open Order”,1);
   button_set (“Order No”,ON);
   edit_set (“Edit”,1);
   button_press (“OK”);
   set_window (“Flight Reservation”,1);
   edit_set (“Tickets:”, “3”);   # Changes in Tickets
   button_press (“Update Order”);

Note : The Win Runner Bitmap check point is comparing Complete Images or Parts of Images.
For object / window :
   obj_check_bitmap (“Image Name”, “Image File”, Time);
For screen area :
   obj_check_bitmap (“Image Name”, “Image File”, Time, x, y, width, height);

Text Check Point :
        To verify manipulations (or) calculations of our application build we can use this check point. This check point is a combination of 2 concepts, such as Get Text Option and If Condition. The Get Text option consists of 2 sub options
1. From Object / Window
2. From Screen Area

1. From Object / Window :
        To capture an object value we can use this option.
Navigation :- Insert Menu → Get Text → Select Required Object
Syntax :- obj_get_text (“Object Name”, Variable);

2. From Screen Area :
        To capture a selected value from a screen, we can use this option.
Navigation :- Insert Menu → Get Text → From Screen Area → Select required value region in that screen → Right click to release the selection.
Syntax :- obj_get_text (“Screen Name”, Variable, x1, y1, x2, y2);

If Condition :-
        TSL is a “C” like language. It allows you to write control statements with “C” syntaxes.
   if (condition)
   {
      ----
      ----
   }
   else
   {
      ----
      ----
   }
Ex-1 :-
Manual Expected :- Output = Input * 100

Build :-
   Sample
     Input    xxxxxxx
     Output   xxxxxxx

Automation Program :-
   set_window (“Sample”,1);
   obj_get_text (“Input”, x);
   obj_get_text (“Output”, y);
   if (y == x*100)
      printf (“Test is Pass”);
   else
      printf (“Test is Fail”);

Ex-2 :-
Manual Expected :- Total = Tickets * Price in an opened order

Build :- Flight Reservation

Automation Program :-
   set_window (“Flight Reservation”,1);
   menu_select_item (“File; Open Order…”);
   button_set (“Order No.”, ON);
   edit_set (“Edit”,1);
   button_press (“OK”);
   obj_get_text (“Total”, tot);
   obj_get_text (“Price”, p);
   obj_get_text (“Tickets”, t);
   p = substr(p,2,length(p)-1);
   tot = substr(tot,2,length(tot)-1);
   if (tot == p*t)
      printf (“Test is Pass”);
   else
      printf (“Test is Fail”);
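Ex-2's trick is stripping the currency symbol with substr() before comparing Total = Price * Tickets. The same pattern can be sketched in Python; the helper name and the sample values are assumptions for illustration, not data from the real Flight Reservation build:

```python
def strip_currency(text):
    """Drop currency decoration, e.g. '$40' -> 40.0 or 'Rs: 120 /-' -> 120.0.
    Illustrative stand-in for the substr()-based trimming in the TSL example."""
    return float(text.lstrip("$Rs: ").rstrip(" /-"))

# Simulated values captured from the screen with obj_get_text().
tickets = 3
price = strip_currency("$40")
total = strip_currency("$120")

# The If Condition step: compare the computed and displayed totals.
print("Test is Pass" if total == price * tickets else "Test is Fail")
```

The comparison only works once both captured strings are reduced to numbers, which is exactly why the TSL examples call substr() before the if.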
Ex-3 :
Manual Expected :- Total = File1 Size + File2 Size

Build :
   Audit
     File1   xxxxxxx KB
     File2   xxxxxxx KB
     Total   xxxxxxx KB

Automation Program :
   set_window (“Audit”,1);
   obj_get_text (“File1”, x);
   obj_get_text (“File2”, y);
   obj_get_text (“Total”, z);
   x = substr(x,1,length(x)-2);
   y = substr(y,1,length(y)-2);
   z = substr(z,1,length(z)-2);
   if (z == x+y)
      printf (“Test is Pass”);
   else
      printf (“Test is Fail”);

Ex-4 :
Manual Expected :- Total = Price * Quantity

Build :
   Shopping
     Quantity   xxxxxxx
     Price      Rs: xxxxx /-
     Total      Rs: xxxxx /-
Automation Program :
   set_window (“Shopping”,1);
   obj_get_text (“Quantity”, Q);
   obj_get_text (“Price”, P);
   obj_get_text (“Total”, T);
   P = substr(P,4,length(P)-5);
   T = substr(T,4,length(T)-5);
   if (T == P * Q)
      printf (“Test is Pass”);
   else
      printf (“Test is Fail”);

tl_step ( ) :-
        “tl” stands for Test Log (Test Result). We can use this statement to prepare our own Pass / Fail Result.
Syntax :- tl_step (“Step Name”, 0/1, “Message”);
   ‘0’ for Pass, Other than ‘0’ for Fail

Note :- substr ( ) : we can use this function to get a required value from a given string.
Syntax :- substr (“String Value” / Variable, Starting Position, Number of Characters);

Data Base Check Point :
        In a software functional testing, the test engineers are concentrating on back end coverage. In this coverage, the test engineers are estimating the correctness of Front End screen operations on Back End Tables content in terms of Data Validations and Data Integrity.

   Employee (Front End screen) :     Emp Table (Driven) :            Dept Table (Provider) :
     Emp No. : 101                     Emp.No.   Name   Dept           Dept.No.   Name    Strength
     Name    : Abc                     X         X      X              10         Sales   20 → 21
     Dep.No. : 10                      X         X      X
     OK                                101       Abc    10
                                       → Data Validation               → Data Integrity
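The Data Base Check Point idea above (snapshot the table, perform the front-end operation, compare the content again) can be sketched standalone with Python's built-in sqlite3. The table and column names here are assumptions for illustration, not the real application's schema:

```python
import sqlite3

# In-memory stand-in for the application's back-end table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE emp (emp_no INTEGER, name TEXT, dept INTEGER)")
db.execute("INSERT INTO emp VALUES (100, 'Xyz', 20)")

# 1. Create the check point: current DB content becomes the expected snapshot.
expected = db.execute("SELECT * FROM emp ORDER BY emp_no").fetchall()

# 2. Simulate the front-end operation (submitting employee 101 via the form).
db.execute("INSERT INTO emp VALUES (101, 'Abc', 10)")

# 3. Run the check point: if the content is unchanged the operation never
#    reached the table (Fail); a difference means the new row arrived (Pass
#    for this data-validation scenario).
actual = db.execute("SELECT * FROM emp ORDER BY emp_no").fetchall()
print("Pass" if actual != expected else "Fail")
```

This mirrors the Default Check flow described later: equal content after the operation means Fail, a difference means Pass.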
Driven :- Data Stored in Same System
Provider :- Data Stored in Another System
        From the above example, the insertion of new data correctness is called as “Data Validation”. The changes in existing data correctness is called as “Data Integrity”.
        To automate this Data Base Testing, test engineers are using “Data Base Check Point”. It consists of 3 sub points.
A. Default Check
B. Custom Check
C. Runtime Record Check

A. Default Check :-
        To conduct Data Base testing depending on the content of Data Base, we can use this option.
Ex :-
   Create DB Check Point (Current Content of DB selected as Expected)
   ↓
   Perform Front End Operation
   ↓
   Run DB Check Point (Current Content of DB selected as Actual)
      == → Fail
      != → Pass

Navigation :
Open Win Runner → Insert Menu → Data Base Check Point → Default Check → Specify Connect to Data Base Using ODBC (or) Data Junction (ODBC for Local Data Base and Data Junction for Remote Data Base) → Click Next → Click Create to select connectivity provided by developers → Write select statement → Click Finish → Open our application build in Front End → Perform an operation manually → Run data base check point → Analyze Results Manually.

Note :- From the above Navigation, the Test Engineers are gathering some information from developers, like the name of connectivity in between our application build Front End and Back End, the names of Back End Tables including columns, and the mapping in between Front End Screens and Back End Tables. This information is also known as “Data Base Design Document”.
B. Custom Check :-
        To conduct data base testing depending on Rows Count, Columns Count and Content, we can use this option.
        In general the test engineers are using the Default Check Option. This option is showing content only by default. The content of Data Base is measurable in terms of Rows Count and Columns Count. Due to this reason the Test Engineers are using Default Check instead of Custom Check.

Syntax :- db_check (“checklist.cdl”, “Expected Values File”);
        In the above syntax, the checklist file specifies content as property in Default Check, and Rows Count, Columns Count and Content as properties in Custom Check.
        The expected values file specifies the current content of data base with respect to the select statement.

C. Runtime Record Check :
        We can use this option to estimate the correctness of Back End Table columns and Front End Report Objects.

   Front End Screens (User Forms) ↔ Data Base : Default / Custom Check
   Data Base ↔ Front End Reports (User Reports) : Runtime Record Check

Navigation :-
Insert Menu → Data Base Check Point → Run Time Record Check → Click Next → Click Create to select connectivity provided by developers → Write select statements with doubtful columns → Select doubtful objects for those columns → Click Next → Select one or more matching records option → Click Finish

Ex-1 :
   Objects     DB Table Columns
   Order No.   orders.order_number
   Name        orders.customer_name      (Pass)
Ex-2 :
   Objects     DB Table Columns
   Tickets     orders.order_number
   Name        orders.customer_name      (Fail)

Syntax :- db_record_check (“checklist.cvr”, DVR_ONE_OR_MORE_MATCH, variable);
        In the above syntax, the check list file specifies the expected mapping in between Back End Table Columns and Front End Report Objects.
        The indicator specifies the need of check point execution more than one time.
        The variable specifies the no. of records matched.

Case Study :-

   Check Point                                    TSL Statement
   For Single Property in GUI Check Point         obj_check_info (“object name”, “property”, expected value);
   For Object / Window in GUI Check Point         obj_check_gui (“object name”, “checklist.ckl”, “expected values file”, time);
   For Multiple Objects in GUI Check Point        win_check_gui (“window name”, “checklist.ckl”, “expected values file”, time);
   For Object / Window in Bitmap Check Point      obj_check_bitmap (“image object”, “image file”, time);
   For Screen Area in Bitmap Check Point          obj_check_bitmap (“image object”, “image file”, time, x, y, width, height);
   From Object / Window in Get Text Check Point   obj_get_text (“object name”, variable);
   From Screen Area in Get Text Check Point       obj_get_text (“screen area name”, variable, x1, y1, x2, y2);
   Default Check in Database Check Point          db_check (“checklist.cdl”, “expected database content”);
   Custom Check in Database Check Point           db_check (“checklist.cdl”, “expected rows count, columns count and content”);
   Runtime Record Check in Database Check Point   db_record_check (“checklist.cvr”, DVR_ONE_OR_MORE_MATCH, variable);
II. Data Driven Automation Frame Work :-
        It is an advanced automation frame work in the Win Runner testing tool. The test engineers are executing an automation program with multiple test data in this frame work. There are 4 ways in Data Driven Testing.

   Test Data (Key Board / Flat File / Front End Objects / Excel Sheet)
   ↓
   Automation Program in TSL
   ↓
   Build / Application Under Test (AUT)

A. Test Data From Key Board :-
        To read values from the keyboard, the test engineers are using the below TSL Statement.
   Variable = create_input_dialog (“Message”);

Ex :- 1
Manual Expected :- Delete Order Button Enabled After Open an Existing Order.
Build : Flight Reservation
Test Data : 5 Valid Order Numbers

Automation Program :
   for (i=1; i<=5; i++)
   {
      x = create_input_dialog (“Enter Order Number”);
      set_window (“Flight Reservation”,1);
      menu_select_item (“File; Open Order….”);
      set_window (“Open Order”,1);
      button_set (“Order No.”, ON);
      edit_set (“Edit”, x);
      button_press (“OK”);
      set_window (“Flight Reservation”,1);
      button_check_info (“Delete Order”, “enabled”, 1);
   }
# A Sample Input in the Automation Program replaced by multiple inputs in execution is called Parameterization.
Ex :- 2
Manual Expected :- Tickets Object value is numeric in an Open Order.
Build : Flight Reservation
Test Data : 5 Valid Order Numbers

Automation Program :
   for (i=1; i<=5; i++)
   {
      x = create_input_dialog (“Enter Order Number”);
      set_window (“Flight Reservation”,1);
      menu_select_item (“File; Open Order….”);
      set_window (“Open Order”,1);
      button_set (“Order No.”, ON);
      edit_set (“Edit”, x);
      button_press (“OK”);
      set_window (“Flight Reservation”,1);
      obj_check_gui (“Tickets:”, “list1.ckl”, “gui1”, 2);
   }

Ex :- 3
Manual Expected :- Total = Number of Tickets * Price in an opened order
Build : Flight Reservation
Test Data : 5 Valid Order Numbers

Automation Program :
   for (i=1; i<=5; i++)
   {
      x = create_input_dialog (“Enter Order Number”);
      set_window (“Flight Reservation”,1);
      menu_select_item (“File; Open Order….”);
      set_window (“Open Order”,1);
      button_set (“Order No.”, ON);
      edit_set (“Edit”, x);
      button_press (“OK”);
      set_window (“Flight Reservation”,1);
      obj_get_text (“Tickets:”, t);
      obj_get_text (“Price:”, p);
      obj_get_text (“Total:”, tot);
      p = substr (p,2,length(p)-1);
      tot = substr (tot,2,length(tot)-1);
      if (tot == p*t)
         tl_step (“T1”, 0, “Test Pass”);
      else
         tl_step (“T1”, 1, “Test Fail”);
   }
Ex-4 :
Manual Expected :- Result = Input1 * Input2

Build :-
   Multiply
     Input1
     Input2
     Ok
     Result

Test Data :- 10 pairs of valid inputs

Automation Program :
   for (i=1; i<=10; i++)
   {
      x = create_input_dialog (“Enter Input1”);
      y = create_input_dialog (“Enter Input2”);
      set_window (“Multiply”,1);
      edit_set (“Input1”, x);
      edit_set (“Input2”, y);
      button_press (“OK”);
      obj_get_text (“Result:”, r);
      if (r == x*y)
         tl_step (“T1”, 0, “Test Pass”);
      else
         tl_step (“T1”, 1, “Test Fail”);
   }

B. Test Data From Flat File :-
        In this approach the test engineers are maintaining test data in a Flat File. In this approach the Win Runner is not taking the interaction of Test Engineers while running the test.

   Test Data (.txt) → Automation Program in TSL → Build / Application Under Test (AUT)
        To use file content as test data, the Test Engineers are using the below TSL Statements.
   file_open (“Path of File”, FO_MODE_READ);
   file_getline (“Path of File”, variable);
   file_close (“Path of File”);

Ex-1 :
Manual Expected :- Delete Order button enabled after open an order
Build :- Flight Reservation
Test Data : C:\Documents and Settings\Balaji\Desktop\Temp\Ravi.txt

Automation Program :
   f = “C:\Documents and Settings\Balaji\Desktop\Temp\Ravi.txt”;
   file_open (f, FO_MODE_READ);
   while (file_getline(f,x) != E_FILE_EOF)
   {
      set_window (“Flight Reservation”,1);
      menu_select_item (“File; Open Order….”);
      set_window (“Open Order”,1);
      button_set (“Order No.”, ON);
      edit_set (“Edit”, x);   # Parameterization
      button_press (“OK”);
      set_window (“Flight Reservation”,1);
      button_check_info (“Delete Order”, “enabled”, 1);
   }
   file_close(f);

Silent Mode :-
        Win Runner continues test execution even when a check point is failed. The Test Engineers are using this option to continue test execution without interaction.
Navigation :- Tools Menu – General Options – Run Tab – Select Run in Batch Mode Check Box – Click Ok
Note :- In silent mode the Win Runner is not executing the create_input_dialog ( ) Statement.
Ex-3 :-
Manual Expected : Total = Price * Quantity

Build :
   Shopping
     Item No.
     Quantity
     Ok
     Price   $xxxxxx
     Total   $xxxxxx

Test Data : C:\Documents and Settings\Balaji\Desktop\Temp\Ravi.txt
Ravi.txt contains statements like below
   Ramu Purchased 101 item as 10 pieces
   Bhasha Purchased 102 item as 27 pieces
   .... etc.,

Automation Program :
   f = “C:\Documents and Settings\Balaji\Desktop\Temp\Ravi.txt”;
   file_open (f, FO_MODE_READ);
   while (file_getline(f,x) != E_FILE_EOF)
   {
      split (x, y, “ ”);
      set_window (“Shopping”,1);
      edit_set (“Item No”, y[3]);    # Parameterization
      edit_set (“Quantity”, y[6]);   # Parameterization
      button_press (“OK”);
      obj_get_text (“Price”, p);
      obj_get_text (“Total”, t);
      p = substr(p,2,length(p)-1);
      t = substr(t,2,length(t)-1);
      if (t == p*y[6])
         tl_step (“C1”, 0, “Calculation is Pass”);
      else
         tl_step (“C1”, 1, “Calculation is Fail”);
   }
   file_close(f);
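The flat-file loop above reads one sentence per line, splits it into tokens, and feeds token 3 (item number) and token 6 (quantity) to the screen. A sketch of the same tokenizing step in Python, using the Ravi.txt sentence format from the example (note Python lists are 0-based where TSL's split() arrays are 1-based):

```python
import io

# Stand-in for Ravi.txt: each line drives one iteration of the test.
test_data = io.StringIO(
    "Ramu Purchased 101 item as 10 pieces\n"
    "Bhasha Purchased 102 item as 27 pieces\n"
)

rows = []
for line in test_data:           # the file_getline() loop
    y = line.split()             # split (x, y, " ") in TSL
    item_no = y[2]               # TSL y[3] -> Python index 2
    quantity = int(y[5])         # TSL y[6] -> Python index 5
    rows.append((item_no, quantity))

print(rows)  # -> [('101', 10), ('102', 27)]
```

Each `(item_no, quantity)` pair is what the TSL program types into the Item No and Quantity boxes on each pass.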
C. Test Data From Front End Objects :-
        Sometimes the Test Engineers are re-executing their automation program depending on multiple data objects in build like Menus, List Boxes, Tables, ActiveX Controls and Data Windows.

   Build / Application Under Test (AUT) → Test Data From Build Objects → Automation Program

Ex-1 :
Manual Expected : The Selected City Name in “Fly From” doesn’t appear in “Fly To”

Build :
   Journey
     Fly From
     Fly To

Test Data : All Existing City Names in “Fly From”

Automation Program :
   set_window (“Journey”,1);
   list_get_info (“Fly From”, “count”, n);
   for (i=0; i<n; i++)
   {
      list_get_item (“Fly From”, i, x);
      list_select_item (“Fly From”, x);
      if (list_select_item (“Fly To”, x) != E_OK)
         tl_step (“J1”, 0, “Item doesn’t appear”);
      else
         tl_step (“J1”, 1, “Item Appears”);
   }
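Ex-1's rule (a city chosen in "Fly From" must never appear in "Fly To") reduces to a membership check over the list-box items. A Python sketch of the same loop; the city names and the helper modelling the build's behaviour are invented sample data, not real AUT data:

```python
def fly_to_options(selected_from, all_cities):
    """Model the build's behaviour: 'Fly To' offers every city except the origin."""
    return [c for c in all_cities if c != selected_from]

cities = ["Denver", "Frankfurt", "London", "Paris"]

# The list_get_item() loop: select each origin and verify it is absent
# from the destination list (the TSL check that list_select_item != E_OK).
for city in cities:
    assert city not in fly_to_options(city, cities)
print("Item doesn't appear: all", len(cities), "checks passed")
```

Each passing assertion corresponds to one `tl_step ("J1", 0, "Item doesn't appear")` entry in the WinRunner test log.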
Ex-2 :
Manual Expected : Total = Price * Quantity in every row of bill

Build :- Bill is window name
   Sl.No.   Quantity   Price   Total
   1        X          $xxxx   $xxxx
   2        X          $xxxx   $xxxx
   3        X          $xxxx   $xxxx
   .etc     .etc       .etc    .etc

Test Data :- All existing rows in Bill table

Automation Program :
   set_window (“Bill”,1);
   tbl_get_rows_count (“Bill”, n);
   for (i=1; i<=n; i++)
   {
      tbl_get_cell_data (“Bill”, “#”&i, “#1”, q);
      tbl_get_cell_data (“Bill”, “#”&i, “#2”, p);
      tbl_get_cell_data (“Bill”, “#”&i, “#3”, t);
      p = substr(p,2,length(p)-1);
      t = substr(t,2,length(t)-1);
      if (t == p*q)
         tl_step (“S1”, 0, “Test Pass”);
      else
         tl_step (“S1”, 1, “Test Fail”);
   }

PRACTICE :
Total = Internal Marks + External Marks of every student

Build :- Marks is a window
   Roll No.   Name    Internals   Externals   Total
   101        xxxxx   xxxxx       xxxxx       xxxxx
   102        xxxxx   xxxxx       xxxxx       xxxxx
   .etc.      .etc.   .etc.       .etc.       .etc.
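Ex-2's row-by-row check (Total = Price * Quantity for every bill row) can be sketched over a list of cell tuples. The dollar formatting mirrors the build's cells; the row values themselves are invented sample data:

```python
# Rows standing in for the Bill table's cells: (quantity, price, total)
# exactly as displayed on screen, including the '$' prefix.
bill = [("2", "$40", "$80"), ("3", "$25", "$75")]

def dollars(cell):
    """'$80' -> 80.0, an illustrative stand-in for the substr() trimming."""
    return float(cell[1:])

# The tbl_get_cell_data() loop: verify the total cell of every row.
failures = [i for i, (q, p, t) in enumerate(bill, start=1)
            if dollars(t) != dollars(p) * int(q)]
print("Test Pass" if not failures else f"Test Fail in rows {failures}")
```

The PRACTICE exercise (Total = Internals + Externals per student) follows the identical shape with addition in place of multiplication.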
D. Test Data From an Excel Sheet :-
        Sometimes the test engineers are re-executing automation programs depending on multiple inputs in an excel sheet, instead of Key Board, Flat Files and Front End Objects. In this method the test engineers are filling the Excel Sheet through importing data from the Build Data Base or with Manual Entry.

   Build / AUT (Front End + DB)
   ↕
   Test Data (.xls)   ← I) Manual Entry  /  II) Import Data From DB
   ↕
   Automation Program in TSL

        To create an excel sheet oriented data driven test, Test Engineers are following the below navigation.
Navigation :- Open Win Runner & Build – Create an Automation Program for Sample Inputs – Table Menu – Data Driven Wizard – Click Next – Specify the path of Excel Sheet – Specify Variable Name to Store that Excel Sheet Path – Select Import Data from Data Base – Click Next – Specify Connect to DB Using ODBC / Data Junction – Select Specify SQL Statement Option – Click Next – Click Create to Select Connectivity of DB Provided by Developers – Write Select Statement to Import Data From Connected DB – Click Next – Replace Sample Input With Imported Excel Sheet Column Name in Automation Program – Say Yes/No to Show Data Table (Excel Sheet) – Click Finish – Put Build in Base State and Click Run – Analyze Results after execution.

Note : By default the Win Runner is providing a default excel sheet for every test instead of our own excel sheet.
Ex-1 :
Manual Expected : Delete Order button enabled after open an existing order.
Build : Flight Reservation
Test Data : default.xls (Import Data From DB)

Automation Program :
   table = “default.xls”;
   rc = ddt_open(table, DDT_MODE_READWRITE);
   if (rc != E_OK && rc != E_FILE_OPEN)
      pause (“Cannot Open Table”);
   ddt_update_from_db (table, “msqr1.sql”, count);
   ddt_save (table);
   ddt_get_row_count (table, n);
   for (i=1; i<=n; i++)
   {
      ddt_set_row (table, i);
      set_window (“Flight Reservation”,1);
      menu_select_item (“File; Open Order….”);
      set_window (“Open Order”,1);
      button_set (“Order No.”, ON);
      edit_set (“Edit”, ddt_val (table, “order_number”));
      button_press (“OK”);
      set_window (“Flight Reservation”,1);
      button_check_info (“Delete Order”, “enabled”, 1);
   }
   ddt_close (table);

Case Study-1 :
ddt_open ( ) :- we can use this function to open an excel sheet in a specified mode.
Syntax : ddt_open (“Path of Excel Sheet”, DDT_MODE_READ / DDT_MODE_READWRITE);
ddt_update_from_db ( ) :- We can use this function to perform changes in the excel sheet with respect to changes in the Data Base.
Syntax : ddt_update_from_db (“Path of Excel Sheet”, “Select Statement query file”, Variable);
ddt_save ( ) :- We can use this function to save excel sheet modifications.
Syntax : ddt_save (“Path of Excel Sheet”);
ddt_get_row_count ( ) :- We can use this function to find the no. of rows in an excel sheet.
Syntax : ddt_get_row_count (“Path of Excel Sheet”, variable);
ddt_set_row ( ) :- We can use this function to point to a specific row in an excel sheet.
Syntax : ddt_set_row (“Path of Excel Sheet”, row number);
ddt_val ( ) :- We can use this function to capture a specified column value.
Syntax : ddt_val (“Path of Excel Sheet”, column name);
ddt_close ( ) :- To close an opened excel sheet, we can use this function.
Syntax : ddt_close (“Path of Excel Sheet”);

Case Study-2 :

   DDT Approach                       TSL Statements                                                Silent Mode   Test Engineer Interaction (During Run Time)
   Test Data from Key Board           create_input_dialog ( );                                      Off           Mandatory
   Test Data From Flat File           file_open ( ); file_getline ( ); file_close ( );              On / Off      Optional
   Test Data From Front End Objects   list_get_item ( ); list_get_info ( );                         On / Off      Optional
                                      tbl_get_rows_count ( ); tbl_get_cell_data ( );
   Test Data From Excel Sheet         ddt_open ( ); ddt_save ( ); ddt_set_row ( );                  On / Off      Optional
                                      ddt_update_from_db ( ); ddt_get_row_count ( );
                                      ddt_val ( ); ddt_set_val ( ); ddt_close ( );
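The ddt_* workflow above (open the sheet, count the rows, point at each row, read a named column) maps naturally onto a CSV reader in Python. The column name `order_number` follows the earlier Ex-1; the sheet contents are invented sample data:

```python
import csv
import io

# Stand-in for default.xls: a header row plus one test input per data row.
sheet = io.StringIO("order_number\n1\n2\n3\n")

reader = csv.DictReader(sheet)                     # ddt_open()
orders = [row["order_number"] for row in reader]   # ddt_set_row() + ddt_val() per row
print(len(orders), "iterations:", orders)          # len() plays the ddt_get_row_count() role
```

Each value in `orders` would drive one pass of the open-order-and-check loop, just as each spreadsheet row drives one iteration of the TSL program.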