Transcript of "Exist Quality Assurance & Testing Services"
firstname.lastname@example.org

Outsourced Software Testing Services

Every company wants to ship its products faster, more cost-effectively, and with fewer defects. We're here to help. Exist QA specialists will work closely with your team so that you can succeed without breaking your software, or your budget. Depending on the requirements of the project, we can provide:

Testing in Agile Software Development

Exist enables our clients to respond quickly to strategic business requirements and adapt rapidly to changes as they come. Agility is at the core of our software development model. Throughout our history, we have worked successfully with customers in creating software using agile development methodologies.

We believe the core value of agile software development is the ability to adapt to changes in project needs and requirements, and we employ test-driven development to produce interim, user-visible results quickly and regularly, enabling a powerful feedback loop.

Enterprise-Level Tooling & Methodology

Level up with proven, best-practice test processes, tools, and frameworks. Exist QA and software testing experts can help with:

- Lifecycle QA: Create a methodology so that software testing becomes part of the development strategy. Start planning with testing involved on day 1. Construct test plans even before you write the first line of code.
- Testing Tools: Adopt the best testing tools for your needs.

Testing types we cover: functional testing, load testing, security testing, data completeness, platform compatibility, performance testing, stress testing, acceptance testing.

Some of the tools we use: Selenium RC, Selenium IDE, JMeter, Cucumber.

Consult with our team today about your testing requirements. Let's talk: email@example.com.
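The test-driven loop mentioned above (write the test first, then the code that satisfies it) can be sketched in a few lines of Python. The discount function and its numbers are purely illustrative, not taken from any client project.

```python
import unittest

# Step 1: write the test first -- it fails until the feature exists.
class TestDiscount(unittest.TestCase):
    def test_ten_percent_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

# Step 2: write the minimal implementation that makes the test pass.
def apply_discount(price: float, percent: float) -> float:
    """Return the price after deducting the given percentage."""
    return round(price * (1 - percent / 100), 2)
```

Each new requirement repeats the loop: a failing test captures the requirement, the implementation makes it pass, and the accumulated tests become the regression safety net.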
Our Testing Process

Each project undergoes a thorough testing process, managed by a dedicated QA team. The information below defines the software quality assurance framework applied to projects undertaken by Exist. It describes the testing strategy and the approach our QA team uses to validate the quality of the product prior to any release.

Test Environment Preparation. To achieve the highest level of software quality prior to any release, we prepare the test environment at the onset. This involves:

- Technical analysis based on client materials
- Creation of a test plan, including: thorough analysis of project specifications and goals, setup and configuration instructions for testing environments, and a detailed timetable for testing with project milestones (refer to Test Plan Development)
- Determining testing approaches and methods
- Ensuring all members have the same level of understanding of the client's specifications and requirements
- Proper setup of hardware and software requirements
- Ensuring all team members are fully trained in use of the bug tracking system and in correct methods of bug reporting

Testing Proper. We employ a mix of manual and automated testing processes for each project, and we can employ various testing methods depending on your requirements.

Bug Reporting and Tracking.
Our ability to collaborate regardless of location stems from the use of a web-based project management tool called Development Engineering Network (DEN for short), which offers our distributed teams, as well as our customers, a unified on-demand view of all aspects of the software project lifecycle. This project management tool, based on the open source application Redmine, provides us and our customers the ability to:

- monitor and track a project's progress
- communicate instantly and seamlessly
- assess velocity of development and team efficiency
- identify and resolve bottlenecks and issues during the early part of development, reducing overruns in the long term
- optimize resources
- determine best practices
- maintain documentation of a project

Below are our best-practice methodologies for bug reporting and tracking.

- New bugs are reported in DEN
- Bugs are verified more than once using the same environment
- Information on the issue must include a summary, detailed steps to replicate, a screenshot if possible, the environment in which it was encountered, priority, and assignee

Our iterative engineering model helps you deliver software faster. Meet with our team today to discuss your project. Contact: email@example.com.
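The required issue fields listed above can be enforced with a small record type. This is an illustrative sketch in Python; the field names are ours, and DEN's actual schema may differ.

```python
from dataclasses import dataclass
from typing import List, Optional

# The four priority levels used when filing an issue in DEN.
PRIORITIES = ("blocker", "major", "minor", "trivial")

@dataclass
class BugReport:
    summary: str                  # general description of the issue
    steps_to_replicate: List[str] # detailed, ordered steps
    environment: str              # where the issue was encountered
    priority: str
    assignee: str
    screenshot: Optional[str] = None  # optional attachment path

    def __post_init__(self):
        # Enforce the team's minimum reporting standard up front.
        if not self.summary.strip():
            raise ValueError("summary is required")
        if not self.steps_to_replicate:
            raise ValueError("at least one replication step is required")
        if self.priority not in PRIORITIES:
            raise ValueError(f"priority must be one of {PRIORITIES}")
```

Rejecting an incomplete report at creation time mirrors the checklist above: a reviewer should never receive a defect without steps, environment, and priority.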
Bug Reporting and Tracking continued ...

Steps to create an issue in DEN:
- Choose the specific project
- Choose a tracker type (the most commonly used is bug)
- In the subject, give a general description of the issue
- Choose the priority of the issue (blocker, major, minor, trivial)
- Assign the issue to the most appropriate person
- Indicate the environment in which the issue was encountered
- Provide the detailed steps to replicate the issue in the description field

Submit Defect
- The tester logs defects into the defect tracking system; the defect is in Submitted or New status, and the tester sets the related parties who will receive the mail
- The tester should set the severity level and priority according to the related definitions
- The tester should describe the defect in as much detail as possible to help the reviewer understand and reproduce it
- If the tester or test lead needs to update items or add more information about the defect, he/she can modify the defect; the status remains Submitted or New

Retesting Defect
- The tester should check the Resolved defect and retest it as soon as possible
- If the tester finds the defect unresolved, the tester should set the status of the defect to Assigned and record the reason in the defect notes
- If the tester has validated that a defect has been fixed, the tester should update the defect to Closed and record the pass reason in the comment area
- If the tester or test lead finds a new issue that is the same as a Closed defect, the tester should set the defect status to Re-open

Defect Status
- New – Defect is first reported and submitted, or reopened to New status because it could not pass the related acceptance criteria
- Rejected – After analysis, the reported defect is confirmed to be invalid by the tester, test lead, or developer
- Assigned – After analysis, the defect is accepted by the development team and assigned to a party for resolution
- In-progress – The developer has started the development work on the defect
- Resolved – The developer has verified that the defect has been resolved by the developer or QA team
- Verified – The defect has been internally validated (by the development leader or a member); the updated application has been deployed to the QA testing environment, or the defect has been verified by the QA team
- On-Hold – Defect resolution is on hold pending confirmation of issues and assignment
- Reopen – The defect is restarted from Pending, Rejected, or Closed status and is assigned to a party for resolution or follow-up
- Closed – The defect has been validated and has passed the acceptance criteria

Exist has a dedicated testing team working with Ace Metrix's development team. Ace Metrix is the new standard in television analytics. Television advertising represents a CMO's biggest risk and largest expense, yet has some of the least effective measurement tools. Ace Metrix solves this by bringing digital technology, analytics, and speed to TV. They measure creative effectiveness for every ad, every single time, in real time, making media dollars work harder.

"Given the complexity of our software, Exist team has evolved quickly and demonstrated a good understanding of our tools which has allowed them to deliver high quality results. Exist records results into our tracking systems with excellent detail which enables our engineering team to diagnose the issues quickly."
-- Greg Falzon, VP, Product Engineering of Ace Metrix
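The status definitions above imply a set of legal transitions that can be checked mechanically. The transition table below is our reading of those definitions, not DEN's actual workflow configuration.

```python
# Allowed defect status transitions, derived from the definitions above.
# Illustrative sketch only; the real workflow is configured in DEN.
TRANSITIONS = {
    "New":         {"Rejected", "Assigned"},
    "Assigned":    {"In-progress", "On-Hold"},
    "In-progress": {"Resolved", "On-Hold"},
    # Retesting may verify, close, or bounce the defect back to Assigned.
    "Resolved":    {"Verified", "Assigned", "Closed"},
    "Verified":    {"Closed"},
    "On-Hold":     {"Assigned", "Reopen"},
    "Rejected":    {"Reopen"},
    "Closed":      {"Reopen"},
    "Reopen":      {"Assigned"},
}

def move(status: str, new_status: str) -> str:
    """Validate and apply a status change for a defect."""
    if new_status not in TRANSITIONS.get(status, set()):
        raise ValueError(f"illegal transition: {status} -> {new_status}")
    return new_status
```

Encoding the workflow this way makes violations (for example, closing a defect that was never resolved) fail loudly instead of silently corrupting the tracker.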
Bug Reporting and Tracking continued ...

Defect Severity. Defect severity is an indicator of how damaging the defect is. We have defined four severity levels:

- Blocker Defect – The system is inoperable. The defect prevents a function on the site from being used, and there is no work-around. The tester or the user cannot move forward with the test.
- Major Defect – A problem that prevents a function from being used, but a work-around is possible. The tester can continue with other testing.
- Minor Defect – A problem making a function difficult to use, but no special work-around is required.
- Trivial Defect – A problem not affecting the actual function, but the behavior is not right.

Test Plan Development

Step 1 – Establishing Test Objectives

Step 1.1 – Identify Test Objectives. Test objectives are identified using the requirements document, wireframes, and user stories as reference materials. The test objectives should be a reflection of the test requirements.
Output: Statement of Test Objectives

Step 1.2 – Define Completion Criteria. A completion criterion is the standard by which a test objective is measured. The test team must be able to determine when a test objective has been satisfied. One or more completion criteria must be specified for each test objective. QA must check that each requirement, and how it is validated, is documented. Important test metrics that should be calculated and reported are the percentage of test requirements that have been covered by test cases, and the percentage of test requirements that have been successfully validated.
Output: Statement of Objective Completion Criteria

Step 1.3 – Prioritize Test Objectives. Test objectives are prioritized on a scale from low to high.
Output: Prioritized Test Objectives

Step 2 – Construct Test Plans. The purpose of the test plan is to specify WHO does WHAT, WHEN, and WHY across the test design, test construction, test execution, and test analysis steps.
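The two metrics named in Step 1.2 reduce to simple set arithmetic over requirement IDs. The sketch below, including its requirement IDs, is purely illustrative.

```python
def coverage_metrics(requirements: set, covered: set, validated: set):
    """Compute the two Step 1.2 metrics:
    - % of test requirements covered by at least one test case
    - % of test requirements successfully validated
    """
    total = len(requirements)
    pct_covered = 100.0 * len(covered & requirements) / total
    pct_validated = 100.0 * len(validated & requirements) / total
    return pct_covered, pct_validated

# Example with made-up requirement IDs:
reqs = {"R1", "R2", "R3", "R4"}
covered = {"R1", "R2", "R3"}   # three of four have test cases
validated = {"R1", "R2"}       # two of four have passed
# coverage_metrics(reqs, covered, validated) -> (75.0, 50.0)
```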
The test plan also describes the test environment and required test resources. The test plan must also provide measurable goals by which the product owner can gauge testing.

The test plan is an operational document that is the basis for testing. It describes test strategies and test cases. This document must be considered an evolving document. In addition, the test plan must be designed with test automation in mind.
Inputs: Requirements document, Software Design Description document

Step 2.1 – Construct the System Test Plan. Identify the business scenarios to be tested. The user will employ the application system to conduct business day in and day out, according to daily, weekly, monthly, and/or yearly business cycles. This task identifies the business processes that can be translated into scripted test scenarios. A system test scenario is a set of test scripts which reflect the user's behavior in a typical business situation.

Step 2.2 – Construct the Integration Test Plan. The workability of each module must be considered when creating the integration test plan. The focus is to create a plan where the integrated modules are tested on whether they do what they should do, and do not do what they should not do.

Step 3 – Design and Construct Test Cases. The purpose of this step is to apply test case design techniques to design and build a set of intelligent test data. The data must address the system as completely as possible, but it must also focus on high-risk areas and system/data components where weaknesses are traditionally found (system boundaries, input value boundaries, output value boundaries, etc.).

Would you benefit from testing earlier in the software development life cycle? Let's talk: firstname.lastname@example.org.

Exist is helping Where2GetIt ensure that quality is built into their system as it continues to scale. Where2GetIt was founded in 1997 and has since grown into an industry-leading provider of location-based digital marketing solutions powering more than 500 brands. Serving more than 500,000 brick-and-mortar locations, Where2GetIt has channel strength that reaches millions of consumers around the world.
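Step 3's emphasis on input and output value boundaries is commonly implemented with boundary-value analysis. Here is a minimal sketch for a hypothetical integer field with a known accepted range; the field and its range are assumptions for illustration.

```python
def boundary_values(lo: int, hi: int) -> list:
    """Classic boundary-value analysis for an integer field that
    accepts values in [lo, hi]: each boundary, its neighbours just
    inside and outside the range, and a nominal mid-range value."""
    return [lo - 1, lo, lo + 1, (lo + hi) // 2, hi - 1, hi, hi + 1]

# e.g. a hypothetical quantity field that accepts 1..100:
# boundary_values(1, 100) -> [0, 1, 2, 50, 99, 100, 101]
```

The out-of-range values (lo - 1, hi + 1) exercise the error handling; the in-range values confirm normal behavior at the edges, where defects traditionally cluster.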
Test Plan Development continued ...

The test data set will be a compromise of economics and need. It is not economically feasible to test every possible situation, so a representative sampling of test conditions will be present in the test data. Created test cases are stored in the test case repository tool, TestLink, or in GoogleDocs for easier collaboration on results.

Step 3.1 – Specify the Test Case Design Strategies. This step identifies which test case design approaches will be used at which levels of testing.

Step 3.2 – Design the Test Cases. This task involves applying the test case design techniques to identify the test data values that will be constructed. The test team must obtain the functional requirements document, user stories from the Project Manager, and wireframe/s in order to create high-level test cases that will be regularly updated as the project progresses. The test team is responsible for designing System Testing test cases. The test case description must be documented manually and stored in GoogleDocs.

Step 3.3 – Construct the Test Data. This is the construction of the actual physical data sets that will satisfy the test cases designed in Step 3.2. The medium in which the data are constructed will be determined at the time of construction.

Step 4 – Execute Integration Tests. The purpose of integration testing is to prove that the software modules work together properly. It should prove that the integrated modules do what they are intended to do, and that they do not do things they are not intended to do.

Step 4.1 – Approve Test Environment. The purpose of this step is to verify that the required test environment is in place before testing starts. The test team must ensure that a QA staging environment is in place where the build for testing will be deployed.
Machine specifications must be considered so that they match (or closely match) the actual production machine specifications. If a project does not require a staging environment, the testers' local machines, where the new builds will be deployed, will suffice.

Step 4.2 – Execute Integration Tests. This task is the responsibility of the test team. Its focus is to prove that the integrated software modules do what they should do, and do not do what they should not do. This test is conducted in a formal manner. The testers use integrated test cases that have predicted outputs. The test results are recorded in structured test logs. The structured test logs and test scripts drive the integration testing process.

Step 4.3 – Retest Problem Areas. This task is cyclic in nature. Retesting will continue until pre-specified stopping criteria are met.

Step 5 – Execute System Tests. The purpose of the system test is to use the system in a "controlled" test environment, but to do so as the user would use the system in the production environment. The system test should prove that the complete system will do what it is supposed to do, and that it will not do anything that it is not supposed to do.

Step 6 – Execute Regression Tests. The primary purpose of regression testing is to prove that system enhancements and routine tuning and maintenance do not affect the original functionality of the system. The secondary purpose is to prove that the enhancement/maintenance changes do what they are intended to do, and do not do anything they are not intended to do. We use Selenium to design and build automated test scripts.
The scripts can then be enhanced and replayed for each subsequent regression test.

We use Selenium for automated testing to:
- Save time by dramatically speeding up testing of web apps by running multiple tests in parallel
- Perform frequent regression testing
- Get instant feedback to enable improved collaboration
- Run virtually limitless iterations of test case execution
- Produce customized reporting of application defects
- Discover defects missed by manual testing

Do you require test automation? Contact: firstname.lastname@example.org.
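The parallel-execution and defect-reporting points above can be sketched with a plain thread pool. The worker functions here are stand-ins for Selenium WebDriver scripts, which are assumed rather than shown; the test names are illustrative.

```python
from concurrent.futures import ThreadPoolExecutor

def run_regression(tests: dict, workers: int = 4) -> dict:
    """Run independent test callables in parallel and report pass/fail
    per test -- a stand-in for parallel Selenium browser sessions."""
    def run_one(item):
        name, fn = item
        try:
            fn()
            return name, "pass"
        except AssertionError as exc:
            return name, f"fail: {exc}"
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(run_one, tests.items()))

# Stand-ins for browser-driven scripts (names are illustrative):
def login_page_loads():
    assert True  # a real script would assert on the page via WebDriver

def checkout_total_is_correct():
    assert 2 + 2 == 5, "deliberately failing example"

suite = {"login_page_loads": login_page_loads,
         "checkout_total_is_correct": checkout_total_is_correct}
```

The returned report maps each test to its outcome, which is the raw material for the customized defect reporting mentioned above.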
Test Case Development

Test cases should be written comprehensively so that they can be used by new team members tasked to execute the testing. Each test case falls into one of the following levels in order to avoid duplicated effort:

- Level 1: We write test cases based on the available specification and user documentation.
- Level 2: We write test cases based on the actual functional and system flow of the application.
- Level 3: Automation of the project. We minimize the tester's interaction with the system so we can focus on testing newly updated functionality rather than remaining busy with regression testing.

Test cases are written in the QA team's test case repository tool, GoogleDocs. The test group is responsible for creating the test cases prior to any test execution. The following should be considered when writing test cases:

- Write the test case first. Before the actual testing commences, QA must ensure that test cases are available and written based on the levels mentioned above.
- Read the specification carefully. Missing a point in the spec is one of the most common errors, and one that cannot be checked any other way than by hand.
- Test the simple stuff. Focusing only on the difficult logical portions of the program is a mistake; most bugs are simple things that are obvious once tested.
- Test the error cases, the rarer cases, and the boundary conditions. Think carefully to ensure that every error condition, all the odd boundary conditions, etc., are tried and tested.

It is the responsibility of the test group to follow up with the Project Manager for any functional specifications, wireframes, user stories, or other documents that can aid in creating high-level test cases.
The writing of test cases will evolve from high-level to more specific as documents are provided to the test group.

Build a dedicated software testing team with us today

Whether you require independent testing services or are looking to get started with testing involved on day 1 -- we can help. Have questions? Need a quote? Drop an email to firstname.lastname@example.org or call 632-9106010 / 1-310-728-2142 local 5304. For additional info, visit www.exist.com/software-testing.