All about testing

Key QA Documents

I. PRAD
The Product Requirement Analysis Document (PRAD) is prepared and reviewed by marketing, sales, and technical product managers. It defines the requirements for the product, the "what". The developer uses it to build the functional specification, and QA uses it as the reference for the first draft of the Test Strategy.

II. Functional Specification
The functional specification is the "how" of the product: it identifies how new features will be implemented, including details such as which database tables a particular search will query. This document is critical to QA because it is used to build the Test Plan. QA is often involved in reviewing the functional specification for clarity and in helping to define the business rules.

III. Test Strategy
The Test Strategy is the first document QA should prepare for any project. It is a living document that should be maintained and updated throughout the project. The first draft should be completed upon approval of the PRAD and sent to the developer and technical product manager for review. The Test Strategy is a high-level document that details the approach QA will follow in testing the given product. It can vary based on the project, but all strategies should include the following criteria:
· Project Overview - What the project is.
· Project Scope - The core components of the product to be tested.
· Testing - Defines the test methodology to be used, the types of testing to be executed (GUI, Functional, etc.), how testing will be prioritized, testing that will and will not be done, and the associated risks. This section should also outline the system configurations that will be tested and the tester assignments for the project.
· Completion Criteria - The objective criteria upon which the team will decide the product is ready for release.
· Schedule - Defines the schedule for the project, including completion dates for the PRAD, Functional Specification, Test Strategy, etc. The schedule section should include build delivery dates, release dates, and the dates for the Readiness Review, QA Process Review, and Release Board meetings.
· Materials Consulted - The documents used to prepare the test strategy.
· Test Setup - Identifies all hardware, software, and personnel prerequisites for testing. This section should also identify any areas that will not be tested (such as 3rd-party application compatibility).

IV. Test Matrix (Test Plan)
The Test Matrix is the Excel template that identifies the test types (GUI, Functional, etc.), the test suites within each type, and the test categories to be tested. The matrix also prioritizes test categories and provides reporting on test coverage:
· Test Summary report
· Test Suite Risk Coverage report
Upon completion of the functional specification and test strategy, QA begins building the master test matrix. This is a living document and can change over the course of the project as testers create new test categories or remove non-relevant areas. Ideally, a master matrix need only be adjusted to include new feature areas or enhancements from release to release on a given product line.

V. Test Cases
As testers build the Master Matrix, they also build their individual test cases. These are the specific functions testers must verify within each test category to qualify the feature. A test case is identified by an ID number and prioritized.
Each test case has the following criteria:
· Purpose - The reason for the test case.
· Steps - A logical sequence of steps the tester must follow to execute the test case.
· Expected Result - The expected result of the test case.
· Actual Result - What actually happened when the test case was executed.
· Status - Identifies whether the test case passed, failed, was blocked, or was skipped:
  · Pass - Actual result matched the expected result.
  · Failed - A bug was discovered that represents a failure of the feature.
  · Blocked - The tester could not execute the test case because of a bug.
  · Skipped - The test case was not executed this round.
· Bug ID - If the test case failed, the bug number of the resulting bug.
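The criteria above can be captured in a simple record. The following sketch (field names and the bug-ID rule are illustrative, not taken from any specific tool) models one test case and enforces that a failed case always references a bug:

```python
from dataclasses import dataclass
from enum import Enum
from typing import List, Optional

class Status(Enum):
    PASS = "Pass"        # actual result matched expected result
    FAILED = "Failed"    # bug discovered; a bug ID is required
    BLOCKED = "Blocked"  # could not execute because of a bug
    SKIPPED = "Skipped"  # not executed this round

@dataclass
class TestCase:
    case_id: str                 # unique test case ID
    priority: int                # 1 = highest
    purpose: str                 # reason for the test case
    steps: List[str]             # logical sequence of steps
    expected_result: str
    actual_result: str = ""
    status: Status = Status.SKIPPED
    bug_id: Optional[str] = None

    def record_result(self, actual: str, status: Status,
                      bug_id: Optional[str] = None) -> None:
        """Record an execution outcome; failed cases must cite a bug ID."""
        if status is Status.FAILED and bug_id is None:
            raise ValueError("a failed test case must reference a bug ID")
        self.actual_result, self.status, self.bug_id = actual, status, bug_id

tc = TestCase("1.1", 1, "File types supported by the program can be opened",
              ["Launch product", "File > Open", "Select a supported file"],
              expected_result="File opens without error")
tc.record_result("File opened", Status.PASS)
```

A tracker or spreadsheet row per test case carries the same fields; the point of the guard in `record_result` is simply that a Failed status without a traceable bug number is useless to developers.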
VI. Test Results by Build
Once QA begins testing, it is incumbent upon them to provide results on a consistent basis to the developers and the technical product manager. This is done in two ways: a completed Test Matrix for each build, and a Results Summary document. For each test cycle, testers should fill in a copy of the project's Master Matrix. This will create the associated Test Coverage reports automatically (Test Coverage by Type and Test Coverage by Risk/Priority), and it should be posted in a place where the necessary individuals can access the information. Since the full Matrix is large and not easily read, it is also recommended that you create a short Results Summary that highlights key information. A Results Summary should include the following:
· Build Number
· Database Version Number
· Install Paths (if applicable)
· Testers
· Scheduled Build Delivery Date
· Actual Build Delivery Date
· Test Start Date
· Scope - What type of testing was planned for this build? For example, was it a partial build? A full-regression build? Scope should identify areas tested and areas not tested.
· Issues - Identifies any problems that hampered testing, represent a trend toward a specific problem area, or are causing the project to slip. For example, in this section you would note whether the build was delivered late, why, and what its impact was on testing.
· Statistics - Notes figures such as the number of bugs found during the cycle, the number of bugs closed during the cycle, etc.

VII. Release Package
The Release Package is the final document QA prepares. It is the compilation of all previous documents plus a release recommendation. Each release package will vary by team and project, but all should include the following information:
· Project Overview - A synopsis of the project, its scope, any problems encountered during the testing cycle, and QA's recommendation to release or not release.
The overview should be a "response" to the test strategy, noting areas where the strategy was successful, areas where it had to be revised, etc. The project overview is also the place for QA to call out any suggestions for process improvements in the next project cycle. Think of the Test Strategy and the Project Overview as "project bookends".
· Project PRAD - The Product Requirement Analysis Document, which defines what functionality was approved for inclusion in the project. If there was no PRAD for the project, that should be clearly noted in the Project Overview, along with the consequences of its absence.
· Functional Specification - The document that defines how functionality will be implemented. If there was no functional specification, that should be clearly noted in the Project Overview, along with the consequences of its absence.
· Test Strategy - The document outlining QA's process for testing the application.
· Results Summaries - Identify the results of each round of testing. These should be accompanied in the Release Package by the corresponding reports for Test Coverage by Test Type and Test Coverage by Risk Type/Priority from the completed Test Matrix for each build. In addition, it is recommended that you include the full Test Matrix results from the test cycle designated as Full Regression.
· Known Issues Document - Primarily for Technical Support. This document identifies workarounds, issues development is aware of but has chosen not to correct, and potential problem areas for clients.
· Installation Instructions - If your product must be installed at the client site, it is recommended to include the Installation Guide and any related documentation as part of the release package.
· Open Defects - The list of defects remaining in the defect tracking system with a status of Open.
Technical Support has access to the system, so a report noting the defect ID, the problem area, and the title should be sufficient.
· Deferred Defects - The list of defects remaining in the defect tracking system with a status of Deferred. Deferred means the technical product manager has decided not to address the issue in the current release.
· Pending Defects - The list of defects remaining in the defect tracking system with a status of Pending. Pending refers to any defect waiting on a decision from the technical product manager before a developer addresses the problem.
· Fixed Defects - The list of defects waiting for verification by QA.
· Closed Defects - The list of defects verified as fixed by QA during the project cycle.

The Release Package is compiled in anticipation of the Readiness Review meeting. It is reviewed by the QA Process Manager during the QA Process Review Meeting and is provided to the Release Board and Technical Support.
· Readiness Review Meeting: A team meeting between the technical product manager, the project developers, and QA, in which the team assesses the readiness of the product for release. It should occur prior to the delivery of the Gold Candidate build. The exact timing will vary by team and project, but the discussion must be held far enough in advance of the scheduled release date that there is sufficient time to warn executive management of a potential delay in the release. The technical product manager or the lead QA may schedule this meeting.
· QA Process Review Meeting: A meeting between the QA Process Manager and the QA staff on the given project. Its intent is to review how well process was followed during the project cycle. This is QA's opportunity to discuss any problems encountered during the cycle that impacted their ability to test effectively, and to review the process as a whole and discuss areas for improvement. After this meeting, the QA Process Manager will give a recommendation as to whether enough of the process was followed to ensure a quality product and thus allow a release. This meeting should take place after the Readiness Review meeting and should be scheduled by the lead QA on the project.
· Release Board Meeting: A meeting for the technical product manager and senior executives to discuss the status of the product and the team's release recommendations. If the results of the Readiness Review and QA Process Review meetings are positive, this meeting may be waived. The technical product manager is responsible for scheduling it. This meeting is the final check before a product is released.
Due to rapid product development cycles, it is rare that QA receives completed PRADs and functional specifications before beginning work on the Test Strategy, Test Matrix, and Test Cases. This work is usually done in parallel. Testers may begin working on the Test Strategy based on partial PRADs or on confirmation from the technical product manager as to what is expected to be in the next release. This is usually enough to draft a high-level strategy outlining immediate resource needs, potential problem areas, and a tentative schedule. The Test Strategy is then updated once the PRAD is approved, and again when the functional specifications are complete enough to provide management with a committed schedule. All drafts of the test strategy should be provided to the technical product manager, and it is QA's responsibility to ensure that information provided in the document (such as potential resource problems) is clearly understood.

If the anticipated release does not represent a new product line, testers can begin the Master Test Matrix and test cases at the same time the project's PRAD is being finalized. Testers can build and/or refine test cases for the new functionality as the functional specification is defined. Testers often contribute to, and are expected to be involved in reviewing, the functional specification.

The results summary document should be prepared at the end of each test cycle and distributed to the developers and the technical product manager. It is designed to inform interested parties of the status of testing and the possible impact on the overall project cycle. The release package is prepared during the last test cycle, in time for the readiness review meeting.

Test Strategy Template
___________________________________

QA Test Strategy: [Product and Version]
[Document version history in format MM-DD-YYYY]

1.0 PROJECT OVERVIEW
[Brief description of project]

1.2 PROJECT SCOPE
[More detailed description of project detailing functionality to be included]

2.0 MATERIALS CONSULTED
[Identify all documentation used to build the test strategy]

3.0 TESTING
· CRITICAL FOCUS AREAS
[Areas identified by developers as potential problems, above and beyond specific feature enhancements or new functionality already given priority 1 status by QA]
· INSTALLATION
[Installation paths to be qualified by QA. Not all products require installation testing; those that do, however, often have myriad installation paths. Due to time and resource constraints, QA must prioritize. Decisions on which installation paths to test should be made in cooperation with the technical product manager. Paths not slated for testing should also be identified here.]
· GUI
[Define what, if any, specific GUI testing will be done]
· FUNCTIONAL
[Define the functionality to be tested and how it will be prioritized]
· INTEGRATION
[Define the potential points of integration with other Media Map products and how they will be prioritized and tested]
· SECURITY
[Define how security issues will be tested and prioritized]
· PERFORMANCE
[Define what, if any, performance testing will be done and its priority]
· FAILURE RECOVERY
[Define what, if any, failure recovery testing will be done and its priority]

3.1 TECHNIQUE
[Technique used for testing: automation vs. manual]

3.2 METHODOLOGY
[Define how testers will go about testing the product. This is where you outline your core strategy. Include in this section anything from tester assignments to tables showing the operating systems and browsers the team will qualify. It is also important to identify any testing limitations and risks]

4.0 TEST SET-UP

4.1 TEST PRE-REQUISITES
[Any non-software or hardware-related item QA needs to test the product. For example, this section should identify contact and test account information for 3rd-party vendors]

4.2 HARDWARE
QA has the following machines available for testing:
Workstations:
Servers: [Include processor, chip, memory, and disk space]
Other: [Identify any other hardware needed, such as modems]
4.3 SOFTWARE
[Identify all software applications QA will qualify with the product and those QA will not qualify. For example, this is where you would list the browsers to be qualified. It is also important to identify what will not be qualified (for example, not testing with Windows 2000)]

4.4 PERSONNEL
[Identify which testers are assigned to the project and who will test what. It is also important to identify who is responsible for the creation of the test strategy, test plan, test cases, release package, documentation review, etc.]

5.0 COMPLETION CRITERIA
[Identify how you will measure whether the product is ready for release. For example, what is the acceptable level of defects in terms of severity, priority, and volume?]

6.0 SCHEDULE

6.1 Project Schedule
· PRD Review completed by [MM-DD-YYYY] - [STATUS]
· Functional Specification completed [MM-DD-YYYY] - [STATUS]
· Release Date approved by [MM-DD-YYYY] - [STATUS]
· Test Strategy completed by [MM-DD-YYYY] - [STATUS]
· Core Test Plan (functional) completed by [MM-DD-YYYY] - [STATUS]
· Readiness Meeting - [STATUS]
· QA Process Review Meeting - [STATUS]
· Release Board Meeting - [STATUS]
· Release on [MM-DD-YYYY] - [STATUS]

6.2 Build Schedule
· Receive first build on [MM-DD-YYYY] - [STATUS]
· Receive second build on [MM-DD-YYYY] - [STATUS]
· Receive third build on [MM-DD-YYYY] - [STATUS]
· Receive fourth build on [MM-DD-YYYY] - [STATUS]
· Receive Code Freeze Build on [MM-DD-YYYY] - [STATUS]
· Receive Full Regression Build on [MM-DD-YYYY] - [STATUS]
· Receive Gold Candidate Build on [MM-DD-YYYY] - [STATUS]
· Final Release on [MM-DD-YYYY] - [STATUS]

7.0 QA Test Matrix and Test Cases

Test Matrices Sample
___________________________________

Question: I need information about metrics used to find faults; it is something related to measurement. I need to know how metrics are used in Quality Assurance.

Answer: You can measure the arrival and departure times of developers, if you have them clock in, but that won't tell you much, since not all work is done in the office (and being in the office doesn't mean they're working). It is, however, still a metric. The same holds for a true "quality metric". The most familiar one is defects per thousand lines of (uncommented) code.
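The defects-per-KLOC figure is mechanical to compute once the counting rules are fixed. A minimal sketch follows; the rules here (non-blank lines, `#`-style comments excluded) are assumptions a team would have to agree on, not a standard:

```python
def defects_per_kloc(defect_count: int, source_lines: list) -> float:
    """Defects per thousand lines of non-blank, non-comment code."""
    counted = [ln for ln in source_lines
               if ln.strip() and not ln.strip().startswith("#")]
    if not counted:
        raise ValueError("no countable lines of code")
    return defect_count / (len(counted) / 1000.0)

# 12 defects against 3,000 counted lines -> 4.0 defects per KLOC
sample = ["x = 1"] * 3000
rate = defects_per_kloc(12, sample)  # 4.0
```

Even this tiny function shows where the metric bends: the result changes the moment you alter what `counted` includes, which is exactly the point the assumptions below make.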
But this metric assumes that:
1) you count the lines of code
2) the complexity of the code isn't an issue
3) the programmers aren't playing games (such as using continuation characters so that what could have been written in one line is spread over five)
4) all defects in the code are uncovered in a single pass
5) each defect discovered is independent of all others
6) defects are uncovered in a linear manner between revisions or builds

The fact is that first you need to know what your goal is. Then you need to discover or create a metric that will help you achieve that goal. Then you need to implement it and be prepared to adjust it. You can't use measurements (metrics) to find faults, at least not in software, so that's not a reasonable goal. You can use metrics to help determine whether most of the defects have already been discovered, or to tell you how much longer it will take to uncover a reasonable number of defects. For either of these metrics you will need to know how previous projects of similar size and complexity (using similar languages, etc.) were done in order to get a reasonable comparison.

Test Case: File Open

#   | Test Description                                                                     | Test Cases/Samples | Pass/Fail | No. of Bugs | Bug# | Comments
N/A | Setup for [Product Name]                                                             | setup              | -         | -           | -    |
1.1 | Test that file types supported by the program can be opened                          | 1.1                | P/F       | #           | #    |
1.2 | Verify all the different ways to open a file (mouse, keyboard, and accelerator keys) | 1.2                | P/F       | #           | #    |
1.3 | Verify files can be opened from local drives as well as the network                  | 1.3                | P/F       | #           | #    |

Test Plan Sample
______________________________________

1. Introduction

Description of this Document
This document is a Test Plan for the -Project name-, produced by Quality Assurance. It describes the testing strategy and approach QA will use to validate the quality of this product prior to release. It also lists the various resources required for the successful completion of this project.

The focus of the -Project name- is to support those new features that will allow easier development, deployment, and maintenance of solutions built upon the -Project name-. Those features include:
[List of the features]
This release of the -Project name- will also include legacy bug fixes and redesigned or previously missing functionality from the previous release:
[List of the features]
The following implementations were made:
[List and description of implementations made]

Related Documents
[List of related documents, such as functional specifications and design specifications]

Schedule and Milestones
[Schedule information and QA testing estimates]

2. Resource Requirements

Hardware
[List of hardware requirements]

Software
[List of software requirements: primary and secondary OS]

Test Tools
Apart from manual tests, the following tools will be used:
-
-
-

Staffing Responsibilities
[List of QA team members and their responsibilities]

Training
[List of trainings required]

3. Features To Be Tested / Test Approach
[List of the features to be tested]

Media Verification
[The process will include installing all possible products from the media and subjecting them to basic sanity testing.]

4. Features Not To Be Tested
[List of the features not to be tested]

5. Test Deliverables
[List of the test cases/matrices or their location]
[List of the features to be automated]

6. Dependencies/Risks
Dependencies
Risks

7. Milestone Criteria

Test Case Design
_______________________________________

Test Case ID: A unique number given to the test case so that it can be identified.
Test Description: A description of what the test case is going to test.
Revision History: Each test case has to have its revision history in order to know when and by whom it was created or modified.
Function to be Tested: The name of the function to be tested.
Environment: Tells in which environment you are testing.
Test Setup: Anything you need to set up outside of your application, for example printers, the network, and so on.
Test Execution: A detailed description of every step of execution.
Expected Results: A description of what you expect the function to do.
Actual Results: Pass/Failed. If passed, what actually happened when you ran the test; if failed, a description of what you observed.

Characteristics of a Good Test: A good test is:
· likely to catch bugs
· not redundant
· neither too simple nor too complex. A test case becomes complex if it has more than one expected result.

Managing Workflow for Software Defects
___________________

Define the Workflow Statuses - When tracking software defects, it is important to define the workflow. Workflow is normally tracked via the "status". Let's create a simple workflow for a development team, where the tester finds a defect and follows it through resolution, quality assurance, and closure. Below is a possible set of statuses (workflow) for this process:
- Active
- Resolved
- QAed
- Committee
- Closed

Flowchart the Workflow - Flowcharting the workflow allows team members to understand the process in full. We created the flowchart using Microsoft Word.

Advanced Workflow - In the example above, we used a simple workflow. However, if your team uses software to manage defects, you should be able to implement a more robust workflow. For example, the software should allow you to define "state transitions", which identify how a defect can move from one status to another. In the example above, you might set up these transitions:
Active - can only transition to Resolved or Committee
Committee - can only transition to Active or Closed
Resolved - can only transition to Active or QAed
QAed - can only transition to Active or Closed
Closed - no transitions allowed

Likewise, the software should also allow you to define which fields (or items) you wish to make required in different states.
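The transition rules above amount to a small state machine. The sketch below is illustrative only; a real defect tracker would enforce this through its own configuration rather than code like this:

```python
# Allowed state transitions for the sample defect workflow above.
TRANSITIONS = {
    "Active":    {"Resolved", "Committee"},
    "Committee": {"Active", "Closed"},
    "Resolved":  {"Active", "QAed"},
    "QAed":      {"Active", "Closed"},
    "Closed":    set(),  # terminal: no transitions allowed
}

def transition(current: str, new: str) -> str:
    """Return the new status, or raise if the move is not allowed."""
    if new not in TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition: {current} -> {new}")
    return new

status = "Active"
status = transition(status, "Resolved")  # developer resolves the defect
status = transition(status, "QAed")      # QA verifies the fix
status = transition(status, "Closed")    # defect closed
```

Encoding the rules as a table rather than scattered conditionals makes it easy to review the workflow with the team and to change it later.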
In the example above, if the defect is changed to Resolved, we may want to require that the programmer enter the resolution information (a resolution code and a description of how it was resolved). Robust defect tracking software will allow you to define the field attributes for each state transition. Software Planner (http://www.SoftwarePlanner.com) does this nicely; you can see how Software Planner handles it by viewing this movie: http://www.pragmaticsw.com/GuidedTours/Default.asp?FileName=Workflow

Defect Severity - Another important aspect of defect tracking is to define your defect severities objectively. If classification is subjective, team members will struggle to classify severity. Below are severities that are objective:
1-Crash - Set when the defect causes the software to crash
2-Major Bug - Set when there is a major defect with NO workaround
3-Workaround - Set when there is a defect, but it has a workaround
4-Trivial - Not a major bug; trivial (e.g. a misspelling)

Defect Priority - As with severity, the priority for resolving the defect should be objective, not subjective. Below are priorities that are objective:
1-Fix ASAP - Highest level of priority; must be fixed as soon as possible
2-Fix Soon - Fix once the priority 1 items are completed
3-Fix If Time - Fix if time allows; otherwise, fix in a future release
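Because both scales are ranked, they map naturally onto enumerations. The sketch below (names are illustrative, not from any particular tool) keeps the numeric ranks so a defect list can be sorted most-urgent-first:

```python
from enum import IntEnum

class Severity(IntEnum):
    CRASH = 1        # defect causes the software to crash
    MAJOR_BUG = 2    # major defect with NO workaround
    WORKAROUND = 3   # defect exists but has a workaround
    TRIVIAL = 4      # e.g. a misspelling

class Priority(IntEnum):
    FIX_ASAP = 1     # must be fixed as soon as possible
    FIX_SOON = 2     # fix once priority 1 items are completed
    FIX_IF_TIME = 3  # otherwise fix in a future release

# Sorting by (priority, severity) puts the most urgent defects first.
defects = [("login crash", Priority.FIX_ASAP, Severity.CRASH),
           ("typo on splash screen", Priority.FIX_IF_TIME, Severity.TRIVIAL),
           ("report export broken", Priority.FIX_ASAP, Severity.MAJOR_BUG)]
defects.sort(key=lambda d: (d[1], d[2]))
```

Keeping the criteria in the comments next to each value is one way to make the "objective, not subjective" rule stick: the definition travels with the code.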
User Acceptance Test Release Template - Upon entering User Acceptance Testing, it is wise to create a document that describes how your QA process went. Here is a User Acceptance Test Release Report template:

Effective Methods of Writing a Defect Description
______________

Testing is commonly used to execute software and find defects. A defect describes any variance or discrepancy between actual and expected results in application software. Defects should be documented in such a way that any developer can understand the defect description and reproduce the problem in their own environment. Defects can be logged using tools such as Siebel, Track, or PVCS, or by documenting them and maintaining the document in a repository. Testers should write the defect description efficiently, which will be useful to others within the project, and the documentation should be transparent.

Best practices for writing defect descriptions:
· Pre-requisites of a defect document
· An abstract of the defect
· Description and observation of the defect
· Screenshot of the defect

Pre-Requisites of a Defect Document
The document should contain a few standard details:
- Author name or submitter name
- Type of defect (e.g. enhancement, issue, or defect)
- Submitted date
- Fixed date
- Status of the defect
- Project phase (e.g. version 1.1, 2.0)
- Version found (daily builds)
- Severity (e.g. Critical, Major, Minor, Cosmetic)

Abstract of a Defect
Testers should specify only a brief description of the defect, e.g. "Unable to update a record".

Description and Observation of a Defect
In the description column, the first few lines should specify the exact problem in the application. The following paragraph should give details such as the steps to reproduce (e.g. starting from application logon through to where the defect was found), followed by an observation, e.g. "System displays an error message: 'Unable to update the record'."
But, according to the functionality, the system should update the record.) It also helps if the tester specifies a few more observation points, such as:
- The defect occurs only in a particular version (e.g. particular Adobe versions for a report)
- The defect is also found in other modules
- Any inconsistency in the application while reproducing the defect (e.g. sometimes it can be reproduced and sometimes not)

Screen Shot of a Defect
Providing a screen shot along with the defect document makes it much easier for developers to identify the defect and its cause, and it helps testers verify that particular defect in the future.
Tips for screen shots:
- The screen shot should be self-explanatory
- Highlight the exact problem area with a figure such as an arrow, box, or circle (this kind of highlighting is especially helpful for GUI/cosmetic defects)
- Use different colors for specific descriptions
  • 11. Conclusion A brief, well-written defect description makes it:
· Easy to analyze the defect and its cause
· Easy to fix the defect
· Possible to avoid re-work
· Possible for testers to save time
· Possible to avoid duplicate defects
· Easy to keep track of defects

What is Testing? _______________________________________

A defect can be caused by a flaw in the application software or by a flaw in the application specification. For example, unexpected (incorrect) results can come from errors made during the construction phase, or from an algorithm incorrectly defined in the specification. Testing is commonly assumed to mean executing software and finding errors. This type of testing is known as dynamic testing, and while valid, it is not the most effective way of testing. Static testing - the review, inspection, and validation of development requirements - is the most effective and cost-efficient way of testing. A structured approach to testing should use both dynamic and static testing techniques.

Testing and Quality Assurance
What is the relationship between testing and Software Quality Assurance (SQA)? An application that fully meets its requirements can be said to exhibit quality. Quality is not based on a subjective assessment, but on a clearly demonstrable and measurable basis. Quality Assurance and Quality Control are not the same. Quality Control is a process directed at validating that a specific deliverable meets standards, is error free, and is the best deliverable that can be produced; it is a responsibility internal to the team. QA, on the other hand, is a review with the goal of improving the process as well as the deliverable, and it is often an external process. QA is an effective approach to producing a high-quality product. One aspect is the process of objectively reviewing project deliverables and the processes that produce them (including testing) to identify defects, and then making recommendations for improvement based on the reviews.
The end result is assurance that the system and application are of high quality, and that the process is working. The achievement of quality goals is well within reach when organizational strategies are used in the testing process. From the client's perspective, an application's quality is high if it meets their expectations.

What is the difference between a bug, a defect, and an error? ___

Question: What is the difference between a bug, a defect, and an error?
Answer: According to the British standard BS 7925-1:
bug - generic term for fault, failure, error: a human action that produces an incorrect result.

Robert Vanderwall offers these formal definitions from IEEE 610.1 (the sub-points are his own):
mistake (an error): A human action that produces an incorrect result.
- A mistake made in translation or interpretation.
- Many taxonomies exist to describe errors.
fault: An incorrect step, process, or data definition.
- The manifestation of the error in the implementation.
- This is really nebulous; it is hard to pin down the 'location'.
failure: An incorrect result.
bug: An informal word describing any of the above. (Not IEEE)

One web site that gave definitions put it this way: a bug exists because something that is supposed to work does not work as you expected, while defects usually occur when a product no longer works the way it used to. Another offers these easy-to-understand definitions: a defect is something that normally works but has something out of spec, while a bug is something that was considered but not implemented, for whatever reason.

I have also seen these arbitrary definitions:
Error: a programming mistake leads to an error.
Bug: a deviation from the expected result.
Defect: a problem in an algorithm leads to failure.
Failure: the result of any of the above.

Compare those to these arbitrary definitions:
Error: when we get the wrong output, i.e. a syntax error or logical error.
Fault: when everything is correct but we are not able to get a result.
  • 12. Failure: when we are not able to insert any input.
See also http://en.wikipedia.org/wiki/Software_bug

In other words, the software industry still cannot agree on the definitions of bug, defect, error, fault, or failure. In essence, if you use a term to mean one specific thing, it may not be understood that way by your audience. Since the terms are not used consistently, you should learn what they mean where you work. These items may also go under the name of PR (Problem Report) or CR (Change Request).

How to Write a Fully Effective Bug Report ___________________

To write a fully effective report you must:
- Explain how to reproduce the problem.
- Analyze the error so you can describe it in a minimum number of steps.
- Write a report that is complete and easy to understand.

Write bug reports immediately; the longer you wait between finding the problem and reporting it, the more likely it is that the description will be incomplete, the problem not reproducible, or simply forgotten.

Writing a one-line report summary (the bug report's title) is an art, and you must master it. Summaries help everyone quickly review outstanding problems and find individual reports. The summary line is the most frequently and carefully read part of the report. When a summary makes a problem sound less severe than it is, managers are more likely to defer it. Alternatively, if your summaries make problems sound more severe than they are, you will gain a reputation for alarmism. Don't use the same summary for two different reports, even if they are similar. The summary line should describe only the problem, not the replication steps. Don't run the summary into the description (the steps to reproduce), as they will usually be printed independently of each other in reports. Ideally, you should be able to write the description clearly enough for a developer to reproduce and fix the problem, and for another QA engineer to verify the fix, without either having to come back to you, the author, for more information.
It is much better to over-communicate in this field than to say too little. Of course it is ideal if the problem is reproducible and you can write down the steps. But if you can't reproduce a bug, and try and try and still can't reproduce it, admit it and write the report anyway. A good programmer can often track down an irreproducible problem from a careful description. For a good discussion on analyzing problems and making them reproducible, see Chapter 5 of Testing Computer Software by Cem Kaner.

10 Rules of Software Testing ________________________________
1. Test early and test often.
2. Integrate the application development and testing life cycles. You'll get better results and you won't have to mediate between two armed camps in your IT shop.
3. Formalize a testing methodology; you'll test everything the same way and you'll get uniform results.
4. Develop a comprehensive test plan; it forms the basis for the testing methodology.
5. Use both static and dynamic testing.
6. Define your expected results.
7. Understand the business reason behind the application. You'll write a better application and better testing scripts.
8. Use multiple levels and types of testing (regression, systems, integration, stress, and load).
9. Review and inspect the work; it will lower costs.
10. Don't let your programmers check their own work; they'll miss their own errors.

Bug Impacts __________________________________________

Low impact
  • 13. This is for minor problems, such as failures at extreme boundary conditions that are unlikely to occur in normal use, or minor errors in layout/formatting. These problems do not impact use of the product in any substantive way.

Medium impact
This is a problem that:
a) Affects a more isolated piece of functionality.
b) Occurs only at certain boundary conditions.
c) Has a workaround (where "don't do that" might be an acceptable answer to the user).
d) Occurs at only one or two customer sites.
e) Is very intermittent.

High impact
This should be used only for serious problems affecting many sites, with no workaround. Frequent or reproducible crashes/core dumps/GPFs fall into this category, as does major functionality that does not work.

Severe impact (Show Stopper)
This should be reserved for only the most catastrophic of problems: data corruption, complete inability to use the product at almost any site, etc. For released products, an urgent bug implies that shipping of the product should stop immediately until the problem is resolved.

Bug Report Components _________________________________
by Mikhail Rakhunov, SQAtester.com contributor

Report number: A unique number given to the bug.
Program / module being tested: The name of the program or module being tested.
Version & release number: The version of the product you are testing.
Problem summary: A one-line data entry field stating precisely what the problem is.
Report type: The type of problem found; for example, a software or hardware bug.
Severity: Normally, how you view the bug. The levels of severity are: Low - Medium - High - Urgent.
Environment: The environment in which the bug was found.
Detailed description: A detailed description of the bug that was found.
How to reproduce: A detailed description of how to reproduce the bug.
Reported by: The name of the person who wrote the report.
Assigned to developer: The name of the developer assigned to fix the bug.
Status:
Open: The status of the bug when it is entered.
Fixed / feedback: The status of the bug when it is fixed.
Closed: The status of the bug when the fix is verified. (A bug can be closed only by a QA person; usually, the problem is closed by the QA manager.)
Deferred: The status of the bug when it is postponed.
User error: The status of the bug when the user made an error.
Not a bug:
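These status values form a small workflow, and a defect tracker can enforce which transitions are legal and which fields become required on a given transition (such as requiring resolution details when a bug is fixed). A minimal sketch, where the transition table itself is an illustrative assumption, not a rule from any particular tool:

```python
# Allowed status transitions for a defect. This table is an illustrative
# assumption; real trackers typically let you configure it per project.
TRANSITIONS = {
    "Open": {"Fixed", "Deferred", "User error", "Not a bug"},
    "Fixed": {"Closed", "Open"},   # QA verifies and closes, or reopens with feedback
    "Deferred": {"Open"},          # a postponed bug can be taken up again
    "User error": {"Closed"},
    "Not a bug": {"Closed"},
    "Closed": set(),               # terminal state
}

def change_status(current: str, new: str, resolution: str = "") -> str:
    """Validate a status change; require resolution details when marking Fixed."""
    if new not in TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition {current!r} -> {new!r}")
    if new == "Fixed" and not resolution:
        raise ValueError("a resolution description is required to mark a bug Fixed")
    return new

print(change_status("Open", "Fixed", resolution="added missing NULL check"))
```

Encoding the workflow this way means a bug cannot jump straight from Open to Closed without QA verification, which mirrors the rule that only a QA person closes a bug.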
  • 14. The status of the bug when it is not a bug.
Priority: Assigned by the project manager, who asks the programmers to fix bugs in priority order.
Resolution: Defines the current disposition of the problem. There are four types of resolution: deferred, not a problem, will not fix, and as designed.

One question on the defects that we raise: we are supposed to give each defect a severity and a priority. The severity can be Major, Minor, or Trivial, and the priority can be 1, 2, or 3 (with 1 being a high-priority defect). My question is: why do we need two parameters, severity and priority, for a defect? Can't we do with only one?

It depends largely on the size of the company. Severity tells us how bad the defect is; priority tells us how soon it is desired to fix the problem. In some companies, the defect reporter sets the severity and a triage team or product management sets the priority. In a small company or project (or product), particularly where there aren't many defects to track, you may not really need both, since a high-severity defect is usually also a high-priority defect. But in a large company, and particularly where there are many defects, using both is a form of risk management. Treat Major as 1 and Trivial as 3. You can add or multiply the two values together (there is only a small difference in the outcome) and then use the resulting risk value to determine how you should address the problem: the lower values must be addressed first, and the higher values can wait.
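The severity-plus-priority combination described above can be sketched as a simple risk score. This is only an illustration of that arithmetic, using the mapping Major=1, Minor=2, Trivial=3 and treating lower scores as more urgent:

```python
# Numeric mapping suggested in the text: Major would be 1 and Trivial would be 3.
SEVERITY = {"Major": 1, "Minor": 2, "Trivial": 3}

def risk_value(severity: str, priority: int, multiply: bool = False) -> int:
    """Combine severity and priority into one risk score.
    Lower scores are more urgent and must be addressed first."""
    s = SEVERITY[severity]
    return s * priority if multiply else s + priority

# Hypothetical defect list: (severity, priority) pairs.
defects = [("Trivial", 3), ("Minor", 1), ("Major", 1)]

# Sort so the most urgent defects (lowest risk value) come first.
for sev, pri in sorted(defects, key=lambda d: risk_value(*d)):
    print(sev, pri, risk_value(sev, pri))
```

As the text notes, adding and multiplying give only slightly different orderings; either way a Major/priority-1 defect scores lowest and is addressed first, while a Trivial/priority-3 defect scores highest and can wait.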