ISTQB Testing Fundamentals

Transcript

  • 1. ISTQB Testing Fundamentals – by Portia Gautam, 6/16/2011
  • 2. Switch off your mobile phone, or put it on silent mode.
  • 3. Objective
    – Why testing is necessary
    – The testing process
    – Software development models
    – Understanding key testing concepts
    – Distinguishing among the various testing types
  • 4. Causes of software defects
    – Error (mistake) – a human action that produces an incorrect result and may lead directly to a problem
    – Fault (or defect) – an incorrect step, process, or data definition in a program
    – Failure – the inability of a system or component to perform its required function within the specified performance requirements
    Defects and failures may arise from:
    – Errors in the specification, design and implementation of the software and system
    – Errors in use of the system
    – Environmental conditions
    – Intentional damage
    – Potential consequences of earlier errors, intentional damage, defects and failures
  • 5. How costly is a defect? The cost of finding and fixing defects rises considerably across the life cycle.
  • 6. Why do we need testing?
    – Because we all make mistakes, we all need to check our own work
    – It is a common misconception that the goal of testing is to eliminate all bugs, where a bug is a defect – a function that does not work as defined in the requirements definition document
    – A tester's job is to find defects; the goal of the testing project is to reduce risk, not to produce perfect software
    – To reduce risk, we work out the best "road-crossing" strategy
  • 7. Testing & quality
    – Testing helps us measure the quality of software in terms of the number of defects found, the tests run, and the parts of the system covered by the tests
    – Testing can give confidence in the quality of the software if it finds few or no defects
    Quality: projects aim to deliver software to specification; for the project to deliver what the customer needs, the specification must be correct. Quality assessment consists of:
    – Verification: are we building the product right? Make sure the product behaves the way we want it to
    – Validation: are we building the right product? Check whether the product built is what the customer asked for; validation always involves comparison against requirements
  • 8. When should testing be completed?
    Testing principle – Exhaustive testing is impossible. There are three choices:
    – Test everything
    – Test nothing
    – Test some of the software
    Q: How many tests are needed to completely test a field which accepts values 1 to 10? It is unlikely that the project timescales would allow for this number of tests (see the sketch after this slide).
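A minimal sketch of the point above, assuming a hypothetical validate_field component that accepts integers 1 to 10: exhaustive testing would mean trying every possible input (all integers, other types, malformed values), which no project timescale allows, whereas a small risk-based selection of boundary and equivalence-partition values gives reasonable coverage.

    # Hypothetical component under test: accepts integers 1..10 inclusive.
    def validate_field(value):
        return isinstance(value, int) and 1 <= value <= 10

    # Exhaustive testing would require every possible input (all integers,
    # floats, strings, None, ...) -- impossible in practice.
    # A risk-based selection uses equivalence partitions and boundary values:
    def test_field_boundaries():
        assert not validate_field(0)    # just below the lower boundary
        assert validate_field(1)        # lower boundary
        assert validate_field(10)       # upper boundary
        assert not validate_field(11)   # just above the upper boundary
        assert validate_field(5)        # representative valid partition value
        assert not validate_field("5")  # representative invalid type

    if __name__ == "__main__":
        test_field_boundaries()
        print("boundary/partition tests passed")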
  • 9. Contd. – A test approach
    Instead we need a test approach which provides the right amount of testing for this project, these customers (and other stakeholders) and this software. How is this done?
    – Align the testing with the risks for the customers, the stakeholders, the project and the software
    – Assess and manage risk (a key activity and reason for testing)
    The right answer to "how much testing is enough?" depends on:
    – The level of risk, including technical and business risks related to the product, and project constraints such as time and budget
    – The testing effort can be varied based on the level of risk in different areas
  • 10. What is testing?
    Testing is a process rather than a single activity – a series of activities carried out throughout the software development life cycle:
    – Planning, preparation and evaluation, both static and dynamic
    – Applied to software products and related work products
    – In order to detect defects
  • 11. Contd.
    The common perception that testing only consists of running tests (i.e. executing the software) is incomplete; test execution is one of the testing activities, but not the whole testing process. Testing also aims to:
    – Determine that software products satisfy specified requirements – do they meet the requirements?
    – Demonstrate that software products are fit for purpose – does the software do enough to help the users carry out their tasks?
  • 12. Testing objectives
    Testing principle – Testing shows the presence of defects
    – Testing can show that defects are present, but cannot prove that there are no defects
    – Testing reduces the probability of undiscovered defects remaining in the software, but even if no defects are found, this is not a proof of correctness
    Test objectives include:
    – Finding defects
    – Gaining confidence in, and providing information about, the level of quality
    – Preventing defects
    Pesticide paradox – If the same tests are repeated over and over again, eventually the same set of test cases will no longer find any new bugs. To overcome this, the test cases need to be regularly reviewed and revised, and new and different tests need to be written to exercise different parts of the software or system and potentially find more defects.
  • 13. Testing Principles
  • 14. Debugging removes defects
    – When a test finds a defect that must be fixed, a programmer must do some work to locate the defect in the code and make the fix
    – The programmer examines the code for the immediate cause of the problem, repairs the code and checks that the code now executes as expected
    – The fix is then tested separately
  • 15. Fundamental test activities
    The five basic activities are:
    1. Test planning and control – major tasks of planning:
    – Determine the scope and risks and identify the objectives of testing
    – Determine the entry criteria
    – Determine the test approach (techniques, test items, coverage)
    – Implement the test policy and/or the test strategy
    – Determine the required test resources (e.g. people, test environment, PCs)
    – Schedule test analysis and design tasks, test implementation, execution and evaluation
    – Determine the exit criteria
    Major tasks of controlling:
    – Measure and analyze the results of reviews and testing
    – Monitor and document progress, test coverage and exit criteria
    – Provide information on testing
    – Initiate corrective actions
    – Make decisions
  • 16. Contd.
    2. Test analysis and design – major tasks:
    – Review the test basis (such as the product risk analysis, requirements, architecture, design specifications, and interfaces), examining the specifications for the software
    – Identify test conditions based on analysis of the test items and their specifications; this gives a high-level list
    – Design the tests, using techniques to help select representative tests that relate to particular aspects of the software which carry risks or which are of particular interest
    – Evaluate the testability of the requirements and system
    – Design the test environment set-up and identify any required infrastructure and tools
  • 17. Contd.
    3. Test implementation and execution – major tasks of implementation:
    – Develop and prioritize test cases using the techniques
    – Create test suites from the test cases for efficient test execution
    – Implement and verify the environment
    Major tasks of execution (a minimal sketch of the execute/compare/log cycle follows this slide):
    – Execute the test suites and individual test cases
    – Log the outcome of test execution and record the identities and versions of the software under test, test tools and testware
    – Compare actual results with expected results
    – Report discrepancies as incidents/bugs
    – Repeat test activities (confirmation testing or re-testing)
    4. Evaluating exit criteria and reporting:
    – Check test logs against the exit criteria specified in test planning
    – Assess if more tests are needed or if the exit criteria specified should be changed
    – Write a test summary report for stakeholders
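A minimal sketch of the execute/compare/log cycle described in the execution tasks above. The add function and the test-case table are hypothetical stand-ins, not part of the slides:

    # Hypothetical system under test.
    def add(a, b):
        return a + b

    # Each test case: (identifier, input arguments, expected result).
    test_cases = [
        ("TC-01", (2, 3), 5),
        ("TC-02", (-1, 1), 0),
        ("TC-03", (0, 0), 0),
    ]

    incidents = []
    for tc_id, args, expected in test_cases:
        actual = add(*args)                                    # execute the test case
        status = "PASS" if actual == expected else "FAIL"      # compare actual vs expected
        print(f"{tc_id}: {status} (expected={expected}, actual={actual})")  # log the outcome
        if status == "FAIL":
            incidents.append((tc_id, expected, actual))        # report discrepancy as an incident

    print(f"{len(incidents)} incident(s) raised")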
  • 18. Contd.
    5. Test closure activities:
    – Check which planned deliverables have been delivered
    – Ensure all incident reports have been resolved through defect repair or deferral
    – Finalize and archive testware, such as scripts, the test environment, and any other test infrastructure, for later reuse
    – Hand over testware to the maintenance organization that will support the software
    – Make any bug fixes or maintenance changes available for use in confirmation testing and regression testing
    – Evaluate how the testing went and analyze lessons learned for future releases and projects
    – Include process improvements for the software development life cycle as a whole, as well as improvement of the test process
  • 19. Entry & exit criteria
    Entry criteria – ensure that the proper environment is in place to support the system test process. Items include:
    – All test hardware platforms must have been successfully installed, configured and be functioning properly
    – All standard software tools, including testing tools, must have been successfully installed and be functioning properly
    – All documentation and the design of the architecture must be available
    – All personnel involved in the system test effort must be trained in the tools to be used during the testing process
    – A separate QA environment must be available
    – Proper test data is available
    Exit criteria – ensure that the application has been satisfactorily completed before exiting the system test stage and declaring the application complete. Items that must be met:
    – The application must provide the required services
    – All application documentation has been completed and is up to date
    – 100% of all Priority 1 and Priority 2 bugs must be resolved
  • 20. The psychology of testing
    The success of testing is influenced by psychological factors:
    – Clear objectives
    – A balance of self-testing and independent testing
    – Courteous communication and constructive feedback on defects
    Independent testing – who is a tester?
    – It is difficult to find our own mistakes
    – Business analysts, marketing staff, architects and programmers often rely on others to help test their work
    – This other person might be a fellow analyst, designer or developer
    – A person who will use the software may help test it
    – Business analysts who worked on the requirements and design may perform some tests
    – Testing specialists – professional testers – are often involved
    – In fact, testing may involve a succession of people, each carrying out a different level of testing; this allows an independent test of the system
  • 21. Software development models
    The waterfall model was one of the earliest models to be designed:
    – Defects were found too late in the life cycle, as testing was not involved until the end of the project
    – Testing also added lead time due to its late involvement
    V-model: the testing activities are carried out in parallel with development activities
  • 22. Levels of testing in the V-model
    Component testing:
    – Also known as unit, module or program testing; searches for defects in, and verifies the functioning of, software (e.g. modules, programs, objects, classes) that is separately testable
    – Stubs and drivers are used to replace missing software and simulate the interface between software components in a simple manner (see the sketch after this slide)
    – A stub is called from the software component to be tested; a driver calls the component to be tested
    – Component testing may include testing of functionality and of specific non-functional characteristics such as resource behaviour (e.g. memory leaks), performance or robustness, as well as structural testing (e.g. decision coverage)
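A minimal sketch of a stub and a driver, using hypothetical names (order_total, TaxServiceStub) rather than anything from the deck: the stub is called by the component under test in place of a missing dependency, and the driver (here a test function) calls the component under test:

    # Component under test: computes an order total using a tax service
    # that, in the real system, would be a separate component.
    def order_total(net_amount, tax_service):
        return net_amount + tax_service.tax_for(net_amount)

    # Stub: a simple stand-in called *by* the component under test,
    # replacing the real (possibly unfinished) tax component.
    class TaxServiceStub:
        def tax_for(self, amount):
            return round(amount * 0.10, 2)   # fixed, predictable behaviour

    # Driver: test code that *calls* the component under test.
    def test_order_total_with_stubbed_tax():
        assert order_total(100.0, TaxServiceStub()) == 110.0

    if __name__ == "__main__":
        test_order_total_with_stubbed_tax()
        print("component test passed")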
  • 23. Contd.
    Integration testing – tests interfaces between components, interactions with different parts of a system (such as the operating system, file system and hardware), or interfaces between systems. Types:
    – Top-down: takes place from top to bottom, following the control flow or architectural structure (e.g. starting from the GUI or main menu); components or systems are substituted by stubs
    – Bottom-up: takes place from the bottom of the control flow upwards; components or systems are substituted by drivers
    – Functional incremental: integration and testing take place on the basis of the functions or functionality, as documented in the functional specification
    Big bang integration testing:
    – All components or systems are integrated simultaneously, after which everything is tested as a whole
    – Advantage: everything is finished before integration testing starts; there is no need to simulate (as yet unfinished) parts
    – Disadvantage: in general it is time-consuming and difficult to trace the cause of failures with such late integration
  • 24. Contd.
    System testing:
    – Concerned with the behaviour of the whole system/product
    – Most often the final test on behalf of development, to verify that the system to be delivered meets the specification; its purpose may be to find as many defects as possible
    – Should investigate both functional and non-functional requirements of the system
    – Requires a controlled test environment with regard to, amongst other things, control of the software versions, testware and test data
    Acceptance testing – it asks:
    – Can the system be released?
    – What, if any, are the outstanding (business) risks?
    – Has development met its obligations?
    It is the responsibility of the user or customer, and sometimes other stakeholders.
  • 25. Iterative life cycles
    A common feature of iterative approaches is that the delivery is divided into increments or builds, with each increment adding new functionality. Examples are:
    – Prototyping
    – Rapid Application Development (RAD)
    – Rational Unified Process (RUP)
    – Agile development
  • 26. Contd.
    Rapid Application Development:
    – Formally, a parallel development of functions and subsequent integration
    – Components/functions are developed in parallel as mini-projects; the developments are time-boxed, delivered, and then assembled into a working prototype
    – This gives the customer something to see and use very quickly, and a basis for providing feedback on the delivery and their requirements
    Agile development:
    – Extreme Programming (XP) is currently one of the best-known agile development life cycle models
    – It promotes the generation of business stories to define the functionality
  • 27. Different Levels of Testing
  • 28. Functional testing
    – The techniques used for functional testing are often specification-based, but experience-based techniques (to be discussed in later sessions) can also be used
    – Test conditions and test cases are derived from the functionality of the component or system
    – A model can also be developed, such as a process model, state transition model or plain-language specification
    Testing functionality can be done from two perspectives:
    – Requirements-based: uses a specification of the functional requirements for the system to design tests; prioritize the requirements based on risk criteria, use this to prioritize the tests, and put the testing effort on the most critical tests (see the sketch after this slide)
    – Business-process-based: uses knowledge of the business processes; business processes describe the scenarios involved in the day-to-day business use of the system, and use cases are a very useful basis for test cases from a business perspective
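A minimal sketch of requirements-based test derivation, with hypothetical requirement IDs and a hypothetical login function: each test case is traced to a requirement and the cases are run in risk-priority order so the most critical tests come first:

    # Hypothetical function under test, specified by requirements REQ-01..REQ-02.
    def login(username, password):
        return username == "admin" and password == "s3cret"

    # Test cases traced to requirements, with a risk-based priority (1 = highest).
    test_cases = [
        {"req": "REQ-01", "priority": 1, "args": ("admin", "s3cret"), "expected": True},
        {"req": "REQ-02", "priority": 1, "args": ("admin", "wrong"),  "expected": False},
        {"req": "REQ-02", "priority": 2, "args": ("guest", "s3cret"), "expected": False},
    ]

    # Run the most critical tests first.
    for tc in sorted(test_cases, key=lambda t: t["priority"]):
        actual = login(*tc["args"])
        result = "PASS" if actual == tc["expected"] else "FAIL"
        print(f"{tc['req']} (P{tc['priority']}): {result}")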
  • 29. Non-functional testing
    – Testing of the quality characteristics, or non-functional attributes, of the system; it is not confined to performance, stress and load testing
    – Testing of how well the system works
    The characteristics are:
    – Functionality: suitability, accuracy, security, interoperability
    – Reliability: robustness, fault-tolerance, recoverability
    – Usability: understandability, learnability
    – Efficiency: time behaviour (performance), resource utilization
    – Maintainability: stability, testability
    – Portability: adaptability, installability, replaceability
    Structural testing – to be discussed in later sessions
  • 30. Testing related to changes
    Confirmation testing (re-testing):
    – When a defect is fixed, a new version of the software is delivered
    – We need to execute the test again to confirm that the defect has indeed been fixed
    – Most important is to ensure that the test is executed in exactly the same way as it was the first time, using the same inputs, data and environment
    Regression testing (a minimal sketch of both follows this slide):
    – The purpose is to verify that modifications to the software or the environment have not caused unintended adverse side effects, and that the system still meets its requirements
    – Regression tests are executed whenever the software changes, either as a result of fixes or of new or changed functionality
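A minimal sketch, with hypothetical names, of how a confirmation test pins a fixed defect and then stays in the suite so later runs double as regression tests against unintended side effects:

    # Hypothetical function that once contained defect BUG-101 (it crashed on
    # an empty list); the fix is confirmed and then guarded by re-running the
    # same test on every later change.
    def average(values):
        if not values:          # the fix for BUG-101
            return 0.0
        return sum(values) / len(values)

    def test_bug_101_empty_list():
        # Confirmation test: same input, data and environment as the
        # failing run that originally revealed the defect.
        assert average([]) == 0.0

    def test_average_unchanged_behaviour():
        # Regression test: existing behaviour must still meet requirements.
        assert average([2, 4, 6]) == 4.0

    if __name__ == "__main__":
        test_bug_101_empty_list()
        test_average_unchanged_behaviour()
        print("confirmation and regression tests passed")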
  • 31. Maintenance testing
    – Once deployed, a system is often in service for years
    – During this time the system and its operational environment are often corrected, changed or extended; testing in this life cycle phase is called maintenance testing
    – A maintenance test process usually begins with the receipt of an application for a change or a release plan
    It consists of two parts:
    – Testing the changes
    – Regression tests to show that the rest of the system has not been affected by the maintenance work
    Note: maintenance testing is different from maintainability testing, which measures how easy it is to maintain the system.
  • 32. Triggers for maintenance testing
    Maintenance testing is triggered by modifications, migration, or retirement of the system:
    – Maintenance testing for migration (e.g. from one platform to another) should include operational testing of the new environment as well as of the changed software
    – Maintenance testing for the retirement of a system may include the testing of data migration or archiving, if long data-retention periods are required
    Planned modification types:
    – Perfective modifications – adapting software to the users' wishes, for instance by supplying new functions or enhancing performance
    – Adaptive modifications – adapting software to environmental changes such as new hardware, new systems software or new legislation
    – Corrective planned modifications – deferrable correction of defects
    Ad-hoc corrective modifications:
    – Concerned with defects requiring an immediate solution, e.g. a production run that dumps late at night, a network that goes down with a few hundred users online, or a mailing with incorrect addresses
  • 33. Testing effort
    It is divided into two categories:
    – Black box testing – tests the system through its external interface, exercising combinations of end-user actions; it assumes no knowledge of the code and is intended to simulate the end-user experience
    – White box testing (also known as glass box, clear box, and open box testing) – test cases are created by looking at the code to detect any potential failure scenarios
    Note: a failure of a white box test may result in a change that requires all black box testing to be repeated and the white box testing paths to be reviewed and possibly changed. (A sketch contrasting the two views follows this slide.)
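A minimal sketch contrasting the two views on a hypothetical grade function: the black box test is derived from the specification alone, while the white box test is written by looking at the code so that both branches of the decision are exercised:

    # Specification: return "pass" for scores of 50 or more, otherwise "fail".
    def grade(score):
        if score >= 50:
            return "pass"
        return "fail"

    def test_black_box():
        # Derived from the specification alone, with no knowledge of the code.
        assert grade(75) == "pass"
        assert grade(20) == "fail"

    def test_white_box():
        # Derived from the code structure: one test per branch of the 'if',
        # including the boundary where the decision flips.
        assert grade(50) == "pass"   # branch taken at the boundary
        assert grade(49) == "fail"   # branch not taken

    if __name__ == "__main__":
        test_black_box()
        test_white_box()
        print("black-box and white-box tests passed")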
  • 34. "Thank you" for your learning contribution!
    Check the new L&D Reward & Recognition Policy on Confluence under Global Training.
    For any queries, dial Learning: Noida 4444, Nagpur 333, Pune 5222, Bangalore 111, or email learning@globallogic.com
    Please submit the online feedback to help L&D make continuous improvements; participation credit will be given only on feedback submission.
