Istqb intro with question answer for exam preparation

This deck is useful for ISTQB exam preparation.


Introduction
Let me start with a short intro: I am a self-employed musician and web developer, and I have spent a lot of time understanding the software testing field and the stages of evolution within it. Here I bring you a very effective small dose of testing theory. (Thanks to Mr Suresh Reddy, my senior mentor at NRSTT Hyderabad, for all the guidance and coaching.) Validation and verification of any task can be called testing. I have divided the Concise Testing Theory (C.T.T.) into the following sections:
What is software testing? Who can do it? & Terminology
Software Development Life Cycle (SDLC)
Testing methodology & levels of testing
Types of testing
Software testing life cycle
Bug life cycle (BLC)
Automation testing
Performance testing
ISTQB & software certification
Each section is explained in detail and will be enhanced as and when needed. Good luck, and enjoy the syllabi. I will shortly create concise knols on SQL Server & QTP. I strongly believe that a good understanding of CTT can get you a job in IT (at least in India, the US, the UK and Australia) very soon. The minimum you need is to be a graduate from any field (15 years of formal education).
Software Testing?
Testing, in general, is a process carried out by individuals or groups across all domains where requirements exist. In more software-specific language, the comparison of the EV (expected value) and the AV (actual value) is known as testing. To get a better understanding of that sentence, let's see a small example.
Example 1: I have a bulb, and my requirement is that when I switch it on, it should glow. So my next step is to identify three things:
Action to be performed: turn the switch on.
Expected value: the bulb should glow.
Actual value: the present status of the bulb after performing the action (on or off).
Now it is time for our test to generate a result based on a simple logic: IF EV = AV, then the result is PASS; any deviation makes the result FAIL. Ideally, based on the margin of difference between the EV and the AV, the severity of the defect is decided, and the defect is subjected to further rectification.
Conclusive definition of testing: Testing can be defined as the process in which defects are identified, isolated and subjected to rectification, and in which it is re-ensured that the end result is defect-free (highest quality), in order to achieve maximum customer satisfaction. (An interview definition.)
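Example 1 translates directly into an executable check. Below is a minimal pytest-style sketch; the Bulb class is a hypothetical stand-in, and only the EV/AV comparison mirrors the text.

```python
# Minimal sketch of the EV = AV logic from example 1.
# The Bulb class is hypothetical; the comparison is the point.

class Bulb:
    def __init__(self):
        self.is_on = False            # actual state of the bulb

    def switch_on(self):
        self.is_on = True             # a defect here would make the test FAIL


def test_bulb_glows_when_switched_on():
    bulb = Bulb()
    bulb.switch_on()                  # action to be performed
    expected_value = True             # EV: the bulb should glow
    actual_value = bulb.is_on         # AV: status after the action
    # IF EV = AV then the result is PASS, else FAIL.
    assert actual_value == expected_value, "FAIL: bulb did not glow"
```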
Who can do testing & things to understand (prerequisites for understanding software testing effectively)
As we have understood so far, testing is a very general task and can be performed by anybody. But we have to understand the importance of software testing first. In the earlier days of software development there was very little support for developing programs, and commercially only a few rich companies used advanced software solutions. After three decades of research and development, the IT sector has become capable of developing advanced software even for space exploration. Programming languages in the beginning were very complex and not feasible for larger calculations, hardware was costly and very limited in its capacity to perform continuous tasks, and networking bandwidth was a next-generation topic. But now object-oriented programming languages, terabytes of memory and bandwidths of hundreds of Mbps are the foundations of the future's interactive web generation. Sites like Google, MSN and Yahoo make life much easier to live, with chat, conferencing, data management, etc.
So, coming back to our goal (testing), we need to realize that software is here to stay and become a part of mankind. It is the reality of tomorrow and a necessity to be learnt for healthy survival in the coming technological era. More and more companies are spending billions to push the limits, and in all of this one thing is attaining prominence more than any other: QUALITY. I used so much background to stress quality because it is the final and direct goal of quality assurance people, or test engineers.
Quality can be defined as the justification of all of the user's requirements in a product, with the presence of value and safety.
Quality is thus a much wider requirement of the masses when it comes to the consumption of any service, so the job of a tester is to act like a user and verify the product's fitness (defect-free = quality).
A defect can be defined as a deviation from a user's requirement in a particular product or service. The same product may have different levels of defects, or no defects, based on the tastes and requirements of different users.
As more and more business solutions move from man to machine, both development and testing are headed for an unprecedented boom. Developers are under constant pressure to learn and to keep themselves up to date with the latest programming languages, to keep pace with users' requirements. As a result, testing is becoming an integral part of software companies' efforts to produce quality results and beat the competition. Previously, developers used to test the applications besides coding them, but this had disadvantages, such as time consumption, the emotional block against finding defects in one's own creation, and post-delivery maintenance, which is a common requirement now. So, finally, testers are appointed to test applications simultaneously with development, in order to identify defects as early as possible, reduce time loss and improve the efficiency of the developers.
Who can do it? Well, anyone can do it. You need to be a graduate to enter software testing, with a comfortable relationship with the PC. All you have to keep in mind is that your job is to act like a third person or end user, taste (test) the food before they eat it, and report to the developer. We are not correcting the mistake here; we only identify the defects, report them, and wait for the next release to check whether the defects are rectified and whether new defects have arisen.
Some important terminologies follow.
 Project: If something is developed based on a particular user's (or users') requirements and is used exclusively by them, it is known as a project. The finance is arranged by the client to complete the project successfully.
 Product: If something is developed based on the company's own specification (after a general survey of market requirements) and can be used by multiple sets of users, it is known as a product. The company has to fund the entire development and usually expects to break even after a successful market launch.
 Defect vs Defective: If the product satisfies only partial requirements of a particular customer but is functionally usable, we say the product has a defect. If the product is functionally unusable, then even if some requirements are satisfied, the product is tagged as defective.
 Quality Assurance: Quality assurance is the process of monitoring and guiding each and every role in the organization in order to make them perform their tasks according to the company's process guidelines.
 Quality Control or Validation: Checking whether the end result is the right product; in other words, whether the developed service meets all the requirements or not.
 Quality Assurance Verification: The method of checking whether the developed product or project has followed the right process or guidelines through all phases of development.
 NCR: Whenever a role does not follow the process in performing an assigned task, the penalty given is known as an NCR (Non-Conformance Raised).
 Inspection: A process of checking conducted by a group of members on a role or a department suddenly, without any prior intimation.
 Audit: A process of checking conducted on roles or a department with prior notice, well in advance.
 SCM (Software Configuration Management): A process carried out by a team to achieve version control and change control. In other terms, the SCM team is responsible for updating all the common documents used across various domains to maintain uniformity, and also for naming the project and updating its version numbers by gauging the amount of change in the application or service after development.
 Common Repository: A server accessible by authorized users to store and retrieve information safely and securely.
 Baselining vs Publishing: The process of finalizing the documents vs making them available to all the relevant resources.
 Release: The process of sending the application from the development department to the testing department, or from the company to the market.
 SRN (Software Release Note): A note prepared by the development department and sent to the testing department during the release. It contains information about the path of the build, installation information, test data, a list of known issues, version number, date, credentials, etc.
 SDN (Software Delivery Note): A note prepared by a team under the guidance of a project manager and submitted to the customer during delivery. It contains a carefully crafted user manual and a list of known issues and workarounds.
 Slippage: The extra time taken to accomplish a task is known as slippage.
 Metrics vs Matrix: A clear measurement of any task is defined as a metric, whereas a tabular format with linking information, used to trace any information back through references, is called a matrix.
 Template vs Document: A template is a pre-defined questionnaire or a professional fill-in-the-blanks set-up, used to prepare and finalize a document. The advantage of templates is uniformity and easier comprehension throughout all the documentation in a project, group, department, company or even larger body.
 Change Request vs Impact Analysis: A change request is the customer's proposal to bring some changes into the project by filling in a CRT (change request template). Impact analysis is a study carried out by the business analysts to gauge how much impact will fall on the already developed part of the application, and how feasible it is to go ahead with the change demanded by the customer.
Software Development Life Cycle (SDLC)
This is nothing but a model used to achieve efficient results in software companies, and it consists of 6 main phases. We will discuss each stage descriptively, dissected into four parts: tasks, roles, process and proof of phase completion. Remember that the SDLC is similar to the waterfall model, where the output of one phase acts as the input for the next phase of the cycle.
SDLC phases (image: see references below): Initial phase, Analysis phase, Design phase, Coding phase, Testing phase, Delivery and Maintenance.
Initial phase / Requirements phase:
Task: Interacting with the customer and gathering the requirements.
Roles: Business Analyst (BA) and Engagement Manager (EM).
Process: First the BA takes an appointment with the customer, collects the requirements template, meets the customer, gathers the requirements and comes back to the company with the requirements document. Then the EM goes through the requirements document, tries to find additional requirements, gets a prototype (a dummy, similar to the end product) developed in order to pin down exact details in the case of unclear requirements or confused customers, and also deals with any excess cost of the project.
Proof: The requirements document is the proof of completion of the first phase of the SDLC.
Alternate names of the requirements document (various companies and environments use different terminologies, but the logic is the same):
FRS: Functional Requirements Specification.
CRS: Customer/Client Requirements Specification.
URS: User Requirements Specification.
BRS: Business Requirements Specification.
BDD: Business Design Document.
BD: Business Document.
Analysis phase:
(Analysis image: see references below)
Tasks: Feasibility study, tentative planning, technology selection and requirements analysis.
Roles: System Analyst (SA), Project Manager (PM), Technical Manager (TM).
Process: A detailed study of the requirements, judging their possibilities and scope, is known as a feasibility study; it is usually done by manager-level teams. The next step is a tentative scheduling of staff to initiate the project, and the selection of a suitable technology to develop the project effectively (the customer's choice is given first preference, if feasible). Finally, the hardware, software and human resources required are listed in a document to baseline the project.
Proof: The proof document of the analysis phase is the SRS (System Requirements Specification).
Design phase:
Tasks: High-Level Design (HLD), Low-Level Design (LLD).
Roles: Chief Architect (handles the HLD), Technical Lead (involved in the LLD).
Process: The Chief Architect divides the whole project into modules by drawing graphical layouts using the Unified Modeling Language (UML). The Technical Lead further divides those modules into sub-modules, also using UML.
Both are responsible for envisioning the GUI (the Graphical User Interface: the screen where the user interacts with the application) and for developing the pseudo code (a dummy code; usually a set of English instructions to help the developers code the application).
Coding phase:
Task: Developing the programs.
Roles: Developers/programmers.
Process: The developers take the support of the technical design document and follow the coding standards while the actual source code is developed. Some industry-standard coding practices include indentation, color coding, commenting, etc.
Proof: The proof document of the coding phase is the Source Code Document (SCD).
Testing phase:
Task: Testing.
Roles: Test engineers, Quality Assurance team.
Process: Since this is the core of our article, we will look at the testing process in an IT environment in a descriptive fashion (a small sketch of the records involved follows this list):
 First, the requirements document is received by the testing department.
 The test engineers review the requirements in order to understand them.
 While reviewing, if any doubts arise, the testers list all the unclear requirements in a document named the Requirements Clarification Note (RCN).
 They then send the RCN to the author of the requirements document (i.e., the Business Analyst) in order to get clarifications.
 Once the clarifications are received, the testing team takes the test case template and writes the test cases (test cases like example 1 above).
 Once the first build is released, they execute the test cases.
 While executing the test cases, if they find any defects, they report them in the Defect Profile Document (DPD).
 Since the DPD is in the common repository, the developers are notified of the status of each defect.
 Once the defects are sorted out, the development team releases the next build for testing, and also updates the status of the defects in the DPD.
 The testers then check for previous defects, related defects and new defects, and update the DPD.
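To make the flow above concrete, here is an illustrative Python sketch of the kind of records a test case template and a DPD might hold. The field names are assumptions chosen for illustration, not a standard template.

```python
from dataclasses import dataclass

# Illustrative shapes only; field names are assumptions.

@dataclass
class TestCase:
    case_id: str
    action: str                 # action to be performed
    expected_value: str         # EV
    actual_value: str = ""      # AV, filled in during execution
    status: str = "NOT RUN"     # becomes PASS or FAIL after execution

@dataclass
class DefectRecord:
    defect_id: str
    test_case_id: str           # traceability back to the failing test case
    severity: str               # decided from the EV/AV deviation
    status: str = "NEW"         # updated by developers across builds

# One failing execution produces one DPD entry:
tc = TestCase("TC-01", "switch the bulb on", expected_value="glows")
tc.actual_value, tc.status = "does not glow", "FAIL"
dpd = [DefectRecord("D-01", tc.case_id, severity="High")]
```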
Proof: The last two steps are repeated until the product is defect-free, so the quality-assured product is the proof of the testing phase (and that is why it is a very important stage of the SDLC in modern times).
Delivery & Maintenance phase:
Tasks: Delivery, post-delivery maintenance.
Roles: Deployment engineers.
Process: Delivery: The deployment engineers go to the customer's environment, install the application there, and submit the application along with the appropriate release notes to the customer.
Maintenance: Once the application is delivered, the customer starts using it. If any problem occurs during use, it becomes a new task, and based on the severity of the issue, corresponding roles and processes are formulated. Some customers may expect continuous maintenance; in that case a team of software engineers takes care of the application regularly.
Software Testing Methods & Levels of Testing
There are three methods of testing: black box, white box and grey box. Let's see.
Black box testing: If one performs testing only on the functional part of an application (where end users can perform actions), without having knowledge of its structure, then that method of testing is known as black box testing. Test engineers are usually in this category.
White box testing: If one performs testing on the structural part of the application, then that method of testing is known as white box testing. Usually developers or white box testers are the ones to do it successfully.
Grey box testing: If one performs testing on both the functional part and the structural part of an application, then that method of testing is known as grey box testing. It is sometimes considered less effective than the two dedicated methods above.
Levels of Testing
There are 5 levels of testing in a software environment. They are as follows:
(Image: systematic & simultaneous levels of testing. The ticked options are the ones where black box testers are required; the others are usually performed by the developers.)
Unit level testing: A unit is defined as the smallest part of an application. At this level of testing, each and every program is tested in order to confirm whether the conditions, functions, loops, etc. are working fine. Usually the white box testers or developers are the performers at this level.
Module level testing: A module is defined as a group of related features that perform a major task. At this level the modules are sent to the testing department and the test engineers validate the functional part of the modules.
Integration level testing: At this level the developers develop the interfaces that integrate the tested modules. While integrating the modules, they test whether the interfaces that connect the modules are functionally working or not. Usually, the developers integrate the modules using one of the following approaches (a small sketch of stubs and drivers follows this list):
 Top-down approach: The parent modules are developed first and then the corresponding child modules. While integrating, if any mandatory module is missing, it is replaced with a temporary program called a stub, to facilitate testing.
 Bottom-up approach: The child modules are developed first and integrated back into the parent modules. While integrating this way, if any mandatory module is missing, it is replaced with a temporary program known as a driver, to facilitate testing.
 Hybrid or sandwich approach: Both the top-down and bottom-up approaches are mixed, for various reasons.
 Big bang approach: One waits until all the modules are developed and integrates them all at once, at the end.
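As a concrete illustration of the stub and driver idea, here is a minimal Python sketch. The module names and the canned value are hypothetical.

```python
# Top-down integration sketch: the parent module is ready, the real
# child module (an interest calculator, say) is not, so a stub returns
# a canned value that lets the parent's interface be tested anyway.

def interest_stub(balance):
    """Temporary stand-in for the missing child module."""
    return 10.0                      # canned value, no real logic

def monthly_statement(balance, interest_fn):
    """Parent module under test; calls the child through an interface."""
    return balance + interest_fn(balance)

def test_parent_with_stub():
    assert monthly_statement(100.0, interest_stub) == 110.0

# Bottom-up is the mirror image: the child module is real, and a small
# driver (here the test function itself) exercises it from above.
```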
System level testing: Arguably, the core of testing comes at this level. It is a major phase because, depending on the requirements and the affordability of the company, automation, load, performance and stress testing, etc. are carried out at this level, which demands additional skills from a tester.
System: Once the stable, complete application is installed into an environment, the whole can be called a system (environment + application).
At the system level, the test engineers perform many different types of testing, but the most important one is:
System Integration Test: In this type of testing one performs some actions on the modules that were integrated by the developers in the previous phase, and simultaneously checks whether the changes are reflected properly in the related, connected modules.
Example 2: Let's take an ATM machine application with the following modules: welcome screen, balance inquiry screen, withdrawal screen and deposit screen. Assuming that these 4 modules were integrated by the developers, black box testers can perform system integration testing like this:
Test case 1:
Check balance: let's say the amount is X.
Deposit an amount: let's say the amount is Y.
Check balance again: the expected value is X + Y.
If the actual value is X + Y, then it is equal to the expected value, and because EV = AV, the result is PASS.
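Test case 1 can also be expressed as a small automated check. The Account class below is a hypothetical stand-in for the integrated balance inquiry and deposit modules.

```python
# Sketch of test case 1 from example 2 (hypothetical Account class).

class Account:
    def __init__(self, balance):
        self.balance = balance       # read by the balance inquiry module

    def deposit(self, amount):
        self.balance += amount       # the deposit module updates the balance


def test_deposit_reflects_in_balance_inquiry():
    account = Account(balance=500)
    x = account.balance              # X: balance before the deposit
    y = 200                          # Y: amount deposited
    account.deposit(y)
    # EV is X + Y; the test passes only if the change made by the
    # deposit module is reflected in the balance inquiry module.
    assert account.balance == x + y
```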
User acceptance level testing: At this level the user is invited and testing is carried out in his presence. It is the final testing before the user signs off and accepts the application. Whatever the user desires, the corresponding features need to be tested functionally by the black box testers or senior testers on the project.
Types of Testing
There are two broad categories that classify all the available types of software testing: static testing and dynamic testing. Static means testing where no actions are performed on the application; features like the GUI and appearance-related testing come under this category. Dynamic is where the user needs to perform some actions on the application to test it, such as functionality checking, link checking, etc. Initially there were very few popular types of testing, which were lengthy and manual. But as applications become more and more complex, it is inevitable that not only features, functionality and appearance but also performance and stability are major areas of concentration. The following types are the most in use, and we will discuss each of them one by one.
Build Acceptance Test / Build Verification / Sanity testing (BAT): A type of testing in which one performs an overall check on the application to validate whether it is fit for further detailed testing. Usually the testing team conducts this on high-risk applications before accepting a build from the development department. The two terms smoke testing and sanity testing have been debated since time immemorial without a fixed definition. The majority feel that when developers test the overall application before releasing it to the testing team, it is known as smoke testing, and the further checks performed by black box test engineers are known as sanity testing. But this is also cited vice versa.
Regression Testing: A type of testing in which one performs testing on already-tested functionality again and again. As the name suggests, regression is revisiting the same functionality of a feature, and it happens in the following two cases.
Case 1: The test engineers find some defects and report them to the developers; the development team fixes the issues and releases the next build. Once the next build is released, the testers check, as per the requirements, whether the defects are fixed, and also whether the related features, which might have been affected while fixing the defects, are still working fine.
Example 3: Let's say you wanted a bike with 17" tyres instead of 15" and sent the bike to the service center to change them. Before sending it, you tested all the other features and were satisfied with them. Once the bike is returned with the new tyres, you will also check the overall look and feel, whether the brakes still work as expected, whether the mileage maintains its expected value, and all related features that could be affected by the change. Testing the tyres in this case is new testing, and testing all the other features falls under regression testing.
Case 2: Some new feature is added to the application in the middle of development and released for testing. The test engineers may additionally need to check all the features related to the newly added feature. In this case too, everything except the new functionality comes under regression testing.
Retesting: A type of testing in which one tests the same functionality again and again with multiple sets of data, in order to come to a conclusion about whether it is working fine or not.
Example 4: Let's say the customer wants a bank application where the login screen's password field accepts only alphanumeric data (e.g. abc123, 56df) and no special characters (*, $, #, etc.). In this case one needs to test the field with all possible combinations and different sets of data to conclude that the password field is working fine. Such repeated testing of a feature is called retesting.
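Example 4 maps naturally onto a parameterized test, where the same rule is exercised again and again with multiple sets of data. The is_valid_password function is a hypothetical implementation of the alphanumeric-only rule.

```python
import pytest

# Retesting sketch for example 4: one rule, many data sets.

def is_valid_password(value):
    return value.isalnum()      # True only for letters and digits

@pytest.mark.parametrize("value, expected", [
    ("abc123", True),           # alphanumeric: should be accepted
    ("56df",   True),
    ("abc$12", False),          # special characters: rejected
    ("*#",     False),
    ("",       False),          # the empty string is not alphanumeric
])
def test_password_field(value, expected):
    assert is_valid_password(value) == expected
```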
Alpha Testing & Beta Testing: These are both performed in the user acceptance testing phase. If the customer visits the company and the company's own test engineers perform the testing on their own premises, it is referred to as alpha testing. If the testing is carried out in the client's environment by the end users or third-party test experts, it is known as beta testing. (Remember that both these types come before the actual implementation of the software in the client's environment, hence the term user acceptance.)
Installation Testing: A type of testing in which one installs the application into the environment by following the guidelines given in the deployment document / installation guide. If the installation is successful, one concludes that the installation guide and its instructions are correct and appropriate for installing the application; otherwise, one reports the problems in the deployment document. One main point to note is that in this type of testing we are checking the user manual and not the product (i.e., the installation/setup guide, not the application's ability to install).
Compatibility Testing: A type of testing in which one installs the application into multiple environments, prepared with different combinations of environmental components, in order to check whether the application is suitable for those environments. Usually this type of testing is carried out on products rather than projects.
Monkey Testing: A type of testing in which abnormal actions are performed intentionally on the application in order to check its stability. Remember that it is different from stress testing or load testing: we are concentrating on the stability of the features and functionality under the extreme range of actions that different kinds of users might perform on the application.
Usability Testing: In this kind of testing we check the user-friendliness of the application. Depending on the complexity of the application, one needs to test whether information about all the features is easily understandable, and navigation of the application must be easy to follow.
Exploratory Testing: A type of testing in which domain experts (with knowledge of the business and its functions) test the application without knowledge of the requirements, by exploring the functionality in parallel. If you go by the simple definition of exploring, it means having a minimal idea about something and then doing something related to it in order to learn more about it.
End to End Testing: A type of testing in which we test various end-to-end scenarios of an application, which means performing various operations on the application like different users would in real-time scenarios.
Example 5: Let's take a bank application and consider the different end-to-end scenarios that exist within it. Scenario 1: login, balance enquiry, logout. Scenario 2: login, deposit, withdrawal, logout. Scenario 3: login, balance enquiry, deposit, withdrawal, logout. Etc.
Security Testing: A type of testing in which one tests whether the application is secure or not. To do so, a black box test engineer concentrates on the following types of testing (a small sketch follows this list):
 Authentication Testing: A type of testing in which one tries to enter the application with different combinations of usernames and passwords, in order to check whether the application allows only authorized users in.
 Direct URL Testing: A type of testing in which one enters the direct URLs (Uniform Resource Locators) of the secured pages and checks whether the application allows access to those areas.
 Firewall Leakage Testing: Firewall is a widely misunderstood term; it generally means a barrier between two levels. (The definition reportedly comes from an old African story in which people put fire around themselves while sleeping, to prevent red ants from coming near them.) In this type of testing, one enters the application as one level of user (e.g. member) and tries to access a higher-level user's pages (e.g. admin), in order to check whether the firewalls are working fine.
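Here is a sketch of direct URL testing using the Python requests library: it asks for secured pages without logging in and expects the application to refuse. The base URL and paths are placeholders for the application under test.

```python
import requests

# Direct URL testing sketch. BASE_URL and SECURED_PATHS are
# hypothetical placeholders for the application under test.

BASE_URL = "https://app.example.com"
SECURED_PATHS = ["/admin/users", "/admin/settings", "/member/profile"]

def test_direct_urls_require_login():
    for path in SECURED_PATHS:
        response = requests.get(BASE_URL + path, allow_redirects=False)
        # Expect a redirect to the login page or an explicit denial,
        # never a 200 OK for an unauthenticated visitor.
        assert response.status_code in (301, 302, 401, 403), (
            f"{path} is reachable without authentication"
        )
```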
Port Testing: A type of testing in which one installs the application into the client's environment and checks whether it is compatible with that environment. (According to the requirements, one needs to find out what kind of environment needs to be tested, whether it is a product or a project, etc.)
Soak Testing / Reliability Testing: A type of testing in which one tests the application continuously over a long period of time in order to check its stability.
Mutation Testing: A type of testing in which one tests the application or its related factors by making some changes to the logic or to layers of the environment. (Refer to the environment knol for details.) Anything from the functionality of a feature to the application's overall route can be tested with different combinations of environment.
Adhoc Testing: A type of testing in which the test engineers test in their own style after understanding the requirements clearly. (Note that in exploratory testing the domain experts do not have knowledge of the requirements, whereas in this case the test engineers have the expected values set.)
Ch-1 Introduction
ISTQB is the name of the certification board that certifies individuals and credits them with foundation- or advanced-level honors as software testers. The ISTQB was established in November 2002 in the UK; its legal entity is registered in Belgium.
(Img 1.1: Turkish Testing Board, member of ISTQB.org — logo.)
The purpose of this certification is to create uniform standards in the field of software testing across the world. The British Computer Society had earlier established ISEB (reg. 1967, ref. Wikipedia) in order to certify standards for systems analysis, networking and design. In the US, the process of certification and accreditation is regulated by the American Software Testing Qualifications Board (ASTQB). Candidates who successfully complete the exam are awarded the ISTQB Certified Tester certificate, which is highly valued by today's quality-oriented companies; both IT and non-IT organizations have understood the meaning of quality and have learnt that testers can be screened through certification boards, hence the popularity.
Anyway, since this is a major topic and requires more samples and questionnaires, we have divided the knol into separate chapters to keep it short and up to date at all times. If you are completely new to testing, or want to refresh the basics of software testing before preparing for the ISTQB, please visit Software Testing Theory - Part 1 for a simplified summary.
Chapter 1: Fundamentals of Testing
1.1 Why is testing necessary?
bug, defect, error, failure, mistake, quality, risk, software, testing and exhaustive testing.
1.2 What is testing?
code, debugging, requirement, test basis, test case, test objective
1.3 Testing principles
1.4 Fundamental test process
confirmation testing, exit criteria, incident, regression testing, test condition, test coverage, test data, test execution, test log, test plan, test strategy, test summary report and testware.
1.5 The psychology of testing
independence
I) General testing principles
A number of testing principles have been suggested over the past 40 years and offer general guidelines common to all testing.
Principle 1 – Testing shows the presence of defects
Testing can show that defects are present, but cannot prove that there are no defects. Testing reduces the probability of undiscovered defects remaining in the software but, even if no defects are found, this is not a proof of correctness.
Principle 2 – Exhaustive testing is impossible
Testing everything (all combinations of inputs and preconditions) is not feasible except for trivial cases. Instead of exhaustive testing, risk analysis and priorities should be used to focus testing efforts.
Principle 3 – Early testing
Testing activities should start as early as possible in the software or system development life cycle, and should be focused on defined objectives.
Principle 4 – Defect clustering
A small number of modules contain most of the defects discovered during pre-release testing, or are responsible for most operational failures.
Principle 5 – Pesticide paradox
If the same tests are repeated over and over again, eventually the same set of test cases will no longer find any new defects. To overcome this "pesticide paradox", the test cases need to be regularly reviewed and revised, and new and different tests need to be written to exercise different parts of the software or system to potentially find more defects.
Principle 6 – Testing is context dependent
Testing is done differently in different contexts. For example, safety-critical software is tested differently from an e-commerce site.
Principle 7 – Absence-of-errors fallacy
Finding and fixing defects does not help if the system built is unusable and does not fulfill the users' needs and expectations.
II) Fundamental test process
1) Test planning and control
Test planning is the activity of verifying the mission of testing, defining the objectives of testing, and specifying the test activities needed to meet those objectives and that mission. Test control involves taking the actions necessary to meet the mission and objectives of the project; in order to control testing, it should be monitored throughout the project. Test planning takes into account the feedback from monitoring and control activities.
2) Test analysis and design
Test analysis and design is the activity where general testing objectives are transformed into tangible test conditions and test cases.
Test analysis and design has the following major tasks:
 Reviewing the test basis (such as requirements, architecture, design, interfaces).
 Evaluating the testability of the test basis and test objects.
 Identifying and prioritizing test conditions based on analysis of the test items, the specification, behaviour and structure.
 Designing and prioritizing test cases.
 Identifying the necessary test data to support the test conditions and test cases.
 Designing the test environment set-up and identifying any required infrastructure and tools.
3) Test implementation and execution
 Developing, implementing and prioritizing test cases.
 Developing and prioritizing test procedures, creating test data and, optionally, preparing test harnesses and writing automated test scripts.
 Creating test suites from the test procedures for efficient test execution.
 Verifying that the test environment has been set up correctly.
 Executing test procedures either manually or by using test execution tools, according to the planned sequence.
 Logging the outcome of test execution and recording the identities and versions of the software under test, test tools and testware.
 Comparing actual results with expected results.
 Reporting discrepancies as incidents and analyzing them in order to establish their cause (e.g. a defect in the code, in the specified test data, in the test document, or a mistake in the way the test was executed).
 Repeating test activities as a result of action taken for each discrepancy. For example: re-execution of a test that previously failed in order to confirm a fix (confirmation testing), execution of a corrected test, and/or execution of tests in order to ensure that defects have not been introduced in unchanged areas of the software or that defect fixing did not uncover other defects (regression testing).
4) Evaluating exit criteria and reporting
 Checking test logs against the exit criteria specified in test planning.
 Assessing if more tests are needed or if the exit criteria specified should be changed.
 Writing a test summary report for stakeholders.
5) Test closure activities
 Checking which planned deliverables have been delivered, closing incident reports or raising change records for any that remain open, and documenting the acceptance of the system.
 Finalizing and archiving testware, the test environment and the test infrastructure for later reuse.
 Handing over testware to the maintenance organization.
 Analyzing lessons learned for future releases and projects, and improving test maturity.
III) The psychology of testing
Several levels of test independence can be distinguished, from low to high:
 Tests designed by the person(s) who wrote the software under test (low level of independence).
 Tests designed by another person(s) (e.g. from the development team).
 Tests designed by a person(s) from a different organizational group (e.g. an independent test team) or by test specialists (e.g. usability or performance test specialists).
 Tests designed by a person(s) from a different organization or company (i.e. outsourcing or certification by an external body).
Ch-2. Testing throughout the software life cycle
2.1 Software development models
COTS, iterative-incremental development model, validation, verification, V-model.
2.2 Test levels
Alpha testing, beta testing, component testing (also known as unit/module/program testing), driver, stub, field testing, functional requirement, non-functional requirement,
integration, integration testing, robustness testing, system testing, test level, test-driven development, test environment, user acceptance testing.
2.3 Test types
Black box testing, code coverage, functional testing, interoperability testing, load testing, maintainability testing, performance testing, portability testing, reliability testing, security testing, specification-based testing, stress testing, structural testing, usability testing, white box testing.
2.4 Maintenance testing
Impact analysis, maintenance testing.
i) Software development models
a) V-model (sequential development model)
Although variants of the V-model exist, a common type of V-model uses four test levels, corresponding to the four development levels. The four levels used in this syllabus are:
component (unit) testing;
integration testing;
system testing;
acceptance testing.
b) Iterative-incremental development models
Iterative-incremental development is the process of establishing requirements, designing, building and testing a system as a series of shorter development cycles. Examples are: prototyping, rapid application development (RAD), the Rational Unified Process (RUP) and agile development models.
c) Testing within a life cycle model
In any life cycle model, there are several characteristics of good testing:
 For every development activity there is a corresponding testing activity.
 Each test level has test objectives specific to that level.
 The analysis and design of tests for a given test level should begin during the corresponding development activity.
 Testers should be involved in reviewing documents as soon as drafts are available in the development life cycle.
ii) Test levels
a) Component testing
Component testing searches for defects in, and verifies the functioning of, software (e.g. modules, programs, objects, classes, etc.) that is separately testable. Component testing may include testing of functionality and specific non-functional characteristics, such as resource behaviour (e.g. memory leaks) or robustness testing, as well as structural testing (e.g. branch coverage). One approach to component testing is to prepare and automate test cases before coding. This is called a test-first approach or test-driven development (a short sketch of this appears below).
b) Integration testing
Integration testing tests interfaces between components, interactions with different parts of a system (such as the operating system, file system and hardware), and interfaces between systems. Component integration testing tests the interactions between software components and is done after component testing. System integration testing tests the interactions between different systems and may be done after system testing. Testing of specific non-functional characteristics (e.g. performance) may be included in integration testing.
c) System testing
System testing is concerned with the behaviour of a whole system/product as defined by the scope of a development project or programme. In system testing, the test environment should correspond to the final target or production environment as much as possible, in order to minimize the risk that environment-specific failures go unfound in testing. System testing may include tests based on risks and/or on requirements specifications, business processes, use cases, or other high-level descriptions of system behaviour, interactions with the operating system, and system resources. System testing should investigate both the functional and the non-functional requirements of the system.
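As a small illustration of the test-first approach mentioned under component testing above, the following sketch writes the component test before the component itself. The leap-year function is an assumed example.

```python
# Test-first sketch: the test exists (and fails) before the component.

def test_leap_year():                    # written first
    assert is_leap_year(2000) is True    # divisible by 400
    assert is_leap_year(1900) is False   # divisible by 100 only
    assert is_leap_year(2024) is True    # divisible by 4
    assert is_leap_year(2023) is False

def is_leap_year(year):                  # written second, to make it pass
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
```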
d) Acceptance testing
Acceptance testing is often the responsibility of the customers or users of a system; other stakeholders may be involved as well. The goal of acceptance testing is to establish confidence in the system, in parts of the system, or in specific non-functional characteristics of the system.
Contract and regulation acceptance testing
Contract acceptance testing is performed against a contract's acceptance criteria for producing custom-developed software. Acceptance criteria should be defined when the contract is agreed. Regulation acceptance testing is performed against any regulations that must be adhered to, such as governmental, legal or safety regulations.
Alpha and beta (or field) testing
Alpha testing is performed at the developing organization's site. Beta testing, or field testing, is performed by people at their own locations. Both are performed by potential customers, not the developers of the product.
iii) Test types
a) Testing of function (functional testing)
The functions that a system, subsystem or component is to perform may be described in work products such as a requirements specification, use cases, or a functional specification, or they may be undocumented. The functions are "what" the system does. One type of functional testing, security testing, investigates the functions (e.g. a firewall) relating to the detection of threats, such as viruses, from malicious outsiders. Another type of functional testing, interoperability testing, evaluates the capability of the software product to interact with one or more specified components or systems.
b) Testing of non-functional software characteristics (non-functional testing)
Non-functional testing includes, but is not limited to, performance testing, load testing, stress testing, usability testing, maintainability testing, reliability testing and portability testing. It is the testing of "how" the system works. Non-functional testing may be performed at all test levels.
c) Testing of software structure/architecture (structural testing)
Structural (white-box) testing may be performed at all test levels. Structural techniques are best used after specification-based techniques, in order to help measure the thoroughness of testing through assessment of coverage of a type of structure.
Structural testing approaches can also be applied at the system, system integration or acceptance testing levels (e.g. to business models or menu structures).
d) Testing related to changes (confirmation testing (retesting) and regression testing)
After a defect is detected and fixed, the software should be retested to confirm that the original defect has been successfully removed. This is called confirmation testing. Debugging (defect fixing) is a development activity, not a testing activity. Regression testing is the repeated testing of an already tested program, after modification, to discover any defects introduced or uncovered as a result of the change(s). It is performed when the software, or its environment, is changed. Regression testing may be performed at all test levels, and applies to functional, non-functional and structural testing.
iv) Maintenance testing
Once deployed, a software system is often in service for years or decades. During this time the system and its environment are often corrected, changed or extended. Modifications include planned enhancement changes (e.g. release-based), corrective and emergency changes, and changes of environment. Maintenance testing for migration (e.g. from one platform to another) should include operational tests of the new environment, as well as of the changed software. Maintenance testing for the retirement of a system may include the testing of data migration or archiving, if long data-retention periods are required. Maintenance testing may be done at any or all test levels and for any or all test types.
Ch-3
3.1 Static techniques and the test process
dynamic testing, static testing, static technique
(Img 1.1: Permanent demand trend, job market internet UK, 2009 — the chart shows the 3-month moving total, beginning in 2004, of permanent IT jobs citing ISTQB within the UK as a proportion of total demand within the Qualifications category.)
3.2 Review process
entry criteria, formal review, informal review, inspection, metric, moderator/inspection leader, peer review, reviewer, scribe, technical review, walkthrough.
3.3 Static analysis by tools
compiler, complexity, control flow, data flow, static analysis
I) Phases of a formal review
1) Planning
Selecting the personnel, allocating roles, defining entry and exit criteria for more formal reviews, etc.
2) Kick-off
Distributing documents, explaining the objectives, checking entry criteria, etc.
3) Individual preparation
Work done by each of the participants on their own before the review meeting: noting questions and comments.
4) Review meeting
Discussion and logging; making recommendations for handling the defects, or making decisions about the defects.
5) Rework
Fixing the defects found, typically done by the author.
6) Follow-up
Checking that the defects have been addressed, gathering metrics and checking the exit criteria.
II) Roles and responsibilities
Manager: Decides on the execution of reviews, allocates time in project schedules, and determines if the review objectives have been met.
Moderator: Leads the review, including planning, running the meeting, and follow-up after the meeting.
Author: The writer or person with chief responsibility for the document(s) to be reviewed.
Reviewers: Individuals with a specific technical or business background; they identify defects and describe findings.
Scribe (recorder): Documents all the issues and problems.
III) Types of review
Informal review: No formal process; pair programming, or a technical lead reviewing designs and code. Main purpose: an inexpensive way to get some benefit.
Walkthrough: Meeting led by the author; 'scenarios, dry runs, peer group'; open-ended sessions. Main purpose: learning, gaining understanding, defect finding.
Technical review: Documented, defined defect-detection process, ideally led by a trained moderator; may be performed as a peer review; pre-meeting preparation; involves peers and technical experts. Main purpose: discuss, make decisions, find defects, solve technical problems and check conformance to specifications and standards.
Inspection: Led by a trained moderator (not the author); usually peer examination; defined roles; includes metrics; formal process; pre-meeting preparation; formal follow-up process. Main purpose: find defects.
Note: walkthroughs, technical reviews and inspections can be performed within a peer group of colleagues at the same organizational level. This type of review is called a "peer review".
IV) Success factors for reviews
Each review has a clear predefined objective.
The right people for the review objectives are involved.
Defects found are welcomed, and expressed objectively.
People issues and psychological aspects are dealt with (e.g. making it a positive experience for the author).
Review techniques are applied that are suitable for the type and level of software work products and reviewers.
Checklists or roles are used if appropriate to increase the effectiveness of defect identification.
Training is given in review techniques, especially the more formal techniques, such as inspection.
Management supports a good review process (e.g. by incorporating adequate time for
review activities in project schedules).
There is an emphasis on learning and process improvement.
V) Cyclomatic Complexity
Cyclomatic complexity is the number of independent paths through a program. It is defined as L − N + 2P, where:
L = the number of edges/links in the graph
N = the number of nodes in the graph
P = the number of disconnected parts of the graph (connected components)
Alternatively, one may calculate cyclomatic complexity using the decision-point rule: decision points + 1.
Cyclomatic complexity and risk evaluation:
1 to 10: a simple program, without very much risk
11 to 20: a complex program, moderate risk
21 to 50: a more complex program, high risk
> 50: an untestable program (very high risk)
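A quick sketch of both calculations, for an assumed function containing two if statements (a control-flow graph with six edges, five nodes and one connected part):

```python
# Both ways of computing cyclomatic complexity for an assumed
# function containing two if statements.

edges = 6    # L: edges/links in the control-flow graph
nodes = 5    # N: nodes in the graph
parts = 1    # P: connected components (a single function)

print(edges - nodes + 2 * parts)   # L - N + 2P = 3

# Decision-point rule: each if statement contributes one decision.
decision_points = 2
print(decision_points + 1)         # 3, in agreement
```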
Ch-4 Test Design Techniques - Modules
4.1 The test development process
Test case specification, test design, test execution schedule, test procedure specification, test script, traceability.
4.2 Categories of test design techniques
Black-box test design technique, specification-based test design technique, white-box test design technique, structure-based test design technique, experience-based test design technique.
4.3 Specification-based or black box techniques
Boundary value analysis, decision table testing, equivalence partitioning, state transition testing, use case testing.
4.4 Structure-based or white box techniques
Code coverage, decision coverage, statement coverage, structure-based testing.
4.5 Experience-based techniques
Exploratory testing, fault attack.
4.6 Choosing test techniques
No specific terms.
Test Design Techniques
 Specification-based/Black-box techniques
 Structure-based/White-box techniques
 Experience-based techniques
I) Specification-based/Black-box techniques
Equivalence partitioning
Boundary value analysis
Decision table testing
State transition testing
Use case testing
Equivalence partitioning
o Inputs to the software or system are divided into groups that are expected to exhibit similar behaviour.
o Equivalence partitions or classes can be found for both valid data and invalid data.
o Partitions can also be identified for outputs, internal values, time-related values and interface values.
o Equivalence partitioning is applicable at all levels of testing.
Boundary value analysis
o Behaviour at the edge of each equivalence partition is more likely to be incorrect. The maximum and minimum values of a partition are its boundary values.
o A boundary value of a valid partition is a valid boundary value; the boundary of an invalid partition is an invalid boundary value.
o Boundary value analysis can be applied at all test levels.
o It is relatively easy to apply and its defect-finding capability is high.
o This technique is often considered an extension of equivalence partitioning.
Decision table testing
o In decision table testing, test cases are designed to execute combinations of inputs.
o Decision tables are a good way to capture system requirements that contain logical conditions.
o The decision table contains triggering conditions, often combinations of true and false for all input conditions.
o It may be applied in all situations where the action of the software depends on several logical decisions.
State transition testing
o In state transition testing, test cases are designed to execute valid and invalid state transitions.
o A system may exhibit a different response depending on current conditions or previous history. In this case, that aspect of the system can be shown as a state transition diagram.
o State transition testing is much used in embedded software and technical automation.
Use case testing
o In use case testing, test cases are designed to execute user scenarios.
o A use case describes interactions between actors, including users and the system.
o Each use case has preconditions, which need to be met for the use case to work successfully.
o A use case usually has a mainstream scenario and sometimes alternative branches.
o Use cases, often referred to as scenarios, are very useful for designing acceptance tests with customer/user participation.
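Returning to the first two techniques, here is a short sketch that combines equivalence partitioning and boundary value analysis for a hypothetical age field that accepts values from 18 to 65 inclusive: one value per partition, plus the four boundary values.

```python
import pytest

# Equivalence partitioning + boundary value analysis sketch for a
# hypothetical age field accepting 18..65 inclusive.

def accepts_age(age):
    return 18 <= age <= 65

@pytest.mark.parametrize("age, expected", [
    (40, True),      # representative of the valid partition 18..65
    (5,  False),     # representative of the invalid partition below
    (80, False),     # representative of the invalid partition above
    (17, False),     # invalid boundary just below the minimum
    (18, True),      # valid boundary: minimum
    (65, True),      # valid boundary: maximum
    (66, False),     # invalid boundary just above the maximum
])
def test_age_field(age, expected):
    assert accepts_age(age) == expected
```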
II) Structure-based/White-box techniques
o Statement testing and coverage
o Decision testing and coverage
o Other structure-based techniques: condition coverage, multiple condition coverage
Statement testing and coverage
Statement: An entity in a programming language, typically the smallest indivisible unit of execution.
Statement coverage: The percentage of executable statements that have been exercised by a test suite.
Statement testing: A white box test design technique in which test cases are designed to execute statements.
Decision testing and coverage
Decision: A program point at which the control flow has two or more alternative routes; a node with two or more links to separate branches.
Decision coverage: The percentage of decision outcomes that have been exercised by a test suite. 100% decision coverage implies both 100% branch coverage and 100% statement coverage.
Decision testing: A white box test design technique in which test cases are designed to execute decision outcomes.
Other structure-based techniques
Condition: A logical expression that can be evaluated as true or false.
Condition coverage: The percentage of condition outcomes that have been exercised by a test suite.
Condition testing: A white box test design technique in which test cases are designed to execute condition outcomes.
Multiple condition testing: A white box test design technique in which test cases are designed to execute combinations of single condition outcomes.
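The difference between statement and decision coverage can be seen on a tiny assumed function:

```python
# Statement vs. decision coverage on an assumed function.

def apply_discount(price, is_member):
    if is_member:               # a decision with two outcomes
        price = price * 0.9     # statement inside the True branch
    return price

def test_member():
    assert apply_discount(100, True) == 90.0

def test_non_member():
    assert apply_discount(100, False) == 100

# test_member alone executes every statement (100% statement coverage)
# but exercises only the True outcome of the decision. Adding
# test_non_member covers the False outcome too: 100% decision coverage,
# which in turn implies 100% statement coverage.
```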
III) Experience-based techniques
o Error guessing
o Exploratory testing
Error guessing
o Error guessing is a commonly used experience-based technique.
o Testers anticipate defects based on experience; the defect list can be built from experience, available defect data, and common knowledge about why software fails.
Exploratory testing
o Exploratory testing is concurrent test design, test execution, test logging and learning, based on a test charter containing test objectives, and carried out within time-boxes.
o It is an approach that is most useful where there are few or inadequate specifications and severe time pressure.
Ch-5 Test organization and independence
The effectiveness of finding defects by testing and reviews can be improved by using independent testers. The available options for testing teams are:
 No independent testers; developers test their own code.
 Independent testers within the development teams.
 An independent test team or group within the organization, reporting to project management or executive management.
 Independent testers from the business organization or user community.
 Independent test specialists for specific test targets, such as usability testers, security testers or certification testers (who certify a software product against standards and regulations).
 Independent testers outsourced or external to the organization.
The benefits of independence include:
Independent testers see other and different defects, and are unbiased.
An independent tester can verify assumptions people made during specification and implementation of the system.
Drawbacks include:
Isolation from the development team (if treated as totally independent).
Independent testers may become a bottleneck as the last checkpoint.
Developers may lose their sense of responsibility for quality.
b) Tasks of the test leader and tester
Test leader tasks may include:
 Coordinate the test strategy and plan with project managers and others.
 Write or review a test strategy for the project, and a test policy for the organization.
 Contribute the testing perspective to other project activities, such as integration planning.
 Plan the tests – considering the context and understanding the test objectives and risks – including selecting test approaches, estimating the time, effort and cost of testing, acquiring resources, defining test levels and cycles, and planning incident management.
 Initiate the specification, preparation, implementation and execution of tests, monitor the test results and check the exit criteria.
 Adapt planning based on test results and progress (sometimes documented in status reports) and take any action necessary to compensate for problems.
 Set up adequate configuration management of testware for traceability.
 Introduce suitable metrics for measuring test progress and evaluating the quality of the testing and the product.
 Decide what should be automated, to what degree, and how.
 Select tools to support testing, and organize any training in tool use for testers.
34.  Decide about the implementation of the test environment.
 Write test summary reports based on the information gathered during testing.

Tester tasks may include:
 Review and contribute to test plans.
 Analyze, review and assess user requirements, specifications and models for testability.
 Create test specifications.
 Set up the test environment (often coordinating with system administration and network management).
 Prepare and acquire test data.
 Implement tests on all test levels, execute and log the tests, evaluate the results and document the deviations from expected results.
 Use test administration or management tools and test monitoring tools as required.
 Automate tests (may be supported by a developer or a test automation expert).
 Measure performance of components and systems (if applicable).
 Review tests developed by others.

Note: People who work on test analysis, test design, specific test types or test automation may be specialists in these roles. Depending on the test level and the risks related to the product and the project, different people may take over the role of tester, keeping some degree of independence. Typically, testers at the component and integration levels would be developers; testers at the acceptance test level would be business experts and users; and testers for operational acceptance testing would be operators.

c) Defining the skills test staff need
Nowadays a testing professional must have 'application' or 'business domain' knowledge and 'technology' expertise in addition to testing skills.

2) Test planning and estimation
a) Test planning activities
 Determining the scope and risks, and identifying the objectives of testing.
 Defining the overall approach of testing (the test strategy), including the definition of the test levels and entry and exit criteria.
 Integrating and coordinating the testing activities into the software life cycle activities: acquisition, supply, development, operation and maintenance.
 Making decisions about what to test, what roles will perform the test activities, how the test activities should be done, and how the test results will be evaluated.
 Scheduling test analysis and design activities.
35.  Scheduling test implementation, execution and evaluation.
 Assigning resources for the different activities defined.
 Defining the amount, level of detail, structure and templates for the test documentation.
 Selecting metrics for monitoring and controlling test preparation and execution, defect resolution and risk issues.
 Setting the level of detail for test procedures in order to provide enough information to support reproducible test preparation and execution.

b) Exit criteria
The purpose of exit criteria is to define when to stop testing, such as at the end of a test level or when a set of tests has achieved a specific goal.
Typically, exit criteria may consist of:
 Thoroughness measures, such as coverage of code, functionality or risk.
 Estimates of defect density or reliability measures.
 Cost.
 Residual risks, such as defects not fixed or lack of test coverage in certain areas.
 Schedules, such as those based on time to market.

c) Test estimation
Two approaches for the estimation of test effort are covered in this syllabus:
 The metrics-based approach: estimating the testing effort based on metrics of former or similar projects, or based on typical values.
 The expert-based approach: estimating the tasks by the owner of those tasks or by experts.

Once the test effort is estimated, resources can be identified and a schedule can be drawn up. The testing effort may depend on a number of factors, including:
 Characteristics of the product: the quality of the specification and other information used for test models (i.e. the test basis), the size of the product, the complexity of the problem domain, the requirements for reliability and security, and the requirements for documentation.
 Characteristics of the development process: the stability of the organization, tools used, test process, skills of the people involved, and time pressure.
 The outcome of testing: the number of defects and the amount of rework required.
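As a toy illustration of the metrics-based approach, the Python sketch below scales effort figures from a comparable past project; every number in it is hypothetical.

past_test_cases = 200          # from a similar, completed project
past_effort_hours = 400
hours_per_case = past_effort_hours / past_test_cases   # 2.0 h/case

new_test_cases = 150           # planned for the current project
complexity_factor = 1.2        # expert judgement: harder domain
estimate = new_test_cases * hours_per_case * complexity_factor
print(f"Estimated test effort: {estimate:.0f} hours")  # 360 hours

The expert-based approach would instead ask the owners of each task to estimate it directly; in practice the two approaches are often combined and cross-checked.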
36. d) Test approaches (test strategies)
One way to classify test approaches or strategies is by the point in time at which the bulk of the test design work is begun:
 Preventative approaches, where tests are designed as early as possible.
 Reactive approaches, where test design comes after the software or system has been produced.

Typical approaches or strategies include:
 Analytical approaches, such as risk-based testing, where testing is directed to the areas of greatest risk.
 Model-based approaches, such as stochastic testing using statistical information about failure rates (such as reliability growth models) or usage (such as operational profiles).
 Methodical approaches, such as failure-based (including error guessing and fault attacks), experience-based, checklist-based, and quality-characteristic-based.
 Process- or standard-compliant approaches, such as those specified by industry-specific standards or the various agile methodologies.
 Dynamic and heuristic approaches, such as exploratory testing, where testing is more reactive to events than pre-planned, and where execution and evaluation are concurrent tasks.
 Consultative approaches, such as those where test coverage is driven primarily by the advice and guidance of technology and/or business domain experts outside the test team.
 Regression-averse approaches, such as those that include reuse of existing test material, extensive automation of functional regression tests, and standard test suites.

Different approaches may be combined, for example, a risk-based dynamic approach.
The selection of a test approach should consider the context, including:
 Risk of failure of the project, hazards to the product, and risks of product failure to humans, the environment and the company.
 Skills and experience of the people in the proposed techniques, tools and methods.
 The objective of the testing endeavour and the mission of the testing team.
 Regulatory aspects, such as external and internal regulations for the development process.
 The nature of the product and the business.

3) Test progress monitoring and control
a) Test progress monitoring
37. Common test progress metrics include:
 Percentage of work done in test case preparation (or percentage of planned test cases prepared).
 Percentage of work done in test environment preparation.
 Test case execution (e.g. number of test cases run/not run, and test cases passed/failed).
 Defect information (e.g. defect density, defects found and fixed, failure rate, and retest results).
 Test coverage of requirements, risks or code.
 Subjective confidence of testers in the product.
 Dates of test milestones.
 Testing costs, including the cost compared to the benefit of finding the next defect or of running the next test.
(A short sketch computing several of these metrics appears at the end of this section.)

b) Test reporting
Test reporting summarizes information about the testing endeavour, including:
 What happened during a period of testing, such as dates when exit criteria were met.
 Analyzed information and metrics to support recommendations and decisions about future actions, such as an assessment of defects remaining, the economic benefit of continued testing, outstanding risks, and the level of confidence in the tested software.

Metrics should be collected during and at the end of a test level in order to assess:
 The adequacy of the test objectives for that test level.
 The adequacy of the test approaches taken.
 The effectiveness of the testing with respect to its objectives.

c) Test control
Test control describes any guiding or corrective actions taken as a result of information and metrics gathered and reported. Actions may cover any test activity and may affect any other software life cycle activity or task.
Examples of test control actions are:
 Making decisions based on information from test monitoring.
 Re-prioritizing tests when an identified risk occurs (e.g. software delivered late).
 Changing the test schedule due to availability of a test environment.
 Setting an entry criterion requiring fixes to have been retested (confirmation tested) by a developer before accepting them into a build.
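As referenced under test progress monitoring above, here is a small sketch computing a few of the listed metrics from raw counts; all figures are hypothetical.

planned, prepared = 120, 90    # test cases planned vs prepared
run, passed = 80, 65           # test cases executed and passed
defects_found, defects_fixed = 40, 28
kloc = 25                      # size of the code under test (KLOC)

print(f"Preparation progress: {prepared / planned:.0%}")   # 75%
print(f"Execution progress:   {run / planned:.0%}")        # 67%
print(f"Pass rate:            {passed / run:.0%}")         # 81%
print(f"Defect density:       {defects_found / kloc:.1f} defects/KLOC")
print(f"Open defects:         {defects_found - defects_fixed}")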
38. 4) Configuration management
The purpose of configuration management is to establish and maintain the integrity of the products (components, data and documentation) of the software or system through the project and product life cycle.
For testing, configuration management may involve ensuring that:
 All items of testware are identified, version controlled, tracked for changes, related to each other and related to development items (test objects), so that traceability can be maintained throughout the test process.
 All identified documents and software items are referenced unambiguously in test documentation.

For the tester, configuration management helps to uniquely identify (and to reproduce) the tested item, test documents, the tests and the test harness.
During test planning, the configuration management procedures and infrastructure (tools) should be chosen, documented and implemented.

5) Risk and testing
a) Project risks
Project risks are the risks that surround the project's capability to deliver its objectives, such as:
Organizational factors:
 skill and staff shortages;
 personal and training issues;
 political issues, such as:
o problems with testers communicating their needs and test results;
o failure to follow up on information found in testing and reviews (e.g. not improving development and testing practices);
 improper attitude toward, or expectations of, testing (e.g. not appreciating the value of finding defects during testing).
Technical issues:
 problems in defining the right requirements;
 the extent to which requirements can be met given existing constraints;
 the quality of the design, code and tests.
Supplier issues:
 failure of a third party;
 contractual issues.
39. b) Product risks
Potential failure areas (adverse future events or hazards) in the software or system are known as product risks, as they are a risk to the quality of the product. Examples include:
 Failure-prone software delivered.
 The potential that the software/hardware could cause harm to an individual or company.
 Poor software characteristics (e.g. functionality, reliability, usability and performance).
 Software that does not perform its intended functions.

Risks are used to decide where to start testing and where to test more; testing is used to reduce the risk of an adverse effect occurring, or to reduce the impact of an adverse effect.
Product risks are a special type of risk to the success of a project. Testing as a risk-control activity provides feedback about the residual risk by measuring the effectiveness of critical defect removal and of contingency plans.
A risk-based approach to testing provides proactive opportunities to reduce the levels of product risk, starting in the initial stages of a project. It involves the identification of product risks and their use in guiding test planning and control, specification, preparation and execution of tests. In a risk-based approach the risks identified may be used to:
 Determine the test techniques to be employed.
 Determine the extent of testing to be carried out.
 Prioritize testing in an attempt to find the critical defects as early as possible.
 Determine whether any non-testing activities could be employed to reduce risk (e.g. providing training to inexperienced designers).

Risk-based testing draws on the collective knowledge and insight of the project stakeholders to determine the risks and the levels of testing required to address those risks.
To ensure that the chance of a product failure is minimized, risk management activities provide a disciplined approach to:
 Assess (and reassess on a regular basis) what can go wrong (risks).
 Determine what risks are important to deal with.
 Implement actions to deal with those risks.
In addition, testing may support the identification of new risks, may help to determine what risks should be reduced, and may lower uncertainty about risks.
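A common way to operationalize risk-based prioritization is to score each product area by likelihood times impact and schedule testing in descending score order; the sketch below is an illustration with invented areas and scores.

areas = {
    "payment processing": (4, 5),   # (likelihood 1-5, impact 1-5)
    "authentication":     (3, 5),
    "report generation":  (3, 2),
    "user preferences":   (2, 1),
}
# Test the riskiest areas first: sort by likelihood x impact.
by_risk = sorted(areas.items(),
                 key=lambda item: item[1][0] * item[1][1],
                 reverse=True)
for area, (likelihood, impact) in by_risk:
    print(f"{area:20s} risk score = {likelihood * impact}")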
40. 6) Incident management
Since one of the objectives of testing is to find defects, the discrepancies between actual and expected outcomes need to be logged as incidents. Incidents should be tracked from discovery and classification to correction and confirmation of the solution. In order to manage all incidents to completion, an organization should establish a process and rules for classification.
Incidents may be raised during development, review, testing or use of a software product. They may be raised for issues in code or the working system, or in any type of documentation, including requirements, development documents, test documents, and user information such as "Help" or installation guides.

Incident reports have the following objectives:
 Provide developers and other parties with feedback about the problem to enable identification, isolation and correction as necessary.
 Provide test leaders with a means of tracking the quality of the system under test and the progress of the testing.
 Provide ideas for test process improvement.

Details of the incident report may include:
 Date of issue, issuing organization, and author.
 Expected and actual results.
 Identification of the test item (configuration item) and environment.
 Software or system life cycle process in which the incident was observed.
 Description of the incident to enable reproduction and resolution, including logs, database dumps or screenshots.
 Scope or degree of impact on stakeholders' interests.
 Severity of the impact on the system.
 Urgency/priority to fix.
 Status of the incident (e.g. open, deferred, duplicate, waiting to be fixed, fixed awaiting retest, closed).
 Conclusions, recommendations and approvals.
 Global issues, such as other areas that may be affected by a change resulting from the incident.
 Change history, such as the sequence of actions taken by project team members with respect to the incident to isolate, repair, and confirm it as fixed.
 References, including the identity of the test case specification that revealed the problem.
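To show how these fields might hang together in an incident-tracking tool, here is a Python sketch; the class and field names are illustrative only, not taken from any real tool.

from dataclasses import dataclass, field

@dataclass
class IncidentReport:
    date_of_issue: str
    author: str
    test_item: str          # configuration item under test
    environment: str
    expected_result: str
    actual_result: str
    severity: str           # impact on the system
    priority: str           # urgency to fix
    status: str = "open"    # open, deferred, duplicate, fixed, closed...
    history: list = field(default_factory=list)

report = IncidentReport(
    date_of_issue="2011-03-01", author="tester1",
    test_item="login screen, build 42", environment="Windows XP / IE8",
    expected_result="user is logged in", actual_result="HTTP 500 error",
    severity="high", priority="urgent",
)
report.history.append("assigned to developer for isolation")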
41. Ch-6
Types of test tools
(Tools marked "(D)" are those more likely to be used by developers.)

Management of testing and tests:
 Requirement management tools
 Incident management tools
 Configuration management tools
Static testing:
 Review tools
 Static analysis tools (D)
 Modeling tools (D)
Test specification:
 Test design tools
 Test data preparation tools
Test execution and logging:
 Test execution tools
 Test harness/unit test framework tools (D)
 Test comparators
 Coverage measurement tools (D)
42.  Security tools
Performance and monitoring:
 Dynamic analysis tools
 Performance/load/stress testing tools
 Monitoring tools
Specific application areas:
 Special tools for web-based applications
 Special tools for specific development platforms
 Special tools for embedded systems
 Tool support using other tools

Test tools and their purposes

Requirement management tools
Store requirements, check for consistency, allow requirements to be prioritized, trace changes, measure coverage of requirements, etc.

Incident management tools
Store and manage incident reports, facilitating prioritization, assignment of actions to people and attribution of status, etc.

Configuration management tools
Store information about versions and builds of software and testware; enable traceability between testware and software work products, etc.

Review tools
Store information, store and communicate review comments, etc.

Static analysis tools (D)
Enforce coding standards, analyze structures and dependencies, and aid in understanding the code, etc.
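As a small taste of what static analysis means in practice, the toy Python sketch below inspects source code without executing it and flags statements that follow a return in the same block (unreachable "dead" code); real static analysis tools do far more than this.

import ast

source = """
def f(x):
    return x * 2
    print("never runs")   # dead code
"""

tree = ast.parse(source)
for node in ast.walk(tree):
    body = getattr(node, "body", [])
    if not isinstance(body, list):
        continue  # e.g. a lambda's body is an expression, not a list
    for stmt, following in zip(body, body[1:]):
        if isinstance(stmt, ast.Return):
            print(f"line {following.lineno}: unreachable statement")

Static analysis can find unreachable code like this, but not whether the values a program computes are correct; that distinction reappears in the quiz questions later in this document.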
43. Modeling tools (D)
Validate models of the software; find defects in a data model, state model or object model, etc.

Test design tools
Generate test inputs or executable tests, generate expected outcomes, etc.

Test data preparation tools
Prepare test data; manipulate databases, files or data transmissions to set up test data, etc.

Test execution tools
Record tests, automate test execution, use inputs and expected outcomes, compare results with expected outcomes, repeat tests, perform dynamic comparison, manipulate tests using a scripting language, etc.

Test harness/unit test framework tools (D)
Test components or part of a system by simulating the environment; provide an execution framework in middleware, etc.

Test comparators
Determine differences between files, databases or test results; post-execution comparison; may use a test oracle if automated, etc.
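A minimal post-execution comparator in the spirit just described can be sketched with Python's standard library; the file names and contents are hypothetical.

import difflib

expected = ["total: 100\n", "status: OK\n"]      # the "golden" output
actual   = ["total: 100\n", "status: FAILED\n"]  # output of this run

diff = list(difflib.unified_diff(expected, actual,
                                 fromfile="expected.txt",
                                 tofile="actual.txt"))
if diff:
    print("Comparator: outputs differ")
    print("".join(diff), end="")
else:
    print("Comparator: outputs match")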
44. Coverage measurement tools (D)
Measure the percentage of specific types of code structure exercised (e.g. statements, branches or decisions, and module or function calls), etc.

Security tools
Check for computer viruses and denial-of-service attacks, search for specific vulnerabilities of the system, etc.

Dynamic analysis tools (D)
Detect memory leaks, identify time dependencies and identify pointer arithmetic errors, etc.

Performance/load/stress testing tools
Measure load or stress; monitor and report on how a system behaves under a variety of simulated usage conditions; simulate a load on an application, a database, or a system environment; repetitive execution of tests, etc. (See the sketch at the end of this list.)

Monitoring tools
Continuously analyze, verify and report on specific system resources; store information about the version and build of the software and testware, and enable traceability.

Tool support using other tools
Some tools use other tools (e.g. QTP uses Excel sheets and SQL tools).
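Here is a bare-bones sketch of what performance/load tools automate: fire a batch of concurrent calls at a stand-in operation and report response times; the transaction function and all figures are invented.

import time
from concurrent.futures import ThreadPoolExecutor

def transaction():
    """Stand-in for one user action, e.g. an HTTP request."""
    start = time.perf_counter()
    time.sleep(0.05)              # simulated processing delay
    return time.perf_counter() - start

# 10 "virtual users" executing 100 transactions in total.
with ThreadPoolExecutor(max_workers=10) as pool:
    times = list(pool.map(lambda _: transaction(), range(100)))

print(f"transactions:  {len(times)}")
print(f"avg response:  {sum(times) / len(times) * 1000:.1f} ms")
print(f"max response:  {max(times) * 1000:.1f} ms")

A real tool would also ramp the load up and down, coordinate many machines, and correlate response times with server-side resource monitoring.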
45. Potential benefits and risks of tool support for testing
Benefits:
o Repetitive work is reduced
o Greater consistency and repeatability
o Objective assessment
o Ease of access to information about tests or testing
Risks:
o Unrealistic expectations for the tool
o Underestimating the time and effort needed to achieve significant and continued benefits from the tool
o Underestimating the effort required to maintain the test assets generated by the tool
o Over-reliance on the tool

Special considerations for some types of tools
The following tools have special considerations:
 Test execution tools
 Performance testing tools
 Static testing tools
 Test management tools

Introducing a tool into an organization
The following factors are important in selecting a tool:
o Assessment of the organization's maturity
o Identification of the areas within the organization where tool support will help to improve the testing process
o Evaluation of tools against clear requirements and objective criteria
o Proof-of-concept to see whether the product works as desired and meets the requirements and objectives defined for it
o Evaluation of the vendor (training, support and other commercial aspects) or, for open-source tools, of the network of support
o Identifying and planning internal implementation (including coaching and mentoring for those new to the use of the tool)

The objectives for a pilot project for a new tool:
o To learn more about the tool
o To see how the tool would fit with existing processes or documentation
o To decide on standard ways of using the tool that will work for all potential users
o To evaluate the pilot project against its objectives

Success factors for the deployment of a new tool within an organization:
o Rolling out the tool to the rest of the organization incrementally
o Adapting and improving processes to fit with the use of the tool
o Providing training and coaching/mentoring for new users
o Defining usage guidelines
o Implementing a way to learn lessons from tool use
o Monitoring tool use and benefits
46. Questions with Answers: Ch-1

1. When what is visible to end-users is a deviation from the specified or expected behavior, this is called:
a) an error
b) a fault
c) a failure
d) a defect
e) a mistake

2. Regression testing should be performed:
v) every week
w) after the software has changed
x) as often as possible
y) when the environment has changed
z) when the project manager says
a) v & w are true, x – z are false
b) w, x & y are true, v & z are false
c) w & y are true, v, x & z are false
d) w is true, v, x, y and z are false
e) all of the above are true

3. The IEEE 829 test plan documentation standard contains all of the following except:
a) test items
b) test deliverables
c) test tasks
d) test environment
e) test specification

4. Testing should be stopped when:
a) all the planned tests have been run
b) time has run out
c) all faults have been fixed correctly
d) both a) and c)
e) it depends on the risks for the system being tested

5. Order numbers on a stock control system can range between 10000 and 99999 inclusive. Which of the following inputs might be a result of designing tests for only valid equivalence classes and valid boundaries:
a) 1000, 5000, 99999
b) 9999, 50000, 100000
c) 10000, 50000, 99999
47. d) 10000, 99999
e) 9999, 10000, 50000, 99999, 100000

6. Consider the following statements about early test design:
i. early test design can prevent fault multiplication
ii. faults found during early test design are more expensive to fix
iii. early test design can find faults
iv. early test design can cause changes to the requirements
v. early test design takes more effort
a) i, iii & iv are true; ii & v are false
b) iii is true; i, ii, iv & v are false
c) iii & iv are true; i, ii & v are false
d) i, iii, iv & v are true; ii is false
e) i & iii are true; ii, iv & v are false

7. Non-functional system testing includes:
a) testing to see where the system does not function properly
b) testing quality attributes of the system including performance and usability
c) testing a system feature using only the software required for that action
d) testing a system feature using only the software required for that function
e) testing for functions that should not exist

8. Which of the following is NOT part of configuration management:
a) status accounting of configuration items
b) auditing conformance to ISO 9001
c) identification of test versions
d) record of changes to documentation over time
e) controlled library access

9. Which of the following is the main purpose of the integration strategy for integration testing in the small?
a) to ensure that all of the small modules are tested adequately
b) to ensure that the system interfaces to other systems and networks
c) to specify which modules to combine when, and how many at once
d) to ensure that the integration testing can be performed by a small team
e) to specify how the software should be divided into modules

10. What is the purpose of test completion criteria in a test plan:
a) to know when a specific test has finished its execution
b) to ensure that the test case specification is complete
c) to set the criteria used in generating test inputs
d) to know when test planning is complete
e) to plan when to stop testing

11. Consider the following statements:
i. an incident may be closed without being fixed
48. ii. incidents may not be raised against documentation
iii. the final stage of incident tracking is fixing
iv. the incident record does not include information on test environments
v. incidents should be raised when someone other than the author of the software performs the test
a) ii and v are true; i, iii and iv are false
b) i and v are true; ii, iii and iv are false
c) i, iv and v are true; ii and iii are false
d) i and ii are true; iii, iv and v are false
e) i is true; ii, iii, iv and v are false

12. Given the following code, which is true about the minimum number of test cases required for full statement and branch coverage:
Read P
Read Q
IF P+Q > 100 THEN
Print "Large"
ENDIF
IF P > 50 THEN
Print "P Large"
ENDIF
a) 1 test for statement coverage, 3 for branch coverage
b) 1 test for statement coverage, 2 for branch coverage
c) 1 test for statement coverage, 1 for branch coverage
d) 2 tests for statement coverage, 3 for branch coverage
e) 2 tests for statement coverage, 2 for branch coverage

13. Given the following:
Switch PC on
Start "Outlook"
IF Outlook appears THEN
Send an email
Close Outlook
a) 1 test for statement coverage, 1 for branch coverage
b) 1 test for statement coverage, 2 for branch coverage
c) 1 test for statement coverage, 3 for branch coverage
d) 2 tests for statement coverage, 2 for branch coverage
e) 2 tests for statement coverage, 3 for branch coverage

14. Given the following code, which is true:
IF A > B THEN
49. C = A – B
ELSE
C = A + B
ENDIF
Read D
IF C = D THEN
Print "Error"
ENDIF
a) 1 test for statement coverage, 3 for branch coverage
b) 2 tests for statement coverage, 2 for branch coverage
c) 2 tests for statement coverage, 3 for branch coverage
d) 3 tests for statement coverage, 3 for branch coverage
e) 3 tests for statement coverage, 2 for branch coverage

15. Consider the following:
Pick up and read the newspaper
Look at what is on television
If there is a program that you are interested in watching then switch the television on and watch the program
Otherwise
Continue reading the newspaper
If there is a crossword in the newspaper then try and complete the crossword
a) SC = 1 and DC = 1
b) SC = 1 and DC = 2
c) SC = 1 and DC = 3
d) SC = 2 and DC = 2
e) SC = 2 and DC = 3

16. The place to start if you want a (new) test tool is:
a) Attend a tool exhibition
b) Invite a vendor to give a demo
c) Analyze your needs and requirements
d) Find out what your budget would be for the tool
e) Search the internet

17. When a new testing tool is purchased, it should be used first by:
a) A small team to establish the best way to use the tool
b) Everyone who may eventually have some use for the tool
c) The independent testing team
d) The managers to see what projects it should be used in
e) The vendor contractor to write the initial scripts

18. What can static analysis NOT find?
a) The use of a variable before it has been defined
b) Unreachable ("dead") code
50. c) Whether the value stored in a variable is correct
d) The re-definition of a variable before it has been used
e) Array bound violations

19. Which of the following is NOT a black box technique:
a) Equivalence partitioning
b) State transition testing
c) LCSAJ
d) Syntax testing
e) Boundary value analysis

20. Beta testing is:
a) Performed by customers at their own site
b) Performed by customers at their software developer's site
c) Performed by an independent test team
d) Useful to test bespoke software
e) Performed as early as possible in the lifecycle

21. Given the following types of tool, which tools would typically be used by developers, and which by an independent test team:
i. static analysis
ii. performance testing
iii. test management
iv. dynamic analysis
v. test running
vi. test data preparation
a) developers would typically use i, iv and vi; test team ii, iii and v
b) developers would typically use i and iv; test team ii, iii, v and vi
c) developers would typically use i, ii, iii and iv; test team v and vi
d) developers would typically use ii, iv and vi; test team i, ii and v
e) developers would typically use i, iii, iv and v; test team ii and vi

22. The main focus of acceptance testing is:
a) finding faults in the system
b) ensuring that the system is acceptable to all users
c) testing the system with other systems
d) testing from a business perspective
e) testing by an independent test team

23. Which of the following statements about the component testing standard is false:
a) black box design techniques all have an associated measurement technique
b) white box design techniques all have an associated measurement technique
c) cyclomatic complexity is not a test measurement technique
d) black box measurement techniques all have an associated test design technique
e) white box measurement techniques all have an associated test design technique
51. 24. Which of the following statements is NOT true:
a) inspection is the most formal review process
b) inspections should be led by a trained leader
c) managers can perform inspections on management documents
d) inspection is appropriate even when there are no written documents
e) inspection compares documents with predecessor (source) documents

25. A typical commercial test execution tool would be able to perform all of the following EXCEPT:
a) generating expected outputs
b) replaying inputs according to a programmed script
c) comparison of expected outcomes with actual outcomes
d) recording test inputs
e) reading test values from a data file

26. The difference between re-testing and regression testing is:
a) re-testing is running a test again; regression testing looks for unexpected side effects
b) re-testing looks for unexpected side effects; regression testing is repeating those tests
c) re-testing is done after faults are fixed; regression testing is done earlier
d) re-testing uses different environments; regression testing uses the same environment
e) re-testing is done by developers; regression testing is done by independent testers

27. Expected results are:
a) only important in system testing
b) only used in component testing
c) never specified in advance
d) most useful when specified in advance
e) derived from the code

28. Test managers should not:
a) report on deviations from the project plan
b) sign the system off for release
c) re-allocate resources to meet original plans
d) raise incidents on faults that they have found
e) provide information for risk analysis and quality improvement

29. Unreachable code would best be found using:
a) code reviews
b) code inspections
c) a coverage tool
d) a test management tool
e) a static analysis tool

30. A tool that supports traceability, recording of incidents or scheduling of tests is called:
a) a dynamic analysis tool
52. b) a test execution tool
c) a debugging tool
d) a test management tool
e) a configuration management tool

31. What information need not be included in a test incident report:
a) how to fix the fault
b) how to reproduce the fault
c) test environment details
d) severity, priority
e) the actual and expected outcomes

32. Which expression best matches the following characteristics of review processes:
1. led by the author
2. undocumented
3. no management participation
4. led by a trained moderator or leader
5. uses entry and exit criteria
s) inspection
t) peer review
u) informal review
v) walkthrough
a) s = 4, t = 3, u = 2 and 5, v = 1
b) s = 4 and 5, t = 3, u = 2, v = 1
c) s = 1 and 5, t = 3, u = 2, v = 4
d) s = 5, t = 4, u = 3, v = 1 and 2
e) s = 4 and 5, t = 1, u = 2, v = 3

33. Which of the following is NOT part of system testing:
a) business process-based testing
b) performance, load and stress testing
c) requirements-based testing
d) usability testing
e) top-down integration testing

34. Which statement about expected outcomes is FALSE:
a) expected outcomes are defined by the software's behavior
b) expected outcomes are derived from a specification, not from the code
c) expected outcomes include outputs to a screen and changes to files and databases
d) expected outcomes should be predicted before a test is run
e) expected outcomes may include timing constraints such as response times

35. The standard that gives definitions of testing terms is:
a) ISO/IEC 12207
53. b) BS7925-1
c) BS7925-2
d) ANSI/IEEE 829
e) ANSI/IEEE 729

36. The cost of fixing a fault:
a) Is not important
b) Increases as we move the product towards live use
c) Decreases as we move the product towards live use
d) Is more expensive if found in requirements than in functional design
e) Can never be determined

37. Which of the following is NOT included in the Test Plan document of the Test Documentation Standard:
a) Test items (i.e. software versions)
b) What is not to be tested
c) Test environments
d) Quality plans
e) Schedules and deadlines

38. Could reviews or inspections be considered part of testing:
a) No, because they apply to development documentation
b) No, because they are normally applied before testing
c) No, because they do not apply to the test documentation
d) Yes, because both help detect faults and improve quality
e) Yes, because testing includes all non-constructive activities

39. Which of the following is not part of performance testing:
a) Measuring response time
b) Measuring transaction rates
c) Recovery testing
d) Simulating many users
e) Generating many transactions

40. Error guessing is best used:
a) As the first approach to deriving test cases
b) After more formal techniques have been applied
c) By inexperienced testers
d) After the system has gone live
e) Only by end users
54. Questions with Answers: Ch-2

1. Which of the following is true?
a. Testing is the same as quality assurance
b. Testing is a part of quality assurance
c. Testing is not a part of quality assurance
d. Testing is the same as debugging

2. Why is testing necessary?
a. Because testing is a good method to make sure there are no defects in the software
b. Because verification and validation are not enough to get to know the quality of the software
c. Because testing measures the quality of the software system and helps to increase the quality
d. Because testing finds more defects than reviews and inspections

3. Integration testing has the following characteristics:
I. It can be done in an incremental manner
II. It is always done after system testing
III. It includes functional tests
IV. It includes non-functional tests
a. I, II and III are correct
b. I is correct
c. I, III and IV are correct
d. I, II and IV are correct

4. A number of critical bugs are fixed in the software. All the bugs are in one module, related to reports. The test manager decides to do regression testing only on the reports module.
a. The test manager should do only automated regression testing
b. The test manager is justified in her decision because no bug has been fixed in other modules
c. The test manager should only do confirmation testing; there is no need to do regression testing
d. Regression testing should be done on other modules as well, because fixing one module may affect other modules

5. Which of the following is correct about static analysis tools?
a. Static analysis tools are used only by developers
b. Compilers may offer some support for static analysis
c. Static analysis tools help find failures rather than defects
d. Static analysis tools require execution of the code to analyze the coverage
55. 6. In a flight reservation system, the number of available seats in each plane model is an input. A plane may have any positive number of available seats, up to the given capacity of the plane. Using boundary value analysis, a list of available-seat values was generated. Which of the following lists is correct?
a. 1, 2, capacity - 1, capacity, capacity + 1
b. 0, 1, capacity, capacity + 1
c. 0, 1, 2, capacity + 1, a very large number
d. 0, 1, 10, 100, capacity, capacity + 1

7. Which of the following is correct about static analysis tools?
a. They help you find defects rather than failures
b. They are used by developers only
c. They require compilation of the code
d. They are useful only for regulated industries

8. In the Foundation Level syllabus you will find the main basic principles of testing. Which of the following sentences describes one of these basic principles?
a. Complete testing of software is attainable if you have enough resources and test tools
b. With automated testing you can make statements with more confidence about the quality of a product than with manual testing
c. For a software system, it is not possible, under normal conditions, to test all inputs and preconditions
d. A goal of testing is to show that the software is defect free

9. Which of the following statements contains a valid goal for a functional test set?
a. A goal is that no more failures will result from the remaining defects
b. A goal is to find as many failures as possible so that the cause of the failures can be identified and fixed
c. A goal is to eliminate as much as possible the causes of defects
d. A goal is to fulfill all requirements for testing that are defined in the project plan

10. In system testing...
a. ...both functional and non-functional requirements are to be tested
b. ...only functional requirements are tested; non-functional requirements are validated in a review
c. ...only non-functional requirements are tested; functional requirements are validated in a review
d. ...only requirements which are listed in the specification document are to be tested

11. Which of the following activities differentiate a walkthrough from a formal review?
a. A walkthrough does not follow a defined process
b. For a walkthrough, individual preparation by the reviewers is optional
c. A walkthrough requires a meeting
