Istqb intro with question answer for exam preparation
  • 1. Introduction
Let me start with a short intro: I am a self-employed musician and web developer, and I have spent a lot of time understanding the software testing field and the stages of evolution in it. I bring in front of you a very effective small dose of testing theory. (Thanks to Mr Suresh Reddy, my senior mentor at NRSTT Hyderabad, for all the guidance and coaching.) I have divided the Concise Testing Theory (C.T.T.) into the following sections. Validation and verification of any task can be called testing.
Sections:
- What is software testing? Who can do it? Terminology.
- Software Development Life Cycle (SDLC).
- Testing methodology and levels of testing.
- Types of testing.
- Software testing life cycle.
- Bug life cycle (BLC).
- Automation testing.
- Performance testing.
- ISTQB and software certification.
Each section is explained in detail and will be enhanced as and when needed. Good luck, and enjoy the syllabus. I will shortly create concise knols on SQL Server and QTP. I strongly believe that a good understanding of CTT can get you a job in IT (at least in India, the US, the UK and Australia) very soon. The minimum you need is to be a graduate from any field (15 years of formal education).
  • 2. Software Testing?
Testing in general is a process carried out by individuals or groups across all domains where requirements exist. In more software-specific language, the comparison of EV (expected value) and AV (actual value) is known as testing. To get a better understanding of that sentence, let's look at a small example.
Example 1: I have a bulb, and my requirement is that when I switch it on, it should glow. My next step is to identify three things:
- Action to be performed: turn the switch on.
- Expected value: the bulb should glow.
- Actual value: the status of the bulb after performing the action (on or off).
Now it is time for our test to produce a result based on a simple logic: if EV = AV, the result is PASS; any deviation makes the result FAIL. Ideally, based on the margin of difference between the EV and the AV, the severity of the defect is decided, and the defect is subjected to further rectification.
Conclusive definition of testing: Testing can be defined as the process in which defects are identified, isolated and subjected to rectification, and in which it is re-ensured that the end result is defect-free (highest quality) in order to achieve maximum customer satisfaction. (An interview definition.)
Who can do testing, and things to understand (prerequisites for understanding software testing effectively): As we have understood so far, testing is a very general task and can be performed by anybody. But we have to understand the importance of software testing first. In the earlier days of software development, the support for developing programs was very limited; commercially, only a few rich companies used advanced software solutions. After three decades of research and development, the IT sector has become capable of developing advanced software even for space exploration.
Programming languages in the beginning were very complex and were not feasible for larger calculations; hardware was costly and very limited in its capacity to perform continuous tasks, and networking bandwidth was a next-generation topic. Now, object-oriented programming languages, terabytes of memory and bandwidths of hundreds of Mbps are the foundations of the future's interactive web generation. Sites like Google, MSN and Yahoo make life much easier with chat, conferencing, data management and so on.
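The EV-versus-AV logic of Example 1 above can be sketched as a tiny function. This is purely illustrative and not from the original slides; the bulb model is a hypothetical stand-in for any application under test.

```python
def verdict(expected, actual):
    """Compare expected value (EV) with actual value (AV):
    EV == AV means PASS; any deviation means FAIL."""
    return "PASS" if expected == actual else "FAIL"

def switch_on(bulb):
    # Perform the action: a working bulb glows when switched on.
    bulb["on"] = True
    bulb["glowing"] = not bulb["broken"]
    return bulb

good_bulb = switch_on({"on": False, "glowing": False, "broken": False})
bad_bulb = switch_on({"on": False, "glowing": False, "broken": True})

print(verdict(True, good_bulb["glowing"]))  # PASS
print(verdict(True, bad_bulb["glowing"]))   # FAIL (AV deviates from EV)
```

The point is only the comparison at the end: the test itself never fixes the bulb, it just reports whether actual behaviour matched the requirement.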
  • 3. So, coming back to our goal (testing), we need to realize that software is here to stay and become a part of mankind. It is the reality of tomorrow and a necessity to be learnt for healthy survival in the coming technological era. More and more companies are spending billions to push the limits, and in all of this one thing is attaining prominence: QUALITY. I used so much background to stress quality because it is the final and direct goal of quality assurance people, or test engineers.
Quality can be defined as the justification of all of the user's requirements in a product, with the presence of value and safety.
So, as you can see, quality is a much wider requirement of the masses when it comes to the consumption of any service, and the job of a tester is to act like a user and verify the product's fitness (defect-free = quality).
A defect can be defined as a deviation from a user's requirement in a particular product or service. The same product can therefore have different levels of defects, or no defects at all, based on the tastes and requirements of different users.
As more and more business solutions move from man to machine, both development and testing are headed for an unprecedented boom. Developers are under constant pressure to learn and update themselves with the latest programming languages to keep pace with users' requirements. As a result, testing is becoming an integral part of software companies, to produce quality results and beat the competition. Previously, developers used to test applications besides coding them, but this had disadvantages: time consumption, an emotional block against finding defects in one's own creation, and post-delivery maintenance, which is a common requirement now. So, finally, testers are appointed to test applications simultaneously with development, in order to identify defects as soon as possible, reduce time loss and improve the efficiency of the developers.
Who can do it?
Well, anyone can do it. You need to be a graduate to enter software testing, with a comfortable relationship with the PC. All you have to keep in mind is that your job is to act like a third person or end user and taste (test) the food before they eat it, then report to the developer. We are not correcting the mistake here; we are only identifying the defects, reporting them, and waiting for the next release to check whether the defects are rectified and whether new defects have arisen.
Some important terminologies:
  • 4. Project: If something is developed based on a particular user's (or group of users') requirements and is used exclusively by them, it is known as a project. The finance is arranged by the client to complete the project successfully.
Product: If something is developed based on the company's specification (after a general survey of the market requirements) and can be used by multiple sets of users, it is known as a product. The company has to fund the entire development and usually expects to break even after a successful market launch.
Defect vs Defective: If the product justifies only partial requirements of a particular customer but is functionally usable, we say the product has a defect. If the product is functionally unusable, then even if some requirements are satisfied, it will still be tagged as defective.
Quality Assurance: The process of monitoring and guiding each and every role in the organization in order to make them perform their tasks according to the company's process guidelines.
Quality Control or Validation: Checking whether the end result is the right product; in other words, whether the developed service meets all the requirements or not.
Quality Assurance Verification: The method of checking whether the developed product or project has followed the right process or guidelines through all the phases of development.
NCR: Whenever a role does not follow the process in performing an assigned task, the penalty given is known as an NCR (Non-Conformance Raised).
Inspection: A process of checking conducted by a group of members on a role or a department suddenly, without any prior intimation.
Audit: A process of checking conducted on roles or a department with prior notice, well in advance.
SCM (Software Configuration Management): A process carried out by a team to achieve version control and change control. In other terms, the SCM team is responsible for updating all the common documents used across various domains to maintain uniformity, and also for naming the project and updating its version numbers by gauging the amount of change in the application or service after development.
Common Repository: A server accessible by authorized users, used to store and retrieve information safely and securely.
Baselining vs Publishing: Finalizing the documents, versus making them available to all the relevant resources.
Release: The process of sending the application from the development department to the testing department, or from the company to the market.
SRN (Software Release Note): A note prepared by the development department and sent to the testing department during the release. It contains information such as the path of the build, installation information, test data, a list of known issues, the version number, the date and credentials.
SDN (Software Delivery Note): A note prepared by a team under the guidance of a project manager and submitted to the customer during delivery. It contains a carefully crafted user manual and a list of known issues and workarounds.
  • 5. Slippage: The extra time taken to accomplish a task is known as slippage.
Metrics vs Matrix: A clear measurement of any task is defined as a metric, whereas a tabular format with linking information, used to trace any information back through references, is called a matrix.
Template vs Document: A template is a pre-defined questionnaire or professional fill-in-the-blanks set-up, used to prepare and finalize any document. The advantage of a template is that it maintains uniformity and easier comprehension throughout all the documentation in a project, group, department, company, or even larger groups.
Change Request vs Impact Analysis: A change request is the customer's proposal to bring some changes into the project, made by filling in a CRT (Change Request Template). Impact analysis is a study carried out by the business analysts to gauge how much impact will fall on the already developed part of the application and how feasible it is to go ahead with the change or the demands of the customer.
Software Development Life Cycle (SDLC): This is a model used to achieve efficient results in software companies, and it consists of 6 main phases. We will discuss each phase in a descriptive fashion, dissected into four parts: tasks, roles, process, and the proof of each phase's completion. Remember, the SDLC is similar to the waterfall model, where the output of one phase acts as the input for the next phase of the cycle.
  • 6. (SDLC diagram: see the references below.) The six phases are: Initial phase, Analysis phase, Design phase, Coding phase, Testing phase, and Delivery & Maintenance.
Initial Phase / Requirements Phase:
Task: Interacting with the customer and gathering the requirements.
Roles: Business Analyst (BA) and Engagement Manager (EM).
Process: First the BA takes an appointment with the customer, collects the requirements template, meets the customer, gathers the requirements, and comes back to the company with the requirements document. Then the EM goes through the requirements document, tries to find additional requirements, gets a prototype (a dummy, similar to the end product) developed in order to pin down exact details in the case of unclear requirements or confused customers, and also deals with any excess cost of the project.
Proof: The requirements document is the proof of completion of the first phase of the SDLC.
Alternate names for the requirements document (various companies and environments use different terminologies, but the logic is the same):
FRS: Functional Requirements Specification.
CRS: Customer/Client Requirements Specification.
URS: User Requirements Specification.
BRS: Business Requirements Specification.
BDD: Business Design Document.
BD: Business Document.
Analysis Phase:
Tasks: Feasibility study,
  • 7. (Analysis diagram: see the references below.) Tentative planning, technology selection and requirements analysis.
Roles: System Analyst (SA), Project Manager (PM), Technical Manager (TM).
Process: A detailed study of the requirements, judging their possibilities and scope, is known as a feasibility study; it is usually done by manager-level teams. In the next step we move to temporary scheduling of staff to initiate the project and select a suitable technology to develop it effectively (the customer's choice is given first preference, if it is feasible). Finally, the hardware, software and human resources required are listed in a document to baseline the project.
Proof: The proof document of the analysis phase is the SRS (System Requirements Specification).
Design Phase:
Tasks: High-Level Design (HLD), Low-Level Design (LLD).
Roles: Chief Architect (handles the HLD), Technical Lead (involved in the LLD).
Process: The Chief Architect divides the whole project into modules by drawing graphical layouts using the Unified Modeling Language (UML). The Technical Lead further divides those modules into sub-modules, also using UML. Both are responsible for visioning the GUI (the
  • 8. Graphical User Interface, the screen where the user interacts with the application) and developing the pseudo code (a dummy code; usually a set of English instructions to help the developers in coding the application).
Coding Phase:
Task: Developing the programs.
Roles: Developers/programmers.
Process: The developers take the support of the technical design document and follow the coding standards while the actual source code is developed. Some industry-standard coding practices include indentation, color coding, commenting, etc.
Proof: The proof document of the coding phase is the Source Code Document (SCD).
Testing Phase:
Task: Testing.
Roles: Test engineers, Quality Assurance team.
Process: Since this is the core of our article, let us walk through the testing process in an IT environment step by step:
- First, the requirements document is received by the testing department.
- The test engineers review the requirements in order to understand them.
- While reviewing, if any doubts arise, the testers list all the unclear requirements in a document named the Requirements Clarification Note (RCN).
- They then send the RCN to the author of the requirements document (i.e., the Business Analyst) in order to get the clarifications.
- Once the clarifications are received, the testing team takes the test case template and writes the test cases (test cases like Example 1 above).
- Once the first build is released, they execute the test cases.
- While executing the test cases, if they find any defects, they report them in the Defect Profile Document (DPD).
- Since the DPD is in the common repository, the developers are notified of the status of the defects.
- Once the defects are sorted out, the development team releases the next build for testing and updates the status of the defects in the DPD.
- The testers then check for the previous defects, related defects and new defects, and update the DPD.
  • 9. Proof: The last two steps are carried out until the product is defect-free, so the quality-assured product is the proof of the testing phase (and that is why it is a very important stage of the SDLC in modern times).
Delivery & Maintenance Phase:
Tasks: Delivery, post-delivery maintenance.
Roles: Deployment engineers.
Process: Delivery: The deployment engineers go to the customer's site, install the application in the customer's environment, and submit the application along with the appropriate release notes to the customer.
Maintenance: Once the application is delivered, the customer starts using it. If any problem occurs during use, it becomes a new task; based on the severity of the issue, the corresponding roles and process are formulated. Some customers may expect continuous maintenance, in which case a team of software engineers takes care of the application regularly.
Software Testing Methods & Levels of Testing:
There are three methods of testing: black box, white box and grey box.
Black box testing: Performing testing only on the functional part of an application (where end users can perform actions), without having structural knowledge. Usually test engineers are in this category.
White box testing: Performing testing on the structural part of the application. Usually developers or dedicated white box testers are the ones who do it successfully.
Grey box testing: Performing testing on both the functional part and the structural part of an application. It is an older way of testing, not as effective as the previous two methods, and has been losing popularity recently.
Levels of Testing:
There are 5 levels of testing in a software environment. They are as follows:
  • 10. (Image: systematic and simultaneous levels of testing. The ticked options are the ones where black box testers are required; the others are usually performed by the developers.)
Unit level testing: A unit is defined as the smallest part of an application. At this level, each and every program is tested in order to confirm that the conditions, functions, loops, etc. are working fine. Usually the white box testers or developers perform this level.
Module level testing: A module is defined as a group of related features that perform a major task. At this level, the modules are sent to the testing department and the test engineers validate the functional part of the modules.
Integration level testing: At this level, the developers develop the interfaces needed to integrate the tested modules. While integrating the modules, they test whether the interfaces that connect the modules are functionally working. Usually, the developers opt to integrate the modules using one of the following approaches:
- Top-down approach: The parent modules are developed first, then the corresponding child modules. While integrating, if any mandatory module is missing, it is replaced with a temporary program called a stub, to facilitate testing.
- Bottom-up approach: The child modules are developed first and integrated back into the parent modules. While integrating the modules in this way, if any mandatory module is missing, it is replaced with a temporary program known as a driver, to facilitate testing.
- Hybrid or sandwich approach: The top-down and bottom-up approaches are mixed, for various reasons.
- Big bang approach: One waits until all the modules are developed and integrates them all at once at the end.
System level testing: Arguably, the core of testing happens at this level.
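The stub idea from the top-down approach above can be sketched in a few lines. Everything here is a hypothetical illustration: the parent module is a statement generator, and the not-yet-developed child module (a balance fetcher) is replaced by a stub returning a canned answer so integration testing can proceed.

```python
def balance_stub(account_id):
    # Stub: a temporary stand-in for the missing child module.
    # It returns a fixed, predictable answer instead of real logic.
    return {"account": account_id, "balance": 100.0}

def monthly_statement(account_id, fetch_balance=balance_stub):
    # Parent module under test. Once the real child module exists,
    # it is injected here in place of the stub.
    data = fetch_balance(account_id)
    return f"Statement for {data['account']}: balance {data['balance']:.2f}"

print(monthly_statement("AC-42"))
```

A driver in the bottom-up approach is the mirror image: a throwaway caller that exercises a finished child module whose real parent does not exist yet.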
System-level testing is a major phase because, depending on the requirements and the affordability of the company, automation testing, load, performance and stress testing, etc. are carried out here, which demands additional skills from a tester.
System: Once the stable, complete application is installed into an environment, the whole can be called a system (environment + application).
At the system level, the test engineers perform many different types of testing, but the most important one is:
  • 11. System Integration Test: In this type of testing, one performs actions on the modules that were integrated by the developers in the previous phase, and simultaneously checks whether the effects are properly reflected in the related, connected modules.
Example 2: Let us take an ATM machine application with the following modules: welcome screen, balance enquiry screen, withdrawal screen and deposit screen. Assuming these 4 modules were integrated by the developers, black box testers can perform system integration testing like this:
Test case 1:
- Check balance: let us say the amount is X.
- Deposit an amount: let us say Y.
- Check balance: expected value.
If the actual value is X + Y, it is equal to the expected value. And because EV = AV, the result is PASS.
User Acceptance Level Testing: At this level the user is invited and testing is carried on in their presence. It is the final testing before the user signs off and accepts the application. Whatever the user desires, the corresponding features need to be tested functionally by the black box testers or the senior testers in the project.
Types of Testing: Broadly, there are two categories into which all the available types of software testing can be classified: static testing and dynamic testing. Static means testing where no actions are performed on the application; GUI and appearance-related testing come under this category. Dynamic is where the user needs to perform some actions on the application to test it, such as functionality checking, link checking, etc. Initially there were very few popular types of testing, which were lengthy and manual. But as applications become more and more complex, it is inevitable that not only features, functionality and appearance, but also performance and stability, are major areas to concentrate on.
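Looking back at Example 2, the system-integration check (balance X, deposit Y, expect X + Y on the next enquiry) can be sketched as follows. The in-memory ATM class is purely illustrative; the original presentation only describes the modules in prose.

```python
class ATM:
    """Toy model of the integrated ATM modules from Example 2."""

    def __init__(self, opening_balance):
        self.balance = opening_balance  # welcome/login state omitted

    def check_balance(self):            # balance enquiry module
        return self.balance

    def deposit(self, amount):          # deposit module
        self.balance += amount

    def withdraw(self, amount):         # withdrawal module
        self.balance -= amount

# Test case 1: a deposit must be reflected in the balance enquiry module.
atm = ATM(opening_balance=500)
x = atm.check_balance()                 # X
atm.deposit(250)                        # Y
expected, actual = x + 250, atm.check_balance()
print("PASS" if expected == actual else "FAIL")  # PASS
```

Note that the test exercises two modules together and checks the cross-module reflection, which is exactly what distinguishes system integration testing from testing each module alone.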
The following types are the ones most in use, and we will discuss each of them one by one.
Build Acceptance Test / Build Verification Test / Sanity Testing (BAT): A type of testing in which one performs an overall test of the application to validate whether it is fit for further detailed testing. Usually the testing team conducts this on high-risk applications before accepting the application from the development department. The other two terms, smoke testing and sanity testing, have been debated since time immemorial without settling on fixed definitions. The majority feel that the developers testing the overall application before releasing it to the testing team is known as smoke testing, and the further checks performed by the black box test engineers are known as sanity testing; but the definitions are also cited the other way around.
  • 12. Regression Testing: A type of testing in which one tests the already tested functionality again and again. As the name suggests, regression is revisiting the same functionality of a feature, and it happens in the following two cases.
Case 1: The test engineers find some defects and report them to the developers; the development team fixes the issues and releases the next build. Once the next build is released, the testers check, as per the requirements, whether the defect is fixed, and also whether the related features that might have been affected while fixing the defect are still working fine.
Example 3: Let us say you wanted a bike with 17" tyres instead of 15" and sent the bike to the service centre to have them changed. Before sending it, you tested all the other features and were satisfied with them. Once the bike is returned with the new tyres, you will check the overall look and feel, whether the brakes still work as expected, whether the mileage maintains its expected value, and all the related features that could be affected by this change. Testing the tyres in this case is new testing, and checking all the other features falls under regression testing.
Case 2: A new feature is added to the application in the middle of development and released for testing. The test engineers may additionally need to check all the features related to the newly added feature. In this case too, all the features except the new functionality come under regression testing.
Retesting: A type of testing in which one tests the same functionality again and again with multiple sets of data, in order to come to a conclusion as to whether it is working fine or not.
Example 4: Let us say the customer wants a bank application, and the login screen must have a password field that accepts only alphanumeric data (e.g. abc123, 56df) and no special characters (*, $, #, etc.).
In this case one needs to test the field with all possible combinations and different sets of data to conclude that the password field is working fine. Such repeated testing of a feature is called retesting.
Alpha Testing & Beta Testing: Both are performed in the user acceptance testing phase. If the customer visits the company and the company's own test engineers perform the testing on their own premises, it is referred to as alpha testing. If the testing is carried out in the client's environment by the end users or third-party test experts, it is known as beta testing. (Remember that both these types happen before the actual implementation of the software in the client's environment, hence the term user acceptance.)
Installation Testing: A type of testing in which one installs the application into the environment by following the guidelines given in the deployment document / installation guide. If the installation is successful, one concludes that the installation guide and its instructions are correct and appropriate for installing the application; otherwise, one reports the problems in the deployment document. One main point to note: in this type of testing we are checking the user manual and not the product (i.e., the installation/setup guide, not the application's capability to install).
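The retesting of Example 4 can be sketched by running one validation rule against multiple data sets. The rule itself is an assumption drawn from the prose (non-empty, alphanumeric only); the specific test values beyond abc123 and 56df are hypothetical.

```python
import string

ALLOWED = set(string.ascii_letters + string.digits)

def password_field_accepts(value):
    # Accept only non-empty alphanumeric input; reject special characters.
    return bool(value) and set(value) <= ALLOWED

# Retesting: the same functionality, exercised with multiple sets of data.
cases = {
    "abc123": True,   # letters + digits -> should be accepted
    "56df":   True,   # digits + letters -> should be accepted
    "pass*1": False,  # '*' is special   -> should be rejected
    "a$b#":   False,  # '$' and '#'      -> should be rejected
    "":       False,  # empty input      -> should be rejected
}
for value, expected in cases.items():
    actual = password_field_accepts(value)
    print(value or "(empty)", "PASS" if actual == expected else "FAIL")
```

Each row is the same EV-versus-AV comparison from Example 1; only the test data changes, which is what makes this retesting rather than regression testing.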
  • 13. Compatibility Testing: A type of testing in which one installs the application into multiple environments, prepared with different combinations of environmental components, in order to check whether the application is suitable for those environments. Usually this type of testing is carried out on products rather than projects.
Monkey Testing: A type of testing in which abnormal actions are performed intentionally on the application in order to check its stability. Remember, it is different from stress testing or load testing: here we concentrate on the stability of the features and functionality under the extreme range of actions that different kinds of users might perform on the application.
Usability Testing: In this kind of testing we check the user-friendliness of the application. Depending on the complexity of the application, one needs to test whether information about all the features is easily understandable, and navigation of the application must be easy to follow.
Exploratory Testing: A type of testing in which domain experts (with knowledge of the business and its functions) test the application without knowledge of the requirements, exploring the functionality in parallel. Going by the simple definition of exploring, it means having a minimal idea about something, then doing something related to it in order to learn more about it.
End-to-End Testing: A type of testing in which we test various end-to-end scenarios of an application; that is, performing various operations on the application the way different users would in real-time scenarios.
Example 5: Let us take a bank application and consider the different end-to-end scenarios that exist within it.
Scenario 1: Login, balance enquiry, logout.
Scenario 2: Login, deposit, withdrawal, logout.
Scenario 3: Login, balance enquiry, deposit, withdrawal, logout. Etc.
Security Testing: A type of testing in which one tests whether the application is secure or not. To do this, a black box test engineer concentrates on the following types of testing:
- Authentication Testing: One tries to enter the application with different combinations of usernames and passwords, in order to check whether the application allows only authorized users.
- Direct URL Testing: One enters the direct URLs (Uniform Resource Locators) of the secured pages and checks whether the application allows access to those areas.
- Firewall Leakage Testing: "Firewall" is a widely misunderstood word; in general it means a barrier between two levels. The name comes from an old African story in which people put a ring of fire around themselves while sleeping, to prevent red ants from coming near them. So in this type of testing, one
  • 14. enters the application as one level of user (e.g. member) and tries to access the pages of a higher level of user (e.g. admin), in order to check whether the firewalls are working fine.
Port Testing: A type of testing in which one installs the application into the client's environment and checks whether it is compatible with that environment. (According to the requirements, one needs to find out what kind of environment needs to be tested, whether it is a product or a project, etc.)
Soak Testing / Reliability Testing: A type of testing in which one tests the application continuously for a long period of time, in order to check its stability.
Mutation Testing: A type of testing in which one tests the application or its related factors by making some changes to the logic or the layers of the environment. (Refer to the environment knol for details.) Anything from the functionality of a feature to the application's overall route can be tested with different combinations of environments.
Adhoc Testing: A type of testing in which the test engineers test in their own style after understanding the requirements clearly. (Note that in exploratory testing the domain experts do not have knowledge of the requirements, whereas here the test engineers have the expected values set.)
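Authentication testing, as described above, can be sketched by driving a login check with a table of credential combinations. The login function and user table here are hypothetical; a real test would drive the application's actual login screen.

```python
# Hypothetical credential store of the application under test.
REGISTERED = {"alice": "alp3num1", "bob": "s3cret99"}

def login(username, password):
    # The application should allow only exact, registered credentials.
    return REGISTERED.get(username) == password

# Authentication testing: valid and invalid username/password combinations.
attempts = [
    ("alice",   "alp3num1", True),   # authorized user  -> allowed
    ("alice",   "wrongpwd", False),  # bad password     -> denied
    ("mallory", "alp3num1", False),  # unknown username -> denied
    ("bob",     "",         False),  # empty password   -> denied
]
for user, pwd, should_allow in attempts:
    outcome = login(user, pwd)
    print(user, "PASS" if outcome == should_allow else "FAIL")
```

The same table-driven shape extends naturally to the direct-URL and firewall-leakage checks: list the page or role combinations, state the expected allow/deny outcome, and compare.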
  • 15. Ch-1 Introduction
ISTQB is the name of the certification board that certifies individuals and credits them with foundation or advanced level honors as software testers. The ISTQB was established in the UK in November 2002; its legal entity is, however, in Belgium.
(Img 1.1: logo of the Turkish Testing Board, an ISTQB member board.)
The purpose of this certification is to create uniform standards in the field of software testing across the world. ISTQB is part of the main issuing committee of ISEB, which was established by the British Computer Society (registered 1967; ref. Wikipedia) in order to certify standards for systems analysis, networking and design. In the United States, the process of certification and accreditation is regulated by the American Software Testing Qualifications Board (ASTQB). Candidates who successfully complete the exam are awarded the ISTQB Certified Tester certificate, which is valued highly by today's quality-oriented companies. Both IT and non-IT organizations have understood the meaning of quality and have learnt that testers can be screened through certification boards, hence the popularity. Anyway, since this is a major topic and requires more samples and questionnaires, we have divided the knol into separate chapters to keep it short, up and running at all times. If you are completely new to testing, or want to refresh the basics of software testing before preparing for the ISTQB, please visit Software Testing Theory - Part 1 for a simplified summary.
Chapter 1: Fundamentals of Testing
1.1 Why is testing necessary?
Keywords: bug, defect, error, failure, mistake, quality, risk, software, testing and exhaustive testing.
  • 16. 1.2 What is testing? code, debugging, requirement, test basis, test case, test objective
1.3 Testing principles
1.4 Fundamental test process: confirmation testing, exit criteria, incident, regression testing, test condition, test coverage, test data, test execution, test log, test plan, test strategy, test summary report and testware.
1.5 The psychology of testing: independence.
I) General testing principles
Principles: A number of testing principles have been suggested over the past 40 years and offer general guidelines common to all testing.
Principle 1 – Testing shows the presence of defects
Testing can show that defects are present, but cannot prove that there are no defects. Testing reduces the probability of undiscovered defects remaining in the software but, even if no defects are found, this is not a proof of correctness.
Principle 2 – Exhaustive testing is impossible
Testing everything (all combinations of inputs and preconditions) is not feasible except for trivial cases. Instead of exhaustive testing, risk analysis and priorities should be used to focus testing efforts.
Principle 3 – Early testing
Testing activities should start as early as possible in the software or system development life cycle, and should be focused on defined objectives.
Principle 4 – Defect clustering
  • 17. A small number of modules contain most of the defects discovered during pre-release testing, or are responsible for most operational failures.
Principle 5 – Pesticide paradox
If the same tests are repeated over and over again, eventually the same set of test cases will no longer find any new defects. To overcome this "pesticide paradox", the test cases need to be regularly reviewed and revised, and new and different tests need to be written to exercise different parts of the software or system to potentially find more defects.
Principle 6 – Testing is context dependent
Testing is done differently in different contexts. For example, safety-critical software is tested differently from an e-commerce site.
Principle 7 – Absence-of-errors fallacy
Finding and fixing defects does not help if the system built is unusable and does not fulfill the users' needs and expectations.
II) Fundamental test process
1) Test planning and control
Test planning is the activity of verifying the mission of testing, defining the objectives of testing and specifying the test activities needed to meet the objectives and mission. Test control involves taking the actions necessary to meet the mission and objectives of the project. In order to control testing, it should be monitored throughout the project. Test planning takes into account the feedback from monitoring and control activities.
2) Test analysis and design
Test analysis and design is the activity where general testing objectives are transformed into tangible test conditions and test cases.
  • 18. Test analysis and design has the following major tasks:  Reviewing the test basis (such as requirements, architecture, design, interfaces).  Evaluating testability of the test basis and test objects.  Identifying and prioritizing test conditions based on analysis of test items, the specification, behaviour and structure.  Designing and prioritizing test cases.  Identifying necessary test data to support the test conditions and test cases.  Designing the test environment set-up and identifying any required infrastructure and tools.3) Test implementation and execution  Developing, implementing and prioritizing test cases.  Developing and prioritizing test procedures, creating test data and, optionally, preparing test harnesses and writing automated test scripts.  Creating test suites from the test procedures for efficient test execution.  Verifying that the test environment has been set up correctly.  Executing test procedures either manually or by using test execution tools, according to the planned sequence.  Logging the outcome of test execution and recording the identities and versions of the software under test, test tools and testware.  Comparing actual results with expected results.  Reporting discrepancies as incidents and analyzing them in order to establish their cause (e.g. a defect in the code, in specified test data, in the test document, or a mistake in the way the test was executed).  Repeating test activities as a result of action taken for each discrepancy. For example, reexecution of a test that previously failed in order to confirm a fix (confirmation testing), execution of a corrected test and/or execution of tests in order to ensure that defects have not been introduced in unchanged areas of the software or that defect fixing did not uncover other defects (regression testing).4) Evaluating exit criteria and reporting  Checking test logs against the exit criteria specified in test planning. 
 Assessing if more tests are needed or if the exit criteria specified should be changed.  Writing a test summary report for stakeholders.5) Test closure activities  Checking which planned deliverables have been delivered, the closure of incident reports or raising of change records for any that remain open, and the documentation of the acceptance of the system.
  • 19.  Finalizing and archiving testware, the test environment and the test infrastructure for later reuse.  Handover of testware to the maintenance organization.  Analyzing lessons learned for future releases and projects, and the improvement of test maturity.III) The psychology of testing  Tests designed by the person(s) who wrote the software under test (low level of independence).  Tests designed by another person(s) (e.g. from the development team).  Tests designed by a person(s) from a different organizational group (e.g. an independent test team) or test specialists (e.g. usability or performance test specialists).  Tests designed by a person(s) from a different organization or company (i.e. outsourcing or certification by an external body).
  • 20. Ch-2. Testing throughout the software life cycle
2.1 Software development models: COTS, iterative-incremental development model, validation, verification, V-model.
2.2 Test levels: Alpha testing, beta testing, component testing (also known as unit/module/program testing), driver, stub, field testing, functional requirement, non-functional requirement,
  • 21. integration, integration testing, robustness testing, system testing, test level, test-driven development, test environment, user acceptance testing.
2.3 Test types: Black-box testing, code coverage, functional testing, interoperability testing, load testing, maintainability testing, performance testing, portability testing, reliability testing, security testing, specification-based testing, stress testing, structural testing, usability testing, white-box testing.
2.4 Maintenance testing: Impact analysis, maintenance testing.
i) Software development models
a) V-model (sequential development model)
Although variants of the V-model exist, a common type of V-model uses four test levels, corresponding to the four development levels. The four levels used in this syllabus are: component (unit) testing; integration testing; system testing; acceptance testing.
b) Iterative-incremental development models
Iterative-incremental development is the process of establishing requirements, designing, building and testing a system, done as a series of shorter development cycles. Examples are: prototyping, rapid application development (RAD), Rational Unified Process (RUP) and agile development models.
c) Testing within a life cycle model
In any life cycle model, there are several characteristics of good testing: For every development activity there is a corresponding testing activity. Each test level has test objectives specific to that level. The analysis and design of tests for a given test level should begin during the corresponding development activity. Testers should be involved in reviewing documents as soon as drafts are available in the development life cycle.
  • 22. ii) Test levels
a) Component testing
Component testing searches for defects in, and verifies the functioning of, software (e.g. modules, programs, objects, classes, etc.) that is separately testable. Component testing may include testing of functionality and specific non-functional characteristics, such as resource behaviour (e.g. memory leaks) or robustness testing, as well as structural testing (e.g. branch coverage). One approach to component testing is to prepare and automate test cases before coding. This is called a test-first approach or test-driven development.
b) Integration testing
Integration testing tests interfaces between components, interactions with different parts of a system (such as the operating system, file system and hardware), and interfaces between systems. Component integration testing tests the interactions between software components and is done after component testing; system integration testing tests the interactions between different systems and may be done after system testing. Testing of specific non-functional characteristics (e.g. performance) may be included in integration testing.
c) System testing
System testing is concerned with the behaviour of a whole system/product as defined by the scope of a development project or programme. In system testing, the test environment should correspond to the final target or production environment as much as possible, in order to minimize the risk of environment-specific failures not being found in testing. System testing may include tests based on risks and/or on requirements specifications, business processes, use cases, or other high-level descriptions of system behaviour, interactions with the operating system, and system resources. System testing should investigate both functional and non-functional requirements of the system.
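The test-first approach mentioned under component testing can be illustrated with a small sketch: the test cases are written before the implementation exists, then just enough code is written to make them pass. The leap-year example and all names here are hypothetical, chosen only for illustration:

```python
import unittest

# Step 1: tests written first, before is_leap exists (test-first).
class LeapYearTest(unittest.TestCase):
    def test_divisible_by_4(self):
        self.assertTrue(is_leap(2024))

    def test_century_is_not_leap(self):
        self.assertFalse(is_leap(1900))

    def test_divisible_by_400(self):
        self.assertTrue(is_leap(2000))

# Step 2: implementation added afterwards, just enough to pass the tests.
def is_leap(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Run the component tests programmatically.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(LeapYearTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all tests pass:", result.wasSuccessful())
```

Note that the test for 1900 encodes a non-functional-looking but easily forgotten requirement (century years); writing it first forces the implementation to handle it.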
  • 23. d) Acceptance testing
Acceptance testing is often the responsibility of the customers or users of a system; other stakeholders may be involved as well. The goal of acceptance testing is to establish confidence in the system, in parts of the system, or in specific non-functional characteristics of the system.
Contract and regulation acceptance testing: Contract acceptance testing is performed against a contract's acceptance criteria for producing custom-developed software. Acceptance criteria should be defined when the contract is agreed. Regulation acceptance testing is performed against any regulations that must be adhered to, such as governmental, legal or safety regulations.
Alpha and beta (or field) testing: Alpha testing is performed at the developing organization's site. Beta testing, or field testing, is performed by people at their own locations. Both are performed by potential customers, not the developers of the product.
iii) Test types
a) Testing of function (functional testing)
The functions that a system, subsystem or component is to perform may be described in work products such as a requirements specification, use cases, or a functional specification, or they may be undocumented. The functions are "what" the system does. One type of functional testing, security testing, investigates the functions (e.g. a firewall) relating to the detection of threats, such as viruses, from malicious outsiders. Another type of functional testing, interoperability testing, evaluates the capability of the software product to interact with one or more specified components or systems.
b) Testing of non-functional software characteristics (non-functional testing)
Non-functional testing includes, but is not limited to, performance testing, load testing, stress testing, usability testing, maintainability testing, reliability testing and portability testing.
It is the testing of "how" the system works. Non-functional testing may be performed at all test levels.
c) Testing of software structure/architecture (structural testing)
Structural (white-box) testing may be performed at all test levels. Structural techniques are best used after specification-based techniques, in order to help measure the thoroughness of testing through assessment of coverage of a type of structure.
  • 24. Structural testing approaches can also be applied at the system, system integration or acceptance testing levels (e.g. to business models or menu structures).
d) Testing related to changes (confirmation testing (retesting) and regression testing)
After a defect is detected and fixed, the software should be retested to confirm that the original defect has been successfully removed. This is called confirmation testing. Debugging (defect fixing) is a development activity, not a testing activity. Regression testing is the repeated testing of an already tested program, after modification, to discover any defects introduced or uncovered as a result of the change(s). It is performed when the software, or its environment, is changed. Regression testing may be performed at all test levels, and applies to functional, non-functional and structural testing.
iv) Maintenance testing
Once deployed, a software system is often in service for years or decades. During this time the system and its environment are often corrected, changed or extended. Modifications include planned enhancement changes (e.g. release-based), corrective and emergency changes, and changes of environment. Maintenance testing for migration (e.g. from one platform to another) should include operational tests of the new environment, as well as of the changed software. Maintenance testing for the retirement of a system may include the testing of data migration or archiving, if long data-retention periods are required. Maintenance testing may be done at any or all test levels and for any or all test types.
  • 25. Ch-3.1 Static techniques and the test process: dynamic testing, static testing, static technique.
Img 1.1: Permanent demand trend, IT job market, UK (2009). The chart provides the 3-month moving total, beginning in 2004, of permanent IT jobs citing ISTQB within the UK as a proportion of the total demand within the Qualifications category.
3.2 Review process: entry criteria, formal review, informal review, inspection, metric, moderator/inspection leader, peer review, reviewer, scribe, technical review, walkthrough.
3.3 Static analysis by tools: Compiler, complexity, control flow, data flow, static analysis.
I) Phases of a formal review
1) Planning: Selecting the personnel, allocating roles, defining entry and exit criteria for more formal reviews, etc.
2) Kick-off: Distributing documents, explaining the objectives, checking entry criteria, etc.
3) Individual preparation: Work done by each of the participants on their own before the review meeting, noting questions and comments.
4) Review meeting: Discussion or logging; making recommendations for handling the defects, or making decisions about the defects.
5) Rework: Fixing defects found, typically done by the author.
6) Follow-up: Checking that the defects have been addressed, gathering metrics and checking the exit criteria.
II) Roles and responsibilities
  • 26. Manager: Decides on the execution of reviews, allocates time in project schedules, and determines whether the review objectives have been met.
Moderator: Leads the review, including planning, running the meeting, and follow-up after the meeting.
Author: The writer or person with chief responsibility for the document(s) to be reviewed.
Reviewers: Individuals with a specific technical or business background. They identify defects and describe findings.
Scribe (recorder): Documents all the issues and problems.
III) Types of review
Informal review: No formal process; pair programming, or a technical lead reviewing designs and code. Main purpose: an inexpensive way to get some benefit.
Walkthrough: Meeting led by the author; scenarios, dry runs, peer group; open-ended sessions. Main purpose: learning, gaining understanding, defect finding.
Technical review: Documented, defined defect-detection process, ideally led by a trained moderator; may be performed as a peer review; pre-meeting preparation; involves peers and technical experts. Main purpose: discuss, make decisions, find defects, solve technical problems and check conformance to specifications and standards.
Inspection: Led by a trained moderator (not the author); usually peer examination; defined roles; includes metrics; formal process; pre-meeting preparation; formal follow-up process. Main purpose: find defects.
Note: walkthroughs, technical reviews and inspections can be performed within a peer group (colleagues at the same organizational level). This type of review is called a "peer review".
IV) Success factors for reviews
Each review has a clear predefined objective. The right people for the review objectives are involved. Defects found are welcomed, and expressed objectively. People issues and psychological aspects are dealt with (e.g.
making it a positive experience for the author). Review techniques are applied that are suitable to the type and level of software work products and reviewers. Checklists or roles are used, if appropriate, to increase the effectiveness of defect identification. Training is given in review techniques, especially the more formal techniques, such as inspection. Management supports a good review process (e.g. by incorporating adequate time for
  • 27. review activities in project schedules). There is an emphasis on learning and process improvement.
V) Cyclomatic complexity
The number of independent paths through a program.
Cyclomatic complexity is defined as: L – N + 2P, where
L = the number of edges/links in the graph
N = the number of nodes in the graph
P = the number of disconnected parts of the graph (connected components)
Alternatively, one may calculate cyclomatic complexity using the decision-point rule: decision points + 1.
Cyclomatic complexity and risk evaluation:
1 to 10: a simple program, without much risk
11 to 20: a complex program, moderate risk
21 to 50: a more complex program, high risk
> 50: an un-testable program (very high risk)
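The formula and risk bands above can be computed directly. A minimal sketch in Python (the example graph counts are hypothetical):

```python
def cyclomatic_complexity(edges, nodes, parts=1):
    """McCabe's metric: L - N + 2P for a control-flow graph."""
    return edges - nodes + 2 * parts

def risk_band(cc):
    """Map a cyclomatic complexity value to the risk bands above."""
    if cc <= 10:
        return "simple program, without much risk"
    if cc <= 20:
        return "complex program, moderate risk"
    if cc <= 50:
        return "more complex program, high risk"
    return "un-testable program (very high risk)"

# Example control-flow graph: 9 edges, 7 nodes, 1 connected component.
cc = cyclomatic_complexity(edges=9, nodes=7)   # 9 - 7 + 2*1 = 4
print(cc, "->", risk_band(cc))                 # 4 -> simple program, ...

# Decision-point rule: for this graph, 3 decision points + 1 gives
# the same answer.
assert 3 + 1 == cc
```

Both calculations agree by construction: each two-way decision node adds exactly one edge beyond what a straight-line graph would have.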
  • 28. Ch-4 Test Design Techniques - Modules4.1 The test development processTest case specification, test design, test execution schedule, test procedure specification, testscript, traceability.
  • 29. 4.2 Categories of test design techniques: Black-box test design technique, specification-based test design technique, white-box test design technique, structure-based test design technique, experience-based test design technique.
4.3 Specification-based or black-box techniques: Boundary value analysis, decision table testing, equivalence partitioning, state transition testing, use case testing.
4.4 Structure-based or white-box techniques: Code coverage, decision coverage, statement coverage, structure-based testing.
4.5 Experience-based techniques: Exploratory testing, fault attack.
4.6 Choosing test techniques: No specific terms.
Test Design Techniques  Specification-based/black-box techniques  Structure-based/white-box techniques  Experience-based techniques
I) Specification-based/black-box techniques
Equivalence partitioning, boundary value analysis, decision table testing, state transition testing, use case testing.
Equivalence partitioning
  • 30. o Inputs to the software or system are divided into groups that are expected to exhibit similar behavior.
o Equivalence partitions or classes can be found for both valid data and invalid data.
o Partitions can also be identified for outputs, internal values, time-related values and interface values.
o Equivalence partitioning is applicable at all levels of testing.
Boundary value analysis
o Behavior at the edge of each equivalence partition is more likely to be incorrect. The maximum and minimum values of a partition are its boundary values.
o A boundary value of a valid partition is a valid boundary value; the boundary of an invalid partition is an invalid boundary value.
o Boundary value analysis can be applied at all test levels.
o It is relatively easy to apply and its defect-finding capability is high.
o This technique is often considered an extension of equivalence partitioning.
Decision table testing
o In decision table testing, test cases are designed to execute combinations of inputs.
o Decision tables are a good way to capture system requirements that contain logical conditions.
o The decision table contains triggering conditions, often combinations of true and false for all input conditions.
o It may be applied in all situations where the action of the software depends on several logical decisions.
State transition testing
o In state transition testing, test cases are designed to execute valid and invalid state transitions.
o A system may exhibit a different response depending on current conditions or previous history.
In this case, that aspect of the system can be shown as a state transition diagram.
o State transition testing is much used in embedded software and technical automation.
Use case testing
o In use case testing, test cases are designed to execute user scenarios.
o A use case describes interactions between actors, including users and the system.
o Each use case has preconditions, which need to be met for the use case to work successfully.
o A use case usually has a mainstream scenario and sometimes alternative branches.
o Use cases, often referred to as scenarios, are very useful for designing acceptance tests with customer/user participation.
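The equivalence partitioning and boundary value analysis ideas above can be sketched in a few lines. Assume a hypothetical requirement that an input field accepts ages 18 to 65 inclusive, giving three partitions: invalid-low (< 18), valid (18..65), and invalid-high (> 65). The helper names are illustrative, not standard API:

```python
def boundary_values(low, high):
    """Two-value boundary value analysis: each boundary of the valid
    partition plus its nearest neighbour in the adjacent invalid
    partition."""
    return [low - 1, low, high, high + 1]

def partition_representatives(low, high):
    """Equivalence partitioning: one representative value from each of
    the three partitions (invalid-low, valid, invalid-high)."""
    return [low - 5, (low + high) // 2, high + 5]

print(boundary_values(18, 65))            # [17, 18, 65, 66]
print(partition_representatives(18, 65))  # [13, 41, 70]
```

Note how BVA extends equivalence partitioning, as the text says: the partitions come first, and BVA then concentrates test values at their edges, where off-by-one defects (e.g. `>` written instead of `>=`) are most likely.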
  • 31. II) Structure-based/white-box techniques
o Statement testing and coverage
o Decision testing and coverage
o Other structure-based techniques:  condition coverage  multiple condition coverage
Statement testing and coverage
Statement: An entity in a programming language, which is typically the smallest indivisible unit of execution.
Statement coverage: The percentage of executable statements that have been exercised by a test suite.
Statement testing: A white-box test design technique in which test cases are designed to execute statements.
Decision testing and coverage
Decision: A program point at which the control flow has two or more alternative routes; a node with two or more links to separate branches.
Decision coverage: The percentage of decision outcomes that have been exercised by a test suite. 100% decision coverage implies both 100% branch coverage and 100% statement coverage.
Decision testing: A white-box test design technique in which test cases are designed to execute decision outcomes.
Other structure-based techniques
Condition: A logical expression that can be evaluated as true or false.
Condition coverage: The percentage of condition outcomes that have been exercised by a test suite.
Condition testing: A white-box test design technique in which test cases are designed to execute condition outcomes.
Multiple condition testing: A white-box test design technique in which test cases are designed to execute combinations of single condition outcomes.
III) Experience-based techniques
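The difference between statement coverage and decision coverage can be seen on a tiny hypothetical function with a single decision:

```python
# A tiny function with one decision (two outcomes: True and False).
def grade(score):
    result = "fail"
    if score >= 50:
        result = "pass"
    return result

# Calling grade(70) alone executes every statement (100% statement
# coverage) but exercises only the True outcome of the decision
# (50% decision coverage). Adding grade(30) exercises the False
# outcome as well, reaching 100% decision coverage.
outcomes = set()
for score in (70, 30):
    grade(score)
    outcomes.add(score >= 50)  # record which decision outcome was taken

decision_coverage = 100 * len(outcomes) // 2  # 2 possible outcomes
print(f"decision coverage: {decision_coverage}%")  # decision coverage: 100%
```

This is why 100% decision coverage implies 100% statement coverage but not the other way around: the assignment inside the `if` is a statement you can hit without ever taking the `else`-like False path.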
  • 32. o Error guessing o Exploratory testing
Error guessing
o Error guessing is a commonly used experience-based technique.
o Generally, testers anticipate defects based on experience; a list of such defects can be built from experience, available defect data, and common knowledge about why software fails.
Exploratory testing
o Exploratory testing is concurrent test design, test execution, test logging and learning, based on a test charter containing test objectives, and carried out within time boxes.
o It is an approach that is most useful where there are few or inadequate specifications and severe time pressure.
  • 33. Ch-5 Test organization and independenceThe effectiveness of finding defects by testing and reviews can be improved by usingindependent testers. Options for testing teams available are:  No independent testers. Developers test their own code.  Independent testers within the development teams.  Independent test team or group within the organization, reporting to project management or executive management  Independent testers from the business organization or user community.  Independent test specialists for specific test targets such as usability testers, security testers or certification testers (who certify a software product against standards and regulations).  Independent testers outsourced or external to the organization.The benefits of independence include:Independent testers see other and different defects, and are unbiased.An independent tester can verify assumptions people made during specification andimplementation of the system.Drawbacks include:Isolation from the development team (if treated as totally independent).Independent testers may be the bottleneck as the last checkpoint.Developers may lose a sense of responsibility for quality.b) Tasks of the test leader and testerTest leader tasks may include:  Coordinate the test strategy and plan with project managers and others.  Write or review a test strategy for the project, and test policy for the organization.  Contribute the testing perspective to other project activities, such as integration planning.  Plan the tests – considering the context and understanding the test objectives and risks – including selecting test approaches, estimating the time, effort and cost of testing, acquiring resources, defining test levels, cycles, and planning incident management.  Initiate the specification, preparation, implementation and execution of tests, monitor the test results and check the exit criteria. 
 Adapt planning based on test results and progress (sometimes documented in status reports) and take any action necessary to compensate for problems.  Set up adequate configuration management of testware for traceability.  Introduce suitable metrics for measuring test progress and evaluating the quality of the testing and the product.  Decide what should be automated, to what degree, and how.  Select tools to support testing and organize any training in tool use for testers.
  • 34.  Decide about the implementation of the test environment.  Write test summary reports based on the information gathered during testing. Tester tasks may include:  Review and contribute to test plans.  Analyze, review and assess user requirements, specifications and models for testability.  Create test specifications.  Set up the test environment (often coordinating with system administration and network management).  Prepare and acquire test data.  Implement tests on all test levels, execute and log the tests, evaluate the results and document the deviations from expected results.  Use test administration or management tools and test monitoring tools as required.  Automate tests (may be supported by a developer or a test automation expert).  Measure performance of components and systems (if applicable).  Review tests developed by others. Note: People who work on test analysis, test design, specific test types or test automation may be specialists in these roles. Depending on the test level and the risks related to the product and the project, different people may take over the role of tester, keeping some degree of independence. Typically, testers at the component and integration levels would be developers; testers at the acceptance test level would be business experts and users; and testers for operational acceptance testing would be operators.
c) Defining the skills test staff need
Nowadays a testing professional must have 'application' or 'business domain' knowledge and 'technology' expertise, apart from testing skills.
2) Test planning and estimation
a) Test planning activities  Determining the scope and risks, and identifying the objectives of testing.  Defining the overall approach of testing (the test strategy), including the definition of the test levels and entry and exit criteria.  Integrating and coordinating the testing activities into the software life cycle activities: acquisition, supply, development, operation and maintenance.
 Making decisions about what to test, what roles will perform the test activities, how the test activities should be done, and how the test results will be evaluated.  Scheduling test analysis and design activities.
  • 35.  Scheduling test implementation, execution and evaluation.  Assigning resources for the different activities defined.  Defining the amount, level of detail, structure and templates for the test documentation.  Selecting metrics for monitoring and controlling test preparation and execution, defect resolution and risk issues.  Setting the level of detail for test procedures in order to provide enough information to support reproducible test preparation and execution.b) Exit criteriaThe purpose of exit criteria is to define when to stop testing, such as at the end of a test level orwhen a set of tests has a specific goal.Typically exit criteria may consist of:  Thoroughness measures, such as coverage of code, functionality or risk.  Estimates of defect density or reliability measures.  Cost.  Residual risks, such as defects not fixed or lack of test coverage in certain areas.  Schedules such as those based on time to market.c) Test estimationTwo approaches for the estimation of test effort are covered in this syllabus:  The metrics-based approach: estimating the testing effort based on metrics of former or similar projects or based on typical values.  The expert-based approach: estimating the tasks by the owner of these tasks or by experts.Once the test effort is estimated, resources can be identified and a schedule can be drawn up.The testing effort may depend on a number of factors, including:  Characteristics of the product: the quality of the specification and other information used for test models (i.e. the test basis), the size of the product, the complexity of the problem domain, the requirements for reliability and security, and the requirements for documentation.  Characteristics of the development process: the stability of the organization, tools used, test process, skills of the people involved, and time pressure.  The outcome of testing: the number of defects and the amount of rework required.
  • 36. d) Test approaches (test strategies)One way to classify test approaches or strategies is based on the point in time at which the bulkof the test design work is begun:  Preventative approaches, where tests are designed as early as possible.  Reactive approaches, where test design comes after the software or system has been produced.Typical approaches or strategies include:  Analytical approaches, such as risk-based testing where testing is directed to areas of greatest risk  Model-based approaches, such as stochastic testing using statistical information about failure rates (such as reliability growth models) or usage (such as operational profiles).  Methodical approaches, such as failure-based (including error guessing and fault-attacks), experienced-based, check-list based, and quality characteristic based.  Process- or standard-compliant approaches, such as those specified by industry-specific standards or the various agile methodologies.  Dynamic and heuristic approaches, such as exploratory testing where testing is more reactive to events than pre-planned, and where execution and evaluation are concurrent tasks.  Consultative approaches, such as those where test coverage is driven primarily by the advice and guidance of technology and/or business domain experts outside the test team.  Regression-averse approaches, such as those that include reuse of existing test material, extensive automation of functional regression tests, and standard test suites.Different approaches may be combined, for example, a risk-based dynamic approach.The selection of a test approach should consider the context, including  Risk of failure of the project, hazards to the product and risks of product failure to humans, the environment and the company.  Skills and experience of the people in the proposed techniques, tools and methods.  The objective of the testing endeavour and the mission of the testing team. 
 Regulatory aspects, such as external and internal regulations for the development process.  The nature of the product and the business.3) Test progress monitoring and controla) Test progress monitoring
  • 37.  Percentage of work done in test case preparation (or percentage of planned test cases prepared).  Percentage of work done in test environment preparation.  Test case execution (e.g. number of test cases run/not run, and test cases passed/failed).  Defect information (e.g. defect density, defects found and fixed, failure rate, and retest results).  Test coverage of requirements, risks or code.  Subjective confidence of testers in the product.  Dates of test milestones.  Testing costs, including the cost compared to the benefit of finding the next defect or to run the next test.b) Test Reporting  What happened during a period of testing, such as dates when exit criteria were met.  Analyzed information and metrics to support recommendations and decisions about future actions, such as an assessment of defects remaining, the economic benefit of continued testing, outstanding risks, and the level of confidence in tested software.Metrics should be collected during and at the end of a test level in order to assess:  The adequacy of the test objectives for that test level.  The adequacy of the test approaches taken.  The effectiveness of the testing with respect to its objectives.c) Test controlTest control describes any guiding or corrective actions taken as a result of information andmetrics gathered and reported. Actions may cover any test activity and may affect any othersoftware life cycle activity or task.Examples of test control actions are:  Making decisions based on information from test monitoring.  Re-prioritize tests when an identified risk occurs (e.g. software delivered late).  Change the test schedule due to availability of a test environment.  Set an entry criterion requiring fixes to have been retested (confirmation tested) by a developer before accepting them into a build.4) Configuration management
The purpose of configuration management is to establish and maintain the integrity of the products (components, data and documentation) of the software or system through the project and product life cycle.

For testing, configuration management may involve ensuring that:
- All items of testware are identified, version controlled, tracked for changes, related to each other and related to development items (test objects), so that traceability can be maintained throughout the test process.
- All identified documents and software items are referenced unambiguously in test documentation.

For the tester, configuration management helps to uniquely identify (and to reproduce) the tested item, test documents, the tests and the test harness.

During test planning, the configuration management procedures and infrastructure (tools) should be chosen, documented and implemented.

5) Risk and testing

a) Project risks

Project risks are the risks that surround the project's capability to deliver its objectives, such as:

Organizational factors:
- Skill and staff shortages.
- Personnel and training issues.
- Political issues, such as:
  - problems with testers communicating their needs and test results;
  - failure to follow up on information found in testing and reviews (e.g. not improving development and testing practices).
- Improper attitude toward or expectations of testing (e.g. not appreciating the value of finding defects during testing).

Technical issues:
- Problems in defining the right requirements.
- The extent to which requirements can be met given existing constraints.
- The quality of the design, code and tests.

Supplier issues:
- Failure of a third party.
- Contractual issues.
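Project risks like those above are often collected in a simple risk register and scored, so the team can see which risks to address first. A toy Python sketch; the entries, 1-5 scales and likelihood-times-impact scoring are invented for illustration and are not part of any standard:

```python
# Toy project-risk register: each entry is (name, likelihood, impact)
# on invented 1-5 scales, ranked by risk exposure so the team can see
# which risks to address first.
risks = [
    ("skill and staff shortages",       4, 4),
    ("unclear requirements",            3, 5),
    ("third-party supplier failure",    2, 5),
    ("testers' needs not communicated", 3, 2),
]

def ranked(register):
    """Sort risks by descending risk exposure (likelihood * impact)."""
    return sorted(register, key=lambda r: r[1] * r[2], reverse=True)

for name, likelihood, impact in ranked(risks):
    print(f"{likelihood * impact:2d}  {name}")
```

The same scoring idea also underlies the risk-based approach to product risks described in the next subsection.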
b) Product risks

Potential failure areas (adverse future events or hazards) in the software or system are known as product risks, as they are a risk to the quality of the product. Examples:
- Failure-prone software delivered.
- The potential that the software/hardware could cause harm to an individual or company.
- Poor software characteristics (e.g. functionality, reliability, usability and performance).
- Software that does not perform its intended functions.

Risks are used to decide where to start testing and where to test more; testing is used to reduce the risk of an adverse effect occurring, or to reduce the impact of an adverse effect.

Product risks are a special type of risk to the success of a project. Testing as a risk-control activity provides feedback about the residual risk by measuring the effectiveness of critical defect removal and of contingency plans.

A risk-based approach to testing provides proactive opportunities to reduce the levels of product risk, starting in the initial stages of a project. It involves the identification of product risks and their use in guiding test planning and control, specification, preparation and execution of tests. In a risk-based approach the risks identified may be used to:
- Determine the test techniques to be employed.
- Determine the extent of testing to be carried out.
- Prioritize testing in an attempt to find the critical defects as early as possible.
- Determine whether any non-testing activities could be employed to reduce risk (e.g. providing training to inexperienced designers).

Risk-based testing draws on the collective knowledge and insight of the project stakeholders to determine the risks and the levels of testing required to address those risks.

To ensure that the chance of a product failure is minimized, risk management activities provide a disciplined approach to:
- Assess (and reassess on a regular basis) what can go wrong (risks).
- Determine what risks are important to deal with.
- Implement actions to deal with those risks.

In addition, testing may support the identification of new risks, may help to determine what risks should be reduced, and may lower uncertainty about risks.

6) Incident management

Since one of the objectives of testing is to find defects, the discrepancies between actual and expected outcomes need to be logged as incidents. Incidents should be tracked from discovery and classification to correction and confirmation of the solution. In order to manage all incidents to completion, an organization should establish a process and rules for classification.
Incidents may be raised during development, review, testing or use of a software product. They may be raised for issues in code or the working system, or in any type of documentation, including requirements, development documents, test documents, and user information such as "Help" or installation guides.

Incident reports have the following objectives:
- Provide developers and other parties with feedback about the problem to enable identification, isolation and correction as necessary.
- Provide test leaders with a means of tracking the quality of the system under test and the progress of the testing.
- Provide ideas for test process improvement.

Details of the incident report may include:
- Date of issue, issuing organization, and author.
- Expected and actual results.
- Identification of the test item (configuration item) and environment.
- Software or system life cycle process in which the incident was observed.
- Description of the incident to enable reproduction and resolution, including logs, database dumps or screenshots.
- Scope or degree of impact on stakeholder(s) interests.
- Severity of the impact on the system.
- Urgency/priority to fix.
- Status of the incident (e.g. open, deferred, duplicate, waiting to be fixed, fixed awaiting retest, closed).
- Conclusions, recommendations and approvals.
- Global issues, such as other areas that may be affected by a change resulting from the incident.
- Change history, such as the sequence of actions taken by project team members with respect to the incident to isolate, repair, and confirm it as fixed.
- References, including the identity of the test case specification that revealed the problem.
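A subset of these report fields, together with status tracking from discovery to closure, can be sketched as a small data structure. The field names, status set and history format below are illustrative assumptions, not mandated by IEEE 829 or any other standard:

```python
from dataclasses import dataclass, field

# Illustrative status values, echoing the examples listed above.
ALLOWED_STATUSES = {"open", "deferred", "duplicate", "waiting to be fixed",
                    "fixed awaiting retest", "closed"}

@dataclass
class IncidentReport:
    """Toy incident record with a subset of typical report fields."""
    author: str
    test_item: str
    expected_result: str
    actual_result: str
    severity: str
    priority: str
    status: str = "open"
    history: list = field(default_factory=list)  # change history of statuses

    def set_status(self, new_status):
        """Move the incident to a new status, keeping a change history."""
        if new_status not in ALLOWED_STATUSES:
            raise ValueError(f"unknown status: {new_status}")
        self.history.append((self.status, new_status))
        self.status = new_status

inc = IncidentReport("tester1", "login v2.4", "login succeeds",
                     "crash on submit", severity="high", priority="urgent")
inc.set_status("fixed awaiting retest")
inc.set_status("closed")
print(inc.status, inc.history)
```

Enforcing the allowed-status set in code mirrors the rule that an organization should define its classification scheme up front.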
Ch-6
--------------------------------------------------------------------------------

Types of test tools

(Tools marked (D) are typically used by developers.)

Management of testing and tests:
- Requirements management tools
- Incident management tools
- Configuration management tools

Static testing:
- Review tools
- Static analysis tools (D)
- Modeling tools (D)

Test specification:
- Test design tools
- Test data preparation tools

Test execution and logging:
- Test execution tools
- Test harness/unit test framework tools (D)
- Test comparators
- Coverage measurement tools (D)
- Security tools

Performance and monitoring:
- Dynamic analysis tools
- Performance/load/stress testing tools
- Monitoring tools

Specific application areas:
- Special tools for web-based applications
- Special tools for specific development platforms
- Special tools for embedded systems
- Tool support using other tools

Test tools and their purposes

Requirements management tools
Store requirements, check them for consistency, allow requirements to be prioritized, trace changes to requirements, measure coverage of requirements, etc.

Incident management tools
Store and manage incident reports, facilitating prioritization, assignment of actions to people, attribution of status, etc.

Configuration management tools
Store information about versions and builds of software and testware; enable traceability between testware and software work products, etc.

Review tools
Store review information; store and communicate review comments, etc.

Static analysis tools (D)
Enforce coding standards, analyze structures and dependencies, and aid in understanding the code.
Modeling tools (D)
Validate models of the software; find defects in a data model, state model or object model.

Test design tools
Generate test inputs or executable tests; generate expected outcomes.

Test data preparation tools
Prepare test data; manipulate databases, files or data transmissions to set up test data.

Test execution tools
Record tests, execute tests automatically, use inputs and expected outcomes, compare results with expected outcomes, repeat tests, perform dynamic comparison, and manipulate tests using a scripting language.

Test harness/unit test framework tools (D)
Test components or parts of a system by simulating the environment; provide an execution framework in middleware.

Test comparators
Determine differences between files, databases or test results (post-execution comparison); may use a test oracle if automated.

Coverage measurement tools (D)
Measure the percentage of specific types of code structure exercised (e.g. statements, branches or decisions, and module or function calls).

Security tools
Check for computer viruses and denial-of-service attacks; search for specific vulnerabilities of the system.
Dynamic analysis tools (D)
Detect memory leaks, identify time dependencies, and identify pointer arithmetic errors.

Performance/load/stress testing tools
Measure load or stress; monitor and report on how a system behaves under a variety of simulated usage conditions; simulate a load on an application, a database, or a system environment; execute tests repetitively.

Monitoring tools
Continuously analyze, verify and report on specific system resources; store information about the version and build of the software and testware, and enable traceability.

Tool support using other tools
Some tools use other tools (e.g. QTP uses Excel sheets and SQL tools).

Potential benefits and risks of tool support for testing

Benefits:
- Repetitive work is reduced
- Greater consistency and repeatability
- Objective assessment
- Ease of access to information about tests or testing

Risks:
- Unrealistic expectations for the tool
- Underestimating the time and effort needed to achieve significant and continued benefits from the tool
- Underestimating the effort required to maintain the test assets generated by the tool
- Over-reliance on the tool

Special considerations for some types of tools

The following tools have special considerations:
- Test execution tools
- Performance testing tools
- Static testing tools
- Test management tools

Introducing a tool into an organization

The following factors are important in selecting a tool:
- Assessment of the organization's maturity.
- Identification of the areas within the organization where tool support will help to improve the testing process.
- Evaluation of tools against clear requirements and objective criteria.
- Proof-of-concept to see whether the product works as desired and meets the requirements and objectives defined for it.
- Evaluation of the vendor (training, support and other commercial aspects) or the open-source network of support.
- Identification and planning of internal implementation (including coaching and mentoring for those new to the use of the tool).

Objectives for a pilot project for a new tool:
- To learn more about the tool.
- To see how the tool would fit with existing processes or documentation.
- To decide on standard ways of using the tool that will work for all potential users.
- To evaluate the pilot project against its objectives.

Success factors for the deployment of a new tool within an organization:
- Rolling out the tool to the rest of the organization incrementally.
- Adapting and improving processes to fit with the use of the tool.
- Providing training and coaching/mentoring for new users.
- Defining usage guidelines.
- Implementing a way to learn lessons from tool use.
- Monitoring tool use and benefits.
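One common way to make "evaluation of tools against clear requirements and objective criteria" concrete is a weighted scoring matrix. A toy sketch; the criteria, weights, candidate tools and scores are all invented for illustration:

```python
# Toy weighted-criteria evaluation of candidate test tools.
# Weights and 1-5 scores are invented; higher is better.
weights = {"fits process": 3, "vendor support": 2, "cost": 1}

candidates = {
    "Tool A": {"fits process": 4, "vendor support": 3, "cost": 2},
    "Tool B": {"fits process": 2, "vendor support": 5, "cost": 5},
}

def weighted_score(scores):
    """Sum each criterion score multiplied by its weight."""
    return sum(weights[c] * s for c, s in scores.items())

best = max(candidates, key=lambda name: weighted_score(candidates[name]))
for name, scores in candidates.items():
    print(name, weighted_score(scores))
print("recommend:", best)  # prints: recommend: Tool B
```

The point of the exercise is less the arithmetic than forcing the organization to write down its requirements and weight them before looking at vendors.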
Questions with Answers

Ch-1

1. When what is visible to end-users is a deviation from the specific or expected behavior, this is called:
a) an error
b) a fault
c) a failure
d) a defect
e) a mistake

2. Regression testing should be performed:
v) every week
w) after the software has changed
x) as often as possible
y) when the environment has changed
z) when the project manager says
a) v & w are true, x – z are false
b) w, x & y are true, v & z are false
c) w & y are true, v, x & z are false
d) w is true; v, x, y and z are false
e) all of the above are true

3. The IEEE 829 test plan documentation standard contains all of the following except:
a) test items
b) test deliverables
c) test tasks
d) test environment
e) test specification

4. Testing should be stopped when:
a) all the planned tests have been run
b) time has run out
c) all faults have been fixed correctly
d) both a) and c)
e) it depends on the risks for the system being tested

5. Order numbers on a stock control system can range between 10000 and 99999 inclusive. Which of the following inputs might be a result of designing tests for only valid equivalence classes and valid boundaries:
a) 1000, 5000, 99999
b) 9999, 50000, 100000
c) 10000, 50000, 99999
d) 10000, 99999
e) 9999, 10000, 50000, 99999, 100000

6. Consider the following statements about early test design:
i. early test design can prevent fault multiplication
ii. faults found during early test design are more expensive to fix
iii. early test design can find faults
iv. early test design can cause changes to the requirements
v. early test design takes more effort
a) i, iii & iv are true; ii & v are false
b) iii is true; i, ii, iv & v are false
c) iii & iv are true; i, ii & v are false
d) i, iii, iv & v are true; ii is false
e) i & iii are true; ii, iv & v are false

7. Non-functional system testing includes:
a) testing to see where the system does not function properly
b) testing quality attributes of the system, including performance and usability
c) testing a system feature using only the software required for that action
d) testing a system feature using only the software required for that function
e) testing for functions that should not exist

8. Which of the following is NOT part of configuration management:
a) status accounting of configuration items
b) auditing conformance to ISO9001
c) identification of test versions
d) record of changes to documentation over time
e) controlled library access

9. Which of the following is the main purpose of the integration strategy for integration testing in the small?
a) to ensure that all of the small modules are tested adequately
b) to ensure that the system interfaces to other systems and networks
c) to specify which modules to combine when, and how many at once
d) to ensure that the integration testing can be performed by a small team
e) to specify how the software should be divided into modules

10. What is the purpose of test completion criteria in a test plan:
a) to know when a specific test has finished its execution
b) to ensure that the test case specification is complete
c) to set the criteria used in generating test inputs
d) to know when test planning is complete
e) to plan when to stop testing

11. Consider the following statements:
i. an incident may be closed without being fixed
ii. incidents may not be raised against documentation
iii. the final stage of incident tracking is fixing
iv. the incident record does not include information on test environments
v. incidents should be raised when someone other than the author of the software performs the test
a) ii and v are true; i, iii and iv are false
b) i and v are true; ii, iii and iv are false
c) i, iv and v are true; ii and iii are false
d) i and ii are true; iii, iv and v are false
e) i is true; ii, iii, iv and v are false

12. Given the following code, which is true about the minimum number of test cases required for full statement and branch coverage:

Read P
Read Q
IF P+Q > 100 THEN
  Print "Large"
ENDIF
IF P > 50 THEN
  Print "P Large"
ENDIF

a) 1 test for statement coverage, 3 for branch coverage
b) 1 test for statement coverage, 2 for branch coverage
c) 1 test for statement coverage, 1 for branch coverage
d) 2 tests for statement coverage, 3 for branch coverage
e) 2 tests for statement coverage, 2 for branch coverage

13. Given the following:

Switch PC on
Start "outlook"
IF outlook appears THEN
  Send an email
  Close outlook

a) 1 test for statement coverage, 1 for branch coverage
b) 1 test for statement coverage, 2 for branch coverage
c) 1 test for statement coverage, 3 for branch coverage
d) 2 tests for statement coverage, 2 for branch coverage
e) 2 tests for statement coverage, 3 for branch coverage

14. Given the following code, which is true:

IF A > B THEN
  C = A - B
ELSE
  C = A + B
ENDIF
Read D
IF C = D THEN
  Print "Error"
ENDIF

a) 1 test for statement coverage, 3 for branch coverage
b) 2 tests for statement coverage, 2 for branch coverage
c) 2 tests for statement coverage, 3 for branch coverage
d) 3 tests for statement coverage, 3 for branch coverage
e) 3 tests for statement coverage, 2 for branch coverage

15. Consider the following:

Pick up and read the newspaper
Look at what is on television
If there is a program that you are interested in watching then switch the television on and watch the program
Otherwise continue reading the newspaper
If there is a crossword in the newspaper then try and complete the crossword

a) SC = 1 and DC = 1
b) SC = 1 and DC = 2
c) SC = 1 and DC = 3
d) SC = 2 and DC = 2
e) SC = 2 and DC = 3

16. The place to start if you want a (new) test tool is:
a) Attend a tool exhibition
b) Invite a vendor to give a demo
c) Analyze your needs and requirements
d) Find out what your budget would be for the tool
e) Search the internet

17. When a new testing tool is purchased, it should be used first by:
a) A small team to establish the best way to use the tool
b) Everyone who may eventually have some use for the tool
c) The independent testing team
d) The managers to see what projects it should be used in
e) The vendor contractor to write the initial scripts

18. What can static analysis NOT find?
a) The use of a variable before it has been defined
b) Unreachable ("dead") code
c) Whether the value stored in a variable is correct
d) The re-definition of a variable before it has been used
e) Array bound violations

19. Which of the following is NOT a black box technique:
a) Equivalence partitioning
b) State transition testing
c) LCSAJ
d) Syntax testing
e) Boundary value analysis

20. Beta testing is:
a) Performed by customers at their own site
b) Performed by customers at their software developer's site
c) Performed by an independent test team
d) Useful to test bespoke software
e) Performed as early as possible in the lifecycle

21. Given the following types of tool, which tools would typically be used by developers and which by an independent test team:
i. static analysis
ii. performance testing
iii. test management
iv. dynamic analysis
v. test running
vi. test data preparation
a) developers would typically use i, iv and vi; test team ii, iii and v
b) developers would typically use i and iv; test team ii, iii, v and vi
c) developers would typically use i, ii, iii and iv; test team v and vi
d) developers would typically use ii, iv and vi; test team i, ii and v
e) developers would typically use i, iii, iv and v; test team ii and vi

22. The main focus of acceptance testing is:
a) finding faults in the system
b) ensuring that the system is acceptable to all users
c) testing the system with other systems
d) testing from a business perspective
e) testing by an independent test team

23. Which of the following statements about the component testing standard is false:
a) black box design techniques all have an associated measurement technique
b) white box design techniques all have an associated measurement technique
c) cyclomatic complexity is not a test measurement technique
d) black box measurement techniques all have an associated test design technique
e) white box measurement techniques all have an associated test design technique
24. Which of the following statements is NOT true:
a) inspection is the most formal review process
b) inspections should be led by a trained leader
c) managers can perform inspections on management documents
d) inspection is appropriate even when there are no written documents
e) inspection compares documents with predecessor (source) documents

25. A typical commercial test execution tool would be able to perform all of the following EXCEPT:
a) generating expected outputs
b) replaying inputs according to a programmed script
c) comparison of expected outcomes with actual outcomes
d) recording test inputs
e) reading test values from a data file

26. The difference between re-testing and regression testing is:
a) re-testing is running a test again; regression testing looks for unexpected side effects
b) re-testing looks for unexpected side effects; regression testing is repeating those tests
c) re-testing is done after faults are fixed; regression testing is done earlier
d) re-testing uses different environments; regression testing uses the same environment
e) re-testing is done by developers; regression testing is done by independent testers

27. Expected results are:
a) only important in system testing
b) only used in component testing
c) never specified in advance
d) most useful when specified in advance
e) derived from the code

28. Test managers should not:
a) report on deviations from the project plan
b) sign the system off for release
c) re-allocate resources to meet original plans
d) raise incidents on faults that they have found
e) provide information for risk analysis and quality improvement

29. Unreachable code would best be found using:
a) code reviews
b) code inspections
c) a coverage tool
d) a test management tool
e) a static analysis tool

30. A tool that supports traceability, recording of incidents or scheduling of tests is called:
a) a dynamic analysis tool
b) a test execution tool
c) a debugging tool
d) a test management tool
e) a configuration management tool

31. What information need not be included in a test incident report:
a) how to fix the fault
b) how to reproduce the fault
c) test environment details
d) severity, priority
e) the actual and expected outcomes

32. Which expression best matches the following characteristics of review processes:
1. led by the author
2. undocumented
3. no management participation
4. led by a trained moderator or leader
5. uses entry and exit criteria
s) inspection
t) peer review
u) informal review
v) walkthrough
a) s = 4, t = 3, u = 2 and 5, v = 1
b) s = 4 and 5, t = 3, u = 2, v = 1
c) s = 1 and 5, t = 3, u = 2, v = 4
d) s = 5, t = 4, u = 3, v = 1 and 2
e) s = 4 and 5, t = 1, u = 2, v = 3

33. Which of the following is NOT part of system testing:
a) business process-based testing
b) performance, load and stress testing
c) requirements-based testing
d) usability testing
e) top-down integration testing

34. What statement about expected outcomes is FALSE:
a) expected outcomes are defined by the software's behavior
b) expected outcomes are derived from a specification, not from the code
c) expected outcomes include outputs to a screen and changes to files and databases
d) expected outcomes should be predicted before a test is run
e) expected outcomes may include timing constraints such as response times

35. The standard that gives definitions of testing terms is:
a) ISO/IEC 12207
b) BS7925-1
c) BS7925-2
d) ANSI/IEEE 829
e) ANSI/IEEE 729

36. The cost of fixing a fault:
a) Is not important
b) Increases as we move the product towards live use
c) Decreases as we move the product towards live use
d) Is more expensive if found in requirements than in functional design
e) Can never be determined

37. Which of the following is NOT included in the Test Plan document of the Test Documentation Standard:
a) Test items (i.e. software versions)
b) What is not to be tested
c) Test environments
d) Quality plans
e) Schedules and deadlines

38. Could reviews or inspections be considered part of testing:
a) No, because they apply to development documentation
b) No, because they are normally applied before testing
c) No, because they do not apply to the test documentation
d) Yes, because both help detect faults and improve quality
e) Yes, because testing includes all non-constructive activities

39. Which of the following is not part of performance testing:
a) Measuring response time
b) Measuring transaction rates
c) Recovery testing
d) Simulating many users
e) Generating many transactions

40. Error guessing is best used:
a) As the first approach to deriving test cases
b) After more formal techniques have been applied
c) By inexperienced testers
d) After the system has gone live
e) Only by end users
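Several of the Ch-1 questions above (e.g. question 5) exercise boundary value analysis. A small sketch that generates only valid boundary values for an inclusive integer range; note that the exact choice of values varies between two-value and three-value BVA conventions, so this is one illustrative selection, not the only correct one:

```python
def valid_boundary_values(lo, hi):
    """Valid-boundary test inputs for an inclusive integer range,
    plus one nominal mid-range value (no invalid values included)."""
    nominal = (lo + hi) // 2
    return sorted({lo, lo + 1, nominal, hi - 1, hi})

# Order numbers from question 5: valid range 10000-99999 inclusive.
print(valid_boundary_values(10000, 99999))
# -> [10000, 10001, 54999, 99998, 99999]
```

Tests for the invalid partitions (9999 and 100000 in question 5) would be generated separately, which is exactly what distinguishes options that mix valid and invalid values.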
Ch-2

1. Which of the following is true?
a. Testing is the same as quality assurance
b. Testing is a part of quality assurance
c. Testing is not a part of quality assurance
d. Testing is the same as debugging

2. Why is testing necessary?
a. Because testing is a good method to make sure there are no defects in the software
b. Because verification and validation are not enough to get to know the quality of the software
c. Because testing measures the quality of the software system and helps to increase the quality
d. Because testing finds more defects than reviews and inspections

3. Integration testing has the following characteristics:
I. It can be done in an incremental manner
II. It is always done after system testing
III. It includes functional tests
IV. It includes non-functional tests
a. I, II and III are correct
b. I is correct
c. I, III and IV are correct
d. I, II and IV are correct

4. A number of critical bugs are fixed in software. All the bugs are in one module, related to reports. The test manager decides to do regression testing only on the reports module.
a. The test manager should do only automated regression testing
b. The test manager is justified in her decision because no bug has been fixed in other modules
c. The test manager should only do confirmation testing; there is no need to do regression testing
d. Regression testing should be done on other modules as well because fixing one module may affect other modules

5. Which of the following is correct about static analysis tools?
a. Static analysis tools are used only by developers
b. Compilers may offer some support for static analysis
c. Static analysis tools help find failures rather than defects
d. Static analysis tools require execution of the code to analyze the coverage
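Question 5 above turns on the fact that static analysis examines code without executing it, and that compilers offer some static-analysis support. A minimal illustration using Python's built-in compile(), which flags a syntax error without ever running the snippet:

```python
# Static analysis happens without executing the program: compile() will
# flag the syntax error in this deliberately malformed snippet even
# though the code is never run.
source = "def f(:\n    return 1\n"
try:
    compile(source, "<example>", "exec")
    result = "no issues found"
except SyntaxError as e:
    result = f"static check failed: line {e.lineno}"
print(result)
```

Failures, by contrast, only ever show up when the code is executed, which is why static analysis tools find defects rather than failures.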
6. In a flight reservation system, the number of available seats in each plane model is an input. A plane may have any positive number of available seats, up to the given capacity of the plane. Using boundary value analysis, a list of available-seat values was generated. Which of the following lists is correct?
a. 1, 2, capacity minus 1, capacity, capacity plus 1
b. 0, 1, capacity, capacity plus 1
c. 0, 1, 2, capacity plus 1, a very large number
d. 0, 1, 10, 100, capacity, capacity plus one

7. Which of the following is correct about static analysis tools?
a. They help you find defects rather than failures
b. They are used by developers only
c. They require compilation of code
d. They are useful only for regulated industries

8. In the Foundation Level syllabus you will find the main basic principles of testing. Which of the following sentences describes one of these basic principles?
a. Complete testing of software is attainable if you have enough resources and test tools
b. With automated testing you can make statements with more confidence about the quality of a product than with manual testing
c. For a software system, it is not possible, under normal conditions, to test all inputs and preconditions
d. A goal of testing is to show that the software is defect free

9. Which of the following statements contains a valid goal for a functional test set?
a. A goal is that no more failures will result from the remaining defects
b. A goal is to find as many failures as possible so that the cause of the failures can be identified and fixed
c. A goal is to eliminate as much as possible the causes of defects
d. A goal is to fulfill all requirements for testing that are defined in the project plan

10. In system testing:
a. Both functional and non-functional requirements are to be tested
b. Only functional requirements are tested; non-functional requirements are validated in a review
c. Only non-functional requirements are tested; functional requirements are validated in a review
d. Only requirements which are listed in the specification document are to be tested

11. Which of the following activities differentiates a walkthrough from a formal review?
a. A walkthrough does not follow a defined process
b. For a walkthrough, individual preparation by the reviewers is optional
c. A walkthrough requires a meeting
d. A walkthrough finds the causes of failures, while a formal review finds the failures

12. Why does boundary value analysis provide good test cases?
a. Because it is an industry standard
b. Because errors are frequently made during programming of the different cases near the 'edges' of the range of values
c. Because only equivalence classes that are equal from a functional point of view are considered in the test cases
d. Because the test object is tested under maximal load up to its performance limits

13. Which of the following lists contains only non-functional tests?
a. Interoperability (compatibility) testing, reliability testing, performance testing
b. System testing, performance testing
c. Load testing, stress testing, component testing, portability testing
d. Testing various configurations, beta testing, load testing

14. The following list contains risks that have been identified for a software product to be developed. Which of these risks is an example of a product risk?
a. Not enough qualified testers to complete the planned tests
b. Software delivery is behind schedule
c. Threat to a patient's life
d. A third-party supplier does not supply as stipulated

15. Which set of metrics can be used for monitoring of the test execution?
a. Number of detected defects, testing cost
b. Number of residual defects in the test object
c. Percentage of completed tasks in the preparation of the test environment; test cases prepared
d. Number of test cases run/not run; test cases passed/failed

16. Which of the following statements is correct?
a. Static analysis tools produce statistics during program execution
b. Configuration management systems allow us to provide accurate defect statistics of different configurations
c. Stress testing tools examine the behaviour of the test object at or beyond full load
d. Performance measurement tools can be used in all phases of the software life cycle

17. What makes an inspection different from other review types?
a. It is led by a trained leader, and uses formal entry and exit criteria and checklists
b. It is led by the author of the document to be inspected
c. It can only be used for reviewing design and code
d. It is led by the author, uses checklists, and collects data for improvement

18. Which of the following is a valid collection of equivalence classes for the following problem: An integer field shall contain values from and including 1 to and including 15?
a. Less than 1, 1 through 15, more than 15
b. Negative numbers, 1 through 15, above 15
c. Less than 1, 1 through 14, more than 15
d. Less than 0, 1 through 14, 15 and more

19. Which of the following is a valid collection of equivalence classes for the following problem: Paying with credit cards shall be possible with Visa, Master and Amex cards only?
a. Visa, Master, Amex
b. Visa, Master, Amex, Diners, Keycards, and other option
c. Visa, Master, Amex, any other card, no card
d. No card, other cards, any of Visa – Master – Amex

20. Which of the following techniques are black box techniques?
a. State transition testing, code testing, agile testing
b. Equivalence partitioning, state transition testing, decision table testing
c. System testing, acceptance testing, equivalence partitioning
d. System integration testing, system testing, decision table testing

21. A defect management system shall keep track of the status of every defect registered and enforce the rules about changing these states. If your task is to test the status tracking, which method would be best?
a. Logic-based testing
b. Use-case-based testing
c. State transition testing
d. Systematic testing according to the V-model

22. This part of a program is given:
WHILE (condition A)
  Do B
END WHILE
How many decisions should be tested in this code in order to achieve 100% decision coverage?
a. 2
b. Indefinite
c. 1
d. 4

23. Why can the tester be dependent on configuration management?
a. Because configuration management assures that we know the exact version of the testware and the test object
b. Because test execution is not allowed to proceed without the consent of the change control board
c. Because changes in the test object are always subject to configuration management
d. Because configuration management assures the right configuration of the test tools

24. What test items should be put under configuration management?
a. The test object, the test material and the test environment
b. The problem reports and the test material
c. Only the test objects; the test cases need to be adapted during agile testing
d. The test object and the test material

25. Which of the following can be a root cause of a bug in a software product?
(I) The project had incomplete procedures for configuration management.
(II) The time schedule to develop a certain component was cut.
(III) The specification was unclear.
(IV) Use of the code standard was not followed up.
(V) The testers were not certified.
a. (I) and (II) are correct
b. (I) through (IV) are correct
c. (III) through (V) are correct
d. (I), (II) and (IV) are correct

26. Which of the following is most often considered a component-interface bug?
a. For two components exchanging data, one component used metric units; the other one used British units
b. The system is difficult to use due to a too complicated terminal input structure
c. The messages for user input errors are misleading and not helpful for understanding the input error cause
d. Under high load, the system does not provide enough open ports to connect to

27. Which of the following project inputs influence testing?
(I) Contractual requirements
(II) Legal requirements
(III) Industry standards
(IV) Application risk
(V) Project size
a. (I) through (III) are correct
b. All alternatives are correct
c. (II) and (V) are correct
d. (I), (III) and (V) are correct

28. What is the purpose of test exit criteria in the test plan?
a. To specify when to stop the testing activity
b. To set the criteria used in generating test inputs
c. To ensure that the test case specification is complete
d. To know when a specific test has finished its execution

29. Which of the following items need not be given in an incident report?
a. The version number of the test object
b. Test data and used environment
c. Identification of the test case that failed
d. The instructions on how to correct the fault
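Question 22's WHILE loop above is worth tracing in real code. A minimal Python sketch follows; the condition and loop body are hypothetical stand-ins for "condition A" and "Do B". It shows that such a loop contains exactly one decision, and that 100% decision coverage means exercising both of its outcomes: entering the body and skipping/exiting it.

```python
def process(items):
    """One WHILE decision: 'are there items left?'.

    100% decision coverage requires the loop condition to evaluate
    both True (the body runs) and False (the loop is skipped or exits),
    i.e. one decision with two outcomes.
    """
    handled = 0
    while items:              # the single decision in this code
        items.pop()           # hypothetical stand-in for "Do B"
        handled += 1
    return handled

# Two tests cover both outcomes of the one decision:
print(process([]))        # condition immediately False -> 0
print(process([1, 2]))    # condition True twice, then False -> 2
```

Two test cases, one decision: the count of decisions (not of test cases) is what the question asks for.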
30. The V-model is:
a. A software development model that illustrates how testing activities integrate with software development phases
b. A software life-cycle model that is not relevant for testing
c. The official software development and testing life-cycle model of ISTQB
d. A testing life-cycle model including unit, integration, system and acceptance phases

31. Why is incremental integration preferred over "big bang" integration?
a. Because incremental integration has better early defect screening and isolation ability
b. Because "big bang" integration is suitable only for real-time applications
c. Incremental integration is preferred over "big bang" integration only for the "bottom-up" development model
d. Because incremental integration can compensate for weak and inadequate component testing

32. Maintenance testing is:
a. Testing management
b. A synonym for testing the quality of service
c. Triggered by modifications, migration or retirement of existing software
d. Testing the level of maintenance by the vendor

33. Why is it necessary to define a test strategy?
a. As there are many different ways to test software, thought must be given to deciding what will be the most effective way to test the project at hand.
b. Starting testing without prior planning leads to a chaotic and inefficient test project.
c. A strategy is needed to inform the project management how the test team will schedule the test cycles.
d. Software failure may cause loss of money, time, business reputation, and in extreme cases injury and death. It is therefore critical to have a proper test strategy in place.
Ch-3

1. An input field takes the year of birth between 1900 and 2004. The boundary values for testing this field are:
a. 0, 1900, 2004, 2005
b. 1900, 2004
c. 1899, 1900, 2004, 2005
d. 1899, 1900, 1901, 2003, 2004, 2005

2. Which of the following are non-functional testing methods?
a. System testing
b. Usability testing
c. Performance testing
d. Both b & c

3. Which of the following tools would be involved in the automation of regression tests?
a. Data tester
b. Boundary tester
c. Capture/Playback
d. Output comparator

4. The incorrect form of logic coverage is:
a. Statement coverage
b. Pole coverage
c. Condition coverage
d. Path coverage

5. Which of the following is not a quality characteristic listed in the ISO 9126 standard?
a. Functionality
b. Usability
c. Supportability
d. Maintainability

6. To test a function, the programmer has to write a _________, which calls the function to be tested and passes it test data.
a. Stub
b. Driver
c. Proxy
d. None of the above

7. Boundary value testing:
a. Is the same as equivalence partitioning tests
b. Tests boundary conditions on, below and above the edges of input and output equivalence classes
c. Tests combinations of input circumstances
d. Is used in a white-box testing strategy

8. Pick the best definition of quality:
a. Quality is job one
b. Zero defects
c. Conformance to requirements
d. Works as designed

9. Fault masking is:
a. An error condition hiding another error condition
b. Creating a test case which does not reveal a fault
c. Masking of a fault by a developer
d. Masking of a fault by a tester

10. One key reason why developers have difficulty testing their own work is:
a. Lack of technical documentation
b. Lack of test tools on the market for developers
c. Lack of training
d. Lack of objectivity

11. During the software development process, at what point can the test process start?
a. When the code is complete
b. When the design is complete
c. When the software requirements have been approved
d. When the first code module is ready for unit testing

12. In a review meeting, a moderator is a person who:
a. Takes minutes of the meeting
b. Mediates between people
c. Takes telephone calls
d. Writes the documents to be reviewed

13. Acceptance test cases are based on what?
a. Requirements
b. Design
c. Code
d. Decision table

14. "How much testing is enough?"
a. This question is impossible to answer
b. This question is easy to answer
c. The answer depends on the risk for your industry, contract and special requirements
d. The answer depends on the maturity of your developers

15. A common test technique during component test is:
a. Statement and branch testing
b. Usability testing
c. Security testing
d. Performance testing

16. Independent verification & validation is:
a. Done by the developer
b. Done by the test engineers
c. Done by management
d. Done by an entity outside the project's sphere of influence

17. Code coverage is used as a measure of what?
a. Defects
b. Trend analysis
c. Test effectiveness
d. Time spent testing
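Question 1 of this chapter (year of birth between 1900 and 2004) can be replayed mechanically. A minimal sketch, assuming a simple accept/reject validator: boundary value analysis picks each edge of the valid range plus its nearest neighbour just outside it.

```python
def valid_birth_year(year):
    """Hypothetical validator for Ch-3 Q1: accept 1900..2004 inclusive."""
    return 1900 <= year <= 2004

# Boundary values: both edges and their immediate outside neighbours.
for year in (1899, 1900, 2004, 2005):
    print(year, valid_birth_year(year))
```

Running this shows 1899 and 2005 rejected, 1900 and 2004 accepted, which is exactly the four-value boundary set in option c.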
Ch-4

1. We split testing into distinct stages primarily because:
a) Each test stage has a different purpose.
b) It is easier to manage testing in stages.
c) We can run different tests in different environments.
d) The more stages we have, the better the testing.

2. Which of the following is likely to benefit most from the use of test tools providing test capture and replay facilities?
a) Regression testing
b) Integration testing
c) System testing
d) User acceptance testing

3. Which of the following statements is NOT correct?
a) A minimal test set that achieves 100% LCSAJ coverage will also achieve 100% branch coverage.
b) A minimal test set that achieves 100% path coverage will also achieve 100% statement coverage.
c) A minimal test set that achieves 100% path coverage will generally detect more faults than one that achieves 100% statement coverage.
d) A minimal test set that achieves 100% statement coverage will generally detect more faults than one that achieves 100% branch coverage.

4. Which of the following requirements is testable?
a) The system shall be user friendly.
b) The safety-critical parts of the system shall contain 0 faults.
c) The response time shall be less than one second for the specified design load.
d) The system shall be built to be portable.

5. Analyze the following highly simplified procedure:

Ask: "What type of ticket do you require, single or return?"
IF the customer wants 'return'
  Ask: "What rate, Standard or Cheap-day?"
  IF the customer replies 'Cheap-day'
    Say: "That will be £11:20"
  ELSE
    Say: "That will be £19:50"
  ENDIF
ELSE
  Say: "That will be £9:75"
ENDIF

Now decide the minimum number of tests that are needed to ensure that all the questions have been asked, all combinations have occurred and all
replies given.
a) 3
b) 4
c) 5
d) 6

6. Error guessing:
a) Supplements formal test design techniques.
b) Can only be used in component, integration and system testing.
c) Is only performed in user acceptance testing.
d) Is not repeatable and should not be used.

7. Which of the following is NOT true of test coverage criteria?
a) Test coverage criteria can be measured in terms of items exercised by a test suite.
b) A measure of test coverage criteria is the percentage of user requirements covered.
c) A measure of test coverage criteria is the percentage of faults found.
d) Test coverage criteria are often used when specifying test completion criteria.

8. In prioritizing what to test, the most important objective is to:
a) Find as many faults as possible.
b) Test high-risk areas.
c) Obtain good test coverage.
d) Test whatever is easiest to test.

9. Given the following sets of test management terms (v-z) and activity descriptions (1-5), which one of the following best pairs the two sets?

v - test control
w - test monitoring
x - test estimation
y - incident management
z - configuration control

1 - calculation of required test resources
2 - maintenance of record of test results
3 - re-allocation of resources when tests overrun
4 - report on deviation from test plan
5 - tracking of anomalous test results

a) v-3, w-2, x-1, y-5, z-4
b) v-2, w-5, x-1, y-4, z-3
c) v-3, w-4, x-1, y-5, z-2
d) v-2, w-1, x-4, y-3, z-5
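Question 5's ticket procedure above translates directly into nested IFs. A minimal Python transcription (prices kept as strings exactly as the question states them) makes it easy to see that three tests - single, return/Standard and return/Cheap-day - ask every question and cover every combination of replies.

```python
def ticket_price(ticket_type, rate=None):
    """Transcription of question 5's procedure; rate is only asked for returns."""
    if ticket_type == "return":
        if rate == "Cheap-day":
            return "£11:20"
        else:
            return "£19:50"
    else:
        return "£9:75"

# The three paths through the nested IFs -- one test each:
print(ticket_price("single"))               # £9:75
print(ticket_price("return", "Standard"))   # £19:50
print(ticket_price("return", "Cheap-day"))  # £11:20
```

The "single" path never reaches the rate question, so no further combinations exist beyond these three.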
10. Which one of the following statements about system testing is NOT true?
a) System tests are often performed by independent teams.
b) Functional testing is used more than structural testing.
c) Faults found during system tests can be very expensive to fix.
d) End-users should be involved in system tests.

11. Which of the following is false?
a) Incidents should always be fixed.
b) An incident occurs when expected and actual results differ.
c) Incidents can be analyzed to assist in test process improvement.
d) An incident can be raised against documentation.

12. Enough testing has been performed when:
a) Time runs out.
b) The required level of confidence has been achieved.
c) No more faults are found.
d) The users won't find any serious faults.

13. Which of the following is NOT true of incidents?
a) Incident resolution is the responsibility of the author of the software under test.
b) Incidents may be raised against user requirements.
c) Incidents require investigation and/or correction.
d) Incidents are raised when expected and actual results differ.

14. Which of the following is not described in a unit test standard?
a) Syntax testing
b) Equivalence partitioning
c) Stress testing
d) Modified condition/decision coverage

15. Which of the following is false?
a) In a system, two different failures may have different severities.
b) A system is necessarily more reliable after debugging for the removal of a fault.
c) A fault need not affect the reliability of a system.
d) Undetected errors may lead to faults and eventually to incorrect behavior.

16. Which one of the following statements about capture-replay tools is NOT correct?
a) They are used to support multi-user testing.
b) They are used to capture and animate user requirements.
c) They are the most frequently purchased types of CAST tool.
d) They capture aspects of user behavior.

17. How would you estimate the amount of re-testing likely to be required?
a) Metrics from previous similar projects
b) Discussions with the development team
c) Time allocated for regression testing
d) a & b

18. Which of the following is true of the V-model?
a) It states that modules are tested against user requirements.
b) It only models the testing phase.
c) It specifies the test techniques to be used.
d) It includes the verification of designs.

19. The oracle assumption:
a) Is that there is some existing system against which test output may be checked.
b) Is that the tester can routinely identify the correct outcome of a test.
c) Is that the tester knows everything about the software under test.
d) Is that the tests are reviewed by experienced testers.

20. Which of the following characterizes the cost of faults?
a) They are cheapest to find in the early development phases and the most expensive to fix in the latest test phases.
b) They are easiest to find during system testing but the most expensive to fix then.
c) Faults are cheapest to find in the early development phases but the most expensive to fix then.
d) Although faults are most expensive to find during early development phases, they are cheapest to fix then.

21. Which of the following should NOT normally be an objective for a test?
a) To find faults in the software.
b) To assess whether the software is ready for release.
c) To demonstrate that the software doesn't work.
d) To prove that the software is correct.

22. Which of the following is a form of functional testing?
a) Boundary value analysis
b) Usability testing
c) Performance testing
d) Security testing

23. Which of the following would NOT normally form part of a test plan?
a) Features to be tested
b) Incident reports
c) Risks
d) Schedule

24. Which of these activities provides the biggest potential cost saving from the use of CAST?
a) Test management
b) Test design
c) Test execution
d) Test planning

25. Which of the following is NOT a white-box technique?
a) Statement testing
b) Path testing
c) Data flow testing
d) State transition testing

26. Data flow analysis studies:
a) Possible communications bottlenecks in a program.
b) The rate of change of data values as a program executes.
c) The use of data on paths through the code.
d) The intrinsic complexity of the code.

27. In a system designed to work out the tax to be paid: an employee has £4000 of salary tax free, the next £1500 is taxed at 10%, the next £28000 is taxed at 22%, and any further amount is taxed at 40%. To the nearest whole pound, which of these is a valid boundary value analysis test case?
a) £1500
b) £32001
c) £33501
d) £28000

28. An important benefit of code inspections is that they:
a) Enable the code to be tested before the execution environment is ready.
b) Can be performed by the person who wrote the code.
c) Can be performed by inexperienced staff.
d) Are cheap to perform.

29. Which of the following is the best source of expected outcomes for user acceptance test scripts?
a) Actual results
b) Program specification
c) User requirements
d) System specification

30. What is the main difference between a walkthrough and an inspection?
a) An inspection is led by the author, whilst a walkthrough is led by a trained moderator.
b) An inspection has a trained leader, whilst a walkthrough has no leader.
c) Authors are not present during inspections, whilst they are during walkthroughs.
d) A walkthrough is led by the author, whilst an inspection is led by a trained moderator.

31. Which one of the following describes the major benefit of verification early in the life cycle?
a) It allows the identification of changes in user requirements.
b) It facilitates timely set-up of the test environment.
c) It reduces defect multiplication.
d) It allows testers to become involved early in the project.

32. Integration testing in the small:
a) Tests the individual components that have been developed.
b) Tests interactions between modules or subsystems.
c) Only uses components that form part of the live system.
d) Tests interfaces to other systems.

33. Static analysis is best described as:
a) The analysis of batch programs.
b) The reviewing of test plans.
c) The analysis of program code.
d) The use of black-box testing.

34. Alpha testing is:
a) Post-release testing by end-user representatives at the developer's site.
b) The first testing that is performed.
c) Pre-release testing by end-user representatives at the developer's site.
d) Pre-release testing by end-user representatives at their sites.

35. A failure is:
a) Found in the software; the result of an error.
b) A departure from specified behavior.
c) An incorrect step, process or data definition in a computer program.
d) A human action that produces an incorrect result.

36. In a system designed to work out the tax to be paid: an employee has £4000 of salary tax free, the next £1500 is taxed at 10%, the next £28000 is taxed at 22%, and any further amount is taxed at 40%. Which of these groups of numbers would fall into the same equivalence class?
a) £4800; £14000; £28000
b) £5200; £5500; £28000
c) £28001; £32000; £35000
d) £5800; £28000; £32000

37. The most important thing about early test design is that it:
a) Makes test preparation easier.
b) Means inspections are not required.
c) Can prevent fault multiplication.
d) Will find all faults.

38. Which of the following statements about reviews is true?
a) Reviews cannot be performed on user requirements specifications.
b) Reviews are the least effective way of testing code.
c) Reviews are unlikely to find faults in test plans.
d) Reviews should be performed on specifications, code, and test plans.

39. Test cases are designed during:
a) Test recording.
b) Test planning.
c) Test configuration.
d) Test specification.

40. A configuration management system would NOT normally provide:
a) Linkage of customer requirements to version numbers.
b) Facilities to compare test results with expected results.
c) The precise differences in versions of software component source code.
d) Restricted access to the source code library.
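The tax scheme used in questions 27 and 36 splits salary into four bands: up to £4000 tax free, £4001-£5500 at 10%, £5501-£33500 at 22%, and £33501 upward at 40%. A sketch of such a calculator follows; the band arithmetic is my own working, not given in the source. It makes it easy to verify that £33501 is a band boundary, and that £5800, £28000 and £32000 all sit in the same 22% equivalence class.

```python
def tax(salary):
    """Sketch of the Q27/Q36 scheme: first £4000 free, next £1500 at 10%,
    next £28000 at 22%, anything above at 40%."""
    due = 0.0
    due += max(0, min(salary, 5500) - 4000) * 0.10    # 10% band
    due += max(0, min(salary, 33500) - 5500) * 0.22   # 22% band
    due += max(0, salary - 33500) * 0.40              # 40% band
    return round(due, 2)

# £33501 is the first pound taxed at 40% -- a boundary value:
print(tax(33500), tax(33501))   # 6310.0 6310.4
# £5800, £28000 and £32000 all fall inside the 22% band (same class):
print(all(5501 <= s <= 33500 for s in (5800, 28000, 32000)))  # True
```

Picking one representative from a band tests the whole equivalence class; picking values either side of £33500/£33501 tests the boundary.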
Ch-5

1. Software testing activities should start:
a. As soon as the code is written
b. During the design stage
c. When the requirements have been formally documented
d. As soon as possible in the development life cycle

2. Faults found by users are due to:
a. Poor quality software
b. Poor software and poor testing
c. Bad luck
d. Insufficient time for testing

3. What is the main reason for testing software before releasing it?
a. To show that the system will work after release
b. To decide when the software is of sufficient quality to release
c. To find as many bugs as possible before release
d. To give information for a risk-based decision about release

4. Which of the following statements is not true?
a. Performance testing can be done during unit testing as well as during the testing of the whole system
b. The acceptance test does not necessarily include a regression test
c. Verification activities should not involve testers (reviews, inspections etc.)
d. Test environments should be as similar to production environments as possible

5. When reporting faults found to developers, testers should be:
a. As polite, constructive and helpful as possible
b. Firm about insisting that a bug is not a "feature" if it should be fixed
c. Diplomatic, sensitive to the way they may react to criticism
d. All of the above

6. In which order should tests be run?
a. The most important tests first
b. The most difficult tests first (to allow maximum time for fixing)
c. The easiest tests first (to give initial confidence)
d. The order they are thought of

7. The later in the development life cycle a fault is discovered, the more expensive it is to fix. Why?
a. The documentation is poor, so it takes longer to find out what the software is doing
b. Wages are rising
c. The fault has been built into more documentation, code, tests, etc.
d. None of the above
8. Which is not true? The black-box tester:
a. Should be able to understand a functional specification or requirements document
b. Should be able to understand the source code
c. Is highly motivated to find faults
d. Is creative in finding the system's weaknesses

9. A test design technique is:
a. A process for selecting test cases
b. A process for determining expected outputs
c. A way to measure the quality of software
d. A way to measure in a test plan what has to be done

10. Test-ware (test cases, test datasets):
a. Needs configuration management just like requirements, design and code
b. Should be newly constructed for each new version of the software
c. Is needed only until the software is released into production or use
d. Does not need to be documented and commented, as it does not form part of the released software system

11. An incident logging system:
a. Only records defects
b. Is of limited value
c. Is a valuable source of project information during testing if it contains all incidents
d. Should be used only by the test team

12. Increasing the quality of the software, by better development methods, will affect the time needed for testing (the test phases) by:
a. Reducing test time
b. No change
c. Increasing test time
d. Can't say

13. Coverage measurement:
a. Has nothing to do with testing
b. Is a partial measure of test thoroughness
c. Branch coverage should be mandatory for all software
d. Can only be applied at unit or module testing, not at system testing

14. When should you stop testing?
a. When time for testing has run out
b. When all planned tests have been run
c. When the test completion criteria have been met
d. When no faults have been found by the tests run

15. Which of the following is true?
a. Component testing should be black box, system testing should be white box
b. If you find a lot of bugs in testing, you should not be very confident about the quality of the software
c. The fewer bugs you find, the better your testing was
d. The more tests you run, the more bugs you will find

16. What is the important criterion in deciding what testing technique to use?
a. How well you know a particular technique
b. The objective of the test
c. How appropriate the technique is for testing the application
d. Whether there is a tool to support the technique

17. If the pseudocode below were a programming language, how many tests are required to achieve 100% statement coverage?

1. If x=3 then
2.   Display_messageX;
3.   If y=2 then
4.     Display_messageY;
5.   Else
6.     Display_messageZ;
7. Else
8.   Display_messageZ;

a. 1
b. 2
c. 3
d. 4

18. Using the same code example as question 17, how many tests are required to achieve 100% branch/decision coverage?
a. 1
b. 2
c. 3
d. 4

19. Which of the following is NOT a type of non-functional test?
a. State transition
b. Usability
c. Performance
d. Security

20. Which of the following tools would you use to detect a memory leak?
a. State analysis
b. Coverage analysis
c. Dynamic analysis
d. Memory analysis

21. Which of the following is NOT a standard related to testing?
a. IEEE 829
b. IEEE 610
c. BS 7925-1
d. BS 7925-2

22. Which of the following is the component test standard?
a. IEEE 829
b. IEEE 610
c. BS 7925-1
d. BS 7925-2

23. Which of the following statements is true?
a. Faults in program specifications are the most expensive to fix.
b. Faults in code are the most expensive to fix.
c. Faults in requirements are the most expensive to fix.
d. Faults in designs are the most expensive to fix.

24. Which of the following is not an integration strategy?
a. Design-based
b. Big-bang
c. Bottom-up
d. Top-down

25. Which of the following is a black-box design technique?
a. Statement testing
b. Equivalence partitioning
c. Error guessing
d. Usability testing

26. A program with high cyclomatic complexity is most likely to be:
a. Large
b. Small
c. Difficult to write
d. Difficult to test

27. Which of the following is a static test?
a. Code inspection
b. Coverage analysis
c. Usability assessment
d. Installation test

28. Which of the following is the odd one out?
a. White box
b. Glass box
c. Structural
d. Functional

29. A program validates a numeric field as follows: values less than 10 are rejected, values between 10 and 21 are accepted, values greater than or equal to 22 are rejected. Which of the following input values cover all of the equivalence partitions?
a. 10, 11, 21
b. 3, 20, 21
c. 3, 10, 22
d. 10, 21, 22

30. Using the same specification as question 29, which of the following covers the MOST boundary values?
a. 9, 10, 11, 22
b. 9, 10, 21, 22
c. 10, 11, 21, 22
d. 10, 11, 20, 21
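Questions 29 and 30 above can be replayed in code. A minimal sketch, assuming a simple accept/reject validator for the field (reject below 10, accept 10 through 21, reject 22 and above):

```python
def accept(value):
    """Hypothetical validator for Q29/Q30: accept 10..21 inclusive."""
    return 10 <= value <= 21

# One representative per equivalence partition: below, inside, above.
print([accept(v) for v in (3, 10, 22)])      # [False, True, False]
# The candidate set holding the most boundary values: both edges
# of the accepted range plus their outside neighbours.
print([accept(v) for v in (9, 10, 21, 22)])  # [False, True, True, False]
```

The first set touches all three partitions with only three values; the second concentrates on the 9/10 and 21/22 edges where off-by-one faults hide.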
Ch-6

1. COTS is known as:
A. Commercial off-the-shelf software
B. Compliance of the software
C. Change control of the software
D. Capable off-the-shelf software

2. From the choices below, which one is "confidence testing"?
A. Performance testing
B. System testing
C. Smoke testing
D. Regression testing

3. "Defect density" is calculated in terms of:
A. The number of defects identified in a component or system divided by the size of the component or system
B. The number of defects found by a test phase divided by the number found by that test phase and any other means afterwards
C. The number of defects identified in the component or system divided by the number of defects found in a test phase
D. The number of defects found by a test phase divided by the size of the system

4. "Bebugging" is known as:
A. Preventing the defects by inspection
B. Fixing the defects by debugging
C. Adding known defects by seeding
D. The process of fixing the defects by the tester

5. Expert-based test estimation is also known as:
A. Narrow-band Delphi
B. Wide-band Delphi
C. Bespoke Delphi
D. Robust Delphi

6. When testing a grade calculation system, a tester determines that all scores from 90 to 100 will yield a grade of A, but scores below 90 will not. This analysis is known as:
A. Equivalence partitioning
B. Boundary value analysis
C. Decision table
D. Hybrid analysis
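Question 6's grade calculation is a textbook equivalence-partitioning case: every score in 90-100 should behave identically, as should every score below 90. A sketch with a hypothetical grader (the function name and return values are my own, not from the source):

```python
def grade(score):
    """Hypothetical grader for Ch-6 Q6: scores 90..100 yield an 'A'."""
    return "A" if 90 <= score <= 100 else "not A"

# Equivalence partitioning: one representative per class suffices.
print(grade(95), grade(50))   # A not A
```

Note the tester's analysis defines the classes (90-100 vs. below 90) without probing the 89/90 edge; probing that edge would be boundary value analysis, option B.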
7. All of the following might be done during unit testing except:
A. Desk check
B. Manual support testing
C. Walkthrough
D. Compiler-based testing

8. Which of the following characteristics is primarily associated with software reusability?
A. The extent to which the software can be used in other applications
B. The extent to which the software can be used by many different users
C. The capability of the software to be moved to a different platform
D. The capability of the system to be coupled with another system

9. Which of the following software change management activities is most vital to assessing the impact of proposed software modifications?
A. Baseline identification
B. Configuration auditing
C. Change control
D. Version control

10. Which of the following statements is true about a software verification and validation program?
I. It strives to ensure that quality is built into software.
II. It provides management with insights into the state of a software project.
III. It ensures that alpha, beta, and system tests are performed.
IV. It is executed in parallel with software development activities.
A. I, II & III
B. II, III & IV
C. I, II & IV
D. I, III & IV
11. Which of the following is a requirement of an effective software environment?
I. Ease of use
II. Capacity for incremental implementation
III. Capability of evolving with the needs of a project
IV. Inclusion of advanced tools
A. I, II & III
B. I, II & IV
C. II, III & IV
D. I, III & IV

12. A project manager has been transferred to a major software development project that is in the implementation phase. The highest priority for this project manager should be to:
A. Establish a relationship with the customer
B. Learn the project objectives and the existing project plan
C. Modify the project's organizational structure to meet the manager's management style
D. Ensure that the project proceeds at its current pace

13. Which of the following functions is typically supported by a software quality information system?
I. Record keeping
II. System design
III. Evaluation scheduling
IV. Error reporting
A. I, II & III
B. II, III & IV
C. I, III & IV
D. I, II & IV

14. During the testing of a module, tester X finds a bug and assigns it to a developer. But the developer rejects it, saying that it's not a bug. What should X do?
A. Report the issue to the test manager and try to settle it with the developer
B. Retest the module and confirm the bug
C. Assign the same bug to another developer
D. Send the detailed information of the bug encountered and check the reproducibility

15. The primary goal of comparing a user manual with the actual behavior of the running program during system testing is to:
A. Find bugs in the program
B. Check the technical accuracy of the document
C. Ensure the ease of use of the document
D. Ensure that the program is the latest version

16. A type of integration testing in which software elements, hardware elements, or both are combined all at once into a component or an overall system, rather than in stages:
A. System testing
B. Big-bang testing
C. Integration testing
D. Unit testing

17. Which technique can be used to achieve input and output coverage? It can be applied to human input, input via interfaces to a system, or interface parameters in integration testing.
A. Error guessing
B. Boundary value analysis
C. Decision table testing
D. Equivalence partitioning

18. There is one application which runs on a single terminal. There is another application that works on multiple terminals. What are the test techniques you will use on the second application that you would not use on the first?
A. Integrity, response time
B. Concurrency test, scalability
C. Update & rollback, response time
D. Concurrency test, integrity

19. You are the test manager and you are about to start system testing. The developer team says that, due to a change in the requirements, they will be
able to deliver the system to you for testing 5 working days after the due date. You cannot change the resources (work hours, test tools, etc.). What steps will you take to be able to finish the testing in time?
A. Tell the development team to deliver the system in time so that the testing activity will finish in time
B. Extend the testing plan, so that you can accommodate the slip that is going to occur
C. Rank the functionality as per risk and concentrate more on critical functionality testing
D. Add more resources so that the slippage is avoided

20. An item transmittal report is also known as a(n):
A. Incident report
B. Release note
C. Review report
D. Audit report

21. Testing of software used to convert data from existing systems for use in replacement systems is:
A. Data-driven testing
B. Migration testing
C. Configuration testing
D. Back-to-back testing

22. The big-bang approach is related to:
A. Regression testing
B. Inter-system testing
C. Re-testing
D. Integration testing

23. "The tracing of requirements for a test level through the layers of test documentation" is done by:
A. Horizontal traceability
B. Depth traceability
C. Vertical traceability
D. Horizontal & vertical traceability

24. A test harness is:
A. A high-level document describing the principles, approach and major objectives of the organization regarding testing
B. A distinct set of test activities collected into a manageable phase of a project
C. A test environment comprised of stubs and drivers needed to conduct a test
D. A set of several test cases for a component or system under test

25. "Entry criteria" should address questions such as:
I. Are the necessary documentation, design and requirements information available that will allow testers to operate the system and judge correct behavior?
II. Is the test environment (lab, hardware, software and system administration support) ready?
III. Those conditions and situations that must prevail in the testing process to allow testing to continue effectively and efficiently.
IV. Are the supporting utilities, accessories and prerequisites available in forms that testers can use?
A. I, II and IV
B. I, II and III
C. I, II, III and IV
D. II, III and IV

26. "This life cycle model is basically driven by schedule and budget risks." This statement best suits:
A. Waterfall model
B. Spiral model
C. Incremental model
D. V-model