
Question ISTQB foundation 3





Chapter 1: Fundamentals of Testing

1.1 Why is testing necessary?
Terms: bug, defect, error, failure, mistake, quality, risk, software, testing and exhaustive testing.

1.2 What is testing?
Terms: code, debugging, requirement, test basis, test case, test objective.

1.3 Testing principles

1.4 Fundamental test process
Terms: confirmation testing, exit criteria, incident, regression testing, test condition, test coverage, test data, test execution, test log, test plan, test strategy, test summary report and testware.

1.5 The psychology of testing
Terms: independence.

I) General testing principles
A number of testing principles have been suggested over the past 40 years and offer general guidelines common to all testing.

Principle 1 – Testing shows presence of defects
Testing can show that defects are present, but cannot prove that there are no defects. Testing reduces the probability of undiscovered defects remaining in the software but, even if no defects are found, it is not a proof of correctness.

Principle 2 – Exhaustive testing is impossible
Testing everything (all combinations of inputs and preconditions) is not feasible except for trivial cases. Instead of exhaustive testing, risk analysis and priorities should be used to focus testing efforts.

Principle 3 – Early testing
Testing activities should start as early as possible in the software or system development life cycle, and should be focused on defined objectives.

Principle 4 – Defect clustering
A small number of modules usually contains most of the defects discovered during pre-release testing, or is responsible for most of the operational failures.

Principle 5 – Pesticide paradox
If the same tests are repeated over and over again, eventually the same set of test cases will no longer find any new defects.
To overcome this "pesticide paradox", the test cases need to be regularly reviewed and revised, and new and different tests need to be written to exercise different parts of the software or system to potentially find more defects.

Principle 6 – Testing is context dependent
Testing is done differently in different contexts. For example, safety-critical software is tested differently from an e-commerce site.

Principle 7 – Absence-of-errors fallacy
Finding and fixing defects does not help if the system built is unusable and does not fulfill the
users' needs and expectations.

II) Fundamental test process

1) Test planning and control
Test planning is the activity of verifying the mission of testing, defining the objectives of testing and specifying the test activities in order to meet the objectives and mission. Test control involves taking actions necessary to meet the mission and objectives of the project. In order to control testing, it should be monitored throughout the project. Test planning takes into account the feedback from monitoring and control activities.

2) Test analysis and design
Test analysis and design is the activity where general testing objectives are transformed into tangible test conditions and test cases. It has the following major tasks:
- Reviewing the test basis (such as requirements, architecture, design, interfaces).
- Evaluating testability of the test basis and test objects.
- Identifying and prioritizing test conditions based on analysis of test items, the specification, behaviour and structure.
- Designing and prioritizing test cases.
- Identifying necessary test data to support the test conditions and test cases.
- Designing the test environment set-up and identifying any required infrastructure and tools.

3) Test implementation and execution
- Developing, implementing and prioritizing test cases.
- Developing and prioritizing test procedures, creating test data and, optionally, preparing test harnesses and writing automated test scripts.
- Creating test suites from the test procedures for efficient test execution.
- Verifying that the test environment has been set up correctly.
- Executing test procedures either manually or by using test execution tools, according to the planned sequence.
- Logging the outcome of test execution and recording the identities and versions of the software under test, test tools and testware.
- Comparing actual results with expected results.
- Reporting discrepancies as incidents and analyzing them in order to establish their cause (e.g. a defect in the code, in the specified test data, in the test document, or a mistake in the way the test was executed).
- Repeating test activities as a result of action taken for each discrepancy. For example, re-execution of a test that previously failed in order to confirm a fix (confirmation testing), execution of a corrected test, and/or execution of tests in order to ensure that defects have not been introduced in unchanged areas of the software or that defect fixing did not uncover other defects (regression testing).
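The execution, comparison, logging and incident-reporting steps above can be sketched in a few lines of Python. This is a minimal illustration only; the `execute_suite` function and the test-case dictionary layout are made-up names, not part of any standard tool.

```python
# Minimal sketch of test execution: run each case, compare actual with
# expected results, log the outcome, and turn discrepancies into incidents.
def execute_suite(cases):
    log, incidents = [], []
    for case in cases:
        actual = case["run"]()                 # execute the test procedure
        passed = (actual == case["expected"])  # compare actual vs expected
        log.append((case["id"], "PASS" if passed else "FAIL"))
        if not passed:                         # report discrepancy as incident
            incidents.append({"id": case["id"],
                              "expected": case["expected"],
                              "actual": actual})
    return log, incidents

cases = [
    {"id": "TC-1", "run": lambda: 2 + 2, "expected": 4},
    {"id": "TC-2", "run": lambda: "ab".upper(), "expected": "AB!"},  # deliberate failure
]
log, incidents = execute_suite(cases)
print(log)        # [('TC-1', 'PASS'), ('TC-2', 'FAIL')]
print(incidents)  # one incident record for TC-2
```

Real test execution tools add the version/identity recording mentioned above, but the pass/fail comparison and incident record are the core of the activity.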
4) Evaluating exit criteria and reporting
- Checking test logs against the exit criteria specified in test planning.
- Assessing if more tests are needed or if the exit criteria specified should be changed.
- Writing a test summary report for stakeholders.

5) Test closure activities
- Checking which planned deliverables have been delivered, closing incident reports or raising change records for any that remain open, and documenting the acceptance of the system.
- Finalizing and archiving testware, the test environment and the test infrastructure for later reuse.
- Handing over testware to the maintenance organization.
- Analyzing lessons learned for future releases and projects, and improving test maturity.

III) The psychology of testing
Levels of independence, from low to high:
- Tests designed by the person(s) who wrote the software under test (low level of independence).
- Tests designed by another person(s) (e.g. from the development team).
- Tests designed by a person(s) from a different organizational group (e.g. an independent test team) or test specialists (e.g. usability or performance test specialists).
- Tests designed by a person(s) from a different organization or company (i.e. outsourcing or certification by an external body).

Questions:

Example 1: 50 questions (these are sample questions related to Chapter 1; for answers see other sections of ISTQB below)

1) When what is visible to end-users is a deviation from the specified or expected behaviour, this is called:
a) an error
b) a fault
c) a failure
d) a defect

2) Regression testing should be performed:
v) every week
w) after the software has changed
x) as often as possible
y) when the environment has changed
z) when the project manager says
a) v & w are true, x – z are false
b) w, x & y are true, v & z are false
c) w & y are true, v, x & z are false
d) w is true, v, x, y and z are false

3) Testing should be stopped when:
a) all the planned tests have been run
b) time has run out
c) all faults have been fixed correctly
d) it depends on the risks for the system being tested

4) Consider the following statements about early test design:
i. early test design can prevent fault multiplication
ii. faults found during early test design are more expensive to fix
iii. early test design can find faults
iv. early test design can cause changes to the requirements
v. early test design takes more effort
a) i, iii & iv are true; ii & v are false
b) iii is true; i, ii, iv & v are false
c) iii & iv are true; i, ii & v are false
d) i, iii, iv & v are true; ii is false

5) The main focus of acceptance testing is:
a) finding faults in the system
b) ensuring that the system is acceptable to all users
c) testing the system with other systems
d) testing from a business perspective

6) The difference between re-testing and regression testing is:
a) re-testing is running a test again; regression testing looks for unexpected side effects
b) re-testing looks for unexpected side effects; regression testing is repeating those tests
c) re-testing is done after faults are fixed; regression testing is done earlier
d) re-testing is done by developers; regression testing is done by independent testers

7) Expected results are:
a) only important in system testing
b) only used in component testing
c) never specified in advance
d) most useful when specified in advance

8) The cost of fixing a fault:
a) is not important
b) increases as we move the product towards live use
c) decreases as we move the product towards live use
d) is more expensive if found in requirements than in functional design

9) Fault masking is:
a. an error condition hiding another error condition
b. creating a test case which does not reveal a fault
c. masking a fault by a developer
d. masking a fault by a tester

10) One key reason why developers have difficulty testing their own work is:
a. lack of technical documentation
b. lack of test tools on the market for developers
c. lack of training
d. lack of objectivity

11) Enough testing has been performed when:
a) time runs out.
b) the required level of confidence has been achieved.
c) no more faults are found.
d) the users won't find any serious faults.

12) Which of the following is false?
a) In a system two different failures may have different severities.
b) A system is necessarily more reliable after debugging for the removal of a fault.
c) A fault need not affect the reliability of a system.
d) Undetected errors may lead to faults and eventually to incorrect behaviour.

13) Which of the following characterises the cost of faults?
a) They are cheapest to find in the early development phases and the least expensive to fix.
b) They are easiest to find during system testing but the most expensive to fix then.
c) Faults are cheapest to find in the early development phases but the most expensive to fix then.
d) Although faults are most expensive to find during early development phases, they are cheapest to fix then.

14) According to the ISTQB Glossary, a risk relates to which of the following?
a) Negative feedback to the tester.
b) Negative consequences that will occur.
c) Negative consequences that could occur.
d) Negative consequences for the test object.

15) Ensuring that test design starts during the requirements definition phase is important to enable which of the following test objectives?
a) Preventing defects in the system.
b) Finding defects through dynamic testing.
c) Gaining confidence in the system.
d) Finishing the project on time.

16) A failure is:
a) found in the software; the result of an error.
b) a departure from specified behaviour.
c) an incorrect step, process or data definition in a computer program.
d) a human action that produces an incorrect result.

17) Faults found by users are due to:
a. poor quality software
b. poor software and poor testing
c. bad luck
d. insufficient time for testing

18) Which of the following statements is true?
a. Faults in program specifications are the most expensive to fix.
b. Faults in code are the most expensive to fix.
c. Faults in requirements are the most expensive to fix.
d. Faults in designs are the most expensive to fix.

19) COTS is known as:
a. commercial off-the-shelf software
b. compliance of the software
c. change control of the software
d. capable off-the-shelf software

20) Which is not a testing objective?
a. Finding defects
b. Gaining confidence about the level of quality and providing information
c. Preventing defects
d. Debugging defects

21) Exhaustive testing is:
a) impractical but possible
b) practically possible
c) impractical and impossible
d) always possible

22) Which of the following is most important to promote and maintain good relationships between developers and testers?
a) Understanding what managers value about testing.
b) Explaining test results in a neutral fashion.
c) Identifying potential customer work-arounds for bugs.
d) Promoting better quality software whenever possible.

23) According to the ISTQB Glossary, the word 'error' is synonymous with which of the following?
a) Failure
b) Defect
c) Mistake
d) Bug

24) In prioritising what to test, the most important objective is to:
a) find as many faults as possible.
b) test high risk areas.
c) obtain good test coverage.
d) test whatever is easiest to test.

25) Incidents would not be raised against:
a) requirements
b) documentation
c) test cases
d) improvements suggested by users

26) Designing the test environment set-up and identifying any required infrastructure and tools are part of which phase?
a) Test implementation and execution
b) Test analysis and design
c) Evaluating the exit criteria and reporting
d) Test closure activities

27) Which of the following is not a part of the test implementation and execution phase?
a) Creating test suites from the test cases
b) Executing test cases either manually or by using test execution tools
c) Comparing actual results
d) Designing the tests

28) Test cases grouped into manageable (and scheduled) units are called a:
a. test harness
b. test suite
c. test cycle
d. test driver

29) Which of the following could be a reason for a failure?
1) Testing fault
2) Software fault
3) Design fault
4) Environment fault
5) Documentation fault
a. 2 is a valid reason; 1, 3, 4 & 5 are not
b. 1, 2, 3, 4 are valid reasons; 5 is not
c. 1, 2, 3 are valid reasons; 4 & 5 are not
d. All of them are valid reasons for failure

30) Handover of testware is part of which phase?
a) Test analysis and design
b) Test planning and control
c) Test closure activities
d) Evaluating exit criteria and reporting

31) An exhaustive test suite would include:
a) all combinations of input values and preconditions.
b) all combinations of input values and output values.
c) all pairs of input values and preconditions.
d) all states and state transitions.

32) Which of the following encourages objective testing?
a) Unit testing.
b) System testing.
c) Independent testing.
d) Destructive testing.

33) Consider the following list of test process activities:
I. Analysis and design
II. Test closure activities
III. Evaluating exit criteria and reporting
IV. Planning and control
V. Implementation and execution
Which of the following places these in their logical sequence?
a) I, II, III, IV and V
b) IV, I, V, III and II
c) IV, I, V, II and III
d) I, IV, V, III and II

34) According to the ISTQB Glossary, debugging:
a) is part of the fundamental test process.
b) includes the repair of the cause of a failure.
c) involves intentionally adding known defects.
d) follows the steps of a test procedure.

35) Which of the following could be a root cause of a defect in financial software in which an incorrect interest rate is calculated?
a) Insufficient funds were available to pay the interest rate calculated.
b) Insufficient calculations of compound interest were included.
c) Insufficient training was given to the developers concerning compound interest calculation rules.
d) Incorrect calculators were used to calculate the expected results.

36) When should you stop testing?
a) When the time for testing has run out
b) When all planned tests have been run
c) When the test completion criteria have been met
d) When no faults have been found by the tests run

37) An incident logging system:
a) only records defects
b) is of limited value
c) is a valuable source of project information during testing if it contains all incidents
d) should be used only by the test team

38) The term confirmation testing is synonymous with:
a) exploratory testing
b) regression testing
c) exhaustive testing
d) re-testing

39) Consider the following statements:
i. An incident may be closed without being fixed.
ii. Incidents may not be raised against documentation.
iii. The final stage of incident tracking is fixing.
iv. The incident record does not include information on test environments.
a) ii is true; i, iii and iv are false
b) i is true; ii, iii and iv are false
c) i and iv are true; ii and iii are false
d) i and ii are true; iii and iv are false

40) Which of the following is not a characteristic of software?
a) Software is developed or engineered; it is not manufactured in the classic sense
b) Software doesn't "wear out" with time
c) Traditional industry is moving toward component-based assembly, whereas most software continues to be custom built and the concept of component-based assembly is still taking shape
d) Software does not require maintenance

41) If the expected result is not specified then:
a) we cannot run the test
b) it may be difficult to repeat the test
c) it may be difficult to determine if the test has passed or failed
d) we cannot automate the user inputs

42) A reliable system will be one that:
a) is unlikely to be completed on schedule
b) is unlikely to cause a failure
c) is likely to be fault free
d) is likely to be liked by the users

43) What is the purpose of exit criteria?
a) To determine when writing a test case is complete
b) To determine when to stop the testing
c) To ensure the test specification is complete
d) To determine when to stop writing the test plan

44) What is the focus of re-testing?
a) Re-testing ensures the original fault has been removed
b) Re-testing prevents future faults
c) Re-testing looks for unexpected side effects
d) Re-testing ensures the original fault is still present

45) How is the amount of re-testing required normally defined?
a) Discussions with the end users
b) Discussions with the developers
c) Metrics from previous projects
d) None of the above

46) A manifestation of an 'error' in software is:
a) an error
b) a fault
c) a failure
d) an action

47) If testing time is limited, we should:
a) only test high risk areas
b) only test simple areas
c) only test low risk areas
d) only test complicated areas

48) When is the quality of the product said to increase?
a) When all faults have been reviewed
b) When all faults have been found
c) When all faults have been raised
d) When all faults have been rectified

49) Pick the best definition of quality:
a) Quality is job one
b) Zero defects
c) Conformance to requirements
d) Work as designed

50) What is the main reason for testing software before releasing it?
a) To show the system will work after release
b) To decide when the software is of sufficient quality to release
c) To find as many bugs as possible before release
d) To give information for a risk-based decision about release
51) Select a reason that does not agree with the fact that complete testing is impossible:
a) The domain of possible inputs is too large to test.
b) Limited financial resources.
c) There are too many possible paths through the program to test.
d) The user interface issues (and thus the design issues) are too complex to completely test.
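Questions 21, 31 and 51 all rest on Principle 2 (exhaustive testing is impossible). A quick back-of-the-envelope calculation shows why; the field names and sizes below are made-up illustration values, not from the syllabus.

```python
# Even a tiny input form is infeasible to test exhaustively: multiply the
# value counts of a few independent fields (illustrative sizes only).
field_sizes = {
    "age":     130,       # ages 0..129
    "name":    26 ** 10,  # just the 10-letter lowercase names
    "country": 200,
}

combinations = 1
for size in field_sizes.values():
    combinations *= size

print(f"{combinations:.3e} input combinations")  # on the order of 10**18

# At a million automated tests per second, the suite would still run for
# far longer than 100,000 years.
seconds = combinations / 1_000_000
years = seconds / (3600 * 24 * 365)
print(years > 100_000)  # True
```

This is exactly why the syllabus recommends risk analysis and prioritization instead of attempting all combinations of inputs and preconditions.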
Chapter 2: Testing throughout the software life cycle
International Software Testing Qualifications Board terms and development models:
software development models, test levels, test types and maintenance testing; tests and answers, sample questionnaire. A detailed explanation of the SDLC and V-model can be found in Software Testing Theory Part 1 (Index and Sub-Index).

2.1 Software development models
COTS, iterative-incremental development model, validation, verification, V-model.

2.2 Test levels
Alpha testing, beta testing, component testing (also known as unit/module/program testing), driver, stub, field testing, functional requirement, non-functional requirement, integration, integration testing, robustness testing, system testing, test level, test-driven development, test
environment, user acceptance testing.

2.3 Test types
Black box testing, code coverage, functional testing, interoperability testing, load testing, maintainability testing, performance testing, portability testing, reliability testing, security testing, specification based testing, stress testing, structural testing, usability testing, white box testing.

2.4 Maintenance testing
Impact analysis, maintenance testing.

i) Software development models

a) V-model (sequential development model)
Although variants of the V-model exist, a common type of V-model uses four test levels, corresponding to the four development levels. The four levels used in this syllabus are:
- component (unit) testing;
- integration testing;
- system testing;
- acceptance testing.

b) Iterative-incremental development models
Iterative-incremental development is the process of establishing requirements, designing, building and testing a system, done as a series of shorter development cycles. Examples are: prototyping, rapid application development (RAD), Rational Unified Process (RUP) and agile development models.

c) Testing within a life cycle model
In any life cycle model, there are several characteristics of good testing:
- For every development activity there is a corresponding testing activity.
- Each test level has test objectives specific to that level.
- The analysis and design of tests for a given test level should begin during the corresponding development activity.
- Testers should be involved in reviewing documents as soon as drafts are available in the development life cycle.

ii) Test levels

a) Component testing
Component testing searches for defects in, and verifies the functioning of, software (e.g. modules, programs, objects, classes, etc.) that is separately testable.
Component testing may include testing of functionality and specific non-functional
characteristics, such as resource-behaviour (e.g. memory leaks) or robustness testing, as well as structural testing (e.g. branch coverage).
One approach to component testing is to prepare and automate test cases before coding. This is called a test-first approach or test-driven development.

b) Integration testing
Integration testing tests interfaces between components, interactions with different parts of a system (such as the operating system, file system and hardware), and interfaces between systems.
Component integration testing tests the interactions between software components and is done after component testing.
System integration testing tests the interactions between different systems and may be done after system testing.
Testing of specific non-functional characteristics (e.g. performance) may be included in integration testing.

c) System testing
System testing is concerned with the behaviour of a whole system/product as defined by the scope of a development project or programme.
In system testing, the test environment should correspond to the final target or production environment as much as possible, in order to minimize the risk of environment-specific failures not being found in testing.
System testing may include tests based on risks and/or on requirements specifications, business processes, use cases, or other high level descriptions of system behaviour, interactions with the operating system, and system resources.
System testing should investigate both functional and non-functional requirements of the system.

d) Acceptance testing
Acceptance testing is often the responsibility of the customers or users of a system; other stakeholders may be involved as well. The goal of acceptance testing is to establish confidence in the system, parts of the system, or specific non-functional characteristics of the system.

Contract and regulation acceptance testing
Contract acceptance testing is performed against a contract's acceptance criteria for producing custom-developed software.
Acceptance criteria should be defined when the contract is agreed. Regulation acceptance testing is performed against any regulations that must be adhered to, such as governmental, legal or safety regulations.

Alpha and beta (or field) testing
Alpha testing is performed at the developing organization's site. Beta testing, or field testing, is performed by people at their own locations. Both are performed by potential customers, not the developers of the product.

iii) Test types

a) Testing of function (functional testing)
The functions that a system, subsystem or component are to perform may be described in
work products such as a requirements specification, use cases, or a functional specification, or they may be undocumented. The functions are "what" the system does.
A type of functional testing, security testing, investigates the functions (e.g. a firewall) relating to detection of threats, such as viruses, from malicious outsiders. Another type of functional testing, interoperability testing, evaluates the capability of the software product to interact with one or more specified components or systems.

b) Testing of non-functional software characteristics (non-functional testing)
Non-functional testing includes, but is not limited to, performance testing, load testing, stress testing, usability testing, maintainability testing, reliability testing and portability testing. It is the testing of "how" the system works. Non-functional testing may be performed at all test levels.

c) Testing of software structure/architecture (structural testing)
Structural (white-box) testing may be performed at all test levels. Structural techniques are best used after specification-based techniques, in order to help measure the thoroughness of testing through assessment of coverage of a type of structure.
Structural testing approaches can also be applied at system, system integration or acceptance testing levels (e.g. to business models or menu structures).

d) Testing related to changes (confirmation testing (re-testing) and regression testing)
After a defect is detected and fixed, the software should be re-tested to confirm that the original defect has been successfully removed. This is called confirmation testing. Debugging (defect fixing) is a development activity, not a testing activity.
Regression testing is the repeated testing of an already tested program, after modification, to discover any defects introduced or uncovered as a result of the change(s).
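The distinction between confirmation testing and regression testing can be sketched with a toy bug fix. The `discount` function and its tests are made-up illustrations, not code from the syllabus.

```python
# Confirmation test: re-runs the test that originally exposed the defect.
# Regression tests: re-run unchanged behaviour to catch side effects of the fix.

def discount(price, percent):
    # Fixed version; suppose the buggy release computed price * percent / 10.
    return price - price * percent / 100

def test_confirmation_fix_for_defect_42():
    # This test failed on the buggy build; passing now confirms the fix.
    assert discount(200, 10) == 180

def test_regression_zero_percent_unchanged():
    # Behaviour that was already correct must stay correct after the change.
    assert discount(50, 0) == 50

def test_regression_full_discount_unchanged():
    assert discount(80, 100) == 0

# A test runner such as pytest would collect these; here we call them directly.
for test in (test_confirmation_fix_for_defect_42,
             test_regression_zero_percent_unchanged,
             test_regression_full_discount_unchanged):
    test()
print("all tests passed")
```

Note that only the first test targets the fixed defect; the other two guard unchanged areas, which is exactly the scope regression testing adds.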
Regression testing is performed when the software, or its environment, is changed. It may be performed at all test levels, and applies to functional, non-functional and structural testing.

iv) Maintenance testing
Once deployed, a software system is often in service for years or decades. During this time the system and its environment are often corrected, changed or extended. Modifications include planned enhancement changes (e.g. release-based), corrective and emergency changes, and changes of environment.
Maintenance testing for migration (e.g. from one platform to another) should include operational tests of the new environment, as well as of the changed software.
Maintenance testing for the retirement of a system may include the testing of data migration or archiving if long data-retention periods are required.
Maintenance testing may be done at any or all test levels and for any or all test types.

Questions:

1) What are the good practices for testing within the software development life cycle?
a) Early test analysis and design
b) Different test levels are defined with specific objectives
c) Testers will start to get involved as soon as coding is done.
d) A and B above

2) Which option best describes objectives for test levels within a life cycle model?
a) Objectives should be generic for any test level
b) Objectives are the same for each test level
c) The objectives of a test level don't need to be defined in advance
d) Each level has objectives specific to that level

3) Which of the following is a test type?
a) Component testing
b) Functional testing
c) System testing
d) Acceptance testing

4) Non-functional system testing includes:
a) testing to see where the system does not function properly
b) testing quality attributes of the system including performance and usability
c) testing a system feature using only the software required for that action
d) testing for functions that should not exist

5) Beta testing is:
a) performed by customers at their own site
b) performed by customers at their software developer's site
c) performed by an independent test team
d) performed as early as possible in the lifecycle

6) Which of the following is not part of performance testing?
a) Measuring response time
b) Measuring transaction rates
c) Recovery testing
d) Simulating many users

7) Which one of the following statements about system testing is NOT true?
a) System tests are often performed by independent teams.
b) Functional testing is used more than structural testing.
c) Faults found during system tests can be very expensive to fix.
d) End-users should be involved in system tests.

8) Integration testing in the small:
a) tests the individual components that have been developed.
b) tests interactions between modules or subsystems.
c) only uses components that form part of the live system.
d) tests interfaces to other systems.
9) Alpha testing is:
a) post-release testing by end user representatives at the developer's site.
b) the first testing that is performed.
c) pre-release testing by end user representatives at the developer's site.
d) pre-release testing by end user representatives at their sites.

10) Software testing activities should start:
a) as soon as the code is written
b) during the design stage
c) when the requirements have been formally documented
d) as soon as possible in the development life cycle

11) Consider the following statements about regression tests:
I. They may usefully be automated if they are well designed.
II. They are the same as confirmation tests.
III. They are a way to reduce the risk of a change having an adverse effect elsewhere in the system.
IV. They are only effective if automated.
Which pair of statements is true?
a) I and II
b) I and III
c) II and III
d) II and IV

12) Which of these statements about functional testing is true?
a) Structural testing is more important than functional testing as it addresses the code.
b) Functional testing is useful throughout the life cycle and can be applied by business analysts, developers, testers and users.
c) Functional testing is more powerful than static testing as you actually run the system and see what happens.
d) Inspection is a form of functional testing.

13) Consider the following statements about maintenance testing:
I. It requires both re-testing and regression testing and may require additional new tests.
II. It is testing to show how easy it will be to maintain the system.
III. It is difficult to scope and therefore needs careful risk and impact analysis.
IV. It need not be done for emergency bug fixes.
Which of the statements are true?
a) I and III
b) I and IV
c) II and III
d) II and IV

14) Which of the following is NOT part of system testing?
a) business process-based testing
b) performance, load and stress testing
c) requirements-based testing
d) top-down integration testing

15) To test a function, the programmer has to write a _________, which calls the function to be tested and passes it test data.
a. stub
b. driver
c. proxy
d. none of the above

16) Which of the following is a form of functional testing?
a) Boundary value analysis
b) Usability testing
c) Performance testing
d) Security testing

17) Which one of the following describes the major benefit of verification early in the life cycle?
a) It allows the identification of changes in user requirements.
b) It facilitates timely set-up of the test environment.
c) It reduces defect multiplication.
d) It allows testers to become involved early in the project.

18) The most important thing about early test design is that it:
a) makes test preparation easier.
b) means inspections are not required.
c) can prevent fault multiplication.
d) will find all faults.

19) Which of the following statements is not true?
a. Performance testing can be done during unit testing as well as during the testing of the whole system.
b. The acceptance test does not necessarily include a regression test.
c. Verification activities (reviews, inspections etc.) should not involve testers.
d. Test environments should be as similar to production environments as possible.

20) Which of the following is NOT a type of non-functional test?
a. State-transition
b. Usability
c. Performance
d. Reliability

21) Which of the following is not an integration strategy?
a. Design-based
b. Big-bang
c. Bottom-up
d. Top-down

22) The big-bang approach is related to:
a) regression testing
b) inter-system testing
c) re-testing
d) integration testing
23) "Which life cycle model is basically driven by schedule and budget risks?" This statement is best suited to the:
a) Waterfall model
b) Spiral model
c) Incremental model
d) V-model

24) Use cases can be performed to test:
A. Performance testing
B. Unit testing
C. Business scenarios
D. Static testing

25) Which testing is performed at an external site?
A. Unit testing
B. Regression testing
C. Beta testing
D. Integration testing

26) Functional system testing is:
a) testing that the system functions with other systems
b) testing that the components that comprise the system function together
c) testing the end-to-end functionality of the system as a whole
d) testing that the system performs functions within specified response times

27) Maintenance testing is:
a) updating tests when the software has changed
b) testing a released system that has been changed
c) testing by users to ensure that the system meets a business need
d) testing to maintain business advantage

28) Which of the following uses impact analysis most?
a) component testing
b) non-functional system testing
c) user acceptance testing
d) maintenance testing

29) Which of the following is NOT part of system testing?
a) business process-based testing
b) performance, load and stress testing
c) usability testing
d) top-down integration testing

30) Which of the following lists contains only non-functional tests?
a. compatibility testing, usability testing, performance testing
b. system testing, performance testing
c. load testing, stress testing, component testing, portability testing
d. testing various configurations, beta testing, load testing

31) The V-model is:
a. A software development model that illustrates how testing activities integrate with software development phases
b. A software life-cycle model that is not relevant for testing
c. The official software development and testing life-cycle model of ISTQB
d. A testing life-cycle model including unit, integration, system and acceptance phases

32) Maintenance testing is:
a. Testing management
b. A synonym for testing the quality of service
c. Triggered by modifications, migration or retirement of existing software
d. Testing the level of maintenance by the vendor

33) Link Testing is also called:
a) Component Integration testing
b) Component System Testing
c) Component Sub-System Testing
d) Maintenance testing

34) Component Testing is also called:
i. Unit Testing
ii. Program Testing
iii. Module Testing
iv. System Component Testing
Which of the following is correct?
a) i, ii, iii are true and iv is false
b) i, ii, iii, iv are false
c) i, ii, iv are true and iii is false
d) All of the above are true

35) Match every stage of the software development life cycle with the testing life cycle:
i. Global design
ii. System requirements
iii. Detailed design
iv. User requirements
a) Unit tests
b) Acceptance tests
c) System tests
d) Integration tests
a) i-d, ii-a, iii-b, iv-c
b) i-c, ii-d, iii-a, iv-b
c) i-d, ii-c, iii-a, iv-b
d) i-c, ii-d, iii-b, iv-a

36) Which of these is a functional test?
a) Measuring response time on an online booking system
b) Checking the effect of high volumes of traffic in a call-center system
c) Checking the online booking screens' information and the database contents against the information on the letter to the customers
d) Checking how easy the system is to use
Chapter 3: Static Techniques
International Software Testing Qualifications Board, objective tests, sample questions
This chapter explains the static and dynamic techniques of testing: the review process (formal and informal), inspection and technical review, static analysis (compiler, complexity, control flow, data flow) and walkthrough. For other manual testing concepts and SDLC, STLC, BLC topics, please refer to the main chapter of software testing theory.

3.1 Static techniques and the test process
dynamic testing, static testing, static technique

[Img 1.1: Permanent demand trend, UK IT job market. The chart provides the 3-month moving total, beginning in 2004, of permanent IT jobs citing ISTQB within the UK as a proportion of the total demand within the Qualifications category. 2009]

3.2 Review process
entry criteria, formal review, informal review, inspection, metric, moderator/inspection leader, peer review, reviewer, scribe, technical review, walkthrough.

3.3 Static analysis by tools
compiler, complexity, control flow, data flow, static analysis

I) Phases of a formal review
1) Planning
Selecting the personnel, allocating roles, defining entry and exit criteria for more formal reviews, etc.
2) Kick-off
Distributing documents, explaining the objectives, checking entry criteria, etc.
3) Individual preparation
Work done by each of the participants on their own before the review meeting: noting questions and comments
4) Review meeting
Discussion or logging; making recommendations for handling the defects, or making decisions about the defects
5) Rework
Fixing defects found, typically done by the author
6) Follow-up
Checking that the defects have been addressed, gathering metrics and checking on exit criteria

II) Roles and responsibilities
Manager: decides on the execution of reviews, allocates time in project schedules, and determines if the review objectives have been met
Moderator: leads the review, including planning, running the meeting, and follow-up after the meeting
Author: the writer or person with chief responsibility for the document(s) to be reviewed
Reviewers: individuals with a specific technical or business background; identify defects and describe findings
Scribe (recorder): documents all the issues and problems

III) Types of review
Informal review: no formal process; pair programming or a technical lead reviewing designs and code.
Main purpose: inexpensive way to get some benefit.
Walkthrough: meeting led by the author; 'scenarios, dry runs, peer group'; open-ended sessions.
Main purpose: learning, gaining understanding, defect finding.
Technical review: documented, defined defect-detection process; ideally led by a trained moderator; may be performed as a peer review; pre-meeting preparation; involves peers and technical experts.
Main purpose: discuss, make decisions, find defects, solve technical problems and check conformance to specifications and standards.
Inspection: led by a trained moderator (not the author); usually peer examination; defined roles; includes metrics; formal process; pre-meeting preparation; formal follow-up process.
Main purpose: find defects.
Note: walkthroughs, technical reviews and inspections can be performed within a peer group, i.e. colleagues at the same organization level. This type of review is called a "peer review".
IV) Success factors for reviews
Each review has a clear predefined objective.
The right people for the review objectives are involved.
Defects found are welcomed, and expressed objectively.
People issues and psychological aspects are dealt with (e.g. making it a positive experience for the author).
Review techniques are applied that are suitable to the type and level of software work products and reviewers.
Checklists or roles are used if appropriate to increase effectiveness of defect identification.
Training is given in review techniques, especially the more formal techniques, such as inspection.
Management supports a good review process (e.g. by incorporating adequate time for review activities in project schedules).
There is an emphasis on learning and process improvement.

V) Cyclomatic Complexity
The number of independent paths through a program.
Cyclomatic complexity is defined as: L – N + 2P
L = the number of edges/links in the graph
N = the number of nodes in the graph
P = the number of disconnected parts of the graph (connected components)
Alternatively, one may calculate cyclomatic complexity using the decision-point rule:
decision points + 1
Cyclomatic complexity and risk evaluation:
1 to 10: a simple program, without very much risk
11 to 20: a complex program, moderate risk
21 to 50: a more complex program, high risk
> 50: an un-testable program (very high risk)

Questions
1) Which of the following statements is NOT true:
a) inspection is the most formal review process
b) inspections should be led by a trained leader
c) managers can perform inspections on management documents
d) inspection is appropriate even when there are no written documents

2) Which expression best matches the following characteristics of review processes:
1. led by the author
2. undocumented
3. no management participation
4. led by a trained moderator or leader
5. uses entry and exit criteria
s) inspection
t) peer review
u) informal review
v) walkthrough
a) s = 4, t = 3, u = 2 and 5, v = 1
b) s = 4 and 5, t = 3, u = 2, v = 1
c) s = 1 and 5, t = 3, u = 2, v = 4
d) s = 5, t = 4, u = 3, v = 1 and 2

3) Could reviews or inspections be considered part of testing:
a) No, because they apply to development documentation
b) No, because they are normally applied before testing
c) No, because they do not apply to the test documentation
d) Yes, because both help detect faults and improve quality

4) In a review meeting a moderator is a person who
a. Takes minutes of the meeting
b. Mediates among people
c. Takes telephone calls
d. Writes the documents to be reviewed

5) Which of the following statements about reviews is true?
a) Reviews cannot be performed on user requirements specifications.
b) Reviews are the least effective way of testing code.
c) Reviews are unlikely to find faults in test plans.
d) Reviews should be performed on specifications, code, and test plans.

6) What is the main difference between a walkthrough and an inspection?
a) An inspection is led by the author, whilst a walkthrough is led by a trained moderator.
b) An inspection has a trained leader, whilst a walkthrough has no leader.
c) Authors are not present during inspections, whilst they are during walkthroughs.
d) A walkthrough is led by the author, whilst an inspection is led by a trained moderator.

7) Which of the following is a static test?
a. code inspection
b. coverage analysis
c. usability assessment
d. installation test
8) Who is responsible for documenting all the issues, problems and open points that were identified during the review meeting?
A. Moderator
B. Scribe
C. Reviewers
D. Author

9) What is the main purpose of an informal review?
A. Inexpensive way to get some benefit
B. Find defects
C. Learning, gaining understanding, defect finding
D. Discuss, make decisions and solve technical problems

10) Which of the following is not a static testing technique?
a. Error guessing
b. Walkthrough
c. Data flow analysis
d. Inspections

11) Inspections can find all the following except
a. Variables not defined in the code
b. Spelling and grammar faults in the documents
c. Requirements that have been omitted from the design documents
d. How much of the code has been covered

12) Which of the following artifacts can be examined by using review techniques?
a) Software code
b) Requirements specification
c) Test designs
d) All of the above

13) Which is not a type of review?
a) Walkthrough
b) Inspection
c) Management approval
d) Informal review

14) Which of the following statements about early test design are true and which are false?
1. Defects found during early test design are more expensive to fix
2. Early test design can find defects
3. Early test design can cause changes to the requirements
4. Early test design takes more effort
a) 1 and 3 are true. 2 and 4 are false.
b) 2 is true. 1, 3 and 4 are false.
c) 2 and 3 are true. 1 and 4 are false.
d) 2, 3, and 4 are true. 1 is false.

15) Static code analysis typically identifies all but one of the following problems. Which is it?
a) Unreachable code
b) Faults in requirements
c) Undeclared variables
d) Too few comments

16) What is the best description of static analysis?
a) The analysis of batch programs
b) The reviewing of test plans
c) The analysis of program code or other software artifacts
d) The use of black-box testing

17) What is the most important factor for successful performance of reviews?
a) A separate scribe during the logging meeting
b) Trained participants and review leaders
c) The availability of tools to support the review process
d) A reviewed test plan

18) Code walkthrough is
a. a type of dynamic testing
b. a type of static testing
c. neither dynamic nor static
d. performed by the testing team

19) Static analysis is
a. the same as static testing
b. done by the developers
c. both a and b
d. none of the above

20) Which review is inexpensive?
a. Informal review
b. Walkthrough
c. Technical review
d. Inspection

21) Who should have a technical or business background?
a. Moderator
b. Author
c. Reviewer
d. Recorder

22) The person who leads the review of the document(s), planning the review, running the meeting and following up after the meeting:
a. Reviewer
b. Author
c. Moderator
d. Auditor

23) Peer reviews are also called:
a) Inspection
b) Walkthrough
c) Technical review
d) Formal review

24) The kick-off phase of a formal review includes the following:
a) Explaining the objective
b) Fixing defects found, typically done by the author
c) Follow-up
d) Individual meeting preparations

25) Success factors for a review include:
i. Each review does not have a predefined objective
ii. Defects found are welcomed and expressed objectively
iii. Management supports a good review process.
iv. There is an emphasis on learning and process improvement.
a) ii, iii, iv are correct and i is incorrect
b) iii, i, iv are correct and ii is incorrect
c) i, iii, iv are correct and ii is incorrect
d) i, ii are correct and iii, iv are incorrect

26) Why is static testing described as complementary to dynamic testing?
a) Because they share the aim of identifying defects and find the same types of defect.
b) Because they have different aims and differ in the types of defect they find.
c) Because they have different aims but find the same types of defect.
d) Because they share the aim of identifying defects but differ in the types of defect they find.

27) Which of the following statements regarding static testing is false?
a) Static testing requires the running of tests through the code
b) Static testing includes desk checking
c) Static testing includes techniques such as reviews and inspections
d) Static testing can give measurements such as cyclomatic complexity
28) Which of the following is true about a formal review or inspection?
i. Led by a trained moderator (not the author).
ii. No pre-meeting preparation
iii. Formal follow-up process.
iv. Main objective is to find defects
a) ii is true and i, iii, iv are false
b) i, iii, iv are true and ii is false
c) i, iii, iv are false and ii is true
d) iii is true and i, ii, iv are false

29) The phases of the formal review process are mentioned below; arrange them in the correct order.
i. Planning
ii. Review meeting
iii. Rework
iv. Individual preparation
v. Kick-off
vi. Follow-up
a) i, ii, iii, iv, v, vi
b) vi, i, ii, iii, iv, v
c) i, v, iv, ii, iii, vi
d) i, ii, iii, v, iv, vi

30) Which of the following are key characteristics of a walkthrough?
a) Scenario, dry run, peer group
b) Pre-meeting preparation
c) Formal follow-up process
d) Includes metrics
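The cyclomatic complexity formulas from section V above (L – N + 2P over the control-flow graph, or decision points + 1) can be sketched as two small helpers. This is an illustrative sketch only; the function names are not from any standard library.

```python
def cyclomatic_complexity(edges, nodes, parts=1):
    """Graph formula: L - N + 2P, where P is the number of connected components."""
    return edges - nodes + 2 * parts


def complexity_from_decisions(decision_points):
    """Alternative decision-point rule: number of decision points + 1."""
    return decision_points + 1


# Control-flow graph of a routine with one if/else and one loop:
# 7 nodes, 8 edges, 1 connected component -> complexity 3.
print(cyclomatic_complexity(edges=8, nodes=7))        # 3
# The same routine has 2 decision points -> 2 + 1 = 3, matching the graph formula.
print(complexity_from_decisions(decision_points=2))   # 3
```

By the risk table above, a value of 3 falls in the 1-to-10 band: a simple program without much risk.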
Chapter 4: Test Design Techniques – Modules
International Software Testing Qualifications Board, test design techniques, categories: specification-based and white-box/black-box techniques, BVA and ECP, decision table testing, state transition testing, and use case testing. Test case specification, test design, test execution schedule, test procedure specification, test script, traceability. For more details on testing and related concepts please refer to the main chapter of software testing theory, Part 1.

4.1 The test development process
Test case specification, test design, test execution schedule, test procedure specification, test script, traceability.
4.2 Categories of test design techniques
Black-box test design technique, specification-based test design technique, white-box test design technique, structure-based test design technique, experience-based test design technique.
4.3 Specification-based or black-box techniques
Boundary value analysis, decision table testing, equivalence partitioning, state transition testing, use case testing.
4.4 Structure-based or white-box techniques
Code coverage, decision coverage, statement coverage, structure-based testing.
4.5 Experience-based techniques
Exploratory testing, fault attack.
4.6 Choosing test techniques
No specific terms.

Test Design Techniques
• Specification-based/Black-box techniques
• Structure-based/White-box techniques
• Experience-based techniques

I) Specification-based/Black-box techniques
Equivalence partitioning
Boundary value analysis
Decision table testing
State transition testing
Use case testing

Equivalence partitioning
o Inputs to the software or system are divided into groups that are expected to exhibit similar behavior.
o Equivalence partitions or classes can be found for both valid data and invalid data.
o Partitions can also be identified for outputs, internal values, time-related values and for interface values.
o Equivalence partitioning is applicable at all levels of testing.

Boundary value analysis
o Behavior at the edge of each equivalence partition is more likely to be incorrect. The maximum and minimum values of a partition are its boundary values.
o A boundary value for a valid partition is a valid boundary value; the boundary of an invalid partition is an invalid boundary value.
o Boundary value analysis can be applied at all test levels.
o It is relatively easy to apply and its defect-finding capability is high.
o This technique is often considered an extension of equivalence partitioning.

Decision table testing
o In decision table testing, test cases are designed to execute combinations of inputs.
o Decision tables are a good way to capture system requirements that contain logical conditions.
o The decision table contains triggering conditions, often combinations of true and false for all input conditions.
o It may be applied to all situations where the action of the software depends on several logical decisions.

State transition testing
o In state transition testing, test cases are designed to execute valid and invalid state transitions.
o A system may exhibit a different response depending on current conditions or previous history.
In this case, that aspect of the system can be shown as a state transition diagram.
o State transition testing is much used in embedded software and technical automation.

Use case testing
o In use case testing, test cases are designed to execute user scenarios.
o A use case describes interactions between actors, including users and the system.
o Each use case has preconditions, which need to be met for the use case to work successfully.
o A use case usually has a mainstream scenario and sometimes alternative branches.
o Use cases, often referred to as scenarios, are very useful for designing acceptance tests with customer/user participation.
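As a sketch of how equivalence partitioning and boundary value analysis drive test selection, consider a field that accepts integers from 1 through 12 inclusive (the month-number specification). The helper below is illustrative only, not part of any standard API.

```python
def is_valid_month(value):
    """Specification: accept integers from 1 through 12 inclusive."""
    return 1 <= value <= 12


# Equivalence partitions: below 1 (invalid), 1..12 (valid), above 12 (invalid).
# Equivalence partitioning picks one representative per partition:
partition_reps = [-5, 6, 99]

# Boundary value analysis tests on and just outside each edge of the valid partition:
boundary_values = [0, 1, 12, 13]

for v in partition_reps + boundary_values:
    print(v, is_valid_month(v))
```

Running this shows the representatives -5 and 99 rejected and 6 accepted, while the boundary checks confirm that 1 and 12 are valid boundary values and 0 and 13 are invalid ones, exactly the partitioning described in the bullets above.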
II) Structure-based/White-box techniques
o Statement testing and coverage
o Decision testing and coverage
o Other structure-based techniques
  • condition coverage
  • multiple condition coverage

Statement testing and coverage
Statement: an entity in a programming language, which is typically the smallest indivisible unit of execution.
Statement coverage: the percentage of executable statements that have been exercised by a test suite.
Statement testing: a white-box test design technique in which test cases are designed to execute statements.

Decision testing and coverage
Decision: a program point at which the control flow has two or more alternative routes; a node with two or more links to separate branches.
Decision coverage: the percentage of decision outcomes that have been exercised by a test suite. 100% decision coverage implies both 100% branch coverage and 100% statement coverage.
Decision testing: a white-box test design technique in which test cases are designed to execute decision outcomes.

Other structure-based techniques
Condition: a logical expression that can be evaluated as true or false.
Condition coverage: the percentage of condition outcomes that have been exercised by a test suite.
Condition testing: a white-box test design technique in which test cases are designed to execute condition outcomes.
Multiple condition testing: a white-box test design technique in which test cases are designed to execute combinations of single condition outcomes.
III) Experience-based techniques
o Error guessing
o Exploratory testing

Error guessing
o Error guessing is a commonly used experience-based technique.
o Generally, testers anticipate defects based on experience; these defect lists can be built from experience, available defect data, and common knowledge about why software fails.

Exploratory testing
o Exploratory testing is concurrent test design, test execution, test logging and learning, based on a test charter containing test objectives, and carried out within time-boxes.
o It is an approach that is most useful where there are few or inadequate specifications and severe time pressure.

Questions
1) Order numbers on a stock control system can range between 10000 and 99999 inclusive. Which of the following inputs might be a result of designing tests for only valid equivalence classes and valid boundaries:
a) 1000, 5000, 99999
b) 9999, 50000, 100000
c) 10000, 50000, 99999
d) 10000, 99999
e) 9999, 10000, 50000, 99999, 100000

2) Which of the following is NOT a black box technique:
a) Equivalence partitioning
b) State transition testing
c) Syntax testing
d) Boundary value analysis

3) Error guessing is best used
a) As the first approach to deriving test cases
b) After more formal techniques have been applied
c) By inexperienced testers
d) After the system has gone live
e) Only by end users

4) Which is not true? The black box tester
a. should be able to understand a functional specification or requirements document
b. should be able to understand the source code.
c. is highly motivated to find faults
d. is creative in finding the system's weaknesses.
5) A test design technique is
a. a process for selecting test cases
b. a process for determining expected outputs
c. a way to measure the quality of software
d. a way to measure in a test plan what has to be done

6) Which of the following is true?
a. Component testing should be black box, system testing should be white box.
b. If you find a lot of bugs in testing, you should not be very confident about the quality of the software.
c. The fewer bugs you find, the better your testing was.
d. The more tests you run, the more bugs you will find.

7) What is the important criterion in deciding what testing technique to use?
a. how well you know a particular technique
b. the objective of the test
c. how appropriate the technique is for testing the application
d. whether there is a tool to support the technique

8) Which of the following is a black box design technique?
a. statement testing
b. equivalence partitioning
c. error guessing
d. usability testing

9) A program validates a numeric field as follows: values less than 10 are rejected, values between 10 and 21 are accepted, values greater than or equal to 22 are rejected.
Which of the following input values cover all of the equivalence partitions?
a. 10, 11, 21
b. 3, 20, 21
c. 3, 10, 22
d. 10, 21, 22

10) Using the same specification as question 9, which of the following covers the MOST boundary values?
a. 9, 10, 11, 22
b. 9, 10, 21, 22
c. 10, 11, 21, 22
d. 10, 11, 20, 21

11) Error guessing:
a) supplements formal test design techniques.
b) can only be used in component, integration and system testing.
c) is only performed in user acceptance testing.
d) is not repeatable and should not be used.

12) Which of the following is NOT a white box technique?
a) Statement testing
b) Path testing
c) Data flow testing
d) State transition testing

13) Data flow analysis studies:
a) possible communications bottlenecks in a program.
b) the rate of change of data values as a program executes.
c) the use of data on paths through the code.
d) the intrinsic complexity of the code.

14) In a system designed to work out the tax to be paid:
An employee has £4000 of salary tax free. The next £1500 is taxed at 10%. The next £28000 is taxed at 22%. Any further amount is taxed at 40%.
Which of these groups of numbers would fall into the same equivalence class?
a) £4800; £14000; £28000
b) £5200; £5500; £28000
c) £28001; £32000; £35000
d) £5800; £28000; £32000

15) Test cases are designed during:
a) test recording.
b) test planning.
c) test configuration.
d) test specification.

16) An input field takes the year of birth between 1900 and 2004. The boundary values for testing this field are
a. 0, 1900, 2004, 2005
b. 1900, 2004
c. 1899, 1900, 2004, 2005
d. 1899, 1900, 1901, 2003, 2004, 2005

17) Boundary value testing
a. is the same as equivalence partitioning
b. tests boundary conditions on, below and above the edges of input and output equivalence classes
c. tests combinations of input circumstances
d. is used in a white box testing strategy
18) When testing a grade calculation system, a tester determines that all scores from 90 to 100 will yield a grade of A, but scores below 90 will not. This analysis is known as:
a) Equivalence partitioning
b) Boundary value analysis
c) Decision table
d) Hybrid analysis

19) Which technique can be used to achieve input and output coverage? It can be applied to human input, input via interfaces to a system, or interface parameters in integration testing.
a) Error guessing
b) Boundary value analysis
c) Decision table testing
d) Equivalence partitioning

20) Features to be tested, approach, item pass/fail criteria and test deliverables should be specified in which document?
a) Test case specification
b) Test procedure specification
c) Test plan
d) Test design specification

21) Which specification-based testing techniques are most closely related to each other?
a) Decision tables and state transition testing
b) Equivalence partitioning and state transition testing
c) Decision tables and boundary value analysis
d) Equivalence partitioning and boundary value analysis

22) Assume postal rates for 'light letters' are:
$0.25 up to 10 grams
$0.35 up to 50 grams
$0.45 up to 75 grams
$0.55 up to 100 grams
Which test inputs (in grams) would be selected using boundary value analysis?
a) 0, 9, 19, 49, 50, 74, 75, 99, 100
b) 10, 50, 75, 100, 250, 1000
c) 0, 1, 10, 11, 50, 51, 75, 76, 100, 101
d) 25, 26, 35, 36, 45, 46, 55, 56

23) If the temperature falls below 18 degrees, the heating system is switched on. When the temperature reaches 21 degrees, the heating system is switched off. What is the minimum set of test input values to cover all valid equivalence partitions?
a) 15, 19 and 25 degrees
b) 17, 18, 20 and 21 degrees
c) 18, 20 and 22 degrees
d) 16 and 26 degrees

24) What is a test condition?
a) An input, expected outcome, precondition and postcondition
b) The steps to be taken to get the system to a given point
c) Something that can be tested
d) A specific state of the software, e.g. before a test can be run

25) What is a key characteristic of specification-based testing techniques?
a) Tests are derived from information about how the software is constructed
b) Tests are derived from models (formal or informal) that specify the problem to be solved by the software or its components
c) Tests are derived based on the skills and experience of the tester
d) Tests are derived from the extent of the coverage of structural elements of the system or components

26) Why are both specification-based and structure-based testing techniques useful?
a) They find different types of defect.
b) Using more techniques is always better.
c) Both find the same types of defect.
d) Because specifications tend to be unstructured.

27) Find the equivalence class for the following test case:
Enter a number to test the validity of accepting numbers between 1 and 99
a) All numbers < 1
b) All numbers > 99
c) Number = 0
d) All numbers between 1 and 99

28) What is the relationship between equivalence partitioning and boundary value analysis techniques?
a) Structural testing
b) Opaque testing
c) Compatibility testing
d) All of the above

29) Suggest an alternative for the requirement traceability matrix
a) Test coverage matrix
b) Average defect aging
c) Test effectiveness
d) Error discovery rate
30) The following defines the statement of what the tester is expected to accomplish or validate during the testing activity:
a) Test scope
b) Test objective
c) Test environment
d) None of the above

31) One technique of black box testing is equivalence partitioning. In a program statement that accepts only one choice from among 10 possible choices, numbered 1 through 10, the middle partition would be from _____ to _____
a) 4 to 6
b) 0 to 10
c) 1 to 10
d) None of the above

32) Test design mainly emphasizes all the following except
a) Data planning
b) Test procedures planning
c) Mapping the requirements and test cases
d) Data synchronization

33) Deliverables of the test design phase include all the following except
a) Test data
b) Test data plan
c) Test summary report
d) Test procedure plan

34) Test data planning essentially includes
a) Network
b) Operational model
c) Boundary value analysis
d) Test procedure planning

35) Test coverage analysis is the process of
a) Creating additional test cases to increase coverage
b) Finding areas of the program exercised by the test cases
c) Determining a quantitative measure of code coverage, which is a direct measure of quality
d) All of the above

36) Branch coverage is
a) another name for decision coverage
b) another name for all-edges coverage
c) another name for basic path coverage
d) all of the above

37) The following example is a
if (condition1 && (condition2 || function1()))
statement1;
else
statement2;
a) Decision coverage
b) Condition coverage
c) Statement coverage
d) Path coverage

38) Test cases need to be written for
a) invalid and unexpected conditions
b) valid and expected conditions
c) both a and b
d) none of these

39) Path coverage includes
a) statement coverage
b) condition coverage
c) decision coverage
d) none of these

40) The benefits of glass box testing are
a) Focused testing, testing coverage, control flow
b) Data integrity, internal boundaries, algorithm-specific testing
c) Both a and b
d) Either a or b

41) Find the invalid equivalence class for the following test case:
Draw a line up to the length of 4 inches
a) Line with 1 dot-width
b) Curve
c) Line with 4 inches
d) Line with 1 inch

42) Error seeding
a) Evaluates the thoroughness with which a computer program is tested by purposely inserting errors into a supposedly correct program.
b) Errors inserted by the developers intentionally to make the system malfunction.
c) is for identifying existing errors
d) Both a and b

43) Which of the following best describes the difference between clear box and opaque box?
1. Clear box is structural testing, opaque box is ad-hoc testing
2. Clear box is done by the tester, and opaque box is done by the developer
3. Opaque box is functional testing, clear box is exploratory testing
a) 1
b) 1 and 3
c) 2
d) 3

44) What is the concept of introducing a small change to the program and having the effects of that change show up in some test?
a) Desk checking
b) Debugging a program
c) A mutation error
d) Introducing mutation

45) How many test cases are necessary to cover all the possible sequences of statements (paths) for the following program fragment? Assume that the two conditions are independent of each other:
if (Condition 1)
then statement 1
else statement 2
fi
if (Condition 2)
then statement 3
fi
a. 1 test case
b. 3 test cases
c. 4 test cases
d. Not achievable

46) Given the following code, which is true about the minimum number of test cases required for full statement and branch coverage:
Read P
Read Q
IF P+Q > 100 THEN
Print "Large"
ENDIF
IF P > 50 THEN
Print "P Large"
ENDIF
a) 1 test for statement coverage, 3 for branch coverage
b) 1 test for statement coverage, 2 for branch coverage
c) 1 test for statement coverage, 1 for branch coverage
d) 2 tests for statement coverage, 3 for branch coverage
e) 2 tests for statement coverage, 2 for branch coverage

47) Given the following:
Switch PC on
Start "outlook"
IF outlook appears THEN
Send an email
Close outlook
a) 1 test for statement coverage, 1 for branch coverage
b) 1 test for statement coverage, 2 for branch coverage
c) 1 test for statement coverage, 3 for branch coverage
d) 2 tests for statement coverage, 2 for branch coverage
e) 2 tests for statement coverage, 3 for branch coverage

48) If a candidate is given an exam of 40 questions, should get 25 marks to pass (61%) and should get 80% for distinction, what is the equivalence class?
A. 23, 24, 25
B. 0, 12, 25
C. 30, 36, 39
D. 32, 37, 40

49) Consider the following statements:
i. 100% statement coverage guarantees 100% branch coverage.
ii. 100% branch coverage guarantees 100% statement coverage.
iii. 100% branch coverage guarantees 100% decision coverage.
iv. 100% decision coverage guarantees 100% branch coverage.
v. 100% statement coverage guarantees 100% decision coverage.
a) ii is True; i, iii, iv & v are False
b) i & v are True; ii, iii & iv are False
c) ii & iii are True; i, iv & v are False
d) ii, iii & iv are True; i & v are False

50) Which statement about expected outcomes is FALSE?
a) Expected outcomes are defined by the software's behavior
b) Expected outcomes are derived from a specification, not from the code
c) Expected outcomes should be predicted before a test is run
d) Expected outcomes may include timing constraints such as response times

51) Which of the following is not white box testing?
a) Random testing
b) Data flow testing
c) Statement testing
d) Syntax testing

52) If the pseudo code below were a programming language, how many tests are required to achieve 100% statement coverage?
1. If x=3 then
2. Display_messageX;
3. If y=2 then
4. Display_messageY;
5. Else
6. Display_messageZ;
a. 1
b. 2
c. 3
d. 4

53) Using the same code example as question 52, how many tests are required to achieve 100% branch/decision coverage?
a. 1
b. 2
c. 3
d. 4

54) Which of the following techniques is NOT a black-box technique?
a) Equivalence partitioning
b) State transition testing
c) LCSAJ
d) Syntax testing

55) Given the following code, which is true?
IF A > B THEN
C = A – B
ELSE
C = A + B
ENDIF
Read D
IF C = D THEN
Print "Error"
ENDIF
a) 1 test for statement coverage, 1 for branch coverage
b) 2 tests for statement coverage, 2 for branch coverage
c) 2 tests for statement coverage, 3 for branch coverage
d) 3 tests for statement coverage, 3 for branch coverage
e) 3 tests for statement coverage, 2 for branch coverage

56) Consider the following:
Pick up and read the newspaper
Look at what is on television
If there is a program that you are interested in watching then switch the television on and watch the program
Otherwise
Continue reading the newspaper
If there is a crossword in the newspaper then try and complete the crossword
a) SC = 1 and DC = 3
b) SC = 1 and DC = 2
c) SC = 2 and DC = 2
d) SC = 2 and DC = 3

57) The specification states: an integer field shall contain values from and including 1 to and including 12 (the number of the month).
Which equivalence class partitioning is correct?
a) Less than 1, 1 through 12, larger than 12
b) Less than 1, 1 through 11, larger than 12
c) Less than 0, 1 through 12, larger than 12
d) Less than 1, 1 through 11, and above

58) Analyze the following highly simplified procedure:
Ask: "What type of ticket do you require, single or return?"
IF the customer wants 'return'
  Ask: "What rate, Standard or Cheap-day?"
  IF the customer replies 'Cheap-day'
    Say: "That will be £11.20"
  ELSE
    Say: "That will be £19.50"
  ENDIF
ELSE
  Say: "That will be £9.75"
ENDIF
Now decide the minimum number of tests that are needed to ensure that all the questions have been asked, all combinations have occurred and all replies given.
a) 3
b) 4
c) 5
d) 6
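Several of the questions above hinge on the difference between statement coverage and branch/decision coverage. A minimal Python sketch (the tracer and statement numbering are purely illustrative) that instruments the pseudo code from question 52 and records which numbered statements each test executes:

```python
# Illustrative tracer for the pseudo code in question 52: each numbered
# statement adds its number to `hits` when it executes.
def run(x, y, hits):
    hits.add(1)                  # 1. If x=3 then
    if x == 3:
        hits.add(2)              # 2. Display_messageX
    hits.add(3)                  # 3. If y=2 then
    if y == 2:
        hits.add(4)              # 4. Display_messageY
    else:
        hits.update({5, 6})      # 5. Else / 6. Display_messageZ

hits = set()
run(3, 2, hits)   # takes both "then" outcomes: statements 1, 2, 3, 4
run(0, 7, hits)   # takes both false/"else" outcomes: statements 1, 3, 5, 6
assert hits == {1, 2, 3, 4, 5, 6}   # two tests reach every statement
```

The same pair of tests also takes each of the two decisions both ways, which is why statement and branch coverage happen to need the same minimum number of tests for this particular fragment.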
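Equivalence partitioning, exercised in questions 48 and 57 above, can be illustrated with a small validator. The sketch below (the function name is invented for illustration) encodes the month specification from question 57 and picks one representative value per partition:

```python
# Month-field validator per the specification in question 57:
# valid values are the integers 1 through 12 inclusive.
def is_valid_month(n):
    return 1 <= n <= 12

# One representative value per equivalence class: below the valid
# range, inside it, and above it.
assert is_valid_month(0) is False    # partition: less than 1
assert is_valid_month(6) is True     # partition: 1 through 12
assert is_valid_month(13) is False   # partition: larger than 12
```

Three tests, one per partition, suffice under this technique; the working assumption is that further values from the same partition would not reveal different defects.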
Chapter 5: Test organization and independence
International Software Testing & Quality Board, test management, test reports, testers, test leader, test manager, test approach, defect density, failure rate, configuration management, version control, risk, product risk, project risk, risk-based testing, incident logging, incident management. Please refer to the earlier version of Software Testing Theory, Part 1.

The effectiveness of finding defects by testing and reviews can be improved by using independent testers. The options available for testing teams are:
- No independent testers; developers test their own code.
- Independent testers within the development teams.
- An independent test team or group within the organization, reporting to project management or executive management.
- Independent testers from the business organization or user community.
- Independent test specialists for specific test targets, such as usability testers, security testers or certification testers (who certify a software product against standards and regulations).
- Independent testers outsourced or external to the organization.

The benefits of independence include:
- Independent testers see other and different defects, and are unbiased.
- An independent tester can verify assumptions people made during specification and implementation of the system.

Drawbacks include:
- Isolation from the development team (if treated as totally independent).
- Independent testers may become a bottleneck as the last checkpoint.
- Developers may lose a sense of responsibility for quality.

b) Tasks of the test leader and tester
Test leader tasks may include:
- Coordinate the test strategy and plan with project managers and others.
- Write or review a test strategy for the project, and a test policy for the organization.
- Contribute the testing perspective to other project activities, such as integration planning.
- Plan the tests – considering the context and understanding the test objectives and risks
– including selecting test approaches, estimating the time, effort and cost of testing, acquiring resources, defining test levels and cycles, and planning incident management.
- Initiate the specification, preparation, implementation and execution of tests, monitor the test results and check the exit criteria.
- Adapt planning based on test results and progress (sometimes documented in status reports) and take any action necessary to compensate for problems.
- Set up adequate configuration management of testware for traceability.
- Introduce suitable metrics for measuring test progress and evaluating the quality of the testing and the product.
- Decide what should be automated, to what degree, and how.
- Select tools to support testing and organize any training in tool use for testers.
- Decide about the implementation of the test environment.
- Write test summary reports based on the information gathered during testing.

Tester tasks may include:
- Review and contribute to test plans.
- Analyze, review and assess user requirements, specifications and models for testability.
- Create test specifications.
- Set up the test environment (often coordinating with system administration and network management).
- Prepare and acquire test data.
- Implement tests on all test levels, execute and log the tests, evaluate the results and document the deviations from expected results.
- Use test administration or management tools and test monitoring tools as required.
- Automate tests (may be supported by a developer or a test automation expert).
- Measure performance of components and systems (if applicable).
- Review tests developed by others.

Note: People who work on test analysis, test design, specific test types or test automation may be specialists in these roles. Depending on the test level and the risks related to the product and the project, different people may take over the role of tester, keeping some degree of independence.
Typically, testers at the component and integration levels would be developers; testers at the acceptance test level would be business experts and users; and testers for operational acceptance testing would be operators.

c) Defining the skills test staff need
Nowadays a testing professional must have 'application' or 'business domain' knowledge and 'technology' expertise in addition to 'testing' skills.
2) Test planning and estimation
a) Test planning activities
- Determining the scope and risks, and identifying the objectives of testing.
- Defining the overall approach of testing (the test strategy), including the definition of the test levels and entry and exit criteria.
- Integrating and coordinating the testing activities into the software life cycle activities: acquisition, supply, development, operation and maintenance.
- Making decisions about what to test, what roles will perform the test activities, how the test activities should be done, and how the test results will be evaluated.
- Scheduling test analysis and design activities.
- Scheduling test implementation, execution and evaluation.
- Assigning resources for the different activities defined.
- Defining the amount, level of detail, structure and templates for the test documentation.
- Selecting metrics for monitoring and controlling test preparation and execution, defect resolution and risk issues.
- Setting the level of detail for test procedures in order to provide enough information to support reproducible test preparation and execution.

b) Exit criteria
The purpose of exit criteria is to define when to stop testing, such as at the end of a test level or when a set of tests has achieved a specific goal.
Typically, exit criteria may consist of:
- Thoroughness measures, such as coverage of code, functionality or risk.
- Estimates of defect density or reliability measures.
- Cost.
- Residual risks, such as defects not fixed or lack of test coverage in certain areas.
- Schedules, such as those based on time to market.

c) Test estimation
Two approaches for the estimation of test effort are covered in this syllabus:
- The metrics-based approach: estimating the testing effort based on metrics of former or similar projects, or based on typical values.
- The expert-based approach: estimating the tasks by the owner of these tasks or by experts.

Once the test effort is estimated, resources can be identified and a schedule can be drawn up. The testing effort may depend on a number of factors, including:
- Characteristics of the product: the quality of the specification and other information used for test models (i.e. the test basis), the size of the product, the complexity of the problem domain, the requirements for reliability and security, and the requirements for documentation.
- Characteristics of the development process: the stability of the organization, the tools used, the test process, the skills of the people involved, and time pressure.
- The outcome of testing: the number of defects found and the amount of rework required.

d) Test approaches (test strategies)
One way to classify test approaches or strategies is based on the point in time at which the bulk of the test design work is begun:
- Preventative approaches, where tests are designed as early as possible.
- Reactive approaches, where test design comes after the software or system has been produced.

Typical approaches or strategies include:
- Analytical approaches, such as risk-based testing, where testing is directed to the areas of greatest risk.
- Model-based approaches, such as stochastic testing using statistical information about failure rates (such as reliability growth models) or usage (such as operational profiles).
- Methodical approaches, such as failure-based (including error guessing and fault attacks), experience-based, checklist-based, and quality-characteristic-based approaches.
- Process- or standard-compliant approaches, such as those specified by industry-specific standards or the various agile methodologies.
- Dynamic and heuristic approaches, such as exploratory testing, where testing is more reactive to events than pre-planned, and where execution and evaluation are concurrent tasks.
- Consultative approaches, such as those where test coverage is driven primarily by the advice and guidance of technology and/or business domain experts outside the test team.
- Regression-averse approaches, such as those that include reuse of existing test material, extensive automation of functional regression tests, and standard test suites.

Different approaches may be combined, for example, a risk-based dynamic approach.
The selection of a test approach should consider the context, including:
- Risk of failure of the project, hazards to the product, and risks of product failure to humans, the environment and the company.
- Skills and experience of the people in the proposed techniques, tools and methods.
- The objective of the testing endeavour and the mission of the testing team.
- Regulatory aspects, such as external and internal regulations for the development
process.
- The nature of the product and the business.

3) Test progress monitoring and control
a) Test progress monitoring
- Percentage of work done in test case preparation (or percentage of planned test cases prepared).
- Percentage of work done in test environment preparation.
- Test case execution (e.g. number of test cases run/not run, and test cases passed/failed).
- Defect information (e.g. defect density, defects found and fixed, failure rate, and retest results).
- Test coverage of requirements, risks or code.
- Subjective confidence of testers in the product.
- Dates of test milestones.
- Testing costs, including the cost compared with the benefit of finding the next defect or running the next test.

b) Test reporting
- What happened during a period of testing, such as dates when exit criteria were met.
- Analyzed information and metrics to support recommendations and decisions about future actions, such as an assessment of defects remaining, the economic benefit of continued testing, outstanding risks, and the level of confidence in the tested software.

Metrics should be collected during and at the end of a test level in order to assess:
- The adequacy of the test objectives for that test level.
- The adequacy of the test approaches taken.
- The effectiveness of the testing with respect to its objectives.

c) Test control
Test control describes any guiding or corrective actions taken as a result of information and metrics gathered and reported. Actions may cover any test activity and may affect any other software life cycle activity or task.
Examples of test control actions are:
- Making decisions based on information from test monitoring.
- Re-prioritizing tests when an identified risk occurs (e.g. software delivered late).
- Changing the test schedule due to the availability of a test environment.
- Setting an entry criterion requiring fixes to have been retested (confirmation tested) by a developer before accepting them into a build.

4) Configuration management
The purpose of configuration management is to establish and maintain the integrity of the products (components, data and documentation) of the software or system through the project and product life cycle.
For testing, configuration management may involve ensuring that:
- All items of testware are identified, version controlled, tracked for changes, related to each other and related to development items (test objects), so that traceability can be maintained throughout the test process.
- All identified documents and software items are referenced unambiguously in the test documentation.

For the tester, configuration management helps to uniquely identify (and to reproduce) the tested item, the test documents, the tests and the test harness.
During test planning, the configuration management procedures and infrastructure (tools) should be chosen, documented and implemented.

5) Risk and testing
a) Project risks
Project risks are the risks that surround the project's capability to deliver its objectives, such as:
Organizational factors:
- skill and staff shortages;
- personnel and training issues;
- political issues, such as:
  - problems with testers communicating their needs and test results;
  - failure to follow up on information found in testing and reviews (e.g. not improving development and testing practices);
- improper attitude toward, or expectations of, testing (e.g. not appreciating the value of finding defects during testing).
Technical issues:
- problems in defining the right requirements;
- the extent to which requirements can be met given existing constraints;
- the quality of the design, code and tests.
Supplier issues:
- failure of a third party;
- contractual issues.

b) Product risks
Potential failure areas (adverse future events or hazards) in the software or system are known as product risks, as they are a risk to the quality of the product, such as:
- Failure-prone software delivered.
- The potential that the software/hardware could cause harm to an individual or company.
- Poor software characteristics (e.g. functionality, reliability, usability and performance).
- Software that does not perform its intended functions.

Risks are used to decide where to start testing and where to test more; testing is used to reduce the risk of an adverse effect occurring, or to reduce the impact of an adverse effect. Product risks are a special type of risk to the success of a project. Testing as a risk-control activity provides feedback about the residual risk by measuring the effectiveness of critical defect removal and of contingency plans.

A risk-based approach to testing provides proactive opportunities to reduce the levels of product risk, starting in the initial stages of a project. It involves the identification of product risks and their use in guiding test planning and control, specification, preparation and execution of tests. In a risk-based approach the risks identified may be used to:
- Determine the test techniques to be employed.
- Determine the extent of testing to be carried out.
- Prioritize testing in an attempt to find the critical defects as early as possible.
- Determine whether any non-testing activities could be employed to reduce risk (e.g. providing training to inexperienced designers).

Risk-based testing draws on the collective knowledge and insight of the project stakeholders to determine the risks and the levels of testing required to address those risks. To ensure that the chance of a product failure is minimized, risk management activities provide a disciplined approach to:
- Assess (and reassess on a regular basis) what can go wrong (risks).
- Determine what risks are important to deal with.
- Implement actions to deal with those risks.

In addition, testing may support the identification of new risks, may help to determine what risks should be reduced, and may lower uncertainty about risks.

6) Incident management
Since one of the objectives of testing is to find defects, the discrepancies between actual and expected outcomes need to be logged as incidents. Incidents should be tracked from discovery and classification to correction and confirmation of the solution. In order to manage all incidents to completion, an organization should establish a process and rules for classification.
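Tracking an incident "from discovery and classification to correction and confirmation" amounts to a small status state machine. The sketch below uses the status values listed among the incident-report details (open, deferred, duplicate, waiting to be fixed, fixed awaiting retest, closed); the allowed transitions are an illustrative assumption, not part of the syllabus:

```python
# Illustrative incident-status tracker: enforces which status changes
# are allowed, as a defect management system would.
ALLOWED = {
    "open":                  {"deferred", "duplicate", "waiting to be fixed"},
    "waiting to be fixed":   {"fixed awaiting retest"},
    "fixed awaiting retest": {"closed", "open"},   # retest passed / failed
    "deferred":              {"open"},
    "duplicate":             set(),
    "closed":                set(),
}

class Incident:
    def __init__(self, summary):
        self.summary = summary
        self.status = "open"
        self.history = ["open"]        # change history, as the report requires

    def move(self, new_status):
        if new_status not in ALLOWED[self.status]:
            raise ValueError(f"illegal transition {self.status} -> {new_status}")
        self.status = new_status
        self.history.append(new_status)

inc = Incident("component interface uses mismatched units")
inc.move("waiting to be fixed")
inc.move("fixed awaiting retest")
inc.move("closed")
assert inc.history == ["open", "waiting to be fixed",
                       "fixed awaiting retest", "closed"]
```

Testing such status tracking is exactly the situation where state transition testing (see question 3 in this chapter's question set) is the natural technique.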
Incidents may be raised during development, review, testing or use of a software product. They may be raised for issues in code or the working system, or in any type of documentation, including requirements, development documents, test documents, and user information such as "Help" or installation guides.

Incident reports have the following objectives:
- Provide developers and other parties with feedback about the problem to enable identification, isolation and correction as necessary.
- Provide test leaders with a means of tracking the quality of the system under test and the progress of the testing.
- Provide ideas for test process improvement.

Details of the incident report may include:
- Date of issue, issuing organization, and author.
- Expected and actual results.
- Identification of the test item (configuration item) and environment.
- Software or system life cycle process in which the incident was observed.
- Description of the incident to enable reproduction and resolution, including logs, database dumps or screenshots.
- Scope or degree of impact on stakeholders' interests.
- Severity of the impact on the system.
- Urgency/priority to fix.
- Status of the incident (e.g. open, deferred, duplicate, waiting to be fixed, fixed awaiting retest, closed).
- Conclusions, recommendations and approvals.
- Global issues, such as other areas that may be affected by a change resulting from the incident.
- Change history, such as the sequence of actions taken by project team members with respect to the incident to isolate, repair, and confirm it as fixed.
- References, including the identity of the test case specification that revealed the problem.

Questions
1) The following list contains risks that have been identified for a software product to be developed.
Which of these risks is an example of a product risk?
a) Not enough qualified testers to complete the planned tests
b) Software delivery is behind schedule
c) Threat to a patient's life
d) A third-party supplier does not supply as stipulated
2) Which set of metrics can be used for monitoring test execution?
a) Number of detected defects, testing cost
b) Number of residual defects in the test object
c) Percentage of completed tasks in the preparation of the test environment; test cases prepared
d) Number of test cases run/not run; test cases passed/failed

3) A defect management system shall keep track of the status of every defect registered and enforce the rules about changing these states. If your task is to test the status tracking, which method would be best?
a) Logic-based testing
b) Use-case-based testing
c) State transition testing
d) Systematic testing according to the V-model

4) Why can a tester be dependent on configuration management?
a) Because configuration management assures that we know the exact version of the testware and the test object
b) Because test execution is not allowed to proceed without the consent of the change control board
c) Because changes in the test object are always subject to configuration management
d) Because configuration management assures the right configuration of the test tools

5) Which test items should be put under configuration management?
a) The test object, the test material and the test environment
b) The problem reports and the test material
c) Only the test objects; the test cases need to be adapted during agile testing
d) The test object and the test material

6) Which of the following can be a root cause of a bug in a software product?
(I) The project had incomplete procedures for configuration management.
(II) The time schedule to develop a certain component was cut.
(III) The specification was unclear.
(IV) Use of the code standard was not followed up.
(V) The testers were not certified.
a) (I) and (II) are correct
b) (I) through (IV) are correct
c) (III) through (V) are correct
d) (I), (II) and (IV) are correct

7) Which of the following is most often considered a component interface bug?
a) For two components exchanging data, one component used metric units, the other one used British units
b) The system is difficult to use due to a too complicated terminal input structure
c) The messages for user input errors are misleading and not helpful for understanding the input error cause
d) Under high load, the system does not provide enough open ports to connect to

8) Which of the following project inputs influence testing?
(I) Contractual requirements
(II) Legal requirements
(III) Industry standards
(IV) Application risk
(V) Project size
a) (I) through (III) are correct
b) All alternatives are correct
c) (II) and (V) are correct
d) (I), (III) and (V) are correct

9) What is the purpose of test exit criteria in the test plan?
a) To specify when to stop the testing activity
b) To set the criteria used in generating test inputs
c) To ensure that the test case specification is complete
d) To know when a specific test has finished its execution

10) Which of the following items need not be given in an incident report?
a) The version number of the test object
b) The test data and the environment used
c) Identification of the test case that failed
d) The location and instructions on how to correct the fault

11) Why is it necessary to define a test strategy?
a) As there are many different ways to test software, thought must be given to deciding what will be the most effective way to test the project at hand
b) Starting testing without prior planning leads to a chaotic and inefficient test project
c) A strategy is needed to inform project management of how the test team will schedule the test cycles
d) Software failure may cause loss of money, time, business reputation, and in extreme cases injury and death; it is therefore critical to have a proper test strategy in place

12) The IEEE 829 test plan documentation standard contains all of the following except:
a) test items
b) test deliverables
c) test tasks
d) test environment
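Finally, the execution-monitoring metrics from section 3a (test cases run/not run, passed/failed), which questions 2 and 9 touch on, reduce to simple arithmetic over a result log. A minimal sketch with invented test-case IDs:

```python
# Illustrative test-execution summary, computing the monitoring metrics
# named in section 3a from a per-test-case result log.
results = {
    "TC-01": "passed", "TC-02": "failed", "TC-03": "passed",
    "TC-04": "not run", "TC-05": "passed",
}

run = [r for r in results.values() if r != "not run"]
summary = {
    "total": len(results),
    "run": len(run),
    "not_run": len(results) - len(run),
    "passed": run.count("passed"),
    "failed": run.count("failed"),
    "pass_rate_pct": round(100 * run.count("passed") / len(run), 1),
}
assert summary == {"total": 5, "run": 4, "not_run": 1,
                   "passed": 3, "failed": 1, "pass_rate_pct": 75.0}
```

Figures like these feed test reporting (section 3b) and can serve as exit criteria, e.g. "all planned test cases run and at least 95% passed".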