Software techniques
Transcript

  • 1. Software testing
Software testing is an investigation conducted to provide stakeholders with information about the quality of the product or service under test. Software testing can also provide an objective, independent view of the software to allow the business to appreciate and understand the risks of software implementation. Test techniques include, but are not limited to, the process of executing a program or application with the intent of finding software bugs (errors or other defects).
Software testing can be stated as the process of validating and verifying that a software program/application/product:
1. meets the requirements that guided its design and development;
2. works as expected; and
3. can be implemented with the same characteristics.
Software testing, depending on the testing method employed, can be implemented at any time in the development process. However, most of the test effort occurs after the requirements have been defined and the coding process has been completed. As such, the methodology of the test is governed by the software development methodology adopted.
Different software development models will focus the test effort at different points in the development process. Newer development models, such as Agile, often employ test-driven development and place an increased portion of the testing in the hands of the developer, before it reaches a formal team of testers. In a more traditional model, most of the test execution occurs after the requirements have been defined and the coding process has been completed.
  • 2. Overview
Testing can never completely identify all the defects within software. Instead, it furnishes a criticism or comparison that compares the state and behavior of the product against oracles: principles or mechanisms by which someone might recognize a problem. These oracles may include (but are not limited to) specifications, contracts, comparable products, past versions of the same product, inferences about intended or expected purpose, user or customer expectations, relevant standards, applicable laws, or other criteria.
Every software product has a target audience. For example, the audience for video game software is completely different from that for banking software. Therefore, when an organization develops or otherwise invests in a software product, it can assess whether the software product will be acceptable to its end users, its target audience, its purchasers, and other stakeholders. Software testing is the process of attempting to make this assessment.
A study conducted by NIST in 2002 reported that software bugs cost the U.S. economy $59.5 billion annually. More than a third of this cost could be avoided if better software testing were performed.
  • 3. History
The separation of debugging from testing was initially introduced by Glenford J. Myers in 1979. Although his attention was on breakage testing ("a successful test is one that finds a bug"), it illustrated the desire of the software engineering community to separate fundamental development activities, such as debugging, from that of verification. Dave Gelperin and William C. Hetzel classified in 1988 the phases and goals in software testing in the following stages:
• Until 1956 - Debugging oriented
• 1957–1978 - Demonstration oriented
• 1979–1982 - Destruction oriented
• 1983–1987 - Evaluation oriented
• 1988–2000 - Prevention oriented
  • 4. Software testing topics
Scope
A primary purpose of testing is to detect software failures so that defects may be discovered and corrected. Testing cannot establish that a product functions properly under all conditions, but can only establish that it does not function properly under specific conditions. The scope of software testing often includes examination of code, execution of that code in various environments and conditions, and examination of the aspects of code: does it do what it is supposed to do and do what it needs to do? In the current culture of software development, a testing organization may be separate from the development team. There are various roles for testing team members. Information derived from software testing may be used to correct the process by which software is developed.
  • 5. Functional vs. non-functional testing
Functional testing refers to activities that verify a specific action or function of the code. These are usually found in the code requirements documentation, although some development methodologies work from use cases or user stories. Functional tests tend to answer the question of "can the user do this" or "does this particular feature work."
Non-functional testing refers to aspects of the software that may not be related to a specific function or user action, such as scalability or other performance, behavior under certain constraints, or security. Non-functional requirements tend to be those that reflect the quality of the product, particularly in the context of the suitability perspective of its users.
Defects and failures
Not all software defects are caused by coding errors. One common source of expensive defects is requirement gaps, e.g., unrecognized requirements that result in errors of omission by the program designer.[15] A common source of requirements gaps is non-functional requirements such as testability, scalability, maintainability, usability, performance, and security.
Software faults occur through the following process: a programmer makes an error (mistake), which results in a defect (fault, bug) in the software source code. If this defect is executed, in certain situations the system will produce wrong results, causing a failure.[16] Not all defects will necessarily result in failures. For example, defects in dead code will never result in failures. A defect can turn into a failure when the environment is changed. Examples of such changes in environment include the software being run on a new computer hardware platform, alterations in source data, or interacting with different software.[16] A single defect may result in a wide range of failure symptoms.
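As a minimal illustration of this error/defect/failure chain (the function and inputs below are invented for this transcript, not taken from the original slides), the Python sketch shows a defect that lies dormant until a particular situation executes it:

    # Hypothetical example: a defect that only becomes a failure
    # for one class of input, so ordinary use may never expose it.

    def average(values):
        # Defect: division by len(values) fails for an empty list.
        # For every non-empty list the function is correct, so the
        # fault stays hidden until the empty-input situation occurs.
        return sum(values) / len(values)

    print(average([2, 4, 6]))   # 4.0 -- the defect does not manifest
    print(average([]))          # ZeroDivisionError -- defect becomes a failure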
  • 6. Finding faults early
It is commonly believed that the earlier a defect is found, the cheaper it is to fix. The table on the next slide shows the cost of fixing a defect depending on the stage at which it was found. For example, if a problem in the requirements is found only post-release, then it would cost 10–100 times more to fix than if it had already been found by the requirements review. Modern continuous deployment practices and cloud-based services may cost less for re-deployment and maintenance than in the past.
Compatibility
A common cause of software failure (real or perceived) is a lack of compatibility with other application software, operating systems (or operating system versions, old or new), or target environments that differ greatly from the original (such as a terminal or GUI application intended to be run on the desktop now being required to become a web application, which must render in a web browser). For example, in the case of a lack of backward compatibility, this can occur because the programmers develop and test software only on the latest version of the target environment, which not all users may be running. This results in the unintended consequence that the latest work may not function on earlier versions of the target environment, or on older hardware that earlier versions of the target environment were capable of using. Sometimes such issues can be fixed by proactively abstracting operating system functionality into a separate program module or library.
  • 7. Cost to fix a defect, by the time it was introduced versus the time it was detected:

Time introduced    | Requirements | Architecture | Construction | System test | Post-release
Requirements       | 1x           | 3x           | 5-10x        | 10x         | 10-100x
Architecture       | -            | 1x           | 10x          | 15x         | 25-100x
Construction       | -            | -            | 1x           | 10x         | 10-25x
  • 8. Input combinations and preconditions
A very fundamental problem with software testing is that testing under all combinations of inputs and preconditions (initial state) is not feasible, even with a simple product. This means that the number of defects in a software product can be very large, and defects that occur infrequently are difficult to find in testing. More significantly, non-functional dimensions of quality (how it is supposed to be versus what it is supposed to do), such as usability, scalability, performance, compatibility, and reliability, can be highly subjective; something that constitutes sufficient value to one person may be intolerable to another.
Static vs. dynamic testing
There are many approaches to software testing. Reviews, walkthroughs, or inspections are considered static testing, whereas actually executing programmed code with a given set of test cases is referred to as dynamic testing. Static testing can be (and unfortunately in practice often is) omitted. Dynamic testing takes place when the program itself is used for the first time (which is generally considered the beginning of the testing stage). Dynamic testing may begin before the program is 100% complete in order to test particular sections of code (modules or discrete functions). Typical techniques for this are either using stubs/drivers or execution from a debugger environment. For example, spreadsheet programs are, by their very nature, tested to a large extent interactively ("on the fly"), with results displayed immediately after each calculation or text manipulation.
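The stub/driver technique mentioned above can be sketched briefly. In this hypothetical example (all names invented for illustration), a pricing module is tested before its tax-service dependency exists: a stub stands in for the missing service and a small driver exercises the module directly:

    def compute_total(net_amount, tax_service):
        """Module under test: adds tax obtained from a collaborator."""
        return net_amount + tax_service.tax_for(net_amount)

    class TaxServiceStub:
        """Stub: returns a canned value in place of the unfinished service."""
        def tax_for(self, amount):
            return round(amount * 0.20, 2)  # fixed 20% rate for the test

    def driver():
        """Driver: invokes the module directly, outside any full application."""
        total = compute_total(100.0, TaxServiceStub())
        assert total == 120.0, f"unexpected total: {total}"
        print("driver passed:", total)

    if __name__ == "__main__":
        driver()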
  • 9. Software verification and validation
Software testing is used in association with verification and validation:
• Verification: Have we built the software right? (i.e., does it match the specification?)
• Validation: Have we built the right software? (i.e., is this what the customer wants?)
The terms verification and validation are commonly used interchangeably in the industry; it is also common to see these two terms incorrectly defined. According to the IEEE Standard Glossary of Software Engineering Terminology:
Verification is the process of evaluating a system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase.
Validation is the process of evaluating a system or component during or at the end of the development process to determine whether it satisfies specified requirements.
According to the ISO 9000 standard:
Verification is confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled.
Validation is confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled.
  • 10. Software quality assurance (SQA)
Though controversial, software testing is a part of the software quality assurance (SQA) process. In SQA, software process specialists and auditors are concerned with the software development process rather than just the artifacts such as documentation, code, and systems. They examine and change the software engineering process itself to reduce the number of faults that end up in the delivered software: the so-called defect rate.
What constitutes an "acceptable defect rate" depends on the nature of the software; a flight simulator video game would have a much higher defect tolerance than software for an actual airplane.
Although there are close links with SQA, testing departments often exist independently, and there may be no SQA function in some companies.
Software testing is a task intended to detect defects in software by contrasting a computer program's expected results with its actual results for a given set of inputs. By contrast, QA (quality assurance) is the implementation of policies and procedures intended to prevent defects from occurring in the first place.
  • 11. The software testing team
Software testing can be done by software testers. Until the 1980s the term "software tester" was used generally, but later it was also seen as a separate profession. Regarding the periods and the different goals in software testing, different roles have been established: manager, test lead, test designer, tester, automation developer, and test administrator.
  • 12. Testing methods
The box approach
Software testing methods are traditionally divided into white- and black-box testing. These two approaches are used to describe the point of view that a test engineer takes when designing test cases.
White-box testing
Main article: White-box testing
White-box testing is when the tester has access to the internal data structures and algorithms, including the code that implements these.
  • 13. Types of white-box testing
The following types of white-box testing exist:
• API testing (application programming interface) - testing of the application using public and private APIs
• Code coverage - creating tests to satisfy some criteria of code coverage (e.g., the test designer can create tests to cause all statements in the program to be executed at least once)
• Fault injection methods - improving the coverage of a test by introducing faults to test code paths
• Mutation testing methods
• Static testing - all types
  • 14. Test coverage
White-box testing methods can also be used to evaluate the completeness of a test suite that was created with black-box testing methods. This allows the software team to examine parts of a system that are rarely tested and ensures that the most important function points have been tested.
Two common forms of code coverage are:
• Function coverage, which reports on functions executed
• Statement coverage, which reports on the number of lines executed to complete the test
Both return a code coverage metric, measured as a percentage.
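As a rough illustration of statement coverage (a hand-rolled sketch using Python's sys.settrace; real tools such as coverage.py are far more capable, and the traced function is invented for this example):

    import sys

    def classify(n):
        if n < 0:
            return "negative"
        return "non-negative"

    executed = set()

    def tracer(frame, event, arg):
        # Record each source line executed inside classify.
        if event == "line" and frame.f_code.co_name == "classify":
            executed.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)
    classify(5)          # exercises only the non-negative path
    sys.settrace(None)

    # The three body lines of classify sit just after its def line.
    first = classify.__code__.co_firstlineno
    body = {first + 1, first + 2, first + 3}
    print(f"statement coverage: {100 * len(executed & body) / len(body):.0f}%")
    # Prints 67%: the negative branch never ran, so a test with n < 0
    # is still missing from the suite.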
  • 15. Black-box testing
Black-box testing treats the software as a "black box", without any knowledge of internal implementation. Black-box testing methods include: equivalence partitioning, boundary value analysis, all-pairs testing, fuzz testing, model-based testing, exploratory testing and specification-based testing.
Specification-based testing: Specification-based testing aims to test the functionality of software according to the applicable requirements.[22] Thus, the tester inputs data into, and only sees the output from, the test object. This level of testing usually requires thorough test cases to be provided to the tester, who then can simply verify that for a given input, the output value (or behavior) either "is" or "is not" the same as the expected value specified in the test case.
Specification-based testing is necessary, but it is insufficient to guard against certain risks.
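Two of these methods, equivalence partitioning and boundary value analysis, can be sketched against a hypothetical specification ("ages 0-120 are valid; under 18 is a minor, 18-64 an adult, 65 and over a senior"); the tester derives every case from the specification alone, never from the code:

    def age_category(age):
        if not 0 <= age <= 120:
            raise ValueError("age out of range")
        if age < 18:
            return "minor"
        if age < 65:
            return "adult"
        return "senior"

    # Equivalence partitioning: one representative value per partition.
    assert age_category(10) == "minor"
    assert age_category(40) == "adult"
    assert age_category(80) == "senior"

    # Boundary value analysis: values at and around each boundary.
    for age, expected in [(0, "minor"), (17, "minor"), (18, "adult"),
                          (64, "adult"), (65, "senior"), (120, "senior")]:
        assert age_category(age) == expected, (age, expected)

    # Invalid partitions: values just outside the legal range must be rejected.
    for bad in (-1, 121):
        try:
            age_category(bad)
        except ValueError:
            pass
        else:
            raise AssertionError(f"{bad} accepted but should be rejected")

    print("all black-box cases passed")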
  • 16. Advantages and disadvantages: The black-box tester has no "bonds" with the code, and a tester's perception is very simple: the code must have bugs. Using the principle, "Ask and you shall receive," black-box testers find bugs where programmers do not. On the other hand, black-box testing has been said to be "like a walk in a dark labyrinth without a flashlight," because the tester doesn't know how the software being tested was actually constructed. As a result, there are situations when (1) a tester writes many test cases to check something that could have been tested by only one test case, and/or (2) some parts of the back-end are not tested at all.
Therefore, black-box testing has the advantage of "an unaffiliated opinion" on the one hand, and the disadvantage of "blind exploring" on the other.
  • 17. Grey-box testing
Grey-box testing (American spelling: gray-box testing) involves having knowledge of internal data structures and algorithms for purposes of designing tests, while executing those tests at the user, or black-box, level. The tester is not required to have full access to the software's source code. Manipulating input data and formatting output do not qualify as grey-box, because the input and output are clearly outside of the "black box" that we are calling the system under test. This distinction is particularly important when conducting integration testing between two modules of code written by two different developers, where only the interfaces are exposed for test. However, modifying a data repository does qualify as grey-box, as the user would not normally be able to change the data outside of the system under test. Grey-box testing may also include reverse engineering to determine, for instance, boundary values or error messages.
  • 18. By knowing the underlying concepts of how the software works, the tester makes better-informed testing choices while testing the software from outside. Typically, a grey-box tester will be permitted to set up a testing environment, for instance by seeding a database, and can observe the state of the product being tested after performing certain actions. For instance, in testing a database product, the tester may fire an SQL query on the database and then observe the database to ensure that the expected changes have been reflected. Grey-box testing implements intelligent test scenarios based on limited information. This particularly applies to data type handling, exception handling, and so on.
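A minimal grey-box sketch along these lines, with a toy schema and SQLite standing in for a real database product (all names are hypothetical): the system is driven through its public interface, then the database state is verified directly with SQL:

    import sqlite3

    def register_user(conn, name):
        """Public interface of the (toy) system under test."""
        conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
        conn.commit()

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

    register_user(conn, "alice")              # black-box style action ...

    row = conn.execute(                       # ... grey-box verification
        "SELECT COUNT(*) FROM users WHERE name = ?", ("alice",)).fetchone()
    assert row[0] == 1, "user row was not written as expected"
    print("database state verified")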
  • 19. Visual testing
The aim of visual testing is to provide developers with the ability to examine what was happening at the point of software failure by presenting the data in such a way that the developer can easily find the information he or she requires, and the information is expressed clearly.
At the core of visual testing is the idea that showing someone a problem (or a test failure), rather than just describing it, greatly increases clarity and understanding. Visual testing therefore requires the recording of the entire test process, capturing everything that occurs on the test system in video format. Output videos are supplemented by real-time tester input via picture-in-a-picture webcam and audio commentary from microphones.
Visual testing provides a number of advantages. The quality of communication is increased dramatically because testers can show the problem (and the events leading up to it) to the developer as opposed to just describing it, and the need to replicate test failures will cease to exist in many cases. The developer will have all the evidence he requires of a test failure and can instead focus on the cause of the fault and how it should be fixed.
  • 20. Visual testing is particularly well-suited for environments that deploy agile methods in their development of software, since agile methods require greater communication between testers and developers and collaboration within small teams.
Ad hoc testing and exploratory testing are important methodologies for checking software integrity, because they require less preparation time to implement, while important bugs can be found quickly. In ad hoc testing, where testing takes place in an improvised, impromptu way, the ability of a test tool to visually record everything that occurs on a system becomes very important.
Visual testing is gathering recognition in customer acceptance and usability testing, because the test can be used by many individuals involved in the development process. For the customer, it becomes easy to provide detailed bug reports and feedback, and for program users, visual testing can record user actions on screen, as well as their voice and image, to provide a complete picture at the time of software failure for the developer.
  • 21. Testing levels
Tests are frequently grouped by where they are added in the software development process, or by the level of specificity of the test. The main levels during the development process, as defined by the SWEBOK guide, are unit, integration, and system testing, which are distinguished by the test target without implying a specific process model. Other test levels are classified by the testing objective.
  • 22. Unit testing
Main article: Unit testing
Unit testing, also known as component testing, refers to tests that verify the functionality of a specific section of code, usually at the function level. In an object-oriented environment, this is usually at the class level, and the minimal unit tests include the constructors and destructors.
These types of tests are usually written by developers as they work on code (white-box style), to ensure that the specific function is working as expected. One function might have multiple tests, to catch corner cases or other branches in the code. Unit testing alone cannot verify the functionality of a piece of software, but rather is used to ensure that the building blocks the software uses work independently of each other.
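A minimal developer-written unit test in this spirit (the leap-year function and its corner cases are illustrative, not from the slides): one function, several tests, with the century cases catching branches a single happy-path test would miss:

    import unittest

    def leap_year(year):
        """True for Gregorian leap years."""
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    class LeapYearTest(unittest.TestCase):
        def test_typical_leap_year(self):
            self.assertTrue(leap_year(2024))

        def test_typical_common_year(self):
            self.assertFalse(leap_year(2023))

        def test_century_corner_cases(self):
            # 1900 is not a leap year, but 2000 is: these corner cases
            # exercise the century branches of the condition.
            self.assertFalse(leap_year(1900))
            self.assertTrue(leap_year(2000))

    if __name__ == "__main__":
        unittest.main()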
  • 23. Integration testing
Main article: Integration testing
Integration testing is any type of software testing that seeks to verify the interfaces between components against a software design. Software components may be integrated in an iterative way or all together ("big bang"). Normally the former is considered a better practice, since it allows interface issues to be localized more quickly and fixed.
Integration testing works to expose defects in the interfaces and interaction between integrated components (modules). Progressively larger groups of tested software components corresponding to elements of the architectural design are integrated and tested until the software works as a system.
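An illustrative integration test with two hypothetical units: each function below could pass its own unit tests, yet the interface between them (the order and shape of the values crossing it) can still be wrong, which is exactly what the integration test exercises:

    import unittest

    def parse_pair(text):
        """Unit A: parses 'x,y' into a tuple of floats."""
        x, y = text.split(",")
        return float(x), float(y)

    def slope(p1, p2):
        """Unit B: slope of the line through two (x, y) points."""
        (x1, y1), (x2, y2) = p1, p2
        return (y2 - y1) / (x2 - x1)

    class SlopeIntegrationTest(unittest.TestCase):
        def test_parse_then_compute(self):
            # Exercises the data flowing across the A -> B interface.
            p1, p2 = parse_pair("0,0"), parse_pair("2,4")
            self.assertEqual(slope(p1, p2), 2.0)

    if __name__ == "__main__":
        unittest.main()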
  • 24. System testing
Main article: System testing
System testing tests a completely integrated system to verify that it meets its requirements.
System integration testing
Main article: System integration testing
System integration testing verifies that a system is integrated to any external or third-party systems defined in the system requirements.
  • 25. Objectives of testing
Regression testing
Main article: Regression testing
Regression testing focuses on finding defects after a major code change has occurred. Specifically, it seeks to uncover software regressions, or old bugs that have come back. Such regressions occur whenever software functionality that was previously working correctly stops working as intended. Typically, regressions occur as an unintended consequence of program changes, when the newly developed part of the software collides with the previously existing code. Common methods of regression testing include re-running previously run tests and checking whether previously fixed faults have re-emerged. The depth of testing depends on the phase in the release process and the risk of the added features. Tests can either be complete, for changes added late in the release or deemed to be risky, or very shallow, consisting of positive tests on each feature, if the changes are early in the release or deemed to be of low risk.
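One common shape for such a test is to pin a previously fixed fault so that re-running the suite detects any re-emergence; the bug and fix below are invented for illustration:

    import unittest

    def normalize_whitespace(s):
        # An earlier version used s.replace("  ", " "), which missed runs
        # of three or more spaces; split/join was the fix.
        return " ".join(s.split())

    class WhitespaceRegressionTest(unittest.TestCase):
        def test_long_space_runs_stay_fixed(self):
            # This exact input once produced "a  b"; the test guards the
            # fix and fails again if the old defect ever comes back.
            self.assertEqual(normalize_whitespace("a    b"), "a b")

    if __name__ == "__main__":
        unittest.main()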
  • 26. Acceptance testing
Main article: Acceptance testing
Acceptance testing can mean one of two things:
1. A smoke test is used as an acceptance test prior to introducing a new build to the main testing process, i.e. before integration or regression.
2. Acceptance testing performed by the customer, often in their lab environment on their own hardware, is known as user acceptance testing (UAT). Acceptance testing may be performed as part of the hand-off process between any two phases of development.
  • 27. Alpha testing
Alpha testing is simulated or actual operational testing by potential users/customers or an independent test team at the developers' site. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing, before the software goes to beta testing.[33]
Beta testing
Beta testing comes after alpha testing and can be considered a form of external user acceptance testing. Versions of the software, known as beta versions, are released to a limited audience outside of the programming team. The software is released to groups of people so that further testing can ensure the product has few faults or bugs. Sometimes, beta versions are made available to the open public to increase the feedback field to a maximal number of future users.
  • 28. Non-functional testing
Special methods exist to test non-functional aspects of software. In contrast to functional testing, which establishes the correct operation of the software (for example, that it matches the expected behavior defined in the design requirements), non-functional testing verifies that the software functions properly even when it receives invalid or unexpected inputs. Software fault injection, in the form of fuzzing, is an example of non-functional testing. Non-functional testing, especially for software, is designed to establish whether the device under test can tolerate invalid or unexpected inputs, thereby establishing the robustness of input validation routines as well as error-management routines. Various commercial non-functional testing tools are linked from the software fault injection page; there are also numerous open-source and free software tools available that perform non-functional testing.
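A toy fuzzing loop in this spirit (the parser under test is hypothetical): random, mostly invalid inputs are thrown at the unit, and anything other than a controlled rejection counts as a robustness failure:

    import random
    import string

    def parse_port(text):
        """Unit under test: parse a TCP port, rejecting bad input cleanly."""
        if not text.isdigit():
            raise ValueError("not a number")
        port = int(text)
        if not 0 < port < 65536:
            raise ValueError("out of range")
        return port

    random.seed(0)  # reproducible fuzz run
    alphabet = string.ascii_letters + string.digits + string.punctuation + " "

    for _ in range(10_000):
        candidate = "".join(random.choices(alphabet, k=random.randint(0, 12)))
        try:
            parse_port(candidate)
        except ValueError:
            pass  # controlled rejection is acceptable behavior
        # Any other exception would propagate and fail the fuzz run.

    print("fuzzed 10,000 inputs without an unhandled fault")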
  • 29. Software performance testing
Performance testing is in general executed to determine how a system or sub-system performs in terms of responsiveness and stability under a particular workload. It can also serve to investigate, measure, validate or verify other quality attributes of the system, such as scalability, reliability and resource usage.
Load testing is primarily concerned with testing that the system can continue to operate under a specific load, whether that be large quantities of data or a large number of users. This is generally referred to as software scalability. The related load testing activity, when performed as a non-functional activity, is often referred to as endurance testing. Volume testing is a way to test functionality. Stress testing is a way to test reliability under unexpected or rare workloads. Stability testing (often referred to as load or endurance testing) checks to see whether the software can continuously function well during an acceptable period.
There is little agreement on what the specific goals of performance testing are. The terms load testing, performance testing, reliability testing, and volume testing are often used interchangeably.
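A minimal sketch of a performance measurement under load (the workload, function, and figures are invented; a real performance test would assert against agreed targets such as "95% of requests under 50 ms"):

    import time

    def handle_request(n):
        """Stand-in for the operation whose responsiveness is measured."""
        return sum(i * i for i in range(n))

    latencies = []
    for _ in range(1_000):            # the "load": 1,000 back-to-back calls
        start = time.perf_counter()
        handle_request(10_000)
        latencies.append(time.perf_counter() - start)

    latencies.sort()
    avg = sum(latencies) / len(latencies)
    p95 = latencies[int(0.95 * len(latencies))]
    print(f"avg {avg * 1e3:.2f} ms, 95th percentile {p95 * 1e3:.2f} ms")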
  • 30. Usability testing
Usability testing is needed to check whether the user interface is easy to use and understand. It is concerned mainly with the use of the application.
Security testing
Security testing is essential for software that processes confidential data, to prevent system intrusion by hackers.
  • 31. Internationalization and localization
The general ability of software to be internationalized and localized can be automatically tested without actual translation, by using pseudolocalization (a small code sketch follows the failure list on the next slide). It will verify that the application still works, even after it has been translated into a new language or adapted for a new culture (such as different currencies or time zones).
Actual translation to human languages must be tested, too. Possible localization failures include:
• Software is often localized by translating a list of strings out of context, and the translator may choose the wrong translation for an ambiguous source string.
• Technical terminology may become inconsistent if the project is translated by several people without proper coordination or if the translator is imprudent.
• Literal word-for-word translations may sound inappropriate, artificial or too technical in the target language.
• Untranslated messages in the original language may be left hard coded in the source code.
  • 32.
• Some messages may be created automatically at run time, and the resulting string may be ungrammatical, functionally incorrect, misleading or confusing.
• Software may use a keyboard shortcut which has no function on the source language's keyboard layout, but is used for typing characters in the layout of the target language.
• Software may lack support for the character encoding of the target language.
• Fonts and font sizes which are appropriate in the source language may be inappropriate in the target language; for example, CJK characters may become unreadable if the font is too small.
• A string in the target language may be longer than the software can handle. This may make the string partly invisible to the user or cause the software to crash or malfunction.
• Software may lack proper support for reading or writing bi-directional text.
• Software may display images with text that wasn't localized.
• Localized operating systems may have differently-named system configuration files and environment variables and different formats for date and currency.
To avoid these and other localization problems, a tester who knows the target language must run the program with all the possible use cases for translation to see if the messages are readable, translated correctly in context, and don't cause failures.
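A small sketch of the pseudolocalization idea from the previous slide (the transformation rules here are one common choice, not a standard): strings are machine-mangled, with no real translation, so hard-coded or truncation-prone text stands out immediately:

    # Accent the vowels, pad by ~30% to mimic longer target languages,
    # and bracket the string so truncation and concatenation are visible.
    ACCENTED = str.maketrans("aeiouAEIOU", "áéíóúÁÉÍÓÚ")

    def pseudolocalize(message):
        padded = message.translate(ACCENTED) + "~" * max(1, len(message) // 3)
        return f"[{padded}]"

    print(pseudolocalize("Save changes?"))   # [Sávé chángés?~~~~]
    # Any text still appearing unbracketed in the running UI never went
    # through the localization layer, i.e. it is hard-coded.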
  • 33. Destructive testing
Main article: Destructive testing
Destructive testing attempts to cause the software or a sub-system to fail, in order to test its robustness.
  • 34. The testing process
Traditional CMMI or waterfall development model
A common practice of software testing is that testing is performed by an independent group of testers after the functionality is developed, before it is shipped to the customer. This practice often results in the testing phase being used as a project buffer to compensate for project delays, thereby compromising the time devoted to testing.[36]
Another practice is to start software testing at the same moment the project starts, and to continue it as a continuous process until the project finishes.
Further information: Capability Maturity Model Integration and Waterfall model
  • 35. Agile or Extreme development model
In contrast, some emerging software disciplines, such as extreme programming and the agile software development movement, adhere to a "test-driven software development" model. In this process, unit tests are written first, by the software engineers (often with pair programming in the extreme programming methodology). Of course these tests fail initially, as they are expected to. Then as code is written it passes incrementally larger portions of the test suites. The test suites are continuously updated as new failure conditions and corner cases are discovered, and they are integrated with any regression tests that are developed. Unit tests are maintained along with the rest of the software source code and generally integrated into the build process (with inherently interactive tests being relegated to a partially manual build acceptance process). The ultimate goal of this test process is to achieve continuous integration where software updates can be published to the public frequently.
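A compressed illustration of one test-driven step (using the familiar FizzBuzz exercise for brevity; nothing here is from the slides): the test is written first, fails against the initial stub, and the implementation is then grown until it passes:

    import unittest

    class FizzBuzzTest(unittest.TestCase):
        # Written before the code exists: against the initial stub,
        # this test fails, exactly as test-driven development expects.
        def test_multiples(self):
            self.assertEqual(fizzbuzz(3), "Fizz")
            self.assertEqual(fizzbuzz(5), "Buzz")
            self.assertEqual(fizzbuzz(15), "FizzBuzz")
            self.assertEqual(fizzbuzz(7), "7")

    # First iteration would be: def fizzbuzz(n): raise NotImplementedError
    # The minimal implementation that makes the test pass:
    def fizzbuzz(n):
        out = ("Fizz" if n % 3 == 0 else "") + ("Buzz" if n % 5 == 0 else "")
        return out or str(n)

    if __name__ == "__main__":
        unittest.main()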
  • 36. A sample testing cycle
Although variations exist between organizations, there is a typical cycle for testing.[40] The sample below is common among organizations employing the Waterfall development model.
• Requirements analysis: Testing should begin in the requirements phase of the software development life cycle. During the design phase, testers work with developers in determining what aspects of a design are testable and with what parameters those tests work.
• Test planning: Test strategy, test plan, test bed creation. Since many activities will be carried out during testing, a plan is needed.
• Test development: Test procedures, test scenarios, test cases, test datasets, and test scripts to use in testing software.
• Test execution: Testers execute the software based on the plans and test documents, then report any errors found to the development team.
• Test reporting: Once testing is completed, testers generate metrics and make final reports on their test effort and whether or not the software tested is ready for release.
  • 37.
• Test result analysis: Or defect analysis, is done by the development team, usually along with the client, in order to decide what defects should be assigned, fixed, rejected (i.e., the software is found to be working properly) or deferred to be dealt with later.
• Defect retesting: Once a defect has been dealt with by the development team, it is retested by the testing team. Also known as resolution testing.
• Regression testing: It is common to have a small test program built of a subset of tests, for each integration of new, modified, or fixed software, in order to ensure that the latest delivery has not ruined anything, and that the software product as a whole is still working correctly.
• Test closure: Once the test meets the exit criteria, activities such as capturing the key outputs, lessons learned, results, logs, and documents related to the project are archived and used as a reference for future projects.
  • 38. Automated testing
Main article: Test automation
Many programming groups are relying more and more on automated testing, especially groups that use test-driven development. There are many frameworks to write tests in, and continuous integration software will run tests automatically every time code is checked into a version control system.
While automation cannot reproduce everything that a human can do (and all the ways they think of doing it), it can be very useful for regression testing. However, it does require a well-developed test suite of testing scripts in order to be truly useful.
  • 39. Testing tools
Program testing and fault detection can be aided significantly by testing tools and debuggers. Testing/debug tools include features such as:
• Program monitors, permitting full or partial monitoring of program code, including:
  - Instruction set simulators, permitting complete instruction-level monitoring and trace facilities
  - Program animation, permitting step-by-step execution and conditional breakpoints at source level or in machine code
  - Code coverage reports
• Formatted dump or symbolic debugging, tools allowing inspection of program variables on error or at chosen points
• Automated functional GUI testing tools, used to repeat system-level tests through the GUI
• Benchmarks, allowing run-time performance comparisons to be made
• Performance analysis (or profiling) tools that can help to highlight hot spots and resource usage
  • 40. Some of these features may be incorporated into an Integrated Development Environment (IDE).
• A regression testing technique is to have a standard set of tests, which cover existing functionality that results in persistent tabular data, and to compare pre-change data to post-change data, where there should not be differences, using a tool like diffkit. Differences detected indicate unexpected functionality changes, or "regression".
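A minimal stand-in for that data-diff idea (diffkit itself is a separate tool; the tabular outputs below are invented): the same query's pre-change and post-change rows are compared, and any difference is flagged as a suspected regression:

    # Row sets captured from the same query before and after a code change.
    pre_change = {("alice", 3), ("bob", 5)}
    post_change = {("alice", 3), ("bob", 6)}

    # Symmetric difference: rows present in one capture but not the other.
    regressions = pre_change ^ post_change
    if regressions:
        print("unexpected functionality change:", sorted(regressions))
    else:
        print("no regression detected")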
  • 41. Measurement in software testing
Usually, quality is constrained to such topics as correctness, completeness, and security, but can also include more technical requirements as described under the ISO standard ISO/IEC 9126, such as capability, reliability, efficiency, portability, maintainability, compatibility, and usability.
There are a number of frequently used software measures, often called metrics, which are used to assist in determining the state of the software or the adequacy of the testing.
  • 42. Testing artifacts
The software testing process can produce several artifacts.
Test plan
A test specification is called a test plan. The developers are well aware of what test plans will be executed, and this information is made available to management and the developers. The idea is to make them more cautious when developing their code or making additional changes. Some companies have a higher-level document called a test strategy.
Traceability matrix
A traceability matrix is a table that correlates requirements or design documents to test documents. It is used to change tests when related source documents are changed, and to select test cases for execution when planning for regression tests by considering requirement coverage.
  • 43. Test case
A test case normally consists of a unique identifier, requirement references from a design specification, preconditions, events, a series of steps (also known as actions) to follow, input, output, expected result, and actual result. Clinically defined, a test case is an input and an expected result.[41] This can be as pragmatic as "for condition x your derived result is y", whereas other test cases describe in more detail the input scenario and what results might be expected. It can occasionally be a series of steps (but often steps are contained in a separate test procedure that can be exercised against multiple test cases, as a matter of economy) but with one expected result or expected outcome. The optional fields are a test case ID, test step or order of execution number, related requirement(s), depth, test category, author, and check boxes for whether the test is automatable and has been automated. Larger test cases may also contain prerequisite states or steps, and descriptions. A test case should also contain a place for the actual result. These steps can be stored in a word processor document, spreadsheet, database, or other common repository. In a database system, you may also be able to see past test results, who generated the results, and what system configuration was used to generate those results. These past results would usually be stored in a separate table.
  • 44. Test script
A test script is a procedure, or programming code, that replicates user actions. Initially the term was derived from the product of work created by automated regression test tools. A test case serves as a baseline from which to create test scripts using a tool or a program.
Test suite
The most common term for a collection of test cases is a test suite. The test suite often also contains more detailed instructions or goals for each collection of test cases. It definitely contains a section where the tester identifies the system configuration used during testing. A group of test cases may also contain prerequisite states or steps, and descriptions of the following tests.
Test data
In most cases, multiple sets of values or data are used to test the same functionality of a particular feature. All the test values and changeable environmental components are collected in separate files and stored as test data. It is also useful to provide this data to the client along with the product or project.
Test harness
The software, tools, samples of data input and output, and configurations are all referred to collectively as a test harness.
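A short sketch of test data kept separate from test logic (the value table and converter are hypothetical; in practice the values might be loaded from a CSV or JSON file shipped alongside the suite):

    import unittest

    TEST_DATA = [            # in a real suite, loaded from an external file
        ("0.00", 0),
        ("19.99", 1999),
        ("100.50", 10050),
    ]

    def to_cents(price_text):
        """Unit under test: converts a decimal price string to cents."""
        whole, _, frac = price_text.partition(".")
        return int(whole) * 100 + int((frac + "00")[:2])

    class PriceDataTest(unittest.TestCase):
        def test_all_data_sets(self):
            # The same functionality is exercised once per data set.
            for text, expected in TEST_DATA:
                with self.subTest(text=text):
                    self.assertEqual(to_cents(text), expected)

    if __name__ == "__main__":
        unittest.main()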