Paper 06



Software Testing Methods and Techniques

Jovanović, Irena

Abstract—In this paper, the main testing methods and techniques are briefly described. A general classification is outlined: two testing methods – black box testing and white box testing – and their frequently used techniques. Black Box techniques: Equivalence Partitioning, Boundary Value Analysis, Cause-Effect Graphing Techniques, and Comparison Testing. White Box techniques: Basis Path Testing, Loop Testing, and Control Structure Testing. The classification of the IEEE Computer Society is also illustrated.

1. DEFINITION AND THE GOAL OF TESTING

The process of creating a program consists of the following phases (see [8]): 1. defining a problem; 2. designing a program; 3. building a program; 4. analyzing the performance of the program; and 5. final arranging of the product. According to this classification, software testing is a component of the third phase, and means checking whether the program gives correct and expected results for specified inputs.

Software testing (Figure 1) is an important component of software quality assurance, and many software organizations spend up to 40% of their resources on testing. For life-critical software (e.g., flight control), testing can be highly expensive. Because of that, many studies of risk analysis have been made. This term denotes the probability that a software project will experience undesirable events, such as schedule delays, cost overruns, or outright cancellation (see [9]); more about this can be found in [10].

There are many definitions of software testing, but one can define it briefly as: a process of executing a program with the goal of finding errors (see [3]).
So, testing means that one inspects the behavior of a program on a finite set of test cases (a set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement, see [11]) for which valued inputs always exist. In practice, the whole set of test cases is considered infinite, so theoretically there are too many test cases even for the simplest programs; testing could then require months and months to execute. So, how does one select the most proper set of test cases? In practice, various techniques are used for that: some are correlated with risk analysis, others with test engineering expertise.

Testing is an activity performed for evaluating software quality and for improving it. Hence, the goal of testing is the systematic detection of different classes of errors (an error can be defined as a human action that produces an incorrect result, see [12]) in a minimum amount of time and with a minimum amount of effort. We distinguish (see [2]):

Figure 1: Test Information Flow

Manuscript received May 26, 2008. I. M. Jovanovic is with DIV Inzenjering, d.o.o., Belgrade.
- Good test cases – have a good chance of finding a yet undiscovered error; and
- Successful test cases – uncover a new error.

In any case, a good test case is one which:
- has a high probability of finding an error;
- is not redundant;
- is "best of breed";
- is neither too simple nor too complex.

2. TESTING METHODS

Test cases are developed using various test techniques to achieve more effective testing. In this way, software completeness is provided, and the testing conditions with the greatest probability of finding errors are chosen. So, testers do not guess which test cases to choose: test techniques enable them to design testing conditions in a systematic way. Also, combining all sorts of existing test techniques yields better results than using just one test technique.

Software can be tested in two ways; in other words, one can distinguish two different methods:
1. Black box testing, and
2. White box testing.

White box testing is highly effective in detecting and resolving problems, because bugs (a bug or fault is a manifestation of an error in software, see [12]) can often be found before they cause trouble. We can briefly define this method as testing software with knowledge of the internal structure and coding inside the program (see [13]). White box testing is also called white box analysis, clear box testing, or clear box analysis. It is a strategy for software debugging (the process of locating and fixing bugs in computer program code or the engineering of a hardware device, see [14]) in which the tester has excellent knowledge of how the program components interact. This method can be used for Web services applications, though it is rarely practical for debugging in large systems and networks (see [14]).
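The contrast between the two methods can be sketched on a small example. The function below, `classify_triangle`, is a hypothetical program under test (not from the paper); the point is only how the test cases are chosen:

```python
def classify_triangle(a, b, c):
    """Hypothetical program under test: classify a triangle by its sides."""
    if a <= 0 or b <= 0 or c <= 0:
        raise ValueError("sides must be positive")
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Black box: the test case is chosen from the specification alone,
# without looking at the code.
assert classify_triangle(3, 4, 5) == "scalene"

# White box: test cases are chosen, with knowledge of the code,
# so that every branch above is exercised at least once.
assert classify_triangle(2, 2, 2) == "equilateral"
assert classify_triangle(2, 2, 3) == "isosceles"
try:
    classify_triangle(0, 1, 1)
except ValueError:
    pass  # the invalid-input branch is exercised
```

A black box tester would stop at specified input/output pairs; the white box tester adds cases until the internal structure (here, every `if` branch) has been covered.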
Besides, in [15], white box testing is considered a security testing (the process of determining that an information system protects data and maintains functionality as intended, see [6]) method that can be used to validate whether the code implementation follows the intended design, to validate implemented security functionality, and to uncover exploitable vulnerabilities (see [15]).

Black box testing is testing software based on output requirements and without any knowledge of the internal structure or coding of the program (see [16]). In other words, a black box is any device whose workings are not understood by or accessible to its user. For example, in telecommunications, a black box is a resistor connected to a phone line that makes it impossible for the telephone company's equipment to detect when a call has been answered. In data mining, a black box is an algorithm that doesn't provide an explanation of how it works. In film-making, a black box is a dedicated hardware device: equipment that is specifically used for a particular function. In the financial world, it is a computerized trading system that doesn't make its rules easily available.

In recent years, a third testing method has also been considered – gray box testing. It is defined as testing software while already having some knowledge of its underlying code or logic (see [17]). It relies on internal data structures and algorithms for designing the test cases more than black box testing, but less than white box testing. This method is important when conducting integration testing between two modules of code written by two different developers, where only the interfaces are exposed for test. It can also include reverse engineering to determine boundary values. Gray box testing is non-intrusive and unbiased because it doesn't require that the tester have access to the source code.

The main characteristics of, and a comparison between, white box testing and black box testing follow.

2.1. Black Box Testing Versus White Box Testing

Black Box Testing:
- Performs tests which exercise all functional requirements of a program;
- Finds the following errors:
  1. incorrect or missing functions;
  2. interface errors;
  3. errors in data structures or external database access;
  4. performance errors;
  5. initialization and termination errors.
- Advantages of this method:
  - the number of test cases is reduced to achieve reasonable testing;
  - the test cases can show the presence or absence of classes of errors.

White Box Testing:
- Considers the internal logical arrangement of software;
- The test cases exercise certain sets of conditions and loops;
- Advantages of this method:
  - all independent paths in a module will be exercised at least once;
  - all logical decisions will be exercised;
  - all loops will be executed at their boundaries;
  - internal data structures will be exercised to maintain their validity.

3. GENERAL CLASSIFICATION OF TEST TECHNIQUES

In this paper, the most important test techniques are briefly described, as shown in Figure 2.

Figure 2: General Classification of Test Techniques

3.1. Equivalence Partitioning

Summary: equivalence class

This technique divides the input domain of a program into equivalence classes. Equivalence classes are sets of valid or invalid states for input conditions, and can be defined in the following way:
1. An input condition specifies a range → one valid and two invalid equivalence classes are defined;
2. An input condition requires a specific value → one valid and two invalid equivalence classes are defined;
3. An input condition specifies a member of a set → one valid and one invalid equivalence class are defined;
4. An input condition is Boolean → one valid and one invalid equivalence class are defined.

Using this technique, one can obtain test cases which identify the classes of errors.

3.2. Boundary Value Analysis

Summary: complements equivalence partitioning

This technique is like Equivalence Partitioning, except that the test cases are created from the output domain as well as the input domain. One can form the test cases in the following way:
1. An input condition specifies a range bounded by values a and b → test cases should be made with values just above and just below a and b, respectively;
2. An input condition specifies various values → test cases should be produced to exercise the minimum and maximum numbers;
3. Rules 1 and 2 apply to output conditions.
If internal program data structures have prescribed boundaries, produce test cases that exercise those data structures at their boundaries.

(Figure 2 classifies the techniques as: Black Box – Equivalence Partitioning, Boundary Value Analysis, Cause-Effect Graphing Techniques, Comparison Testing; White Box – Basis Path Testing, Loop Testing, Control Structure Testing; together with Model-Based Testing and Fuzz Testing.)

3.3. Cause-Effect Graphing Techniques

Summary: translate
One uses this technique when one wants to translate a policy or procedure specified in a natural language into the software's language. The technique proceeds as follows: input conditions and actions are listed for a module ⇒ an identifier is allocated to each of them ⇒ a cause-effect graph is created ⇒ the graph is converted into a decision table ⇒ the rules of the table are converted into test cases.

3.4. Comparison Testing

Summary: independent versions of an application

This technique is used in situations when the reliability of software is critical and redundant software is produced. Software engineering teams produce independent versions of an application → each version can be tested with the same test data → so the same output can be ensured. The remaining black box test techniques are executed on the separate versions.

3.5. Fuzz Testing

Summary: random input

Fuzz testing is often called fuzzing, robustness testing, or negative testing. It was developed by Barton Miller at the University of Wisconsin in 1989. The technique feeds random input to an application. The main characteristics of fuzz testing, according to [26], are:
- the input is random;
- the reliability criterion: if the application crashes or hangs, the test has failed;
- fuzz testing can be automated to a high degree.

A tool called a fuzz tester, which indicates the causes of the vulnerabilities it finds, works best for problems that can cause a program to crash, such as buffer overflows, cross-site scripting, denial-of-service attacks, format bugs, and SQL injection. Fuzzing is less effective for spyware, some viruses, worms, Trojans, and keyloggers. However, fuzzers are most effective when used together with extensive black box testing techniques.
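The random-input/crash-detection loop behind fuzzing can be sketched in a few lines. The function under test here, `parse_flag`, is a hypothetical stand-in (not from the paper) with a deliberate defect on empty input:

```python
import random
import string

def parse_flag(s):
    """Hypothetical function under test: True if the string starts with '+'."""
    return s[0] == "+"  # defect: raises IndexError on the empty string

def fuzz(target, trials=1000, max_len=20, seed=1):
    """Feed random printable strings to `target`. Per the reliability
    criterion above, a crash (unexpected exception) means a failed test."""
    rng = random.Random(seed)
    failures = []
    for _ in range(trials):
        s = "".join(rng.choice(string.printable)
                    for _ in range(rng.randrange(max_len)))
        try:
            target(s)
        except Exception as exc:  # crash detected -> this test case failed
            failures.append((s, exc))
    return failures

crashes = fuzz(parse_flag)
# among 1000 random inputs, the empty string is generated and
# uncovers the IndexError crash
```

This illustrates why fuzzing automates well: the oracle is simply "did the program crash or hang", so no expected outputs need to be specified per test case.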
3.6. Model-Based Testing

Model-based testing is the automatic generation of efficient test procedures/vectors using models of system requirements and specified functionality (see [27]). In this method, test cases are derived in whole or in part from a model that describes some aspects of the system under test. These test cases are known as the abstract test suite, and different techniques have been used for their selection:
- generation by theorem proving;
- generation by constraint logic programming;
- generation by model checking;
- generation by symbolic execution;
- generation by using an event-flow model;
- generation by using a Markov chain model.

Model-based testing has many benefits (according to [28]):
- forces a detailed understanding of the system behavior;
- early bug detection;
- the test suite grows with the product;
- one manages the model instead of the cases;
- can generate endless tests;
- resistant to the pesticide paradox;
- finds crashing and non-crashing bugs;
- automation is cheaper and more effective;
- one implementation per model, then all cases come free;
- gains automated exploratory testing;
- testers can address bigger test issues.

3.7. Basis Path Testing

Summary: basis set, independent path, flow graph, cyclomatic complexity, graph matrix, link weight

Using this technique, one can evaluate the logical complexity of a procedural design, and then employ this measure to describe a basic set of execution paths. For obtaining the basis set and for presenting control flow in the program, one uses flow graphs (Figure 3 and Figure 4). The main components of these graphs are:
- Node – represents one or more procedural statements. A node which contains a condition is called a predicate node.
- Edges between nodes – represent flow of control. Each node must be bounded by at least one edge, even if it does not contain any useful information.
- Region – an area bounded by nodes and edges.

Figure 3: Flow Graph

Cyclomatic complexity is a software metric. The value evaluated for cyclomatic complexity defines the number of independent paths in the basis set of a program. An independent path is any path through a program that introduces at least one new set of processing statements. For a given graph G, the cyclomatic complexity V(G) is equal to:
1. the number of regions in the flow graph;
2. V(G) = E − N + 2, where E is the number of edges and N is the number of nodes;
3. V(G) = P + 1, where P is the number of predicate nodes.

So, the core of this technique is: one draws the flow graph according to the design or the code ⇒ one determines its cyclomatic complexity (it can also be determined without a flow graph, by counting the conditional statements in the code) ⇒ one determines a basis set of linearly independent paths (the predicate nodes are useful when the necessary paths must be determined) ⇒ finally, one prepares test cases that execute each path in the basis set. Each test case is executed and its result compared to the expected results.

Figure 4: Different Versions of Flow Graphs (sequence, if, while, repeat, and case constructs)
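The two formulas V(G) = E − N + 2 and V(G) = P + 1 can be checked mechanically against each other. A minimal sketch (the edge list below is a hypothetical flow graph, not one of the paper's figures):

```python
from collections import Counter

def cyclomatic_complexity(edges):
    """V(G) = E - N + 2 for a connected flow graph given as a
    list of directed edges (pairs of node numbers)."""
    nodes = {n for edge in edges for n in edge}
    return len(edges) - len(nodes) + 2

# if/else (predicate node 1) followed by a while loop (predicate node 4)
edges = [(1, 2), (1, 3), (2, 4), (3, 4), (4, 5), (5, 4), (4, 6)]

v = cyclomatic_complexity(edges)  # 7 edges - 6 nodes + 2 = 3

# Cross-check with V(G) = P + 1: predicate nodes are those with
# more than one outgoing edge.
out_degree = Counter(src for src, _ in edges)
p = sum(1 for d in out_degree.values() if d > 1)
assert v == p + 1 == 3
```

With V(G) = 3, a tester would look for three linearly independent paths through this graph and derive one test case per path.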
Example 1 (cyclomatic complexity)

Figure 5: Graph for Example 1

The cyclomatic complexity for the graph in Figure 5 is:
- V(G) = the number of predicate nodes + 1 = 3 + 1 = 4, or
- V(G) = the number of simple decisions + 1 = 4.

Since V(G) = 4, there are four independent paths:
Path 1: 1, 2, 3, 6, 7, 8;
Path 2: 1, 2, 3, 5, 7, 8;
Path 3: 1, 2, 4, 7, 8;
Path 4: 1, 2, 4, 7, 2, 4, …, 7, 8.
Now, test cases should be designed to exercise these paths.

Example 2 (cyclomatic complexity)

The cyclomatic complexity for the graph represented in Figure 6 is:
V(G) = E − N + 2 = 17 − 13 + 2 = 6.
So, the basis set of independent paths is:
1-2-10-11-13;
1-2-10-12-13;
1-2-3-10-11-13;
1-2-3-4-5-8-9-2;
1-2-3-4-5-6-8-9-2;
1-2-3-4-5-6-7-8-9-2.

Figure 6: Graph for Example 2

Example 3

The corresponding graph matrix and connection matrix for the graph depicted in Figure 7 are presented below.

Figure 7: Graph for Example 3
Table 1: Graph Matrix (rows list the edges of Figure 7, placed in the columns of their target nodes):
- node 1: edge a;
- node 2: edge b;
- node 3: edges d, c, and f;
- node 4: no entries;
- node 5: edges e and g.

Table 2: Connection Matrix (each edge replaced by 1; connections per row = entries − 1):
- node 1: one entry → 1 − 1 = 0;
- node 2: one entry → 1 − 1 = 0;
- node 3: three entries → 3 − 1 = 2;
- node 4: no entries → 0;
- node 5: two entries → 2 − 1 = 1.

The cyclomatic complexity is the sum of the connections plus one: (2 + 1) + 1 = 4, in agreement with V(G) = E − N + 2 = 7 − 5 + 2 = 4.

3.8. Loop Testing

There are four types of loops:
1. simple loops;
2. concatenated loops;
3. nested loops;
4. unstructured loops.

3.8.1. Simple Loops

It is possible to execute the following tests:
- skip the loop entirely;
- only one pass through the loop;
- two passes through the loop;
- m passes through the loop, where m < n;
- n − 1, n, and n + 1 passes through the loop, where n is the maximum number of allowable passes.
A typical simple loop is depicted in Figure 8.

Figure 8: Simple Loop

3.8.2. Nested Loops

With this type of loop, the number of possible tests increases as the level of nesting grows, so one can end up with an impractical number of tests. To avoid this, the following approach is recommended:
- start at the innermost loop and set all other loops to minimum values;
- conduct simple loop tests for the innermost loop while holding the outer loops at their minimum iteration parameter values;
- work outward, performing tests for the next loop;
- continue until all loops have been tested.
A typical nested loop is depicted in Figure 9.

Figure 9: Nested Loop

3.8.3. Concatenated Loops

These loops are tested using simple loop tests if each loop is independent of the others; otherwise, nested loop tests are used. A typical concatenated loop is presented in Figure 10.
Figure 10: Concatenated Loop

3.8.4. Unstructured Loops

This type of loop should be redesigned. A typical unstructured loop is depicted in Figure 11.

Figure 11: Unstructured Loop

3.9. Control Structure Testing

The two main components of the classification of Control Structure Testing (Figure 12: Condition Testing and Data Flow Testing) are described below.

Figure 12: Classification of Control Structure Testing

3.9.1. Condition Testing

With this technique, each logical condition in a program is tested. A relational expression takes the form
E1 <relational operator> E2,
where E1 and E2 are arithmetic expressions, and the relational operator is one of the following: <, =, ≤, ≠, >, or ≥.
A simple condition is a Boolean variable or a relational expression, possibly with one NOT operator. A compound condition is made up of two or more simple conditions, Boolean operators, and parentheses. This technique detects not only errors in the conditions of a program but also other errors in the whole program.

3.9.2. Data Flow Testing

With this technique, one can choose test paths of a program based on the locations of definitions and uses of variables in the program. A unique statement number is allocated to each statement. For a statement with number S, one can define:
DEF(S) = {X | statement S contains a definition of X}
USE(S) = {X | statement S contains a use of X}.
The definition of a variable X at statement S is live at statement S' if there exists a path from statement S to S' which does not contain any other definition of X.
A definition-use chain (or DU chain) of variable X has the form [X, S, S'], where S and S' are statement numbers, X is in DEF(S) and in USE(S'), and the definition of X at statement S is live at statement S'.
One basic strategy of this technique is that each DU chain be covered at least once.
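The DEF/USE sets and DU chains can be illustrated with a small sketch. The four-statement program below is hypothetical, and for simplicity it is straight-line code, so the "live" condition reduces to "not redefined in between":

```python
# Each statement: (number, defined variables, used variables),
# for a hypothetical straight-line program:
stmts = [
    (1, {"x"}, set()),       # x = input()
    (2, {"y"}, {"x"}),       # y = x * 2
    (3, {"x"}, {"y"}),       # x = y + 1
    (4, set(), {"x", "y"}),  # print(x, y)
]

def du_chains(stmts):
    """[X, S, S'] triples: X in DEF(S), X in USE(S'), and the
    definition of X at S still live at S' (not redefined between)."""
    chains = []
    for i, (s, defs, _) in enumerate(stmts):
        for x in defs:
            for s2, defs2, uses2 in stmts[i + 1:]:
                if x in uses2:
                    chains.append((x, s, s2))
                if x in defs2:  # definition killed: no longer live
                    break
    return chains

print(du_chains(stmts))
# [('x', 1, 2), ('y', 2, 3), ('y', 2, 4), ('x', 3, 4)]
```

Covering each DU chain at least once would here require a test whose path passes through all four statements; with branches, each chain may force a different path.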
4. Test Techniques According to the Project of the IEEE Computer Society, 2004

The IEEE Computer Society was established to promote the advancement of theory and practice in the field of software engineering. This Society completed IEEE Standard 730 for software quality assurance (any systematic process of checking whether a product or service being developed meets specified requirements, see [18]) in 1979. This was the first standard of this Society. The purpose of IEEE Standard 730 was to provide uniform, minimum acceptable requirements for the preparation and content of software quality assurance plans. Other, newer standards were developed in later years. These standards are meaningful not only for promoting software requirements, software design, and software construction, but also for software testing, software maintenance, software configuration management, and software engineering management.

So, for improving software testing and for decreasing risk in all fields, there is a classification of test techniques according to this Society, which is listed below.

Based on the software engineer's intuition and experience:
1. Ad hoc testing – Test cases are developed based on the software engineer's skills, intuition, and experience with similar programs;
2. Exploratory testing – This testing is defined as simultaneous learning, which means that tests are dynamically designed, executed, and modified.

Specification-based techniques:
1. Equivalence partitioning;
2. Boundary-value analysis;
3. Decision table – Decision tables represent logical relationships between inputs and outputs (conditions and actions), so test cases represent every possible combination of inputs and outputs;
4. Finite-state machine-based – Test cases are developed to cover the states and transitions of the machine;
5. Testing from formal specifications – Formal specifications (specifications in a formal language) allow automatic derivation of functional test cases and a reference output for checking test results;
6. Random testing – Random points are picked within the input domain, which must be known, so test cases are chosen at random.

Code-based techniques:
1. Control-flow-based criteria – Determine how to test logical expressions (decisions) in computer programs (see [19]). Decisions are considered as logical functions of elementary logical predicates (conditions), and combinations of condition values are used as data for testing the decisions.
The definition of every control-flow criterion includes a statement coverage requirement as a component part: every statement in the program must be executed at least once. Control-flow criteria are considered program-based and useful for white-box testing. For control-flow criteria, the objects of investigation have been relatively simple: Random Coverage, Decision Coverage (every decision in the program has taken all possible outcomes at least once), Condition Coverage (every condition in each decision has taken all possible outcomes at least once), Decision/Condition Coverage (every decision in the program has taken all possible outcomes at least once, and every condition in each decision has taken all possible outcomes at least once), etc.
2. Data-flow-based criteria;
3. Reference models for code-based testing – The control structure of a program is graphically represented using a flow graph.

Fault-based techniques:
1. Error guessing – Test cases are developed by software engineers trying to find the most frequent faults in a given program. The history of faults discovered in earlier projects and the software engineer's expertise are helpful in these situations;
2. Mutation testing – A mutant is a modified version of the program under test, differing from it by a syntactic change. Every test case exercises both the original program and all generated mutants. Test cases are generated until enough mutants have been killed, or test cases are developed specifically to kill surviving mutants.

Usage-based techniques:
1. Operational profile – From the observed test results, one can infer the future reliability of the software;
2. Software Reliability Engineered Testing.

Techniques based on the nature of the application:
1. Object-oriented testing – With this test technique one can find where the element under test does not perform as specified. Besides, the goal of this technique is to select, structure, and organize the tests so as to find errors as early as possible (see [25]).
2. Component-based testing – Based on the idea of creating test cases from highly reusable test components. A test component is a reusable and context-independent test unit, providing test services through its contract-based interfaces.
3. Web-based testing – A computer-based test delivered via the Internet and written in the "language" of the Internet, HTML, possibly enhanced by scripts. The test is located as a website on the tester's server, where it can be accessed by the test-taker's computer, the client. The client's browser software (e.g., Netscape Navigator, MS Internet Explorer) displays the test; the test-taker completes it and, if so desired, sends his/her answers back to the server, from which the tester can download and score them (see [20]).
4. GUI testing – The process of testing a product that uses a graphical user interface, to ensure it meets its written specifications (see [6]).
5. Testing of concurrent programs;
6. Protocol conformance testing – A protocol describes the rules with which computer systems have to comply in their communication with other computer systems in distributed systems (see [23]). Protocol conformance testing is a way to check the conformance of protocol implementations with their corresponding protocol standards, and an important technology for assuring successful interconnection and interoperability between different manufacturers (see [24]). Protocol conformance testing is mostly based on the standard ISO 9646: "Conformance Testing Methodology and Framework" [ISO 91]. However, this conventional method of standardization used for protocol conformance tests sometimes gives wrong test results, because the test is based on static test sequences.
7. Testing of real-time systems – More than one third of typical project resources are spent on testing embedded and real-time systems. Real-time and embedded systems require that special attention be given to timing during testing. According to [21], real-time testing is defined as the evaluation of a system (or its components) at its normal operating frequency, speed, or timing. But it is actually conformance testing, whose goal is to check whether the behavior of the system under test is correct (conforming) with respect to its specification (see [22]). Test cases can be generated offline or online. In the first case, the complete test scenarios and verdicts are computed a priori, before execution. Offline test generation is often based on a coverage criterion of the model, on a test purpose, or on a fault model. Online testing combines test generation and execution.
8. Testing of safety-critical systems.

Selecting and combining techniques:
1. Functional and structural;
2. Deterministic vs. random – Test cases can be selected in a deterministic way or randomly drawn from some distribution of inputs, as in reliability testing.

5. CONCLUSION

Software testing is a component of software quality control (SQC). SQC means controlling the quality of software engineering products, which is conducted using tests of the software system (see [6]). These tests can be: unit tests (checking each coded module for the presence of bugs), integration tests (interconnecting sets of previously tested modules to ensure that the sets behave as well as they did as independently tested modules), or system tests (checking that the entire software system, embedded in its actual hardware environment, behaves according to the requirements document). SQC also includes formal checks of individual parts of the code and the review of requirements documents.

SQC is different from software quality assurance (SQA), which means controlling the software engineering processes and methods used to ensure quality (see [6]). Control is conducted by inspecting the quality management system; one or more standards can be used for that, usually ISO 9000. SQA relates to the whole software development process, which includes the following activities: software design, coding, source code control, code reviews, change management, configuration management, and release management. In short, SQC is a control of products, and SQA is a control of processes.

Bugs and defects reduce application functionality, look unprofessional, and damage the company's reputation. Hence, thorough testing is very important to conduct: in this way, defects can be discovered and repaired. If customers are dissatisfied with a product, they will never recommend it, so the product's sales and its popularity in the market will decrease.

Besides, customer testing is also very important to conduct. Through this process one can find out whether the application's functions and characteristics suit the customers, and what should be changed in the application to accommodate it to customers' requests.

Large losses can be avoided if timely testing is conducted and bugs are discovered in the initial phases. The losses are minor if the bugs are discovered by testing within the company, where developers can correct the errors, rather than in the phase of customer testing, or when the application has gone "live" in some other company or system for which it was created.
In that case, the losses can be enormous.

Therefore, software testing is greatly important, and so are test techniques, because their aim is to improve and ease this process.

There is considerable controversy among software testing writers and consultants about what is important in software testing and what constitutes responsible software testing. Some of the major controversies include:

- What constitutes responsible software testing? – Members of the "context-driven" school of testing believe that there are no "best practices" of software testing, but rather that testing is a collection of skills which enable testers to choose or improve test practices appropriate to each unique situation. Others hold that this outlook directly contradicts standards such as the IEEE 829 test documentation standard, and organizations such as the Food and Drug Administration who promote them.
- Agile vs. traditional – Agile testing is popular in commercial circles and among military software providers. Some researchers think that testers should work under conditions of uncertainty and constant change, while others think that they should aim at process "maturity".
- Exploratory vs. scripted – Some researchers believe that tests should be created at the time they are executed, while others believe that they should be designed beforehand.
- Manual vs. automated – Proponents of agile development recommend complete automation of all test cases. Others note that test automation is quite expensive.
- Software design vs. software implementation – The question is: should testing be carried out only at the end, or throughout the whole process?
- Who watches the watchmen? – Any form of observation is an interaction, so the act of testing can affect the object of testing.
REFERENCES

[1], February 08, 2009.
[2] Stacey, D. A., "Software Testing Techniques".
[3] Guide to the Software Engineering Body of Knowledge (SWEBOK), a project of the IEEE Computer Society Professional Practices Committee, 2004.
[4] "Software Engineering: A Practitioner's Approach, 6/e; Chapter 14: Software Testing Techniques", R. S. Pressman & Associates, Inc., 2005.
[5] Myers, Glenford J. (IBM Systems Research Institute, Lecturer in Computer Science, Polytechnic Institute of New York), The Art of Software Testing, John Wiley & Sons, Inc., 1979.
[6] Wikipedia, The Free Encyclopedia.
[7]
[8] Parezanovic, Nedeljko, Racunarstvo i informatika, Zavod za udzbenike i nastavna sredstva, Beograd, 1996.
[9] Tsai, Wei-Tek, "Risk-based Testing", Arizona State University, Tempe, AZ 85287.
[10] Redmill, Felix, "Theory and Practice of Risk-based Testing", Software Testing, Verification and Reliability, Vol. 15, No. 1, March 2005.
[11] IEEE, IEEE Standard Glossary of Software Engineering Terminology (IEEE Std 610.12-1990), Los Alamitos, CA: IEEE Computer Society Press, 1990.
[12], February 08, 2009.
[13] ,2542,t=white+box+testing&i=54432,00.asp, February 08, 2009.
[14] ,,sid92_gci1242903,00.html, February 08, 2009.
[15] Janardhanudu, Girish, "White Box Testing", February 08, 2009.
[16] ,2542,t=black+box+testing&i=38733,00.asp, February 08, 2009.
[17] ,2542,t=gray+box+testing&i=57517,00.asp, February 08, 2009.
[18] ,,sid92_gci816126,00.html, February 08, 2009.
[19] Vilkomir, A., Kapoor, K., & Bowen, J. P., "Tolerance of Control-flow Testing Criteria", Proceedings of the 27th Annual International Computer Software and Applications Conference, 3-6 November 2003, pp. 182-187.
[20] February 08, 2009.
[21] February 2009.
[22] Mikucionis, Marius; Larsen, Kim; Nielsen, Brian, "Online On-the-Fly Testing of Real-time Systems", February 2009.
[23] Tretmans, Jan, "An Overview of OSI Conformance Testing".
[24] February 2009.
[25] February 2009.
[26] February 2009.
[27] February 2009.
[28] February 2009.