MIT521 software testing (2012) v2
Software testing assignment



1.0 Introduction

In a software development methodology, or SDLC, testing is the phase whose purpose is to (1) verify that the system behaves "as specified"; (2) detect errors; and (3) validate that what has been specified is what the user actually wanted.

Verification asks the question "are we building the system right?" At this stage, verification is the checking or testing of items, including software, for conformance and consistency, by evaluating the results against pre-specified requirements.

Error detection: testing should intentionally attempt to make things go wrong, to determine whether things happen when they shouldn't, or fail to happen when they should.

Validation looks at system correctness: it is the process of checking that what has been specified is what the user actually wanted. The question to answer here is "are we building the right system?"

According to the ANSI/IEEE 1059 standard, testing is the process of analyzing a software item to detect the differences between existing and required conditions (that is, defects/errors/bugs) and to evaluate the features of the software item.

2.0 Software Testing Methodology

As explained earlier, software testing is an integral part of the software development life cycle (SDLC). Many different testing methods and techniques are in wide use as part of a software testing methodology. A few of them are listed below:

- White box testing
- Black box testing
- Gray box testing
- Unit testing
- Integration testing
- Regression testing
- Usability testing
- Performance testing
- Scalability testing
- Software stress testing
- Recovery testing
- Security testing
- Conformance testing
- Smoke testing
- Compatibility testing
- System testing
- Alpha testing
- Beta testing

The methods listed above can be carried out in two ways: manually or by automation. In manual software testing, human testers check the code, exercise the software, and report the bugs they find. In automated software testing, the same process is performed by a computer by means of automated testing tools such as WinRunner, LoadRunner, TestDirector, etc.

3.0 Case Study

Case Study 1 - Achieving the Full Potential of Test Automation

This case study concluded that, as with other areas of software development, the true potential of software test automation is realized only within a framework that provides a truly scalable structure. Since its introduction in 1994, the keyword-based method of test automation has become the dominant approach in Europe and is now taking the USA by storm, precisely because it provides the best way to achieve this goal.

Action Based Testing offers the latest innovations in keyword-driven testing from the original architect of the keyword concept. Test design, test automation, and test execution are all performed within a spreadsheet environment, guided by a method focused on an elegant structure of reusable high-level actions.

TestArchitect, a test automation framework from LogiGear with features ranging from action organization to globally distributed team management, offers the full power of Action Based Testing to the entire testing organization, including business analysts, test engineers, automation engineers, test leads, and managers.
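The keyword-driven approach behind Action Based Testing can be illustrated with a minimal sketch (the service, the keywords, and the test rows below are invented for illustration; this is not TestArchitect's actual API): test steps become data rows that non-programmers can write, while a small interpreter maps each keyword to an automated action.

```python
# Minimal keyword-driven testing sketch: test steps are data
# (keyword + arguments), and a small interpreter maps keywords
# to actions against the system under test.

# A toy "system under test": an in-memory login service.
class LoginService:
    def __init__(self):
        self.users = {"alice": "secret"}
        self.logged_in = None

    def login(self, user, password):
        if self.users.get(user) == password:
            self.logged_in = user
            return True
        return False

def run_keyword_test(test_lines):
    """Interpret a list of (keyword, *args) rows, like spreadsheet rows."""
    sut = LoginService()
    results = []
    actions = {
        "login": lambda u, p: sut.login(u, p),
        "check logged in": lambda u: sut.logged_in == u,
        "check rejected": lambda: sut.logged_in is None,
    }
    for keyword, *args in test_lines:
        outcome = actions[keyword](*args)
        results.append((keyword, outcome))
    return results

# Test design is now just data rows, readable by non-programmers:
test = [
    ("login", "alice", "secret"),
    ("check logged in", "alice"),
]
results = run_keyword_test(test)
print(all(ok for _, ok in results))  # → True
```

The point of the structure is reuse: new tests are written by composing existing keywords rather than writing new automation code.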
Case Study 2 - Prudential Intranet Regional Call Center System

As I write this survey, I am working at Prudential Services Asia Sdn. Bhd. as a developer on an internal system called RCC-Intranet. The system includes HR Recruitment, Callback Management, Staff Information Management, Leave Management, Dashboard Report Management, and Overtime Management modules.

The system is developed using the prototype methodology, and to be specific, evolutionary prototyping: as we develop the system, it is continually refined and rebuilt. It has been argued that prototyping, in some form or another, should be used all the time. However, prototyping is most beneficial in systems that will have many interactions with the users.

Prototyping has been found to be very effective in the analysis and design of on-line systems, especially for transaction processing, where the use of screen dialogs is much more in evidence. The greater the interaction between the computer and the user, the greater the benefit that can be obtained from building a quick system and letting the user play with it.

As we finish the system module by module, we let the users test it, and we use their output and feedback to continually rebuild and refine the system. As of the date this article is written, 80% of the modules have been developed and tested.

4.0 Test Plan

In the testing stage, the test plan is crucial. A software test plan is a document describing the testing scope and activities. It is the basis for formally testing any software/product in a project. Below is a test plan template [2] and guideline.

TEST PLAN TEMPLATE

The format and content of a software test plan vary depending on the processes, standards, and test management tools being implemented.
Nevertheless, the following format, which is based on the IEEE standard for software test documentation, provides a summary of what a test plan can and should contain.

Test Plan Identifier:
- Provide a unique identifier for the document. (Adhere to the Configuration Management System if you have one.)

Introduction:
- Provide an overview of the test plan.
- Specify the goals/objectives.
- Specify any constraints.

References:
- List the related documents, with links to them if available, including the following: Project Plan, Configuration Management Plan.

Test Items:
- List the test items (software/products) and their versions.

Features to Be Tested:
- List the features of the software/product to be tested.
- Provide references to the Requirements and/or Design specifications of the features to be tested.

Features Not to Be Tested:
- List the features of the software/product which will not be tested.
- Specify the reasons these features won't be tested.

Approach:
- Mention the overall approach to testing.
- Specify the testing levels [if it is a Master Test Plan], the testing types, and the testing methods [Manual/Automated; White Box/Black Box/Gray Box].

Item Pass/Fail Criteria:
- Specify the criteria that will be used to determine whether each test item (software/product) has passed or failed testing.

Suspension Criteria and Resumption Requirements:
- Specify the criteria to be used to suspend the testing activity.
- Specify the testing activities which must be redone when testing is resumed.

Test Deliverables:
- List the test deliverables, with links to them if available, including the following: Test Plan (this document itself), Test Cases, Test Scripts, Defect/Enhancement Logs, Test Reports.
Test Environment:
- Specify the properties of the test environment: hardware, software, network, etc.
- List any testing or related tools.

Estimate:
- Provide a summary of the test estimates (cost or effort) and/or provide a link to the detailed estimation.

Schedule:
- Provide a summary of the schedule, specifying key test milestones, and/or provide a link to the detailed schedule.

Staffing and Training Needs:
- Specify staffing needs by role and required skills.
- Identify training that is necessary to provide those skills, if not already acquired.

Responsibilities:
- List the responsibilities of each team/role/individual.

Risks:
- List the risks that have been identified.
- Specify the mitigation plan and the contingency plan for each risk.

Assumptions and Dependencies:
- List the assumptions that have been made during the preparation of this plan.
- List the dependencies.

Approvals:
- Specify the names and roles of all persons who must approve the plan.
- Provide space for signatures and dates. (If the document is to be printed.)

5.0 A Survey

Here are a few papers that discuss software testing.

Jovanović, Irena, Software Testing Methods and Techniques

In this paper, the main testing methods and techniques are briefly described. A general classification is outlined: two testing methods (black box testing and white box testing) and their frequently used techniques:

- Black box techniques: Equivalence Partitioning, Boundary Value Analysis, Cause-Effect Graphing Techniques, and Comparison Testing;
- White box techniques: Basis Path Testing, Loop Testing, and Control Structure Testing.

The classification of the IEEE Computer Society is also illustrated.

Jiantao Pan (1999), Software Testing, 18-849b Dependable Embedded Systems

Software testing is any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results. [Hetzel88] Although crucial to software quality and widely deployed by programmers and testers, software testing still remains an art, due to limited understanding of the principles of software. The difficulty in software testing stems from the complexity of software: we cannot completely test a program of moderate complexity. Testing is more than just debugging. The purpose of testing can be quality assurance, verification and validation, or reliability estimation. Testing can be used as a generic metric as well. Correctness testing and reliability testing are two major areas of testing. Software testing is a trade-off between budget, time, and quality.

Ibrahim K. El-Far and James A. Whittaker (2001), Model-based Software Testing, Florida Institute of Technology

Software testing requires the use of a model to guide such efforts as test selection and test verification. Often, such models are implicit, existing only in the head of a human tester applying test inputs in an ad hoc fashion. The mental model testers build encapsulates application behavior, allowing testers to understand the application's capabilities and more effectively test its range of possible behaviors. When these mental models are written down, they become sharable, reusable testing artifacts. In this case, testers are performing what has come to be known as model-based testing. Model-based testing has recently gained attention with the popularization of models (including UML) in software design and development. There are a number of models of software in use today, a few of which make good models for testing.
This paper introduces model-based testing and discusses its tasks in general terms, with finite state models (arguably the most popular software models) as examples. In addition, the advantages, difficulties, and shortcomings of various model-based approaches are concisely presented. Finally, the paper closes with a discussion of where model-based testing fits in the present and future of software engineering.
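The finite-state-model idea described above can be sketched as follows, assuming a hypothetical on/off switch as the system under test: the model both generates input sequences and serves as the oracle that predicts the expected state after each step.

```python
from itertools import product

# Minimal model-based testing sketch: a finite state model of a
# hypothetical on/off switch, used to generate and check test sequences.

# The model: a transition table (state, input) -> next state.
MODEL = {
    ("off", "press"): "on",
    ("on", "press"): "off",
    ("on", "timeout"): "off",
    ("off", "timeout"): "off",
}

# The implementation under test (written separately from the model).
class Switch:
    def __init__(self):
        self.state = "off"
    def press(self):
        self.state = "on" if self.state == "off" else "off"
    def timeout(self):
        self.state = "off"

def run_sequence(inputs):
    """Drive both model and implementation; report the first divergence."""
    sut, model_state = Switch(), "off"
    for i, event in enumerate(inputs):
        model_state = MODEL[(model_state, event)]
        getattr(sut, event)()  # apply the same event to the implementation
        if sut.state != model_state:
            return f"divergence at step {i}: model={model_state}, sut={sut.state}"
    return "ok"

# Exhaustively generate every input sequence up to length 3 from the model.
for n in (1, 2, 3):
    for seq in product(["press", "timeout"], repeat=n):
        assert run_sequence(seq) == "ok"
print("all generated sequences pass")
```

Because the model is explicit data rather than a mental picture, the test sequences can be regenerated mechanically whenever the model changes.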
Mohd. Ehmer Khan (2010), Different Forms of Software Testing Techniques for Finding Errors, Department of Information Technology, Al Musanna College of Technology, Sultanate of Oman

Software testing is an activity aimed at evaluating an attribute or capability of a program and ensuring that it meets the required result. There are many approaches to software testing, but effective testing of a complex product is essentially a process of investigation, not merely a matter of creating and following a rote procedure. It is often impossible to find all the errors in a program. This fundamental problem in testing throws open the question of what strategy we should adopt for testing. Thus, the selection of the right strategy at the right time will make software testing efficient and effective. In this paper, the author describes software testing techniques classified by purpose.

Vu Nguyen, Test Case Point Analysis: An Approach to Estimating Software Testing Size, Faculty of Information Technology, University of Science, VNU-HCMC, Ho Chi Minh City, Vietnam

Quality assurance management is an essential component of the software development lifecycle. To ensure the quality, applicability, and usefulness of a product, development teams must spend considerable time and resources on testing, which makes the estimation of software testing effort a critical activity. This paper proposes an approach, namely Test Case Point Analysis (TCPA), to estimating the size of software testing work. The approach measures the size of a software test case based on its checkpoints, preconditions, and test data, as well as the types of testing. The paper also describes a case study applying TCPA to a large testing project at KMS Technology.
The result indicates that this approach is preferable to estimating based on the tester's experience.

6.0 Test Best Practice

6.1 Methods

The purpose of test techniques and methods is to improve test process capability during test design and execution by applying basic test techniques, methods, and incident management. Well-founded testing means that test design techniques and methods are applied, supported (if possible and beneficial) by tools. Test design techniques are used to derive and select test cases from requirements and design specifications. A test case consists of a description of the input values, execution preconditions, the change process, and the expected result. The test cases are documented in a test design specification. At a later stage, as more information becomes available on the actual implementation, the test designs are translated into test procedures. In a test procedure, also referred to as a manual test script, the specific test actions and checks are arranged in an executable sequence. The tests are subsequently executed using these test procedures. The test design and execution activities follow the test approach as defined in the test plan. During the test execution stage, incidents (defects) are found and test incident reports are written. Incidents are logged using an incident management system, and thorough communication about the incidents with stakeholders is established. For incident management, a basic incident classification scheme is established and a basic incident repository is put into place.

6.1.1 Static and Dynamic Approaches

There are many approaches to software testing. Reviews, walkthroughs, and inspections are referred to as static testing, whereas actually executing programmed code with a given set of test cases is referred to as dynamic testing. Static testing can be omitted, and in practice unfortunately often is; dynamic testing takes place when the program itself is used. Dynamic testing may begin before the program is 100% complete, in order to test particular sections of code applied to discrete functions or modules. Typical techniques for this are using stubs/drivers or execution from a debugger environment.

6.1.2 The Box Approach

Software testing methods are traditionally divided into white-box and black-box testing. These two approaches describe the point of view that a test engineer takes when designing test cases.

White-box testing

White box testing is the detailed investigation of the internal logic and structure of the code. White box testing is also called glass testing or open box testing.
In order to perform white box testing on an application, the tester needs to possess knowledge of the internal working of the code. The tester needs to look inside the source code and find out which unit or chunk of the code is behaving inappropriately.

Advantages:
- As the tester has knowledge of the source code, it becomes very easy to find out which type of data can help in testing the application effectively.
- It helps in optimizing the code.
- Extra lines of code can be removed, which can bring out hidden defects.
- Due to the tester's knowledge of the code, maximum coverage is attained when writing test scenarios.

Disadvantages:

- Because a skilled tester is needed to perform white box testing, the costs are increased.
- It is sometimes impossible to look into every nook and corner to find hidden errors that may create problems, as many paths will go untested.
- White box testing is difficult to maintain, as it requires the use of specialized tools such as code analyzers and debugging tools.

White-box testing (also known as clear box testing, glass box testing, transparent box testing, and structural testing) tests the internal structures or workings of a program, as opposed to the functionality exposed to the end user. In white-box testing, an internal perspective of the system, as well as programming skills, are used to design test cases. The tester chooses inputs to exercise paths through the code and determine the appropriate outputs. This is analogous to testing nodes in a circuit, e.g. in-circuit testing (ICT).

Black-box testing

Black box testing is the technique of testing without any knowledge of the interior workings of the application. The tester is oblivious to the system architecture and does not have access to the source code. Typically, when performing a black box test, a tester interacts with the system's user interface by providing inputs and examining outputs, without knowing how and where the inputs are worked upon.

Advantages:

- Well suited and efficient for large code segments.
- Code access is not required.
- Clearly separates the user's perspective from the developer's perspective through visibly defined roles.
- Large numbers of moderately skilled testers can test the application with no knowledge of the implementation, programming language, or operating system.

Disadvantages:

- Limited coverage, since only a selected number of test scenarios are actually performed.
- Inefficient testing, due to the fact that the tester has only limited knowledge of the application.
- Blind coverage, since the tester cannot target specific code segments or error-prone areas.
- The test cases are difficult to design.

Black box testing treats the software as a "black box", examining functionality without any knowledge of the internal implementation. The tester is only aware of what the software is supposed to do, not how it does it. Black-box testing methods include: equivalence partitioning, boundary value analysis, all-pairs testing, state transition tables, decision table testing, fuzz testing, model-based testing, use case testing, exploratory testing, and specification-based testing.

Gray box testing

Grey-box testing (American spelling: gray-box testing) involves having knowledge of internal data structures and algorithms for the purpose of designing tests, while executing those tests at the user, or black-box, level. The tester is not required to have full access to the software's source code. Manipulating input data and formatting output do not qualify as grey-box, because the input and output are clearly outside the "black box" that we are calling the system under test. This distinction is particularly important when conducting integration testing between two modules of code written by two different developers, where only the interfaces are exposed for test. However, modifying a data repository does qualify as grey-box, as the user would not normally be able to change the data outside the system under test. Grey-box testing may also include reverse engineering to determine, for instance, boundary values or error messages.

Grey box testing is a technique for testing the application with limited knowledge of its internal workings. In software testing, the saying "the more you know, the better" carries a lot of weight when testing an application.

Mastering the domain of a system always gives the tester an edge over someone with limited domain knowledge. Unlike black box testing, where the tester only tests the application's user interface, in grey box testing the tester has access to design documents and the database. With this knowledge, the tester is able to better prepare test data and test scenarios when making the test plan.

Advantages:

- Offers the combined benefits of black box and white box testing wherever possible.
- Grey box testers don't rely on the source code; instead they rely on interface definitions and functional specifications.
- Based on the limited information available, a grey box tester can design excellent test scenarios, especially around communication protocols and data type handling.
- The test is done from the point of view of the user, not the designer.

Disadvantages:

- Since access to the source code is not available, the ability to go over the code and test coverage is limited.
- The tests can be redundant if the software designer has already run a test case.
- Testing every possible input stream is unrealistic because it would take an unreasonable amount of time; therefore, many program paths will go untested.
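To make the black box / white box contrast concrete, here is a small sketch using a hypothetical pricing function (the function and its spec are invented for illustration): the black-box tests are derived purely from the specification using equivalence partitioning and boundary value analysis, while the white-box test targets a branch found only by reading the code.

```python
# Hypothetical spec: orders of 100 units or more get a 10% discount.

def order_total(quantity, unit_price):
    if quantity < 0:
        raise ValueError("quantity must be non-negative")
    total = quantity * unit_price
    if quantity >= 100:          # the internal branch a white-box tester targets
        total -= total // 10     # 10% discount, kept in integer arithmetic
    return total

# Black-box tests: derived from the specification alone, using
# equivalence partitions (no discount / discount) and the boundary
# values around the partition edge at quantity == 100.
assert order_total(99, 2) == 198    # just below the boundary: no discount
assert order_total(100, 2) == 180   # on the boundary: 200 - 20
assert order_total(101, 2) == 182   # just above: 202 - 20

# White-box test: chosen by reading the code to cover the error path,
# which the functional spec may not spell out.
try:
    order_total(-1, 2)
    raise AssertionError("expected ValueError")
except ValueError:
    pass
```

The same function, two complementary sources of test cases: the spec yields the boundary tests, and the code yields the branch and error-path tests.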
6.1.3 Comparison: Black Box vs. Grey Box vs. White Box

1. Knowledge required: black box - the internal workings of the application need not be known; grey box - some knowledge of the internal workings is known; white box - the tester has full knowledge of the internal workings of the application.
2. Other names: black box - closed box testing, data-driven testing, functional testing; grey box - translucent testing, as the tester has limited knowledge of the insides of the application; white box - clear box testing, structural testing, code-based testing.
3. Performed by: black box - end users, and also testers and developers; grey box - end users, and also testers and developers; white box - normally testers and developers.
4. Basis of testing: black box - external expectations, with the internal behavior of the application unknown; grey box - high-level database diagrams and data flow diagrams; white box - fully known internal workings, so the tester can design test data accordingly.
5. Effort: black box - the least time-consuming and exhaustive; grey box - partly time-consuming and exhaustive; white box - the most exhaustive and time-consuming type of testing.
6. Algorithm testing: black box - not suited; grey box - not suited; white box - suited.
7. Boundaries: black box - can only be tested by trial and error; grey box - data domains and internal boundaries can be tested, if known; white box - data domains and internal boundaries can be better tested.

6.2 Tips

Take every aspect of a test seriously. Analyze test results thoroughly; do not ignore them. The final test result may be "pass" or "fail", but troubleshooting the root cause of a "fail" will lead to the solution of the problem. Testers are respected when they not only log bugs but also provide solutions.

Maximize test coverage every time an application is tested. Though 100 percent test coverage might not be possible, you can always try to approach it.

Break the application into smaller functional modules to ensure maximum test coverage. For example, if you have divided your website application into modules and "accepting user information" is one of them, you can break this "user information" screen into smaller parts for writing test cases: UI testing, security testing, functional testing of the user information form, etc. Apply all form field type and size tests, plus negative and validation tests on input fields, and write all the test cases for maximum coverage.

Write test cases for the intended functionality first, i.e. for valid conditions according to the requirements. Then write test cases for invalid conditions. This covers both expected and unexpected behavior of the application.

Write test cases during requirement analysis and the design phase itself. This way you can ensure all the requirements are testable.

Make your test cases available to developers prior to coding. Don't keep your test cases to yourself, waiting for the final application release in the hope of logging more bugs. Let developers analyze your test cases thoroughly to develop a quality application. This will also save rework time.

If possible, identify and group your test cases for regression testing. This will ensure quick and effective manual regression testing.

Applications requiring critical response times should be thoroughly tested for performance. Performance testing is a critical part of many applications, yet in manual testing it is mostly ignored by testers. Find out ways to test your application for performance.
If it is not possible to create test data manually, write some basic scripts to create test data for performance testing, or ask the developers to write one for you.
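A basic test-data script of the kind suggested above might look like this (the field names, file name, and record counts are illustrative assumptions):

```python
import csv
import random
import string

# Minimal test-data generation sketch: produce a CSV of synthetic
# user records to feed a performance-test run.

def random_name(length=8):
    """Return a random lowercase string to use as a username."""
    return "".join(random.choices(string.ascii_lowercase, k=length))

def generate_users(path, count):
    """Write `count` synthetic user rows to a CSV file at `path`."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "username", "email", "age"])
        for i in range(count):
            name = random_name()
            writer.writerow([i, name, f"{name}@example.com",
                             random.randint(18, 90)])

# Generate 10,000 rows for a load-test run:
generate_users("perf_users.csv", 10_000)
```

Scaling the `count` argument up or down makes it easy to repeat the same performance scenario at different data volumes.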
Programmers should not test their own code. Basic unit testing of the developed application should be enough for developers before releasing the application to the testers. But testers should not force developers to release the product for testing; let them take the time they need. Everyone from lead to manager will know when the module/update is released for testing and can estimate the testing time accordingly. This is a typical situation in an agile project environment.

Go beyond requirements testing. Test the application for what it is not supposed to do.

7.0 Questions and Discussion

Some of the major software testing controversies include:

What constitutes responsible software testing?
Members of the "context-driven" school of testing believe that there are no "best practices" of testing, but rather that testing is a set of skills that allow the tester to select or invent testing practices to suit each unique situation.

Agile vs. traditional
Should testers learn to work under conditions of uncertainty and constant change, or should they aim at process "maturity"? The agile testing movement has grown in popularity since 2006, mainly in commercial circles, whereas government and military software providers use this methodology but also the traditional test-last models (e.g. the waterfall model).

Exploratory vs. scripted
Should tests be designed at the same time as they are executed, or should they be designed beforehand?

Manual vs. automated
Some writers believe that test automation is so expensive relative to its value that it should be used sparingly. In particular, test-driven development states that developers should write unit tests of the xUnit type before coding the functionality. The tests can then be considered a way to capture and implement the requirements.

Software design vs. software implementation
Should testing be carried out only at the end, or throughout the whole process?

Who watches the watchmen?
The idea is that any form of observation is also an interaction: the act of testing can itself affect that which is being tested.
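The test-first idea raised under "Manual vs. automated" can be sketched with an xUnit-style example (the slugify requirement below is hypothetical, chosen only for illustration): the tests are written first to capture the requirement, and the function is then the minimal implementation that makes them pass.

```python
import unittest

# Test-first sketch: the xUnit-style tests below are written before the
# code, capturing a hypothetical requirement ("turn a title into a
# URL-friendly slug"); the function afterwards makes them pass.

class TestSlugify(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Software Testing"), "software-testing")

    def test_strips_surrounding_whitespace(self):
        self.assertEqual(slugify("  Agile  "), "agile")

# The minimal implementation that satisfies the tests:
def slugify(title):
    return "-".join(title.lower().split())

# Run the tests programmatically:
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestSlugify)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # → True
```

In this workflow the failing tests come first and double as an executable statement of the requirement, which is the sense in which TDD "captures and implements the requirements".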
8.0 Conclusion

In conclusion, software testing has clear purposes. The first is to reduce costly errors. The cost of errors in software can vary from nothing at all to large amounts of money, and even the loss of life. There are hundreds of stories about failures of computer systems that have been attributed to errors in software. There are many reasons why systems fail, but the issue that stands out the most is the lack of adequate testing.

Most of us have had an experience with software that did not work as expected. Software that does not work can have a large impact on an organization. It can lead to many problems, including:

- Loss of money: this can range from losing customers to financial penalties for non-compliance with legal requirements.
- Loss of time: this can be caused by transactions taking a long time to process, but can also include staff being unable to work due to a fault or failure.
- Damage to business reputation: if an organization is unable to provide service to its customers due to software problems, the customers will lose confidence in the organization (and probably take their business elsewhere).
- Injury or death: it might sound dramatic, but some safety-critical systems could cause injuries or deaths if they do not work properly (e.g. air traffic control software).

Testing is an important part of every software development process, no matter which programming paradigm is used. In functional programming, the low-level nature of testing caused it to lack acceptance in parts of the community; publications from the last few years show that the testing of functional programs has eventually received more attention.
References

1. Kaner, Cem. "Exploratory Testing". Florida Institute of Technology, Quality Assurance Institute Worldwide Annual Software Testing Conference, Orlando, FL, November 2006.
2. Pan, Jiantao. "Software Testing". Carnegie Mellon University.
3. Leitner, A.; Ciupa, I.; Oriol, M.; Meyer, B.; Fiva, A. "Contract Driven Development = Test Driven Development - Writing Test Cases". Proceedings of ESEC/FSE'07: European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering 2007, Dubrovnik, Croatia, September 2007.
4. Kaner, Cem; Falk, Jack; Nguyen, Hung Quoc (1999). Testing Computer Software, 2nd ed. New York: John Wiley and Sons. 480 pp. ISBN 0-471-35846-0.
5. Kolawa, Adam; Huizinga, Dorota (2007). Automated Defect Prevention: Best Practices in Software Management. Wiley-IEEE Computer Society Press. pp. 41-43. ISBN 0-470-04212-5.
6. Kolawa, Adam; Huizinga, Dorota (2007). Automated Defect Prevention: Best Practices in Software Management. Wiley-IEEE Computer Society Press. p. 426. ISBN 0-470-04212-5.
7. Section 1.1.2, Certified Tester Foundation Level Syllabus, International Software Testing Qualifications Board.
8. Principle 2, Section 1.3, Certified Tester Foundation Level Syllabus, International Software Testing Qualifications Board.
9. "Test Design: Lessons Learned and Practical Implications". Proceedings of the 5th International Conference on Software Testing and Validation (ICST). Software Competence Center Hagenberg.
10. "Software errors cost U.S. economy $59.5 billion annually". NIST report.
11. McConnell, Steve (2004). Code Complete (2nd ed.). Microsoft Press. p. 29. ISBN 0-7356-1967-0.
12. See D. Gelperin and W.C. Hetzel.
13. Myers, Glenford J. (1979). The Art of Software Testing. John Wiley and Sons. ISBN 0-471-04328-1.
14. Peoples Computer Company (1987). "Dr. Dobb's Journal of Software Tools for the Professional Programmer" (M&T Pub) 12 (1-6): 116.
15. Gelperin, D.; Hetzel, B. (1988). "The Growth of Software Testing". CACM 31 (6). ISSN 0001-0782.
16. Until 1956 was the debugging-oriented period, when testing was often associated with debugging: there was no clear difference between testing and debugging. Gelperin, D.; Hetzel, B. (1988). "The Growth of Software Testing". CACM 31 (6). ISSN 0001-0782.
17. From 1957 to 1978 was the demonstration-oriented period, in which debugging and testing were now distinguished; in this period it was shown that software satisfies the requirements. Gelperin, D.; Hetzel, B. (1988). "The Growth of Software Testing". CACM 31 (6). ISSN 0001-0782.
18. The time between 1979 and 1982 is known as the destruction-oriented period, in which the goal was to find errors. Gelperin, D.; Hetzel, B. (1988). "The Growth of Software Testing". CACM 31 (6). ISSN 0001-0782.
19. 1983-1987 is classified as the evaluation-oriented period: the intention here is that during the software lifecycle a product evaluation is provided and quality is measured. Gelperin, D.; Hetzel, B. (1988). "The Growth of Software Testing". CACM 31 (6). ISSN 0001-0782.
20. From 1988 on it was seen as the prevention-oriented period, in which tests were to demonstrate that software satisfies its specification, to detect faults, and to prevent faults. Gelperin, D.; Hetzel, B. (1988). "The Growth of Software Testing". CACM 31 (6). ISSN 0001-0782.
21. Cornett, Steve. "Introduction". Code Coverage Analysis.
22. Patton, Ron. Software Testing.
23. Laycock, G. T. (1993). The Theory and Practice of Specification Based Software Testing (PostScript). Dept of Computer Science, Sheffield University, UK. Retrieved 2008-02-13.
24. Savenkov, Roman (2008). How to Become a Software Tester. Roman Savenkov Consulting. p. 159. ISBN 978-0-615-23372-7.
25. Patton, Ron. Software Testing.
26. "SOA Testing Tools for Black, White and Gray Box SOA Testing Techniques". Retrieved 2012-12-10.
27. "Visual testing of software". Helsinki University of Technology (PDF). Retrieved 2012-01-13.
28. "Article on visual testing in Test Magazine". Retrieved 2012-01-13.
29. "SWEBOK Guide, Chapter 5". Retrieved 2012-01-13.
30. Binder, Robert V. (1999). Testing Object-Oriented Systems: Objects, Patterns, and Tools. Addison-Wesley Professional. p. 45. ISBN 0-201-80938-9.
31. Beizer, Boris (1990). Software Testing Techniques (2nd ed.). New York: Van Nostrand Reinhold. pp. 21, 430. ISBN 0-442-20672-0.
32. van Veenendaal, Erik. "Standard glossary of terms used in Software Testing". Retrieved 4 January 2013.
33. EtestingHub Online Free Software Testing Tutorial. "Testing Phase in Software Testing". Retrieved 2012-01-13.
34. Myers, Glenford J. (1979). The Art of Software Testing. John Wiley and Sons. pp. 145-146. ISBN 0-471-04328-1.
35. Dustin, Elfriede (2002). Effective Software Testing. Addison Wesley. p. 3. ISBN 0-201-79429-2.
36. Pan, Jiantao (Spring 1999). "Software Testing (18-849b Dependable Embedded Systems)". Topics in Dependable Embedded Systems. Electrical and Computer Engineering Department, Carnegie Mellon University.