Transcript

  • 1. What is software testing
Software testing is the process of executing a program or system with the intent of finding errors. Alternatively, it involves any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results.

Software is not unlike other physical processes where inputs are received and outputs are produced. Where software differs is in the manner in which it fails. Most physical systems fail in a fixed (and reasonably small) set of ways. By contrast, software can fail in many bizarre ways. Detecting all of the different failure modes of software is generally infeasible.

Unlike most physical systems, most of the defects in software are design errors, not manufacturing defects. Software does not suffer from corrosion or wear and tear; generally it will not change until it is upgraded or becomes obsolete. So once the software is shipped, the design defects or bugs remain buried and latent until activation.

Software bugs will almost always exist in any software module of moderate size, not because programmers are careless or irresponsible, but because the complexity of software is generally intractable and humans have only a limited ability to manage complexity. It is also true that for any complex system, design defects can never be completely ruled out.

Discovering the design defects in software is equally difficult, for the same reason of complexity. Because software and digital systems are not continuous, testing boundary values is not sufficient to guarantee correctness. All possible values would need to be tested and verified, but complete testing is infeasible. Exhaustively testing even a simple program that adds two 32-bit integer inputs (yielding 2^64 distinct test cases) would take hundreds of millions of years if tests were performed at a rate of thousands per second. Obviously, for a realistic software module, the complexity can be far beyond this example.
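The arithmetic behind the 2^64 claim is easy to check. A minimal sketch in plain Python, assuming an optimistic rate of 10,000 tests per second:

```python
# Exhaustive testing of add(a, b) for two 32-bit inputs:
# every (a, b) pair is a distinct test case.
test_cases = 2 ** 64            # 2^32 * 2^32 input combinations

# Assumed throughput: 10,000 tests per second (already optimistic).
rate_per_second = 10_000

seconds = test_cases / rate_per_second
years = seconds / (60 * 60 * 24 * 365)

print(f"{test_cases:.3e} cases -> about {years:.2e} years")
# Even at this rate the run takes on the order of 10^7 years,
# far beyond any project's lifetime, so exhaustive testing is out.
```

Raising the test rate by a factor of a thousand only shaves three orders of magnitude off a nineteen-digit number; the conclusion does not change.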
If inputs from the real world are involved, the problem gets worse, because timing, unpredictable environmental effects, and human interactions are all possible input parameters under consideration.
Software Testing Page 1
  • 2. Objectives of testing:
First of all, the objectives should be clear.
- Testing is a process of executing a program with the intent of finding errors.
- To perform testing, test cases are designed. A test case is a particular, made-up (artificial) situation to which a program is exposed so as to find errors. So a good test case is one that finds undiscovered errors.
- If testing is done properly, it uncovers errors, and after fixing those errors we have software that behaves according to its specifications.
- The above objective implies a dramatic change in viewpoint. It runs counter to the commonly held view that a successful test is one in which no errors are found. In fact, our objective is to design tests that systematically uncover different classes of errors, and to do so with a minimum amount of time and effort.

Testing principles:
Before applying methods to design effective test cases, a software engineer must understand the basic principles that guide the software testing process. Some of the most commonly followed principles are:

All tests should be traceable to customer requirements. As the objective of testing is to uncover errors, it follows that the most severe defects (from the customer's point of view) are those that cause the program to fail to meet its requirements.
  • 3. Tests should be planned long before the testing begins. Test planning can begin as soon as the requirements model is complete. Detailed definition of test cases can begin as soon as the design model has been consolidated. Therefore, all tests can be planned and designed before any code is generated.

Exhaustive testing is not possible. The number of path permutations makes it impossible to execute every combination of paths during testing. It is possible, however, to adequately cover program logic and to ensure that all conditions in the procedural design have been exercised.

To be most effective, testing should be conducted by an independent third party. By "most effective" we mean testing that has the highest probability of finding errors (the primary objective of testing).

Test information flow:
Testing is a complete process. For testing we need two types of inputs:
- Software configuration: this includes the software requirements specification, design specification, and source code of the program. The software configuration is required so that testers know what is to be expected and tested.
- Test configuration: this is basically the test plan and procedures. The test configuration is the testing plan, that is, the way the testing will be conducted on the system. It specifies the test cases and their expected values, and whether any tools are to be used for testing. Test cases are required to know what specific situations need to be tested.

When tests are executed, the actual results are compared with the expected results, and if there is some error, debugging is done to correct it. Testing is a way to learn about quality.
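The last step of this flow, comparing actual results against expected values, can be sketched in a few lines. The `divide` function and the cases below are invented for illustration; the point is only that each test case pairs known inputs with an expected value decided in advance, and any mismatch triggers debugging:

```python
def divide(a, b):
    """Unit under test (illustrative)."""
    return a / b

# Test configuration: each case specifies inputs and the expected value.
test_cases = [
    {"inputs": (10, 2), "expected": 5.0},
    {"inputs": (9, 3),  "expected": 3.0},
    {"inputs": (7, 2),  "expected": 3.5},
]

failures = []
for case in test_cases:
    actual = divide(*case["inputs"])
    if actual != case["expected"]:       # compare actual vs expected
        failures.append((case, actual))  # a mismatch means: go debug

print(f"{len(test_cases) - len(failures)} passed, {len(failures)} failed")
```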
  • 4. Different types of testing
1. White box testing
2. Black box testing
3. Unit testing
4. Incremental integration testing
5. Integration testing
6. Functional testing
7. System testing
8. End-to-end testing
9. Sanity testing
10. Regression testing
11. Acceptance testing
12. Load testing
13. Stress testing
14. Performance testing
15. Usability testing
16. Install/uninstall testing
17. Recovery testing
18. Security testing
19. Compatibility testing
20. Comparison testing
21. Beta testing
22. Alpha testing
23. Smoke testing
24. Monkey testing
25. Ad hoc testing
  • 5.
1. Black box testing: Internal system design is not considered in this type of testing. Tests are based on requirements and functionality.
2. White box testing: This testing is based on knowledge of the internal logic of an application's code; it is also known as glass box testing. Internal software and code workings should be known for this type of testing. Tests are based on coverage of code statements, branches, paths, and conditions.
3. Unit testing: Testing of individual software components or modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. It may require developing test driver modules or test harnesses.
4. Incremental integration testing: A bottom-up approach to testing, i.e., continuous testing of an application as new functionality is added. Application functionality and modules should be independent enough to test separately. Done by programmers or by testers.
5. Integration testing: Testing of integrated modules to verify combined functionality after integration. Modules are typically code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
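Unit testing with a test driver (item 3) can be illustrated with Python's built-in `unittest` as the harness. The leap-year function is just an example module; the driver exercises it in isolation:

```python
import unittest

def is_leap_year(year):
    """Module under test: the Gregorian leap-year rule."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearTests(unittest.TestCase):
    """Test driver: exercises the unit in isolation."""

    def test_divisible_by_4(self):
        self.assertTrue(is_leap_year(2024))

    def test_century_not_leap(self):
        self.assertFalse(is_leap_year(1900))  # divisible by 100 but not 400

    def test_divisible_by_400(self):
        self.assertTrue(is_leap_year(2000))

# Drive the tests programmatically (a tiny test driver).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(LeapYearTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(f"ran {result.testsRun} tests, {len(result.failures)} failures")
```

Because only the programmer knows the century-year corner case exists, this kind of test naturally lives with the code rather than with a separate test team.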
  • 6.
6. Functional testing: This type of testing ignores the internal parts and focuses on whether the output is as per requirements or not. It is black-box testing geared to the functional requirements of an application.
7. System testing: The entire system is tested as per the requirements. Black-box testing based on the overall requirements specification, covering all combined parts of a system.
8. End-to-end testing: Similar to system testing; it involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
9. Sanity testing: Testing to determine whether a new software version is performing well enough to accept it for a major testing effort. If the application crashes on initial use, the system is not stable enough for further testing, and the build or application is sent back to be fixed.
10. Regression testing: Testing the application as a whole after a modification to any module or functionality. It is difficult to cover the whole system in regression testing, so automation tools are typically used for this type of testing.
  • 7.
11. Acceptance testing: Normally this type of testing is done to verify that the system meets the customer-specified requirements. The user or customer does this testing to determine whether to accept the application.
12. Load testing: A performance test to check system behavior under load. Testing an application under heavy loads, such as testing a web site under a range of loads to determine at what point the system's response time degrades or fails.
13. Stress testing: The system is stressed beyond its specifications to check how and when it fails. Performed under heavy load, such as entering data beyond storage capacity, running complex database queries, or supplying continuous input to the system or database.
14. Performance testing: A term often used interchangeably with 'stress' and 'load' testing; it checks whether the system meets performance requirements. Different performance and load tools are used for this.
15. Usability testing: A user-friendliness check. The application flow is tested: can a new user understand the application easily, and is proper help documented wherever the user gets stuck? Basically, system navigation is checked in this testing.
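The load-testing idea from item 12 can be sketched as: hit an operation with increasing load and record how response time degrades. Everything here (`handle_request`, the simulated cost model, the load levels) is invented for illustration; a real load test would use a dedicated tool against the deployed system:

```python
import time

def handle_request(pending):
    """Stand-in for the system under test: cost grows with queue length."""
    total = 0
    for i in range(pending * 1000):  # simulated work per pending request
        total += i
    return total

def measure_response(load):
    """Time one request at a given load level."""
    start = time.perf_counter()
    handle_request(load)
    return time.perf_counter() - start

# Step the load up and watch the response time degrade.
timings = {load: measure_response(load) for load in (1, 10, 100)}
for load, seconds in timings.items():
    print(f"load={load:>3}: {seconds * 1000:.2f} ms")
```

The point of the exercise is the curve, not any single number: the load level at which the timing stops being flat is the capacity figure a load test reports.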
  • 8.
16. Install/uninstall testing: Tests full, partial, or upgrade install/uninstall processes on different operating systems under different hardware and software environments.
17. Recovery testing: Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
18. Security testing: Can the system be penetrated by any hacking technique? Testing how well the system protects against unauthorized internal or external access, and checking whether the system and database are safe from external attacks.
19. Compatibility testing: Testing how well software performs in a particular hardware/software/operating system/network environment and in different combinations of the above.
20. Comparison testing: Comparison of a product's strengths and weaknesses with previous versions or other similar products.
21. Alpha testing: An in-house virtual user environment can be created for this type of testing. Testing is done near the end of development; minor design changes may still be made as a result of such testing.
  • 9.
22. Beta testing: Testing typically done by end-users or others; the final testing before releasing the application for commercial purposes.
23. Smoke testing: A term used in plumbing, woodwind repair, electronics, computer software development, infectious disease control, and the entertainment industry. It refers to the first test made after repairs or first assembly to provide some assurance that the system under test will not catastrophically fail. After a smoke test proves that "the pipes will not leak, the keys seal properly, the circuit will not burn, or the software will not crash outright," the assembly is ready for more stressful testing.
24. Monkey testing: Random testing performed by automated testing tools (after the latter are developed by humans). These automated testing tools are considered "monkeys" if they work at random. We call them "monkeys" because it is widely believed that if we allow six monkeys to pound on six typewriters at random for a million years, they will recreate all the works of Isaac Asimov.
(a) Smart monkeys are valuable for load and stress testing; they will find a significant number of bugs, but are also very expensive to develop.
(b) Dumb monkeys are inexpensive to develop and are able to do some basic testing, but they will find few bugs.
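A "dumb monkey" in the sense of item 24 can be a handful of lines: feed random inputs to a function and see whether anything crashes. The `reciprocal` function below is contrived, with a planted bug, so the monkey has something to find:

```python
import random

def reciprocal(x):
    """Function under test -- crashes on x == 0 (the planted bug)."""
    return 1 / x

random.seed(42)                      # reproducible monkey
crashes = []
for _ in range(1000):
    value = random.randint(-10, 10)  # no knowledge of the program: pure noise
    try:
        reciprocal(value)
    except ZeroDivisionError:
        crashes.append(value)

print(f"monkey found {len(crashes)} crashes, all on input 0")
```

This is exactly the dumb-monkey trade-off from (b): cheap to write, finds only the shallow failure modes that random pounding happens to reach.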
  • 10.
25. Ad hoc testing: A commonly used term for software testing performed without planning and documentation. The tests are intended to be run only once, unless a defect is discovered. Ad hoc testing is a part of exploratory testing, being the least formal of test methods. In this view, ad hoc testing has been criticized because it isn't structured, but this can also be a strength: important things can be found quickly. It is performed with improvisation: the tester seeks to find bugs with any means that seem appropriate. It contrasts with regression testing, which looks for a specific issue with detailed reproduction steps and a clear expected result.

Testing types:
a) Manual testing
b) Automation testing

Manual testing
It is the process of manually testing software for defects. It requires a tester to play the role of an end user and use most or all features of the application to ensure correct behavior. To ensure completeness of testing, the tester often follows a written test plan that leads them through a set of important test cases.

Test automation
It is the use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions[1]. Commonly, test automation involves automating a manual process already in place that uses a formalized testing process.
  • 11. Software quality assurance
Software quality assurance (SQA) consists of a means of monitoring the software engineering processes and methods used to ensure quality. The methods by which this is accomplished are many and varied, and may include ensuring conformance to one or more standards, such as ISO 9000, or a model such as CMMI.

SQA encompasses the entire software development process, which includes processes such as requirements definition, software design, coding, source code control, code reviews, change management, configuration management, testing, release management, and product integration. The American Society for Quality offers a Certified Software Quality Engineer (CSQE) certification, with exams held a minimum of twice a year.

SQA includes:
● Defect prevention
– prevents defects from occurring in the first place
– activities: training, planning, and simulation
● Defect detection
– finds defects in a software artifact
– activities: inspections, testing, or measuring
● Defect removal
– isolation, correction, and verification of fixes
– activities: fault isolation, fault analysis, regression testing
● Verification
– are we building the product right?
– performed at the end of a phase to ensure that requirements established during the previous phase have been met
  • 12. ● Validation
– are we building the right product?
– performed at the end of the development process to ensure compliance with product requirements

Objective of SQA
Quality is a key measure of project success. Software producers want to be assured of product quality before delivery. For this, they need to plan and perform a systematic set of activities called Software Quality Assurance (SQA). SQA helps ensure that quality is incorporated into a software product. It aims at preventing errors and detecting them as early as possible. SQA provides confidence to software producers that their product meets the quality requirements. SQA activities include setting up processes and standards, detecting and removing errors, and ensuring that every project performs its SQA activities.

Importance of software quality
● Several historic disasters have been attributed to software:
– 1988 shooting down of an Airbus 320 by the USS Vincennes: cryptic and misleading output displayed by tracking software
– 1991 Patriot missile failure: inaccurate calculation of time due to computer arithmetic errors
– London Ambulance Service Computer Aided Dispatch System: several deaths
– On June 3, 1980, the North American Aerospace Defense Command (NORAD) reported that the U.S. was under missile attack
– The first operational launch attempt of the space shuttle, whose real-time operating software consists of about 500,000 lines of code, failed: a synchronization problem among its flight-control computers
– A 9-hour breakdown of AT&T's long-distance telephone network caused by an untested code patch
  • 13. ● Ariane 5 crash, June 4, 1996
– the maiden flight of the European Ariane 5 launcher crashed about 40 seconds after takeoff
– the loss was about half a billion dollars
– the explosion was the result of a software error: an uncaught exception due to a floating-point error, a conversion from a 64-bit integer to a 16-bit signed integer applied to a larger-than-expected number
– the module was reused without proper testing from Ariane 4
  • 14. – the error was not supposed to happen with Ariane 4, and there was no exception handler
● Mars Climate Orbiter, September 23, 1999
– the Mars Climate Orbiter disappeared as it began to orbit Mars
– cost about US$125 million
– the failure was due to an error in a transfer of information between a team in Colorado and a team in California: one team used English units (e.g., inches, feet, and pounds) while the other used metric units for a key spacecraft operation
● Mars Polar Lander, December 1999
– the Mars Polar Lander disappeared during landing on Mars
– the failure was most likely due to the unexpected setting of a single data bit
– the defect was not caught by testing; independent teams tested separate aspects
● Internet viruses and worms
– Blaster worm (US$525 million)
– Sobig.F (US$500 million – 1 billion)
– these exploit well-known software vulnerabilities; software developers do not devote enough effort to applying lessons learned about the causes of vulnerabilities
  • 15. – the same types of vulnerabilities continue to be seen in newer versions of products that were present in earlier versions
● Usability problems
● Monetary impact of poor software quality (Standish Group, 1995)
– 175,000 software projects per year; average cost per project: large companies US$2,322,000; medium companies US$1,331,000; small companies US$434,000
– 31.1% of projects were canceled before completion, costing $81 billion
– 52.7% of projects exceeded their budget, costing 189% of original estimates, at a cost of $59 billion
– 16.2% of software projects were completed on time and on budget (9% for larger companies)
  • 16. What are test cases
A test case is a set of conditions or variables under which a tester will determine whether an application or software system is working correctly or not. The mechanism for determining whether a software program or system has passed or failed such a test is known as a test oracle. In some settings, an oracle could be a requirement or use case, while in others it could be a heuristic. It may take many test cases to determine that a software program or system is functioning correctly. Test cases are often referred to as test scripts, particularly when written. Written test cases are usually collected into test suites.

Test cases can be:

1. Formal test cases
In order to fully test that all the requirements of an application are met, there must be at least two test cases for each requirement: one positive test and one negative test, unless a requirement has sub-requirements. In that situation, each sub-requirement must have at least two test cases. Keeping track of the link between the requirement and the test is frequently done using a traceability matrix. Written test cases should include a description of the functionality to be tested and the preparation required to ensure that the test can be conducted.

What characterizes a formal, written test case is that there is a known input and an expected output, which are worked out before the test is executed. The known input should test a precondition and the expected output should test a postcondition.
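The "at least two test cases per requirement" rule can be made concrete. Suppose a hypothetical requirement says usernames must be 3–12 alphanumeric characters; a formal pair then fixes a known input and an expected output for both the positive and the negative side, worked out before execution:

```python
def is_valid_username(name):
    """Hypothetical requirement: 3-12 alphanumeric characters."""
    return name.isalnum() and 3 <= len(name) <= 12

# One positive and one negative test case per requirement, each with a
# known input and an expected output decided before the test is run.
formal_cases = [
    ("positive", "alice42", True),   # meets the requirement
    ("negative", "ab",      False),  # too short: violates the requirement
]

for kind, known_input, expected in formal_cases:
    actual = is_valid_username(known_input)
    status = "pass" if actual == expected else "FAIL"
    print(f"{kind}: input={known_input!r} expected={expected} -> {status}")
```

In a traceability matrix, both rows would point back at the same requirement, which is how full requirement coverage is audited.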
  • 17. 2. Informal test cases
For applications or systems without formal requirements, test cases can be written based on the accepted normal operation of programs of a similar class. In some schools of testing, test cases are not written at all, but the activities and results are reported after the tests have been run.

In scenario testing, hypothetical stories are used to help the tester think through a complex problem or system. These scenarios are usually not written down in any detail. They can be as simple as a diagram for a testing environment, or they could be a description written in prose. The ideal scenario test is a story that is motivating, credible, complex, and easy to evaluate. Scenarios are usually different from test cases in that test cases are single steps while scenarios cover a number of steps.
  • 18. Test cases for different modules in ‘Alprus’.

1. Test cases for the ‘Home’ page:

Test case 1
- Test case id: TC_Home Page_001
- Test module: Homepage
- Test summary: Verify that the homepage is displayed after successful login
- Test description: Ensure that the user is able to see the home page after login
- Test data / prerequisite: A valid user should exist in the application
- Steps to follow: 1. Log in to the application. 2. Verify that the home page is displayed.
- Expected results: The user should be able to see the home page after logging in

Test case 2
- Test case id: TC_Home Page_002
- Test module: Homepage
- Test summary: Verify the availability of all tasks in the “home page”
- Test description: Ensure that all the tasks are available at the “home page”
- Test data / prerequisite: A valid user should exist in the application
- Steps to follow: 1. Log in to the application. 2. Verify all the tasks that are available at the “home page”.
- Expected results: The user should be able to see all the tasks; they should be displayed in the “home page”
  • 19. Test case 3
- Test case id: TC_Home Page_003
- Test module: Homepage
- Test summary: Verify that the homepage will display the user name
- Test description: Ensure that the user name should be available throughout the application
- Test data / prerequisite: A valid user should exist in the application
- Steps to follow: 1. Log in to the application. 2. Verify that the user name is displayed on the “Home page”.
- Expected results: The “Home page” should display the name of the existing user

Test case 4
- Test case id: TC_Home Page_004
- Test module: Homepage
- Test summary: Verify the availability of links in the “home page”
- Test description: Ensure that all links are available at the “home page” throughout the application
- Test data / prerequisite: A valid user should exist in the application
- Steps to follow: 1. Log in to the application. 2. Verify all links available at the “home page”.
- Expected results: Links should be available at the home page
  • 20. Test case 5
- Test case id: TC_Home Page_005
- Test module: Homepage
- Test summary: Verify the functionality of the “logout” button throughout the application
- Test description: Ensure that the “logout” button should be clickable and log the user out of the application
- Test data / prerequisite: A valid user should exist in the application
- Steps to follow: 1. Log in to the application. 2. Click on the “logout” button.
- Expected results: The user should be logged out of the application

Test case 6
- Test case id: TC_Home Page_006
- Test module: Homepage
- Test summary: Verify the availability of the “logout” button throughout the application
- Test description: Ensure that the logout button should be present and log the user out of the application
- Test data / prerequisite: A valid user should exist in the application
- Steps to follow: 1. Log in to the application. 2. Verify the availability of the “logout” button.
- Expected results: The “Logout” button should be displayed and log the user out of the application
  • 21. Test case 7
- Test case id: TC_Home Page_007
- Test module: Homepage
- Test summary: Verify the functionality of the “home” link
- Test description: Ensure that clicking on the “home” link takes the user to the home page
- Test data / prerequisite: A valid user should exist in the application
- Steps to follow: 1. Log in to the application. 2. Click on the “home” link and navigate to the “home page”.
- Expected results: The user should be able to see the “home page”

Test case 8
- Test case id: TC_Home Page_008
- Test module: Homepage
- Test summary: Verify the availability of the “search” button
- Test description: Ensure that the “Search” button should be available at the home page
- Test data / prerequisite: A valid user should exist in the application
- Steps to follow: 1. Log in to the application. 2. Click on the “home” link and navigate to the “home page”.
- Expected results: The “Search” button should be displayed on the “Home page”
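Test case tables like the one above map naturally onto a small record type. A sketch, with field names following the table's columns (a TCMS would store records of roughly this shape):

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """One row of a test case table (columns as in the 'Home' page table)."""
    test_case_id: str
    test_module: str
    test_summary: str
    test_description: str
    prerequisite: str
    steps_to_follow: list = field(default_factory=list)
    expected_results: str = ""

tc1 = TestCase(
    test_case_id="TC_Home Page_001",
    test_module="Homepage",
    test_summary="Verify that the homepage is displayed after successful login",
    test_description="Ensure that the user can see the home page after login",
    prerequisite="A valid user should exist in the application",
    steps_to_follow=["Log in to the application",
                     "Verify that the home page is displayed"],
    expected_results="The user should see the home page after logging in",
)

print(tc1.test_case_id, "-", tc1.test_summary)
```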
  • 23. Tools and Technologies used in ALPRUS

Tools used:
- TCMS (Test Case Management System)
- Bugzilla
- QTP

TCMS:
A Test Case Management System (TCMS) is meant to be a communications medium through which engineering teams coordinate their efforts. More specifically, it allows BlackBox QA, WhiteBox QA, Automation, and Development to be a cohesive force in ensuring testing completeness with minimal effort or overhead. The end result is higher-quality deliverables in the same time frame, and better visibility into the testing efforts on a given project.

A TCMS will only help coordinate the process; it does not implement the process itself. This document details the individual groups directly involved in this process and how they interact. This sets up the high-level concepts which effective usage of the TCMS relies upon, and gives a better overall understanding of the requirements for the underlying implementation.

Requirements
The TCMS has a concept of scenarios and configurations. In this context, a scenario is a physical topology, and a configuration is the software and/or hardware a given test case will be executed on. This information must come from a requirements document that specifies the expected scenarios, configurations, and functionality that the product deliverable will be expected to support. A requirements document with this information is a necessity for the TCMS to be used effectively by BlackBox QA and Development.
  • 24. BlackBox QA
BlackBox QA creates test cases based upon their high-level knowledge of the product, and executes test cases. Test cases also come from Development, WhiteBox QA, and elsewhere, and BlackBox QA executes those as well. All test cases are funneled into the TCMS, a central repository for this information. On a given build, a BlackBox QA engineer will execute the test cases assigned to him or her, and update the Last Build Tested information to reflect that work. With this information, management can create a simple query to gauge the testing status of a given project and redeploy effort as necessary. If a given test case fails, the engineer can then easily submit a defect containing the test case information. If a reported defect has a test case that is not in the TCMS, a BlackBox QA engineer can transfer the test case information from the defect tracking system into the TCMS.

Automation
The main job of the Automation team is to automate the execution of test cases for the purpose of increasing code coverage per component. Once a given project has entered the "alpha" stage (functionality/code complete), release milestones (betas, release candidates, etc.) are then based upon the amount of code coverage per component in the automated test suite. For instance, a goal is set for a minimum of 50% code coverage per component before a beta candidate can be considered. This may seem as though the Automation team would then be the bottleneck for release milestones, but this is not the case. Automation requires that test cases be supplied that sufficiently exercise code, and works from there. As was stated before, all sections of engineering supply test cases; if Automation has automated all test cases and has not met the goal for a given milestone, other sections of engineering (WhiteBox QA, BlackBox QA, Development) need to supply more test cases to be automated. This is not to say that Automation is helpless; they can supply test cases as well.
  • 25. The three groups mentioned so far (BlackBox QA, WhiteBox QA, and Automation) are given a synergy by the TCMS whereby a feedback loop is created. For clarity, the loop is:

1. BlackBox QA (and Development) record test cases into the TCMS, which the Automation team then automates and generates code coverage data for.
2. When BlackBox testing yields no more code coverage, WhiteBox QA analyzes output from the code coverage tool to supply test cases that exercise heretofore untested code paths.
3. The test cases supplied by WhiteBox QA are then approved by BlackBox QA, and the cycle begins again.

This feedback loop has a "snowball rolling downhill" effect in regard to code coverage, which is why it is logical to partially base release milestones upon those metrics.

Development
Development's role in the TCMS is simply to supply and critique test cases. The owner of a given component should review the test cases in the TCMS for their component and supply test cases, or information/training to QA, to fill in any gaps he or she sees. Component owners should also have a goal of supplying a given number of test cases for the alpha release milestone. This way, BlackBox QA and Automation have something to work from initially and can provide more immediate results.
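The milestone rule described above ("at least 50% code coverage per component before a beta candidate") reduces to a simple gate. The component names and coverage figures below are invented for illustration:

```python
# Per-component coverage from the automated suite (invented figures).
coverage = {"parser": 72.0, "network": 55.5, "ui": 41.0}

BETA_GOAL = 50.0  # minimum percent per component for a beta candidate

def beta_ready(coverage_by_component, goal):
    """A beta candidate is considered only if every component meets the goal."""
    return {name: pct >= goal for name, pct in coverage_by_component.items()}

status = beta_ready(coverage, BETA_GOAL)
blockers = [name for name, ok in status.items() if not ok]
print("beta ready" if not blockers else f"blocked by: {', '.join(blockers)}")
```

Note that the gate is per component, not an overall average: a weak component cannot hide behind well-covered ones, which is what drives the feedback loop back to WhiteBox QA for more test cases.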
  • 26. Roles in a Cycle
This table documents all of the aforementioned groups' roles in a given product release cycle. The only solid definitions necessary are that the "alpha" release is functionality complete, and that each release milestone has an incremental code coverage goal.

Pre-Alpha
- Development: designing; implementing design and functionality; documenting design
- BlackBox QA: research/study on product technologies
- WhiteBox QA: reviewing code; providing feedback
- Automation: N/A

Alpha
- Development: provides architecture/product overview; fixes bugs
- BlackBox QA: manual execution of test cases; test case creation; reporting defects
- WhiteBox QA: begins running code/runtime analysis tools; reporting defects
- Automation: supplies initial test cases; automating supplied test cases in the TCMS

Beta
- Development: bug fixing; test case creation
- BlackBox QA: manual execution of test cases; test case creation; defect reporting
- WhiteBox QA: integrating code/runtime analysis tools into the automated test suite; reporting defects; ensuring adherence to documented design; test case creation
- Automation: must report at least X percent code coverage per component; repeat cycle until met

Release
- Development: bug fixing; test case creation
- BlackBox QA: manual execution of test cases; test case creation; defect reporting
- WhiteBox QA: analysing output of code/runtime analysis tools in the automated test suite; reporting defects; ensuring adherence to documented design; code review; test case creation
- Automation: must report at least X plus 20 percent code coverage per component; repeat cycle until met
  • 27. Bugzilla:
Bugzilla is a web-based general-purpose bug tracker and testing tool originally developed and used by the Mozilla project, and licensed under the Mozilla Public License. Released as open source software by Netscape Communications in 1998, it has been adopted by a variety of organizations for use as a defect tracker for both free and open source software and proprietary products.

Bugzilla's system requirements include:
- A compatible database management system
- A suitable release of Perl 5
  • 28.
- An assortment of Perl modules
- A compatible web server
- A suitable mail transfer agent, or any SMTP server

Bugzilla boasts many advanced features:
- Powerful searching
- User-configurable email notifications of bug changes
- Full change history
- Inter-bug dependency tracking and graphing
- Excellent attachment management
- Integrated, product-based, granular security schema
- Fully security-audited, and runs under Perl's taint mode
- A robust, stable RDBMS back-end
- Web, XML, email and console interfaces
- Completely customisable and/or localisable web user interface
  • 29.
- Extensive configurability
- Smooth upgrade pathway between versions

The life cycle of a Bugzilla bug
  • 30. QTP
QuickTest Professional is automated testing software designed for testing various software applications and environments. It performs functional and regression testing through a user interface such as a native GUI or web interface. It works by identifying the objects in the application user interface or a web page and performing desired operations (such as mouse clicks or keyboard events); it can also capture object properties like name or handler ID. QuickTest Professional uses the VBScript scripting language to specify the test procedure and to manipulate the objects and controls of the application under test. To perform more sophisticated actions, users may need to manipulate the underlying VBScript.

Although QuickTest Professional is usually used for "UI-based" test case automation, it can also automate some "non-UI" test cases such as file system operations and database testing.

QTP performs the following tasks:

• Verification
Checkpoints verify that an application under test functions as expected. You can add a checkpoint to check whether a particular object, text, or bitmap is present in the automation run. Checkpoints verify that during the course of test execution, the actual application behavior or state is consistent with the expected application behavior or state. QuickTest Professional offers 10 types of checkpoints, enabling users to verify various aspects of an application under test, such as: the properties of an object, data within a table, records within a database, a bitmap image, or the text on an application screen. The types of checkpoints are standard, image, table, page, text, text area, bitmap, database, accessibility, and XML checkpoints. Users can also create user-defined checkpoints.
  • 31.
   • Exception handling
     QuickTest Professional manages exception handling using recovery scenarios; the goal is to continue running tests if an unexpected failure occurs. For example, if an application crashes and a message dialog appears, QuickTest Professional can be instructed to attempt to restart the application and continue with the rest of the test cases from that point. Because QuickTest Professional hooks into the memory space of the applications being tested, some exceptions may cause QuickTest Professional to terminate and be unrecoverable.
   • Data-driven testing
     QuickTest Professional supports data-driven testing. For example, data can be output to a data table for reuse elsewhere. Data-driven testing is implemented as a Microsoft Excel workbook that can be accessed from QuickTest Professional. QuickTest Professional has two types of data tables: the Global data sheet and Action (local) data sheets. The test steps can read data from these data tables in order to drive variable data into the application under test, and verify the expected result.
   • Automating custom and complex UI objects
     QuickTest Professional may not recognize customized user interface objects and other complex objects. Users can define these types of objects as virtual objects. QuickTest Professional does not support virtual objects for analog recording or recording in low-level mode.
   • Extensibility
     QuickTest Professional can be extended with separate add-ins for a number of development environments that are not supported out of the box. QuickTest Professional add-ins include support for Web, .NET, Java, and Delphi. QuickTest Professional and the QuickTest Professional add-ins are packaged together in HP
  • 32. Functional Testing software.
   • Test results
     At the end of a test, QuickTest Professional generates a test result. Using an XML schema, the test result indicates whether a test passed or failed, shows error messages, and may provide supporting information that allows users to determine the underlying cause of a failure. Release 10 lets users export QuickTest Professional test results into HTML, Microsoft Word or PDF report formats. Reports can include images and screen shots for use in reproducing errors.
    User interface
    QuickTest Professional provides two views of a test script, and two ways to modify it: Keyword View and Expert View. These views enable QuickTest Professional to act as an IDE for the test, and QuickTest Professional includes many standard IDE features, such as breakpoints to pause a test at predetermined places.
   • Keyword View
     Keyword View lets users create and view the steps of a test in a modular, table format. Each row in the table represents a step that can be modified. The Keyword View can also contain any of the following columns: Item, Operation, Value, Assignment, Comment, and Documentation. For every step in the Keyword View, QuickTest Professional displays a corresponding line of script based on the row and column value. Users can add, delete or modify steps at any point in the test.
   • Expert View
     In Expert View, QuickTest Professional lets users display and edit a test's source code using VBScript.
  • 33. Designed for more advanced users, Expert View lets users edit all test actions except for the root Global action, and changes are synchronized with the Keyword View.
   • Languages
     QuickTest Professional uses VBScript as its scripting language. VBScript supports classes but not polymorphism or inheritance. Compared with Visual Basic for Applications (VBA), VBScript lacks the ability to use some Visual Basic keywords, does not come with an integrated debugger, lacks an event handler, and does not have a forms editor. QuickTest Professional adds a debugger, but the functionality is more limited than in testing tools that integrate a full-featured IDE, such as those provided with VBA, Java, or VB.NET.
Technologies QTP supports:
 1. Web
 2. Java (Core and Advanced)
 3. .Net
 4. WPF
 5. SAP
 6. Oracle
 7. Siebel
 8. PeopleSoft
 9. Delphi
 10. Power Builder
  • 34.
 11. Stingray
 12. Terminal Emulator
 13. Flex
 14. Mainframe terminal emulators
Versions:
 1. 10.0 - Released in 2009
 2. 9.5 - Released in 2007
 3. 9.2 - Released in 2007
 4. 9.0 - Released in 2006
 5. 8.2 - Released in 2005
 6. 8.0 - Released in 2004
 7. 7.0 - Never released
 8. 6.5 - Released in 2003
 9. 6.0 - Released in 2002
 10. 5.5 - First release. Released in 2001
Technologies used in ALPRUS:
Manual testing:
It is the process of manually testing software for defects. It requires a tester to play the role of an end user and use most of the application's features to ensure correct behavior. To ensure completeness of testing, the tester often follows a written test plan that leads them through a set of important test cases.
  • 35. For small-scale engineering efforts (including prototypes), exploratory testing may be sufficient. With this informal approach, the tester does not follow any rigorous testing procedure, but rather explores the user interface of the application using as many of its features as possible, using information gained in prior tests to intuitively derive additional tests. The success of exploratory manual testing relies heavily on the domain expertise of the tester, because a lack of knowledge will lead to incompleteness in testing. One of the key advantages of an informal approach is the intuitive insight it gives into how it feels to use the application.
Large-scale engineering projects that rely on manual software testing follow a more rigorous methodology in order to maximize the number of defects that can be found. A systematic approach focuses on predetermined test cases and generally involves the following steps:[1]
   • Choose a high-level test plan, where a general methodology is chosen and resources such as people, computers, and software licenses are identified and acquired.
   • Write detailed test cases, identifying clear and concise steps to be taken by the tester, with expected outcomes.
   • Assign the test cases to testers, who manually follow the steps and record the results.
   • Author a test report, detailing the findings of the testers. The report is used by managers to determine whether the software can be released, and if not, it is used by engineers to identify and correct the problems.
Automation testing:
An automated software testing tool is able to play back pre-recorded and predefined actions, compare the results to the expected behavior and report the success or failure of these manual tests to a test engineer. Once automated tests are created they can easily be repeated, and they can be extended to perform tasks impossible with manual testing. Because of this, savvy managers have found that automated software testing is an essential component of successful
  • 36. development projects. Automated software testing has long been considered critical for big software development organizations but is often thought to be too expensive or difficult for smaller companies to implement. AutomatedQA's TestComplete is affordable enough for single-developer shops and yet powerful enough that our customer list includes some of the largest and most respected companies in the world. Companies like Corel, Intel, Adobe, Autodesk, Intuit, McDonald's, Motorola, Symantec and Sony all use TestComplete. What makes automated software testing so important to these successful companies?
Automated Software Testing Saves Time and Money
Software tests have to be repeated often during development cycles to ensure quality. Every time source code is modified, software tests should be repeated. For each release of the software it may be tested on all supported operating systems and hardware configurations. Manually repeating these tests is costly and time consuming. Once created, automated tests can be run over and over again at no additional cost, and they are much faster than manual tests. Automated software testing can reduce the time to run repetitive tests from days to hours - a time savings that translates directly into cost savings.
Automated Software Testing Improves Accuracy
Even the most conscientious tester will make mistakes during monotonous manual testing. Automated tests perform the same steps precisely every time they are executed and never forget to record detailed results.
Automated Software Testing Increases Test Coverage
Automated software testing can increase the depth and scope of tests to help improve software quality. Lengthy tests that are often avoided during manual testing can be run unattended. They can even be run on multiple computers
  • 37. with different configurations. Automated software testing can look inside an application and see memory contents, data tables, file contents, and internal program states to determine whether the product is behaving as expected. Automated software tests can easily execute thousands of different complex test cases during every test run, providing coverage that is impossible with manual tests. Testers freed from repetitive manual tests have more time to create new automated software tests and deal with complex features.
Automated Software Testing Does What Manual Testing Cannot
Even the largest software departments cannot perform a controlled web application test with thousands of users. Automated testing can simulate tens, hundreds or thousands of virtual users interacting with network or web software and applications.
Implementation:
Software Testing Life Cycle:
The Software Testing Life Cycle consists of seven (generic) phases:
   Test Planning
   Test Analysis
   Test Design
   Construction and Verification
   Testing Cycles
   Final Testing and Implementation
   Post Implementation
Software testing has its own life cycle that intersects with every stage of the SDLC. The basic requirement in the software testing life cycle is to control/deal with software testing - Manual, Automated and Performance.
  • 38. Test Planning:
This is the phase where the Project Manager has to decide what things need to be tested, whether the appropriate budget is available, and so on. Naturally, proper planning at this stage would greatly reduce the risk of low-quality software. This planning will be an ongoing process with no end point.
Activities at this stage would include preparation of a high-level test plan (according to the IEEE test plan template). The Software Test Plan (STP) is designed to prescribe the scope, approach, resources, and schedule of all testing activities. The plan must identify the items to be tested, the features to be tested, the types of testing to be performed, the personnel responsible for testing, the resources and schedule required to complete testing, and the risks associated with the plan. Almost all of the activities done during this stage are included in this software test plan and revolve around a test plan.
Test Analysis:
Once the test plan is made and decided upon, the next step is to delve a little more into the project and decide what types of testing should be carried out at different stages of the SDLC, whether we need or plan to automate and, if yes, when the appropriate time to automate is, and what type of specific documentation is needed for testing.
Proper and regular meetings should be held between testing teams, project managers, development teams and Business Analysts to check the progress of things, which will give a fair idea of the movement of the project and ensure the completeness of the test plan created in the planning phase, which will further help in enhancing the right testing strategy created earlier. We will start creating test case formats and the test cases themselves. In this stage we need to develop a Functional Validation Matrix based on Business Requirements to ensure that all system requirements are covered by one or more test cases, identify which test cases to automate, and begin review of documentation, i.e. Functional Design,
  • 39. Business Requirements, Product Specifications, Product Externals etc. We also have to define areas for Stress and Performance testing.
Test Design:
Test plans and cases which were developed in the analysis phase are revised. The Functional Validation Matrix is also revised and finalized. In this stage risk assessment criteria are developed. If you have thought of automation then you have to select which test cases to automate and begin writing scripts for them. Test data is prepared. Standards for unit testing and pass/fail criteria are defined here. The schedule for testing is revised (if necessary) and finalized, and the test environment is prepared.
Construction and Verification:
In this phase we have to complete all the test plans and test cases, complete the scripting of the automated test cases, and complete the Stress and Performance testing plans. We have to support the development team in their unit testing phase, and obviously bug reporting is done as and when bugs are found. Integration tests are performed and errors (if any) are reported.
Testing Cycles:
In this phase we have to complete testing cycles until test cases are executed without errors or a predefined condition is reached. Run test cases --> Report Bugs --> revise test cases (if needed) --> add new test cases (if needed) --> bug fixing --> retesting (test cycle 2, test cycle 3...).
Final Testing and Implementation:
  • 40. In this phase we have to execute the remaining stress and performance test cases, complete and update the documentation for testing, and provide and complete the different matrices for testing. Acceptance, load and recovery testing will also be conducted, and the application needs to be verified under production conditions.
Post Implementation:
In this phase, the testing process is evaluated and lessons learnt from that testing process are documented. A line of attack to prevent similar problems in future projects is identified, and plans are created to improve the processes. The recording of new errors and enhancements is an ongoing process. Cleaning up of the test environment is done and test machines are restored to baselines in this stage.
Bug
A software bug is the common term used to describe an error, flaw, mistake, failure, or fault in a computer program or system that produces an incorrect or unexpected result, or causes it to behave in unintended ways. Most bugs arise from mistakes and errors made by people in either a program's source code or its design, and a few are caused by compilers producing incorrect code. A program that contains a large number of bugs, and/or bugs that seriously interfere with its functionality, is said to be buggy. Reports detailing bugs in a program are commonly known as bug reports, fault reports, problem reports, trouble reports, change requests, and so forth.
Arithmetic bugs
 * Division by zero
 * Arithmetic overflow or underflow
 * Loss of arithmetic precision due to rounding or numerically unstable algorithms
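Two of the arithmetic bug classes listed above can be reproduced in a few lines. A short illustration in Python (the specific numbers are arbitrary):

```python
# Division by zero: averaging an empty list divides by len([]) == 0.
def average(values):
    return sum(values) / len(values)

try:
    average([])
except ZeroDivisionError as exc:
    print("arithmetic bug caught:", exc)

# Loss of precision due to rounding: 0.1 has no exact binary floating-
# point representation, so summing it ten times does not give exactly 1.0.
total = sum(0.1 for _ in range(10))
print(total == 1.0)             # False: exact comparison fails
print(abs(total - 1.0) < 1e-9)  # True: compare with a tolerance instead
```

Overflow behaves differently per language: Python integers grow without bound, while fixed-width integers in C or Java silently wrap around.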
  • 41. Logic bugs
 * Infinite loops and infinite recursion
 * Off-by-one error: counting one too many or too few when looping
Syntax bugs
 * Use of the wrong operator, such as performing assignment instead of an equality test. In simple cases this is often warned about by the compiler; in many languages it is deliberately guarded against by language syntax.
Resource bugs
 * Null pointer dereference
 * Using an uninitialized variable
 * Using an otherwise valid instruction on the wrong data type (see packed decimal/binary coded decimal)
 * Access violations
 * Resource leaks, where a finite system resource such as memory or file handles is exhausted by repeated allocation without release
 * Buffer overflow, in which a program tries to store data past the end of allocated storage. This may or may not lead to an access violation or storage violation. These bugs can form a security vulnerability.
 * Excessive recursion which, though logically valid, causes stack overflow
Multi-threading programming bugs
 * Deadlock
  • 42. * Race condition
 * Concurrency errors in critical sections, mutual exclusions and other features of concurrent processing. Time-of-check-to-time-of-use (TOCTOU) is a form of unprotected critical section.
Teamworking bugs
 * Unpropagated updates; e.g. a programmer changes "myAdd" but forgets to change "mySubtract", which uses the same algorithm. These errors are mitigated by the Don't Repeat Yourself philosophy.
 * Comments out of date or incorrect: many programmers assume the comments accurately describe the code
 * Differences between documentation and the actual product
Bugs in popular culture
 * In the 1968 novel 2001: A Space Odyssey (and the corresponding 1968 film), a spaceship's onboard computer, HAL 9000, attempts to kill all its crew members. In the follow-up 1982 novel, 2010: Odyssey Two, and the accompanying 1984 film, 2010, it is revealed that this action was caused by the computer having been programmed with two conflicting objectives: to fully disclose all its information, and to keep the true purpose of the flight secret from the crew; this conflict caused HAL to become paranoid and eventually homicidal.
 * In the 1984 song 99 Red Balloons (though not in the original German version), "bugs in the software" lead to a computer mistaking a group of balloons for a nuclear missile and starting a nuclear war.
 * The 2004 novel The Bug, by Ellen Ullman, is about a programmer's attempt to find an elusive bug in a database application.
Effects of Bugs:
  • 43. Bugs trigger Type I and Type II errors that can in turn have a wide variety of ripple effects, with varying levels of inconvenience to the user of the program. Some bugs have only a subtle effect on the program's functionality, and may thus lie undetected for a long time. More serious bugs may cause the program to crash or freeze, leading to a denial of service. Others qualify as security bugs and might, for example, enable a malicious user to bypass access controls in order to obtain unauthorized privileges.
The results of bugs may be extremely serious. Bugs in the code controlling the Therac-25 radiation therapy machine were directly responsible for some patient deaths in the 1980s. In 1996, the European Space Agency's US$1 billion prototype Ariane 5 rocket was destroyed less than a minute after launch, due to a bug in the on-board guidance computer program. In June 1994, a Royal Air Force Chinook crashed into the Mull of Kintyre, killing 29. This was initially dismissed as pilot error, but an investigation by Computer Weekly uncovered sufficient evidence to convince a House of Lords inquiry that it may have been caused by a software bug in the aircraft's engine control computer.[1]
In 2002, a study commissioned by the US Department of Commerce National Institute of Standards and Technology concluded that software bugs, or errors, are so prevalent and so detrimental that they cost the US economy an estimated $59 billion annually, or about 0.6 percent of the gross domestic product.
How to prevent bugs
   • Programming style
     While typos in the program code are often caught by the compiler, a bug usually appears when the programmer makes a logic error. Various innovations in programming style and defensive programming are designed to make these bugs less likely, or easier to spot. In some programming languages, so-called typos, especially of symbols or logical/mathematical operators, actually represent logic errors, since
  • 44. the mistyped constructs are accepted by the compiler with a meaning other than that which the programmer intended.
   • Programming techniques
     Bugs often create inconsistencies in the internal data of a running program. Programs can be written to check the consistency of their own internal data while running. If an inconsistency is encountered, the program can immediately halt, so that the bug can be located and fixed. Alternatively, the program can simply inform the user, attempt to correct the inconsistency, and continue running.
   • Development methodologies
     There are several schemes for managing programmer activity so that fewer bugs are produced. Many of these fall under the discipline of software engineering (which addresses software design issues as well). For example, formal program specifications are used to state the exact behavior of programs, so that design bugs can be eliminated. Unfortunately, formal specifications are impractical or impossible for anything but the shortest programs, because of problems of combinatorial explosion and indeterminacy.
   • Programming language support
     Programming languages often include features which help programmers prevent bugs, such as static type systems, restricted namespaces and modular programming, among others. For example, when a programmer writes (pseudocode) LET REAL_VALUE PI = "THREE AND A BIT", although this may be syntactically correct, the code fails a type check. Depending on the language and implementation, this may be caught by the compiler or at runtime. In addition, many recently invented languages have deliberately excluded features which can easily lead to bugs, at the expense of making code slower than it need be: the general principle being that, because of Moore's law, computers get faster and software engineers get slower; it is almost always better to write simpler, slower code than "clever", inscrutable code, especially considering that maintenance cost is considerable.
For example, the Java programming language does not support pointer arithmetic; implementations of some languages such as Pascal, and scripting languages, often have runtime bounds checking of arrays, at least in a debugging build.
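The LET REAL_VALUE PI pseudocode above has a close analogue in most languages. A sketch in Python (chosen here purely for illustration): with a type annotation the mismatch can be flagged before the program runs by a static checker such as mypy; without one, it surfaces only as a runtime TypeError.

```python
# Analogue of LET REAL_VALUE PI = "THREE AND A BIT": binding a string
# where a real number is expected. With the annotation below, a static
# type checker (e.g. mypy) would flag the commented-out line before the
# program ever runs.
pi: float = 3.14159
# pi: float = "three and a bit"   # mypy: incompatible types (str -> float)

def circumference(radius: float) -> float:
    return 2 * pi * radius

# Without static checking, the bug surfaces only at run time:
try:
    circumference("three and a bit")   # float * str is not defined
except TypeError as exc:
    print("type error caught at run time:", exc)
```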
  • 45.
   • Code analysis
     Tools for code analysis help developers by inspecting the program text beyond the compiler's capabilities to spot potential problems. Although in general the problem of finding all programming errors given a specification is not solvable (see halting problem), these tools exploit the fact that human programmers tend to make the same kinds of mistakes when writing software.
   • Instrumentation
     Tools to monitor the performance of the software as it is running, either specifically to find problems such as bottlenecks or to give assurance as to correct working, may be embedded in the code explicitly (perhaps as simple as a statement saying PRINT "I AM HERE"), or provided as tools. It is often a surprise to find where most of the time is taken by a piece of code, and this removal of assumptions might cause the code to be rewritten.
Bug management:
It is common practice for software to be released with known bugs that are considered non-critical, that is, that do not affect most users' main experience with the product. While software products may, by definition, contain any number of unknown bugs, measurements during testing can provide an estimate of the number of likely bugs remaining; this becomes more reliable the longer a product is tested and developed ("if we had 200 bugs last week, we should have 100 this week").
Most big software projects maintain two lists of "known bugs": those known to the software team, and those to be told to users. This is not dissimulation, but users are not concerned with the internal workings of the product. The second list informs users about bugs that are not fixed in the current release, or not fixed at all, and a workaround may be offered. There are various reasons for not fixing bugs:
   • The developers often don't have time, or it is not economical to fix all non-severe bugs.
   • The bug could be fixed in a new version or patch that is not yet released.
  • 46.
   • The changes to the code required to fix the bug could be large, expensive, or delay finishing the project.
   • Even seemingly simple fixes bring the chance of introducing new unknown bugs into the system. At the end of a test/fix cycle some managers may only allow the most critical bugs to be fixed.
   • Users may be relying on the undocumented, buggy behavior, especially if scripts or macros rely on a behavior; fixing it may introduce a breaking change.
   • It's "not a bug": a misunderstanding has arisen between expected and provided behavior.
It is often considered impossible to write completely bug-free software of any real complexity. So bugs are categorized by severity, and low-severity non-critical bugs are tolerated, as they do not affect the proper operation of the system for most users. NASA's SATC managed to reduce the number of errors to fewer than 0.1 per 1000 lines of code (SLOC), but this was not felt to be feasible for any real-world projects.
The severity of a bug is not the same as its importance for fixing, and the two should be measured and managed separately. On a Microsoft Windows system a blue screen of death is rather severe, but if it only occurs in extreme circumstances, especially if they are well diagnosed and avoidable, it may be less important to fix than an icon not representing its function well, which, though purely aesthetic, may confuse thousands of users every single day. This balance, of course, depends on many factors; expert users have different expectations from novices, a niche market is different from a general consumer market, and so on.
A school of thought popularized by Eric S. Raymond as Linus's Law says that popular open-source software has more chance of having few or no bugs than other software, because "given enough eyeballs, all bugs are shallow".
This assertion has been disputed, however: computer security specialist Elias Levy wrote that "it is easy to hide vulnerabilities in complex, little understood and undocumented source code," because "even if people are reviewing the code, that doesn't mean they're qualified to do so."
  • 47. Bug management must be conducted carefully and intelligently because "what gets measured gets done", and managing purely by bug counts can have unintended consequences. If, for example, developers are rewarded by the number of bugs they fix, they will naturally fix the easiest bugs first, leaving the hardest, and probably most risky or critical, to the last possible moment.
Debugging:
Finding and fixing bugs, or "debugging", has always been a major part of computer programming. As computer programs grow more complex, bugs become more common and difficult to fix. Often programmers spend more time and effort finding and fixing bugs than writing new code. Software testers are professionals whose primary task is to find bugs, or write code to support testing. On some projects, more resources can be spent on testing than on developing the program.
Usually, the most difficult part of debugging is finding the bug in the source code. Once it is found, correcting it is usually relatively easy. Programs known as debuggers exist to help programmers locate bugs by executing code line by line, watching variable values, and other features to observe program behavior. Without a debugger, code can be added so that messages or values can be written to a console (for example with printf in the C language) or to a window or log file to trace program execution or show values.
However, even with the aid of a debugger, locating bugs is something of an art. It is not uncommon for a bug in one section of a program to cause failures in a completely different section, thus making it especially difficult to track (for example, an error in a graphics rendering routine causing a file I/O routine to fail) in an apparently unrelated part of the system.
Sometimes, a bug is not an isolated flaw, but represents an error of thinking or planning on the part of the programmer. Such logic errors require a section of the program to be overhauled or rewritten. As a part of code
  • 48. review, stepping through the code and modelling the execution process in one's head or on paper can often find these errors without ever needing to reproduce the bug as such, if it can be shown there is some faulty logic in its implementation.
But more typically, the first step in locating a bug is to reproduce it reliably. Once the bug is reproduced, the programmer can use a debugger or some other tool to monitor the execution of the program in the faulty region, and find the point at which the program went astray.
It is not always easy to reproduce bugs. Some are triggered by inputs to the program which may be difficult for the programmer to re-create. One cause of the Therac-25 radiation machine deaths was a bug (specifically, a race condition) that occurred only when the machine operator very rapidly entered a treatment plan; it took days of practice to become able to do this, so the bug did not manifest in testing or when the manufacturer attempted to duplicate it. Other bugs may disappear when the program is run with a debugger; these are heisenbugs (humorously named after the Heisenberg uncertainty principle).
Debugging is still a tedious task requiring considerable effort. Since the 1990s, particularly following the Ariane 5 Flight 501 disaster, there has been a renewed interest in the development of effective automated aids to debugging.
There are also classes of bugs that have nothing to do with the code itself. For example, if one relies on faulty documentation or hardware, the code may be written perfectly properly to what the documentation says, but the bug truly lies in the documentation or hardware, not the code. However, it is common to change the code instead of the other parts of the system, as the cost and time to change it is generally less. Embedded systems frequently have workarounds for hardware bugs, since to make a new version of a ROM is much cheaper than remanufacturing the hardware, especially if they are commodity items.
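The console and log-file tracing mentioned above (printf-style debugging) can be sketched briefly. A Python version using the standard logging module; the function being traced is a made-up example:

```python
# Printf-style debugging: trace messages record values and control flow
# so the point where a program goes astray can be located after the fact.
# The function under investigation here is hypothetical.
import logging

logging.basicConfig(level=logging.DEBUG, format="%(levelname)s: %(message)s")
log = logging.getLogger("trace")

def find_first_negative(values):
    log.debug("entered with %d values", len(values))
    for i, v in enumerate(values):
        log.debug("index %d holds %r", i, v)
        if v < 0:
            log.debug("first negative found at index %d", i)
            return i
    log.debug("no negative value present")
    return -1

print(find_first_negative([3, 7, -2, 5]))
```

Directing the same messages to a log file instead of the console (via `logging.FileHandler`) leaves a trace that can be inspected after a crash, when an interactive debugger is unavailable.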
  • 49.
  • 50. Bug tracking tools
 Tool | Vendor | Description
 AceProject | Websystems | Bug tracking software designed for project managers and developers.
 AdminiTrack | AdminiTrack | Hosted issue and bug tracking application.
 ADT Web | Borderwave | Designed for small, medium and large software companies to simplify their defect, suggestion and feature request tracking. It allows tracking of defects, feature requests and suggestions by version, customer, etc.
 Agility | AgileEdge | Agility features an easy-to-use web-based interface. It includes fully customizable field lists, a workflow engine, and email notifications.
 Bug/Defect Tracking Expert | Applied Innovation Management | Web-based bug tracking software.
 BugAware | BugAware | Installed and ASP hosted service available. Email alert notification, knowledge base, dynamic reporting, team management, user discussion threads, file attachment, searching.
 bugcentral | bugcentral.com | Web based defect tracking service.
 BUGtrack | SkyeyTech, Inc. | Web based defect tracking system.
 BugHost | Active-X.COM | Ideal for small- to medium-sized companies who want a secure, web-based issue and bug management system. There is no software to install and it can be accessed from any Internet connection. Designed from the ground up, the system is easy to use, extremely powerful, and customizable to meet your needs.
 BugImpact | Avna Int. | Unlimited projects and entries/bugs/issues. Web access: users access their BugImpact service through a standard web browser. Workflow configurations: BugImpact installs with a default workflow configuration that can easily be changed or replaced entirely. File attachment: a details thread may contain attachments, such as screenshots, Excel spreadsheets, internal documents or any binary files. E-mail notification: the system sends e-mail notification to users when new bugs are assigned or status changes. Builds: a project may have a specific fix-for version with an optional deadline. Priority colorize: custom colors may be associated with different priorities.
 BugStation | Bugopolis | Designed to make Bugzilla easier and more secure. A centralized system for entering, assigning and tracking defects. Configurable and customizable.
 Bug Tracker Software | Bug Tracker Software | Web based defect tracking and data sharing.
 Bug Tracking | Bug-Track.com | Offers email notification, file attachment, tracking history, bilingual pages, 128-bit encryption connection and advanced customization.
 Bugvisor | softwarequality, Inc. | Enterprise solution for capturing, managing and communicating feature requests, bug reports, changes and project issues from emergence to resolution, with a fully customizable and controllable workflow.
 Bugzero | WEBsina | Web-based, easy-to-install, cross-platform defect tracking system.
 Bugzilla | Bugzilla.org | Highly configurable open source defect tracking system developed originally for the Mozilla project.
 Census BugTrack | MetaQuest | Includes VSS integration, notifications, workflow, reporting and change history.
 DefectTracker | Pragmatic Software | Subscription-based bug/problem tracking solution.
 Defectr | Defectr | Defect tracking and project management tool developed using IBM Lotus Domino and the Dojo Ajax framework.
 Dragonfly | Vermont Software Testing Group | Web-based, cross-browser, cross-platform issue tracking and change management for software development, testing, debugging, and documentation.
 ExDesk | ExDesk | Bug and issue tracking software, remotely hosted; allows tracking software bugs and routing them to multiple developers or development groups for repair, with reporting and automatic notification.
 FogBUGZ | Fog Creek S/W | Web-based defect tracking.
 Fast BugTrack | AlceaTech | Web-based bug tracking.
 Footprints | Unipress | Web-based issue tracking and project management tool.
 IssueTrak | Help Desk Software Central | Offers issue tracking, customer relationship and project management functions.
 JIRA | Atlassian | J2EE-based issue tracking and project management application.
 Jitterbug | Samba | Freeware defect tracking.
 JTrac | | Generic issue-tracking web application that can be easily customized by adding custom fields and drop-downs. Features include customizable workflow, field-level permissions, e-mail integration, file attachments and a detailed history view.
 Mantis | | Lightweight and simple bug tracking system. Easily modifiable, customizable, and upgradeable. Open source.
 MyBugReport | Bug Tracker | Allows the different participants working on the development of a software or multimedia application to detect new bugs, to ensure their follow-up, to give them a priority and to assign them within the team.
 Ozibug | Tortuga Technologies | Written in Java, it utilizes servlet technology and offers features such as reports, file attachments, role-based access, audit trails, email notifications, full internationalization, and a customizable appearance.
 Perfect Tracker | Avensoft | Web-based defect tracking.
 ProblemTracker | NetResults | Web-based collaboration software for issue tracking; automated support; and workflow, process, and change management.
Software Testing Page 50
  • 54. ProjectLocker ProjectLocker Hosted source control (CVS/Subversion), web-based issue tracking, and web-based document management solutions. PR Tracker Softwise Company Records problem reports in a network and web-based database that supports access by multiple users. It include classification, assignment, sorting, searching, reporting, access control, & more. QEngine AdventNet Offers the facility of tracking and managing bugs, issues, improvements, and features. It provides role based access control, attachment handling, schedule management, automatic e-mail notification, workflow, resolution, worklogs, attaching screenshots, easy reporting, and extensive customization. SpeeDEV SpeeDEV A complete visual design of a multi level rol based process can be defined for different types of issues with conditional branching and automated task generation. Squish Information Web based issue tracking Management Systems, Inc. Task Complete Smart Design Te TaskComplete enables a team to organize and track software defects using with integrated calendar, discussion, and document management capabilities. Can easily be customized to meet the needs of any software development team. teamatic Teamatic Defect tracking system TrackStudio TrackStudio Supports workflow, multi-level security, rule-based email notification,Software Testing Page 54
  • 55. email submission, subscribe-able filters, reports. Has skin-based user interface. Supports ORACLE, DB2, MS SQL, Firebird, PostgreSQL, Hypersonic SQL . VisionProject Visionera AB Designed to make projects more efficient and profitable. Woodpecker IT AVS GmbH It is for performing request, version or bug management. Its main function is recording and tracking issues, within a freely defined workflow. yKAP DCom Solutions Uses XML to deliver a powerful, cost effective, Web based Bug/Defect tracking, Issue Management and Messaging product. , yKAP features include support for unlimited projects, test environments, attachments, exporting data into PDF/RTF/XLS/HTML/Text formats, rule-based email alerts, exhaustive search options, saving searches (public/ private), Auto- complete for user names, extensive reports, history, custom report styles, exhaustive data/trends analysis, printing, role-based security. yKAP allows the user to add custom values for system parameters such as Status, Defect cause, Defect type, priority, etc. yKAP is installed with complete help documentation. Tools Vendor Description assyst Axios Systems Offers a unique lifecycle approach to IT Service Management through the integration of all ITIL processes in a single application.BridgeTrak Kemma Software Record and track development or customers issues, assign issues to development teams, create software release notes and more.Software Testing Page 55
  • 56. BugRat Giant Java Tree It provides a defect reporting and tracking system. Bug reporting by the Web and email. BugSentry IT Collaborate Automatically and securely reports errors in .NET and COM applications. BugSentry provides a .NET dll (COM interop version available too) that developers ship with their products. Bug Trail Osmosys This easy to use tool allows to attach screenshots, automatically capture system parameters and create well formatted MS-WORD and HTML output reports. Customizable defect status flow allows small to large organizations configure as per their existing structure. BugZap Cybernetic For small or medium-size projects, which is easy to install, small and requires no Intelligence server-side installation. GmbHDefect Agent Inborne Software Defect tracking, enhancement suggestion tracking, and development team workflow management software. Defect Tiera Software Manages defects and enhancements through the complete entire life cycle of Manager product development through field deployment Fast Alcea Bug Tracking / Defect Tracking / Issue Tracking - Change Management Software BugTrack (work flow/process flow) GNATS GNU Freeware defect tracking software. Intercept Elsinore Bug tracking system designed to integrate with Visual SourceSafe and the rest of Technologies your Microsoft development environmentSoftware Testing Page 56
  • 57. IssueView IssueView SQL server based bug tracking with Outlook style user interface. JIRA Atlassian Browser-based J2EE defect tracking and issue management software. Supports any platform that runs Java 1.3.x. QAW B.I.C Quality Developed to assist all quality assurance measurements within ICT-projects. The basic of QAW is a structured way of registration and tracking issues (defects). QuickBugs Excel Software Tool for reporting, tracking and managing bugs, issues, changes and new features involved in product development. Key attributes include extreme ease-of-use and flexibility, a shared XML repository accessible to multiple users, multiple projects with assigned responsibilities, configurable access and privileges for users on each project. Virtually everything in QuickBugs is configurable to the organization and specific user needs including data collection fields, workflow, views, queries, reports, security and access control. Highly targeted email messages notify people when specific events require their attention. Support Acentre Web enabled defect tracking application, one of the modules of the Tracker Suite Tracker software package. Support Tracker is based on Lotus Notes, allowing customers to leverage their existing Notes infrastructure for this bug tracking solution. Because Tracker Suite is server-based, Support Tracker installs with zero-impact on the desktop. User can create, track, and manage requests through Notes or over the Web. Requests are assigned, routed, and escalated automatically ts via Service Level Agreements, for proper prioritization and resource allocation. Support Tracker also features FAQ and Knowledgebase functionality.SWBTracker software with Bug tracking system brainsSoftware Testing Page 57
  • 58. TestTrack Pro Seapine Software Delivers time-saving features that keep everyone, involved with the project, informed and on schedule. TestTrack Pro is a scalable solution with Windows and Web clients and server support for Windows, Linux, Solaris, and Mac OS X, integration with MS Visual Studio (including .NET) and interfaces with most major source code managers including Surround SCM, and automated software testing tool, QA Wizard, along with other Seapine tools. Download a free Eval. Track Soffront Defect tracking system ZeroDefect ProStyle Issue managementBug reportSoftware Testing Page 58
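Whatever the vendor, the tools cataloged above converge on the same core data model: a bug report is a record with a summary, priority, and assignee, plus a status that moves through a (usually customizable) workflow, with a change history kept for reporting. A minimal sketch of that model in Python is shown below; the field names and the particular transition set are illustrative assumptions, not the schema of any specific tool.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional


class Status(Enum):
    NEW = "new"
    ASSIGNED = "assigned"
    RESOLVED = "resolved"
    REOPENED = "reopened"
    CLOSED = "closed"


# Allowed workflow transitions. Tools such as BugImpact or Bugvisor let
# administrators replace a default table like this with their own.
TRANSITIONS = {
    Status.NEW: {Status.ASSIGNED},
    Status.ASSIGNED: {Status.RESOLVED},
    Status.RESOLVED: {Status.CLOSED, Status.REOPENED},
    Status.REOPENED: {Status.ASSIGNED},
    Status.CLOSED: set(),
}


@dataclass
class BugReport:
    summary: str
    priority: str = "medium"
    assignee: Optional[str] = None
    status: Status = Status.NEW
    history: list = field(default_factory=list)  # audit trail of transitions

    def move_to(self, new_status: Status) -> None:
        """Advance the bug through the workflow, rejecting illegal jumps."""
        if new_status not in TRANSITIONS[self.status]:
            raise ValueError(f"illegal transition {self.status} -> {new_status}")
        self.history.append((self.status, new_status))
        self.status = new_status
        # A real tool would also fire an e-mail notification here.


bug = BugReport(summary="Crash on empty input", priority="high")
bug.assignee = "dev1"
bug.move_to(Status.ASSIGNED)
bug.move_to(Status.RESOLVED)
bug.move_to(Status.CLOSED)
```

Features like rule-based e-mail alerts, subscribable filters, and trend reports are then built on top of exactly this kind of record and its transition history.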