Testing concepts prp_ver_1[1].0
 

  • Some of you are from an IT background and some of you are from a non-IT background; however, there is no difference in the work that you will be doing once you are assigned to a project.
  • What is a software product? A software product is made up of software packages that are separately installable, e.g. Word or an online sales application. What are the activities involved when you think about developing a product? Marketing survey: a survey is done by marketing people in the market for various products to benchmark the product; an MRS (Marketing Requirement Survey) is created at the end of the survey. Software requirement analysis: this is also known as the feasibility study. In this phase, the development team visits the customer and studies their system. For a few projects, technical feasibility is a significant concern: is it technically feasible to build a Star Wars missile defense system? Is it technically feasible to build a natural-language English-to-French translator? For most projects, however, feasibility depends on non-technical issues: are the project’s cost and schedule assumptions realistic? Does the project have an effective executive sponsor? Does the company have a business case for the software when the real cost, rather than the initial blue-sky, wishful-thinking cost, is considered? Example: contractors A and B built a flyover at Domlur but did not exchange or discuss their blueprints, hence there is a gap of around 6 metres. By the end of the feasibility study, the team furnishes a document that holds the specific recommendations for the candidate system; it also includes the personnel assignments, costs, project schedule, and target dates. The requirements gathering process is then intensified and focused specifically on software. To understand the nature of the program(s) to be built, the system engineer ("analyst") must understand the information domain for the software, as well as the required function, behavior, performance and interfacing. The essential purpose of this phase is to find the need and to define the problem that needs to be solved. System analysis and design: in this phase the software's overall structure is defined. In terms of client/server technology, the number of tiers needed for the package architecture, the database design, the data structure design, etc. are all defined in this phase. A software development model is created. Analysis and design are crucial in the whole development cycle; any gap in the design phase can be very expensive to fix at a later stage of software development, so much care is taken during this phase. The logical system of the product is developed in this phase.
  • Code generation: the design must be translated into a machine-readable form, and the code generation step performs this task. If the design is performed in a detailed manner, code generation can be accomplished without much complication. Programming tools such as compilers, interpreters, and debuggers are used to generate the code. Different high-level programming languages such as C, C++, Pascal, and Java are used for coding; the right programming language is chosen with respect to the type of application. Formal reviews are conducted after every 500 lines of code, e.g. code inspection and code walkthrough. What happens if we send this software directly to production? What issues might we face? If we send the software directly to production without thorough testing, the system may fail at the initial stage itself, and even if it does not, there may be some functionality that was never thought of and tested. For example, in an online shopping system products are added to a cart, and one page can hold descriptions of only ten items before a next page appears for additional products; if every test used a cart with fewer than ten products, the case of more than ten products in a cart was skipped, and a failure there leaves a bad impression on the customer. Will the customer want to visit the site again? Of course not. What else does a customer expect from an application other than perfect functionality? Look and feel: if the application is dull and not well presented, the customer will not want to visit the site again. User-friendliness: if the application is not user friendly, uses some language other than the common language understood by its users, is not easy to browse, or searching is not convenient, then even though the functionality is perfect, the application will be a failure because the end user is not satisfied. Testing: once the code is generated, software program testing begins. Different testing methodologies are available to unravel the bugs that were committed during the previous phases; different testing tools and methodologies are already available, and some companies build their own testing tools that are tailor-made for their own development operations. Maintenance: software will definitely undergo change once it is delivered to the customer. There are many reasons for change; change could happen because of unexpected input values into the system, and changes in the system could directly affect the software operations. The software should be developed to accommodate changes that could happen during the post-implementation period. (A small boundary-value test for the shopping-cart example is sketched below.)
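A minimal sketch of the boundary-value test that was missed in the shopping-cart example above. The `CartPager` class and its ten-items-per-page rule are hypothetical, written only to illustrate testing at and just beyond a boundary:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical helper: how many pages are needed to show a cart,
// assuming each page can display at most ten items.
class CartPager {
    static final int ITEMS_PER_PAGE = 10;

    static int pagesNeeded(int itemCount) {
        if (itemCount <= 0) {
            return 1;                                               // an empty cart still shows one page
        }
        return (itemCount + ITEMS_PER_PAGE - 1) / ITEMS_PER_PAGE;   // ceiling division
    }
}

public class CartPagerBoundaryTest {

    @Test
    public void belowTheBoundary_nineItemsFitOnOnePage() {
        assertEquals(1, CartPager.pagesNeeded(9));
    }

    @Test
    public void onTheBoundary_tenItemsStillFitOnOnePage() {
        assertEquals(1, CartPager.pagesNeeded(10));
    }

    @Test
    public void justPastTheBoundary_eleventhItemForcesASecondPage() {
        // This is the case that was never exercised in the example above.
        assertEquals(2, CartPager.pagesNeeded(11));
    }
}
```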
  • Requirement analysis: as we were saying, the feasibility study also includes working in a team and working cohesively with other people engaged in similar or other activities. E.g. the flyover being constructed near Domlur airport: the contract was given to two contractors, and they started constructing the flyover without exchanging their blueprints; the result was that once they finished their parts of the flyover there was a gap of almost 6 metres in height. Had they exchanged their blueprints they would have completed it much earlier and without any disruption, disturbance, or loss of money and effort. Now they have to do some workaround, as it has to be completed somehow. Why do these situations occur? Mostly due to lack of proper testing, and overconfidence. Even software developed by the best developers may fail. It is not only the system being developed that is tested; the systems with which our product is interfaced are also tested for interfacing, not just the product on its own.
  • As per IEEE standards, the SDLC broadly consists of three phases: Definition, Development, and Maintenance.
  • As we have said, we cannot launch any product successfully without testing. Even a product designed by the very best developers needs to be tested and verified under various circumstances. The U.S. space shuttle Columbia, with a seven-member crew, disintegrated in flames over central Texas shortly before it was scheduled to land at Cape Canaveral in Florida. On February 1, 2003, at an altitude of 63 kilometres and a velocity of 20,000 km/h, Columbia disintegrated, killing seven astronauts. Columbia was designed and developed by some of the most intelligent people employed at NASA, and the product was tested very well before its launch; still it failed, leading to disaster and taking the lives of seven of the best astronauts, including Kalpana Chawla from India. The product failed not because it was not tested well, but it could have been tested more to avoid this disaster. Hence testing is vital in all applications and products, and it has to be more aggressive where lives are involved, e.g. medical equipment, space shuttles, etc.
  • What is software testing? Software testing is a process used to help identify the correctness, completeness and quality of developed computer software. With that in mind, testing can never completely establish the correctness of computer software; only the process of formal verification can prove that there are no defects. What is the expected behavior of the system? Testing is to check whether there is any difference between actual output and expected output. It is to put software under various tests to check that it satisfies the requirements of the end user or client, and hence to detect errors if it fails to satisfy a set of requirements.
  • Testing is to establish confidence that a system does what it is supposed to do, and that even if it “misbehaves” it does not crash but fails gracefully. It is to confirm that the system performs its intended functions correctly. Moreover, testing does not guarantee a bug-free product, and good testing cannot be a substitute for good programming. For example, suppose I do very poor programming, assuming that if there are any bugs my testers will find them and I will fix them later: this approach takes more time plus more effort in debugging that many bugs, and overall the cost will be much higher than it would have been if the code had been developed effectively. Testing is not a debugging or prevention process; it only ensures that within an application a certain number of bugs have been found and detected, and the debugging and fixing of those bugs is done by developers. Testing improves quality and identifies risks as well.
  • As we were discussing the need for software testing, we test our software to: Detect programming errors: even with the best developers developing the product there may still be flaws somewhere, due to misconceptions or due to a lack of understanding when integrating the modules of two great developers. Most of the time developers think that their code is perfect and cannot have bugs, and that is where the problem is: they think that finding bugs in their code means that someone is proving them inefficient. Hence we test code not to prove a developer inefficient but to find the bug in the code, not in the developer. Testing at an earlier stage is much cheaper than testing at a later stage. If the client finds bugs that were not uncovered earlier, we lose credibility and may lose business as well. There are several functionality-related defects that occur only in real-life scenarios, so by testing at the customer site during beta testing we can uncover these defects there and then. Releasing a bug-free product seems to be a dream yet to be fulfilled; hence it is a real challenge to release a product that is bug-free.
  • Testing is defined in a number of ways by various veterans. Of course, none of these definitions claims that testing shows that software is free from defects. Testing can show the presence, but not the absence of problems.
  • What are the criteria for a successful product? A product is successful only when it adds value to the customer's business: only if it increases revenue, credibility in the market, and so on. To develop a product we set a target time for completion and release to the customer; if we can stick to that time without increasing the budget and scope of the product, then the product is successful. The true business goals should be met using the developed product; otherwise the developed product can be a failure, leading to financial loss, loss of customer satisfaction, and loss of further business from that client as well as its associates. The look and feel should be accepted by the user who will finally be using the product.
  • Product Success Criteria (PSC) are defined so as to address the following. These criteria, once arrived at in the initial stage of the project, shall be the basis for strategy and planning. Functionality: the business perceives that the system satisfactorily addresses the true business goals; the system is functionally compliant; the system was delivered on time and within budget and scope. Usability: the user perceives the system as adding value to his or her job and as easy to learn and use. Likeability: the end user feels that the look, feel, and navigation are easy. Configurability: all changes to project scope, schedule, and budget were handled under a well-defined and visible change control process. Maintainability: the operations staff is prepared to maintain and support the delivered system. Interoperability: the organizational interoperability groups perceive that the solution is consistent with interoperability and reusability standards and goals.
  • Testability and Product Success Criteria: testability is the extent to which the software product under consideration can be evaluated, determined by the following. Operability: the better it works, the more efficiently it can be tested. Controllability: the better we can control it, the more the software testing can be automated and optimized. Observability: what you see is what you test. Simplicity: the less there is to test, the more quickly we can test. Understandability: the more information we have, the smarter we will test. Suitability: the more we know about the intended use of the system, the better we can organize our software testing to find important bugs. Stability: the fewer the changes, the fewer the disruptions to software testing. Accessibility: the web site shall be accessible when incorporating frames, JavaScript, Cascading Style Sheets, Dynamic HTML, and multimedia technology such as QuickTime or Flash. Navigability: the site's architecture is firmly represented by the visually designed interface for ease of motion within the site. Editorial continuity: ensure that users can easily move from page to page in a multi-page section within the web site and still retain a sense of location. Scalability: a modular interface design structure to support major edits and visual design overhauls. Context sensitivity: the interface reinforces the content of the web site and enhances the overall brand presentation strategy. Structural continuity: ensuring that users are able to understand and utilize every component of the navigational system in order to move throughout the web site. Software testing shall be aligned with the software project and shall meet the product success criteria.
  • Correctness: to claim that software is correct, we have to assure that the data entered, processed, and output is accurate and complete. We can achieve this through controlling data elements across the entire transaction. This requires: well-defined functional specifications; design conformance with user requirements; program conformance to design specifications; testing to ensure requirements are properly implemented; availability of the right programs and data in the released system; and proper management of change requests. Reliability: this ensures that the system performs its intended function with the required precision over an extended period of time. This shall ensure: establishing the system in the operational environment to meet the expected level of accuracy and completeness; designing and implementing process tolerance through data integrity controls; performing manual, regression, and functional tests to ensure proper functioning of the data integrity controls; verification of the installation for system accuracy and completeness; and maintaining the accuracy of system requirements as the system is updated. Ease of use: this addresses the effort required to learn, operate, prepare input for, and interpret output from the system. This test factor deals with how quickly and easily the people interfacing with the application system will learn to use it. The points addressed here are: defining the usability specifications for the application system; designing to optimize the usability-related requirements; conformance of programs to the design in order to optimize ease of use; testing to ensure that the application is easy to use; proper presentation and communication of usability instructions to the appropriate individuals; and preserving usability as and when the system is maintained. File integrity: this ensures appropriate storage and maintenance of data. This requires: defined file integrity requirements; controls in the design that ensure the integrity of the file; proper implementation of the specified file integrity controls; proper testing to ensure correct functioning of the file integrity controls; verification of file integrity to ensure delivery of the right files prior to system release; and preserving the integrity of the files during the maintenance phase. Maintainability: this factor addresses the effort required to locate and fix an error in an operational system so that the system operates smoothly. This addresses: specification of the desired level of maintainability of the system; development of the design and program to achieve the desired level of maintainability; inspection of the system to ensure that it is maintainable; verifying the system documentation for completeness; and ensuring maintainability is preserved as the system is updated.
  • The development process must be iterative; let’s see how that works. The traditional development process involves a long building phase after you have specified the application. After the app is built, you move to a testing phase where you find bugs and fix them. Many people view testing as a “washing machine” that you clean your code with at the end of the development phase.
  • As we have already discussed the software development V-model, let us compare it with the V-model of the software testing life cycle.
  • The responsibility for testing: unit testing is the responsibility of the development team; system testing is the responsibility of SQA; user acceptance testing is the responsibility of the user representatives team; technology compliance testing is the responsibility of the Systems Installation & Support Group.
  • During acceptance, let us see what we mean by hot-fixes. In the acceptance phase the product is installed and executed in the user's environment. There may be failures of the software due to a configuration-file mismatch or some other interface. When Bill Gates was giving a live demo of a new version of Microsoft Windows, an unwanted popup suddenly appeared while the whole world was watching the demo; changes were made to the configuration files and it worked fine. This is termed a hot-fix (a small change made to the software so that the testing activity can continue).
  • The role of a test engineer is to:
  • Module interfaces are tested to ensure that information properly flows into and out of the program unit under test. The local data structure is examined to ensure that data stored temporarily maintains its integrity during all steps in an algorithm’s execution. Boundary conditions are exercised to ensure that the module operates properly at the boundaries established to limit or restrict processing. All independent paths (basis paths) through the control structure are exercised to ensure that all statements in a module have been executed at least once. All error handling paths are tested.
  • Step 1. The unit test criteria are a narrative describing what will be done to verify that the program code operates properly and according to specifications. The unit test plan is a document that defines the process for testing the code, including descriptions and procedures for all tests and evaluations. These items and the test specifications are prepared before the program or module is coded. Step 2. A code walkthrough is done to review the requirements, design, and coding before testing, to ensure compliance with the IS standards. Step 3. Unit test data are specific values or transactions that are created for the purpose of testing the code. Test data should be meticulously prepared so that a full range of data values and combinations are considered, and all instruction paths within the module are executed. The unit test report records the results and findings of the test and provides the basis for determining whether the unit is ready for system testing. (A sketch of data-driven test cases follows.)
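One common way to run a module against "a full range of data values and combinations" is a data-driven (parameterized) unit test, where the same test body is executed once per row of prepared test data. The `DiscountCalculator` module and its discount thresholds below are hypothetical, invented only to show the pattern with JUnit 4's `Parameterized` runner:

```java
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;
import java.util.Arrays;
import java.util.Collection;
import static org.junit.Assert.assertEquals;

// Hypothetical module under test: a simple quantity-discount rule
// (no discount below 10 units, 5% from 10 to 99, 10% from 100 up).
class DiscountCalculator {
    static double discountFor(int quantity) {
        if (quantity >= 100) return 0.10;
        if (quantity >= 10)  return 0.05;
        return 0.0;
    }
}

@RunWith(Parameterized.class)
public class DiscountCalculatorDataDrivenTest {

    @Parameters(name = "quantity={0} -> discount={1}")
    public static Collection<Object[]> testData() {
        // Prepared unit test data: a range of values including the boundaries.
        return Arrays.asList(new Object[][] {
            { 0,   0.0  },
            { 9,   0.0  },   // just below the first boundary
            { 10,  0.05 },   // on the boundary
            { 99,  0.05 },   // just below the second boundary
            { 100, 0.10 },   // on the boundary
            { 500, 0.10 },
        });
    }

    private final int quantity;
    private final double expectedDiscount;

    public DiscountCalculatorDataDrivenTest(int quantity, double expectedDiscount) {
        this.quantity = quantity;
        this.expectedDiscount = expectedDiscount;
    }

    @Test
    public void discountMatchesExpectedResult() {
        assertEquals(expectedDiscount, DiscountCalculator.discountFor(quantity), 1e-9);
    }
}
```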
  • Select test data to exercise each aspect of functionality identified in the program’s requirements specification. Functional testing of a word processor calls for using each required feature of the program at least once; such features include editing, formatting, search, etc.
  • Tests the module against functional and non-functional specifications. The specification is used to derive test cases; do not look at the module code (except to debug). Attempt to force behavior that does not match the specification. Problem: how to select inputs that have a high probability of causing an error. (One common way of selecting such inputs is sketched below.)
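One common answer to that input-selection problem is equivalence partitioning combined with boundary values: divide the input domain into classes that the specification treats the same way, then test one representative of each class plus the values at the class edges. The `AgeValidator` module and its 18–65 rule below are hypothetical, invented only to illustrate the idea:

```java
import org.junit.Test;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

// Hypothetical specification: ages from 18 to 65 inclusive are eligible.
class AgeValidator {
    static boolean isEligible(int age) {
        return age >= 18 && age <= 65;
    }
}

public class AgeValidatorPartitionTest {

    // Partition 1: below the valid range (boundary + representative).
    @Test public void seventeenIsRejected()   { assertFalse(AgeValidator.isEligible(17)); }
    @Test public void negativeAgeIsRejected() { assertFalse(AgeValidator.isEligible(-1)); }

    // Partition 2: inside the valid range (both boundaries + one interior value).
    @Test public void eighteenIsAccepted()    { assertTrue(AgeValidator.isEligible(18)); }
    @Test public void fortyIsAccepted()       { assertTrue(AgeValidator.isEligible(40)); }
    @Test public void sixtyFiveIsAccepted()   { assertTrue(AgeValidator.isEligible(65)); }

    // Partition 3: above the valid range (boundary + representative).
    @Test public void sixtySixIsRejected()    { assertFalse(AgeValidator.isEligible(66)); }
}
```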
  • Regression testing aims to assure that the software continues to perform according to specification after it has been modified.
  • Definitions: ‘Regression testing is the process of testing changes to computer programs to make sure that the older programming still works with the new changes.’ ‘A technique for ensuring that all aspects of a software system operate as required after enhancement or repair.’
  • Manual testing: here, the developer/quality assurance team manually tests each and every path that the software code can take and then compares the results by ‘eyeballing’. This approach takes a lot of time and a lot of manpower. It is successful most of the time, but not efficient enough; hence the need for automating the whole process.
  • Why? If no errors are found during testing, the project team did not test the system sufficiently. Exhaustive testing is impractical, so the project team must design and plan a testing strategy that uses a balance of testing techniques to cover a representative sample of the system. The test planning process is a critical step in the testing process: without a documented test plan, the test itself cannot be verified, coverage cannot be analyzed, and the test is not repeatable. Repeatable: once the tests are documented, any member of the test team should be able to execute them; if the tests must be executed multiple times, the plan ensures that all of the critical elements are tested correctly, and parts of or the entire plan can be executed for any necessary regression testing. Controllable: knowing what test data is required and what the expected results are. Coverage: based on the risks and priorities associated with the parts of the system, the test plan is designed to ensure that adequate test coverage is built into the test; the plan can be reviewed by the project team to ensure that all are in agreement that the correct amount and types of tests are planned.
  • When? Test planning should begin at the same time requirements definition starts. The plan can then be detailed in parallel with the application requirements.
  • Test plan identifier: shall contain full identification of the system and the software to which this document applies, including, as applicable, identification number(s), title(s), abbreviation(s), version number(s) and release number(s). Introduction: 1. System overview: shall briefly state the purpose and nature of the system and the software to which this document applies, summarize the history of system development, operation, and maintenance, identify the project sponsor, acquirer, user, developer, and support agencies and the current and planned operating sites, and list other relevant documents. 2. Document overview: shall summarize the purpose and contents of this document and shall describe any security or privacy considerations associated with its use. 3. Relationship to other plans: shall describe the relationship, if any, of the Software Test Plan to related project management plans. Test items/integrated components: shall identify a unit, subsystem, system or other entity by name and project-unique identifier, and shall be divided into the following subparagraphs to describe the testing planned for the item(s). 1. Project-unique identifier of a test: shall identify a test by project-unique identifier and shall provide the information specified below for the test: 1. test objective; 2. test level; 3. test type or class; 4. qualification method(s) as specified in the requirements specification; 5. identifier of the system requirement and, if applicable, software system requirements addressed by this test; 6. special requirements (for example, 48 hours of continuous facility time, weapon simulation, extent of test, use of a special input or database); 7. type of data to be recorded; 8. type of data recording/reduction/analysis to be employed; 9. assumptions and constraints, such as anticipated limitations on the test due to system or test conditions (timings, interfaces, equipment, personnel, database, etc.); 10. safety, security and privacy considerations associated with the test. Features to be tested: lists all the features of the application that need to be tested. Features not to be tested: lists all the features of the application that are not to be tested.
  • Focus is on individual tests and small groups of related tests.
  • Bug triage meetings (sometimes called bug councils) are project meetings in which open bugs are divided into categories. The most important distinction is between bugs that will not be fixed in this release and those that will be. Triaging a bug involves: making sure the bug has enough information for the developers and makes sense; making sure the bug is filed in the correct place; making sure the bug has sensible "Severity" and "Priority" fields. Let us see what priority and severity mean. Priority is business; severity is technical. In triage, the team assigns the priority of the fix based on the business perspective; they ask, “How important is it to the business that we fix this bug?” Most of the time a high-severity bug becomes a high-priority bug, but not always: there are cases where high-severity bugs get low priority and low-severity bugs get high priority. In most of the projects I worked on, if the schedule drew close to the release, then even if the bug's severity was high from a technical perspective, the priority was set low because the functionality mentioned in the bug was not critical to the business. Priority and severity give excellent metrics for identifying the overall health of the project. Severity is customer-focused while priority is business-focused. Assigning severity to a bug is straightforward: using some general guidelines about the project, testers assign the severity. Assigning a priority is much more of a juggling act: the severity of the bug is one factor, and other considerations are how much time is left in the schedule, possibly who is available for the fix, how important it is to the business to fix the bug, what the impact of the bug is, the probability of occurrence, and the degree of side effects. (A minimal sketch of a bug record with separate severity and priority fields follows.)
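A minimal sketch of how a bug-tracking record can keep severity (technical) and priority (business) as separate fields, so a triage meeting can lower the priority of a technically severe bug without losing the severity information. All names and values here are hypothetical and only illustrate the distinction described above:

```java
// Technical impact, usually assigned by the tester who reports the bug.
enum Severity { CRITICAL, MAJOR, MINOR, TRIVIAL }

// Business urgency, usually decided (and revised) in the triage meeting.
enum Priority { P1_FIX_NOW, P2_FIX_BEFORE_RELEASE, P3_FIX_IF_TIME, P4_DEFER }

class Bug {
    final String id;
    final String summary;
    final Severity severity;   // set when the bug is reported
    Priority priority;         // set and re-set during triage

    Bug(String id, String summary, Severity severity) {
        this.id = id;
        this.summary = summary;
        this.severity = severity;
        this.priority = Priority.P3_FIX_IF_TIME;   // default until triaged
    }
}

public class TriageExample {
    public static void main(String[] args) {
        // Crash in a rarely used admin report: technically severe...
        Bug crashInAdminReport =
                new Bug("BUG-101", "Crash when exporting the yearly admin report", Severity.CRITICAL);
        // ...but deferred by triage because the feature is not critical to the business this close to release.
        crashInAdminReport.priority = Priority.P4_DEFER;

        // Misspelled company name on the home page: technically trivial...
        Bug typoOnHomePage =
                new Bug("BUG-102", "Company name misspelled on the home page", Severity.TRIVIAL);
        // ...but embarrassing to the business, so it gets top priority.
        typoOnHomePage.priority = Priority.P1_FIX_NOW;

        System.out.println(crashInAdminReport.id + ": severity=" + crashInAdminReport.severity
                + ", priority=" + crashInAdminReport.priority);
        System.out.println(typoOnHomePage.id + ": severity=" + typoOnHomePage.severity
                + ", priority=" + typoOnHomePage.priority);
    }
}
```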

Testing concepts prp_ver_1[1].0: Presentation Transcript

  • Software Testing Concept and Methodologies
  • Fundamentals of Software Testing
    • Is there any difference when you work on the same assignment? IT vs. non-IT?
  • Software/Application??
    • Group of programs designed for end user using operating system and system utilities.
    • A self contained program that performs well- defined set of tasks under user control.
    • Programs, procedures, rules, and any associated documentation pertaining to the operation of a system.
  • Evolution of software product
    • Marketing
      • A survey is done by marketing people for various products to benchmark the products.
      • Create MRS (Marketing Requirement Survey)
    • Requirement analysis
      • Feasibility study (social, economical etc)
      • investigate the need for possible software automation in the given system.
      • Domain expert creates URS (User Requirement Specification)
    • Design
      • software's overall structure is defined.
      • Software architecture, interdependence of modules, interfaces, database etc is defined.
      • System analyst creates SRS (high-level design, low-level design, etc.)
  • Evolution of software product( Cont.)
    • Code Generation
      • The design must be translated into a machine-readable form, taking the SRS as input.
      • Done by Team of developers.
      • Reviews after every 500 lines of code
        • Code Inspection
        • Code Walkthrough
    • Testing
      • New/patched build is tested by Test engineers for stability of the application.
    • Maintenance
      • Software is maintained due to changes (unexpected values into the system)
  • Software development life cycle (diagram) covering: Requirement Analysis, High-level design, Detailed Specifications, Coding, Unit Testing, Integration Testing, Review/Test, Operational Testing, Ongoing Support
  • System
    • An inter-related set of components, with identifiable boundaries, working together for some purpose.
    (diagram: Input → Process → Output)
  • Analysis
    • The process of identifying requirements, current problems, constraints, opportunities for improvement, timelines, and resource costs.
  • Design
    • The business of finding a way to meet the functional requirements within the specified constraints using the available technology
  • Software Development life cycle: phases or stages of a project from inception through completion and delivery of the final product… and maintenance too!
  • Software Development life cycle Three Identifiable Phases: 1. Definition 2. Development 3. Maintenance
  • Definition Phase
    • Focuses on WHAT
      • What information to be processed?
      • What functions and performances are desired?
      • What interfaces are to be established?
      • What design constraints exists?
      • What validation criteria are required to define a successful system?
  • Development Phase
    • Focuses on
      • How the database should be designed ?
      • How the software architecture to be designed ?
      • How the design will be translated in to a code ?
      • How testing will be performed ?
    • Three specific steps in Development Phase are:-
      • a. Design
      • b. Coding
      • c. Testing (often ignored due to lack of time, time-to-market pressure, additional cost involved, lack of understanding of testing requirements, etc.)
  • Maintenance Phase
    • Maintenance phase focuses on CHANGE that is associated with
    • Error correction
    • Adaptation required as the software environment evolves
    • Enhancements brought about by changing customer requirements
    • Reengineering carried out for performance improvements
    “ Maintainability is defined as the ease with which software can be understood, corrected, adapted and enhanced”
    • Identify Problems/Objectives
    • Determine information Requirements
    • Analyze System needs
    • Design the recommended system
    • Develop and Document software
    • Testing the System
    • Implementation and maintaining the system
    SDLC Phases
  • SDLC Phases : Requirement Identification & Analysis Phase
    • Request for Proposal
    • Proposal
    • Negotiation
    • Contract
    • User Requirement Specification
    • Software Requirement Specification
  • Software Requirement Specifications (IEEE 830): a Software Requirement Specification is a means of translating the ideas in the minds of the clients (the input) into a formal document (the output) of the requirements phase. Its role: bridge the communication gap between the client, the user, and the developer.
  • SDLC Phases- Design
    • HLD Document contains items in a macro level
      • List of modules and a brief description of each
      • Brief functionality of each module
      • Interface relationship among modules
      • Dependencies between modules
      • Database tables identified with key elements
      • Overall architecture diagrams along with technology details
    High Level Design
  • SDLC Phases- Design
    • Detailed functional logic of the module, in pseudo code
    • Database tables, with all elements, including their type and size
    • All interface details
    • All dependency issues
    • Error message listing
    • Complete input and output format of a module
    Low Level Design. The HLD and LLD phases put together are called the Design phase.
  • SDLC Phases
    • Code Generation
      • The design must be translated into a machine-readable form, taking the SRS as input.
      • Done by Team of developers.
      • Reviews after every 500 lines of code
        • Code Inspection
        • Code Walkthrough
    • Testing
      • New/patched build is tested by Test engineers for stability of the application.
    • Maintenance
      • Software is maintained due to changes (unexpected values into the system)
    • What is testing?
    • We Test !! We Test !! Why?
    • Testing Defined
    • Is Product Successful
    • Product Success criteria
    • Testability
    • Test factors
    Get Started with Testing !!!!!!!
  • What is Testing?
    • process used to help identify the correctness, completeness and quality of developed computer software.
    • Find out difference between actual and expected behavior.
    • The process of exercising software to verify that it satisfies specified requirements of end user and to detect errors
    • The process of revealing that an artifact fails to satisfy a set of requirements
  • What is Testing ( Cont.) ?
    • Establishing confidence that a system does what it is supposed to do
    • Confirming that a system performs its intended functions correctly
    • Does not guarantee bug free product
    • No substitute for good programming
    • Can’t prevent/debug bugs, only detect
    • Offer advice on product quality and risks.
  • We Test !! We Test !! Why
    • Detect programming errors - programmers, like anyone else, can make mistakes.
    • To catch bugs/defect/errors.
    • To check program against specifications
    • Cost of debugging is higher after release
    • Client/end user should not find bugs
    • Some bugs are easier to find in testing
    • Challenge to release a bug-free product.
    • Verifying Documentation.
    • To get adequate trust and confidence on the product.
    • To meet organizational goals
      • Like meeting requirements, satisfied customers, improved market share, Zero Defects etc
    • Since the software can perform 100000 correct operations per second, it has the same ability to perform 100000 wrong operations per second, if not tested properly.
    • Ensuring that system is ready for use
    • Understanding limits of performance.
    • Learning what a system is not able to do
    • Evaluating capabilities of system
    We Test !! We Test !! Why?
  • Testing defined !!
    • Def-1
      • Process of establishing confidence that a program or system does what it is supposed to.
    • Def-2
      • Process of exercising or evaluating a system or system component by manual or automated means to verify that it satisfies specified requirement (IEEE 83a)
    • Def-3
      • Testing is a process of executing a program with the intent of finding errors (Myers)
    • Def-4
      • Testing is any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results.
  • Is Product successful ???
    • When Client/Customer perceives it as value-added to his business.
    • Timeliness of delivery of the product within budget and scope.
    • The business perceives that the system satisfactorily addresses the true business goals.
    • End user feels that look, feel, and navigation are easy.
    • Team is prepared to support and maintain the delivered product.
  • Product Success Criteria
    • Functionality
    • Usability
    • Likeability
    • Configurability
    • Maintainability
    • Interoperability
  • Testability
    • Operability
    • Controllability
    • Observability
    • Understandability
    • Suitability
    • Stability
    • Accessibility
    • Navigability
    • Editorial Continuity
    • Scalability
    • Context Sensitivity
    • Structural Continuity
  • Test Factors (table)
    • Functionality (exterior quality): Correctness, Reliability, Usability, Integrity
    • Engineering (interior quality): Efficiency, Testability, Documentation, Structure
    • Adaptability (future quality): Flexibility, Reusability, Maintainability
  • Software Testing Life Cycle
  • Conventional Testing Process (diagram): Spec → Design → Build → Test & Fix. Here testing was happening only towards the end of the life cycle.
  • Distribution of Defects in the life cycle (chart; source: IBM/TRW/Mitre). Segments shown: 56%, 27%, 10%, 7%.
  • Software development life cycle (diagram) covering: Requirement Analysis, High-level design, Detailed Specifications, Coding, Unit Testing, Integration Testing, Review/Test, Operational Testing, Ongoing Support
  • STLC V-Model (diagram): Requirement → Requirement Review; Design → Design Review; Code → Code Review; Develop Unit Tests → Review Unit Tests → Execute Unit Tests; Develop Integration Tests → Review Integration Tests → Execute Integration Tests; Develop Acceptance Tests → Review Acceptance Tests; Execute System Tests.
  • STLC:- Activities
    • Scope/Requirement
    • Base line inventory
    • Acceptance criteria
    • Schedule
    • Prioritization
    • Test references
    • Sign off req
    • Plan
    • Approach
    • Process and Tools
    • Methodology
    • Delivery Models
    • Risk Plan
    • Project Overflow
    • Quality Objectives
    • Configuration Plan
    • Design
    • Test Design
    • Specifications
    • Test Scenarios
    • Test Cases
    • Test Data
    • Tool Development
  • STLC:- Activities
    • Execution
    • Implement Stubs
    • Test Data Feeders
    • Batch Processes
    • Execute Testing
    • Collate Test Data
    • Identify Bugs
    • Defect Analysis
    • Check Unexpected Behavior
    • Identify defective application areas
    • Identify erroneous test data
    • Identify defect trends/patterns
  • Test Approach
    • Test Process: whether the project is under development, is incorporating accepted changes, or is under maintenance and implementing changes, it uses the testing process. Based on the nature of the project, adequate testing shall be arrived at, at the project level.
    • The Test Approach
      • sets the scope of system testing
      • the overall strategy to be adopted
      • the activities to be completed
      • the general resources required
      • the methods and processes to be used to test the release.
      • details the activities, dependencies and effort required to conduct the System Test.
  • Test Approach( Cont.)
    • Test approach will be based on the objectives set for testing
    • Test approach will detail the way the testing to be carried out
    • Types of testing to be done viz Unit, Integration and system testing
    • the general resources required
    • The method of testing viz Black–box, White-box etc.,
    • Details of any automated testing to be done
    • Details the activities, dependencies and effort required to conduct the System Test
  • Software Testing Life Cycle- Phases
    • Requirement Analysis
    • Prepare Test Plan
    • Test Case Designing
    • Test Case Execution
    • Bug Reporting, Analysis and Regression testing
    • Inspection and release
    • Client acceptance and support during acceptance
    • Test Summary analysis
  • Requirement Analysis
    • Objective
      • The objective of Requirement Analysis is to ensure software quality by eradicating errors as early as possible in the development process, since errors noticed at the end of the software life cycle are more costly than early ones, and thereby to validate each of the outputs.
    • The objective can be achieved by addressing three basic issues:
        • Correctness
        • Completeness
        • Consistency
  • Type of Requirement
    • Functional
    • Data
    • Look and Feel
    • Usability
    • Performance
    • Operational
    • Maintainability
    • Security
    • Scalability
    • Etc…….
  • Evaluating Requirements
    • What Constitutes a good Requirement?
    • Clear:-
      • Unambiguous terminology
    • Concise:-
      • no unnecessary narrative or non-relevant facts
    • Consistent
      • requirements that are similar are stated in similar terms. Requirements do not conflict with each other.
    • Complete
      • all functionality needed to satisfy the goals of the system is specified to a level of detail sufficient for design to take place.
  • Requirement Analysis
    • Difficulties in conducting requirement analysis
    • Analyst not prepared
    • Customer has no time/interest
    • Incorrect customer personnel involved
    • Insufficient time allotted in project schedule
  • Prepare Test Plan- Activities
    • Scope Analysis of project
    • Document product purpose/definition
    • Prepare product requirement document
    • Develop risk assessment criteria
    • Identify acceptance criteria
    • Document Testing Strategies.
    • Define problem - reporting procedures
    • Prepare Master Test Plan
  • Design-Activities
    • Setup test environment
    • Design Test Cases: Requirements-based and Code-based Test Cases
    • Analyze if automation of any test cases is needed
  • Execution- Activities
    • Initial Testing, Detect and log Bugs
    • Retesting after bug fixes
    • Final Testing
    • Implementation
    • Setup database to track components of the automated testing system, i.e. reusable modules
  • Bug Reporting, Analysis, and Regression Testing
    • Activities
    • Detect Bugs by executing test cases
    • Bug Reporting
    • Analyze the Error/Defect/Bug
    • Debugging the system
    • Regression testing
  • Inspection and Release-Activities
    • Maintaining configuration of related work products
    • Final Review of Testing
    • Metrics to measure improvement
    • Replication of Product
    • Product Delivery Records
    • Evaluate Test Effectiveness
  • Client Acceptance
    • Software Installation
    • Provide Support during Acceptance Testing
    • Analyze and Address the Error/Defect/Bug
    • Track Changes and Maintenance
    • Final Testing and Implementation
    • Submission, client Sign-off
    • Update respective Process
  • Support during Acceptance-Activities
    • Pre-Acceptance Test Support
    • Installing the software on the client’s environment
    • Providing training for using the software or maintaining the software
    • Providing hot-fixes as and when required to make testing activity to continue
    • Post Acceptance Test Support
    • Bug Fixing
  • Test Summary Analysis- Requirement
    • Quantitative measurement and Analysis of Test Summary
    • Evaluate Test Effectiveness
    • Test Reporting
      • Report Faults – (off-site testing)
      • Report Faults – (on-site/ field testing)
  • Testing Life Cycle - Team Structure
    • An effective testing team includes a mixture of members who have
      • Testing expertise
      • Tools expertise
      • Database expertise
      • Domain/Technology expertise
  • Testing Life Cycle - Team Structure (Contd…)
    • The testing team must be properly structured, with defined roles and responsibilities that allow the testers to perform their functions with minimal overlap.
    • There should not be any uncertainty regarding which team member should perform which duties.
    • The test manager will be facilitating any resources required for the testing team.
  • Testing Life Cycle - Roles & Responsibilities
    • A clear communication protocol should be defined within the testing team to ensure proper understanding of roles and responsibilities.
    • The roles chart should contain both on-site and off-shore team members.
  • Testing Life Cycle - Roles & Responsibilities
    • Test Manager
      • Single point contact between Wipro onsite and offshore team
      • Prepare the project plan
      • Test Management
      • Test Planning
      • Interact with Wipro onsite lead, Client QA manager
      • Team management
      • Work allocation to the team
      • Test coverage analysis
  • Testing Life Cycle - Roles & Responsibilities
    • Test Manager cont..
      • Co-ordination with onsite for issue resolution.
      • Monitoring the deliverables
      • Verify readiness of the product for release through release review
      • Obtain customer acceptance on the deliverables
      • Performing risk analysis when required
      • Reviews and status reporting
      • Authorize intermediate deliverables and patch releases to customer.
  • Testing Life Cycle - Roles & Responsibilities
    • Test Lead
      • Resolves technical issues for the product group
      • Provides direction to the team members
      • Performs activities for the respective product group
      • Review and Approve of Test Plan / Test cases
      • Review Test Script / Code
      • Approve completion of Integration testing
      • Conduct System / Regression tests
      • Ensure tests are conducted as per plan
      • Reports status to the Offshore Test Manager
  • Testing Life Cycle - Roles & Responsibilities
    • Test Engineer
      • Development of Test cases and Scripts
      • Test Execution
      • Result capturing and analysing
      • Defect Reporting and Status reporting
  • Software Testing Phases
  • Software Testing Phases
    • Unit Testing
    • Functional Testing
    • Integration Testing
    • System Testing
    • Acceptance Testing
    • Interface Testing
    • Regression Testing
    • Special Testing
  • Unit Testing
    • Unit Testing is a verification effort on the smallest unit of the software design: the software component or module.
  • Why Unit Testing?
    • Test early for each component and prevent the defect from being carried forward to next stage.
    • To ensure that the design specifications have been correctly implemented.
  • Approach
    • Uses the component-level design description as a guide.
    • Important control paths are tested to uncover errors within the boundary of the module.
    • Unit testing is white-box oriented, and this can be conducted in parallel for multiple components.
    • The relative complexity of tests and the errors they uncover are limited by the constrained scope established for unit testing.
  • Unit Testing test cases (diagram): for each module, cover Interfaces (input/output), Local Data Structures, Boundary Conditions, Independent Paths, and Error Handling Paths.
  • Unit testing to uncover errors like
    • Comparison of different data types
    • Incorrect logical operators or precedence
    • Expectation of equality when precision errors make equality unlikely (see the sketch after this list).
    • Incorrect comparison of variables
    • Improper or nonexistent loop termination
    • Failure to exit when divergent iteration is encountered.
    • Improperly modified loop variables, etc.
    • Misunderstood or incorrect arithmetic precedence
    • Mixed mode operations
    • Incorrect initialization
    • Precision inaccuracy
    • Incorrect symbolic representation of an expression
    Some of the Computational Errors uncovered while Unit Testing
    • Error description is unintelligible
    • Error notes do not correspond to the error encountered
    • Error condition causes system intervention prior to error handling
    • Exception- condition processing is incorrect
    • Error description does not provide enough information to assist in the location of the cause of error.
    Potential errors while error handling is evaluated
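To illustrate the precision pitfall flagged in the list above: comparing floating-point results for exact equality usually fails, so unit tests assert equality within a tolerance. A minimal, self-contained sketch:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotEquals;

public class PrecisionComparisonTest {

    @Test
    public void exactEqualityFailsBecauseOfRoundingError() {
        double sum = 0.1 + 0.2;
        // 0.1 and 0.2 have no exact binary representation, so sum is 0.30000000000000004.
        assertNotEquals(0.3, sum, 0.0);          // exact comparison: the values differ
    }

    @Test
    public void equalityWithinAToleranceIsTheRightCheck() {
        double sum = 0.1 + 0.2;
        assertEquals(0.3, sum, 1e-9);            // compare within a small delta instead
    }
}
```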
  • Unit Testing Procedure
    • Unit testing is normally considered as an adjunct to the coding step.
    • Unit test case design begins once the component-level design has been developed, reviewed and verified.
    • A review of design information provides guidance for establishing test cases that are likely to uncover errors.
    • Each test case should be coupled with a set of expected results.
  • Unit Test Steps
    • The Unit test criteria, the Unit test plan, and the test case specifications are defined.
    • A code walkthrough for all new or changed programs or modules is conducted.
    • Unit Test data is created, program or module testing is performed, and a Unit test report is written.
    • Sign-off to proceed to integration testing must be obtained; sign-off can be provided by the lead programmer, project coordinator, or project administrator. (A minimal unit test sketch follows.)
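A minimal sketch of the kind of unit test these steps produce: prepared test data exercises both a normal path and an error-handling path of a small module, with each test case coupled to its expected result. The `TemperatureConverter` module is hypothetical, invented only for illustration:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical module under unit test.
class TemperatureConverter {
    static double celsiusToFahrenheit(double celsius) {
        if (celsius < -273.15) {
            // Error-handling path: reject physically impossible input.
            throw new IllegalArgumentException("below absolute zero: " + celsius);
        }
        return celsius * 9.0 / 5.0 + 32.0;
    }
}

public class TemperatureConverterUnitTest {

    @Test
    public void normalPath_freezingPointConvertsTo32F() {
        assertEquals(32.0, TemperatureConverter.celsiusToFahrenheit(0.0), 1e-9);
    }

    @Test
    public void normalPath_boilingPointConvertsTo212F() {
        assertEquals(212.0, TemperatureConverter.celsiusToFahrenheit(100.0), 1e-9);
    }

    @Test(expected = IllegalArgumentException.class)
    public void errorHandlingPath_inputBelowAbsoluteZeroIsRejected() {
        TemperatureConverter.celsiusToFahrenheit(-300.0);
    }
}
```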
  • Functional Testing
    • Functional Testing is a kind of black box testing because a program’s internal structure is not considered.
    • Give the inputs, check the outputs without concentrating on how the operations are performed by the system.
    • When black box testing is conducted, the SRS plays a major role and the functionality is given the utmost importance.
  • Functional Testing
    • Focus on system functions
      • developed from the requirements
      • Behavior testing
    • Should
      • know expected results
      • test both valid and invalid input
    • Unit test cases can be reused
    • New end-user-oriented test cases have to be developed as well (a black-box sketch follows below).
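A sketch of a functional (black-box) test derived purely from a stated requirement, exercising both valid and invalid input through the public interface without looking at the implementation. The requirement and the `UsernameRules` class are hypothetical:

```java
import org.junit.Test;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

// Hypothetical requirement: a username is 3 to 12 characters, letters and digits only.
// The tests exercise only the public behavior (black box); they never inspect internals.
class UsernameRules {
    static boolean isValid(String username) {
        return username != null && username.matches("[A-Za-z0-9]{3,12}");
    }
}

public class UsernameFunctionalTest {

    // Valid input derived straight from the requirement.
    @Test public void simpleAlphanumericNameIsAccepted() { assertTrue(UsernameRules.isValid("tester01")); }

    // Invalid inputs, also derived from the requirement.
    @Test public void tooShortNameIsRejected()  { assertFalse(UsernameRules.isValid("ab")); }
    @Test public void tooLongNameIsRejected()   { assertFalse(UsernameRules.isValid("abcdefghijklm")); }
    @Test public void punctuationIsRejected()   { assertFalse(UsernameRules.isValid("bad name!")); }
    @Test public void nullInputIsRejected()     { assertFalse(UsernameRules.isValid(null)); }
}
```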
  • User Interface
    • This stage will also include Validation Testing, which is intensive testing of the new front-end fields and screens: Windows GUI standards; valid, invalid and limit data input; screen and field look and appearance; and overall consistency with the rest of the application.
  • Vertical-first testing: when the complete set of functionality is taken for one module and tested, it is called vertical-first testing. Horizontal-first testing: if a similar function is taken across all the modules and tested, it is called horizontal-first testing.
  • Integration testing
    • testing with the components put together.
  • Why integration Testing ?
    • Data can be lost across an interface.
    • One module can have an inadvertent, adverse effect on another.
    • Sub-functions, when combined, may not produce the desired major function.
    • Individually acceptable imprecision may be magnified to unacceptable levels.
    • Global data structures can create problems, and so on…
  • Types of approaches- Top-Down
    • Top-Down is an incremental approach to testing of the program structure. Modules are integrated by moving downward through the control hierarchy, beginning with the main control module; this can be done in a depth-first or breadth-first manner.
  • Types of approaches- Bottom-Up: as the name implies, begins construction and testing with atomic modules, i.e., from the components at the lowest levels in the program structure.
  • Integration testing- Example
    • E.g.: Login.java and ConnectionPool.java
    • The Login class calls the ConnectionPool object; integration testing identifies errors that were not observed during code debugging or reviews. (A sketch of such a test follows.)
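A sketch of what that integration test might look like. The slide only names the two classes, so every constructor and method below (acquire(), authenticate(), etc.) is an assumption made up for illustration; the point is that the test exercises Login together with a real ConnectionPool rather than in isolation:

```java
import org.junit.Test;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

// Minimal stand-ins for the two classes named on the slide; the real API is assumed.
class ConnectionPool {
    private final int size;
    private int handedOut = 0;

    ConnectionPool(int size) { this.size = size; }

    /** Returns true if a connection could be handed out. */
    synchronized boolean acquire() {
        if (handedOut >= size) return false;   // pool exhausted
        handedOut++;
        return true;
    }

    synchronized void release() {
        if (handedOut > 0) handedOut--;
    }
}

class Login {
    private final ConnectionPool pool;

    Login(ConnectionPool pool) { this.pool = pool; }

    /** Authenticates a user; fails when no database connection is available. */
    boolean authenticate(String user, String password) {
        if (!pool.acquire()) return false;      // integration point: depends on the pool
        try {
            return "admin".equals(user) && "secret".equals(password);
        } finally {
            pool.release();
        }
    }
}

public class LoginConnectionPoolIntegrationTest {

    @Test
    public void loginSucceedsWhenPoolHasAFreeConnection() {
        Login login = new Login(new ConnectionPool(1));
        assertTrue(login.authenticate("admin", "secret"));
    }

    @Test
    public void loginFailsGracefullyWhenPoolIsExhausted() {
        // This interaction error would never show up when Login is unit tested
        // against a stubbed pool that always has capacity.
        ConnectionPool exhausted = new ConnectionPool(0);
        Login login = new Login(exhausted);
        assertFalse(login.authenticate("admin", "secret"));
    }
}
```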
  • System Testing
    • Purpose
      • Test the entire system as a whole
    • Assumptions
      • Completed
        • Unit Testing
        • Functional Testing
        • Integration Testing
  • Expectations
    • Verification of the system
    • Software Requirements
    • Business Workflow perspective
    • Final verification of requirements and design
    • External Interfaces
    • Performance tests
    • Affected documentation
    • Non-testable requirements
  • Interface Testing
    • Purpose
      • Interfaces with the system
    • Assumptions
      • Unit, functional and integration testing
      • All Critical Errors
    • Expectations
      • Interfaces with External Systems
      • Planning and Co-ordination meetings with the external organizations in preparation for testing.
        • Who will be the primary contacts?
        • When is testing scheduled?
          • If there is no test environment available testing may have to occur on weekends or during non-production hours.
  • Interface Testing: Expectations (Contd.)
    • Expectations (Contd.)
      • What types of test cases will be run, how many and what are they testing?
        • Provide copies of test cases and procedures to the participants
        • If the external organization has specific cases they would like to test, have them provide copies
      • Who will supply the data and what will it contain? What format will it be in (paper, electronic, just notes for someone else to construct the data, etc.)?
      • Who is responsible for reviewing the results and verifying they are as expected?
      • How often will the group meet to discuss problems and testing status?
  • Interface Testing: Expectations (Contd.)
    • Expectations (Contd.)
      • Both normal cases and exceptions should be tested on both sides of the interface (if both sides exchange data). The interface should be tested for handling the normal amount and flow of data as well as peak processing volumes and traffic.
      • If appropriate, the batch processing or file transmission “window” should be tested to ensure that both systems complete their processing within the allocated amount of time.
      • If fixes or changes need to be made to either side of the interface, the decisions, deadlines and re-test procedures should be documented and distributed to all the appropriate organizations.
  • Performance Testing
    • Purpose
      • The purpose is to verify the system meets the performance requirements.
    • Assumptions/Pre-Conditions
        • System testing successful.
        • Ensure there are no unexpected performance problems.
        • Prior to Acceptance Testing.
        • Tests should use business cases, including normal, error and unlikely cases.
  • Performance Testing (Contd…)
        • Performance tests
          • Load Test
          • Stress Test
          • Volume Test
          • Test data
          • Response time (a simple check is sketched after this list)
          • End-to-end tests and workflows should be performed
          • Tracking tool for comparison
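A very small sketch of the response-time idea from the list above: time an operation and fail the test if it exceeds a budget. The operation and the 200 ms budget are arbitrary placeholders; real load, stress, and volume tests would use dedicated tools and production-like environments:

```java
import org.junit.Test;
import static org.junit.Assert.assertTrue;

public class ResponseTimeCheck {

    // Placeholder for the operation whose response time matters (e.g. a search query).
    private void operationUnderTest() throws InterruptedException {
        Thread.sleep(50);   // simulate roughly 50 ms of work
    }

    @Test
    public void operationFinishesWithinItsResponseTimeBudget() throws InterruptedException {
        long budgetMillis = 200;                      // assumed requirement, for illustration only

        long start = System.nanoTime();
        operationUnderTest();
        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;

        assertTrue("took " + elapsedMillis + " ms, budget was " + budgetMillis + " ms",
                   elapsedMillis <= budgetMillis);
    }
}
```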
  • Regression Testing
  • Regression Testing
    • Approach
    • Definition and Purpose
    • Types of regression testing
    • Regression test problems
    • Regression testing tools
  • Regression Testing
    • Definition
      • “Selective retesting to detect faults introduced during modification of a system or system component, to verify that modifications have not caused unintended adverse effects, or to verify that a modified system or system component still meets its specified requirements.”
      • “...a testing process which is applied after a program is modified.”
  • Regression Testing
    • Purpose of Regression testing
      • Locate errors
      • Increase confidence in correctness
      • Preserve quality
      • Ensure continued operations
      • Check correctness of new logic
      • Ensure continuous working of unmodified portions
  • Regression testing types (Corrective, Adaptive, Perfective, and Preventive Maintenance)
    • Corrective - fixing bugs, Design Errors, coding errors
    • Adaptive - no change to functionality, but now works under new conditions, i.e., modifications in the environment.
    • Perfective - adds something new; makes the system “better” , Eg: adding new modules.
    • Preventive – prevent malfunctions or improve maintainability of the software. Eg: code restructuring, optimization, document updating etc
  • Regression Testing
    • Example 1 - Y2K
    • During Y2K code changes, regression testing was the essence of the transition phase. Typically, code was changed in multiple places (the changes did not turn the original logic upside down, but they were subtle). Regression testing was very important because even one small piece of code left untested could lead to huge ramifications in the large amounts of data typically handled by these mainframe computers and programs.
  • Regression Testing
    • Example 2 – General
      • Regression testing might even be required when one of our business associates changes his systems (it might be new hardware). Since our system is hooked on to that changed system, our test engineers are also required to do regression testing on our system, which has NOT been changed.
      • This example brings to light another fact with Regression testing, i.e., sometimes, even an unchanged system needs to be tested!
  • Regression testing methods
    • Regression testing can be done either manually or by automated testing tools.
      • Manual testing: can be done for small systems, where investing in automated tools might not be feasible.
      • Automated testing: one class of these tools is called capture-playback tools. This is very helpful in situations where the system undergoes lots of version changes. (A simple baseline-comparison sketch follows.)
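Capture-playback tools record interactions and expected outputs, then replay them after each change and compare. The same idea in miniature is a test that compares current output against a previously captured baseline; the `InvoiceFormatter` module, its report format, and the baseline string are invented purely for illustration:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical module whose output was captured before the modification.
class InvoiceFormatter {
    static String format(String customer, double amount) {
        return "INVOICE|" + customer + "|" + String.format(java.util.Locale.US, "%.2f", amount);
    }
}

public class InvoiceRegressionTest {

    // Output captured from the previous, accepted release ("the baseline").
    private static final String BASELINE = "INVOICE|ACME Corp|1250.50";

    @Test
    public void modifiedCodeStillProducesTheCapturedBaselineOutput() {
        // Replayed after every modification: if the new code changes the output
        // unintentionally, this test fails and flags a regression.
        assertEquals(BASELINE, InvoiceFormatter.format("ACME Corp", 1250.50));
    }
}
```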
  • Acceptance Testing
    • Purpose
      • The purpose of acceptance testing is to verify system from user perspective
    • Assumptions/Pre-Conditions
        • Completed system and regression testing
        • Configuration Manager
        • Test data
        • Final versions of all documents ready
        • Overview to the testing procedures
        • Exit decision
        • Specific procedures
        • Acceptance Criteria MUST be documented and agreed with the project stakeholders before Acceptance Testing begins.
  • Acceptance Testing
    • Expectations
        • Verification from the user’s perspective
        • Performance testing should be conducted again
        • Extra time
        • User manual to the testers
        • Non-testable requirements
        • Review with the Sponsor and User
        • Plans for the Implementation
  • Field Testing
    • Purpose
      • The purpose of field testing is to verify that the systems work in actual user environment.
    • Assumptions/Pre-Conditions
      • System and/or acceptance testing successful.
    • Expectations
      • Verification of the system works in the actual user environment.
      • Pilot test with the final product.
      • Pilot system should work during a problem.
  • Software Testing Strategies
  • Testing Information Flow
    • NOTES
      • Software Configuration includes a Software Requirements Specification, a Design Specification and source code.
      • A test configuration includes a Test Plan and Procedures, test cases and testing tools.
      • It is difficult to predict the time to debug the code, hence it is difficult to schedule.
    [Diagram: the Software Configuration and Test Configuration feed Testing; Test Results are compared with Expected Results; Errors and error-rate data drive Debugging and Corrections and feed a Reliability Model that produces the Predicted Reliability.]
  • Software Testing Strategies and Techniques
    • Concise statement of how to meet the objectives of software testing
    • To clarify expectations with the user, sponsor and bidders
    • To describe the details of how the testing team will evaluate the work products, systems and testing activities and results
    • To describe the approach to all testing types and phases and the activities for which the team is responsible
  • Test Strategy for Maintenance
    • Includes a greater focus on regression testing and on keeping users informed of the specific fixes or changes that were requested.
    • Test process should be described in terms of the periodic release cycles that are part of the change control process.
    • Also describe a set of minimum tests to be performed when emergency fixes are needed (for instance, due to failed hardware or recovering from a database crash).
  • Test Strategy: Inputs & Deliverables
    • Inputs: Priority & Criticality, Types of Applications, Project Success Criteria.
    • Deliverables: Time Required for Testing, No. & Levels of Resources, Rounds of Testing, Exit Criteria, Test Suspension Criteria, Resumption Criteria.
  • Typical Test Issues
    • Test Participation
    • Test Environments
    • Approach to Testing External Interfaces
    • Approach to Testing COTS products
    • Scope of Acceptance Testing
    • Verification of Un-testable Requirements
    • Criteria for Acceptance of the System
    • Pilot or Field Testing
    • Performance and Capacity Requirements/Testing
  • Common Test Related Risks and Considerations
    • Poor Requirements
    • Stakeholder Participation
    • Test Staffing
    • Testing of COTS
    • External Interfaces
    • Performance and Stress Testing
    • Schedule Compression
    • Requirement Testability
    • Acceptance
  • Test Exit Criteria
    • Have all test cases been executed at least once?
    • Have all requirements been tested or verified?
    • Test Documentation
    • Documents updated and submitted
    • Configuration Manager
    • Test Incidents
  • Software Test Plan
  • How to achieve good testing
    • Start planning early in the project.
    • Prepare a Test Plan.
    • Identify the objectives.
    • Document objectives in Test Plan.
  • Test Plan
    • Objective
      • A test plan prescribes the scope, approach, resources, and schedule of testing activities. It identifies the items to be tested, the features to be tested, the testing tasks to be performed, the personnel responsible for each task, and the risks associated with the plan.
  • Why Plan Test?
    • Repeatable
    • To Control
    • Adequate Coverage
    Importance: The test planning process is a critical step in the testing process. Without a documented test plan, the test itself cannot be verified, coverage cannot be analyzed, and the test is not repeatable.
  • Test plan — to support testing, a plan should exist which specifies:
        • What to do?
        • How to do it?
        • When to do it?
  • Test Plan
    • Test plans need to identify
      • The materials needed for testing
      • What tests will be run
      • What order the tests will be run
    • Test plans also need to:
      • Name each test
      • Predict how long the test should take
      • Scripts and test cases will be needed for most tests
  • Structure (as defined by the IEEE 829 Test Documentation Standard)
    1. Test plan identifier
    2. Introduction
    3. Test items / integration components
    4. Features to be tested
    5. Features not to be tested
    6. Test approach
    7. Item pass/fail criteria
    8. Suspension criteria and resumption requirements
    9. Test deliverables (PPlan)
    10. Environmental needs (H/w & S/w)
    11. Responsibilities (PPlan)
    12. Staffing and training needs (PPlan)
    13. Schedule (PPlan)
    14. Risks and contingencies (PPlan)
    15. Approvals
    Ref: Test Plan Template IEEE 829
  • Testing Process
    • Code Based Test Case Design
    • Requirements Based Test Case Design
    Testing Techniques
  • Testing Techniques
    • Specification Based (Black Box/Functional Testing)
      • Equivalence Partitioning
      • Cause Effect Graphing
      • Boundary Value Analysis
      • Category Partition
      • Formal Specification Based
      • Control Flow Based Criteria
      • Data Flow based criteria
    • Fault Based
      • Error Guessing
      • Mutation
      • Fault Seeding
  • Testing Techniques (Contd…)
    • Usage Based
      • Statistical testing
      • (Musa’s)SRET
    • Specific Technique
      • Object Oriented Testing
      • Component Based Testing
    • Code Based (White Box/Structural testing)
      • Statement Coverage
      • Edge Coverage
      • Condition Coverage
      • Path Coverage
      • Cyclomatic Complexity
  • Test Data Adequacy Criteria
    • Code Based Testing — Have I tested, exercised, forced, found?
    • Requirement Based Testing — Have I thought, applied all inputs, completely explored, run all the scenarios?
  • Test Preparation Checklist
    • Test Id
    • Version
    • Users A/c
    • Input DB
    • Training
    • Release to System
    • Reset System
    • Test Environment
    • Stake Holders
    • Schedule……
    • Code Based Test Case Design
    • Requirements Based Test Case Design
    Test Design Specifications
  • Purpose of Test Design Specification
    • Requirements of Test Approach
    • Identify the features to be tested
    • Arrive at High Level
  • Contents of Test Design Specification
    • Identification and Purpose
    • Features to be tested
    • Approach Refinements
    • Test Identification
    • Pass/Fail Criteria
  • Approach
    • Study Business Requirements
    • Arrive at Environmental Requirements
    • Identify test related Risks
    • Decide Automation Requirements
    • Prepare Test Documents
    • Plan for Test Completion
    • Analyze Track changes
    • Review Test design effectiveness
  • Test Cases
  • Test Case Sheet — details to capture:
      • 1. Test case ID (should be unique, e.g. c_01.1, c_01.1a, c_01.2, …)
      • 2. Feature/functionality to be tested (each requirement/feature could come from a use case/COMP)
      • 3. Test description / test input details (test input, test data, action to be performed to test the feature; complex test cases should be split into more than one)
      • 4. Expected behaviour (messages, screens, data, with correct details)
      • 5. Actual behaviour and status
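    • The same details can be captured as a structured record; the following minimal Python sketch mirrors the field names above (the example values are invented):
      from dataclasses import dataclass

      @dataclass
      class TestCase:
          testcase_id: str            # unique, e.g. "c_01.1"
          feature: str                # requirement/feature being tested
          description: str            # test input, data, and action to perform
          expected: str               # expected behaviour (messages, screens, data)
          actual: str = ""            # filled in during execution
          status: str = "Not Run"     # e.g. "Pass" / "Fail" / "Not Run"

      tc = TestCase(
          testcase_id="c_01.1",
          feature="Add product to shopping cart",
          description="Add one in-stock product to an empty cart",
          expected="Cart shows 1 item with the correct price",
      )
      print(tc)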
  • Test case development process
      • Identify all potential Test Cases needed to fully test the business and technical requirements
      • Document Test Procedures
      • Document Test Data requirements
      • Prioritize test cases
      • Identify Test Automation Candidates
      • Automate designated test cases
  • Types of test cases (Type — Source)
      • 1. Requirement based — Specifications
      • 2. Design based — Logical system
      • 3. Code based — Code
      • 4. Extracted — Existing files or test cases
      • 5. Extreme — Limits and boundary conditions
  • Requirement based test cases
    Steps for selecting test cases:
    • Identify the basic cases that indicate program functionality.
    • Create a minimal set of tests to cover all inputs and outputs.
    • Break down complex cases into single cases.
    • Remove unnecessary or duplicate cases.
    • Review systematically and thoroughly.
    • Design based test cases supplement requirements based test cases.
  • Code based test cases
    Goals for complete code based coverage:
    • Every statement exercised at least once.
    • Every decision exercised over all outcomes.
  • Extreme cases
    • Need: looks for exceptional conditions, extremes, boundaries, and abnormalities.
    • Requires the experience and creativity of the Test Engineer.
  • Extracted and randomized cases
    • Extracted cases involve extracting samples of real data for the testing process.
    • Randomized cases involve using tools to generate potential data for the testing process.
  • Characteristics of good test case
    • Specific
    • Non-redundant
    • Reasonable probability of catching an error
    • Medium complexity
    • Repeatable
    • Always list expected results
  • Test case guidelines
      • Developed to verify that specific requirements or design are satisfied
      • Each component must be tested with at least two test cases: Positive and Negative
      • Real data should be used to test the modules realistically after tests with prepared test data have passed.
  • The Testing Process: Design Test Cases → Prepare Test Data → Run Program with Test Data → Compare Results (work products: Test Cases, Test Data, Test Results, Test Reports).
  • Code Based Test Case Design
      • Statement Coverage
      • Edge Coverage
      • Condition Coverage
      • Path Coverage
      • Cyclomatic Complexity
  • Purpose
    • Understand the Objective
    • Effective conversion of specifications
    • Checking Programming Style with coding standards
    • Check Logic Errors
    • Incorrect Assumptions
    • Typographical Errors
  • Code Based Testing - White Box Testing
    • Coding Standards
    • Logic Programming Style
    • Complexity of Code
    • Structural Testing
    • Ensure Reduced Rework
    • Quicker Stability
    • Smooth Acceptance
    • Structure of the Software itself
    • Valuable Source
    • Selecting test cases
  • Code Based Testing or White Box Testing
    • Testing control structures of a procedural design.
    • Can derive test cases to ensure:
      • All independent paths are exercised at least once.
      • All logical decisions are exercised for both true and false paths.
      • All loops are executed at their boundaries and within operational bounds.
      • All internal data structures are exercised to ensure validity.
  • Code Based Testing or White Box Testing (Contd..)
      • Why do white box testing when black box testing is used to test conformance to requirements?
        • Logic errors and incorrect assumptions most likely to be made when coding for "special cases". Need to ensure these execution paths are tested.
  • Code Based Testing or White Box Testing (Contd..)
        • May find assumptions about execution paths incorrect and so make design errors. White box testing can find these errors.
        • Typographical errors are random. Just as likely to be on an obscure logical path as on a mainstream path.
          • "Bugs lurk in corners and congregate at boundaries"
  • Types of Code Based Testing & Adequacy Criteria
    • Involve Control Flow Testing
      • Statement Coverage
        • Is every statement executed at least once?
      • Edge Coverage
        • Is every edge in the control flow graph executed?
      • Condition Coverage
        • Is every edge executed, and every Boolean (sub)expression in the control flow graph exercised?
      • Path Coverage
        • Is every path in the control flow graph executed?
      • Cyclomatic Complexity
        • Is the logical structure of the program appropriate?
  • Test Cases Derive Test Cases Independent Path Logical Decisions Boundaries Data Structures
  • Types of Code Based Testing (1) - Statement Coverage
    • The control-flow elements to be exercised are the program's statements.
    • The statement coverage criterion requires that every elementary statement of the program is executed at least once.
    Statement coverage (C) = Number of Executed Statements (P) / Total Number of Statements (T)
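    • As a tiny worked illustration of this ratio (the counts below are invented, not from the material above):
      # Statement coverage C = P / T, with illustrative counts.
      executed_statements = 45   # P: statements exercised by the current test set
      total_statements = 50      # T: statements in the module under test

      coverage = executed_statements / total_statements
      print(f"Statement coverage C = {coverage:.0%}")   # 90%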
  • Types of Code Based Testing (2) - Edge Coverage (Branch Coverage)
    • Focus is on identifying test cases executing each branch at least once.
    Edge coverage (C) = Number of Executed Branches (P) / Total Number of Branches (T)
  • Types of Code Based Testing(3) - Conditions Coverage
    • Combination of Edge Coverage and more detailed conditions.
    • Examples: True & False, Elementary Conditions, Comparisons, Boolean Expressions.
    Basic condition coverage (C) = Number of Executed Conditions (P) / Total Number of Conditions (T)
  • Types of Code Based Testing(3) - Conditions Coverage (Contd.)
    • Condition testing aims to exercise all logical conditions in a program module. It is defined as:
      • Relational expression: (E1 op E2), where E1 and E2 are arithmetic expressions.
    • Simple condition: Boolean variable or relational expression, possibly preceded by a NOT operator.
  • Types of Code Based Testing(3) - Conditions Coverage (Contd.)
    • Compound condition: Composed of two or more simple conditions, boolean operators and parentheses.
    • Boolean expression: Condition without relational expressions.
  • Types of Code Based Testing(3) - Conditions Coverage (Contd.)
    • Errors in expressions can be due to:
      • Boolean operator error
      • Boolean variable error
      • Boolean parenthesis error
      • Relational operator error
      • Arithmetic expression error
    • Condition testing methods focus on testing each condition in the program.
  • Types of Code Based Testing(3) - Conditions Coverage (Contd.)
    • Strategies proposed include:
      • Branch testing - execute every branch at least once.
      • Domain Testing - uses three or four tests for every relational operator.
      • Branch and relational operator testing - uses condition constraints.
  • Types of Code Based Testing(3) - Conditions Coverage (Contd.)
    • Example 1: C1 = B1 & B2
      • where B1, B2 are boolean conditions.
      • Condition constraint of form (D1,D2) where D1 and D2 can be true (t) or false(f).
      • The branch and relational operator test requires the constraint set {(t,t),(f,t),(t,f)} to be covered by the execution of C1.
    • Coverage of the constraint set guarantees detection of relational operator errors
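    • The constraint set above can be exercised mechanically; the following minimal Python sketch (with c1 standing in for the compound condition C1 = B1 & B2) evaluates each constraint pair:
      def c1(b1: bool, b2: bool) -> bool:
          """The compound condition under test: C1 = B1 AND B2."""
          return b1 and b2

      # Constraint set from the example: each pair fixes the truth values of B1 and B2.
      constraint_set = [(True, True), (False, True), (True, False)]

      for b1, b2 in constraint_set:
          print(f"B1={b1}, B2={b2} -> C1={c1(b1, b2)}")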
  • Types of Code Based Testing (4) - Path Coverage: Data Flow Testing
    • Path coverage: every path in the control flow graph is executed at least once.
      • Data flow testing selects test paths according to the location of definitions and uses of variables.
    • Test for Loops (iterations)
      • Loop Testing.
      • Loops are fundamental to many algorithms.
      • Loops can be classified as simple, concatenated, nested and unstructured.
  • Types of Code Based Testing (4) - Path Coverage: Loop Testing Examples — [Diagram: simple, nested, concatenated and unstructured loops]
  • Types of Code Based Testing (4) - Path Coverage: Simple Loops
    • For a simple loop of maximum size n, test (see the sketch below):
        • Skip the loop entirely.
        • Only one pass through the loop.
        • Two passes through the loop.
        • m passes through the loop, where m < n.
        • (n-1), n and (n+1) passes through the loop.
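    • A minimal Python sketch of deriving those pass counts for a given n (the helper name and the choice m = n // 2 are assumptions for illustration):
      def simple_loop_test_counts(n: int) -> list[int]:
          """Return the iteration counts to exercise: 0, 1, 2, m (< n), n-1, n, n+1."""
          m = n // 2                      # a typical value strictly between 2 and n
          candidates = [0, 1, 2, m, n - 1, n, n + 1]
          # keep order, drop duplicates (e.g. when n is small)
          seen, counts = set(), []
          for c in candidates:
              if c >= 0 and c not in seen:
                  seen.add(c)
                  counts.append(c)
          return counts

      print(simple_loop_test_counts(10))   # [0, 1, 2, 5, 9, 10, 11]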
  • Types of Code Based Testing (4) - Path Coverage: Nested Loops
    • Nested Loops
      • Start with the innermost loop. Set all other loops to minimum values.
      • Conduct simple loop testing on the inner loop.
      • Work outwards.
      • Continue until all loops are tested.
  • Types of Code Based Testing (4) - Path Coverage: Concatenated Loops
    • Concatenated loops test:
        • If the loops are independent, use simple loop testing.
        • If they are dependent, treat them as nested loops.
  • Types of Code Based Testing (4) - Path Coverage: Unstructured Loops
    • Unstructured loops
        • Don't test - redesign.
  • Types of Code Based Testing (5) - Cyclomatic Complexity
    • Measures the amount of decision logic in a single software module.
    • Cyclomatic complexity gives a quantitative measure of the logical complexity.
    • This value gives the number of independent paths in the basis set and an upper bound for the number of tests needed to ensure that each statement is executed at least once.
  • Cyclomatic Complexity
    • An independent path is any path through a program that introduces at least one new set of processing statements or a new condition (i.e., a new edge).
  • Relationship with Programming Complexity
    • Cyclomatic Complexity calculations help the developer/tester to decide whether the module under test is overly complex or well written.
    • Recommended limit value of Cyclomatic Complexity is 10.
      • >10
        • Structure of the module is overly complex.
      • >5 and <10
        • Structure of the module is complex indicating that the logic is difficult to test.
      • <5
        • structure of the module is simple and logic is easy to test.
  • Flow Graphic Notation Sequence If While Until Case
  • Flow Graphic Notation
    • On a flow graph:
      • Arrows called edges represent flow of control.
      • Circles called nodes represent one or more actions.
      • Areas bounded by edges and nodes are called regions.
      • A predicate node is a node containing a condition.
  • Flow Graphic Notation
    • Any procedural design can be translated into a flow graph.
    • Note that compound Boolean expressions in tests generate at least two predicate nodes and additional arcs.
  • Flow Graphic Notation
  • Deriving Cyclomatic Complexity
    • Cyclomatic Complexity equals number of independent paths through standard control flow graph model.
    • Steps to arrive at Cyclomatic Complexity
      • Draw a corresponding flow graph.
      • Determine Cyclomatic Complexity.
      • Determine independent paths.
      • Prepare tests cases.
    • Cyclomatic Complexity: Example — PROCEDURE SORT
      • 1. Do while records remain
      •        read record
      • 2.   If record field 1 = 0
      • 3.     Then process record; store in buffer; increment counter
      • 4.   Elseif record field 2 = 0
      • 5.     Then reset record
      • 6.     Else process record; store in file
      • 7a.  Endif
      •      Endif
      • 7b. Enddo
      • 8. End
    [Flow graph for this procedure, with nodes 1, 2, 3, 4, 5, 6, 7a, 7b and 8]
  • Reporting Cyclomatic Complexity
    • The McCabe Cyclomatic complexity V ( G ) of a control flow graph measures the maximum number of linearly independent paths through it. The complexity typically increases because of branch points.
    • Definitions:
    • Cyclomatic Complexity V(G) = e – n + 2
  • Reporting Cyclomatic Complexity
    • To compute the cyclomatic complexity V(G): V refers to the cyclomatic number in graph theory and G indicates that the complexity is a function of the graph.
      • If e is the number of edges (arcs),
      • n is the number of nodes, and
      • p is the number of connected components (separate modules), then
      • the number of linearly independent paths is
        • V ( G ) = e - n + 2 * p
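    • As a worked illustration, the minimal Python sketch below computes V(G) = e - n + 2*p for the PROCEDURE SORT flow graph shown earlier; the edge list is reconstructed from that example and should be treated as an assumption.
      def cyclomatic_complexity(edges, p=1):
          """V(G) = e - n + 2*p, where e = #edges, n = #nodes, p = #connected components."""
          nodes = {node for edge in edges for node in edge}
          e, n = len(edges), len(nodes)
          return e - n + 2 * p

      # Flow graph of the PROCEDURE SORT example (nodes 1..8, 7a, 7b).
      sort_edges = [
          ("1", "2"), ("1", "8"),          # loop test: enter body or exit
          ("2", "3"), ("2", "4"),          # if record field 1 = 0
          ("4", "5"), ("4", "6"),          # elseif record field 2 = 0
          ("3", "7b"), ("5", "7a"), ("6", "7a"),
          ("7a", "7b"), ("7b", "1"),       # back edge of the Do-While loop
      ]

      print(cyclomatic_complexity(sort_edges))  # 11 - 9 + 2 = 4 independent paths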
  • Software Testing Technique
      • Example
          • Independent Paths:
            • 1, 1, 8
            • 1, 2, 3, 7b, 1, 8
            • 1, 2, 4, 5, 7a, 7b, 1, 8
            • 1, 2, 4, 6, 7a, 7b, 1, 8
          • Cyclomatic complexity provides upper bound for number of tests required to guarantee coverage of all program statements.
  • Summary: Cyclomatic Complexity
    • The number of tests to test all control statements + one virtual path equals the Cyclomatic complexity.
    • Cyclomatic complexity equals the number of simple decision points (predicate nodes) in a program plus one.
    • Useful if used with care. Does not imply adequacy.
    • Does not take into account data-driven programs.
  • Deriving Test Cases
    • Using the design or code, draw the corresponding flow graph.
    • Determine the Cyclomatic complexity of the flow graph.
    • Determine a basis set of independent paths.
    • Prepare test cases that will force execution of each path in the basis set.
      • Note: some paths may only be able to be executed as part of another test.
  • Graph Matrices
    • Can automate derivation of flow graph and determination of a set of basis paths.
    • Software tools to do this can use a graph matrix.
    • Graph matrix:
      • Is a square matrix whose size (number of rows and columns) equals the number of nodes in the flow graph.
      • Rows and columns correspond to the nodes.
      • Entries correspond to the edges (connections between nodes).
  • Graph Matrices
    • Can associate a number with each edge entry.
    • Use a value of 1 to calculate the Cyclomatic complexity
      • For each row, sum column values and subtract 1.
      • Sum these totals and add 1.
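    • A minimal Python sketch of that calculation, assuming the same SORT flow graph as before (rows with no outgoing connections contribute zero):
      nodes = ["1", "2", "3", "4", "5", "6", "7a", "7b", "8"]
      edges = [("1", "2"), ("1", "8"), ("2", "3"), ("2", "4"), ("4", "5"),
               ("4", "6"), ("3", "7b"), ("5", "7a"), ("6", "7a"),
               ("7a", "7b"), ("7b", "1")]

      index = {n: i for i, n in enumerate(nodes)}
      matrix = [[0] * len(nodes) for _ in nodes]       # square graph matrix
      for src, dst in edges:
          matrix[index[src]][index[dst]] = 1           # link weight 1 = connection exists

      # For each row: sum the entries and subtract 1 (empty rows contribute 0); then add 1.
      connections = sum(max(sum(row) - 1, 0) for row in matrix)
      print(connections + 1)                           # 4, matching V(G) above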
  • Some other interesting link weights
    • Probability that a link (edge) will be executed.
    • Processing time for traversal of a link.
    • Memory required during traversal of a link.
    • Resources required during traversal of a link.
  • Graph Matrices — [Diagram: graph matrix for the flow graph with nodes 1, 2, 3, 4, 5, 6, 7a, 7b, 8]
  • Introduction to Static Testing
  • Static Testing
    • Static testing is the process of evaluating a system or component based on its form, structure, content or documentation (without computer program execution).
    • Reviews form an important activity in static testing.
  • Reviews
    • Reviews are "filters" applied to uncover errors in work products at the end of each phase.
    • A review process can be defined as a critical evaluation of an object.
    • Reviews involve a group meeting to assess a work product, and are held in certain phases such as the Requirements phase, the Prototyping phase and the final delivery phase.
  • Benefits of Reviews
    • Identification of the anomalies at the earlier stage of the life cycle
    • Identifying needed improvements
    • Certifying correctness
    • Encouraging uniformity
    • Enforcing subjective rules
  • Types of Reviews
    • Inspections
    • Walkthroughs
    • Technical Reviews
    • Audits
  • Work-products that undergo reviews
    • Software Requirement Specification
    • Software design description
    • Source Code
    • Software test documentation
    • Software user documentation
    • System Build
    • Release Notes
    • Let us discuss Inspections, Walkthroughs and Technical Reviews with respect to Code.
  • Code Inspection
    • Code inspection is a visual examination of a software product to detect and identify software anomalies including errors and deviations from standards and specifications.
    •  Inspections are conducted by peers led by impartial facilitators.
    • Inspectors are trained in Inspection techniques.
    • Determination of remedial or investigative action for an anomaly is a mandatory element of software inspection.
    • Attempt to discover the solution for the fault is not part of the inspection meeting.
  • Objectives of code Inspection
    • Cost of detecting and fixing defects is less during early stages.
    • Gives management an insight into the development process – through metrics.
    • Inspectors learn from the inspection process.
    • Allows easy transfer of ownership, should staff leave or change responsibility.
    • Build team strength at emotional level.
  • Composition of Code Inspection Team
    • Author
    • Reader
    • Moderator
    • Inspector
    • Recorder
  • Rules for Code Inspection
    • An inspection team should have only 3 to 6 participants.
    • Author shall not act as Inspection leader, reader or recorder.
    • Management member shall not participate in the inspection.
    • The Reader is responsible for leading the inspection team through the program, interpreting sections of the work line by line.
    • The code is related back to higher level work products such as the Design and the Requirements.
  • Inspection Process
    • Overview
    • Preparation
    • Inspection
    • Rework
    • Follow up
  • Classification of anomaly
    • Missing
    • Superfluous (additional)
    • Ambiguous
    • Inconsistent
    • Improvement desirable
    • Non-conformance to standards
    • Risk-prone (safer alternative methods are available)
    • Factually incorrect
    • Non-implementable (due to system or time constraints)
  • Severity of anomaly
    • Major
    • Minor
  • Benefits of Code Inspection
    • Synergy – 3-6 active people work together, focused on a common goal.
    • Work product is detached from the individual.
    • Identification of the anomalies at the earlier stage of the life cycle.
    • Uniformity is maintained.
  • Guidelines for Code Inspection
    • Adequate preparation time must be provided to participants.
    • The inspection time must be limited to 2-hour sessions, with a maximum of 2 sessions a day.
    • The inspection meeting must be focused only on identifying anomalies, not on the resolution of the anomalies.
    • The author must be dissociated from his work.
    • The management must not participate in the inspections.
    • Selecting the right participants for the inspection.
  • Output of Code Inspection
    • Inspection team members
    • Software program examined
    • Code inspection objectives and whether they were met.
    • Recommendations regarding each anomaly.
    • List of actions, due dates and responsible people.
    • Recommendations, if any, to the QA group to improve the process
  • Code Walkthrough
    • Walkthrough is a static analysis technique in which a designer or programmer leads members of the development team and other interested parties through a software program.
    • Participants ask questions on the program and make comments about possible errors, violation of standards, guidelines etc.
  • Objectives of Code Walkthrough
    • To evaluate a software program, check conformance to standards, guidelines and specifications
    • Educating / Training participants
    • Find anomalies
    • Improve software program
    • Consider alternative implementation if required (not done in inspections)
  • Difference between Inspections and Walkthroughs
    • Process: the walkthrough process includes overview, little or no preparation, examination (the actual walkthrough meeting), rework and follow up; the inspection process includes overview, preparation, inspection, rework and follow up.
    • Checklist: no checklist is used in walkthroughs; in inspections a checklist is used to find faults.
    • Participants: usually team members of the same project take part in a walkthrough, and the author himself acts as the walkthrough leader; a group of relevant persons from different departments participate in an inspection.
    • Duration: less time is spent on walkthroughs as there is no formal checklist used to evaluate the program; an inspection takes longer as the list of items in the checklist is tracked to completion.
    • Formality: walkthroughs have no formalized procedure in the steps; inspections follow a formalized procedure in each step.
  • Code Walkthrough Team
    • Author
    • Walkthrough Leader
    • Recorder
    • Team member
  • Code Walkthrough Process
    • Overview
    • Preparation
    • Examination
    • Rework / Follow-up
  • Outputs of Code Walkthrough
    • Walkthrough team members
    • Software program examined
    • Walkthrough objectives and whether they were met.
    • Recommendations regarding each anomaly.
    • List of actions, due dates and responsible people.
  • Technical Review of Code
    • A technical review is a formal team evaluation of a product.
    • It identifies any discrepancies from specifications and standards or provides recommendations after the examination of alternatives or both.
    • The technical review is less formal than the formal inspection.
    • The technical review participants include the author and participants knowledgeable of the technical content of the product being reviewed.
  • Technical review process
    • Step 1: Planning the Technical Review Meeting
    • Step 2: Reviewing the Product
    • Step 3: Conducting the Technical Review
    • Step 4: Resolving Defects
    • Step 5: Reworking the Product
  • Outputs of Technical review
    • Same as Inspections.
  • Requirement Based Test Design - Black Box Technique
      • Low Level Testing
      • High Level Testing
  • Purpose
    • Is to find
      • Functional validity of the system
      • Sensitivity
      • Tolerance
      • Operability
      • Interface errors
      • Errors in database structures
      • Performance errors
      • Initialization and termination errors
  • Approach
    • Positive Testing
    • Negative Testing
    • Use case Testing
  • Categories of Requirements
    • Functional
      • Absolutely necessary for functioning of system
      • Describes the input/output behaviour of the system
      • Shalls of the software
      • Must be testable
    • Non-functional
      • Restriction or constraints on system services
      • Define the attributes of the system as it performs its job
      • Subjective in nature and not conclusively testable
      • In real-systems, these are more important than functional requirements!
  • Validating Functional Requirements
    • Black Box Testing
      • Low Level Testing
      • High Level Testing
  • Validating Non-functional Requirements
    • Software Quality Factors
    • Test cases generated to validate the metrics
      • Criteria is met
      • Factor is met
    Prioritization of factors: important factor → criteria → metric
  • Requirements Based Test Design - Black Box Techniques
    • Low Level Techniques
      • Equivalence partitioning
      • Boundary value analysis
      • Input domain & Output domain
      • Special Value
      • Error based
      • Cause-effect Graph
      • Comparison Testing
    • High Level Techniques
      • Specification-based testing
    • Express requirements in simple formal notations like
      • State machine
      • Decision table
      • Use cases
      • Flowchart
      • Boolean logic
      • Regular expressions
    • The notation allows generation of scenarios.
    • Different test cases for every scenario.
    • Good side effects!
    • Makes requirements verifiable, finds flaws in requirements.
  • Requirement Based Test Design - Black Box Technique
      • High Level Techniques
  • Techniques
    • State Machine
    • Decision Table
    • Flowchart
    • Use Cases
  • State Machine
    • Description
      • State based business logic
      • Covering all paths generate test cases
      • Diagram may be complicated
      • For every event generate test cases using BVA, EP…
    • State Diagram
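    • As a minimal Python sketch of generating test cases from a state machine (the login/session states and events below are invented for illustration), one test case is derived per transition:
      transitions = {
          # (current_state, event): next_state
          ("LoggedOut", "login_ok"):   "LoggedIn",
          ("LoggedOut", "login_fail"): "LoggedOut",
          ("LoggedIn",  "logout"):     "LoggedOut",
          ("LoggedIn",  "timeout"):    "LoggedOut",
      }

      # One test case per transition: start state, event, expected end state.
      test_cases = [
          {"id": f"tc_{i+1}", "start": s, "event": e, "expected": nxt}
          for i, ((s, e), nxt) in enumerate(transitions.items())
      ]

      for tc in test_cases:
          print(tc)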
  • Decision Table
    • Explores combinations of input conditions
    • Consists of 2 parts: Condition section and Action section
      • Condition Section - Lists conditions and their combinations
      • Action Section - Lists responses to be produced
    • Exposes errors in specification
    • Columns in decision table are converted to test cases
    • Similar to Condition Coverage used in White Box Testing
    Example decision table (login validation):
      CONDITION                  Value 1   Value 2   Value 3
        Login                    √         √         X
        Password                 X         √         X
      ACTION
        Successful Login         X         √         X
        Unsuccessful Login       √         X         √
        Warning Message          √ (W)     NA        √ (W)
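    • Each column of such a table becomes one test case. The minimal Python sketch below uses the login/password table above; the login function is an invented stand-in for the system under test.
      decision_table = {
          # column id: (conditions, expected actions)
          "Value 1": ({"login_valid": True,  "password_valid": False},
                      {"successful_login": False, "unsuccessful_login": True,  "warning": True}),
          "Value 2": ({"login_valid": True,  "password_valid": True},
                      {"successful_login": True,  "unsuccessful_login": False, "warning": False}),
          "Value 3": ({"login_valid": False, "password_valid": False},
                      {"successful_login": False, "unsuccessful_login": True,  "warning": True}),
      }

      def login(login_valid: bool, password_valid: bool) -> dict:
          """Toy implementation under test (an assumption for illustration)."""
          ok = login_valid and password_valid
          return {"successful_login": ok, "unsuccessful_login": not ok, "warning": not ok}

      for case_id, (conditions, expected) in decision_table.items():
          actual = login(**conditions)
          status = "PASS" if actual == expected else "FAIL"
          print(f"{case_id}: {status}")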
  • Flowchart
    • Description
      • Flow based business logic
      • Generate test cases covering all paths
      • Simple to use
      • For every condition generate test cases using BVA, EP…
  • Use Cases :
    • Simple and Effective method to find errors in Object Oriented applications during Analysis phase.
    • Good start for User Acceptance Testing and Plan.
    • Accurately reflects business requirements.
  • Requirement Based Test Design - Black Box Technique
      • Low Level Techniques
  • Techniques
    • Equivalence partitioning
    • Boundary value analysis
    • Input domain & Output domain
    • Special Value
    • Error based
    • Cause-effect Graph
    • Comparison Testing
  • Low Level Techniques (1) - Equivalence Partitioning
    • Divides the input domain into classes of data for which test cases can be generated.
    • Attempts to uncover classes of errors.
    • Divides the input domain of a program into classes of data.
    • Derives test cases based on these partitions.
    • An equivalence class is a set of valid or invalid states of input.
    • Test case design is based on equivalence classes for an input domain.
    [Diagram: valid and invalid inputs feed the SYSTEM, which produces output]
  • Low Level Techniques (1) - Equivalence Partitioning (Contd..)
    • Useful in reducing the number of Test Cases required.
    • It is very useful when the input/output domain is amenable to partitioning.
    Example: input range (6, 15) — partitions: less than 6 (invalid), between 6 and 15 (valid), more than 15 (invalid); test values: 4, 9, 17.
  • Low Level Techniques (1) - Equivalence Partitioning (Contd..)
    • Here test cases are written to uncover classes of errors for every input condition.
    • Equivalence classes are:-
      • Range
        • Upper bound + 1
        • Lower bound – 1
        • Within bound
      • Value
        • Maximum length + 1
        • Minimum length – 1
        • Valid value and Valid length
        • Invalid value
      • Set
        • In-set
        • Out-of-set
      • Boolean
        • True
        • False
  • Low Level Techniques (1) - Equivalence Partitioning (Contd..)
    • Equivalence partitioning partitions the input data in the mathematical sense of a partition of a set.
    • Partition refers to collection of mutually disjoint subsets whose union is the entire set.
    • Choose one data element from each partitioned set.
    • The KEY is the choice of equivalence relation!
    • EC based testing allows
      • To have a sense of complete testing.
      • Helps avoid redundancy.
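    • A minimal Python sketch of the range example above, choosing one representative value per equivalence class (the classify helper is an illustrative stand-in for the input-validation rule):
      def classify(value: int) -> str:
          """Partition the input domain for a field that accepts 6..15."""
          if value < 6:
              return "invalid_below"
          if value <= 15:
              return "valid"
          return "invalid_above"

      # One representative per equivalence class, as in the example (4, 9, 17).
      representatives = {"invalid_below": 4, "valid": 9, "invalid_above": 17}

      for expected_class, value in representatives.items():
          assert classify(value) == expected_class
          print(f"{value} -> {classify(value)}")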
  • Low Level Techniques (2) - Boundary Value Analysis
    • A Black Box Testing Method
    • Complements to Equivalence partition
    • BVA leads to a selection of test cases that exercise bounding values
    • Design test cases test
      • Min values of an input
      • Max values of an input
      • Just above and below input range
    Example: input range (6, 15) — boundary test values: 5, 6, 7, 15, 16.
  • Low Level Techniques (2) - Boundary Value Analysis
    • Helps to write test cases that exercise bounding values.
    • Complements Equivalence Partitioning.
    • Guidelines are similar to Equivalence Partitioning.
    • Two types of BVA:
      • Range
        • Above and below Range
      • Value
        • Above and below min and max number
  • Low Level Techniques (2) - Boundary Value Analysis
    • Boundary Value Analysis
      • Large number of errors tend to occur at boundaries of the input domain.
      • BVA leads to selection of test cases that exercise boundary values.
      • BVA complements Equivalence Partitioning.
      • Rather than select any element in an equivalence class, select those at the “edge” of the class.
  • Low Level Techniques (2) - Boundary Value Analysis
    • Examples :
      • For a range of values bounded by ‘a’ and ‘b’, test (a-1), a, (a+1), (b-1), b, (b+1).
      • If input conditions specify a number of values ‘n’, test with (n-1), n and (n+1) input values.
      • Apply 1 and 2 to output conditions (e.g., generate table of minimum and maximum size).
      • If internal program data structures have boundaries (e.g., buffer size, table limits), use input data to exercise structures on boundaries.
  • Low Level Techniques (2) - Boundary Value Analysis
    • For Two Variables
      • a < = x1 < = b
      • c < = x2 < = d
    • For each variable
      • Minimum -1
      • Minimum
      • Minimum +1
      • Nominal/mid
      • Maximum -1
      • Maximum
      • Maximum +1
    • Take Cartesian product of these sets
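    • A minimal Python sketch of this procedure, assuming two illustrative ranges (6..15 and 1..5) that are not taken from the material above:
      from itertools import product

      def bva_values(lo: int, hi: int) -> list[int]:
          """min-1, min, min+1, a nominal value, max-1, max, max+1 for one variable."""
          mid = (lo + hi) // 2
          return [lo - 1, lo, lo + 1, mid, hi - 1, hi, hi + 1]

      x1_values = bva_values(6, 15)   # first input variable, a <= x1 <= b
      x2_values = bva_values(1, 5)    # second input variable, c <= x2 <= d

      test_cases = list(product(x1_values, x2_values))   # Cartesian product
      print(len(test_cases))          # 7 * 7 = 49 candidate test cases
      print(test_cases[:3])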
  • Low Level Techniques (3) - Input/Output Domain Testing
    • Description
      • From input side generate inputs to map to outputs.
      • Ensure that you have generated all possible inputs by looking from the output side.
    [Diagram: inputs mapped to outputs]
  • Low Level Techniques (4) - Special Value Testing
    • Select test data on the basis of features of a function to be computed.
    • Tester uses her / his domain knowledge, experience with similar programs.
    • Ad-hoc / seat-of-pants / skirt testing.
    • No guidelines, use best engineering judgment.
    • Special test cases / Error guessing.
    • Is useful – don’t discount effectiveness!
  • Low Level Techniques (5) - Error based Testing
    • Generate test cases based on
      • Programmer histories
      • Program complexity
      • Knowledge of error-prone syntactic constructs
    • Guess errors based on data type
  • Low Level Techniques (6) - Cause Effect Graphing Techniques
    • Cause Effect Graphing Techniques
      • Translation of natural language descriptions of procedures to software based algorithms is error prone.
    • Uncovers errors by representing algorithm as a cause-effect graph representing logical combinations and corresponding actions.
  • Low Level Techniques (6) - Cause Effect Graphing Techniques
    • Cause Effect Graphing Techniques
      • How do you test code which attempts to implement this?
      • Cause-effect graphing attempts to provide a concise representation of logical combinations and corresponding actions.
      • Causes (input conditions) and effects (actions) are listed for a module and an identifier is assigned to each.
    • Steps:
      • A cause-effect graph developed.
      • Graph converted to a decision table.
      • Decision table rules are converted to test cases.
  • Low Level Techniques (7) - Comparison Testing
    • Helps to check performance of the software under different hardware and software configurations.
    • Two variants of comparison testing are:
      • Develop independent versions of the software with separate teams and test each with the same data.
      • Run the versions in parallel and compare the results.
  • Low Level Techniques (7)- Comparison Testing
    • Comparison Testing
      • In some applications, reliability is critical.
      • Redundant hardware and software may be used.
      • For redundant s/w, use separate teams to test the software.
      • Test with same test data to ensure all provide identical output.
      • Run the software in parallel with a real-time comparison of results.
      • Method does not catch errors in the specification.
  • GUI Testing
  • Windows Compliance Standards
    • Windows resize options
      • Maximize, minimize and close options should be available.
    • Using TAB
      • Should move the focus (cursor) from left to right and top to bottom in the window.
    • Using SHIFT+TAB
      • Should move the focus (cursor) from right to left and bottom to top.
    • Text
      • Should be left-justified.
  • Windows Compliance Standards (Contd..)
    • Edit Box
      • You should be able to enter data.
      • Try to overflow the text; input should stop after the specified number of characters.
      • Try entering invalid characters - they should not be allowed.
    • Radio Buttons
      • Left and right arrows should move ‘ON’ selection. So should UP and DOWN.
      • Select with the mouse by clicking.
    • Check Boxes
      • Clicking with the mouse on the box or on the text should SET/UNSET the box.
      • Space should do the same.
  • Windows Compliance Standards (Contd..)
    • Command Buttons
      • Should have shortcut keys (except OK and Cancel buttons).
      • Click each button with the mouse - should activate.
      • TAB to each button & press Space/Enter - should activate.
    • Drop Down List
      • Pressing the arrow should give list of options.
      • Pressing a letter should bring you to the first item in the list with that start letter.
      • Pressing Ctrl+F4 should open/drop down the list box.
  • Windows Compliance Standards (Contd..)
    • Combo Boxes
      • Should allow text to be entered.
      • Clicking the arrow should allow user to choose from the list
    • List Boxes
      • Should allow a single selection to be chosen by clicking with the mouse or using the Up and Down arrows.
      • Pressing a letter should bring you to the first item in the list with that start letter.
  • Screen Validation Standards
    • Aesthetic Conditions
      • The general screen background should be of correct colour (company standards,….).
      • The field prompts and backgrounds should be of correct colour.
      • The text in all the fields should be of the same font.
      • All the field prompts, group boxes and edit boxes should be aligned perfectly.
      • Microhelp should be available and spelt correctly.
      • All dialog boxes and windows should have a consistent look and feel.
  • Screen Validation Standards (Contd..)
    • Validation Conditions
      • Failure of validation on every field should cause a user error message.
      • If any field has multiple validation rules, all of them should be applied.
      • If the user enters an invalid value and clicks on the OK button, the invalid entry should be identified and highlighted.
      • It should be possible to enter negative numbers in numeric fields.
      • Should allow the minimum, maximum and mid range values in numeric fields.
      • All mandatory fields should require user input.
  • Screen Validation Standards (Contd..)
    • Navigation Conditions
      • The screen should be accessible correctly from the menu and toolbar.
      • All screens accessible through buttons on this screen should be accessed correctly.
      • The user should not be prevented from accessing other functions when this screen is active.
      • It should not be possible to open multiple instances of the same screen at the same time.
  • Screen Validation Standards (Contd..)
    • Usability Conditions
      • All the dropdowns should be sorted alphabetically (unless specified).
      • All pushbuttons should have appropriate shortcut keys and should work properly.
      • All read-only and disabled fields should be skipped in the TAB sequence.
      • It should not be possible to edit microhelp text.
      • The cursor should be positioned in the first input field or control when opened.
      • When an error message occurs, the focus should return to the field in error after cancelling it.
      • Alt+Tab should not have any impact on the screen upon return.
  • Screen Validation Standards (Contd..)
    • Data Integrity Conditions
      • The data should be saved when the window is closed by double clicking on the close box.
      • Characters should not be truncated.
      • Maximum and minimum field values for numeric fields should be verified.
      • Negative values should be stored and accessed from the database correctly.
  • Screen Validation Standards (Contd..)
    • Modes (Editable, Read-only) conditions
      • The screen and field colours should be adjusted correctly for read-only mode.
      • Is the read only field necessary for this screen?
      • All fields and controls should be disabled in read-only mode.
      • No validation is performed in read-only mode.
  • Screen Validation Standards (Contd..)
    • General Conditions
      • “ Help” menu should exist.
      • All buttons on all tool bars should have corresponding key commands.
      • Abbreviations should not be used in drop down lists.
      • Duplicate hot keys/shortcut keys should not exist.
      • Escape key and cancel button should cancel (close) the application.
      • OK and Cancel buttons should be grouped separately.
      • Command button names should not be abbreviations.
  • Screen Validation Standards (Contd..)
    • General Conditions (Contd..)
      • Field labels/names should not be technical labels, they should be meaningful to system users.
      • All command buttons should be of similar size, shape, font and font size.
      • Option boxes, option buttons and command buttons should be logically grouped.
      • Mouse action should be consistent through out the screen.
      • Red colour should not be used to highlight active objects (many individuals are red-green colour blind).
      • Screen/Window should not have cluttered appearance.
      • Alt+F4 should close the window/application.
  • Bug Life Cycle
  • What is a Bug?
    • Bug
      • A fault in a program which causes the program to perform in an unintended or unanticipated manner, or a deviation from the requirement specification or the design specification, is referred to as a bug.
  • What is a Bug Life Cycle? — [Diagram: bug states Submitted, In-Work, Solved, Validated (Yes / No), Terminated, Deferred]
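    • The life cycle can be expressed as an allowed-transition table; the minimal Python sketch below infers its transitions from the state names in the diagram, so they should be treated as assumptions.
      ALLOWED = {
          "Submitted":  {"In-Work", "Deferred", "Terminated"},
          "In-Work":    {"Solved"},
          "Solved":     {"Validated"},
          "Validated":  {"Terminated", "In-Work"},   # validation passed / failed
          "Deferred":   {"In-Work"},
          "Terminated": set(),
      }

      def move(bug: dict, new_state: str) -> dict:
          """Transition a bug to a new state, rejecting illegal moves."""
          if new_state not in ALLOWED[bug["state"]]:
              raise ValueError(f"Illegal transition {bug['state']} -> {new_state}")
          bug["state"] = new_state
          return bug

      bug = {"id": "BUG-1", "state": "Submitted"}
      for state in ("In-Work", "Solved", "Validated", "Terminated"):
          move(bug, state)
          print(bug)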
  • Classification of Bugs
    • Two attributes are used whenever a Bug/Defect is detected
    • Severity ( Severity is Technical)
      • Critical
      • Serious
      • Minor
    • Priority ( Priority is Business)
      • High
      • Medium
      • Low
  • Reporting/Logging a Bug/Defect
    • A Bug/Defect is reported with the following details
      • Summary
      • Description
      • How to reproduce
      • Version
      • Module
      • Phase
      • Browser
      • Environment
      • Modified Date
  • Reporting/Logging a Bug/Defect (Contd..)
    • A Bug/Defect is reported with the following details
      • Job assigned to
      • Severity
      • Priority
      • Tester’s name
      • Status
      • Database
      • Type of defect
      • Reproducible
      • Attachments