Venkat Alagarsamy
 venkat.alagarsamy@gmail.com




                                          https://www.linkedin.com/in/VenkatAlagarsamy
                                              https://www.scribd.com/VenkatAlagarsamy
                               https://www.facebook.com/Venkatachalapathi.Alagarsamy
Last Updated: 7th March 2009
   Testing is a process used to help identify the
    correctness, completeness and quality of
    developed computer software.

   One definition of testing is "the process of
    questioning a product in order to evaluate it",
    where the "questions" are things the tester
    tries to do with the product, and the product
    answers with its behavior in reaction to the
    probing of the tester.
Aims/Objectives of Testing
   To uncover the maximum number of bugs or errors.

   To increase the quality of the software product.

   To ensure user-friendliness.
Testing Start Process
   Testing is sometimes incorrectly thought of as an after-the-fact
    activity, performed after programming is done for a product.
    Instead, testing should be performed at every development stage
    of the product.

   If we divide the lifecycle of software development into
    “Requirements Analysis”, “Design”, “Programming/Construction”
    and “Operation and Maintenance”, then testing should accompany
    each of these phases.

   If testing is isolated as a single phase late in the lifecycle,
    defects are discovered late, when they are the most expensive
    to correct.
Testing Activities in Each Phase
   Requirements Analysis
    - Determine correctness
    - Generate functional test data.

   Design
    - Determine correctness and consistency
    - Generate structural and functional test data.

   Programming/Construction
    - Determine correctness and consistency
    - Generate structural and functional test data
    - Apply test data
    - Refine test data

   Operation and Maintenance
    - Retest
Testing Stop Process
   Many modern software applications are so complex, and run in
    such an interdependent environment, that complete testing can
    never be done. "When to stop testing" is one of the most
    difficult questions for a test engineer.
   Common factors in deciding when to stop are:
   Deadlines (release deadlines, testing deadlines)
   Test cases completed with certain percentages
    passed
   Coverage of code/functionality/requirements
    reaches a specified point
Risk Analysis


A Risk is a potential for loss or damage to an
  Organization from materialized threats. A
  threat is a possible damaging event.

Risk Analysis attempts to identify all the risks and then quantify
  their severity. A threat that materializes exploits a vulnerability
  in the security of a computer-based system.
Test Metrics

   Test metrics help analyze the current level of maturity in
    testing and guide future testing activities by allowing us to
    set goals and predict future trends.

   Types of Metrics
    1. Process Metrics
    2. Product Metrics
    3. Project Metrics
   Product metrics describe characteristics of the product such as
    its size, complexity, features, and performance. Several common
    product metrics are mean time to failure, defect density,
    customer problems, and customer satisfaction metrics (a small
    defect-density example follows this slide).

   Many Organizations take measurements or metrics
    because they have the capability to measure, rather
    than determining why they need the information.
    Unfortunately, measurement for the sake of a number or
    statistic rarely makes a process better, faster, or
    cheaper.

   The first application of project metrics on most software
    projects occurs during estimation. Metrics collected from past
    projects are used as a basis for estimating the effort and time
    required for current work.
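
   As a hedged, worked illustration of one product metric: defect density is
   commonly computed as the number of defects found divided by the size of the
   code in KLOC (thousands of lines of code). The figures below are invented
   purely for illustration.

      # Defect density = defects found / size in KLOC.
      # The numbers are hypothetical, not from any real project.
      defects_found = 46        # defects logged against this release
      size_kloc = 18.4          # size of the release in thousands of lines of code

      defect_density = defects_found / size_kloc
      print(f"Defect density: {defect_density:.2f} defects per KLOC")  # ~2.50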
Static Testing
   It is generally not detailed testing, but checks mainly for the
    sanity of the code, algorithm, or document. It is primarily
    syntax checking of the code and manual reading of the code or
    document to find errors. This type of testing can be used by the
    developer who wrote the code, in isolation. Code reviews,
    inspections and walkthroughs are also used.

   From the black box testing point of view, static testing
    involves review of requirements or specifications. This is done
    with an eye toward completeness or appropriateness for the task
    at hand.
Code Review


It is a systematic examination (often as peer review) of computer
    source code, intended to find and fix mistakes overlooked in the
    initial development phase.

The purpose of code review is to improve both the overall quality
  of the software and the developers' skills.
Inspection

   Inspection, in software engineering, refers to peer review of
    any work product by trained individuals who look for defects
    using a well-defined process.
   The goal of the inspection is for all of the inspectors
    to reach consensus on a work product and approve
    it for use in the project.
   In an inspection, a work product is selected for
    review and a team is gathered for an inspection
    meeting to review the work product.
   A moderator is chosen to moderate the meeting.
    Each inspector prepares for the meeting by reading
    the work product and noting each defect.
Walkthrough
A walkthrough (or walk-through) is a form of software peer review
  in which
   a designer or
   programmer leads
   members of the development team and
   other interested parties
through a software product, and the participants
   ask questions and
   make comments
about
   possible errors,
   violations of development standards, and
   other problems.
Dynamic testing
   Dynamic Testing involves executing the software, giving it input
    values and checking whether the output is as expected. These are
    the Validation activities.

   Dynamic testing (or dynamic analysis) is a term
    used in software engineering to describe the
    testing of the dynamic behavior of code.
    • It is the examination of the physical response from the
      system to variables that are not constant and change
      with time.
    • The software must actually be compiled and run; this
      is in contrast to static testing.

   Dynamic testing methodologies:
     • Unit Testing
     • Integration Testing
     • System Testing
     • Acceptance Testing
Black box testing
 Black Box Testing treats the software as a black
  box without any knowledge of internal
  implementation.
 These tests can be functional or non-functional,
  though usually functional.
 The test designer selects valid and invalid input and determines
  the correct output (a minimal sketch follows this slide).
 This method of test design is applicable to all
  levels of software testing:
     • unit
     • integration
     • functional
     • system
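
   A minimal black box test design sketch in Python, assuming a hypothetical
   requirement that a discount() function returns 10% off for order totals of
   100 or more and no discount otherwise. The test cases are chosen from that
   specification alone, without looking at the implementation.

      import unittest

      # Hypothetical function under test; only its specification matters here.
      def discount(order_total):
          return order_total * 0.10 if order_total >= 100 else 0.0

      class DiscountBlackBoxTests(unittest.TestCase):
          def test_below_threshold_gets_no_discount(self):
              self.assertEqual(discount(99.99), 0.0)

          def test_at_threshold_gets_ten_percent(self):
              self.assertAlmostEqual(discount(100), 10.0)

          def test_negative_total_is_treated_as_invalid_input(self):
              # An invalid input chosen by the test designer; expecting 0.0 is
              # an assumption about how the specification handles it.
              self.assertEqual(discount(-5), 0.0)

      if __name__ == "__main__":
          unittest.main()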
Black Box Testing Strategy
   To implement the Black Box Testing Strategy, the tester needs to
    be thorough with the requirement specifications of the system
    and, as a user, should know how the system should behave in
    response to a particular action.
   Functional testing covers how well the
    system executes the functions it is
    supposed to execute—including user
    commands, data manipulation, searches
    and business processes, user screens,
Testing Methods where the User is NOT Required
   Functional Testing
    In this type of testing, the software is tested for
    the functional requirements. The tests are
    written in order to check if the application
    behaves as expected.

   Stress Testing
    The application is tested against heavy load, such as complex
    numerical values, a large number of inputs, a large number of
    queries, etc., to check how much stress/load the application
    can withstand.

   Load Testing
    The application is tested against heavy loads or inputs, for
    example a web site under many concurrent users, to find the
    point at which it fails or its performance degrades (a minimal
    load-generation sketch follows this slide).
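
   A minimal load-generation sketch, assuming a hypothetical local endpoint
   http://localhost:8080/health. It simply fires many concurrent requests and
   reports how many succeed; a real load test would use a dedicated tool and
   far more realistic traffic.

      import concurrent.futures
      import urllib.request

      URL = "http://localhost:8080/health"   # hypothetical endpoint under test
      REQUESTS = 200                         # illustrative load only

      def hit(url):
          # True if the endpoint answers with HTTP 200 within 5 seconds.
          try:
              with urllib.request.urlopen(url, timeout=5) as resp:
                  return resp.status == 200
          except OSError:
              return False

      if __name__ == "__main__":
          with concurrent.futures.ThreadPoolExecutor(max_workers=20) as pool:
              results = list(pool.map(hit, [URL] * REQUESTS))
          print(f"{sum(results)}/{REQUESTS} requests succeeded under load")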
   Ad-hoc Testing
    This type of testing is done without any formal Test Plan or
    Test Case creation. Ad-hoc testing helps in deciding the scope
    and duration of the other types of testing, and it also helps
    testers learn the application before starting any other testing.

   Exploratory Testing
    This testing is similar to ad-hoc testing and is done in order
    to learn/explore the application.

   Usability Testing
    This testing is also called 'Testing for User-Friendliness'. It
    is done when the user interface of the application is an
    important consideration.
   Smoke Testing
    This type of testing is also called sanity testing and is done
    in order to check whether the application is ready for further
    major testing and is working properly, without failing below
    the minimum expected level.

   Recovery Testing
    Recovery testing is done in order to check how quickly and how
    well the application can recover from any type of crash or
    hardware failure. The type or extent of recovery is specified
    in the requirement specifications.

   Volume Testing
Testing Methods where the User plays a Role (User Required)
   User Acceptance Testing
    In this type of testing, the software is handed over to
    the user in order to find out if the software meets the
    user expectations and works as it is expected to.

   Alpha Testing
    In this type of testing, users are invited to the development
    center, where they use the application while the developers note
    every input or action carried out by them. Any abnormal behavior
    of the system is noted and rectified by the developers.

   Beta Testing
Advantages of Black Box Testing
   more effective on larger units of code.

   tester needs no knowledge of implementation, including specific
    programming languages

   tester and programmer are independent of each
    other

   tests are done from a user's point of view

   will help to expose any ambiguities          or
    inconsistencies in the specifications
Disadvantages of Black Box Testing
   only a small number of possible inputs can actually be tested;
    to test every possible input stream would take nearly forever

   without clear and concise specifications, test
    cases are hard to design

   there may be unnecessary repetition of test
    inputs if the tester is not informed of test
    cases the programmer has already tried

   may leave many program paths untested
Characteristics of Black Box Testing
   People: Who does the testing?
    Some people know how software works
    (developers) and others just use it (users).
    Accordingly, any testing by users or other non
    developers is sometimes called “black box”
    testing. Developer testing is called “white box”
    testing. The distinction here is based on what
    the person knows or can understand.

   Coverage: What is tested?
     Code-based coverage and requirements-based coverage are the two
     most commonly used coverage criteria. Both are supported by
     extensive literature and commercial tools. Requirements-based
     testing could be called “black box” because it makes sure that
     all the customer requirements have been verified.
   Risks: Why are you testing?
    Sometimes testing is targeted at particular risks.
    Boundary testing and other attack-based
    techniques are targeted at common coding
    errors. Effective security testing also requires a
    detailed understanding of the code and the
    system architecture. Thus, these techniques
    might be classified as “white box.”
   Activities: How do you test?
    A common distinction is made between
    behavioral test design, which defines tests
    based on functional requirements, and
    structural test design, which defines tests based
    on the code itself. These are two design
    approaches. Since behavioral testing is based
    on external functional definition, it is often called
    “black box,” while structural testing—based on
    the code internals—is called “white box.”
    Indeed, this is probably the most commonly
    cited definition for black box and white box
    testing.
   Evaluation: How do you know if you've found a bug?
     There are certain kinds of software faults that don't always
     lead to obvious failures. They may be masked by fault tolerance
     or simply luck. Memory leaks and wild pointers are examples.
     Certain test techniques seek to make these kinds of problems
     more visible. Related techniques capture code history and stack
     information when faults occur, helping with diagnosis.
     Assertions are another technique for helping to make problems
     more visible (a minimal sketch follows this slide). All of these
     techniques rely on knowledge of the code internals and so are
     usually grouped with white box testing.
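
   A minimal sketch of using an assertion to surface a fault that would
   otherwise stay hidden; the bank-balance example and its rule are purely
   hypothetical.

      def withdraw(balance, amount):
          new_balance = balance - amount
          # The assertion makes a silent fault visible: a negative balance
          # would otherwise propagate quietly into later calculations.
          assert new_balance >= 0, f"overdraw not allowed: {new_balance}"
          return new_balance

      print(withdraw(100, 30))    # 70
      # withdraw(100, 130)        # would raise AssertionError rather than hide the fault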
White Box Testing
   White box testing is a security testing method
    that can be used to validate whether code
    implementation follows intended design, to
    validate implemented security functionality, and
    to uncover exploitable vulnerabilities. White box
    testing, by contrast to black box testing, is when
     the tester has access to the internal data structures and
     algorithms.

   The purpose of any security testing method is to ensure the
    robustness of a system in the face of malicious attacks or
    regular software failures. White box testing is performed based
    on knowledge of how the system is implemented.
 White box testing requires access to the source
  code. Though white box testing can be
  performed any time in the life cycle after the
  code is developed, it is a good practice to
  perform white box testing during the unit testing
  phase. White box testing requires knowing what
  makes software secure or insecure.
  The first step in white box testing is to comprehend and analyze
   source code, so knowing what makes software secure is a
   fundamental requirement. Second, to create tests that exploit
   software, a tester must think like an attacker. Third, to perform
   testing effectively, testers need to know the different tools and
   techniques available for white box testing.
Types of White Box Testing

    Code coverage - creating tests to satisfy some criteria of code
     coverage. For example, the test designer can create tests to
     cause all statements in the program to be executed at least
     once (see the sketch after this list).

   Mutation testing methods - a kind of testing in which small
     changes (mutations) are deliberately introduced into the code
     and the existing tests are run again; if the tests still pass,
     they are not strong enough to detect the change, which shows
     where the test suite and the coding strategy need to be
     strengthened.

   Fault injection methods - fault injection is a technique in
     which faults are deliberately introduced into the code or its
     environment in order to observe how the software behaves and
     to exercise its error-handling paths.
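
   A hedged sketch of coverage-driven test design for a small hypothetical
   function: two test cases are enough to execute every statement and both
   branches at least once (a tool such as coverage.py can confirm the result).

      import unittest

      # Hypothetical function under test with one decision and two branches.
      def classify(age):
          if age < 18:
              return "minor"
          return "adult"

      class ClassifyCoverageTests(unittest.TestCase):
          # Together these cases execute all statements and both branches.
          def test_minor_branch(self):
              self.assertEqual(classify(10), "minor")

          def test_adult_branch(self):
              self.assertEqual(classify(30), "adult")

      if __name__ == "__main__":
          unittest.main()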
Results to Expect
   Any security testing method aims to ensure that
    the software under test meets the security goals
    of the system and is robust and resistant to
    malicious attacks. Security testing involves
    taking two diverse approaches: one, testing
    security mechanisms to ensure that their
    functionality is properly implemented; and
     two, performing risk-based security testing motivated by
     understanding and simulating the attacker's approach.

   Some examples of errors uncovered include
    • data inputs compromising security
Benefits of White Box Testing
   Analyzing source code and developing tests based
    on the implementation details enables testers to
    find programming errors quickly.

   Validating design decisions and assumptions
    quickly through white box testing increases
    effectiveness. The design specification may outline
    a secure design, but the implementation may not
    exactly capture the design intent.

   Finding "unintended" features can be quicker during white box
     testing. Security testing is not just about finding
     vulnerabilities in the intended functionality of the software,
     but also about examining unintended behavior introduced during
     implementation.
Unit Testing
   In computer programming, unit testing is a
    procedure used to validate that individual units of
    source code are working properly. A unit is the
    smallest testable part of an application.

   In procedural programming a unit may be an
    individual program, function, procedure, etc.,

   while in object-oriented programming, the smallest
    unit is a method; which may belong to a base/super
    class, abstract class or derived/child class.

   The goal of unit testing is to isolate each part of the
     program and show that the individual parts are correct. A unit
     test provides a strict, written contract that the piece of code
     must satisfy. As a result, it affords several benefits. Unit
     tests find problems early in the development cycle (a minimal
     example follows this slide).
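
   A minimal unit test sketch using Python's standard unittest module; the
   word_count() function stands in for a hypothetical unit, the smallest
   testable part of an application.

      import unittest

      # Hypothetical unit under test, isolated from the rest of the program.
      def word_count(text):
          return len(text.split())

      class WordCountTests(unittest.TestCase):
          def test_empty_string_has_zero_words(self):
              self.assertEqual(word_count(""), 0)

          def test_repeated_spaces_are_ignored(self):
              self.assertEqual(word_count("unit  testing   works"), 3)

      if __name__ == "__main__":
          unittest.main()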
Limitations of Unit Testing
   Testing, in general, cannot be expected to catch every
    error in the program. The same is true for unit testing.
    By definition, it only tests the functionality of the units
    themselves. Therefore, it may not catch integration
    errors, performance problems, or other system-wide
    issues.

   To obtain the intended benefits from unit testing, a
    rigorous sense of discipline is needed throughout the
    software development process. It is essential to keep
    careful records, not only of the tests that have been
    performed, but also of all changes that have been made
    to the source code of this or any other unit in the
    software.

   It is also essential to implement a sustainable process for
     ensuring that test case failures are reviewed daily and
     addressed immediately.
Integration Testing
   Integration testing is the phase of software testing in
    which individual software modules are combined and
    tested as a group.

    Integration testing takes as its input modules that have
    been unit tested, groups them in larger aggregates,
    applies tests defined in an integration test plan to those
    aggregates, and delivers as its output the integrated
    system ready for system testing.

   The purpose of integration testing is to verify functional,
    performance and reliability requirements placed on
    major design items. These "design items", i.e.
     assemblages (or groups of units), are exercised through their
     interfaces using black box testing, with success and error
     cases simulated via appropriate parameter and data inputs.
     Simulated usage of shared data areas and inter-process
     communication is tested, and individual subsystems are
     exercised through their input interfaces (a minimal sketch
     follows this slide).
   System testing of software or hardware is
    testing conducted on a complete, integrated
    system to evaluate the system's compliance
    with its specified requirements.

   As a rule, system testing takes, as its input, all
    of the "integrated" software components that
    have successfully passed integration testing
    and also the software system itself integrated
     with any applicable hardware system(s). The purpose of
     integration testing is to detect any inconsistencies between
     the software units that are integrated together (called
     assemblages) or between any of the assemblages and the
     hardware.
Types of System Testing

Error Handling Testing - Error handling refers to
  the anticipation, detection, and resolution of
  programming, application, and communications
  errors. Specialized programs, called error
  handlers, are available for some applications.
  The best programs of this type forestall errors if
  possible, recover from them when they occur
  without terminating the application, or (if all else
  fails) gracefully terminate an affected application
  and save the error information to a log file.
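
   A minimal sketch of the error-handler behavior described above: recover
   with a fallback value where possible and write the error to a log file
   instead of terminating; the file name and function are hypothetical.

      import logging

      # Errors are recorded in a log file instead of crashing the application.
      logging.basicConfig(filename="app_errors.log", level=logging.ERROR)

      def safe_divide(a, b):
          try:
              return a / b
          except ZeroDivisionError:
              # Recover with a fallback value and log the error for analysis.
              logging.exception("division failed for a=%s, b=%s", a, b)
              return 0.0

      print(safe_divide(10, 2))   # 5.0
      print(safe_divide(10, 0))   # 0.0, with the error written to app_errors.log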
Volume Testing belongs to the group of non-
  functional tests, which are often misunderstood
  and/or used interchangeably. Volume testing
  refers to testing a software application for a
  certain data volume. This volume can in generic
  terms be the database size or it could also be
  the size of an interface file that is the subject of
  volume testing. For example, if you want to
  volume test your application with a specific
  database size, you will explode your database
  to that size and then test the application's
   performance on it. Another example could be when there is a
   requirement for your application to interact with an interface
   file (this could be any file of the required size); the
   application is then tested against that file to check its
   behavior and performance.
   Performance testing is the process of determining the speed or
    effectiveness of a computer, network, software program or
    device. This process can involve quantitative tests done in a
    lab, such as measuring the response time or the number of MIPS
    (millions of instructions per second) at which a system
    functions (a minimal timing sketch follows this slide).

   Performance testing can also refer to the assessment of the
    performance of a human examinee. For example, a behind-the-wheel
    driving test is a performance test of whether a person is able
    to perform the functions of a driver.
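
   A minimal response-time measurement sketch using Python's time.perf_counter;
   the operation being timed is a stand-in for whatever system call or
   transaction is actually under test.

      import statistics
      import time

      def operation_under_test():
          # Stand-in for the real operation whose response time is measured.
          return sum(range(100_000))

      samples = []
      for _ in range(50):
          start = time.perf_counter()
          operation_under_test()
          samples.append((time.perf_counter() - start) * 1000)  # milliseconds

      print(f"mean response time: {statistics.mean(samples):.3f} ms")
      print(f"95th percentile:    {sorted(samples)[int(len(samples) * 0.95)]:.3f} ms")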
User Acceptance Testing
   User Acceptance Testing (UAT) is a process to obtain
    confirmation by a Subject Matter Expert (SME), preferably
    the owner or client of the object under test, through trial or
    review, that the modification or addition meets mutually
    agreed-upon requirements. In software development, UAT
    is one of the final stages of a project and often occurs
    before a client or customer accepts the new system.

   Users of the system perform these tests, which
    developers derive from the client's contract or the user
    requirements specification.

   These tests, which are usually performed by clients or
     end-users, are not usually focused on identifying simple
     problems such as spelling errors and cosmetic issues, nor on
     showstopper defects such as software crashes; such issues
     should have been found during the earlier unit, integration
     and system testing phases.
Regression Testing
   Regression testing is any type of software testing which seeks
     to uncover regression bugs. Regression bugs occur whenever
     software functionality that previously worked as desired stops
     working, or no longer works in the way that was previously
     planned. Typically regression bugs occur as an unintended
     consequence of program changes (a minimal example follows this
     slide).

   Regression testing can be used not only for
    testing the correctness of a program, but it is
    also often used to track the quality of its output.
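
   A hedged sketch of a regression test: the test was added when a hypothetical
   rounding defect was fixed and stays in the suite so that any later change
   reintroducing the old behavior fails immediately.

      import unittest

      def price_with_tax(price, rate=0.18):
          # Fixed behavior: results are rounded (not truncated) to 2 decimals.
          return round(price * (1 + rate), 2)

      class PricingRegressionTests(unittest.TestCase):
          def test_rounding_defect_stays_fixed(self):
              # Guards a previously fixed defect: 19.99 * 1.18 = 23.5882,
              # which must round to 23.59 rather than truncate to 23.58.
              self.assertEqual(price_with_tax(19.99), 23.59)

      if __name__ == "__main__":
          unittest.main()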
Grey Box Testing

Gray box testing is a software testing technique
  that uses a combination of black box testing and
  white box testing. Gray box testing is not black
  box testing, because the tester does know some
  of the internal workings of the software under
  test. In gray box testing, the tester applies a
  limited number of test cases to the internal
  workings of the software under test. In the
  remaining part of the gray box testing, one takes
  a black box approach in applying inputs to the
  software under test and observing the outputs.
Test Cases
   In software engineering, the most common
    definition of a test case is a set of conditions or
    variables under which a tester will determine if a
    requirement or use case upon an application is
    partially or fully satisfied.

   In order to fully test that all the requirements of an
    application are met, there must be at least one
    test case for each requirement unless a
    requirement has sub-requirements.

   What characterizes a formal, written test case is that there is
     a known input and an expected output, which is worked out
     before the test is executed. The known input should test a
     precondition, and the expected output should test a
     postcondition (a minimal sketch follows below).
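
   A minimal written test case sketch, captured as data with the expected
   output worked out before execution; the login scenario, identifiers and
   field names are all hypothetical.

      # The expected output is decided in advance, never derived from whatever
      # the system happens to do when the test is run.
      test_case = {
          "id": "TC-042",                      # hypothetical identifier
          "requirement": "REQ-LOGIN-001",      # requirement it traces back to
          "precondition": "user 'alice' exists with password 'secret'",
          "input": {"username": "alice", "password": "wrong-password"},
          "expected_output": "login rejected with an 'invalid credentials' message",
      }

      for field, value in test_case.items():
          print(f"{field:16}: {value}")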
