Inderdeep Kaur and Rupinder Kaur
Department Of Computer Science & Engineering
Jaswinder Kaur (Department Of Information Technology)
Institute of Engineering & Technology, Bhaddal (Ropar) Punjab
Abstract: Testing involves operation of a system or application under controlled conditions and evaluating the results. The controlled conditions should include both normal and abnormal conditions. Testing should intentionally attempt to make things go wrong, to determine whether things happen when they shouldn't or fail to happen when they should. Software testing is a critical element of software quality assurance and represents the ultimate review of specification, design and coding. Theoretically, a newly designed system should have all the pieces in working order, but in reality each piece works independently. During testing, the system is used experimentally to ensure that it meets its specifications and behaves in the way the user expects. The purpose of testing is to consider all the likely variations to which the system will be subjected and then push it to its limits.
1. Introduction: Software Testing is the act of confirming that software design specifications have been effectively fulfilled and of attempting to find software faults during execution of the software. In other words, "Testing is a process of gathering information by making observations and comparing them to expectations." Preliminary testing was done using self-generated dummy (fake) data for individual activities. After identifying and removing errors, the system was tested for all activities together, first with dummy data and then with test data.

2. Levels of Testing: To complete the whole process of testing, various levels of testing are employed.

Unit Testing

Unit testing is the testing of a single module in an isolated environment. Different modules are tested against the specifications of their design. This is essentially a type of verification of the code written for them. Thus the goal is to test the internal logic used while coding.

Integration Testing

Many tested modules are combined into one subsystem, which is then tested together. The goal here is to see whether the modules can be integrated properly. Integration testing is concerned with the interfaces among system modules: it ensures that the data moving between the modules is handled properly.
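As an illustration of these two levels, the following is a minimal sketch in Python using the standard unittest module; the functions discounted_price and cart_total are hypothetical stand-ins, not modules from the original system.

    import unittest

    # Hypothetical module under test: computes a discounted price.
    def discounted_price(price, discount_pct):
        if not 0 <= discount_pct <= 100:
            raise ValueError("discount_pct must be between 0 and 100")
        return price * (100 - discount_pct) / 100

    # Hypothetical second module: totals a cart using discounted_price.
    def cart_total(items):
        return sum(discounted_price(p, d) for p, d in items)

    class UnitTests(unittest.TestCase):
        # Unit tests: exercise one module in isolation.
        def test_discount_applied(self):
            self.assertEqual(discounted_price(200, 25), 150)

        def test_invalid_discount_rejected(self):
            with self.assertRaises(ValueError):
                discounted_price(200, 120)

    class IntegrationTests(unittest.TestCase):
        # Integration test: checks the data passed between the two modules.
        def test_cart_uses_discounts(self):
            self.assertEqual(cart_total([(100, 0), (200, 50)]), 200)

    if __name__ == "__main__":
        unittest.main()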
System Testing

System testing is the testing of the system against its initial objectives. In this testing, the software and the other system elements are tested as a whole, either in a simulated environment or in the real environment.

Acceptance Testing

This is performed with the realistic data of the client to demonstrate whether the software is working satisfactorily. Testing here focuses on the external behavior of the system; the internal logic of the system is not emphasized.

Fig1: Levels of Testing
Test Review

Test Review is the process which ensures that testing is carried out as planned. Test reviews decide whether or not the software is ready for release. The system was extensively tested under all relevant conditions to ensure that all packages and database functions behave in the manner expected of them and give accurate results. This process helped in rectifying or modifying the modules.

3. Software testing process: The following is the software testing process.

Requirements Gathering phase:
Verify that the requirements captured are complete, unambiguous, accurate and non-conflicting with each other.

Design phase:
Verify whether the design achieves the objectives of the requirements, and whether the design is effective and efficient.
Verification Techniques: Design walkthroughs and design reviews.

Coding phase:
Verify that the design is correctly translated to code and that the coding is as per the company's standards.
Verification Techniques: Code walkthroughs and code reviews.
Validation Techniques: Unit testing and integration testing.

System Testing phase:
Execute test cases.
Log bugs and track them to closure.

User Acceptance phase:
Users validate the applicability and usability of the software in performing their day-to-day operations.

Maintenance phase:
After the software is implemented, any changes to the software must be thoroughly tested, and care should be taken not to introduce regressions.

Fig2: Process of Testing
4. Types of Testing:

Black box testing - Tests are not based on any knowledge of internal design or code; they are based on requirements and functionality. When creating black-box test cases, the input data used is critical. Two successful techniques for managing the amount of input data required are equivalence partitioning and boundary analysis.

Equivalence Partitioning
An equivalence class is a subset of data that is representative of a larger class. Equivalence partitioning is a technique for testing equivalence classes rather than undertaking exhaustive testing of each value of the larger class. For example, a program which edits credit limits within a given range (1,000 - 1,500) would have three equivalence classes:
< 1,000 (invalid)
Between 1,000 and 1,500 (valid)
> 1,500 (invalid)

Boundary Analysis
A technique that consists of developing test cases and data that focus on the input and output boundaries of a given function. In the same credit-limit example, boundary analysis would test:
Low boundary +/- one (999 and 1,001)
On the boundary (1,000 and 1,500)
Upper boundary +/- one (1,499 and 1,501)
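A minimal sketch of both techniques in Python is shown below; the validate_credit_limit function is a hypothetical stand-in for the credit-limit edit described above.

    # Hypothetical function under test: accepts credit limits in [1,000, 1,500].
    def validate_credit_limit(limit):
        return 1000 <= limit <= 1500

    # Equivalence partitioning: one representative value per class.
    assert validate_credit_limit(500) is False    # class: < 1,000 (invalid)
    assert validate_credit_limit(1200) is True    # class: 1,000-1,500 (valid)
    assert validate_credit_limit(2000) is False   # class: > 1,500 (invalid)

    # Boundary analysis: values on and adjacent to each boundary.
    assert validate_credit_limit(999) is False    # low boundary - 1
    assert validate_credit_limit(1000) is True    # on the low boundary
    assert validate_credit_limit(1001) is True    # low boundary + 1
    assert validate_credit_limit(1499) is True    # upper boundary - 1
    assert validate_credit_limit(1500) is True    # on the upper boundary
    assert validate_credit_limit(1501) is False   # upper boundary + 1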
White box testing - Tests are based on knowledge of the internal logic of an application's code: coverage of code statements, branches, paths and conditions. White-box testing assumes that the path of logic in a unit or program is known, and consists of testing paths, branch by branch, to produce predictable results. The following are white-box testing techniques:

Statement Coverage
Execute all statements at least once.
Decision Coverage
Execute each decision direction at least once.
Condition Coverage
Execute each decision with all possible outcomes at least once.
Decision/Condition Coverage
Execute all possible combinations of condition outcomes in each decision.
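As a sketch of the difference between statement and decision coverage, consider the following Python fragment; the function and inputs are illustrative, not taken from the original text.

    # Hypothetical function with one decision (x > 10) inside it.
    def classify(x):
        label = "small"
        if x > 10:
            label = "large"
        return label

    # Statement coverage: classify(20) alone executes every statement,
    # because the 'if' branch is taken.
    assert classify(20) == "large"

    # Decision coverage additionally requires the false direction of
    # the decision, so a second test input is needed.
    assert classify(5) == "small"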
Incremental integration testing - continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.

Functional testing - black-box testing aimed at validating the functional requirements of an application; this type of testing should be done by testers.

System testing - black-box type testing that is based on overall requirements specifications; covers all combined parts of a system.

End-to-end testing - similar to system testing but involves testing of the application in an environment that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate. Even the transactions performed mimic the end users' usage of the application.
Sanity testing - typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or destroying databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.

Smoke testing - The general definition (related to hardware) of smoke testing is: a safe, harmless procedure of blowing smoke into parts of the sewer and drain lines to detect sources of unwanted leaks and sources of sewer odors. In relation to software, smoke testing is non-exhaustive software testing, ascertaining that the most crucial functions of a program work, but not bothering with finer details.

Static testing - Test activities that are performed without running the software are called static testing. Static testing includes code inspections, walkthroughs, and desk checks.

Dynamic testing - Test activities that involve running the software are called dynamic testing.

Regression testing - Testing of a previously verified program or application following program modification for extension or correction, to ensure no new defects have been introduced. Automated testing tools can be especially useful for this type of testing.
Load testing - Load testing is a test whose objective is to determine the maximum sustainable load the system can handle. Load is varied from a minimum (zero) to the maximum level the system can sustain without running out of resources or having transactions suffer (application-specific) excessive delay.

Stress testing - Stress testing is subjecting a system to an unreasonable load while denying it the resources (e.g., RAM, disc, MIPS, interrupts) needed to process that load. The idea is to stress a system to the breaking point in order to find bugs that will make that break potentially harmful. The system is not expected to process the overload without adequate resources, but to behave (e.g., fail) in a decent manner (e.g., not corrupting or losing data). The load (incoming transaction stream) in stress testing is often deliberately distorted so as to force the system into resource depletion.
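A minimal load-testing sketch in Python is given below; it ramps up concurrent workers against a hypothetical handle_request function and reports latency, standing in for a real load-testing tool.

    import time
    from concurrent.futures import ThreadPoolExecutor

    # Hypothetical transaction under load; a real test would call the system.
    def handle_request():
        start = time.perf_counter()
        sum(range(10000))                # placeholder work
        return time.perf_counter() - start

    # Vary the load from light to heavy and watch the latency trend.
    for workers in (1, 10, 50):
        with ThreadPoolExecutor(max_workers=workers) as pool:
            latencies = list(pool.map(lambda _: handle_request(),
                                      range(workers * 20)))
        avg_ms = 1000 * sum(latencies) / len(latencies)
        print(f"{workers:3d} concurrent workers: avg latency {avg_ms:.2f} ms")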
Performance testing - Validates that both the online response times and batch run times meet the defined performance requirements.

Usability testing - testing for 'user-friendliness'. Clearly this is subjective and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.

Install/uninstall testing - testing of full, partial, or upgrade install/uninstall processes.
Recovery testing - testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

Security testing - testing how well the system protects against unauthorized internal or external access, willful damage, etc.; may require sophisticated testing techniques.

Compatibility testing - testing how well software performs in a particular hardware/software/operating system/network/etc. environment.

Exploratory testing - often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.

Ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.
Monkey testing - monkey testing is testing that runs with no specific test in mind. The monkey in this case is the producer of any input data, whether file data or input device data: keep pressing keys randomly and check whether the software fails or not (a minimal sketch is given below).

Comparison testing - comparing software weaknesses and strengths to competing products.
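As promised above, a minimal monkey-testing sketch in Python; it feeds random character input to a hypothetical parse_command function and checks only that nothing crashes.

    import random
    import string

    # Hypothetical function under test; a real monkey test would drive the UI.
    def parse_command(text):
        return text.strip().lower().split()

    # Fire random input at the function and fail loudly on any crash.
    random.seed(0)                       # reproducible randomness
    for _ in range(1000):
        junk = "".join(random.choice(string.printable) for _ in range(30))
        try:
            parse_command(junk)
        except Exception as exc:
            print(f"Crash on input {junk!r}: {exc}")
            raise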
Alpha testing - testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by users within the development team.

Beta testing - testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers. The software has reached "beta" stage when it is operating with most of its functionality and is ready for user feedback. Beta tests enable the software to be tested in customer environments, giving users the opportunity to exercise the software and find errors so that they can be corrected before the product is released. For a beta test, the software is delivered to customers' sites, along with instructions on what features to exercise. The developer records, tracks, and addresses reported bugs. Beta testing provides important input from users on points such as:
• Software functionality
• Algorithms used
• Solved example problems or tutorial
• Subject matter technical content
• Ease of use
• Installation instructions
• User documentation
The beta testers will need to check the technical accuracy of the software, and will need to use it with their own computer setups, data and workflows to make sure that the software will perform as desired in their own environments. The beta testers will use the software and respond in writing to the assessment points.

Mutation testing - a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected. Proper implementation requires large computational resources.
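A hand-rolled mutation-testing sketch in Python follows; real mutation tools automate the mutant generation, and both functions here are purely illustrative.

    # Original function and a deliberately mutated copy ('>' changed to '>=').
    def is_adult(age):
        return age > 17

    def is_adult_mutant(age):
        return age >= 17            # injected 'bug' (off-by-one mutation)

    # The test suite is useful only if some test kills the mutant,
    # i.e., passes on the original but fails on the mutated version.
    def run_suite(fn):
        return fn(18) is True and fn(10) is False and fn(17) is False

    assert run_suite(is_adult)      # original passes the suite
    print("mutant killed" if not run_suite(is_adult_mutant) else
          "mutant survived: add boundary tests")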
Cross browser testing - the application is tested with different browsers, for usability and compatibility.

Concurrent testing - multi-user testing geared towards determining the effects of accessing the same application code, module or database records. It identifies and measures the level of locking, deadlocking, and use of single-threaded code and locking semaphores.

Negative testing - testing the application for fail conditions; negative testing is testing the application with improper inputs, for example entering special characters in a phone-number field.
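A minimal negative-testing sketch in Python, using the phone-number example above; the validator and the improper inputs are hypothetical.

    import re

    # Hypothetical validator: accepts exactly ten digits.
    def valid_phone(number):
        return re.fullmatch(r"\d{10}", number) is not None

    # Negative tests: improper inputs must all be rejected.
    for bad in ["98765@321#", "abcdefghij", "123", "", "12345 67890"]:
        assert not valid_phone(bad), f"improper input accepted: {bad!r}"

    assert valid_phone("9876543210")    # sanity check on a proper input
    print("all negative tests passed")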
5. Track testing progress:

The best way is to have a fixed number of test cases ready before the test execution cycle begins. The testing progress is then measured by the total number of test cases executed:
% Completion = (Number of test cases executed) / (Total number of test cases)
Not only the testing progress but also the following metrics are helpful to measure the quality of the product:
% Test cases Passed = (Number of test cases Passed) / (Number of test cases executed)
% Test cases Failed = (Number of test cases Failed) / (Number of test cases executed)
Note: A test case is Failed when at least one bug is found while executing it; otherwise it is Passed.
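These three ratios are straightforward to compute; a small Python sketch with made-up counts:

    # Illustrative counts; substitute the real numbers from the test cycle.
    total, executed, passed = 200, 150, 135
    failed = executed - passed          # failed = at least one bug found

    print(f"% Completion:        {100 * executed / total:.1f}%")
    print(f"% Test cases Passed: {100 * passed / executed:.1f}%")
    print(f"% Test cases Failed: {100 * failed / executed:.1f}%")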
6. Testing Tools:

Code coverage tools
These tools can be very useful to measure the coverage of the test cases and to identify the gaps. The tool identifies the code that has not been run even once (hence not tested) while running the test cases. You may have to sit with the developers to understand the code. After analysis, the test cases should be updated with new ones to cover the missing code. It is not cost-effective to aim for 100% code coverage unless it is a critical application; otherwise 70-80% is considered good coverage (a usage sketch is given at the end of this section).

Unit testing tools
Unit testing is a white-box testing technique, done by developers, and a number of automated unit testing tools are available to support it.

Test management tools
These tools are used to manage the entire testing process. Most of the tools support the following activities:
* Requirements gathering
* Test planning
* Test cases development
* Test execution and scheduling
* Analyzing test execution results
* Defect reporting and tracking
* Generation of test reports

Defect tracking tools
These tools are used to record bugs or defects uncovered during testing and track them until they get completely fixed.

Automation tools
These tools record the actions performed on the application being tested, in a language they understand, and wherever we want to compare the actual behavior of the application with the expected behavior, we insert a verification point. The tool generates a script with the recorded actions and inserted verification points. To repeat the test case, all we need to do is play back (run) the script and, at the end of its run, check the result file.

Load testing/performance testing tools
These tools can be used to identify the bottlenecks or areas of code which are severely hampering the performance of the application. They can also be used to measure the maximum load which the application can withstand before its performance starts to degrade.
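As one concrete example of a code coverage tool, the coverage.py package for Python (installed separately, e.g. with pip install coverage) can be driven from a small script like the one below; it assumes the test suite lives in a tests/ directory discoverable by unittest.

    import coverage
    import unittest

    cov = coverage.Coverage()
    cov.start()

    # Discover and run the test suite while coverage is being recorded.
    suite = unittest.defaultTestLoader.discover("tests")
    unittest.TextTestRunner().run(suite)

    cov.stop()
    cov.save()
    cov.report(show_missing=True)   # flags the line numbers never executed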
7. Termination of testing:

Let us discuss a few approaches.

Approach 1: This approach requires that you have a fixed number of test cases ready before the test execution cycle. In each testing cycle you execute all test cases. You stop testing when all the test cases pass, or when the percentage of failures in the latest testing cycle is very low.

Approach 2: Make use of the following metrics.
Mean Time Between Failures: the average operational time it takes before the software system fails.
Coverage metrics: the percentage of instructions or paths executed during tests.
Defect density: defects related to the size of the software, such as "defects/1000 lines of code".
Open bugs and their severity levels.
If the coverage of the code is good, the mean time between failures is quite large, the defect density is very low, and not many high-severity bugs are still open, then maybe you should stop testing. 'Good', 'large', 'low' and 'high' are subjective terms and depend on the product being tested. Finally, the risk associated with moving the application into production, as well as the risk of not moving forward, must be taken into consideration. In a multi-round scenario, whether to go for the next round or stop depends on the number of bugs logged in the last round of testing. The following criteria can be used:
No new critical bugs or regression issues were found.
Minor issues found are few ('few' is a relative term which depends on the application being tested).
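For illustration, the two quantitative metrics from Approach 2 above can be computed as follows; the figures are made up.

    # Illustrative release-readiness metrics with made-up figures.
    operational_hours = 500.0
    failures = 4
    mtbf = operational_hours / failures          # Mean Time Between Failures
    print(f"MTBF: {mtbf:.0f} hours")

    defects, kloc = 12, 48.0                     # kloc = thousands of lines of code
    print(f"Defect density: {defects / kloc:.2f} defects/KLOC")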
8. Conclusion: Why test at all? The simple reason is that the development process is unable to produce defect-free software. Testing not only identifies and reports defects but also measures the quality of the product, which helps to decide whether or not to release it. A quality product is defined as one that meets the product requirements, but quality can only be seen through the customer's eyes. So the most important definition of quality is meeting customer needs: understanding customer requirements and expectations, and exceeding those expectations. When the customer is satisfied by using the product, it is a quality product.