This is probably the type of testing most of us practice, and it is the most widely used. It is also the type of testing closest to the customer's experience. In black box testing, the system is treated as a closed system, and the test engineer does not assume anything about how the system was created.
As a test engineer performing black box test cases, one thing you need to make sure of is that you do not make any assumptions about the system based on your knowledge of its internals. Assumptions created in our minds because of system knowledge can harm the testing effort and increase the chances of missing critical test cases.
The only inputs for the test engineer in this type of testing are the requirement document and the functionality of the system, which you learn by working with the system. The purpose of black box testing is to make sure that the system works in accordance with the system requirements and that it meets user expectations.
To make sure that the purpose of black box testing is met, various techniques can be used for test data selection:
Boundary value analysis
Equivalence partitioning
Error guessing
Activities within every testing type can be divided into verification and validation. Within black box testing, the following activities need verification techniques:
Review of requirement and functional specifications.
Review of test plan and test cases.
Review of test data.
Test case execution falls under the validation space.
The goal of unit testing is to uncover defects using formal techniques like Boundary Value Analysis (BVA), Equivalence Partitioning, and Error Guessing. Defects and deviations in date formats, special requirements in input conditions (for example, a text box where only numeric or alphabetic characters should be entered), and selection-based controls such as combo boxes, list boxes, option buttons, and check boxes would be identified during the unit testing phase.
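The boundary checks described above can be sketched with a small example. Assume a hypothetical input field that accepts ages from 18 to 60; the validator and its boundary-value cases below are illustrative only.

```python
def is_valid_age(age):
    """Hypothetical validator: accepts integer ages from 18 to 60 inclusive."""
    return isinstance(age, int) and 18 <= age <= 60

# Boundary Value Analysis: test at, just below, and just above each boundary.
bva_cases = {
    17: False,  # just below lower boundary
    18: True,   # lower boundary
    19: True,   # just above lower boundary
    59: True,   # just below upper boundary
    60: True,   # upper boundary
    61: False,  # just above upper boundary
}

for value, expected in bva_cases.items():
    assert is_valid_age(value) == expected, value
```

Equivalence partitioning would collapse the interior values (e.g., 19 and 59) into a single representative case, since they belong to the same valid partition.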
Integration testing is a systematic technique for constructing the program structure while at the
same time conducting tests to uncover errors associated with interfacing. The objective is to take
unit tested components and build a program structure that has been dictated by design.
Integration testing will check the following features between the modules:
Data dependency between the modules.
Data transfer between the modules.
Usually, the following methods of Integration testing are followed:
1. Top-Down Integration approach.
2. Bottom-up Integration approach.
3. Big Bang Approach
Integration Testing - Top-down:
Top-down integration testing is an incremental approach to construction of program structure.
Modules are integrated by moving downward through the control hierarchy, beginning with the
main control module. Modules subordinate to the main control module are incorporated into the
structure in either a depth-first or breadth-first manner.
The integration process is performed in a series of steps:
The main control module is used as a test driver, and stubs are substituted for all components directly subordinate to the main control module.
Depending on the integration approach selected, subordinate stubs are replaced one at a time with actual components.
Tests are conducted as each component is integrated.
On completion of each set of tests, another stub is replaced with the real component.
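A minimal sketch of the stub idea: the main control module is exercised while a subordinate component is replaced by a stub that returns canned data. All names here are hypothetical.

```python
# Hypothetical modules: a report generator (main control module)
# that depends on a subordinate data-access component.

def fetch_sales_data_stub(region):
    """Stub standing in for the real data-access component:
    returns canned data instead of querying a real data source."""
    return [("widget", 100), ("gadget", 250)]

def generate_report(region, fetch_sales_data):
    """Main control module under test; the data source is injected
    so a stub can substitute for the real component."""
    rows = fetch_sales_data(region)
    total = sum(qty for _, qty in rows)
    return {"region": region, "lines": len(rows), "total": total}

# Top-down test: drive the main module with the stub in place.
report = generate_report("EMEA", fetch_sales_data_stub)
assert report == {"region": "EMEA", "lines": 2, "total": 350}
```

As integration proceeds, `fetch_sales_data_stub` would be replaced by the real component and the same test rerun.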
Integration Testing – Bottom Up:
Bottom-up integration testing begins construction and testing with atomic modules (i.e., components at the lowest levels in the program structure). Because components are integrated from the bottom up, the processing required for components subordinate to a given level is always available, and the need for stubs is eliminated.
A bottom-up integration strategy may be implemented with the following steps:
Low-level components are combined into clusters that perform a specific software sub-function.
The terminal module is tested in isolation first, and then the next set of higher-level modules is tested with the previously tested lower-level modules.
Here we have to write 'drivers'.
A driver is written to coordinate test case input and output.
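A driver, by contrast, sits above the component under test and feeds it inputs. This sketch uses hypothetical names.

```python
def parse_price(text):
    """Low-level component under test: parses a price string into cents."""
    dollars, _, cents = text.strip("$").partition(".")
    return int(dollars) * 100 + int(cents or 0)

def price_driver(cases):
    """Driver: coordinates test case input and output for the
    low-level component before any higher-level module exists."""
    results = []
    for text, expected in cases:
        actual = parse_price(text)
        results.append((text, actual, actual == expected))
    return results

outcomes = price_driver([("$3.50", 350), ("$10.05", 1005)])
assert all(ok for _, _, ok in outcomes)
```

Once a higher-level module that calls `parse_price` exists, it takes over the driver's role and the driver can be discarded.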
Integration testing - Big bang:
A type of integration testing in which the software components of an application are combined all at once into an overall system and tested.
According to this approach, every module is first unit tested in isolation from every other module.
After each module is tested, all of the modules are integrated together at once.
The big bang approach is also called the "non-incremental approach".
Here all modules are combined and integrated in advance, and the entire program is tested as a whole. If a set of bugs is encountered, correction is difficult: when one error is corrected a new bug may appear, and the process continues.
System testing is probably the most important phase of the complete testing cycle. This phase starts after the completion of earlier phases like unit, component, and integration testing. During the system testing phase, non-functional testing also comes into the picture: performance, load, stress, and scalability testing are all performed in this phase.
By definition, system testing is conducted on the complete integrated system and on a replicated production environment. System testing also evaluates the system's compliance with both specific functional and non-functional requirements.
It is very important to understand that not many test cases are written for system testing. Test cases for system testing are derived from the architecture/design of the system, from input from end users, and from user stories. It does not make sense to exercise extensive testing in the system testing phase, as most of the functional defects should have been caught and corrected during earlier testing phases. Utmost care is exercised over the defects uncovered during the system testing phase, and proper impact analysis should be done before fixing a defect. Sometimes, if the business permits, defects are just documented and mentioned as known limitations instead of being fixed.
Progress of system testing also instills and builds confidence in the product teams, as this is the first phase in which the product is tested in a production-like environment. The system testing phase also prepares the team for more user-centric testing, i.e., User Acceptance Testing.
Unit, component, and integration tests are complete.
Defects identified during these test phases are resolved and closed by the QE team.
Teams have sufficient tools and resources to mimic the production environment.
Test case execution reports show that functional and non-functional requirements are met.
Defects found during system testing are either fixed after thorough impact analysis or are documented as known limitations.
Smoke/Sanity Testing:
A smoke test determines whether it is possible to continue testing. A software smoke test determines whether the program launches and whether its interfaces are accessible and responsive (for example, the responsiveness of a web page or an input button).
If the smoke test fails, it is impossible to conduct a sanity test. In contrast, the ideal sanity test exercises the smallest subset of application functions needed to determine whether the application logic is generally functional and correct (for example, an interest-rate calculation for a financial application). If the sanity test fails, it is not reasonable to attempt more rigorous testing.
Both sanity tests and smoke tests are ways to avoid wasting time and effort by quickly determining whether an application is too flawed to merit any rigorous testing.
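A smoke suite can be as small as a handful of go/no-go checks run before any deeper testing begins. The application object below is hypothetical; the point is the pass/fail gate.

```python
class App:
    """Hypothetical application under test."""
    def launch(self):
        return True
    def home_page_title(self):
        return "Inventory Manager"

def smoke_test(app):
    """Go/no-go gate: if any check fails, stop and reject the build."""
    checks = {
        "launches": app.launch(),
        "home page reachable": app.home_page_title() != "",
    }
    failed = [name for name, ok in checks.items() if not ok]
    return (len(failed) == 0, failed)

ok, failed = smoke_test(App())
assert ok and failed == []
```

Only when this gate passes would the sanity suite, and then the full test cycle, proceed.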
• Regression testing is the re-execution of some subset of tests that have already been conducted, to ensure that changes have not propagated unintended side effects.
• Regression testing may be conducted manually, by re-executing a subset of all test cases, or by using automated capture/playback tools, e.g., QTP, Selenium, etc.
As someone has said, change is the only constant in this world. It holds true for software as well: existing software is either being changed or removed from the shelves. These changes could be for any reason: there could be critical defects that need to be fixed, or enhancements that have to happen in order to remain competitive.
Regression testing is done to ensure that enhancements, defect fixes, or any other changes made to the software have not broken any existing functionality.
Regression testing is very important because, in most places these days, iterative development is used. In iterative development, shorter cycles are used, with some functionality added in every cycle. In this scenario, it makes sense to run regression testing in every cycle to make sure that new features are not breaking any existing functionality.
Whenever there are any changes in the system, regression testing is performed. Regression testing can be performed for the complete product, or it can be selective. Normally, a full regression cycle is executed at the end of the testing cycle, and partial regression cycles are executed between test cycles. During the regression cycle it becomes very important to select the proper test cases to get the maximum benefit. Test cases for regression testing should be selected based on knowledge of:
What defect fixes, enhancements, or changes have gone into the system?
What is the impact of these changes on other aspects of the system?
The focus of regression testing is always on the impact of the changes in the system. In most organizations, the priority of a regression defect is always very high. It is normally included in the exit criteria of the product that it should have zero regression defects.
Regression testing should always happen after sanity or smoke testing. Sanity/smoke testing can be defined as the type of testing which makes sure that the software is in a testable state. Normally, a sanity test suite will contain very basic and core test cases. These test cases decide the quality of the build, and any failure in the sanity test suite should result in rejection of the build by the test team.
Regression testing is a continuous process, and it happens after every release. Test cases are added to the regression suite after every release and repeatedly executed for every release.
Because the test cases of the regression suite are executed for every release, they are perfect candidates for automation.
The regression test suite contains three different classes of test cases:
A representative sample of tests that will exercise all important software functions.
Additional tests that focus on software functions that are likely to be affected by the change.
Tests that focus on the software components that have been changed.
Automated regression test cases are called an Automated Regression Test Suite.
Many companies prepare automated regression suites to run against each build of the application.
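A minimal automated regression check might look like the following. The `discount` function and its expected values are hypothetical; the point is that the same assertions are re-executed against every new build to catch unintended side effects.

```python
# Hypothetical function that shipped in an earlier release.
def discount(total, code):
    """Apply a percentage discount by coupon code."""
    rates = {"SAVE10": 0.10, "SAVE25": 0.25}
    return round(total * (1 - rates.get(code, 0.0)), 2)

# Regression suite: expected behavior captured from the previous release.
regression_cases = [
    ((100.0, "SAVE10"), 90.0),
    ((100.0, "SAVE25"), 75.0),
    ((100.0, "BOGUS"), 100.0),  # unknown code leaves the total unchanged
]

for args, expected in regression_cases:
    assert discount(*args) == expected, (args, expected)
```

In practice such assertions live in a framework like pytest and run in CI on every build, which is what turns them into an automated regression suite.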
Database testing is the verification of the impact of front-end operations (user interface functional navigation) on back-end tables.
Database testing can be performed in any of the following situations, wherever database interaction is involved:
UI vs. Database
Database1 vs. Database2
UI vs. database vs. UI
XML vs. Database
UI 1 vs. database1 vs. database2 vs. UI 2
• Perform database testing using SQL (Structured Query Language).
Register a user and check in the database whether that value has been inserted or not.
Search for an existing user, edit the information, and save; cross-check in the database whether the information is updated or not.
Delete a user from the front end, and verify in the database whether the row related to that user has been deleted from the table or not.
The following SQL can be used for the above flow verification:
SQL: SELECT * FROM <User_Info> WHERE UserId = '<created UserId at the front end>'
UI vs. Database: check whether the data on the UI is reflected in the database or not.
The field size defined in the application matches that in the database.
User actions on the front end are reflected in the database simultaneously or not.
Example: create a user in the front end, then update the user's info or delete the user, and verify the database.
Check functionality which would violate the primary key concept.
Perform data integrity testing: test whether values are truncated or rounded in the database.
Verify that database triggers fire at the right time.
XML vs. Database:
Values in the XML tags are properly stored in the database.
Within the database:
Verify, by writing queries, that when a parent table is updated the child tables are updated simultaneously.
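The register-and-verify flow above can be sketched with an in-memory database. The table and column names are hypothetical; a real test would run the same `SELECT` against the application's actual schema.

```python
import sqlite3

# In-memory stand-in for the application's database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user_info (user_id TEXT PRIMARY KEY, name TEXT)")

def register_user(user_id, name):
    """Stand-in for the front-end 'register user' action."""
    conn.execute("INSERT INTO user_info (user_id, name) VALUES (?, ?)",
                 (user_id, name))
    conn.commit()

# Front-end action, then back-end verification with SQL.
register_user("u101", "Asha")
row = conn.execute(
    "SELECT name FROM user_info WHERE user_id = ?", ("u101",)
).fetchone()
assert row == ("Asha",)  # the insert reached the table

# Primary-key check: registering the same user again must fail.
try:
    register_user("u101", "Asha")
    assert False, "duplicate user_id should violate the primary key"
except sqlite3.IntegrityError:
    pass
```

The same pattern (perform UI action, then query the table) covers the update and delete verifications as well.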
• Exploratory testing is 'testing while exploring'. When you have no idea how the application works, exploring the application with the intent of finding errors can be termed exploratory testing.
• Exploratory tests are categorized under black box tests and are aimed at testing in conditions where sufficient time is not available for testing or proper documentation is not available.
• The following can be used to perform exploratory testing:
Learn the application.
Learn the business that the application addresses.
Learn, to the maximum extent, the technology on which the application has been designed.
Learn how to test.
Plan and design tests as per the learning.
• End-to-end testing is the process of testing transactions or business-level products as they pass right through the computer systems. This generally ensures that all aspects of the business are supported by the systems under test.
• E2E testing verifies that a set of interconnected systems will perform correctly. Individual subsystems and applications may have been tested and approved, but unobserved faults may still exist in the system as a whole.
E2E testing is system testing in which we identify the end-to-end scenarios that affect the customer's workflow. The testing environment should be just like the production environment.
Ad hoc testing is a term commonly used for tests carried out without planning or documentation. The tests are intended to be executed only once, unless a defect is discovered. This is part of exploratory testing, which is the least formal of test methods. In this context, it has been criticized because it is not structured.
It is performed with improvisation; the tester seeks to find bugs by any means that seem appropriate. It contrasts with regression testing, which looks for a specific issue with detailed reproduction steps and a clear expected result. Ad hoc testing is most often used as a complement to other types of testing.
Ad hoc testing is very cost effective.
Ad hoc testing can be followed even when documentation is not present for the system.
Ad hoc testing can be adjusted and done as per the budget.
It is ideal for small-size companies.
Usability testing is a process to find out whether the application is user friendly or not.
Software usability testing is an example of non-functional testing. Usability testing evaluates how easy a system is to learn and use. There are enormous benefits to usability testing, but there is still not much awareness of the subject.
The benefits of usability testing can be summarized as follows:
It is easier for the sales team to sell a highly usable product.
Usable products are easy to learn and use.
Support cost is lower for usable products.
According to the ISO definition, usability is the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use.
It is very important to understand the following before starting any usability testing activities.
Specified users - who will be the targeted user population? A system usable for a businessman could be highly unusable for a farmer. The targeted audience should be identified clearly.
Specified goals - the usability testing team should understand the primary goal of the system. A usable system will rarely have fancy functionality, as it might be irrelevant to 80 percent of the users.
Effectiveness and efficiency - these can be measured in terms of the accuracy and completeness with which users achieve the specified goals in a minimum amount of time.
Context of use - it is very important to understand the context in which the software will be used before usability testing. Usability testing of a video game will be different from that of sophisticated software used by a person in a space shuttle.
Define a set of usability testing categories and identify goals for each:
Interactivity - interaction mechanisms are easy to understand and use.
Layout - navigation, content, and functions allow users to find what they need quickly.
Readability - content is understandable.
Aesthetics - graphic design supports ease of use.
Display characteristics - the web app makes good use of screen size and resolution.
Time sensitivity - content and features can be acquired in a timely manner.
Personalization - adaptive interfaces.
Accessibility - special-needs users.
Design tests that will enable each goal to be evaluated.
Select participants to conduct the tests.
Instrument the participants' interactions with the WebApp during testing.
Develop a method for assessing the usability of the WebApp.
User Acceptance Testing (UAT)
• User acceptance testing is the formal testing conducted to determine whether or not a system satisfies its acceptance criteria, and to enable the customer to determine whether or not to accept the software.
• Once the application is ready to be released, the crucial step is user acceptance testing. In this step, a group representing a cross-section of end users tests the application. User acceptance testing is done using real-world scenarios and perceptions relevant to the end users.
• It can be conducted over a period of weeks or months, thereby uncovering cumulative errors that might degrade the system over time.
UAT for Products
If the software is developed as a product to be used by many customers, it is impractical to perform formal acceptance tests with each customer.
Most software product builders use a process called alpha and beta testing to uncover errors.
The alpha test is conducted at the developer's site by a customer.
The software is used in a natural setting, with the developer present.
The developer records errors and usage problems.
Alpha tests are done in a controlled environment.
Beta testing is the last stage of testing, and normally involves sending the product to beta test sites outside the company for real-world exposure, or offering the product for a free trial download over the Internet.
Beta testing is often preceded by a round of alpha testing and followed by the release of the product.
The developer is not present.
It is a 'live' situation; the developer is not in control.
Customers record problems (real or imagined) and report them to the developer.
The world is flat. If you are reading this page, chances are that you are experiencing this as well. It is very difficult to survive in the current world if you are selling your product in only one country or geographical region. Even if you are selling all over the world, if your product is not available in the regional languages, you might not be in a comfortable situation. Products developed in one location are used all over the world, with different languages and regional standards. This gives rise to the need to test the product with different languages and different regional standards. Multilingual and localization testing can increase your product's usability and acceptability worldwide.
Internationalization testing is the process which ensures that the product's functionality is not broken and all the messages are properly externalized when it is used with different languages and locales. Internationalization testing is also called I18N testing, because there are 18 characters between the I and the N in "Internationalization".
Internationalization, globalization, and localization are words normally used together. Though the objective of these words is the same, namely to make sure that the product is ready for the global market, they serve different purposes and have different meanings. We will explore these terms in more detail.
Globalization is the process of developing, manufacturing, and marketing software products that are intended for worldwide distribution. An important feature of these products is that they support multiple languages and locales.
In a globalized product, code is separated from the messages or text that it uses. This enables the software to be used with different languages without having to rebuild the software.
Globalization is achieved through internationalization and localization.
Globalization is the process of designing, developing, and engineering products that can be launched worldwide. It has two major components - internationalization and localization.
• Internationalization (I18N)
• Designing/developing software so it can be adapted to various locales and regions.
• Localization (L10N)
• The process of adapting software by translating text and adding locale-specific components.
• I18N sufficiency testing ensures software has been properly internationalized according to design rules and is READY to be localized.
Benefits of I18N:
The main goal of I18N testing is to find potentially show-stopping software problems before the application is translated and localized into various languages.
I18N sufficiency testing must begin as early in the testing cycle as possible.
I18N sufficiency testing does not require in-depth knowledge of many locale-specific issues, a fully localized environment, or native linguistic skills.
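Message externalization, the core of I18N, can be sketched as follows: the code looks up text by key rather than embedding English strings, so translations can be swapped in without rebuilding. The catalogs here are illustrative (Python's `gettext` module serves this role in real applications).

```python
# Illustrative message catalogs; in practice these would be
# translation files loaded via a library such as gettext.
CATALOGS = {
    "en": {"greeting": "Hello, {name}!", "items": "You have {n} items."},
    "fr": {"greeting": "Bonjour, {name} !", "items": "Vous avez {n} articles."},
}

def t(locale, key, **kwargs):
    """Look up an externalized message by key and fill in parameters."""
    return CATALOGS[locale][key].format(**kwargs)

assert t("en", "greeting", name="Ana") == "Hello, Ana!"
assert t("fr", "items", n=3) == "Vous avez 3 articles."

# I18N sufficiency check: every key is present in every catalog,
# so no hard-coded fallback text is needed.
keys = {k for cat in CATALOGS.values() for k in cat}
assert all(keys == set(cat) for cat in CATALOGS.values())
```

The final assertion is the kind of check I18N sufficiency testing performs before localization begins: it catches missing or hard-coded strings without needing linguistic skills.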
Localization testers verify that localized content is linguistically correct, culturally appropriate, and fits within the context. They examine the functionality of a product to ensure proper operation on localized operating systems and supported platforms.
The localization QA testing team performs several validation steps to ensure that a product has been successfully adapted to regional expectations. Some of these steps include: reporting truncation issues due to text expansion, validating input fields such as zip codes and telephone numbers, and ensuring that sorting logic is applied correctly and that dates, times, and decimals are formatted and displayed properly.
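Regional format validation can be sketched as a table-driven check: the same date and number must render per each locale's conventions. The format patterns below are based on common conventions and are illustrative; a real suite would load them from locale data.

```python
from datetime import date

# Per-locale formatting conventions (illustrative subset).
FORMATS = {
    "en_US": {"date": "%m/%d/%Y", "decimal_sep": "."},
    "de_DE": {"date": "%d.%m.%Y", "decimal_sep": ","},
}

def format_date(d, locale):
    """Render a date per the locale's convention."""
    return d.strftime(FORMATS[locale]["date"])

def format_decimal(value, locale):
    """Render a decimal with the locale's separator."""
    return f"{value:.2f}".replace(".", FORMATS[locale]["decimal_sep"])

d = date(2024, 3, 31)
assert format_date(d, "en_US") == "03/31/2024"
assert format_date(d, "de_DE") == "31.03.2024"
assert format_decimal(1234.5, "de_DE") == "1234,50"
```

A localization tester would run equivalent checks against the actual UI output on each localized build.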
a. Linguistic testing - validation of the translated content within context.
Grammatical, spelling, and punctuation errors - is the text free of linguistic errors?
Accuracy of translation in the given context - does the translation fit within the given context? The translation of a single word may change with the context. The terms Exit and Close may indicate the same operation in some cases; however, the translation will depend on the action that the word refers to. The only way to ensure proper translation is to see the actual button.
Adherence to glossary - is the translation consistent with the provided glossary? Is it consistent throughout the application? Are brand names translated correctly?
Missing content - is the localized content imported properly and entirely into the UI? Is everything translated?
b. Cosmetic testing - “look and feel” review
Consistent layout with source - do images, tables, and the general design match the source?
Translation consistency - is the terminology consistent across graphics, documentation, and
the UI? For example, if the instructions say “Click on the Agree button to move forward,”
users expect to see the word “Agree” on the button – not “OK.”
Correct line breaks - is the text broken in the correct place? Automated text wrapping can often break words or sentences in incorrect places, separating one character from the rest of the word. In some languages, such as Thai, only a person who can read the language is capable of verifying whether a line break is correct.
Layout – are there any truncation issues? Is the text aligned properly?
Display – are there any character corruption issues?
c. Functional testing - functional validation of the localized product.
Install, use, uninstall - is the application fully functional on a localized operating system?
Supported platforms - does the web application function properly on localized versions of the supported operating systems and browsers?
Review - are all hyperlinks functional? Do they point to the correct pages? Is data sorted properly? Do the cascading list boxes work as expected?
Input/output validation - do the input fields allow the use of non-ASCII characters? Are the non-ASCII characters displayed back to the user properly after being saved to the database?
Regional formatting - is the display format proper for date/time? Does the calendar start from the correct day of the week for the locale?
Hotkeys/shortcut keys - are they functional?
This is a brief summary of localization testing coverage.
Non-Functional Testing:
In non-functional testing, the focus of the testing activities is on the non-functional aspects of the system. Non-functional testing is normally carried out during the system testing phase only. The focus of non-functional testing is on the behavior and user experience of the system.
This testing approach mainly deals with the appearance of the Graphical User Interface (GUI) of the application.
1. Check the Title of the webpage.
2. Check for the Menu bar of the Web Page.
3. Check for the 'Esc' key.
4. Check for the 'F1' help key.
5. Check the Title of the Screen.
6. Check whether the Title of the Screen is aligned properly.
7. Check for the Default Cursor Focus in the Screen.
8. Check for the elements available in the Screen.
9. Check for the fonts and font size of the labels.
10. Check whether the Text Boxes (if available) are aligned properly.
11. Check for the spelling errors in the Labels of the Text Boxes.
12. Check whether the Buttons (if available) in the Screen are aligned properly.
13. Check for the spelling errors in the Labels of the Buttons.
14. Check whether the Drop-Down (if available) in the Screen are aligned properly.
15. Check for the spelling errors in the Labels of the Drop-Down.
16. Check whether the user can choose from a List Box by using the Up and Down arrow keys.
17. Check whether the Collapse/Expand icon (if available) is being displayed.
18. Check for the Scroll Icons (if available) in the Screen.
19. Check for the 'Tool Tip' of the Scroll Icons in the Screen.
20. Check for the Mandatory fields in the Screen.
21. Check whether the Cursor Sequence is from Left to Right and Top to Bottom.
22. Check whether the cursor focus moves backward by pressing Shift+Tab.
23. Check whether all pages/Screens have a consistent look & feel.
24. Check whether Dialog Boxes have a consistent look & feel.
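A few of the checks above (page title, element presence) can be automated. In practice this is done with a tool like Selenium against a live browser; the sketch below runs the same idea against a static HTML snippet using only the standard library. The page content is hypothetical.

```python
from html.parser import HTMLParser

# Hypothetical page markup standing in for a rendered screen.
PAGE = """
<html><head><title>Login - Acme Portal</title></head>
<body><button id="submit">Sign In</button>
<input id="username" type="text"/></body></html>
"""

class ScreenScanner(HTMLParser):
    """Collects the page title and element ids for GUI checklist checks."""
    def __init__(self):
        super().__init__()
        self.title = ""
        self.ids = set()
        self._in_title = False
    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self._in_title = True
        self.ids.update(v for k, v in attrs if k == "id")
    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False
    def handle_data(self, data):
        if self._in_title:
            self.title += data

scanner = ScreenScanner()
scanner.feed(PAGE)

# Checklist items 1 and 8: page title and expected elements present.
assert scanner.title == "Login - Acme Portal"
assert {"submit", "username"} <= scanner.ids
```

Alignment, font, and look-and-feel checks from the list generally still require a human eye or screenshot comparison.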
Testing to ensure compatibility of an application or Web site with different browsers, OSs, and hardware platforms.
Compatibility testing is a type of testing used to ensure compatibility of the system/application/website with various other objects, such as web browsers, hardware platforms, users (in the case of a very specific requirement, such as a user who speaks and can read only a particular language), operating systems, etc. This type of testing helps find out how well a system performs in a particular environment that includes hardware, network, operating system, and other software.
Compatibility testing can be automated using automation tools or performed manually, and it is a part of non-functional software testing.
Different types of Compatibility
Hardware: evaluation of the performance of the system/application/website on a certain hardware platform. For example: if an all-platform compatible game is developed and is being tested for hardware compatibility, the developer may choose to test it for various combinations of chipsets (such as Intel), graphics hardware (such as GeForce), motherboards, etc.
Browser: evaluation of the performance of the system/website/application on a certain type of browser. For example: a website is tested for compatibility with browsers like Internet Explorer, Firefox, etc. (Usually browser compatibility testing is also regarded as user experience testing, as it relates to the user's experience of the application/website when using it on different browsers.)
Network: Evaluation of the performance of system/application/website on network with varying
parameters such as bandwidth, variance in capacity and operating speed of underlying hardware etc.,
which is set up to replicate the actual operating environment.
Peripherals: Evaluation of the performance of system/application in connection with various
systems/peripheral devices connected directly or via network. For example: printers, fax machines,
telephone lines etc.
Compatibility between versions: Evaluation of the performance of system/application in connection with
its own predecessor/successor versions (backward and forward compatibility). For example: Windows 98
was developed with backward compatibility for Windows 95 etc.
Software: evaluation of the performance of the system/application in connection with other software. For example: software compatibility with operating tools for networks, web servers, messaging tools, etc.
Operating System: Evaluation of the performance of system/application in connection with the
underlying operating system on which it will be used.
Databases: Many applications/systems operate on databases. Database compatibility testing is used to
evaluate an application/system‟s performance in connection to the database it will interact with.
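Compatibility coverage is often organized as a test matrix: the cross-product of the environment dimensions above. A sketch (the specific browser/OS/database names are just examples):

```python
from itertools import product

# Example environment dimensions for a web application.
browsers = ["Firefox", "Chrome", "Internet Explorer"]
operating_systems = ["Windows", "macOS", "Linux"]
databases = ["MySQL", "PostgreSQL"]

# Full compatibility matrix: one test configuration per combination.
matrix = list(product(browsers, operating_systems, databases))
assert len(matrix) == 3 * 3 * 2  # 18 configurations

for browser, os_name, db in matrix[:3]:
    print(f"Run suite on {browser} / {os_name} / {db}")
```

In practice the full product grows quickly, so teams often prune the matrix (e.g., with pairwise selection) rather than run every combination.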
How helpful is it?
Compatibility testing can help developers understand the criteria that their system/application needs to attain and fulfill in order to be accepted by intended users who are already using a particular OS, network, software, and hardware. It also helps users find out which system will better fit into the existing setup they are using.
The most important use of compatibility testing is, as already mentioned above, to ensure its performance in the computing environment in which it is supposed to operate. This helps in figuring out the changes/modifications/additions required to make the system/application compatible with that computing environment.
Installation testing is one of the most important parts of testing activities. Installation is the user's first interaction with our product, and it is very important to make sure that the user does not have any trouble installing the software.
It becomes even more critical now, as there are different means of distributing the software. Instead of the traditional method of distributing software in physical CD format, software can be installed from the internet or from a network location, or it can even be pushed to the end user's machine.
The type of installation testing you do will be affected by many factors, such as:
What platforms and operating systems do you support?
How will you distribute the software?
Installation testing for different platforms
The process of installing your software could be different for different platforms. It could be a neat GUI for Windows or a plain command line for Unix boxes.
Usually installers ask a series of questions, and the installation changes based on the user's responses. It is always a good idea to create a tree structure of all the options available to the user and, if possible, cover all unique installation paths.
The person performing installation testing should certainly have information on what to expect after installation is done. Tools to compare the file system, registry, DLLs, etc. are very handy in making sure that the installation is proper.
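The "compare the file system before and after" idea can be sketched as a snapshot diff. The directory paths here are hypothetical; a real run would snapshot the actual install target (and, on Windows, the relevant registry keys as well).

```python
import os

def snapshot(root):
    """Record every file path under root (a hypothetical install target)."""
    paths = set()
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            paths.add(os.path.relpath(os.path.join(dirpath, name), root))
    return paths

def diff_snapshots(before, after):
    """What the installer added and what it removed."""
    return {"added": after - before, "removed": before - after}

# Usage sketch (paths are hypothetical):
# before = snapshot("C:/Program Files/MyApp")
# ... run the installer ...
# after = snapshot("C:/Program Files/MyApp")
# diff = diff_snapshots(before, after)
# assert diff["removed"] == set()  # install must not delete existing files
```

Comparing the diff against the expected file manifest catches both missing and stray files; the same diff run after uninstall verifies a clean removal.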
Most installers support silent installation; this also needs thorough testing. The main thing to look at here is the config file that the installer uses. Any changes made in the config file should have the proper effect on the installation.
If installation depends on other components like a database, a server, etc., test cases should be written specifically to address this.
Negative cases like insufficient memory, insufficient disk space, and aborted installation should also be covered as part of installation testing. The test engineer should make sure that proper messages are given to the user and that installation can continue after memory, space, etc. have been increased.
The test engineer should be familiar with the installer technologies and, if possible, try to explore the defects or limitations of the installer itself.
Installation testing based on the distribution method
Apart from the sample cases covered above, special cases should be written to test how the software will be distributed.
If the software is distributed using physical CD format, test activities should include the following:
• Test cases should be executed from ISO images if getting a physical CD is not possible.
• Test cases should be present to check the sequence of CDs used.
• Test cases should be present for the graceful handling of a corrupted CD or image.
If the software is distributed over the Internet, test cases should be included for:
• Bad network speed and broken connections.
• Firewall and security-related issues.
• Download size and approximate download time.
• Concurrent installations/downloads.
Load/Performance testing is the process of creating demand on a system or device and
measuring its response.
Load testing generally refers to the practice of modeling the expected usage of a software program by
simulating multiple users accessing the program's services concurrently. As such, this testing is most
relevant for multi-user systems, often those built using a client/server model, such as web servers.
However, other types of software systems can be load-tested also. For example, a word processor or
graphics editor can be forced to read an extremely large document; or a financial package can be forced
to generate a report based on several years' worth of data. The most accurate load testing occurs with
actual, rather than theoretical, results.
When the load placed on the system is raised beyond normal usage patterns, in order to test the system's
response at unusually high or peak loads, it is known as stress testing. The load is usually so great that
error conditions are the expected result, although no clear boundary exists when an activity ceases to be
a load test and becomes a stress test.
The term is often used synonymously with load/performance testing, reliability testing, and volume testing.
In the current era, when stand-alone desktop applications are rare, performance, load, and stress testing become key to the success of your application. Performance testing falls under non-functional testing.
Performance testing is normally mentioned alongside load testing and stress testing. Some people even use these terms interchangeably, which is not correct. The common factor among these testing types is the simulated load, but there are subtle differences between performance, load, and stress testing. We will explore these in some detail and try to make the differences clear.
Performance testing is conducted after the completion of functional testing, normally during the system testing phase. The objective of performance testing is not to find functional defects; it is assumed that functional defects have already been identified and removed from the system.
Performance testing is usually conducted for web applications. Its main objective is to get information on response time, throughput, and utilization under a given load. In order to perform performance testing on a web application, you need to know at least these two things:
• The expected load, in terms of concurrent users or HTTP connections.
• The acceptable response time.
During performance testing the whole system can be optimized at various levels, for example at the operating system level.
Performance testing can be performed as a white box or a black box activity. In the white box approach, the system is inspected and performance tuning is performed wherever possible to improve the performance of the system. In the black box approach, test engineers use tools that simulate concurrent users/HTTP connections and measure response times.
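A black-box measurement of response time under concurrent load can be sketched with a thread pool. The `request` function below is a stand-in that sleeps; in a real test it would issue an HTTP call (for example via `urllib.request`) against the system under test:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def request(url):
    """Stand-in for one HTTP request; returns its response time in seconds."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulated server processing time
    return time.perf_counter() - start

def load_test(url, concurrent_users, requests_per_user):
    """Fire concurrent requests and report (average response time, total requests)."""
    total = concurrent_users * requests_per_user
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        timings = list(pool.map(request, [url] * total))
    return sum(timings) / total, total

avg, n = load_test("http://example.test/app", concurrent_users=5, requests_per_user=2)
print(f"{n} requests, average response time {avg:.3f}s")
```

Raising `concurrent_users` step by step and watching how the average response time degrades is exactly the progression that turns a load test into a stress test.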
When the results of performance testing indicate that the performance of the system is below the acceptable level, you start tuning the application and databases. You need to make sure the system runs as efficiently as possible on the given hardware/OS combination. If, even after tuning the application, databases, and other parameters, performance remains below the acceptable level, it is probably time to scale your hardware, database, and web servers.
Load/performance testing tools include LoadRunner, Silk Performer, E-Load, etc.
Stress testing is testing conducted to evaluate a system or component at or beyond the limits of its specified requirements, to determine the load under which it fails and how. A graceful degradation under load leading to non-catastrophic failure is the desired result. Stress testing is often performed using the same process as performance testing but with a much higher level of simulated load.
Security testing is very important in today's world because of the way computers and the Internet have affected individuals and organizations. Today it is very difficult to imagine a world without the Internet and modern communication systems. Since everyone, from individuals to organizations, uses the Internet or communication systems to pass information, do business, and transfer money, it becomes critical for the service provider to make sure that information and networks are secured from intruders. The primary purpose of security testing is to identify vulnerabilities and subsequently repair them. Typically, security testing is conducted after the system has been developed, installed, and is operational.
Security testing verifies that the protection mechanisms built into a system will in fact protect it from improper penetration. The system should be protected in accordance with its importance to the organization and the required security levels. Typical checks include:
Is confidentiality/user privacy protected?
Does the site prompt for user name and password?
Are there Digital Certificates, both at server and client?
Have you verified where encryption begins and ends?
Are concurrent log-ons permitted?
Does the application include time-outs due to inactivity?
Is bookmarking disabled on secure pages?
Does the key/lock icon display on the status bar for secure and insecure pages?
Is Right Click → View Source disabled?
Are you prevented from doing direct searches by editing content in the URL?
If using digital certificates, test the browser cache by enrolling for the certificate and completing all of the required security information. After completing the application and installing the certificate, use the browser's Back button to see if that security information still resides in the cache. If it does, any user could walk up to the PC and access highly sensitive digital certificate security information.
Is there an alternative way to access secure pages for browsers under version 3.0, since SSL is not
compatible with those browsers?
Do your users know when they are entering or leaving secure portions of your site?
Does your server lock out an individual who has tried to access your site multiple times with invalid credentials?
Test both valid and invalid login names and passwords. Are they case sensitive? Is there a limit to how many tries are allowed? Can the login be bypassed by typing the URL of an internal page directly into the browser?
What happens when the timeout is exceeded? Are users still able to navigate through the site?
Verify that relevant information is written to the log files and that the information is traceable.
For SSL, verify that the encryption is done correctly and check the integrity of the information.
Verify that it is not possible to plant or edit scripts on the server without authorisation.
Have you tested the impact of Secure Proxy Server?
Tests should be done to ensure that the load-balancing server takes the session information from Server A and passes it to Server B when A goes down.
Have you verified the use of 128-bit Encryption?
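Several of the checks above, such as the lockout question, can be driven from a small harness. Below is a toy model of an account-lockout policy, assuming a hypothetical three-attempt threshold (use the value from your own security requirements); it shows the behavior a test engineer would probe:

```python
MAX_ATTEMPTS = 3  # hypothetical threshold; take the real value from the requirements

class LoginGuard:
    """Minimal model of an account-lockout policy for deriving test cases."""
    def __init__(self):
        self.failures = {}
        self.locked = set()

    def attempt(self, user, password_ok):
        if user in self.locked:
            return "locked"
        if password_ok:
            self.failures[user] = 0
            return "success"
        self.failures[user] = self.failures.get(user, 0) + 1
        if self.failures[user] >= MAX_ATTEMPTS:
            self.locked.add(user)
            return "locked"
        return "failure"
```

Tests against the real system would then assert that the third bad password locks the account and that even a correct password is rejected afterwards.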
Recovery testing is the activity of testing how well the software is able to recover from crashes, hardware
failures and other similar problems.
Recovery testing is the forced failure of the software in a variety of ways to verify that recovery is performed properly. Recovery testing should not be confused with reliability testing, which tries to discover the point at which failure occurs.
Some examples of recovery testing are:
1) While the application is running, suddenly restart the computer and afterwards check the validity of the application's data integrity.
2) While the application is receiving data from the network, unplug the cable, plug it back in after some time, and analyze the application's ability to continue receiving data from the point at which the network connection was lost.
3) Restart the system while the browser has a definite number of sessions open, and after rebooting check that it is able to recover all of them.
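Example 2 above, resuming a transfer from the point where the connection dropped, can be sketched with in-memory streams standing in for the network and the local file:

```python
import io

payload = b"0123456789" * 10  # 100 bytes of hypothetical download data

def transfer(src, dst, start, limit=None):
    """Copy bytes from offset start; limit simulates the connection dropping mid-transfer."""
    src.seek(start)
    dst.seek(start)
    chunk = src.read(limit) if limit is not None else src.read()
    dst.write(chunk)
    return start + len(chunk)  # offset to resume from

src, dst = io.BytesIO(payload), io.BytesIO()
offset = transfer(src, dst, 0, limit=40)   # cable unplugged after 40 bytes
offset = transfer(src, dst, offset)        # cable plugged back in; resume from offset
assert dst.getvalue() == payload           # data is intact after recovery
```

A real recovery test would interrupt the actual network transfer and then assert, exactly as the final line does here, that the reassembled data matches the original byte for byte.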