Performance Testing
RANPREET KAUR
1301413031
IT-13
 It is the process used to identify the correctness, completeness and quality of developed computer software.
 It is the process of executing a program/application under positive and negative conditions, by manual or automated means. It checks the:
• Specification
• Functionality
• Performance
 Uncovers as many errors (or bugs) as possible in a given product or software.
 Demonstrates that a given software product matches its requirements and specifications.
 It is very important for ensuring the quality of the product. A quality product delivered to the customers helps in gaining their confidence.
 It is essential because it ensures the customer's trust in, and satisfaction with, the application.
 Unit Testing
 Integration Testing
 System Testing
 Alpha Testing
 Beta Testing
 Acceptance Testing
 Performance Testing
 UNIT TESTING:
• Tests each module individually.
• Done by developers.
• Follows a white-box approach (tests the internal logic of the program).
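As a minimal illustration (a sketch using Python's standard `unittest` module; the `add` function is a hypothetical example, not from the slides), a white-box unit test exercises one module's logic path by path:

```python
import unittest

def add(a, b):
    """A hypothetical module function under test."""
    return a + b

class TestAdd(unittest.TestCase):
    # White-box tests: each case targets a distinct input path of the logic.
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

# Run the suite programmatically so the sketch also works when imported.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

In practice each module's tests would live next to that module and be written by its developer, as the slide notes.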
 INTEGRATION TESTING:
• Once all the modules have been unit tested, integration
testing is performed.
• It is performed systematically, combining the unit-tested modules step by step.
• Tests are designed to identify errors associated with the interfaces between modules.
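A sketch of the idea with two hypothetical modules (`parse_record` and `format_report` are invented names): the integration test targets the interface between them, where mismatched assumptions about types and field names typically surface:

```python
# Module 1 (assumed already unit tested): parses one CSV-like line.
def parse_record(line):
    name, score = line.split(",")
    return {"name": name.strip(), "score": int(score)}

# Module 2 (assumed already unit tested): renders parsed records.
def format_report(records):
    return "\n".join(f"{r['name']}: {r['score']}" for r in records)

# Integration test: exercises the interface between the two modules.
def test_parse_then_format():
    records = [parse_record("alice, 90"), parse_record("bob, 75")]
    assert format_report(records) == "alice: 90\nbob: 75"

test_parse_then_format()
```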
 SYSTEM TESTING:
• The system as a whole is tested to uncover errors.
• Verifies that all system elements work properly and that overall system function and performance have been achieved.
 ALPHA TESTING:
• It is carried out by the test team within the developing organization.
 BETA TESTING:
• It is performed by a selected group of friendly customers.
 ACCEPTANCE TESTING:
• It is performed by the customer to determine whether to
accept or reject the delivery of the system.
 PERFORMANCE TESTING:
• It is carried out to check whether the software performs well under stress and load conditions. It also checks the response time, throughput, stability and scalability of the software.
 It is the process of determining the speed or effectiveness
of a computer, network, software program or device.
 Resource usage, scalability and reliability of the product
are validated under this testing.
 It is testing performed to ascertain how the different components of a system perform under a given situation.
 It is done to provide stakeholders with information about their
application regarding speed, stability and scalability.
 It uncovers what needs to be improved before the product goes
to market.
 Without performance testing, software is likely to suffer from issues such as running slowly when several users use it simultaneously, inconsistencies across different operating systems, and poor usability.
 Speed
 Scalability
 Stability
 Confidence
 Throughput
 Response time
 SPEED:
• Does the application respond quickly enough for the intended users? (This should be measured against defined standards.)
• Does the application match user expectations?
• Speed can affect the cost of the software. Speed can be expensive!
 SCALABILITY:
Scalability risks concern not only the number of users an
application can support, but also the volume of data the
application can contain and process, as well as the ability to
identify when an application is approaching capacity.
– Database capacity
– File Server capacity
– Back-up Server capacity
 STABILITY:
• Is the application stable under expected and unexpected user loads? (Reliability and recoverability.) Stability risks are commonly addressed with high-load, endurance, and stress tests.
What happens if…
– there are more users than we expect?
– all the users do the same thing?
– a user gets disconnected?
– there is a Denial of Service Attack?
– the web server goes down?
– we get too many orders for the same thing?
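A toy sketch of one of these "what if" scenarios in Python (`handle_request` and the shared counter are hypothetical): it simulates many users doing the same thing at once and checks that no request is lost:

```python
import threading

def handle_request(counter, lock):
    # Hypothetical request handler; the lock guards shared state,
    # one of the things that breaks first under unexpected load.
    with lock:
        counter["served"] += 1

counter = {"served": 0}
lock = threading.Lock()

# Simulate 500 concurrent users all performing the same action.
threads = [threading.Thread(target=handle_request, args=(counter, lock))
           for _ in range(500)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert counter["served"] == 500  # no requests lost under load
```

Real stability tests run such loads for hours (endurance) or ramp them past capacity (stress), but the shape of the check is the same.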
 CONFIDENCE:
• Are you sure that users will have a positive experience on
go-live day?
• If you know what the performance is…
– you can assess risk.
– you can make informed decisions.
– you can plan for the future.
– you can sleep the night before go-live day.
 THROUGHPUT:
• Capability of a product to handle multiple transactions in a
given period.
• Throughput represents the number of requests/business
transactions processed by the product in a specified time
duration.
• Throughput is measured in terms such as requests per second, calls per day, hits per second, reports per year, etc. In many cases it is reported in bits per second.
• The higher the throughput, the better the performance of the application.
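A rough sketch of measuring throughput in Python (`process_request` is a hypothetical stand-in for a real transaction):

```python
import time

def process_request():
    # Hypothetical unit of work standing in for a real business transaction.
    sum(range(1000))

n_requests = 2000
start = time.perf_counter()
for _ in range(n_requests):
    process_request()
elapsed = time.perf_counter() - start

# Throughput = transactions completed per unit time.
throughput = n_requests / elapsed
print(f"Throughput: {throughput:.0f} requests/sec")
```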
 RESPONSE TIME:
• It is equally important to find out how much time each of
the transactions took to complete.
• Response time is defined as the delay between the point
of request and the first response from the product.
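A minimal sketch of measuring response time in Python (`timed_call` is an invented helper; a real test would time a network request rather than a local call):

```python
import time

def timed_call(fn, *args):
    """Return (result, response_time_seconds) for one request."""
    start = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - start

# Hypothetical transaction standing in for a real request.
result, response_time = timed_call(sum, range(1_000_000))
print(f"Response time: {response_time * 1000:.2f} ms")
```

Collecting this per transaction lets a test report averages and percentiles rather than a single number.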
Open Source
• OpenSTA
• Diesel Test
• TestMaker
• Grinder
• LoadSim
• JMeter
• Rubis
Commercial
• LoadRunner
• Silk Performer
• Qengine
• Empirix e-Load
 Improved quality from a user’s perspective.
 Reduced cost of change.
 Early identification of major application defects and
architectural issues.
 Greater customer satisfaction.
 Performance testing also dispels many myths about how the application will behave for users, and builds confidence.
 We conclude that we perform performance testing to:
 Evaluate Risk.
 Determine system capabilities.
 If software is launched in the market without performance testing, users can face many problems related to speed, stability, etc.
 It determines which parts of the application perform
poorly and under what conditions.