My experience in Software QA
Leonid Mazur
(650) 967-3150
My recent QA projects
• Yuzu:
  ◦ Load, performance, stress and recovery testing of a web application on the client side and web services (RESTful) on the server side
  ◦ Performance and functional monitoring of web application and web services activity in the production environment, with visual graphical representation in real time
  ◦ Production DB verification (general integrity, table interconnections, lost/broken foreign keys, data validation, etc.)
• Become:
  ◦ Five comparison shopping web sites (become.com, become.co.uk, exava.com, exava.co.uk, discountbee.com) and Become mobile – client side and server side testing
  ◦ DB testing and feed data verification
  ◦ Functional and regression testing of builds; load, performance and stress testing
  ◦ Time performance monitoring
  ◦ Comparative performance monitoring vs. competitors
• Yahoo:
  ◦ Yahoo Maps – client side and server side testing
  ◦ Yahoo Local – client side testing
  ◦ Yahoo Yellow Pages – client side testing
  ◦ Yahoo Direct Marketing – client side, middle-tier (web services) and server side (DB) testing
My recent QA projects (continued)
• Talent6 (now Casting360):
  ◦ Casting web site – client side testing
• PayPal:
  ◦ Money transaction web site – API (white box) and server side testing
• CafePress:
  ◦ Online shopping – client side testing
Client side testing
• Preferred method – automated testing
• I used WinRunner, QTP and Watir for frontend test automation in my projects at Talent6 and Yahoo; now I prefer Selenium 2 with one of the scripting languages – Perl or Ruby (a minimal sketch follows below)
• For example, the test suites (BVT, regression, end-to-end) for Become.com testing are written in Ruby + Selenium 2, consist of a test driver and several functional modules, and contain 5000+ lines of code and 15000+ tests
• Test time – 4–8 hours. Different functional modules can run simultaneously
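A minimal sketch of what one such Ruby + Selenium 2 check can look like; the start URL, element locators and expected strings are hypothetical placeholders rather than the actual Become.com suite:

```ruby
# Minimal BVT-style check: open the site, run a search, verify results render.
# URL, locator names and the expected title fragment are assumptions.
require 'selenium-webdriver'

driver = Selenium::WebDriver.for :firefox
begin
  driver.get 'http://www.become.com'                # hypothetical start page
  box = driver.find_element(:name, 'query')         # assumed search box locator
  box.send_keys 'digital camera'
  box.submit

  wait = Selenium::WebDriver::Wait.new(timeout: 15)
  wait.until { driver.title.downcase.include?('digital camera') }

  results = driver.find_elements(:css, '.product')  # assumed result selector
  raise 'BVT FAILED: no search results rendered' if results.empty?
  puts "PASS: #{results.size} results rendered"
ensure
  driver.quit
end
```

In the real suites, the test driver calls many checks like this one from each functional module.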
Structure of the test suite for Become.com frontend testing
[Diagram] A Test Driver invokes independent functional modules: Home Page testing, "All categories" page testing, Category pages testing, Products pages testing, Product details pages testing, Search results testing, Research results testing, Trusted merchants testing, Hot products testing, and Features testing.
Server side testing (API, web services)
• Preferred method – automated testing
• I used WebTester for server side test automation on some Yahoo projects, and plain Perl or Ruby scripts in other projects
• The approach I use in server side testing (sketched below):
  ◦ A Perl/Ruby script makes requests to the server using the same protocol the client talks to the server with, then gets and saves the response data
  ◦ The response is also verified against the request's parameters
  ◦ The script retrieves the same info directly from the DB or other side sources
  ◦ Results received from the backend and from the DB/side sources are compared
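A minimal sketch of this approach, assuming a RESTful endpoint and a MySQL-backed DB; the endpoint, JSON field, table and column names are hypothetical:

```ruby
# Hit the service the way the client does, then cross-check the answer
# against the DB. Endpoint, JSON field, table and column names are assumed.
require 'net/http'
require 'json'
require 'mysql2'   # assumes a MySQL-backed application; adjust for your DB

db = Mysql2::Client.new(host: 'db-host', username: 'qa', database: 'app')

uri  = URI('http://api.example.com/v1/products/12345')   # hypothetical REST call
resp = Net::HTTP.get_response(uri)
raise "Unexpected HTTP #{resp.code}" unless resp.is_a?(Net::HTTPSuccess)
api_price = JSON.parse(resp.body)['price']               # assumed response field

row = db.query('SELECT price FROM products WHERE product_id = 12345').first
db_price = row && row['price']

if db_price && api_price.to_f == db_price.to_f
  puts 'PASS: API response matches DB'
else
  puts "FAIL: API=#{api_price.inspect} DB=#{db_price.inspect}"
end
```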
Structure of the test suite for Yahoo Maps server side testing (maps testing)
[Diagram] A Perl script queries the Yahoo Maps backend, the Navteq Geo DB and the TeleAtlas Geo DB, then compares the results.
Structure of the test suite for Yahoo Maps server side testing (driving directions, regression testing)
[Diagram] A Perl script requests driving directions from the current build and from the new build of the Yahoo Maps backend, then compares the new and old directions.
Examples of Yahoo Maps server side tests
1. Select info for all US cities (40,000+) from the Navteq Geo DB – state, county, zip codes, latitude/longitude of the city center, etc.
   Select the same from the Yahoo Maps backend.
   Compare the results.
2. Do the same for 1,000–5,000 fixed or randomly selected street addresses.
3. Calculate driving times for 1,000–10,000 fixed or random pairs of origins/destinations in the current map backend build/version and in the new build/version (sketched below).
   Compare the results and expose deviations.
   Calculate stats such as the percentage of cases where the driving time in the current build/version is better than in the new build/version.
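A sketch of the build-vs-build comparison in item 3, assuming a hypothetical directions endpoint that returns a drive time in seconds:

```ruby
# Compare driving times between the current and the new backend build for a
# list of origin/destination pairs; report deviations and a summary stat.
# Hosts, the endpoint and the response field are hypothetical.
require 'net/http'
require 'json'

def drive_time(host, from, to)
  uri = URI("http://#{host}/directions")
  uri.query = URI.encode_www_form(from: from, to: to)
  JSON.parse(Net::HTTP.get(uri))['drive_time_sec'].to_f
end

pairs = File.readlines('od_pairs.csv').map { |l| l.chomp.split(',') }

current_better = 0
deviations     = 0
pairs.each do |from, to|
  cur_t = drive_time('maps-current.example.com', from, to)
  new_t = drive_time('maps-new.example.com',     from, to)
  current_better += 1 if cur_t < new_t
  deviations     += 1 if (cur_t - new_t).abs > 0.2 * cur_t   # flag >20% change
end

printf("Current build faster in %.1f%% of pairs; %d pairs deviate by >20%%\n",
       100.0 * current_better / pairs.size, deviations)
```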
DB testing and feed data verification
• Preferred method – automated testing
• Testing highly depends on the DB structure and content
• The main goal of testing is to make sure the contents of all DB tables are consistent, the data are relevant, and they agree with the latest feeds
• For example, you need to check (a couple of these checks are sketched below):
  ◦ Formats (numeric data, date formats, UTF-8 encoding for text data, data limits and allowed values, NULLs in key data, etc.)
  ◦ Broken chains (e.g., lost/broken foreign keys)
  ◦ Changes in the DB that are not reflected in the log files, and vice versa
  ◦ Stale materialized views (materialized views created by cron processes are not current)
  ◦ etc.
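A couple of the checks above expressed as SQL run from a Ruby script; table and column names are hypothetical, and the real queries depend entirely on the DB structure:

```ruby
# Run a few integrity checks as plain SQL and report rows that violate them.
# Table and column names are assumptions for this sketch.
require 'mysql2'

db = Mysql2::Client.new(host: 'db-host', username: 'qa', database: 'feeds')

checks = {
  'orphaned products (broken FK to merchants)' =>
    'SELECT COUNT(*) AS n FROM products p
       LEFT JOIN merchants m ON m.merchant_id = p.merchant_id
      WHERE m.merchant_id IS NULL',
  'NULLs in key columns' =>
    'SELECT COUNT(*) AS n FROM products WHERE sku IS NULL OR price IS NULL',
  'prices outside the allowed range' =>
    'SELECT COUNT(*) AS n FROM products WHERE price <= 0 OR price > 100000'
}

checks.each do |name, sql|
  n = db.query(sql).first['n']
  puts(n.zero? ? "PASS: #{name}" : "FAIL: #{name} - #{n} rows")
end
```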
DB testing and feed data verification (continued)
Example from Become, Inc.
• The DB collects info about merchants and products (titles, descriptions, prices, stock statuses, images, landing pages, etc.)
• Very often product prices in a merchant's feed differ from the real prices on the merchant's site, or products are unavailable, or landing pages are incorrect
• I wrote a crawler (Perl script with SQL and Selenium 2), sketched below, which
  ◦ Randomly selects 1,000–50,000 (or all) of a merchant's products from the merchant data feed
  ◦ Opens the landing pages of the selected products on the merchant's site
  ◦ Checks:
    ▪ whether the landing pages are correct
    ▪ whether the prices in the feed and on the landing pages are the same
    ▪ whether the product statuses (in stock, out of stock, to be ordered, etc.) in the feed and on the landing pages are the same
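A condensed Ruby sketch of that crawler (the original combined Perl, SQL and Selenium 2); the feed table, columns and the price selector are hypothetical:

```ruby
# Sample products from the feed table, open each landing page and compare
# the price shown on the page with the feed price. Table, columns and the
# CSS price selector are assumptions.
require 'mysql2'
require 'selenium-webdriver'

db     = Mysql2::Client.new(host: 'db-host', username: 'qa', database: 'feeds')
driver = Selenium::WebDriver.for :firefox

products = db.query(
  'SELECT title, price, landing_url FROM feed_products ORDER BY RAND() LIMIT 1000'
)

products.each do |p|
  begin
    driver.get p['landing_url']
    page_price = driver.find_element(:css, '.price')        # assumed selector
                       .text.gsub(/[^\d.]/, '').to_f
    if (page_price - p['price'].to_f).abs > 0.01
      puts "PRICE MISMATCH: #{p['title']} feed=#{p['price']} site=#{page_price}"
    end
  rescue Selenium::WebDriver::Error::WebDriverError => e
    puts "BROKEN LANDING PAGE: #{p['landing_url']} (#{e.class})"
  end
end

driver.quit
```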
Structure of the test suite for product data verification
[Diagram] Perl scripts + SQL + Selenium select merchants and products from the DB/merchant feed (titles, prices, landing pages, product statuses, etc.), get the landing pages for the selected products from the merchant sites, and compare the data (prices, landing pages, stock statuses, etc.).
Black box, white box and gray box testing
• Black box testing is used when the tester has no knowledge of the internal structure of the application. Testing is based only on the application's behavior for given inputs, design documents, previous experience and knowledge of previous versions or similar products from competitors, common sense, discussions with developers, etc.
  I've used black box testing mostly for frontend testing
• White box testing means the tester has access to all information about the application, including the DB structure and source code. This knowledge may be used in two ways:
  ◦ Just to understand the application's logic and algorithms and apply that to test suite development. I've used this approach in some projects at Yahoo and Become
  ◦ To employ classes/modules/functions from the source code to develop test tools for functional unit testing. I used this approach at TeleAtlas, ICL-KMECS and PayPal.
  See the example below
• Gray box testing is something intermediate. The tester doesn't have complete knowledge of the application's internal structure, but has some; most often he/she knows the DB structure. I've used the gray box approach in almost all backend testing projects
White box testing at PayPal
• I used some core classes and functions of the PayPal backend engine (in C++) to develop test tools for API testing at the functional unit level, bypassing the frontend. The tools allowed various types of payment operations to be performed reliably, and enabled functional unit tests to be written in a high-level language by people without a background in programming/API testing.
[Diagram] The API test tools (C++) sit between a command line interface and the PayPal engine on the backend (with its DB), bypassing the web frontend layer; they call the engine's interfaces to run various money transactions and produce a test report.
Manual vs. automated testing
• I prefer to use automated testing whenever it's possible, but there are still about 5–10% of test cases on average that I have to verify manually
• It happens when:
  ◦ You are short on time
  ◦ The feature/functional area is not important enough for its testing to be automated
  ◦ The feature/functional area changes so frequently that test script maintenance costs might be very high
  ◦ Manual testing is much simpler than automated; for example, test automation for images, videos and other media elements is very complicated, while visual testing of them is pretty straightforward
  ◦ In some other cases
• In any case, in the manual/automated dilemma you need to find a trade-off between coding/maintenance expenses and the benefits of automation
Load/performance/stress/recovery testing
• Preferred method – automated testing only
• All testing is done in the test environment with a new build/version of the application
Load testing
• I consider load testing a basis for performance/stress testing, not a separate testing method
• To generate load on a web application I use a Perl/Ruby script with N threads running on Linux/Unix (N is a parameter); a minimal sketch follows below
• Each thread simulates a user and makes requests to the site under test randomly and independently
• It is also possible to use JMeter to generate the load
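A minimal sketch of such a load generator; the query-list file and the pacing are placeholders, and N and the duration are passed on the command line:

```ruby
# N threads, each looping over random requests to the site under test.
# The query list file, pacing and defaults are assumptions.
require 'net/http'

N        = (ARGV[0] || 50).to_i           # number of simulated users
DURATION = (ARGV[1] || 600).to_i          # seconds to keep the load running
urls     = File.readlines('typical_queries.txt').map(&:chomp)  # assumed query list

deadline = Time.now + DURATION
threads = Array.new(N) do
  Thread.new do
    while Time.now < deadline
      uri = URI(urls.sample)
      t0  = Time.now
      begin
        Net::HTTP.get_response(uri)
        printf("%-60s %.3f s\n", uri, Time.now - t0)
      rescue StandardError => e
        printf("%-60s ERROR %s\n", uri, e.class)
      end
      sleep rand(0.5..2.0)                # each "user" pauses between requests
    end
  end
end
threads.each(&:join)
```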
Load/performance/stress/recovery testing (continued)
Performance testing (client side)
• Once the load is established, a Ruby script is launched in the client environment (e.g. Windows or Mac OS) which measures site performance under load
• It makes typical random (or identical) requests to the site, catches the HTTP requests, classifies them against the selected metrics, and collects HTTP response times, taking possible time overlapping into account
• Performance metrics may include backend response time, rendering start time, total page load time, load times of some page elements (images, ads, tracking pixels, social networks, etc.), all of these, or something else
• Performance averages (for the chosen metrics) over some period of time (6–12 hours) and for different numbers of "users" (say N = 10, 30, 50, 100, etc.) are calculated; the aggregation step is sketched below
• The performance of the new build is compared with the saved performance of the current build to make a decision about the new build's quality
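A sketch of just the aggregation step: averaging collected response times per metric and per simulated user count so the new build can be compared against the saved numbers of the current build. The sample file format is a placeholder; in the real setup the timings come from the Ruby/HttpWatch collector shown on the next slide:

```ruby
# Average collected (users, metric, seconds) samples per metric and per N.
# samples.csv layout is an assumption: a header row (users,metric,seconds)
# followed by rows like  30,total_page_load,2.413
require 'csv'

sums = Hash.new { |h, k| h[k] = [0.0, 0] }
CSV.foreach('samples.csv', headers: true) do |row|
  key = [row['users'].to_i, row['metric']]
  sums[key][0] += row['seconds'].to_f
  sums[key][1] += 1
end

sums.sort.each do |(users, metric), (total, count)|
  printf("N=%-4d %-22s avg %.3f s over %d samples\n",
         users, metric, total / count, count)
end
```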
Schema of performance testing (client side)
[Diagram] Load: Perl/Ruby scripts with N threads run on Unix/Linux; each thread makes requests to the site like an end user does. Performance: a Ruby script with HttpWatch classes runs on Windows/Mac OS; it catches all HTTP requests to the web application and collects load times for different page elements.
Load/performance/stress/recovery testing (continued)
Stress testing
• Increase the load – the number of users/requests per second – until performance on the selected metrics reaches some predefined value; for example, the site stops responding
• Save N – the stress threshold – the number of users/requests per second at which this happened. This is the stress level (a ramp sketch follows below)
• Is the stress level below or above the value specified in the design?
• Is the stress level of the new build/version below or above the saved stress level of the current build in the same environment?
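A self-contained sketch of the ramp that locates the stress threshold; the target URL, step size and the "not responding" limit are placeholder values:

```ruby
# Raise the number of simulated users step by step; stop when the median
# response time crosses the predefined limit and record that N as the
# stress level. Target URL, step, window and limit are assumptions.
require 'net/http'

TARGET    = URI('http://test-env.example.com/')   # hypothetical test-env URL
STEP      = 10
MAX_USERS = 300
LIMIT_SEC = 10.0                                   # proxy for "not responding"

def median_response_time(users, seconds)
  times = Queue.new
  Array.new(users) do
    Thread.new do
      deadline = Time.now + seconds
      while Time.now < deadline
        t0 = Time.now
        begin
          Net::HTTP.get_response(TARGET)
        rescue StandardError
          # a refused or failed connection still counts as a (slow) sample
        end
        times << (Time.now - t0)
      end
    end
  end.each(&:join)
  sorted = Array.new(times.size) { times.pop }.sort
  sorted[sorted.size / 2]
end

stress_level = nil
(STEP..MAX_USERS).step(STEP) do |n|
  m = median_response_time(n, 60)
  puts format('N=%d: median response %.2f s', n, m)
  if m >= LIMIT_SEC
    stress_level = n
    break
  end
end
puts(stress_level ? "Stress level: ~#{stress_level} simulated users" : 'No threshold reached')
```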
Load/performance/stress/recovery testing (continued)
Recovery testing
• Decrease the load step by step until normal site operation is restored
• Save N – the number of users/requests per second at which this happened. This is the recovery level
• Check the recovery level against the specs or against the previous build's recovery level
Time performance monitoring (client side)
• This is similar to performance testing, but is done on the live/production site under the current load
• A Ruby + HttpWatch script runs permanently in the client environment
• It makes the same (or random typical) requests to the production site every T seconds (T is a parameter), catches all HTTP requests and classifies them against the metrics used, calculates and collects HTTP response times, and puts them into a DB (the monitoring loop is sketched below)
• At Become we collect the following data per request:
  ◦ Request times
  ◦ Requests (queries)
  ◦ Backend response times
  ◦ Rendering start times
  ◦ Image load times
  ◦ Ad load times
  ◦ Tracking pixel load times
  ◦ Javascript times
  ◦ Social media network times
  ◦ Total page load times
• The detailed and aggregated data are also displayed in graphic form on monitors for visual site monitoring; detailed graphs are refreshed every 15 min, aggregated ones hourly (see examples on the next two slides).
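A sketch of just the monitoring loop and the DB write; the real script additionally uses HttpWatch to break timings down per page element, and the table and column names here are hypothetical:

```ruby
# Every T seconds, run one of the typical queries against the production
# site, time it and store the sample for the dashboards. This sketch only
# records the overall HTTP response time; table/column names are assumed.
require 'net/http'
require 'mysql2'

T       = 60                                                    # seconds between probes
db      = Mysql2::Client.new(host: 'db-host', username: 'qa', database: 'monitoring')
queries = File.readlines('typical_queries.txt').map(&:chomp)    # assumed query list

insert = db.prepare(
  'INSERT INTO perf_samples (ts, url, http_code, response_sec) VALUES (NOW(), ?, ?, ?)'
)

loop do
  uri  = URI(queries.sample)
  t0   = Time.now
  resp = Net::HTTP.get_response(uri)
  insert.execute(uri.to_s, resp.code.to_i, Time.now - t0)
  sleep T
end
```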
Become: Performance monitoring (client side) – example
Become: Performance monitoring report (example)
Time performance monitoring (server side)
• All the data needed are extracted from log files, saved in a DB, and exposed in graphic form with R and Rpad/googleVis tools
• See examples on the next slides; a minimal log-parsing sketch follows the example slides
Yuzu: Performance monitoring (server side) – example
Yuzu: Performance monitoring (server side) – example (continued)
Become: Performance monitoring (server side) – examples
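A minimal sketch of the extraction step that feeds those charts: pull response times out of an access-style log and store them in a DB (the charts themselves were built with R and Rpad/googleVis); the log format, regex and table are hypothetical:

```ruby
# Parse response times out of a server log and save them for charting.
# The log line format ("... GET /search ... 200 0.412") and the table
# layout are assumptions for this sketch.
require 'mysql2'

db = Mysql2::Client.new(host: 'db-host', username: 'qa', database: 'monitoring')
insert = db.prepare(
  'INSERT INTO backend_samples (ts, path, status, response_sec) VALUES (?, ?, ?, ?)'
)

LINE = /\A(?<ts>\S+ \S+) .* "(?:GET|POST) (?<path>\S+) [^"]*" (?<status>\d{3}) (?<sec>[\d.]+)/

File.foreach('/var/log/app/access.log') do |line|
  m = LINE.match(line) or next
  insert.execute(m[:ts], m[:path], m[:status].to_i, m[:sec].to_f)
end
```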
Comparative performance monitoring vs. competitors
• A similar approach was used for parallel time monitoring of Become.com and its top competitors
• The same requests were periodically sent to all sites; request times and page element load times were measured and exposed on a display in graphical form
• Execution was done in a cloud environment