Why QA is Important and How it Should Interact with Agile Teams
Chuck Summers
1127 Cass St, La Crosse, WI 54601
972-342-4059
chuck.summers@charter.net
So your CEO corners you and asks why QA is important. After all, shouldn't developers
be able to write good code by themselves?
In software companies, developers are generally focused on feature build-out. This
viewpoint tends to be narrow in scope: get into the code base, make the source changes the
developer or team feels are needed to satisfy the ticket, check in the modifications, and then
move on to the next ticket. QA, on the other hand, is the surrogate voice of the customer
and the guardian of overall business integrity. It is the QA team's responsibility to catch and
identify all issues before they reach the customer. The QA team must ensure that the customer
experience remains intact from release to release. The QA team must also ensure that
non-customer-facing functions continue to work as expected. For e-commerce sites, this generally
means the order-taking / fulfillment path remains flawless: from the customer-driven
shopping cart, through the back-end internal fulfillment / inventory management process, through
the line-item entry in the general ledger, and into the call-center customer / technical support
tools. The QA team must numerically quantify the quality of each and every software release
so that the executive staff can properly weigh the risk of release. This is why QA is an
important counterpart to development.
What are the top three QA metrics / graphs that executives love to see?
The top three QA metrics / graphs that I find executives love to see are:
1. Quality metrics / graphs comparing this release vs. the last release:
• Defect counts by priority level (i.e. critical, high, medium, low)
• Customer reported defects remaining
• Defect density per line of source code
2. Performance metrics / graphs of the site under load
• Response time across typical customer-driven actions (e.g. add to cart, submit order,
receipt of order confirmation, etc.)
• The load level at which customers will perceive the site as unresponsive
• Uptime before expected failure
3. Efficiency / cost measurement of the development and QA teams
• Time between critical production regressions
• % defects caught before release
• Trending story point capacity of development / QA teams
• Trending $ cost of development / QA per feature
• Coverage map of site features to test case(s)
• % manual vs. automated testing
• Trending time / cost to QA each sprint
For all the above, executives want to see trending improvement release over release coupled
with reduction in internal operational cost.
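As an illustration, the release-over-release quality comparison can be sketched in a few lines of Python. The defect counts and line-of-code totals below are hypothetical placeholders, not real data; the point is the shape of the comparison an executive would see.

```python
# Sketch: release-over-release defect comparison and defect density.
# All numbers below are hypothetical placeholders.
defects = {
    "critical": {"last": 4,  "this": 2},
    "high":     {"last": 11, "this": 7},
    "medium":   {"last": 23, "this": 25},
    "low":      {"last": 40, "this": 31},
}
kloc = {"last": 210.0, "this": 225.0}  # thousands of lines of source code

def density(release):
    """Defects per thousand lines of code for a given release."""
    total = sum(level[release] for level in defects.values())
    return total / kloc[release]

for level, counts in defects.items():
    delta = counts["this"] - counts["last"]
    print(f"{level:>8}: {counts['last']:>3} -> {counts['this']:>3} ({delta:+d})")

print(f"density: {density('last'):.2f} -> {density('this'):.2f} defects/KLOC")
```

Charted as a pair of bars per priority level, this gives the at-a-glance trend executives look for: counts and density both falling release over release.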
2. Describe how a QA team should interact with a software development team following
the Agile Scrum methodology. And what would the right ratio of developers to QA
people be?
QA is an integral part of software development. Under Agile, the scrum cycle begins with
ticket grooming, estimation, acceptance, and commitment of work into the next sprint. The
QA team should play an active role in all phases of the scrum cycle. QA activities at a
minimum are:
• The mapping of each feature to be developed to an existing or a new test case
• If a new test case is required, determine:
o Whether a manual or an automated test will be created
o How the new test case will be tagged and stored in the QA test case repository
o The estimated cost of creating the new test case (I prefer BDD-style test specs)
o Create the test
• Develop a customized test / general regression plan for the sprint
• Identify what metrics are required and collect them during the execution of the test plan
• Execute all of the above side-by-side with the development team
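To illustrate the BDD-style test specs mentioned above, here is a minimal given/when/then test written as a plain Python function rather than with a full BDD framework. The `Cart` class and its methods are hypothetical stand-ins for the system under test, not an actual API.

```python
# Sketch of a BDD-style (given / when / then) test case.
# Cart and its methods are hypothetical stand-ins for the system under test.

class Cart:
    def __init__(self):
        self.items = []

    def add_item(self, sku, qty=1):
        """Add qty units of a product (by SKU) to the cart."""
        self.items.append((sku, qty))

    def total_quantity(self):
        """Total number of units across all line items."""
        return sum(qty for _, qty in self.items)

def test_add_to_cart():
    # Given an empty shopping cart
    cart = Cart()
    # When the customer adds two units of a product
    cart.add_item("SKU-123", qty=2)
    # Then the cart reflects the added quantity
    assert cart.total_quantity() == 2
```

Writing the spec in given/when/then form keeps the test readable by product owners during grooming, which is where QA first maps each feature to a test case.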
The QA activities outlined above should be distilled into tickets / tasks in the sprint. These
tasks need to be groomed, estimated, and accepted into the sprint based on QA resource
allocation (i.e. just as developer teams do within the Agile Scrum methodology). The QA team
is accountable for the sprint commitment it makes.
The ideal ratio of QA people to developers is zero. However, that presumes an ideal project
with fully automated test coverage, developer-driven creation of new test cases, and a
continuous integration feedback loop that triggers test execution and reporting on every
developer source check-in.
In general, it is too costly to develop automated tests for every facet of a web site. I find a
ratio of 3-4 developers per QA person to be a reasonable balance. This ratio assumes a
working blend of automated tests covering the underpinning core models / service APIs plus a
well-organized, well-documented set of manual test cases. Each test case should be tagged by
keyword, and test plans should be drawn from the test repository by keyword search. For
instance, if development is reworking the shopping cart in the sprint, QA should be able to
quickly assemble a customized test plan for that sprint from the order / fulfillment test cases.
QA progress through each test plan should be tracked in a red-green-yellow manner (i.e. red
for test case failure, green for pass, yellow for a fluctuating / inconclusive result).
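The keyword-search and red-green-yellow tracking described above can be sketched as follows. The test case names and tags are hypothetical examples, not a real repository.

```python
# Sketch: keyword-tagged test repository with red/green/yellow tracking.
# Test case names and tags below are hypothetical examples.

test_repo = [
    {"name": "add_to_cart",         "tags": {"cart", "order"}},
    {"name": "submit_order",        "tags": {"order", "fulfillment"}},
    {"name": "inventory_decrement", "tags": {"fulfillment", "inventory"}},
    {"name": "password_reset",      "tags": {"account"}},
]

def build_plan(keywords):
    """Draw a sprint test plan: every case tagged with any requested keyword."""
    wanted = set(keywords)
    return [case["name"] for case in test_repo if case["tags"] & wanted]

# Development is reworking the shopping cart this sprint:
plan = build_plan(["cart", "order"])

# Track progress per test case: green = pass, red = fail, yellow = inconclusive.
results = {name: "yellow" for name in plan}  # all start unexecuted / inconclusive
results["add_to_cart"] = "green"             # first case has passed
```

Because plans are assembled by tag intersection rather than hand-picked, the same repository serves both the sprint-specific plan and the full regression plan.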