ACC presentation for QA Club Kiev
  • This is my first appearance at a QA gathering. I regret I did not start earlier – it is a pleasure to be among so many people of the same occupation as me, who want to know more, who want more, and who want to be more.
  • Scanjour R&D in Kiev: a team of 21, 9 of them in test! The Copenhagen part of the R&D department has about the same dev-to-test ratio. We have a good quality-oriented culture (unit tests and sometimes TDD, developers helping with testing), test automation, and modern tools. Things are going well, but there is room to improve. Not enough sync: sometimes people synchronize only after a bug is found – we want that to happen earlier in the process to reduce the cost of fixing defects… So what are the improvement areas? List and explain. These are our areas to improve. How do we see the direction for our progress?
  • We want to discuss the test object team-wide, in a STRUCTURED and easy manner. We want a short but still descriptive summary of the discussion result so we can prioritize the test effort, which lets us know what to test, what not to test under given time and resource constraints, and where to put additional effort if we have a free buffer.
  • The name “ACC” indicates the three dimensions through which we look at the product. ACC has no ‘date of start’. It was probably first mentioned ‘globally’ around 2010 by Whittaker, who started practicing it at Microsoft (for the MTM 2010 we are currently using) and brought it to Google. Google has self-developed tools to support the method, “Testify”, a.k.a. “Google Test Analytics”; the tools are not public (yet). There is almost no information on the Internet or offline, so we (Scanjour) have taken the idea and developed the method ourselves: the theory (basic definitions and explanations) and an implementation based on the tools we use in our teams.
  • I have heard many times: “But this is Google! This is Microsoft! Don’t even try to compare!”… Let’s see what the method is, and whether there is any magic in it.
  • Attributes are the qualities of the product that users and customers pay for – what solves their problems and satisfies their needs. Examples: functional, secure, reliable, user-friendly, fast… This is what POs, sales, and executives brag about. Components are the technical pieces of the product you can find in architectural or design documents: libraries, classes, parts of the UI, files of code or configuration, or groups of those serving some architectural or design need. This is what developers will tell you they work on. Examples: a dialog in the UI, a form to enter or modify data, the installer, an API, a configuration mechanism… Capabilities are small actions the product can do, serving both user needs and infrastructure needs. Examples – user needs: search, update item data in the DB, validate form input, repair the installed product, accept keyboard shortcuts; infrastructure needs: clean up, prioritize jobs, balance load.
  • A Capability is technically implemented in a Component and contributes to one of the product’s Attributes. A pair of an Attribute and a Component can have any number of Capabilities. Now it is time to quantify each Capability with several parameters, so we can compare them to each other and understand where we need more analysis and test.
  • How often the user calls the capability; for infrastructure capabilities, how often the capability is called automatically (by other capabilities or external systems).
  • The equation is a variation of the risk equation (Risk = Probability * Impact); the distribution of Testing Needs in ACC corresponds to the distribution of classical risk. All team members are free and very welcome to share their opinions and influence the numbers. However, practice shows that some numbers are influenced more by some roles: Complexity is mostly a ‘tech’ number, less a ‘business’ one; Frequency is 50-50 tech and business; Impact is mostly a business number.
  • The next slide is probably the most important in the presentation. It SHOWS THE MAIN VALUE OF THE METHOD.
  • Once again – every team member, every role has contributed: PO, devs, testers. So this is the most objective model we can have. …And the result fits on 1-2-3 screens, depending on how deep you want to dig. Bonus: while discussing the numbers, people share their business and technical knowledge and opinions with each other, which is great for a Scrum team. The team becomes much more in sync about the product, which is extremely important when the team does not rely heavily on documentation.
  • Now we have the complete list. This is a model, and we can see different things when we look at it (and at the object it represents) from different perspectives (next two slides).
  • We see how much we have implemented to enhance each of the product’s attributes (column totals), and which components are the ‘heaviest’ (row totals). In the example, we can see that we did more for the product to be compatible than to be functional. This is natural, as the product heavily uses one external product and vitally relies on another.
  • We see where we need more analysis and test. It is easy to get an overview by single capabilities (cells), but also by attributes (column totals) and components (row totals). In the example, we can see that the Crawler component needs more test, even though it has half as many capabilities. The reason is that the Crawler is used more frequently and its crash has an extremely high impact (the system does not fulfill its intent).
  • “No time!” – Plan the ACC work and perform it, instead of doing ACC only when you have no other tasks. “It shows what we already knew! Why do we need it?” – Build the model step by step, without trying to reach a certain level of testing need; otherwise you see what you want to see! Also remember that putting our knowledge into writing is what we always do when we create or update documents – and that does not mean we don’t need documents.
  • Capabilities are too small to test separately – try another level of detail in the ACC model. It is hard to bind test cases to capabilities (one test case covers many, and they overlap a lot!) and hard to bind bugs to capabilities (it is hard to define which capability a bug belongs to, especially when there are many capabilities, >100 or >200). This is solved in two ways: (1) Reconsider (decrease?) the model’s detail level – you will get less overlap between test cases covering capabilities, and each capability will represent more code, so it will be easier to define which capability a bug belongs to… It is all about balance. (2) Use the ACC model: think of testing in terms of ACC. You don’t just execute test cases – you TEST CAPABILITIES. At any moment you know which capability you are testing, and you find most bugs in what you are currently testing, so it is much easier to link bugs to capabilities. Conclusion: there are many good methods, and each of them can succeed or fail. This method belongs to the good ones. Let us remember why: Structured – there is a structure for discussion and an algorithm for actions; the method is reproducible and usable. Whole-team – everyone contributes to the model and shares knowledge with everyone. Easy-to-use result – a short, good-looking result that can easily be analyzed and presented inside and outside the team.
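The scoring described in the notes above can be sketched in a few lines (a minimal illustration, not the Scanjour tooling; the capability names and scores are invented):

```python
def testing_needs(complexity, frequency, impact):
    """Testing Needs = Complexity * Frequency * Impact, each scored 1..5 by the team."""
    for score in (complexity, frequency, impact):
        if not 1 <= score <= 5:
            raise ValueError("use the full 1-to-5 scale")
    return complexity * frequency * impact

# Hypothetical capabilities: (attribute, component, capability) -> (C, F, I)
scores = {
    ("Reliable", "Crawler", "prioritize jobs"): (3, 5, 5),
    ("Fast", "UI", "validate form input"): (2, 3, 2),
}

for (attribute, component, capability), (c, f, i) in scores.items():
    # A high product marks the capability as needing more analysis and test.
    print(capability, testing_needs(c, f, i))
```

Because the three factors multiply, a capability that is both frequently used and high-impact dominates the ranking even at moderate complexity, mirroring the classical risk equation the notes refer to.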
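The matrix views from the notes (column totals per attribute, row totals per component) can likewise be sketched with plain dictionaries; all scores below are invented for illustration:

```python
from collections import defaultdict

# (component, attribute) -> summed testing-needs score for that cell
needs = {
    ("Crawler", "Reliable"): 75,
    ("Crawler", "Fast"): 15,
    ("UI", "Fast"): 12,
    ("UI", "User-friendly"): 20,
}

row_totals = defaultdict(int)   # per component: the 'heaviest' rows need the most test
col_totals = defaultdict(int)   # per attribute: where the product needs the most attention
for (component, attribute), score in needs.items():
    row_totals[component] += score
    col_totals[attribute] += score

print(dict(row_totals))  # the Crawler outweighs the UI despite having fewer cells
print(dict(col_totals))
```

Reading the same cell scores by row and by column is exactly what makes the two matrix perspectives (capability count vs. testing needs) cheap to produce from one model.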
  • Transcript

    • 1. Balancing Your Test Effort: Planning test with Google’s approach. Nikita Knysh. Ciklum, August 17, 2011
    • 2. The Speaker • Nikita Knysh, 30 y.o., ~11 years in IT • Background: IT education, webdev, support+lead, PM, TW, BA+lead, ISTQB FL • Now: 4 years with Scanjour, Test TL
    • 3. Agenda • The Challenge • The Directions • The Method • The Tools • Probs & Cures
    • 4. The Challenge • What to test if there is not enough time? • What to test if we have a buffer? • Much test, few bugs • No overview of test needs • Not enough sync between roles
    • 5. The Directions • Discuss • Summarize • Prioritize… team-wide & easily. So what’s the right approach then?
    • 6. The Method: Origin • ACC (Attribute-Component-Capability) • James A. Whittaker, Test Director at Google, 2010
    • 7. Microsoft and Google use this. Can we?
    • 8. The Method: Concept • List the product’s selling points (Attributes) • Break down the product into tech Components • Break down the product based on WHAT it does (Capabilities). We get a model that reflects all the vital views on the product!
    • 9. The Method: ACC Modeling • ACC list • Time to give it some numbers!
    • 10. The Method: Giving it Numbers • Complexity increases the risk of human mistakes during code development and maintenance, and therefore the risk of introducing bugs • Use the full scale (1 to 5) • Track averages
    • 11. The Method: Giving it Numbers • Complexity factors: unit test coverage
    • 12. The Method: Giving it Numbers • Frequency of Use: how often the capability is called by the user or automatically, and therefore how often failures caused by defects in the code will most likely occur
    • 13. The Method: Giving it Numbers • User Impact: damage dealt to the user and/or the system’s intent should the capability fail completely or severely
    • 14. The Method: Outcome • Testing Needs = Complexity * Frequency * Impact
    • 15. Now we know where the risk is. Now we know where we need more test. And…
    • 16. Our knowledge is based on the accumulated vision of the whole team! …and it is extremely easy to overview!
    • 17. The Method: Outcome • ACC list, now with numbers
    • 18. The Method: Outcome • Matrix view of capability count
    • 19. The Method: Outcome • Matrix view of testing needs
    • 20. The Tools: How We Do It • ACC items are TFS work items • ACC linked to TCs and bugs for metrics • An Excel book for each model • Two-way sync between TFS and Excel • Instant update: the DWH cube is avoided • Pivot tables and charts make the beauty
    • 21. The Tools: How It Looks • Model overview in Excel
    • 22. That easy? Really?
    • 23. Probs & Cures • “No time for modeling!” Include it in the DoD. • “It shows what we knew!” Be strong! Don’t manipulate! + True for any document.
    • 24. More Probs & Cures • Can’t test individual capabilities: reconsider the product breakdown. • Hard to bind test cases and bugs to capabilities: reconsider the model’s detail level; think starting from ACC, not test cases.
    • 25. Thank you! • Questions & Answers
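Slide 20’s tool chain (TFS work items two-way synced to an Excel book) is proprietary, but the shape of the exported model can be sketched with the standard csv module; every field and number below is invented for illustration:

```python
import csv
import io

# Invented ACC rows: (component, attribute, capability, complexity, frequency, impact)
rows = [
    ("Crawler", "Fast", "search", 2, 5, 5),
    ("UI", "User-friendly", "validate form input", 3, 4, 2),
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["Component", "Attribute", "Capability",
                 "Complexity", "Frequency", "Impact", "TestingNeeds"])
for component, attribute, capability, c, f, i in rows:
    # Testing Needs = Complexity * Frequency * Impact (slide 14)
    writer.writerow([component, attribute, capability, c, f, i, c * f * i])

print(buf.getvalue().splitlines()[1])  # first data row, ready for an Excel pivot table
```

A flat table like this is exactly what Excel pivot tables and charts consume, which is how the matrix views of slides 18-19 can be produced without a DWH cube.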