Role of QA and Testing in Agile
  • To follow the waterfall model, one proceeds from one phase to the next in a purely sequential manner. For example, one first completes the requirements specification, which is then set in stone. When the requirements are fully complete, one proceeds to design. The software in question is designed and a blueprint is drawn for the implementers (coders) to follow; this design should be a plan for implementing the given requirements. When the design is fully complete, the coders implement it. Towards the later stages of this implementation phase, the disparate software components produced are combined to introduce new functionality and remove bugs.
  • Agile testing is all about applying agile values and principles to testing. Agile testing is the set of good practices that helps the agile team deliver high-quality software.
  • The key here is to make the entire development team, not just testing or QA, responsible for testing and quality. Automation is the mantra in agile testing. In agile, it is important to automate all unit and regression tests and to integrate them with a continuous integration framework, so that testers are free to concentrate more on exploratory testing.
  • TDD is one software development technique which ensures that the software being developed is 100% covered by unit test cases. How so? Because TDD says to write the test case first and the actual code afterwards. The steps of TDD are:
    1. Write the unit test cases for the features to be developed in the sprint.
    2. Run a build to make sure that it picks up the new test cases and fails.
    3. Write just enough source code to cover the test cases.
    4. Run the build again to confirm that it passes.
    5. Because the developer has written only enough code to cover the test cases, refactor the source code to make it readable by others (adding proper comments) and to simplify its structure without changing the behavior of the feature developed.
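The red-green-refactor cycle described above can be sketched with Python's unittest module. This is a minimal illustration, not code from the talk; the `slugify` function and its behavior are hypothetical examples.

```python
import unittest

# Step 1: write the test first. Before slugify() exists,
# running the suite fails (the "red" step).
class TestSlugify(unittest.TestCase):
    def test_lowercases_and_joins_words(self):
        self.assertEqual(slugify("Agile Testing"), "agile-testing")

# Step 3: write just enough code to make the test pass ("green").
def slugify(title):
    return "-".join(title.lower().split())

# Step 5: refactor freely; the test guards the behavior.

if __name__ == "__main__":
    unittest.main(exit=False)
```

Each pass through the cycle adds one failing test, the minimal code to satisfy it, and a refactoring step with the suite as a safety net.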
  • Sometimes we have to find the most important bugs in a short period of time; this is called exploratory testing. Exploratory testing is like a chess game with a computer. The tester revises his plans after seeing his opponent's move, and all his plans can change after one unpredictable move. Can you write a detailed plan for a chess game with a computer? A player uses all his knowledge and experience, yet he can define only the first move in the game; it is unreasonable to plan far ahead. You can plan one move ahead, or twenty if you are a very experienced player, but you cannot plan the whole game, and to plan twenty moves a player would have to spend a lot of valuable time (the clock is ticking). Of course, he tries to gather information about the situation between moves. This is exactly what an experienced exploratory tester does. After running any test case, a tester may need to find additional information about the application and system from a developer, system architect, business analyst, or perhaps from the literature. A lot of information is necessary for correct exploratory testing, and other teams whose products are used in building the application influence our testing too. Just as a star may seem dim in the spectrum of visible light yet burn brightly in the infrared, the simple idea of exploratory testing becomes interesting and complex when viewed in the spectrum of skill. Consider chess: the procedures of playing chess are far less interesting than the skills. No one talks about how wonderfully Emanuel Lasker followed the procedures of chess when he defeated Steinitz in 1894 to become world champion. The procedures of chess remain constant; what changes are the choices, and the skill of the players who make them.
What makes exploratory testing interesting, and in my view profoundly important, is that when a tester has the skills to listen, read, think and report, rigorously and effectively, without the use of pre-scripted instructions, the exploratory approach to testing can be many times as productive (in terms of revealing vital information) as the scripted variety. And when properly supervised and chartered, even testers without special skills can produce useful results that would not have been anticipated by a script. If I may again draw a historical analogy, the spectacularly successful Lewis and Clark expedition is an excellent example of the role of skill in exploration.
  • A day in the life of a Quality Engineer

Hi! My name is Cue Aye. I'm a Quality Engineer, one of the few who make sure that we're building the right product in the right way. I work closely with the Product Manager and with the developers too.

My normal day, during an iteration, looks something like this:
    - I come to work with a smile, around the same time as my team mates. It is essential that I maintain a pleasant disposition... that's because I know I'm in for a tough day, especially if the developers haven't been doing a good job!
    - I open my email client and start downloading my waiting emails. In parallel, I log in to JIRA and open my Personal Work Queue, already sorted by priority.
    - I decide, according to the estimates I signed up for, how many of those work items I can complete or continue today, and print that list with the issue summaries.
    - I grab a coffee and chat with a colleague while my tasks for the day get printed, then go over the work item summaries as I walk over to our stand-up room.
    - During the stand-up, I listen to everybody speak, offer help if someone needs it (but discuss it in a follow-up), and honestly answer the three questions about my work progress.
    - Back at my desk, I start working on the tasks from the printed list, highest priority first.

What I do next depends on the kinds of tasks I have signed up for:

Writing acceptance criteria (usually substituted by writing Test Cases). This task is closely linked to a business requirement. I need to understand the requirement as well as I can, using both my domain knowledge and frequent discussions with the Product Manager. There is quite a bit of back-and-forth while fleshing out the acceptance criteria: I add a few points (mostly steps to be performed by an end user), the Product Manager reviews them and gives his comments, and I update the issue accordingly. When both the Product Manager and I are happy with the criteria we've listed, I resolve the work item issue.

Writing a Test Case. This task is again closely linked to a business requirement and is very similar to writing acceptance criteria; the only difference is that test cases are more in-depth and cover many more scenarios (such as positive and negative cases). I use the Test Management System to register the test case against the business requirement, which helps maintain traceability.

Automating a Test Case. This usually goes hand in hand with writing a test case; we strive to automate as many of the tests as possible. I use the test automation tools chosen at the beginning of the release, depending on the nature of the product and the test automation strategy adopted. I write the scripts, either programmatically or with a point-and-click approach, and run them again and again on my local machine, tweaking the script after each run until execution is smooth and error-free. I make sure I handle all the exception cases too: the failure of one script shouldn't prevent other scripts from running. Once confident of a script, I check it into our project's code repository, mentioning the work item issue id in the commit comment. The Test Management System should notice the check-in and automatically link the script to the related Test Case, which helps automate execution of the test scripts. I test whether the checked-in scripts work as intended, and then I resolve my work item issue.

Running an automated Test Case. This is carried out when QA has received a Release Candidate Build for testing. Using the Test Management System, I select the Test Case(s) to be executed and give the command to run them. Usually that is all I need to do, because the rest is handled by the Test Management System itself: it checks out the test scripts, executes them against the specified build in the QA staging environment, and logs the results in the Test Set Execution issue. If there are failures, Defect issues are created automatically. When that is not possible for some of the scripts, I check them out and run them myself, noting the results, logging them in the Test Set Execution issue assigned to me, and creating Defect issues myself for any failures. Once done, I resolve my work item issue.

Executing a Test Case manually. It is sometimes necessary to execute a Test Case manually, when it cannot be automated. I interact with the Release Candidate Build provided to the Quality team, making sure the prerequisites of the Test Case are satisfied, then perform the steps mentioned in the Test Case, noting any discrepancies and, of course, errors. I log the results in the Test Set Execution issue assigned to me and create Defect issues for any failures. Once done, I resolve my work item issue.

Every now and then I note the time I'm spending on the various issues, jotting it down on the printed sheets to update online later. At the end of the day, it's time to go home... I go over my notes on the printed issue summaries, update the issues online, and log my work if I haven't done so already. Hmm... I'm satisfied with a good day's work.
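The note that "the failure of one script shouldn't prevent other scripts from running" is essentially what a result-collecting test runner does. A minimal sketch in Python, with hypothetical check names standing in for real automated test cases:

```python
def run_suite(checks):
    """Run every check, catching failures so later checks still run."""
    results = {}
    for name, check in checks:
        try:
            check()
            results[name] = "PASS"
        except Exception as exc:
            # Record the failure; a real setup would log a Defect issue here.
            results[name] = f"FAIL: {exc}"
    return results

# Hypothetical test scripts for illustration.
def login_works():
    assert 1 + 1 == 2

def search_works():
    raise AssertionError("no results returned")

report = run_suite([("login", login_works), ("search", search_works)])
```

Mainstream frameworks (unittest, pytest) already isolate failures this way; the point is simply that one broken script never hides the results of the others.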

    1. Role of QA and Testing in Agile - Mayank Gupta
    2. Time for Introduction
  • Working with GlobalLogic India
  • Practicing Agile for the last 3 years
  • Certified Scrum Master (CSM), OCP, ISTQB
  • Published articles in Scrum Alliance & CM Journal
  • Blog -
    3. Agenda
  • Agile Testing - An Overview
    • Traditional vs. Agile Approach
    • Agile Values
  • Agile Testing Practices
    • Test Driven Development
    • Continuous Integration
    • Regression Testing
    • Exploratory Testing
  • Role of a Tester in an Agile Project
  • A Case Study
    4. Let us brainstorm
  • Does agile help QA?
  • What is the difference between testing in a traditional environment and an agile environment?
  • Do testers and QA people have a role in an agile software development environment? If yes, what?
  • Why is QA so neglected in agile?
  • Do developers and testers remain at opposite poles, as (though not always) happens in the traditional testing model?
    5. Traditional software development
    6. Agile testing - Why?
    7. Traditional vs. Agile Approach

  Criteria                   | Traditional                                    | Agile
  ---------------------------|------------------------------------------------|-----------------------------------------------------------------------
  Planning                   | Plan a one-time delivery which happens way later | Continuous sprint-by-sprint planning; deliver important features first
  Quality responsibility     | Quality Engineer                               | Entire team
  Designing of test cases    | All upfront                                    | Evolving, iteration-wise
  Progress review            | Milestone document review                      | See working software every iteration and every release
  Managing change            | Prohibit change                                | Adapt and adjust at every release and iteration boundary
  Understanding requirements | Upfront                                        | Constant interaction within the team and with the Product Owner
    8. Agile Values
  We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:
  • Individuals and interactions over processes and tools
  • Working software over comprehensive documentation
  • Customer collaboration over contract negotiation
  • Responding to change over following a plan
  That is, while there is value in the items on the right, we value the items on the left more.
    9. Agile synthesized
  • Reference - "Agile QA Testing" by Elisabeth Hendrickson
    10. Agile Testing - Definition
  • Agile testing is all about applying agile values and principles to testing. The value of agile testing lies in effective communication between developers, testers and the product owner.
  • Testers write acceptance test cases and get them approved by the product owner. At the same time, coding is driven by these customer-facing tests, which is more likely to deliver better software.
  • In the end, a story is not complete until testing is finished.
    11. Focus of Agile Testing
  • High-value features first
  • Continuous Integration with pre/post-release build automation
  • Test Driven Development
  • Automation of unit & regression testing
  • Automated acceptance testing
  • Exploratory testing
    12. Agile Testing Practices - Within an Iteration
  • Automated acceptance tests - define "Done", represent requirements
  • Automated unit tests - derive "Design", represent specifications
  • Manual exploratory testing - provides additional feedback
    13. Test Driven Development (TDD)
  • A software development technique which ensures that the software being developed is 100% covered by unit test cases
  • Write the unit test case
  • Execute the build -> test fails
  • Write just enough code to cover the test
  • Execute the build -> test passes
  • Refactor
  [Cycle diagram: write the test -> execute the build (fail) -> code -> execute the build (pass) -> refactor]
    14. Continuous Integration
  • "Continuous Integration is a software development practice where members of a team integrate their work frequently; usually each person integrates at least daily, leading to multiple integrations per day. Each integration is verified by an automated build (including test) to detect integration errors as quickly as possible." - Martin Fowler
  • Pre-release builds (hourly/nightly) help developers improve the quality of the source code; release builds help in automating the release process.
    15. Continuous Integration
  • It is essential that the code repository is centralized and that unified tools are used
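In Fowler's terms, each integration must be verified by an automated build including tests. The gate at the heart of that build can be sketched with Python's unittest runner; this is an illustrative sketch, and the inline `SmokeTest` stands in for a real project's test tree:

```python
import unittest

def ci_gate(suite):
    """Run the suite; return an exit code for the CI server (0 = pass).

    A non-zero code fails the build, which is how integration errors
    surface as quickly as possible.
    """
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    return 0 if result.wasSuccessful() else 1

# On a CI server the suite would come from discovery, e.g.:
#   suite = unittest.defaultTestLoader.discover("tests")
# Here a tiny inline case stands in for the project's test tree.
class SmokeTest(unittest.TestCase):
    def test_truth(self):
        self.assertTrue(True)

exit_code = ci_gate(unittest.defaultTestLoader.loadTestsFromTestCase(SmokeTest))
```

The CI server simply calls such a gate on every commit and reports the exit code; any failing test breaks the build for the whole team.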
    16. Regression Testing
    17. Exploratory Testing
  • Exploratory testing is concurrent test design, test execution, test logging and learning, based on a test charter containing test objectives, and carried out within time-boxes.
  • Used at higher test levels to complement systematic (automated) testing
  • Exploratory testing is like a chess game with the application
    18. Role of a Tester in an Agile Project
  • Testers are an integral part of the team
  • Participate in release/iteration planning
  • Start testing activities from day 1
  • Collaborate with the customer to define the acceptance test criteria, validating that the system does exactly what it is supposed to do
  • Test stories once they are complete
  • Focus on test automation
  • Focus more on exploratory testing
  • Practice pair testing (similar to pair programming)
  • Collaborate with the development team
  • Provide continuous feedback to the team
    19. A Day in the Life of a Tester
  • Start by looking at the prioritized personal work queue
  • Identify the work items for the day
  • Attend the stand-up meeting
  • Collaborate with the dev team (seeking and offering support)
  • Depending on the type of work item:
    • Write acceptance criteria
    • Write a test case
    • Automate a test case
    • Execute a test case (manually)
    20. Why does it fail?
  • It's hard!
  • Agile does not fix everything
  • Are we ready for a change?
  • Practicing TDD is not easy
  • The move from manual testing to automated testing
  • Agile testing needs a coder-tester profile
  • Products are expected to be delivered faster
  • Partial adoption?
    21. How does GlobalLogic do it?
  • GlobalLogic applies a unique method and platform for distributed software development, "GlobalLogic Velocity™"
  • "GlobalLogic Velocity™ is an Agile product engineering method supported by an innovative collaboration platform, software frameworks and reusable software objects that together help bring high-quality software products to market faster and with less risk."
  • The Velocity Method is the GlobalLogic framework of processes, templates, and behaviors which optimizes communication and provides just enough structure to effectively manage distributed product engineering teams.
  • The Velocity Platform is an integrated suite of tools and systems that aids distributed product development by facilitating end-to-end collaboration and tracking of issues through Requirements Engineering, Iteration Planning and Release, Test Engineering, and Release Engineering.
    22. Velocity - A case study
    23. [email_address]