Revolution, Evolution and 'Lean: A Test Process Improvement Diary From Copenhagen To Manchester' by Tapani Aaltio
The move from a waterfall life cycle to an agile one is not straightforward for people with long experience in software development and testing. The ways of working are in their backbones and hard to change. In this kind of situation, taking small steps in the right direction simply takes too long. You need to throw away your old process (= revolution) and then improve the new process constantly (= evolution). Usually, it is not possible to have a full-scale revolution at once. That makes the evolution even more important, to get rid of the bad practices from the time before the revolution. An effective approach to speed up the evolution is to take the seven wastes of lean management as a guideline – to detect and remove relics not blown away by the revolution.



This presentation is my diary of test process improvement, spanning from Eurostar 2010 in Copenhagen and the Workshop on Lean Test Management (by Bob van de Burgt & Iris Pinkster) to the Eurostar 2011 conference in Manchester. It highlights, in interesting and surprising ways, the evolution of three agile teams in a major Finnish company.



Waste is often produced by applying good practices in the wrong place, e.g. because of a weak test strategy or by involving too many people in defect management. Waste can also be caused by very practical things like bad seating arrangements or people not attending meetings. However, the biggest waste is to optimize the wrong things – e.g. the detection of defects instead of their prevention. These are examples of the things that have come up so far, but there will be more during the year. I will also introduce some practical and light ways to measure the consequences of the waste.

  • My name is Tapani Aaltio; I come from Sogeti Finland. I've been in IT for as long as I can remember, in various roles. I survived the waterfall era, and I'm here to tell you about my transformation to agile and how I've used lean ideas in improving the agile test process. The change hasn't been easy; some revolution and evolution was needed on the way. There have been some defining moments in my career: when I was a developer, I picked up a book on data flow modeling, then I became interested in requirements management, CASE tools, and so on. Much later, I became a tester. In other words, I have seen a lot of good practices on my way, and the best ones I've put in my toolset. When you do that for long enough, you have a heavy bag full of tools to carry around. This story is about re-evaluating that toolset with agile and lean. This is not a course in agile or lean. This is my diary of test process improvement in my recent projects, from Eurostar 2010 in Copenhagen to Eurostar 2011 in Manchester. Transition: Let's start with the revolution…
  • In a revolution, drastic and far-reaching changes usually happen within a short time. Even the most basic values can be thrown away overnight. In a revolution there are both winners and losers. For the winners, the revolution means freedom from the old values and restrictions, and a chance to create new values. Revolutions really have changed the world a lot. They have changed the way we think and behave, which doesn't normally happen easily. Think about the consequences of the French Revolution, the Russian Revolution of 1917 and the Chinese Revolution (1927–1949), not to mention the fall of the Berlin Wall in 1989. So what does this have to do with software development and testing? Well, guys like me, with toolsets packed with old values, need a revolution to blow away the old values and make room for new, fresh ones. Transition.
  • This story actually began three and a half years ago, when I was assigned to create a test process for the IT department of a major Finnish company. The department was struggling with adopting an agile life cycle. They had a test process, maybe a bit like the TMap process applied in the waterfall world, and they used ideas from the V-model of testing. They wanted to do scrum and agile, where you have the product backlog, you pull items to the sprint backlog and you create working software in a 2-4 week sprint, testing included. I didn't have any experience of agile development or testing, but I had created testing processes before. Pretty soon it was clear to me that moving from waterfall, or any variation of it, to agile is not straightforward. In fact, agile requires a different mindset. Just following the agile practices, like having a 15-minute stand-up meeting every morning, does not make you agile. Luckily I had access to several agile projects. One of the teams that I worked with was extremely effective and efficient and seemed to know exactly what they were doing. At first it had been hard for me to understand why they were good, because they seemed not to follow any process at all – or so it seemed. They simply discussed what had to be done and then they just did it, and the same happened the next day, and the next, and so on. They had thrown away most of the good practices, like having an approved project plan or test plan, or having approved specifications before starting coding. They automated all their test cases, and for the part that they couldn't automate they didn't have detailed test cases, just some vague ideas on what should be tested: exploratory testing. Eventually, I got my assignment finished with a process description. It included a process flow picture with about six boxes. Come to think of it, maybe the process description with the boxes was just something I wanted to have myself. The real messages of my final report were in agile slogans like "trust the team", "test levels are concurrent", "instant feedback loop from developer to tester", "continuous feedback and reporting", and that the ideas in TMap and in the V-model can also be applied in an agile team, but in a different way. Transition: Sometime later I was assigned to work on an agile team. This is where the adventure starts.
  • A few words about our project… I am the quality manager of this team. Transition: Before discussing some experiences from the project, let's go back to Copenhagen for a few minutes.
  • At Eurostar 2010 in Copenhagen, I attended a full-day workshop on Lean Test Management by Bob van de Burgt & Iris Pinkster. They presented the principles and background of lean management, and we did some exercises on the subject. Most of all, they presented the seven wastes:
    Overproduction: you produce something JIC (just in case); it isn't really needed right now, or maybe ever. A new feature that doesn't give any value to the customers, or a detailed test report that nobody will read.
    Waiting: goods are not moving or being processed. You produce something too early and won't be able to collect the cash until later, or you implement something now that the testers will test much later.
    Unnecessary inventory: a direct result of overproduction and waiting. You have to keep track of things that are WIP; this doesn't give any value to your customers. Typical management work we used to value before.
    Transporting: transporting doesn't add any value either, but costs money. E.g. deploying the test object to test environments. It has to be done, but it should be minimized and optimized.
    Inappropriate processing: as they say, "if you have a hammer, all the problems look like nails". A genuine example of this is using a heavy defect management process in a small team.
    Excess motion: well, testers don't usually move that much, but if we consider mental bending, stretching and so on, we have a case here: searching for documents in the repository, using clumsy tools to investigate failures, etc.
    Defects: as testers we all know how much waste defects produce; all that investigating, classifying, prioritizing, fixing and verifying. Ouch!
    That workshop in Copenhagen really hit me. I had heard of lean and the seven wastes before, maybe even browsed through an article in some magazine, but after Eurostar 2010 I began to see the daily routines in a different light. I started questioning some of the things I had taken for granted. So thank you, Bob and Iris, for an inspiring workshop. In the lean spirit, I've tried to keep this whole presentation as simple as possible. I haven't read any books or attended any more seminars on lean. I have just tried to think lean and wanted to see if that would lead to any results. Transition: And now it's time to discuss the first practical example of avoiding waste.
  • Which method would you choose to estimate the effort for your project: working out a detailed work breakdown structure with estimates for all tasks, playing poker, or maybe the crystal ball? We use planning poker to estimate the complexity of the backlog items. We have a bunch of people in a meeting room: developers, architect, testers, product owner. We discuss each candidate backlog item, one at a time – its business value, impact etc., just enough information to understand the item on a high level. The backlog items have been prepared before the meeting by the product owner and architects. At the end of the discussion on an item, each participant gives his/her estimate of how complicated the item is. The estimate is given by showing a card with a number on it. The cards used are not poker cards but cards with Fibonacci numbers (1, 2, 3, 5, 8, 13, …). The estimate is given in story points. A story point relates to complexity rather than time or effort. If the estimates don't match, we discuss until we find a common opinion. Using this information, it is possible to make a rough release plan. From history, we know what our velocity is – that is, how many story points we are able to handle in one sprint. We use one hour each week for estimation. In one meeting, we handle 5-10 items. The only documentation produced is the estimate in story points. If an item appears to be unclear, it is returned to the architects or the product owner. Also for the testers, this is an awesome practice. You are there on day one when a new feature is introduced to the team. You can assess the risk and the scope of testing before the estimate is made, not afterwards. The developers and testers can create a common view of the item early on. The tester becomes an active member of the team, not just someone who is supposed to wait around until everything is done and then check if it's OK. I really like planning poker. Compared to the traditional work breakdown structure, you create so much less waste. No more extensive detailed planning for things that might or might not happen in the future – a genuine example of overproduction, creating something that is not needed now or maybe ever, planning just in case. I guess a WBS can also be applied efficiently in bigger projects, but in a scrum team it is no doubt inappropriate processing. Transition: OK, planning poker makes it easy to estimate without creating waste. How about organizing the work?
  • Scrum gives us two nice tools to avoid creating waste in organizing the work: the definition of done and the sprint commitment.
    Definition of done: In each sprint, the team produces a potentially shippable piece of software. That piece has to be done, in every aspect. Basically, the team shouldn't ever have to revisit the piece. Each scrum team has to define its own definition of done, and here the testers have a big role. The developers might consider things done when implementation is done, and the testers should not accept this. Saying "It's done but I haven't tested it yet" does not make sense. Implementation done does not equal done. Testing all aspects of the system has to be part of the definition of done.
    Sprint commitment: In sprint planning, the team takes items from the prioritized product backlog, adds them to the sprint backlog and commits to implementing them in the sprint. So it's up to the team to decide how much work they can do; it is not a decision of anyone else, like their manager. There has to be a moment of truth for the team, with the questions "can we make it?" and "what potential impediments do we have to tackle?"
    Team commitment: It must be a team commitment. Sometimes testing something is more complicated than the implementation, so testing takes more time. The team has to find a balance between implementation and testing. That is, the throughput of the developers must not be more than what the testers can handle. That would lead to overproduction by the developers and to unnecessary inventory, with lots of items being implemented but not tested. Plainly put, the team has to slow down in this kind of situation, a decision that is not easy to make. It takes a lot of courage to say to your manager that we cannot do this as fast as you'd like us to. We ended up in a situation like this during the year. We slowed down; we put fewer backlog items in the sprints. After that we had a few sprints where some developers complained that they had almost nothing to do. Pretty soon some of them learned that they could help the testers, making us a better team. So, to avoid overproduction, waiting and inventory inside the team, the developers and the testers should be on the same page. One way to achieve this is to have the team sit together at the same table, and to have them sometimes forget their roles of developer and tester to achieve the sprint commitment as a team. Transition: Let's look at some examples of good practices turned to waste…
  • Legacy test tool & test automation framework: For functional testing, we have two tools in use: an integrated test tool from a major vendor, for manual testing only, and a test automation framework that we have created in-house. Before, we used the integrated test tool extensively for test management. We planned our test cases in it, both automated and manual. Then we set the priority for each case in the integrated tool and reviewed them using the review procedure of that tool. Then we added the cases to automate to our test automation framework, wrote the code and added them to the execution pool to have them executed.
    Prioritizing test cases is waste (overproduction): The test case priorities were set early in the sprint. The idea was that in case we wouldn't have time to test everything, we could choose the test cases to run according to priority. I strongly supported this idea when we started to do this, but it turned out to be waste: overproduction. Instead of prioritizing "just in case", we now prioritize "just in time". Sometimes we don't have time to run all the manual test cases and we drop some of them, mostly regression test cases. With good knowledge of the system and the risks, just-in-time prioritizing goes smoothly and you always have the latest information available.
    Linking requirements to test cases (inappropriate processing): We also wanted to link requirements to test cases to see the test results by user story. The testers started to link the test cases to user stories. So now the testers had only 7 small steps to do in test planning: create the test case in the test tool, set the priority, link it to the right user story, have someone review it, add it to the test automation framework, create a link between the test cases in both tools, write the script code – and voilà! the test was ready to run. Unfortunately, only 3 of these 7 steps provided any real value. The rest was waste: overproduction and inappropriate processing. We were so busy setting up and maintaining this monster process that we needed a small revolution in our minds. We wanted to have the linkage from user stories to test cases, we wanted to use the nice review process of the test tool, and we wanted to report the results by test case priority. We didn't want to implement all this in the test automation framework, since we already had it in the test tool. Fortunately, evolution took care of this. We had to spend too much time figuring out which test cases weren't reported to the test tool, making sure that all the testers had linked their test cases to user stories, and that the priority had been set for each case. In the end we noticed that all this doesn't give us any real value. So we ditched the integrated test tool for everything but manual testing. We implemented a light review process in our test automation framework. We don't link user stories to test cases anymore. We focus on setting up the right automated test cases to make sure our software works as specified, and then make those test cases pass. We use the test automation framework reports to figure out where we are. Have we lost something in dropping these good practices? No, on the contrary. The team still has all the necessary information on the quality. It might not be in a tool, but that does not matter, as long as we know. We trust the team!
    Linking defects to test cases: Similarly, keeping links from test cases to defects just in case is waste. In a scrum team these kinds of links are not needed in any tool or between any tools. If a test case fails, the tester and the developer should look at it together, fix it, and have the test pass in the next run. Transition: Now that the test planning process is clear of waste, let's look at test execution and reporting…
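The just-in-time prioritizing described above can be sketched in a few lines of Ruby (our framework's language). This is a hypothetical greedy selection, not anything from our actual tooling: when the sprint runs short on time, pick the highest-risk manual cases that fit into the hours actually left. The test case names and numbers are invented.

```ruby
# Just-in-time prioritization sketch: choose which manual test cases to
# run right now, highest risk first, within the time actually remaining.
# Greedy, not an optimal knapsack solution - good enough for a sprint.

TestCase = Struct.new(:name, :risk, :hours)

def pick_cases(cases, hours_left)
  chosen = []
  cases.sort_by { |c| -c.risk }.each do |c|   # highest risk first
    next if c.hours > hours_left              # doesn't fit, drop it
    chosen << c
    hours_left -= c.hours
  end
  chosen
end

cases = [
  TestCase.new('sign-in flow',       9, 2),
  TestCase.new('profile management', 6, 3),
  TestCase.new('old regression set', 2, 4),
  TestCase.new('single sign-on',     8, 2),
]

picked = pick_cases(cases, 6)
puts picked.map(&:name).inspect   # ["sign-in flow", "single sign-on"]
```

The decision happens at execution time with the latest knowledge of the system, which is the whole point: nothing is prioritized weeks in advance "just in case".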
  • Why is avoiding waste in test execution and reporting important? Test execution is the phase where testing is on the critical path of the project, and all eyes are fixed on the testing team. Therefore, it is vital to do only what is necessary in this phase. Testing should be automated as much as possible: automate once, execute often. Manual testing is our worst enemy and creates a lot of waste, unnecessary inventory.
    Reporting: Effective and efficient reporting is continuous; it has to happen in real time. There is no time to compile test reports manually – they would be out of date anyway, another form of something nobody needs: overproduction.
    Using dashboards to monitor progress and quality: Currently we monitor the quality status of our new releases very closely as we build them. This is done using dashboards and monitors; no manual intervention is needed to produce this information. The only progress metric that we have is the sprint burndown chart. This chart is based on the physical scrum board with post-it notes, where we plan the sprint in the sprint planning session and follow up on progress in daily stand-ups. We use a continuous integration tool – Jenkins – to report the build status. We have a physical traffic light in the open office showing the build status with either a green, yellow or red light. If a build fails, a synthesizer calls out a short message naming whoever made the last change that caused the build to fail. Jenkins also shows the history of the previous builds, code coverage, etc. We also have a dashboard which collects information automatically from several sources: the latest test automation results from all the test environments plus a trend graph of test results, critical defects to fix or verify, the defect trend, performance test status and progress, build status in the trunk and all branches, and development test results, to mention a few.
    Reporting from one tool is waste: We used to create a lot of waste in test reporting. We had this great idea that we would report all our test results from our integrated test tool. This tool has an XML interface that you can push your results to, and you'll be able to create a nice graph with the percentage of passed and failed cases. Sounds nice – so where's all the waste? Unfortunately, the import of the results often failed for two basic reasons: either the interface had simply failed, or someone had forgotten to make the link between the tool and our test automation framework. So we had to find out which cases were actually run but not reported. So we decided that instead of having a nice report from the integrated tool, we would use the test automation reports. They won't tell us failures by priority or by user story, but that's actually not necessary. We focus on making the failed cases pass. Transition: But the ultimate good practice turned to waste is yet to come: let's take a look at defect management!
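The build "traffic light" idea above can be sketched as a tiny Ruby script. The JSON payload and field names here are made up for illustration; a real CI server such as Jenkins exposes a similar but differently shaped status API, so treat this as a sketch of the concept, not of any actual integration.

```ruby
require 'json'

# Sketch of the build traffic light: map a CI build result to a light
# colour and, on failure, produce the short spoken message naming whoever
# made the last change. Payload and field names are hypothetical.

def light_for(result)
  case result
  when 'SUCCESS'  then 'green'
  when 'UNSTABLE' then 'yellow'
  else                 'red'    # failure, aborted, anything unexpected
  end
end

def announcement(build)
  return nil unless light_for(build['result']) == 'red'
  "Build broken by last change from #{build['last_change_by']}"
end

payload = '{"result": "FAILURE", "last_change_by": "j.doe"}'
build   = JSON.parse(payload)

puts light_for(build['result'])   # red
puts announcement(build)          # Build broken by last change from j.doe
```

The design point is that everything downstream of the CI server (light, speech, dashboard) is derived automatically; nobody compiles a status report by hand.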
  • Defects are waste: Defects are one of the seven wastes of lean. Despite this, defect management is considered by many a good practice, or even a mandatory one. When you look at some of the test process assessment models, defect management is one of the key practices.
    Common responsibility? Filing a defect means that you know you have a problem, but you put it aside and say "I will look at this later" or "Let someone else take a look at this." If you don't have time to do it right the first time, when will you have time to do it? Instead of filing a defect, you should let the developers know what is wrong, and look into the issue together. One of the key things in creating good quality is to have short feedback loops without handovers.
    Throw it in the cloud: When throwing the failure into the defect management "cloud", you borrow time from the future. You pretend that you are done with your features, but you're not. You're creating technical debt. A typical defect management process has around 50 steps. You file a defect, you prioritize, you assign, you fix, you verify, you reopen, you refix, you re-prioritize, you re-verify and you close. Already when you file a defect, you have to fill in 16 fields: severity, priority, description, … I admit that when a defect has escaped to production, you have to communicate lots of information to make sure that the defect is understood. But inside a development and testing team this leads to inappropriate processing. The best way would be face-to-face communication.
    Create a test case to demonstrate a failure: A more efficient way to deal with defects is to create a test case that demonstrates the failure. Quite simply, the team looks at the failed test cases together and does whatever is needed to make them pass. If there is no test case to demonstrate the failure, simply create a test case instead of a defect. When the newly created test case passes, you can move forward. No need for filing a defect, assigning it to somebody, prioritizing, re-testing.
    Postponing defects: Once you have defect management in place, you will use it for suspicious purposes. As one example, you start to postpone defects – you buy time from the future: "We don't have time to fix this now, but we will fix it in the next release." This leads to artificial quality gates. E.g. you might have a release criterion saying "we must not have more than 5 major defects in our releases", which unfortunately turns into "it is our standard to have at least 4-5 major defects in our releases". We don't aim at zero tolerance, we aim at five! Not to mention the 25 minor defects you're allowed to have. Who said they were minor to the users? When you postpone defects, the next thing you notice is that you have to drag the defects with you, some of them almost forever. Every time you look at your defect list, you use a small amount of energy thinking about the postponed ones. When you do this repeatedly, it adds up. Pretty soon you're going to need a full-time defect manager, a person who doesn't create any value for the customer, but only manages defects. Transition: So far I haven't said anything about evolution, and we're almost done. So what does all this have to do with evolution? In testing we have three kinds of measures to measure and improve quality: preventive, detective and corrective. With preventive measures you can prevent bad quality from being created, so this is the best way to improve quality. Detective and corrective measures are more or less waste – how can we avoid them and start to use more and more preventive measures?
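"Create a test case to demonstrate a failure" can be shown with a plain Ruby regression test. The sign-in function and its bug are entirely invented for illustration (our framework uses Ruby/Watir, but this sketch needs no browser): the failing assertion itself carries the information a defect report would, and it becomes a permanent regression check once it passes.

```ruby
# Instead of filing a defect ("sign-in accepts a blank password"), write
# a test that demonstrates the failure. It stays red until the fix is in,
# then lives on as a regression check. Everything here is hypothetical.

USERS = { 'alice' => 'secret' }

# Buggy version: a blank password slips through the check.
def sign_in_buggy(user, password)
  USERS.key?(user) && (password.empty? || USERS[user] == password)
end

# Fixed version: credentials must match exactly, blanks rejected.
def sign_in(user, password)
  !password.empty? && USERS[user] == password
end

# The "defect as a test case": false against the buggy implementation,
# which is exactly what a defect report would have said; true once fixed.
def blank_password_rejected?(impl)
  !send(impl, 'alice', '')
end

puts blank_password_rejected?(:sign_in_buggy)  # false: failure demonstrated
puts blank_password_rejected?(:sign_in)        # true: fix makes it pass
```

No defect record, no assignment, no prioritization fields; the developer and tester write the test together and move on when it goes green.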
  • Evolution: Now we come to evolution. Evolution is a process where something – like a test process – passes by degrees to a more advanced stage. As you can see on the left, a lot of mutations are created in evolution, but only the favorable mutations survive. Man has come from the sea, so evolution can take you a long way. Translated to process improvement: only the good ideas will survive. Bad ideas are like unfavorable mutations; they will be short-lived. We collect ideas in two cycles: in the sprint cycle bi-weekly, and in the release cycle every four to six weeks. In the sprint cycle we focus more on how we work. In the release cycle we actually measure quality by collecting all the escaped defects, investigating their root causes, and going through the timeline of the release, looking for anything that we might improve on. The best part of the release retro is that it's based entirely on facts. We make one slide with the highlights – and look at how bad we were. It motivates you to improve! Of course, we also list some of the more successful things we did, to keep us happy. We have around 25 sprint retros every year and 6-8 release cycles, so we have plenty of opportunities to improve. Just like in evolution, we can afford to try out lots of ideas. If something turns out to be a bad idea after all, we won't apply it again. We don't have to be afraid of creating bad ideas. So this is our way of improving continuously by evolution, applying more and more preventive – but light – measures to improve quality and avoid producing waste. Transition: With these practical examples I have highlighted the way we do test process improvement. To really change, you need both revolution and evolution. Revolution clears the table of old values to give a fresh start. To complement the revolution, we need evolution, to improve continuously.
  • A revolution is bound to hit you sooner or later – things simply do change. Have the courage to change along with it; don't be blown away by the revolution. Don't be afraid of bad ideas - you might block fantastic ideas! Evolution will make sure your bad ideas die and your good ideas live! Before you start doing something, anything: make sure not to generate any waste; nobody wants it!
  • Transcript

    • 1. Revolution, Evolution and Lean - a Test Process Improvement Diary from Copenhagen to Manchester. Tapani Aaltio Sogeti Finland Nov 24, 2011
    • 2. Revolution a drastic and far-reaching change in ways of thinking and behaving Eugène Delacroix - La liberté guidant le peuple, 1833. ( Liberty Leading the People) Source: wordnetweb.princeton.edu
    • 3. Agile Test Process? (Diagram: the TMap test process – Plan, Prep, Spec, Exec, Comp, with Infra and Ctrl – and the V-model of testing – from wish, legislation, policy, opportunity, problem and requirements through design, realisation, and operation & management to development, system, functional and acceptance tests – used as input for Scrum cycles.)
    • 4. The Project
      - The system: identity management (register, sign-in, sign-out, profile management, SSO); used by around 50 services with 150 million users; no downtime, response times under 1 sec
      - People: pool of 25 people, three scrum teams; developer:tester ratio 1:1
      - Testing: "Manual testing is our worst enemy"; 3000 automated unit and integration tests; 1000 automated black-box tests; test automation framework based on Ruby/Watir, built by the team; manual integration testing and acceptance testing with customers
      - Scrum: sprint cycle two weeks, release cycle 4-6 weeks
    • 5. The Seven Wastes of Lean Management Overproduction Waiting Unnecessary inventory Transporting Inappropriate processing Excess motion Defects
    • 6. How to Estimate Effort Without Creating Waste? Planning poker: estimate complexity of backlog items; fast way to create a common understanding among the team; used to measure the velocity of the team; testers participate on day one. (Overproduction, Inappropriate processing)
    • 7. How to Organize Work Without Creating Waste? Sprint planning commitment; definition of done; short commitments, several releases. (Overproduction, Waiting, Inventory)
    • 8. Applying Good Practices "Just in Case" Is Waste: prioritizing test cases; linking requirements to test cases; linking test cases to defects. (Overproduction, Inappropriate processing)
    • 9. Manual Testing and Manual Reporting Are Waste: manual testing is our worst enemy; test levels are concurrent, not back-to-back; instant, continuous feedback and reporting; creating test reports manually is waste. (Inventory, Overproduction)
    • 10. Defect Management is Waste! (Diagram: a typical defect workflow, shown in Dutch and English – New, Rejected, Analysis, Assigned, Postponed, Solved, In re-test, Re-test OK, Re-test not OK, Not solved – with the tester, developer and test manager (TM) adjusting the statuses.) Throw it in the cloud! (Overproduction, Waiting, Inventory, Transporting, Excess motion, Inappropriate processing, Defects.) Instead of this… create a test case to demonstrate a failure, work together!
    • 11. Evolution: a process in which something passes by degrees to a different stage (especially a more advanced or mature stage). Source: wordnetweb.princeton.edu. (Images: evolution of species, evolution of scrum teams)
    • 12. How to Apply Revolution and Evolution? Have the courage to change, don't be blown away by the revolution. Don't be afraid of bad ideas - you might block fantastic ideas! (Overproduction, Waiting, Unnecessary inventory, Transporting, Inappropriate processing, Excess motion, Defects)
