Cynefin, Kanban and Crash Test Dummies

Retro on a web development project with an oversized team. An intro to Cynefin. A lot on how context changes drive changes in Agile approaches.

Slide notes

  • Well enough about me – this is for you
  • When asking for help, hopefully you have a mentor. The answer always depends on the context, so they will often start with a preface, a disclaimer. Ask the audience to guess: yes, "It depends." But what if you don't have a mentor? Before I get into a discussion about the project I want to introduce a framework. It is a framework for decision making, for when you don't have a mentor and you have to figure it out for yourself, like we did: to make sense of the project context before you start making decisions that change it.
  • I'm going to refer to context as being some mixture of these ingredients. People – your team and the stakeholders, those in or close to the project, bringing their wants, desires and aspirations. The culture of the organization, influenced by its domain (commercial, government, non-profit), its history, and the pressures it struggles with to survive. There is the face and voice of the client – the reason we are developing some product or service – where and how they live and the goals we are helping them achieve. Lastly there are the tools and processes, the "mechanics" of the project we are in. All these elements come from somewhere, have a history, and mix together to form a project context. When we square up for the challenge, we tend to want to classify the project against some checklist. More and more I see people grimly hanging onto the ideas, hoping that the more they comply with the model, the better they will be. I want to introduce another way of looking at your context. I wish I had found it earlier in the project. [INFOQ STORY – stumbling on the quote]
  • I'm going to cover this briefly – look for the YouTube video by Dave Snowden for more background. Cynefin, pronounced ku-nev-in, is a Welsh word that signifies the multiple factors in our environment and our experience that influence us in ways we sometimes can't predict. The idea behind using the Cynefin framework is to make sense of which context you are in, so you can not only make better decisions but also avoid mistakes when applying your standard approach if it no longer fits – adapt. The boundaries are squishy by design – this is sense-making, not categorizing: the data comes before you form a model rather than being fitted into one (that's categorization). As you enter one of the domains you try to make sense of it, and possibly change your interpretation as you gain knowledge of the context. Once you understand where you are, you can start to make the right decisions to make progress.
    SIMPLE – everything is known and there is not much room for improvement. You sense, categorize into some implementation, and respond with that implementation. Best practices. Examples: order taking, application submission; for devs, automation of repetitive jobs. Standard response: "Oh, it's one of those problems" – no analysis required.
    COMPLICATED – predictable, but requires an expert. There are multiple right answers, and an expert can choose the right one. Good practices reign. Examples: CRUD functions, use of open source stacks. You sense, analyze, respond. Standard response: "Let me have a look at this problem and I'll tell you how to solve it." Avoid the same old thinking that blocks innovation – use retrospectives. Avoid analysis paralysis.
    COMPLEX – standard practices are not able to solve it; you need new ideas – experiment. Example: a battlefield scenario – gather a team of experts and come up with solutions. Like BDD: you start writing tests, more questions come up, the requirements change, the target changes. You can look back and see how you got to a solution, but the problem was such that it required playing with ideas. Probe, sense, respond. Key point: to probe around, it must be a safe-to-fail environment. Ask: look at your context and see whether it struggles with Agile because it is not safe to fail. Of most interest to tech types: design => POC => learn. This is also why, if we try to nail every requirement, we are bound to fail. Leaders can be scared by the Complex domain and default back to command and control to enforce order on a process where it does not belong.
    CHAOS – house on fire, production site down. You just need to fix it and then figure out what happened: stem the bleeding, roll back a release, get it back under some control. Act, sense, respond.
    Other features: DISORDER – when no one is trying to make decisions. The little lip – will demo this.
  • Ground rules: the client is perfect. They did what they had to because of the immense pressure they were under from inside their organization – their culture. Sometimes the team didn't like it – the team may not have known better, or we needed to accept the cultural component of the context. I'll limit details on the technology as some elements are proprietary. We're talking about www.client.com, the client's primary marketing site. A marketing site tries to figure out the best product to offer based on very little information – not like upselling. It's like a magic brochure that changes content based on who you are and adds and deletes pages as it learns what you are after. The engine behind it is like a Pachinko machine or a Galton box. Customers are presented with a home page and start to make navigation choices based on their needs and the options presented. Marketers drive the site content, run campaigns for products, and do market analysis to figure out which products do well. An analytics engine provides feedback. Once a product is selected, the site hands off to other systems that take the applications.
  • Started in late 2008 with a strategy study that produced a model to transform from their current technology to a portal-based one. Architectural goals were typical of a large financial institution: reliability, performance, security. One of the main business goals was to reduce the cost to run and reduce time to market on campaigns. Through 2009 we used our extensive CMS knowledge to build out the CMS for what was a content-driven site. In May 2010 we were asked to come back and review the project. It was close to releasing to larger LOBs and they wanted a health check. It looked like a Complicated rather than a Complex kind of context, but discussions showed the content to be Complex. Maturity had changed the solution in a complex space: the model was wrong compared to where they wanted to be. The solution was more complex to maintain, was not delivering consistent analytics, and had taken a step backwards in time to market. We came up with alternatives and a new engine-based model that would simplify page construction while continuing to use the mature CMS.
  • Rather than go into all the gory detail, I'm going to give you a high-level view and then pick a couple of challenges that stood out from the norm. The basic set-up: 2-week sprints; co-located but offsite; an Agile BA mind-melded with the PO, which worked well in mashing formal requirements with Agile stories; no electronic tools, just PowerPoint for stories and Excel for tracking. Requirements were detailed for only a few of the lines of business, so we attacked in vertical slices: just enough architecture, just enough functionality, to support releasing LOBs. Three challenges: offsite client architects could not engage yet required requirement-heavy design docs – knowledge capture; we were having trouble aligning the sprint cadence with all the testing documentation rigour; and leads were offsite and processes made extensive use of inspection for validation. The biggest challenge was getting a new team to jell – Scrum didn't provide enough glue to hold them together. RANT!
  • Scrum assumes this amorphous blob of self-organizing genius – I don't.
  • Discuss the components of history and cultural background. Different cultures view authority differently; different cultures treat heritage differently. After talking them through this simple model, on a whim I added the simple test.
  • We also took a hard look at practices – developers are not arriving with the XP practices well entrenched. I still find rookie Agile developers blaming testers for not finding bugs. My usual response is along the lines of "This is not a treasure hunt – why did you put them there?" It gets weird looks as the light bulb goes off.
  • Focus on the consequence of being at the end of a process and its impact on testers' behavior. The developers looked up to the designer and immediately wanted to be at that level for the respect it garnered.
  • Ask if the audience is familiar with the McCarthy Core Protocols – Jim and Michele McCarthy. Horribly truncated and boiled down, but in essence we asked if they would be willing to act in a different way.
  • We had delivered, but there was sponsor restlessness. Plans had been derailed by the discovery that we needed to implement glue technology between the old and new systems. Sponsors had a question – when will it end? The slicing approach did not provide the transparency into the future needed to secure budgeting; bean counters don't change. We needed big-picture thinking! Co-location meant the culture and pressures were more real. The page building team was separate from configuration, a considerable challenge for specialists – they had started out with 2-week iterations in a non-software context. We reorganized infrastructure into smaller batches and built out along the deployment pipeline. A product roadmap allows you to make big bad estimates and identify chunks of common functionality. When given the big picture, the client was more willing to defer immediate gratification for a more holistic approach. We used configurable designs to defer having to commit to a configuration until the business requirements were in hand. Cynefin: less oscillation between Complex and Complicated – spend more time in the complex domain investing in chunks.
  • Don't worry about the details. Testing became a big batch of work – late discovery. This is quality control vs quality assurance. If you're experimenting in the Complex domain, you need that learning as part of probing whether the software is valuable. Quality assurance is better for the Simple domain – check it after.
  • Most project managers are scared of the Level button. Who has heard of it? Who has seen it? It is the Bigfoot of project features. Who has taken a plaster cast of the crater it left in your schedule? Use the priority field beyond business value – it can be anything from 1 to 1000. So what do you think the client thought? The client hated it! Estimating is useful; estimates are useless.
  • Bad, bad idea. Fist of five: who thinks this is a bad idea? Once you have done it, you'll never ever do it again – the finger-in-the-socket analogy. So this is the second time I have done it and I'm getting better at it.
  • Talk on teams of 4. Then teams of 5 to 9 – the Agile sweet spot.

Transcript

  • 1. Agile Richmond – May 2013
  • 2. Guy Winterbotham Bio: Flow, Bottlenecks, WIP Limits, 5S, Kaizen, Agile, Ugly Agile. @guywinterbotham http://www.linkedin.com/pub/guy-winterbotham/1/2bb/516
  • 3. The Agile Help Disclaimer: It depends…
  • 4. Project Context. People: Team, Stakeholders. Mechanics: Tools, Processes. Client: Product, Service. Culture: Organization, Market, Domain.
  • 5. Cynefin: A decision making framework. Pronounced ku-nev-in. Licensed by Dave Snowden under the Creative Commons Attribution 3.0 Unported.
  • 6. Marketing Site = Content Targeting. Use campaigns to promote products. Marketing sites: electronic brochures that change to match the user. Customers' choices direct them to a product; analytics measures effectiveness.
  • 7. Project Time Line
  • 8. STAGE 1: Basic Scrum. Slices of supporting architecture. Slices of pages aligned to business units.
  • 9. RANT #1 – I HATE SCRUMS!
  • 10. Forming a Scrum. Rugby Union uses two more.
  • 11. Pick up two cards. Write where you feel you are. Write where you feel the team is. Tuckman's Stages of Group Development: a quick and dirty team self-assessment.
  • 12. To change we needed to look at: who we were – roles and responsibilities; how we work – engineering practices; how we function – our processes; who we are working with – our client. Lewin's Change Model: Unfreeze – Change – Refreeze. Plan for getting to Performing.
  • 13. Make the Team Aware of the Context. Cultural awareness came later.
  • 14. Counter the Hierarchy of Process
  • 15. Counter the Hierarchy of Role. Client fire → Engagement Manager → Project Manager → Architect/Tech Lead → Testers, Designers, Developers, BSAs. Proximity ≠ smarter or better; it was an indicator of the Cynefin sweet spot.
  • 16. Counter the Hierarchy in Behavior: to engage when present; respect for ideas no matter their source; a willingness to bring ideas or support the best current idea; to always seek help; to use teams for complex endeavors; do now what can effectively be done now; "I will never do anything dumb on purpose." The McCarthy Core Protocols.
  • 17. Focus away from differences… to common goals. "Continuous Integration is a software development practice where members of a team integrate their work frequently…", blah blah, blah – Martin Fowler. "Continuous Integration along with Developer TDD forms a competitive game framework. Coding is a competitive contact sport" – Me.
  • 18. Pay attention to Build Metrics. Use metrics to drive games.
  • 19. Pay attention to Build Metrics. Visible metrics for a quality focus.
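Slides 18–19 only name the idea of using build metrics to drive games, without showing how. As a rough illustration (not from the deck), here is a minimal Python sketch that turns hypothetical CI build records into a simple team leaderboard; the record format, names, and scoring rules are all invented for this example.

```python
# Hypothetical sketch of "use metrics to drive games": turn plain CI build
# records into a leaderboard so the team competes on tests added and on
# keeping the build green. Data and scoring are made up for illustration.

from collections import Counter

# Each record: (committer, build passed?, unit tests added in the change)
builds = [
    ("jill", True, 4),
    ("jack", False, 0),
    ("jill", True, 2),
    ("bucket", True, 0),
    ("jack", True, 6),
]

scores = Counter()
for committer, passed, tests_added in builds:
    scores[committer] += tests_added          # reward new tests
    scores[committer] += 1 if passed else -3  # penalize breaking the build

for name, score in scores.most_common():
    print(f"{name:<8} {score:>3}")
```

The only design point is that the metric is visible and competitive; any scoring scheme the team agrees on would do.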
  • 20. STAGE 2: Scrum + 1: Change and Scale. Late 2010 through Q3 2011. Slices of pages aligned to business units; build a core framework to support pages. Context changes: new management, co-locate on site, new teams, separation of concerns, product roadmap. Challenges: scaling a team, big team woes, that darn testing process.
  • 21. STAGE 2: The bad testing idea. Week 1 (Dev environment): tasking of stories; story development + unit testing; story test case preparation; functional testing (if functionality allows). Week 2 (Dev environment): story development + unit testing; story test case preparation; functional testing. MUDA Week 3 (QA environment): sprint code build into QA; formal functional testing – input test cases and results into QC; performance baseline; detailed design for next sprint stories; sprint code review; bug fixes and code quality rework; knowledge transfer to QC staff; retrospective; sprint planning; maybe a demo. Muda (無駄): wasteful, unproductive.
  • 22. Challenge 1: Modeling a Big Team. Needed a tool that could: use bad estimates to extrapolate a critical chain; include inter-team dependencies; model a ramp-up period; handle "what if" scenarios; already be available to everybody on the team; and communicate schedule and budget.
  • 23. RANT #2 – AGILE HATES PMI! Individuals + Interactions vs. Processes + Tools.
  • 24. Tools, Tools and more Tools
  • 25. Challenge 1: Modeling a Big Team. Needed a tool that could: use bad estimates to extrapolate a critical chain; include inter-team dependencies; model a ramp-up period; handle "what if" scenarios; already be available to everybody on the team; and communicate schedule and budget.
  • 26. Microsoft Project is a Modeling Tool… please keep this our little secret. Use a VB macro to load your backlog; build out different roles as tasks; play with resources as roles, not individuals; look at critical chain interactions of roles; learn to love the Level button.
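Slide 26's approach is MS Project plus a VB macro, which the deck doesn't show. As a hedged sketch of the same modeling idea only – roles (not individuals) as the resources, bad estimates rolled up per role, the most loaded role driving the rough schedule – here is a small Python example. The backlog items, roles, and numbers are hypothetical.

```python
# Minimal sketch (not the talk's VB macro): a rough schedule model that treats
# roles as resources and "levels" a backlog of bad estimates against role
# capacity. All names and numbers are invented.

from collections import defaultdict

# Hypothetical backlog: (story, role, estimated days of effort)
backlog = [
    ("Home page template", "Developer", 8),
    ("Campaign rules engine", "Developer", 13),
    ("Page UAT pass", "Tester", 5),
    ("Analytics tagging", "Developer", 5),
    ("Visual design refresh", "Designer", 8),
]

# Hypothetical capacity: how many people can play each role concurrently
role_headcount = {"Developer": 3, "Tester": 1, "Designer": 1}

# Sum effort per role, then divide by headcount to get a crude elapsed-time
# figure per role; the slowest role drives the overall chain.
effort_by_role = defaultdict(int)
for _story, role, days in backlog:
    effort_by_role[role] += days

elapsed_by_role = {
    role: days / role_headcount[role] for role, days in effort_by_role.items()
}

for role, elapsed in sorted(elapsed_by_role.items(), key=lambda kv: -kv[1]):
    print(f"{role:<10} {effort_by_role[role]:>3} days effort -> ~{elapsed:.1f} elapsed days")

print(f"Rough end-date driver: {max(elapsed_by_role, key=elapsed_by_role.get)}")
```

Swapping headcounts or adding rows is the "what if" play the slide describes; the point is cheap extrapolation from admittedly bad estimates, not a precise plan.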
  • 27. RANT #3 – BIG TEAMS = BAD NEWS
  • 28. Big Teams and Social Loafing: the "tendency of certain members of a group to get by with less effort than what they would have put when working alone." Retrospectives become ineffective; transparency dims for stakeholders; standups become amateur theater; the coach becomes disconnected; coding practices are diluted or skipped. Beyond a team of 10, people get lost.
  • 29. Look Under the Hood: Burn Downs. [Chart: "The Hill Team" team burndown – hours vs. sprint days. Real data, with names changed.]
  • 30. Personal Burn Down: Jill. Jill is doing fine. [Chart: ideal burndown, upper bound, lower bound, actual burndown.]
  • 31. Personal Burn Down: Jack. Jack over-estimates, what a hero. [Chart: ideal burndown, upper bound, lower bound, actual burndown.]
  • 32. Personal Burn Down: "The Bucket". "The Bucket" is hiding, getting carried. [Chart: ideal burndown, upper bound, lower bound, actual burndown.]
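Slides 29–32 read personal burndowns against an ideal line with upper and lower bounds. The sketch below is a hypothetical illustration of that kind of check, flagging days where someone drifts like "Jack" (burning down suspiciously fast) or "The Bucket" (barely burning down at all); the data and tolerance are invented, not the project's numbers.

```python
# Hypothetical sketch: compare a personal burndown to a straight-line ideal
# with +/- bounds, and flag days that fall outside the band. Data and the
# tolerance width are made up for illustration.

def ideal_remaining(total_hours: float, day: int, sprint_days: int) -> float:
    """Straight-line ideal burndown from total_hours down to zero."""
    return total_hours * (1 - day / sprint_days)

def flag_burndown(actual: list[float], sprint_days: int, tolerance: float = 0.25) -> list[str]:
    """Return a per-day status against the ideal line +/- tolerance."""
    total = actual[0]
    statuses = []
    for day, remaining in enumerate(actual):
        ideal = ideal_remaining(total, day, sprint_days)
        upper = ideal * (1 + tolerance)
        lower = ideal * (1 - tolerance)
        if remaining > upper:
            statuses.append("behind (hiding or blocked?)")
        elif remaining < lower:
            statuses.append("ahead (over-estimated?)")
        else:
            statuses.append("on track")
    return statuses

# Example: a 10-day sprint where little burns down until the very end
bucket = [60, 60, 58, 58, 57, 55, 55, 54, 50, 20, 5]
for day, status in enumerate(flag_burndown(bucket, sprint_days=10)):
    print(f"Day {day:>2}: {status}")
```

The chart on the slide does the same job visually; the value is in looking per person, not just at the team aggregate.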
  • 33. Pay attention to Ergonomics. Use big monitors as dividers. [Room layout: table, Agile card wall, dev table with fast network, build monitor, door and information radiators.]
  • 34. STAGE 3: Backing into Kanban. Late 2011 through 2012. Drivers for changing to focus more on flow: page building became the date driver; needed a pull model to support the Page team; iterations were a constraint on fluid response; reduce the lengthy standups; make assignments more visible; once and for all, accept the testing model.
  • 35. STAGE 3: Fluid Backlog. Dave Anderson's cost model.
  • 36. Kanban brings focus to the "how": visualize the workflow; limit WIP; manage flow; make process policies explicit; improve collaboratively. Ours was a shallow implementation. Dave Anderson – the principles behind Kanban.
  • 37. Cynefin Learning Cycles: standardize/automate, innovate/explore, disrupt/scare. Licensed by Dave Snowden under the Creative Commons Attribution 3.0 Unported.
  • 38. Step 1: Change up the Dev board. Dip into chaos to shake up the team. WIP limits on devs using avatars, and on types of work. [Board: posted policies by work type, magnetic avatars, supplies in a shoe holder.]
  • 39. Step 2: Man the Page Gates. Enforce quality of the inputs; don't start a page unless it can be finished; account for rework in WIP limits; create supporting tools; pull on development; UAT pages as part of page creation. Resulted in an 80%–90% pass rate. Stop starting, start finishing.
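Slide 39's page gate was a manual pull policy on a physical board. As an illustrative sketch of the same rules (not the team's actual tooling), the Python below only pulls a page when its inputs pass a quality check and the WIP limit – which also covers rework – has room; the field names and the limit are made up.

```python
# Hedged sketch of the "page gate" pull policy: pull a page into development
# only when its inputs are complete and the WIP limit has room. The page
# fields and the limit value are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Page:
    name: str
    has_copy: bool = False
    has_design: bool = False

@dataclass
class PageGate:
    wip_limit: int
    in_progress: list = field(default_factory=list)

    def inputs_ready(self, page: Page) -> bool:
        # "Enforce quality of the inputs": don't start a page that can't be finished.
        return page.has_copy and page.has_design

    def can_pull(self) -> bool:
        # Rework items sit in the same list, so they count against the limit too.
        return len(self.in_progress) < self.wip_limit

    def pull(self, page: Page) -> bool:
        if self.can_pull() and self.inputs_ready(page):
            self.in_progress.append(page)
            return True
        return False

gate = PageGate(wip_limit=3)
backlog = [Page("Savings landing", True, True), Page("Loans promo", True, False)]
for page in backlog:
    started = gate.pull(page)
    print(f"{page.name}: {'started' if started else 'held at the gate'}")
```

Work held at the gate is cheaper than work stuck half-done behind it – which is the "stop starting, start finishing" point the slide closes on.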
  • 40. Step 3: Reimagining Page Building. Bring them from Chaos/Complex.
  • 41. Kanban Overlay: What did we get? Operating in the end state; quality end-to-end and built in; insight into how it was done; incremental improvement; flexibility and responsiveness; a high performing team! Late 2010 through early 2011.
  • 42. The End? The evolution continues…
  • 43. Cynefin: Chaos Cliff. Licensed by Dave Snowden under the Creative Commons Attribution 3.0 Unported.
  • 44. That little Cynefin Cliff. Complacence.
  • 45. The End. No Crash Test Dummies were hurt in the making of this presentation. "Without deviation from the norm, progress is not possible." – Frank Zappa