AW9
Agile Development Concurrent Session
11/12/2014 2:45 PM
"Agility at Scale: WebSphere’s
Agile Transformation"
Presented by:
Susan Hanson
IBM Software Group
Brought to you by:
340 Corporate Way, Suite 300, Orange Park, FL 32073
888-268-8770 ∙ 904-278-0524 ∙ sqeinfo@sqe.com ∙ www.sqe.com
Susan Hanson has twenty-five years of experience in developing
and delivering IBM software products across the WebSphere and
Tivoli brands. Susan has held various positions spanning testing,
development, release management, and people management.
Currently the process architect for the WebSphere Application
Server team, she is responsible for development transformation,
including processes and tooling. Susan was part of the
transformation team as the product embraced agile technologies.
She is one of the leaders in the move to a continuous
delivery/DevOps model, and the DevOps focal point for the
application integration and middleware portfolio of products.
1
Agility at Scale: WebSphere
Application Server's Agile
Transformation
Susan M Hanson
IBM Corporation
hansons@us.ibm.com
2
Who am I?
 A member of the IBM WebSphere
Application Server development team
 Based in Research Triangle Park, NC
 Development Transformation Lead and
Continuous Delivery Architect
 Responsible for process transformation,
including tooling, for the development team
3
Who are we?
 Global development and support team
550+ employees
 Major labs in United States, United Kingdom,
Canada, China, Israel, India, and Mexico
 Individuals scattered in a few other countries
 Development and support for:
Multiple releases, editions
Multiple platforms
Multiple DBs
4
Why Agile and CD?
 Adapt more quickly and easily to rapidly
changing requirements in the marketplace
 Full lifecycle including customer feedback
feeding directly into the development plans
 Continue to improve the quality of our
deliverables to customers
 Deliver add-on features for existing
releases on a more rapid/consistent pace
5
Our Major Challenges
 LARGE team (around 550 people total)
 Geographically dispersed (time zone spread of ~15 hours in some
cases)
 Historically siloed around components
 Older technology with a multitude of home-grown tools and
processes built around it
 Multiple tools, some linked together, some not so much
 10 years of waterfall culture, org structure, process, and
mindset before Agile transformation
6
Our Journey
 Waterfall Era (1998-2008): WAS v1-v7
 Agile Era (2009-2013): WAS v8, v8.5, v8.5.5;
Liberty Profile/Core v8.5, v8.5.5
 Continuous Delivery Era (2014 and beyond):
WAS and Liberty v8.5.5.x and beyond
7
Our Journey
 Multiple major “Eras” in our journey
 Waterfall – bulk of our releases
 Traditional waterfall (design, then dev, then test, then release)
 Agile – major transition in multiple phases
 Tooling
 Automation
 Team Structure
 Communication
 Continuous Delivery
 Improved tooling and automation, as well as product architecture
changes that better enable CD (modular, zero migration, etc.)
 Changes are yielding measurable improvements in
quality, efficiency, flexibility, predictability, cost, time to
market, and ultimately competitiveness.
8
What we did
Unleash the power of Rational Tools
• RTC code, builds, tests, and project management
• Clear visibility to status
• Customer requirements bridged from DevWorks into
RTC and linked to the implementing Feature
Code ready to ship at
the end of each
iteration
Team Structure
• Multi-discipline, co-located teams (where possible)
• Single release backlog
• Pull vs Push methodology
• Joint team demos, sizing, design issues, rotating
team leadership
Ownership & Accountability
• No individual code ownership, shared responsibility
• Functional Expertise areas that cross teams
• Value leadership and expertise over title and
ownership
9
Tooling and Processes
 Shifted to the Rational CLM Suite
 Rational Team Concert (v2 to start, now on v4.0.5)
 Rational Quality Manager (now integrated with RTC in a single
JTS)
 Single, highly-customized project area for development
linked to a separate project area for requirements
 Closed-loop process with customer feedback loop for
requirements (via IBM DeveloperWorks RFE)
 Heavy use of automation
 Shift towards joint team ownership and accountability
10
Push vs Pull
 Work based on Priority only
 Previously, work was
“pushed” to a team based on
a component ownership
 Always had an owner
(component lead)
 We removed ownership but
retain a general “functional
area” in each work item
 Teams “pull” work into their
team backlog based on
priority, availability, and team
skills
 Mindset shift:
 Work items don’t have an
owner until the priority brings
them to the “top” of the queue
 Multiple teams can contribute to
a single functional area
 Teams may be asked to work
outside of their area of expertise
to assist in other areas (we are
smart people!)
 Teams don’t sit and wait for
people to give them work ... they
go GET it
 Teams constantly watch a single,
prioritized list for work that is
higher priority than what they
currently have
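The pull model above amounts to a single prioritized backlog that carries no owner until a team pulls from it. A minimal sketch (the class, field, and team names here are hypothetical illustrations, not the actual WebSphere tooling):

```python
import heapq
from dataclasses import dataclass, field
from typing import Optional

@dataclass(order=True)
class WorkItem:
    priority: int                                        # lower number = higher priority
    title: str = field(compare=False)
    functional_area: str = field(compare=False)          # general area, not an owner
    owner: Optional[str] = field(default=None, compare=False)

class ReleaseBacklog:
    """Single prioritized release backlog; teams pull rather than being pushed work."""
    def __init__(self):
        self._heap = []

    def add(self, item: WorkItem):
        heapq.heappush(self._heap, item)                 # no owner assigned on entry

    def pull(self, team: str) -> WorkItem:
        """A team takes the highest-priority item; ownership attaches only now."""
        item = heapq.heappop(self._heap)
        item.owner = team
        return item

backlog = ReleaseBacklog()
backlog.add(WorkItem(2, "Improve startup time", "runtime"))
backlog.add(WorkItem(1, "Fix security defect", "security"))
top = backlog.pull("Team A")   # Team A pulls the security defect first
```

Because any team can call `pull`, multiple teams naturally contribute to the same functional area, which is the behavior the slide describes.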
11
Communication
 Agile has a focus on communications, including
stand-up scrums
 By aligning our teams to be as co-located as
possible, we are able to have more non-virtual
communication
 Individual team scrums, and then a team rep
(Iteration Lead) participates in a Scrum of
Scrums
 Geo-handover (AP -> EMEA -> US)
 Incorporated more IRC rooms and mailing lists
for cross-team collaboration
 Constant feedback via retrospectives, process
and tools continuously refined
12
Ownership
 Moved many items to a “whole team ownership”
model
 Builds are monitored by a rotating set of build
monitors, so each team spends time monitoring
builds. This provides a better understanding of the
impacts of build issues and infrastructure challenges.
 Each team has a technical Iteration Lead (rotates)
and each iteration, a team has Lead of Leads
responsibility
 Runs LoL scrum
 Coordinates Iteration Demos
 Facilitates cross-team collaboration
 Teams “manage” the teams
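The rotating Lead of Leads duty described above is essentially a round-robin schedule across teams. A minimal sketch (the team names are made up for illustration):

```python
from itertools import cycle, islice

def lead_of_leads_schedule(teams, iterations):
    """Round-robin: each iteration, the next team takes Lead of Leads duty,
    wrapping back to the first team after the last."""
    return list(islice(cycle(teams), iterations))

teams = ["Runtime", "Web", "Security", "Install"]   # hypothetical team names
schedule = lead_of_leads_schedule(teams, 6)
# iterations 5 and 6 wrap back to the first two teams
```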
13
Cycle Time Improvements

Lifecycle Measurement               2009       2010-2011   2012-2013                  Total Improvement
Final Regression Test Period        28 days    14 days     N/A, done each iteration   28 days
Iteration Length                    6 weeks    4 weeks     2 weeks                    4 weeks
Build + Full Automated Regression   N/A        1 week      6 hours                    1 week (vs. 2010)
Time Between Releases               24 months  24 months   12 months                  12 months
 Automation increased 70% (to 1.3 million test cases weekly) initially under Agile,
currently between 3 and 6 million spread across our supported platforms
 Increased test coverage across all test phases
 First beta drop for latest release delivered 7 months earlier in the cycle than under
Waterfall process
 Increased product quality
14
Quality Improvement
 Quantifiable quality improvements by moving
from Waterfall to Agile
 Comparison of average customer-reported problems
for the three Waterfall releases with the two Agile
releases, measured each month from public availability
 29.5% improvement at 12 months
 40% improvement at 24 months
 Improved customer satisfaction
 Reduced service load allows us to put more
resources toward satisfying additional customer
requirements.
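The improvement figures above can be read as a relative reduction in average customer-reported problems versus the Waterfall baseline. A sketch of the arithmetic, using hypothetical monthly averages for illustration (not the real data):

```python
def relative_improvement(waterfall_avg, agile_avg):
    """Percent reduction in customer-reported problems vs. the Waterfall baseline."""
    return (waterfall_avg - agile_avg) / waterfall_avg * 100

# Hypothetical averages: 1,000 problems/month under Waterfall vs. 705 under Agile
print(round(relative_improvement(1000, 705), 1))  # a 29.5% improvement
```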
15
Average customer-reported issues by methodology
[Chart: average customer-reported issues per month, GA+0 through GA+24
(y-axis 0 to 1,200), comparing Waterfall and Agile and showing the
quality improvement with Agile]
16
Best Practices
 Automation, Automation, Automation
 Have your tools do everything they can and leave your engineers
to do what they do best (code, test, whatever)
 Communicate, Communicate, Communicate
 Over-communicate, be transparent with intent and direction
 Can’t see the forest for the trees: be open to suggestions for
improvements from the team ... they are living it every day in the
trenches, and you may not see everything they are going through
 Don’t kill the forest to keep a couple of trees: not every suggestion
from the team is going to be best for the overall organization
 90/10 rule
 Make things as easy as possible for the 90% of cases, deal with
the 10% as exceptions
 Work items, workflows, build parameters, etc.
17
Best Practices
 Ensure good Agile practices
Don’t allow Technical Debt to accrue
Fix defects early; this ensures a defect is not
“masking” other defects that you find later
Regressions cannot be tolerated
Test fixes are just as important as product
fixes (again, a broken test could mask a
product failure)
Done actually does mean Done (everything,
not “just” the development piece of it)
18
Best Practices
 Small teams are much easier to transform than large
ones
 For very large team organizational transformation:
 Invest in upfront education for everyone (including managers &
execs)
 Form a transformation team with reps from the major areas of
the org (dev, test, support, docs, etc.) to plot the transformation
 Prototype process changes at a smaller scale
 Find enthusiastic change agents in the trenches to help lead
the change and add credibility
 Stage transformation actions over time to avoid too much
overwhelming change at once
 Be pragmatic; don’t give up, don’t plateau
 Keep transforming, improving, getting better
 Improvise ... Adapt ... Overcome
19
What is next for us?
 In the process of a shift into a Continuous
Delivery model for a portion of our deliverables
 Monthly betas
 Monthly cloud offering
 Quarterly new feature updates
 Shift from automated-tests-with-story to
automated-tests-with-every-task
 Auto-provisioning of environments and
supporting test artifacts (like an LDAP server or
database)
 Continuous (loop) test environment with in-place
upgrades instead of single execution test buckets
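The continuous-loop environment described above can be sketched as one long-lived, auto-provisioned environment that is upgraded in place between regression passes, rather than a fresh environment per single-execution test bucket. All functions and version strings here are hypothetical stand-ins for illustration:

```python
def provision_environment():
    """Stand-in for auto-provisioning an environment plus supporting
    test artifacts (e.g. an LDAP server or database)."""
    return {"version": "1.0", "ldap": "stub", "db": "stub"}

def apply_in_place_upgrade(env, version):
    """Upgrade the running environment without tearing it down."""
    env["version"] = version
    return env

def run_regression(env):
    """Stand-in for a full automated regression pass."""
    return f"tests passed on {env['version']}"

# Continuous loop: provision once, then upgrade in place each cycle
env = provision_environment()
results = []
for version in ["1.1", "1.2", "1.3"]:
    env = apply_in_place_upgrade(env, version)
    results.append(run_regression(env))
```

The design point is that each cycle also exercises the in-place upgrade path itself, which a throwaway per-build environment never tests.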