OpenStack Summit Spring 2013: Rob Hirschfeld, Migrations (v1)

  1. Rob Hirschfeld, Distinguished Engineer, Dell (http://lifeatthebar.com)
  2. • This session could repeat a lot from last summit:
       http://www.openstack.org/summit/san-diego-2012/openstack-summit-sessions/presentation/getting-from-folsom-to-grizzly-a-devops-upgrade-pattern
     • Interoperability & Reference Architecture
       • Reference Architecture w/ Heat (Tuesday @ 11:00)
       • Interop Panel (Tuesday @ 5:20)
     • Upgrade Projects
       • https://wiki.openstack.org/wiki/Upgrade-with-minimal-downtime
       • https://wiki.openstack.org/wiki/Grenade
  3. • The "Problem" with Migration
     • Paths to Nirvana (or Roads to Perdition)
     • Alternatives
     • An Opinion
     • Discussion
     [Image: F / G / H; credit: http://learn.genetics.utah.edu/content/begin/cells/organelles/]
  4. • OpenStack has a 3-month major/minor release cycle
       • Major version every 6 months
       • Minor versions (but important) 3 & 6 months after release
     • Lots of changes
       • Bugs are fixed
       • Operating systems upgrade
       • New technologies appear
       • Whole projects are split off
     • We expect operators to
       • Keep systems running
       • Never lose data
       • And... stay up to date
     [Image credit: http://cdn2.arkive.org sockeye-salmon-predated-by-grizzly-bear-on-migration-upstream.jpg]
  5. • What are we upgrading?
       • OpenStack - Yes!
       • Dependent packages - Probably?
       • Base OS - Maybe?
     • What is the state during the "in-between" time?
       • Infrastructure downtime?
       • VM downtime? VM reboot? Controlled/informed?
       • Availability windows?
     • What contingency plans?
       • Dry run? Maybe.
       • Recover by going backwards? Maybe.
     • What level of safety and trust do you need?
       • Assure data integrity?
       • Assure infrastructure integrity?
       • Maintain security?
     • How long can the migration take?
       • Big bang move or gradual migrate?
     • How will my API consumers/ecosystem cope?
       • Can Keystone Grizzly work with Folsom Nova???
       • What about futures? G.1 to G.2? H to I?
       • Can I skip versions? Jump from G to I?
     [Image: "Steep Steps" by Peter Griffin, http://www.publicdomainpictures.net]
  6. • Beginning answers
       • Distros will manage dependencies and packaging
       • We can't lose data or compromise security
       • Infrastructure state and integrity will vary by solution
     • Assumption of staging
       • Some managed environment (not a manual deploy)
       • Staging/test environment to get "familiar" with the problem
       • Maintenance window for production - limits the scope of change
       • Step-wise changes are OK (big bang is not required)
       • We can make trade-offs to defray expensive requirements
     • Beyond assumptions... paradigm shifts
       • There are shared best practices
       • Upgrades can be automated in a sharable way
     [Image credit: http://www.theemailadmin.com/wp-content/uploads/2012/09/GFI229-hot-water-migration.jpg]
  7. All the nodes update to the latest code in a short time window
     • Details:
       1. Cookbooks include update (instead of install) directives.
       2. Control the upstream package point (e.g. apt-update when appropriate)
       3. Force a chef-client run
       4. Now at the new level
     • Considerations
       • Pros: Potentially fast, continuous operation
       • Cons: Don't mess up, it is your production environment
       • Scope: Security updates
       • Code assumptions:
         • System can function through service restarts.
         • Underlying data models don't change, or migrate appropriately.
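As a rough illustration of the "big bang" pattern above, here is a minimal sketch (not from the deck) that refreshes the package source and forces a chef-client run on every node in parallel. The host names and SSH-based commands are assumptions.

    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    NODES = ["controller-1", "compute-1", "compute-2"]  # hypothetical inventory

    def upgrade(node):
        # 1-2. cookbooks carry "update" directives; refresh the upstream package point
        subprocess.run(["ssh", node, "sudo apt-get update"], check=True)
        # 3. force a chef-client run so the update recipes converge now
        subprocess.run(["ssh", node, "sudo chef-client"], check=True)
        return node

    # 4. every node converges in one short window
    with ThreadPoolExecutor(max_workers=len(NODES)) as pool:
        for node in pool.map(upgrade, NODES):
            print(node, "is now at the new level")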
  8. Nodes migrate in staged groups
     • Details:
       1. Choose a subset of machines and quiesce them.
       2. Update the set
       3. Freeze state (by tenant)
       4. Migrate service/tenant content
       5. Repurpose after complete.
     • Considerations
       • Pros: Safer, more controlled, and can move tenants as needed
       • Cons: Takes longer; still has a cut-over point, but less open
     [Image credit: http://allgodscrittersgotrhythm.blogspot.com/2010_08_01_archive.html]
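A sketch of the staged-group loop above, assuming the quiesce/update/migrate/repurpose steps wrap real orchestration calls; the stub functions, node names, and batch size are placeholders.

    NODES = ["compute-1", "compute-2", "compute-3", "compute-4"]  # hypothetical
    BATCH = 2

    def quiesce(nodes): print("quiesce:", nodes)             # 1. stop scheduling new work here
    def update(nodes): print("update:", nodes)               # 2. apply new packages/cookbooks
    def migrate(nodes): print("migrate tenants:", nodes)     # 3-4. freeze and move tenant state
    def repurpose(nodes): print("back in service:", nodes)   # 5. return nodes to the pool

    for i in range(0, len(NODES), BATCH):
        batch = NODES[i:i + BATCH]
        quiesce(batch)
        update(batch)
        migrate(batch)
        repurpose(batch)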
  9. Nodes changed individually by a system-wide orchestration that supports components of multiple versions
     • Details
       1. Components must be able to straddle versions
       2. Orchestration updates core components to the new version
       3. System as a whole quiesces and is validated (requires self test)
       4. Orchestration individually migrates components (return to step 3)
     • Considerations
       • Pros: Creates a highly resilient system that handles a higher rate of change
       • Cons: More complex to create and maintain
     [Image credit: http://www.grizzlycentral.com/forum/grizzly-tire-wheel-combos/1204-upgrade-tires-grizzly.html]
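The rolling pattern above reduces to a loop that re-validates the whole system between each component move. A minimal sketch, assuming a system self test exists; the component order and the always-passing test are stand-ins.

    COMPONENTS = ["keystone", "glance", "nova", "cinder"]      # illustrative order

    def upgrade_core(): print("core orchestration updated")    # step 2
    def self_test() -> bool: return True                       # step 3 stand-in for real validation
    def upgrade(component): print(component, "migrated")       # step 4

    upgrade_core()
    for component in COMPONENTS:
        if not self_test():            # quiesce and validate before each individual move
            raise RuntimeError("system failed validation; stop and investigate")
        upgrade(component)
    if not self_test():
        raise RuntimeError("post-upgrade validation failed")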
  10. • Orchestration (not just deployment automation)
      • Awareness of physical layout is required
        • Must respect fault zones to sustain HA
        • Proximity of resources matters for migration
        • Networking transitions are essential
      • Collaboration with development teams is essential
        • Components must support current and previous versions
        • Upgrade plan must be baked into configuration and tested
        • Upgrade dependencies must be 1) clear and 2) minimized
      • HA complicates upgrades
        • Upgrade can be detected as a failure
        • HA system must be able to bridge versions
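One way to respect fault zones during an upgrade, sketched below with a hypothetical rack layout, is to batch nodes by zone so HA peers in other zones keep serving while one zone is touched.

    from collections import defaultdict

    NODE_ZONES = {                       # hypothetical physical layout
        "compute-1": "rack-a", "compute-2": "rack-a",
        "compute-3": "rack-b", "compute-4": "rack-b",
    }

    by_zone = defaultdict(list)
    for node, zone in NODE_ZONES.items():
        by_zone[zone].append(node)

    # upgrade one fault zone at a time so HA is never lost across the board
    for zone, nodes in by_zone.items():
        print("upgrading", zone, ":", nodes)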
  11. • Partial features were confusing
      • We wanted to get ahead on upgrade
      • It looked like dev jumped to Grizzly
      • Good news:
        • Some testing of upgrade
        • Folsom to Grizzly ops was pretty smooth
      • Bad news:
        • Grizzly is more complex (more moving parts)
        • Missing multi-node upgrade validation
  12. [Diagram: OpenStack services and shared dependencies - DB, Oslo, Keystone, Message Bus, Client, Glance, Nova, Compute, Dashboard, Cinder, Ceilometer, Quantum]
  13. • Fault tolerance on BOTH SIDES AND VERSIONS
      • Same version = EASY
      • Backwards version = HARD
      • Forward version = IMPOSSIBLE
      [Diagram labels: Keystone Grizzly, Keystone Havana, Nova Havana, "Easy"]
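The compatibility rule on the slide above can be written down as a one-line check; the release list and function below are illustrative, not an OpenStack API.

    RELEASES = ["folsom", "grizzly", "havana"]   # ordered oldest to newest

    def server_supports(server: str, client: str) -> bool:
        # same version = easy, backwards = hard but possible, forward = impossible
        return RELEASES.index(client) <= RELEASES.index(server)

    assert server_supports("havana", "havana")       # same version: easy
    assert server_supports("havana", "grizzly")      # backwards: hard
    assert not server_supports("grizzly", "havana")  # forward: impossible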
  14. • We want to limit the need to sustain old services
      • New versions should support past APIs
      • API consumers can migrate in steps
      [Diagram labels: Nova Grizzly, Keystone Havana, Grizzly API, Step 2, Nova Havana, Step 3]
      Ideally, both server AND client would be multi-version
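As a sketch of "new versions should support past APIs", a single service can answer both its current and its previous API shape so consumers migrate in steps; the version labels and payload shapes are assumptions, not the real Keystone/Nova interfaces.

    def handle_request(api_version: str, payload: dict) -> dict:
        if api_version == "v2":                            # current API
            return {"status": "ok", "data": payload}
        if api_version == "v1":                            # previous API kept alive for old clients
            return {"status": "ok", "data": {"legacy": payload}}
        raise ValueError("unsupported API version: " + api_version)

    print(handle_request("v1", {"name": "demo"}))   # un-migrated consumers still work
    print(handle_request("v2", {"name": "demo"}))   # migrated consumers use the new shape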
  15. • Size matters
        • Big steps = release based
        • Small steps = commit based
      • Small steps are digestible
        • Easier to test small steps
        • Incur less technical debt
        • Expose issues to developers while code is fresh
      • Large steps create risk
        • More combinations to test
        • More changes at one time
        • Difficult to fix design issues
      [Diagram: G → H]
  16.-18. [Diagram, repeated across three slides: a map of migration alternatives along two axes, Server vs. Client and Small Step vs. Large - Big Bang!, Forced Client Migration, Protocol Stepping, Protocol Driven Rolling Upgrade, Parallel Operation, Continuous Deploy, Staged Upgrade]
  19. • Servers & agents must be version tolerant
      • Client protocols must be testable and documented
      • Ensure non-destructive migration
      • Fast-fail on client, but version tolerant on server
      • The expectation that servers will migrate needs to be built into the system! Servers must adopt the latest protocols or clients will not follow.
      • Servers must test legacy clients/protocols! We must have tests!
      • We must be able to find and upgrade legacy clients
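"Fast-fail on client, but version tolerant on server" might look like the sketch below: the client refuses to talk to a server release it was never tested against, instead of limping along. The version strings are assumptions.

    SUPPORTED_SERVER_VERSIONS = {"2013.1", "2013.2"}   # what this client was tested against

    def connect(server_version: str):
        if server_version not in SUPPORTED_SERVER_VERSIONS:
            # fail fast and loudly so legacy clients get found and upgraded
            raise RuntimeError("client does not support server version " + server_version)
        print("connected to server", server_version)

    connect("2013.2")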
  20. • Deployment: upstream cookbooks/modules
      • Best practice discussions
      • Code for upgradeability
      • Crowbar collaboration
        • Upgrade is a FEATURE!
        • Orchestration + Chef
        • Pull-from-source deployments
        • System discovery
        • Networking configuration
        • Operating system install
      [Image credit: http://farm3.static.flickr.com/2561/3891653055_262410bc31.jpg]
