Summit 2013 spring rob hirschfeld migrations v1
 

    Presentation Transcript

    • Rob Hirschfeld, Dell Distinguished Engineer - http://lifeatthebar.com
    • This session could repeat a lot from last summit
      • http://www.openstack.org/summit/san-diego-2012/openstack-summit-sessions/presentation/getting-from-folsom-to-grizzly-a-devops-upgrade-pattern
    • Interoperability & Reference Architecture
      • Reference Architecture w/ Heat (Tuesday @ 11:00)
      • Interop Panel (Tuesday @ 5:20)
    • Upgrade Projects
      • https://wiki.openstack.org/wiki/Upgrade-with-minimal-downtime
      • https://wiki.openstack.org/wiki/Grenade
    • The “Problem” with Migration
    • Paths to Nirvana (or Roads to Perdition)
    • Alternatives
    • An Opinion
    • Discussion
      (image: http://learn.genetics.utah.edu/content/begin/cells/organelles/)
    • OpenStack has a 3-month release cycle of major/minor versions
      • Major version every 6 months
      • Minor (but important) versions 3 & 6 months after release
    • Lots of changes
      • Bugs are fixed
      • Operating systems upgrade
      • New technologies appear
      • Whole projects are split off
    • We expect operators to
      • Keep systems running
      • Never lose data
      • And… stay up to date
      (image: http://cdn2.arkive.org sockeye-salmon-predated-by-grizzly-bear-on-migration-upstream.jpg)
    • What are we upgrading?
      • OpenStack - Yes!
      • Dependent packages - Probably?
      • Base OS - Maybe?
    • What is the state during the “in-between” time?
      • Infrastructure downtime?
      • VM downtime? VM reboot? Controlled/informed?
      • Availability windows?
    • What contingency plans?
      • Dry run? Maybe.
      • Recover by going backwards? Maybe.
    • What level of safety and trust do you need?
      • Assure data integrity?
      • Assure infrastructure integrity?
      • Maintain security?
    • How long can the migration take?
      • Big-bang move or gradual migration?
      • How will my API consumers/ecosystem cope?
      • Can Keystone Grizzly work with Folsom Nova???
      • What about futures? G.1 to G.2? H to I?
      • Can I skip versions? Jump from G to I?
      (image: http://www.publicdomainpictures.net, “Steep Steps” by Peter Griffin)
    • Beginning answers
      • Distros will manage dependencies and packaging
      • We can’t lose data or compromise security
      • Infrastructure state and integrity will vary by solution
    • Assumption of staging
      • Some managed environment (not a manual deploy)
      • A staging/test environment to get “familiar” with the problem
      • A maintenance window for production - limits the scope of change
      • Step-wise changes are OK (a big bang is not required)
      • We can make trade-offs to defray expensive requirements
    • Beyond assumptions… paradigm shifts
      • There are shared best practices
      • Upgrades can be automated in a sharable way
      (image: http://www.theemailadmin.com/wp-content/uploads/2012/09/GFI229-hot-water-migration.jpg)
    • All the nodes update to the latest code in a short time window
    • Details:
      1. Cookbooks include update (instead of install) directives
      2. Control the upstream package point (e.g. apt-get update when appropriate)
      3. Force a chef-client run
      4. Now at the new level
    • Considerations
      • Pros: potentially fast, continuous operation
      • Cons: don’t mess up - it is your production environment
      • Scope: security updates
      • Code assumptions:
        • The system can function through service restarts
        • Underlying data models don’t change, or migrate appropriately
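The four steps above can be sketched as a small driver loop. This is a minimal, hypothetical illustration: the node names and the `pin_repo`/`run_agent` hooks are stand-ins for real tooling (apt pinning, a forced chef-client run), not part of the original talk.

```python
# Sketch of the "big bang" pattern: every node converges to the new
# version inside one short maintenance window.

def big_bang_update(nodes, target_version, pin_repo, run_agent):
    """Update every node to target_version in one pass."""
    pin_repo(target_version)              # step 2: control the upstream package point
    results = {}
    for node in nodes:                    # step 3: force a config-agent run per node
        results[node] = run_agent(node, target_version)
    return results                        # step 4: all nodes now at the new level

# Usage with stubbed-out tooling (hypothetical node names):
pinned = []
out = big_bang_update(
    ["control-1", "compute-1", "compute-2"],
    "grizzly",
    pin_repo=pinned.append,
    run_agent=lambda node, version: version,
)
```

The stub makes the "cons" bullet concrete: there is no partial state to fall back to, so a bad `run_agent` step lands on the whole production fleet at once.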
    • Nodes migrate in staged groups
    • Details:
      1. Choose a subset of machines and quiesce them
      2. Update the set
      3. Freeze state (by tenant)
      4. Migrate service/tenant content
      5. Repurpose after complete
    • Considerations
      • Pros: safer, more controlled, and can move tenants as needed
      • Cons: takes longer; still has a cut-over point, but a less open one
      (image: http://allgodscrittersgotrhythm.blogspot.com/2010_08_01_archive.html)
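The staged-group loop can be sketched as follows. The `quiesce`, `update`, and `migrate_tenants` callables are assumed placeholders for the real operational steps; only the group-at-a-time structure comes from the slide.

```python
# Sketch of the staged pattern: walk the fleet in small groups so that
# only one group is ever in flux at a time.

def staged_migration(nodes, group_size, quiesce, update, migrate_tenants):
    """Migrate nodes group by group; return nodes in completion order."""
    done = []
    for i in range(0, len(nodes), group_size):
        group = nodes[i:i + group_size]
        quiesce(group)           # 1. choose a subset of machines and quiesce them
        update(group)            # 2. update the set
        migrate_tenants(group)   # 3-4. freeze state and migrate tenant content
        done.extend(group)       # 5. repurpose the group once complete
    return done

# Usage with a simple event log (hypothetical node names):
log = []
result = staged_migration(
    ["n1", "n2", "n3", "n4", "n5"], 2,
    quiesce=lambda g: log.append(("quiesce", tuple(g))),
    update=lambda g: log.append(("update", tuple(g))),
    migrate_tenants=lambda g: log.append(("migrate", tuple(g))),
)
```

The event log shows the trade-off named in the considerations: three groups mean three full quiesce/update/migrate cycles, so the window is longer but each cut-over touches only a slice of capacity.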
    • Nodes are changed individually by a system-wide orchestration that supports components of multiple versions
    • Details:
      1. Components must be able to straddle versions
      2. Orchestration updates core components to the new version
      3. The system as a whole quiesces and is validated (requires a self test)
      4. Orchestration individually migrates components (return to step 3)
    • Considerations
      • Pros: creates a highly resilient system that handles a higher rate of change
      • Cons: more complex to create and maintain
      (image: http://www.grizzlycentral.com/forum/grizzly-tire-wheel-combos/1204-upgrade-tires-grizzly.html)
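The validate-then-migrate loop (steps 3 and 4) can be sketched like this. The `migrate_one` and `self_test` hooks are assumed for illustration; a production orchestrator would also need rollback, which the slide does not cover.

```python
# Sketch of the orchestrated pattern: migrate one component at a time,
# validating the whole system between every move.

def orchestrated_upgrade(components, migrate_one, self_test):
    """Migrate components individually, gated by a system self-test."""
    for comp in components:
        if not self_test():            # step 3: quiesce and validate before each change
            raise RuntimeError("self-test failed before migrating %s" % comp)
        migrate_one(comp)              # step 4: migrate one component...
    if not self_test():                # ...then return to validation once more
        raise RuntimeError("post-upgrade self-test failed")
    return list(components)

# Usage with stub hooks (hypothetical component names):
migrated = []
out = orchestrated_upgrade(
    ["keystone", "glance", "nova"],
    migrate_one=migrated.append,
    self_test=lambda: True,
)
```

Gating every step on a self-test is what buys the resilience named in the pros: a failing check halts the upgrade with only one component in flux.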
    • Orchestration (not just deployment automation)
    • Awareness of physical layout is required
      • Must respect fault zones to sustain HA
      • Proximity of resources matters for migration
      • Networking transitions are essential
    • Collaboration with development teams is essential
      • Components must support current and previous versions
      • The upgrade plan must be baked into configuration and tested
      • Upgrade dependencies must be 1) clear and 2) minimized
    • HA complicates upgrades
      • An upgrade can be detected as a failure
      • The HA system must be able to bridge versions
    • Partial features were confusing
    • We wanted to get ahead on upgrade
    • It looked like dev jumped to Grizzly
    • Good news:
      • Some testing of upgrade
      • Folsom-to-Grizzly ops was pretty smooth
    • Bad news:
      • Grizzly is more complex (more moving parts)
      • Missing multi-node upgrade validation
    • (Component diagram: DB, Oslo, Keystone, Msg Bus, Client, Glance, Nova Compute, Dashboard, Cinder, Ceilometer, Quantum)
    • Fault tolerance is needed on BOTH sides AND versions
    • Same version = EASY
    • Backwards version = HARD
    • Forward version = IMPOSSIBLE
      (slide diagram: Keystone/Nova pairings across Grizzly and Havana)
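The easy/hard/impossible matrix reduces to a simple ordering rule. The numeric versions below are an illustrative stand-in for release names (e.g. Grizzly = 7, Havana = 8); the rule itself is from the slide.

```python
# Sketch of the version-pairing rule: compare the server's protocol
# version to the client's.

def compat(server_version, client_version):
    """Classify a server/client version pairing."""
    if server_version == client_version:
        return "easy"                # same version: no translation needed
    if server_version > client_version:
        return "hard"                # backwards: server must tolerate old protocols
    return "impossible"              # forward: client speaks a protocol the server lacks

GRIZZLY, HAVANA = 7, 8               # toy numbering for illustration
```

For example, `compat(HAVANA, GRIZZLY)` is the "hard" case the deck spends most of its time on: a newer server carrying older clients through the migration.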
    • We want to limit the need to sustain old services
    • New versions should support past APIs
    • API consumers can migrate in steps
      (slide diagram: stepped Nova/Keystone migration from Grizzly to Havana)
    • Ideally, both server AND client would be multi-version
    • Size matters
      • Big steps = release based
      • Small steps = commit based
    • Small steps are digestible
      • Easier to test small steps
      • Incur less technical debt
      • Expose issues to developers while the code is fresh
    • Large steps create risk
      • More combinations to test
      • More changes at one time
      • Difficult to fix design issues
    • Upgrade decision dimensions (slide word cloud):
      • Big Bang! vs Rolling Upgrade
      • Forced Client Migration vs Protocol Driven Stepping
      • Server vs Client
      • Parallel Operation
      • Continuous Deploy vs Staged Upgrade
      • Small Step vs Large Step
    • Servers & agents must be version tolerant
    • Client protocols must be testable and documented
    • Ensure non-destructive migration
    • Fast-fail on the client, but version tolerance on the server
    • The expectation that servers will migrate needs to be built into the system! Servers must adopt the latest protocols or clients will not follow.
    • Servers must test legacy clients/protocols! We must have tests!
    • We must be able to find and upgrade legacy clients
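The "tolerant server, fast-failing client" rule can be sketched as a request handler. The protocol version strings and the `SUPPORTED` set are assumptions for illustration; the policy (accept current and previous, reject everything else loudly) is the slide's.

```python
# Sketch: the server tolerates the current and previous protocol
# versions; anything else fails fast with an explicit error, so that
# legacy clients can be found and upgraded rather than silently mis-served.

SUPPORTED = {"v2", "v3"}   # previous + current protocol versions (assumed)

def handle_request(protocol_version, payload):
    """Serve tolerated versions; fast-fail unknown ones."""
    if protocol_version not in SUPPORTED:
        return {"error": "unsupported protocol",
                "supported": sorted(SUPPORTED)}
    return {"ok": True, "version": protocol_version, "payload": payload}
```

Returning the supported set in the error is one way to make rejected legacy clients self-identifying, which is what the "find and upgrade legacy clients" bullet requires.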
    • Deployment upstream: cookbooks/modules
    • Best-practice discussions
    • Code for upgradeability
    • Crowbar collaboration
      • Upgrade is a FEATURE!
      • Orchestration + Chef
      • Pull-from-source deployments
      • System discovery
      • Networking configuration
      • Operating system install
      (image: http://farm3.static.flickr.com/2561/3891653055_262410bc31.jpg)