Anatomy of a Build Pipeline

You've heard about Continuous Integration and Continuous Delivery, but how do you get code from your machine to production in a rapid, repeatable manner? Let a build pipeline do the work for you! Sam Brown will walk through the how, the when, and the why of the various aspects of a Continuous Delivery build pipeline, and how you can start tomorrow implementing changes to realize build automation. This talk starts with an example pipeline and then goes into depth on each section, detailing the pros and cons of different steps and why you should include them in your build process.

  • 11+ years as a Java developer; 6 years practicing continuous integration/continuous delivery; DevOps evangelist; CSM; Puppet certified.
  • Some assumptions about enterprises tackling automation: they possess some standard components to automate for building shared APIs, products, and/or custom web applications. Building software happens mostly at a very micro level when viewed across the enterprise. Ignoring business logic, there are still a LOT of places software could fail in this view.
  • Eliminate defects in: the process and the product.
  • …in fact, NONE ARE! Build pipelines vary. Different teams, different needs. Start simply.
  • Our use-case pipeline: building a web-services-based web application; has an environment build; fork/join; does NOT flow all the way to production.
  • System of record: just use it! Commit hooks. Build trunk. Tag often (it's cheap). No broken code.
  • Under 10 minutes. No external resources. Every check-in. Fix broken builds. 80% coverage. Challenges: lots of builds, a false sense of security, writing tests is hard.
  • Test connectivity. Test frameworks. Test components. Test config. Fewer tests than unit tests. Challenges: external resources, time consuming, local resources, splitting tests.
  • Check syntax. Find security issues. Record test coverage. Discover complexity and areas of focus. Fail when some metric is not met. Check out technical debt. Challenges: finding a free tool, learning/integrating these tools.
  • Labeling snapshots your code. Package for easier deployment. Steps can be combined. No config in the package! Challenges: labeling may create copies of the code base, many packaging options (RPM).
  • Make artifacts available. Always version. Make the repo available to all. Combine steps. Challenges: complex setup, security challenges in exposing artifacts.
  • Our use-case pipeline: building a web-services-based web application; has an environment build; fork/join; does NOT flow all the way to production.
  • Infrastructure as code! Puppet, Chef, CFEngine, and batch scripts should all be in version control just like application code.
  • Check infrastructure language syntax. A no-op run checks that the manifests compile and would apply. Challenges: requires Ruby, needs a prod-like VM, long feedback loop.
  • Apply changes to a prod-like VM. Run tests to ensure the infrastructure is ready. Challenges: long feedback loop, another language to learn, an up-to-date VM is needed.
  • Our use-case pipeline: building a web-services-based web application; has an environment build; fork/join; does NOT flow all the way to production.
  • Bring the sub-pipelines together for a full run. Test that the application runs end to end, from the end-user perspective, and meets acceptance criteria. Challenges: up-to-date VM, brittle acceptance tests, long-running tests.
  • Label/tag infrastructure and code; they go together! Deploy to DEV for additional developer testing. Test things that can't be automated? Challenges: DEV is updated here, should it start from scratch? Security! Is a DEV deployment necessary? Where else could this apply?
  • Our use-case pipeline: building a web-services-based web application; has an environment build; fork/join; does NOT flow all the way to production.
  • Flipped which side seems simple and which side seems hard!
  • Pull-based deployment. Manual testing and approval. Challenges: a change in process/paradigm; not every build needs manual testing! A mind shift!
  • Push button to production – SCARY! Requires testing approval. Challenges: auditing/security – where does this happen? (Automate, collaborate.) A change for operations (this feels too easy). Rollback/roll-forward strategy (RPMs make this easier; roll forward is my preference).
  • Remove manual processes and human error. Repeatability to test and improve the build process. Visibility for the entire team. Quality is “baked in”. Metrics on anything you want to measure to gain insight. Rapid and constant feedback at all stages. Releases become non-events (hopefully).
  • Why do we keep reams of versions? Are we going back? Auditing? My view: store the latest build and the current production release ONLY. Bugs are fixed in the next deployment. Environments are difficult to reproduce. Version control has your history. An exception might be creating APIs. Frequent delivery allows you to continue pushing forward instead of looking backwards.
  • Version control. Start simple with unit test coverage. Analyze your code -> shows you where to focus effort. Install CI and start with 2 build steps. START A WIKI!!

Anatomy of a Build Pipeline: Presentation Transcript

  • Sam Brown, samuel.brown@excella.com, November 7, 2012
  • Thanks to Mike McGarr and Excella Consulting for hosting!!
  • Sam Brown 11+ Years as a Java developer with commercial and federal clients Practicing continuous integration/continuous delivery for ~6 years DevOps Evangelist at Excella (www.excella.com) Certified Scrum Master Puppet Certified Professional
  • Basic components of an automated enterprise: continuous integration, dependency management, and automated build tools to build... shared API libraries, custom web applications, and products.
  • “The purpose of a pipeline is to transport some resource from point A to point B quickly and effectively with minimal upkeep or attention required once built” – me. So how did 'pipelines' get applied to software? Let's try a few changes to this statement... “The purpose of a pipeline is to transport ____ from ____ to ____ quickly and effectively with minimal upkeep or attention required once built” – me
  • Build pipelines require measurements and verification of the code to ensure: adherence to standards; quality proven through testing; a product that meets the user's needs. The purpose is not just transport, but to ensure that our product is high quality, prepared for the environment it will reach, and satisfies the end-user.
  • “An automated manifestation of the process required to get your team's application code to the end-user, typically implemented via a continuous integration server, with emphasis on eliminating defects” – me (again)
  • One Size Fits All! …in fact, NONE ARE! Build pipelines will vary as much as applications. Different teams have different needs. Simplicity is key.
  • A repeatable, automated process to ensure that application code is tested, analyzed, and packaged for deployment.
  • System of record: just do it! Take advantage of commit hooks. Build from trunk and reduce server-side branches. Tag often. Don't check in broken code!
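
The deck doesn't include hook code; as one hedged illustration of "take advantage of commit hooks", a server-side git post-receive hook can simply notify the CI server so every check-in to trunk kicks off the commit-stage build. The Jenkins-style trigger URL, job name, and token below are placeholders, not from the talk.

    #!/usr/bin/env python3
    # Server-side git hook (e.g. hooks/post-receive): ping the CI server so
    # every push to trunk triggers the commit-stage build.
    # The CI URL, job name, and token are placeholders for illustration.
    import sys
    import urllib.request

    TRIGGER_URL = "http://ci.example.com/job/commit-stage/build?token=SECRET"

    def main():
        try:
            urllib.request.urlopen(TRIGGER_URL, timeout=10)
            print("Triggered commit-stage build")
        except OSError as err:   # never block the push on a CI hiccup
            print("Could not reach CI server: %s" % err, file=sys.stderr)

    if __name__ == "__main__":
        main()
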
  • Purpose: integrate, build, and unit test code for quick feedback. Best practices: runs in under 10 minutes (rapid feedback); unit tests do not require external resources; run on EVERY developer check-in; fixing broken builds is the top priority; gamification to drive adoption; 80% test coverage or BETTER. Challenges: LOTS of builds; a false sense of security; writing tests is hard.
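
The slide states the ground rules without showing a test. A minimal sketch of "no external resources" using Python's unittest and a stubbed collaborator; the PriceCalculator class and its tax service are hypothetical, purely to illustrate the idea.

    import unittest
    from unittest import mock

    # Hypothetical production code: a calculator that normally asks a
    # remote tax service for a rate. The unit test stubs that call out,
    # so it runs in milliseconds with no network or database.
    class PriceCalculator:
        def __init__(self, tax_service):
            self.tax_service = tax_service

        def total(self, net_amount):
            rate = self.tax_service.rate_for("VA")
            return round(net_amount * (1 + rate), 2)

    class PriceCalculatorTest(unittest.TestCase):
        def test_total_applies_tax_rate(self):
            tax_service = mock.Mock()
            tax_service.rate_for.return_value = 0.05   # no external call
            calc = PriceCalculator(tax_service)
            self.assertEqual(calc.total(100.00), 105.00)

    if __name__ == "__main__":
        unittest.main()
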
  • Purpose: test component and/or external resource integration. Best practices: test connectivity with external resources; test that frameworks load correctly; test that application components work together; test configuration; write fewer integration tests than unit tests. Challenges: external resources may not be available in all environments (mock them locally); integration tests can be time consuming (use local resources; separate short and long-running tests).
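
As a rough sketch (not from the deck) of "use local resources" and "separate short/long running tests": an integration check that only runs when the pipeline provides a database to talk to, here faked with a local SQLite file. The environment variable name is made up.

    import os
    import sqlite3
    import unittest

    # Integration-style checks live in their own suite and only run when
    # the pipeline supplies a resource to talk to; a local SQLite file
    # stands in for the real database. Env var name is illustrative.
    DB_PATH = os.environ.get("INTEGRATION_DB")

    @unittest.skipUnless(DB_PATH, "no integration database configured")
    class DatabaseConnectivityTest(unittest.TestCase):
        def test_can_connect_and_query(self):
            conn = sqlite3.connect(DB_PATH)
            try:
                self.assertEqual(conn.execute("SELECT 1").fetchone()[0], 1)
            finally:
                conn.close()

    if __name__ == "__main__":
        unittest.main()
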
  • Purpose: use automated tools to inspect code. Best practices: check syntax; find security vulnerabilities; record test coverage; discover complexity; optionally, fail based on a metric; optionally, view technical debt. Challenges: not all code analysis tools are free; learning/installing new tools.
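
One hedged example of "fail based on a metric": a small build step that reads a coverage report and breaks the build below a threshold. It assumes a Cobertura-style coverage.xml with a line-rate attribute on the root element, a format many coverage tools can emit; the 80% figure mirrors the earlier slide.

    #!/usr/bin/env python3
    # Break the build when line coverage drops below a threshold.
    # Assumes a Cobertura-style coverage.xml (line-rate on the root node).
    import sys
    import xml.etree.ElementTree as ET

    THRESHOLD = 0.80

    def main(report_path="coverage.xml"):
        line_rate = float(ET.parse(report_path).getroot().get("line-rate"))
        print("Line coverage: %.1f%%" % (line_rate * 100))
        if line_rate < THRESHOLD:
            print("Coverage below %.0f%%, failing the build" % (THRESHOLD * 100))
            sys.exit(1)

    if __name__ == "__main__":
        main(*sys.argv[1:])
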
  • Purpose: label code and package it as a deployable. Best practices: labeling allows you to go back in time; package code for deployment; reduce complexity by combining steps; NO configuration in the package -> package once, deploy many times. Challenges: labeling can be resource intensive; many packaging options.
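
A sketch of combining the label and package steps: tag the commit with the CI build number and produce one versioned archive with environment config left out, so the same artifact can be promoted unchanged. The paths, tag scheme, and version format are illustrative, not from the talk.

    #!/usr/bin/env python3
    # Label the build and package it as a single versioned artifact.
    # Build number comes from the CI server; paths and naming are made up.
    import os
    import subprocess
    import tarfile

    VERSION = "1.0.%s" % os.environ.get("BUILD_NUMBER", "0")

    def label():
        subprocess.check_call(["git", "tag", "-a", "build-%s" % VERSION,
                               "-m", "CI build %s" % VERSION])

    def package():
        # Environment-specific config stays OUT of the package so the same
        # artifact can move through TEST and PROD unchanged.
        with tarfile.open("myapp-%s.tar.gz" % VERSION, "w:gz") as tar:
            tar.add("build/myapp", arcname="myapp-%s" % VERSION,
                    filter=lambda ti: None if "config/" in ti.name else ti)

    if __name__ == "__main__":
        label()
        package()
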
  • Purpose: make artifacts available for deployment or available to other teams. Best practices: publish a versioned artifact; make the repository available; reduce complexity by combining steps. Challenges: requires an initially complex setup; security requirements around exposing artifacts (use a tool with security built in, like Nexus).
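
A hedged sketch of publishing a versioned artifact over HTTP; the repository URL, layout, and credentials are placeholders, and a real Nexus/Maven setup would normally use its own deploy tooling instead.

    #!/usr/bin/env python3
    # Publish the versioned artifact to an internal repository over HTTP PUT.
    # Repository URL, layout, and credentials are placeholders.
    import base64
    import sys
    import urllib.request

    REPO_URL = "http://repo.example.com/releases"
    USER, PASSWORD = "ci", "secret"

    def publish(path, version):
        with open(path, "rb") as artifact:
            data = artifact.read()
        url = "%s/myapp/%s/%s" % (REPO_URL, version, path)
        request = urllib.request.Request(url, data=data, method="PUT")
        token = base64.b64encode(("%s:%s" % (USER, PASSWORD)).encode()).decode()
        request.add_header("Authorization", "Basic " + token)
        urllib.request.urlopen(request)
        print("Published", url)

    if __name__ == "__main__":
        publish(sys.argv[1], sys.argv[2])
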
  • A repeatable, automated process to ensure that our target environment is properly constructed for our application(s).
  • Purpose: check syntax and compilation prior to application. puppet-lint: a static format checker for Puppet manifests. No-op test run: ensure that the manifest compiles. Challenges: puppet-lint requires a Ruby-based environment; the no-op test needs a production-like VM; long feedback loop.
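
These two checks usually reduce to two commands; a small wrapper (the manifest path is illustrative) that fails the stage if either puppet-lint or the no-op apply exits non-zero:

    #!/usr/bin/env python3
    # Infrastructure commit stage: style-check the manifests, then do a
    # no-op run on a prod-like VM to prove they compile and would apply.
    # The manifest path is a placeholder.
    import subprocess
    import sys

    CHECKS = [
        ["puppet-lint", "manifests/"],
        ["puppet", "apply", "--noop", "manifests/site.pp"],
    ]

    for command in CHECKS:
        print("Running:", " ".join(command))
        if subprocess.call(command) != 0:
            sys.exit("Infrastructure check failed: %s" % " ".join(command))
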
  • Purpose: test infrastructure in a prod-like environment. Puppet apply: apply Puppet against a VM that mimics DEV/TEST/PROD. Infrastructure tests: test your environment! Example tests: users and groups created; packages installed; services running; firewall configured. Challenges: long feedback loop; yet another language (Cucumber/RSpec/other); the VM must be kept up to date with DEV/TEST/PROD.
  • cucumber-puppet and rspec-puppet examples:

    cucumber-puppet (http://projects.puppetlabs.com/projects/cucumber-puppet/wiki):

        Feature: Services
          Scenario Outline: Service should be running and bind to port
            When I run `lsof -i :<port>`
            Then the output should match /<service>.*<user>/

            Examples:
              | service | user     | port |
              | master  | root     | 25   |
              | apache2 | www-data | 80   |
              | dovecot | root     | 110  |
              | mysqld  | mysql    | 3306 |

    rspec-puppet (http://rspec-puppet.com/):

        require 'spec_helper'

        describe 'logrotate::rule' do
          let(:title) { 'nginx' }

          it { should include_class('logrotate::rule') }

          it do
            should contain_file('/etc/logrotate.d/nginx').with({
              'ensure' => 'present',
              'owner'  => 'root',
              'group'  => 'root',
              'mode'   => '0444',
            })
          end
        end
  • A repeatable, automated process to ensure that our application is properly installed in the target environment and that the application meets acceptance criteria.
  • Purpose: test acceptance criteria in a prod-like environment. Puppet apply: apply Puppet manifests, including deploying the application. Run acceptance tests: “end-to-end” testing from the end-user perspective against user-defined acceptance criteria; possible tools: Cucumber, Selenium, Geb, Sikuli. Challenges: maintaining a production-like VM; acceptance tests are brittle (test at the right level); acceptance tests are long running (run them nightly).
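
The deck names the tools but shows no test; a minimal Selenium (Python bindings) sketch of an end-user-perspective check. The URL, form fields, and expected text are hypothetical.

    import unittest
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    # End-to-end check from the user's perspective against the freshly
    # provisioned, prod-like environment. URL and element names are made up.
    BASE_URL = "http://acceptance-vm.example.com"

    class LoginAcceptanceTest(unittest.TestCase):
        def setUp(self):
            self.driver = webdriver.Firefox()

        def tearDown(self):
            self.driver.quit()

        def test_user_can_log_in(self):
            self.driver.get(BASE_URL + "/login")
            self.driver.find_element(By.NAME, "username").send_keys("qa-user")
            self.driver.find_element(By.NAME, "password").send_keys("secret")
            self.driver.find_element(By.ID, "login-button").click()
            self.assertIn("Welcome", self.driver.page_source)

    if __name__ == "__main__":
        unittest.main()
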
  • Purpose: label application and infrastructure code, deploy to the DEV environment. Label release candidate: known “accepted” versions will be deployed together. Deploy to DEV: automated deployment of infrastructure AND application. Challenges: DEV is updated, not deployed from scratch (create tests for ALL possible scenarios); security (work with security early and often!).
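
A sketch of "known accepted versions deployed together": the application repo and the infrastructure repo get the same release-candidate tag before the automated DEV deployment. Repository paths, the tag scheme, and the deploy command are placeholders, not from the talk.

    #!/usr/bin/env python3
    # Label a release candidate: the application repo and the infrastructure
    # repo get the same tag, so "RC-42" always names a matched pair.
    # Paths, tag scheme, and the deploy command are illustrative.
    import os
    import subprocess

    RC_TAG = "RC-%s" % os.environ.get("BUILD_NUMBER", "0")
    REPOS = ["/srv/ci/checkouts/myapp", "/srv/ci/checkouts/myapp-infra"]

    for repo in REPOS:
        subprocess.check_call(["git", "tag", "-a", RC_TAG, "-m",
                               "Release candidate %s" % RC_TAG], cwd=repo)
        subprocess.check_call(["git", "push", "origin", RC_TAG], cwd=repo)

    # Hand off to the automated DEV deployment (infrastructure AND application).
    subprocess.check_call(["./deploy.py", "--env", "dev", "--release", RC_TAG])
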
  • Simplified process to support streamlined deployments to TEST and PRODUCTION
  • Purpose: enable the test team to pull the latest code. Pull-based deployment; manual testing/approval. Challenges: enabling the test team is a paradigm shift; producing changes too fast (create good release notes; not every build needs manual testing).
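
A hedged sketch of what "pull-based" can look like in practice: a script the test team runs themselves to fetch a chosen build from the artifact repository and install it. The repository URL and the RPM install step are placeholders.

    #!/usr/bin/env python3
    # Pull-based deployment: the TEST team runs this themselves to pull the
    # build they want to evaluate. Repository URL and install command are
    # placeholders for whatever the team actually uses.
    import subprocess
    import sys
    import urllib.request

    REPO_URL = "http://repo.example.com/releases/myapp"

    def pull_and_install(version):
        artifact = "myapp-%s.rpm" % version
        urllib.request.urlretrieve("%s/%s/%s" % (REPO_URL, version, artifact),
                                   artifact)
        subprocess.check_call(["sudo", "rpm", "-Uvh", artifact])

    if __name__ == "__main__":
        pull_and_install(sys.argv[1])   # e.g. ./pull_to_test.py 1.0.42
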
  • Purpose: enable the operations team to pull the latest code into production. “Push-button” deployment to production; requires testing approval. Challenges: audit/security check before deployment (discuss with operations; automate as much as possible and prudent); paradigm shift for operations, it feels TOO EASY (engage the operations team early and often); rollback/roll-forward strategy (easier with RPMs; I prefer roll forward).
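
A sketch of a push-button production step that still enforces the testing-approval gate; installing an explicit version number is also what keeps roll-forward (or rollback) simple with RPMs. The approval marker, package name, and install command are placeholders.

    #!/usr/bin/env python3
    # "Push-button" production deployment with a testing-approval gate.
    # The approval file, package name, and install command are placeholders;
    # deploying an explicit version keeps roll-forward/rollback simple.
    import os
    import subprocess
    import sys

    def deploy(version):
        approval_marker = "/srv/ci/approvals/myapp-%s.approved" % version
        if not os.path.exists(approval_marker):
            sys.exit("Version %s has not passed testing approval" % version)
        subprocess.check_call(["sudo", "yum", "install", "-y",
                               "myapp-%s" % version])
        print("Deployed myapp %s to production" % version)

    if __name__ == "__main__":
        deploy(sys.argv[1])
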
  • Remove human error. Repeatability tests and improves the process. Visibility from code to deployment. Baked-in quality. Metrics, metrics, metrics. Rapid and constant feedback. Releases are non-events.
  • Why do we store old/obsolete versions? Rollback? Auditing? History? Any other reason? My view: store only the latest build and the current production release. Bugs are fixed in the latest version. Environments are (almost) impossible to reproduce. Version control has the history. Exception: other teams depend on a previous version (store major/minor revisions). Reasoning: in a continuous delivery environment, delivering frequently allows you to keep moving forward with new features AND bug fixes!
  • Put EVERYTHING in version control. Start simple; up your unit test coverage. Analyze your code in order to focus. Install CI and start with two build steps. Start and maintain a wiki. And lastly…
  •  samuel.brown@excella.com @SamuelBrownIV http://github.com/samueltbrown http://www.linkedin.com/pub/samuel-brown/3/715/352