Continuous delivery pipelines allow developers to automatically compile, test, and store code in an artifact repository, giving development teams a reliable system that saves time. The document discusses tools and practices for continuous delivery pipelines, including infrastructure as code, code quality analysis, and telemetry. It also covers concepts like feature flags, silent deployments, and using analytics to draw insights from telemetry data and improve applications proactively.
4. The DevOps cycle (diagram)
▪ Plan + Track – Agile planning: delivery plans, dashboards, Kanban boards
▪ Develop + Test – Build and test: Git source control, continuous integration, security scanning, open source compliance, cloud-based device testing
▪ Release – Continuous delivery, functional testing, release management
▪ Monitor + Learn – Application analytics, logging & operations analytics, mobile crash reporting
5. Foundations for DevOps
▪ "I want a system where my code is automatically compiled, tested, and stored in an artifact repository"
▪ "The system is reliable and saves quality time for the development teams"
▪ "I should not have to worry about how things are done – once the process is up and running, it just works"
8. Know your code
▪ Code quality analysis at build time makes sure you aren’t releasing potential
problems
▪ If your code quality is bad you are piling up technical debt
▪ In the Build-Measure-Learn loop the Measure stage is the most important one
▪ Unless you keep up with it from the beginning, it is going to clog up your project
▪ Only metrics will tell you whether you are going in the right direction, so it is critical to 'get them right'
▪ Extend your product to include what really matters
▪ Reduce maintenance costs
▪ Proactively understand potential problems
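The build-time quality gate described above can be sketched in a few lines of Python. This is a hypothetical helper, not any specific tool's API: it assumes a plain-text lint report with one issue per line (the shape tools like flake8 emit) and fails the build past a configurable threshold.

```python
def quality_gate(lint_report: str, max_issues: int = 0) -> bool:
    """Fail the build when the linter reports more issues than allowed.

    `lint_report` is assumed to be plain text with one issue per line.
    """
    issues = [line for line in lint_report.splitlines() if line.strip()]
    passed = len(issues) <= max_issues
    print(f"{len(issues)} issue(s) found - gate {'PASSED' if passed else 'FAILED'}")
    return passed

# Example: two issues against a zero-tolerance gate – the build stops here,
# before the technical debt can pile up.
report = "app.py:10: unused import\napp.py:42: line too long"
quality_gate(report, max_issues=0)
```

Wiring a check like this into the build (rather than running it ad hoc) is what makes the metric impossible to ignore.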
9. Infrastructure as Code
▪ It’s not a cloud-only technology…
▪ PowerShell DSC
▪ Chef
▪ Puppet
▪ …
▪ …but it is easier in the cloud
▪ Azure Resource Manager
▪ AWS CloudFormation
▪ Takes the hassle of infrastructure provisioning out of the process
▪ You define the infrastructure once, and it is deployed in the same way all the time
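The "define once, deploy the same way every time" idea can be illustrated with a toy in-memory sketch – no real provider involved, and the resource names and fields are made up. The point is idempotency: applying the same definition twice changes nothing.

```python
# Desired state is declared once, as data.
DESIRED = {
    "web-vm": {"type": "vm", "size": "small"},
    "app-db": {"type": "database", "tier": "standard"},
}

def apply(desired: dict, environment: dict) -> list[str]:
    """Create or update resources so `environment` matches `desired`."""
    actions = []
    for name, spec in desired.items():
        if environment.get(name) != spec:
            environment[name] = dict(spec)
            actions.append(f"provisioned {name}")
    return actions

env: dict = {}
print(apply(DESIRED, env))  # first run provisions everything
print(apply(DESIRED, env))  # second run: [] – already converged, nothing to do
```

Real tools like ARM templates or CloudFormation work on the same principle, just against actual cloud resources instead of a dictionary.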
10. “We have system administrators for this!”
▪ Everybody works together – sysadmins and developers, testers and architects…
▪ Automation means fewer errors due to manual interaction, and more quality time
to focus on business-relevant matters (optimisations, security, etc.)
▪ All the definitions are handled as source code
▪ Versioned
▪ Tested
▪ Documented
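Treating definitions as source code means they can be tested like code, too. A minimal sketch, assuming a hypothetical definition format with a `type` field per resource:

```python
# Allowed resource types for this (made-up) definition format.
ALLOWED_TYPES = {"vm", "database", "storage"}

def validate(definition: dict) -> list[str]:
    """Return a list of problems; an empty list means the definition is valid."""
    errors = []
    for name, spec in definition.items():
        if spec.get("type") not in ALLOWED_TYPES:
            errors.append(f"{name}: unknown type {spec.get('type')!r}")
    return errors

# Checks like this run in version control on every change,
# long before the definition ever touches an environment.
assert validate({"web-vm": {"type": "vm"}}) == []
assert validate({"oops": {"type": "toaster"}}) == ["oops: unknown type 'toaster'"]
```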
11. Building is not releasing…
▪ You should always keep build and release apart
▪ Starting a release process from a broken build is a waste of resources
▪ Release starts when all your artefacts are built and ready
▪ Infrastructure is provisioned and artefacts are deployed
▪ Here is where you have an approval flow
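The separation of build and release, plus the approval flow, can be sketched as a small state check. Names and structure are illustrative only, not tied to any specific pipeline tool:

```python
def can_start_release(build: dict) -> bool:
    """Release only ever starts from a successful build with artefacts ready."""
    return build["status"] == "succeeded" and len(build["artifacts"]) > 0

def release(build: dict, approved: bool) -> str:
    if not can_start_release(build):
        return "blocked: broken build - releasing it would waste resources"
    if not approved:
        return "waiting: approval required before deployment"
    return "deploying: provision infrastructure, then deploy artefacts"

good = {"status": "succeeded", "artifacts": ["app.zip"]}
print(release({"status": "failed", "artifacts": []}, approved=True))  # blocked
print(release(good, approved=False))                                  # waiting
print(release(good, approved=True))                                   # deploying
```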
12. A look at prerequisites
Pipelines, Code Analysis and Infrastructure as Code
13. A shift to top gear: silent deployments
▪ I can release unfinished features as part of my iterative process
▪ This is not about the user, it is about the process
▪ Quality is more important than functionality
▪ Continuous Deployment to the limit!
▪ As soon as you have something that ticks the Definition of Done, it goes straight to
production
14. Lots of pros, a few cons…
▪ More quality thanks to smaller, granular components or steps
▪ Deployments are liquid – it is proof that the pipeline you are using works well
▪ Handling data changes becomes so much easier!
▪ There is an overhead – you need to know what is in-flight or incomplete at any time
▪ One of the risks is an increase in technical debt – keeping track of what is going on is
critical, and that tracking is the overhead of this approach
15. A real example – a TFS schema change
▪ Team Foundation Server changed the schema for a portion of its database between
versions 2013 and 2015 to allow Team Project rename
▪ A large instance upgrade (size >1TB) required days of downtime
▪ From TFS 2013 Update 4 onwards a tool enabled an online (i.e. synchronised)
pre-migration for these tables, so that the upgrade proper could be done in a
weekend
▪ PRO: Less than 24hrs downtime for the upgrade
▪ CON: Ton of space required for the tool to run
17. So, naturally…
▪ It will become natural for teams to have non-aligned, rolling releases for
components
▪ Not a Lean exclusive – you can do that with Scrum or any other methodology as
well
▪ This liquid approach makes sure that the pipeline works as expected
18. What if I want even more granularity?
▪ Easy: implement Feature Flags/Toggles!
▪ Every feature lives behind a switch, which can be binary or progressive
▪ In the most advanced systems they are grouped together based on the user
target
▪ Beta tester
▪ Early adopter
▪ If something goes wrong you have a brilliant killswitch
19. How?
▪ Key-value storage, dedicated database, configuration files
▪ Easy with a few flags, harder with hundreds!
▪ It also needs to be integrated with a directory system to keep track of who-sees-
what
▪ It can be fragile – it doesn’t have to be shielded, but it is important to avoid leaks
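A minimal sketch of such a flag store in Python: binary and progressive flags, group targeting, and a killswitch. The flag names and storage layout are made up (a real system would sit behind a key-value store or database), and a hash bucket keeps progressive rollouts stable per user:

```python
import hashlib

FLAGS = {
    "new-checkout": {"enabled": True, "rollout": 50, "groups": {"beta"}},
    "dark-mode":    {"enabled": True, "rollout": 100, "groups": None},
    "old-search":   {"enabled": False, "rollout": 100, "groups": None},  # killswitch
}

def bucket(user_id: str, flag: str) -> int:
    """Deterministically map a user to 0-99 so a rollout is stable per user."""
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100

def is_enabled(flag: str, user_id: str, user_groups: set[str] = frozenset()) -> bool:
    spec = FLAGS.get(flag)
    if spec is None or not spec["enabled"]:      # unknown flag, or killswitch pulled
        return False
    if spec["groups"] is not None and not (spec["groups"] & user_groups):
        return False                             # user is not in the target group
    return bucket(user_id, flag) < spec["rollout"]
```

Flipping `enabled` to `False` is the killswitch: one config change, no redeployment. With hundreds of flags, the hard part becomes exactly the bookkeeping the slide mentions – who sees what, and which flags can be retired.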
21. The missing link - telemetry
▪ We might build the best stuff in the world, but if nobody uses this stuff it is just
wasted money
▪ Telemetry is critical for this: it builds metrics we can use to succeed
▪ Reactive telemetry is usually the starting point for most
▪ It tracks what happened in the past
▪ It would be great if we could get an idea of what will happen in the future…
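Reactive telemetry can be as simple as recording events and counting them afterwards. A minimal sketch, with made-up event names and properties:

```python
from collections import Counter
from datetime import datetime, timezone

events: list[dict] = []

def track(name: str, **properties) -> None:
    """Record that something happened, with a timestamp and free-form properties."""
    events.append({"name": name,
                   "time": datetime.now(timezone.utc).isoformat(),
                   **properties})

track("page_view", page="/home")
track("page_view", page="/pricing")
track("checkout_failed", reason="card_declined")

# Aggregating the past tells us which features are actually used -
# the metric that decides whether the stuff we built was wasted money.
usage = Counter(e["name"] for e in events)
print(usage.most_common())
```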
22. Telemetry is not just for reacting to problems
▪ A proactive telemetry tool has Machine Learning behind the scenes, continuously
analyzing your logs and finding patterns in how your application behaves
▪ Application Insights Analytics is a great tool for this, and you can use it with any
other reactive telemetry solution
▪ E.g.: Logstash has a plugin for exporting directly to Analytics
▪ Using these tools you can prevent potential security issues (DDoS?) or
understand resource usage patterns and requirements
▪ Try it yourself: https://analytics.applicationinsights.io/demo
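As a toy stand-in for what ML-backed analytics tools do at scale, even a simple baseline-and-deviation check over request counts can flag a suspicious spike of the DDoS kind. The data and thresholds below are illustrative only:

```python
from statistics import mean, stdev

def anomalies(counts: list[int], window: int = 5, threshold: float = 3.0) -> list[int]:
    """Flag indices whose value deviates sharply from the recent baseline."""
    flagged = []
    for i in range(window, len(counts)):
        baseline = counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (counts[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

requests_per_minute = [100, 104, 98, 101, 99, 102, 100, 950, 103]
print(anomalies(requests_per_minute))  # → [7], the sudden spike
```

Production tools replace the moving average with learned models of normal behaviour, but the proactive idea is the same: find the pattern before a human goes looking for it.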