This document summarizes a DevOps engineer's journey introducing infrastructure automation at a company called Tatts. It describes establishing practices like continuous integration (CI) and continuous delivery (CD) using tools like TeamCity, Octopus, and Chef. It outlines challenges like a lack of configuration management and negative attitudes. Solutions involved establishing a "TattsCloud" private cloud with VMware tools, implementing patterns like layering, and maximizing reuse through CI/CD pipelines. The end result was transforming workflows to treat infrastructure as code.
Understand immutable infrastructure, what? Why? How? - Meta-Meetup DEVOPS NIGHT - Quentin Adam
Quentin Adam from Clever Cloud discusses immutable infrastructure and automation. He argues that infrastructure should be treated as ephemeral instances rather than precious servers. Stateless applications deployed across immutable instances in an automated fashion provide scalability, high availability, and security. Key aspects include splitting state from process, service discovery, configuration as code, distributed systems, and monitoring.
We will go over the motivations for wix.com R&D to move to a CI/CD/TDD model, how the model was implemented and the impact on Wix R&D. We will cover the tools used (developed in-house and 3rd party), change in methodologies, what we have learned during the transformation and the unexpected change in working with product and the rest of the company.
Presented in the Continuous Delivery track at DevOps Con Israel 2013
This document discusses web policies and reporting, specifically feature policy and the reporting API. Feature policy allows defining which browser features are allowed on a site, like geolocation or oversized images. The reporting API enables reporting on certain events, like content security policy violations. The talk covered current browser support for these APIs and WordPress plugins that implement feature policy and reporting API for sites built with WordPress.
Using the Atlassian Plugin Platform to Create Your Own SaaS Plugin Platform - Atlassian
The document discusses using the Atlassian Plugin Platform to build a plugin system for Qato, DZone's enterprise question and answer platform. Key points include leveraging the Atlassian framework to avoid reinventing the wheel, supporting multi-tenancy by tying plugins to the proper tenant, and allowing Spring annotations in plugins. The Atlassian Plugin Framework allowed DZone to quickly add extensibility to Qato through a plugin system.
Azure Web Sites - Things they don't teach kids in school - Multi-Mania - Maarten Balliauw
Microsoft has a cloud platform which runs .NET, NodeJS and PHP. All 101 talks out there will show you the same: it’s easy to deploy, it scales out on demand and it runs WordPress. Great! But what about doing real things? In this session, we’ll explore the things they don’t teach kids in school. How about trying to find out the architecture of this platform? What about the different deployment options for Windows Azure Web Sites, the development flow and some awesome things you can do with the command line tools? Did you know you can modify the automated build process? Join me in this exploration of some lesser known techniques of the platform.
What Multisite can do for You - Anthony Cole - WordCamp Sydney 2012 - WordCamp Sydney
- WordPress Multisite allows managing multiple sites from one WordPress install, providing centralized administration and upgrades. It can be an efficient and cost-effective solution for agencies managing multiple client sites.
- The presenter initially set up Multisite for his agency to more easily upgrade and manage ten client sites. He discusses hosting considerations and recommends a VPS for Australian sites.
- Key aspects of his Multisite implementation include using Git for version control, Pingdom for monitoring, and custom scripts for backups to S3. He advocates keeping implementations simple to ensure stability.
This document discusses WP-CLI, a command line interface for WordPress. It provides commands for common WordPress tasks like installing WordPress, creating configuration files, managing posts and users. Using WP-CLI allows automating repetitive WordPress tasks and managing WordPress sites from the command line. The presenter provides an example of using WP-CLI commands in a script and encourages the audience to try it out themselves.
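The kind of scripted automation the summary describes can be sketched with a few WP-CLI commands. This is a minimal illustration, not the presenter's actual script; the URL, database credentials, and user names are placeholders:

```shell
#!/bin/sh
# Hypothetical site-bootstrap script using WP-CLI (all values are placeholders).
set -e

# Download WordPress core into the current directory.
wp core download

# Generate wp-config.php with example database credentials.
wp config create --dbname=example_db --dbuser=example_user --dbpass=secret

# Install the site and create the first admin user.
wp core install --url=https://example.test --title="Example Site" \
  --admin_user=admin --admin_email=admin@example.test

# Routine tasks from the command line: create a post and a user.
wp post create --post_title="Hello from WP-CLI" --post_status=publish
wp user create editor editor@example.test --role=editor
```

Each of these subcommands maps to a task otherwise done through the admin UI, which is what makes WP-CLI useful in cron jobs and deployment scripts.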
Recipes for Continuous Delivery (ThoughtWorks Geeknight) - Gurpreet Luthra
In this presentation, I cover techniques and best practices for CD. The idea is to explain the rationale behind CI, Branching, Feature Branches, Trunk Based Development, Feature Toggles, and related techniques that aid in faster delivery.
Special Thanks to Luminaries like Martin Fowler, Paul Hammant, Jez Humble, Pete Hodgson and many ThoughtWorkers for their material. I have mentioned links to them on respective slides.
I presented this at ThoughtWorks Pune Geek Night on 8/Feb/2018.
Leveraging the Power of Custom Elements in Gutenberg - Felix Arntz
This document discusses the benefits of using web components for building reusable components in a standardized way. It outlines how web components allow encapsulation of styles and markup through features like shadow DOM and custom elements. Web components help improve maintainability and reusability of components. Frameworks are increasingly using web components as the basis for their "leaf components". The document promotes web components as a solid foundation and provides resources for getting started with web components.
AtlasCamp 2010: The Atlassian Plugin SDK For Fun & Profit - Ben Speakmon - Atlassian
The document discusses the challenges of plugin development and how the Atlassian Plugin SDK addresses them. It outlines problems like different startup processes, configuration locations, and installation methods across Atlassian products. The SDK standardizes these areas with tools like atlas-run, automatic configuration, and installation from the command line. It also improves the development cycle through features like hot reloading of code changes and testing against multiple versions. The presentation encourages involvement in the open source SDK project.
Atlaskickin' the Plugin SDK, AtlasCamp US 2012 - Atlassian
Jonathan Doklovic, Developer Relations Engineer
The Atlassian SDK is what makes Atlassian plugin development possible. Jonathan Doklovic will run through the recent dev speed focused improvements we've made to the SDK and give you some productivity protips that will make developing plugins even more joyful.
Oscon 2013 - Your OSS Project Is Now Served - Uri Cohen
The document discusses a solution called Cloudify that allows developers to easily share and deploy open source software projects. Cloudify provides an embeddable web player that allows users to launch and test software recipes directly from a browser. It works by packaging application code and configuration into reusable recipes. When users run a recipe in the player, it automatically provisions a full project environment in the cloud.
Sai devops - the art of being a specializing generalist - Odd-e
Devops aims to bring developers and operations teams together to collaborate more closely. As systems become more complex, the traditional separation of duties has caused issues with deployment, configuration, and monitoring. By integrating development and operations work, organizations can deploy code changes more rapidly and reliably while improving system performance, security, and availability. Effective devops processes include continuous integration, automated testing of infrastructure changes, configuration management, and monitoring systems in production.
Tailwind CSS is a utility-first CSS framework for building custom designs rapidly. It allows developers to have full control over components without relying on predefined styles. Key benefits of Tailwind CSS include customization through configuration files, not needing to name CSS classes, minimal context switching between HTML and CSS, and fast development speeds. It also supports responsive design, and PurgeCSS can be used to significantly reduce file sizes after development.
NetBeans is faster at starting up and more stable than Eclipse. It uses Swing for its GUI rather than SWT, and has a custom plugin development format based on OSGi. The document discusses why the author chose NetBeans over Eclipse and IntelliJ for Java Swing development - its stability, customizability through plugins which he has developed, and open source nature. It also mentions his research into toolchain and assembler development using the NetBeans platform.
Automate your WordPress Workflow with Grunt.js - Josh Lee
This document discusses using Grunt.js to automate WordPress development workflows. Grunt allows automating repetitive tasks like compiling CSS and JavaScript, running linting tools, concatenating and minifying files, generating sprites, deploying code, and live reloading browsers. It uses plugins maintained in GitHub repositories to perform these tasks. The document provides steps to set up a Grunt-based development environment, including installing Node.js, creating a package.json, installing Grunt and plugins, and configuring a Gruntfile. It also discusses options for using Grunt within WordPress themes and plugins or for an entire WordPress site.
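The setup steps listed above can be sketched as a short command sequence. The plugin names below are the commonly used ones for these tasks, but treat the exact selection as an assumption rather than what the talk prescribed:

```shell
# Hypothetical setup of a Grunt-based WordPress workflow.
# 1. Initialise a package.json in the project (or theme) root.
npm init -y

# 2. Install the Grunt CLI globally and the task runner locally.
npm install -g grunt-cli
npm install --save-dev grunt

# 3. Install plugins for the tasks mentioned above: CSS compilation,
#    JS linting, concatenation, minification, and watching/reloading.
npm install --save-dev grunt-sass grunt-contrib-jshint \
  grunt-contrib-concat grunt-contrib-uglify grunt-contrib-watch

# 4. Configure those tasks in a Gruntfile.js, then leave this running
#    during development so changes are rebuilt automatically.
grunt watch
```

From here the same Gruntfile can drive either a single theme/plugin or, with adjusted paths, an entire WordPress site, which is the choice the document weighs.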
This document discusses strategies for modernizing front-end codebases in an incremental way. It suggests starting with basic modularization by splitting code into logical chunks, then concatenating and minifying modules. Next steps include loading modules on demand using various module systems. Graceful deprecation is recommended over breaking changes. The document also advocates trying new frameworks on side projects first before adopting one. Maintaining good development practices like testing, linting, code style rules and performance testing is emphasized over choosing any particular framework.
J&J's adventures with agency best practice & the hybrid MVC framework - Umbrac... - Jeavon Leopold
The document discusses Crumpled Dog's Hybrid MVC Framework for Umbraco projects. The framework provides a standardized starting point for each project, reducing setup time and risks. It includes preconfigured packages, document types, and Razor snippets. Using the framework saves developer time on new projects and enables front-end developers to focus on design rather than configuration. The framework is available on GitHub for others to use as a starting point for their own Umbraco projects.
The document discusses the experience of the Norwegian Food Safety Authority with continuous integration using Hudson. It describes how Hudson was used to manage branches and releases for a large Java project with multiple teams. Hudson provided build servers, testing environments, and tools to monitor code quality and metrics. It helped enable continuous deployment by automating testing and deployment across environments.
Here you will get detailed information about the points below:
- What VPS hosting is, and why virtualization technology matters
- How it differs from shared hosting
- What VPS technologies are available
- When to go for VPS hosting
- How to set up multiple VPS servers for your website
- Advantages and disadvantages of a VPS server
- How to buy a VPS
- Plans and options on a VPS server
- How to manage your VPS server
DevOps, Cloud, and the Death of Backup Tape Changers - ke4qqq
- DevOps aims to break down barriers between development and operations teams through automation, measurement, and culture change. This enables faster delivery of applications and services.
- Traditional IT operations has focused too much on control and constraint rather than enabling teams. As a result, developers often work around or avoid IT.
- If IT does not adapt by becoming more agile and self-service oriented like cloud computing, it risks becoming irrelevant like backup tape changers - an outdated technology that people work to avoid. IT must partner with teams rather than control them to remain relevant in the future.
BOSH is an open source tool that allows developers to easily package, release, deploy, and manage distributed systems and applications at scale across multiple cloud environments. It provides capabilities for deployment, configuration management, updates/upgrades with minimal downtime, remediation, and scaling. BOSH abstracts away infrastructure details through "stem cells" and treats applications as logical concepts rather than physical servers through a "release" process, providing consistency, reproducibility and agility in deployments.
Gutenberg | How a WordPress studio adapted - David Darke
A WordPress studio initially used a custom CMS but later transitioned projects to WordPress due to its simple content delivery and organization features. The studio supplemented WordPress with plugins like ACF that allowed for more flexible fields and relationships. When Gutenberg was announced as a new editor, the studio was worried about the impact but prepared by testing early versions and installing the classic editor plugin. As the editor evolved, the studio adapted its training and development practices to understand Gutenberg's capabilities and data changes.
Waterfall, Agile, Extreme Programming, Water-gile. In this session we will discuss agile strategies that can help you get to done: efficiently, quickly, and more happily. I will cover Scrum Framework concepts and some of the lessons learned from using an agile strategy to manage a multinational distributed team that does Drupal every day.
This session is for Managers and team members that want to learn more about agile strategies and how to apply them to Drupal.
Topics Covered
Where we all start, Waterfall.
Why agile is wrong, Agility is right.
Scrum Framework basics
What actions are Agile
What actions are not Agile
Lessons learned working with agile
Challenges of Scrum for small teams
Agility you can implement now
Configuring and maintaining a continuous integration environment is quite a bit of work. It requires ongoing resources both in terms of manpower and hardware infrastructure. As an application evolves, so does the number of ongoing projects. The challenge is creating a scalable continuous integration environment which does not impede development and can handle the complexities of Java EE testing. This session covers how to set up and configure a cloud-based continuous integration environment for Java EE applications.
The presentation will focus on demonstrating how to use Atlassian Bamboo running on AWS to build and test a Maven/Gradle Java EE project that uses Arquillian for testing. Topics that will be covered include creating a custom AWS VM for use with Bamboo, creating an Amazon VPC (Virtual Private Cloud) along with test database using Amazon RDS. The presentation will delve into the specifics of testing EJBs, WebSocket endpoints, RESTful web services, as well as performing load testing in this environment. Security, cost control, and build monitoring will be covered as well.
Using Apache Camel for microservices and integration, then deploying and managing them on Docker and Kubernetes. When we need to make changes to our app, we can use Fabric8 continuous delivery built on top of Kubernetes and OpenShift.
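The deploy-and-manage step can be sketched with the standard Docker and Kubernetes CLIs. The image name and manifest file below are purely illustrative assumptions, not part of the talk:

```shell
# Build a container image for a (hypothetical) Camel-based service.
docker build -t example/orders-service:1.0 .

# Deploy it to Kubernetes from a manifest and wait for the rollout.
kubectl apply -f deployment.yaml
kubectl rollout status deployment/orders-service

# Roll out a change by building a new tag and updating the deployment.
docker build -t example/orders-service:1.1 .
kubectl set image deployment/orders-service \
  orders-service=example/orders-service:1.1
```

A CD pipeline such as Fabric8's automates exactly this build-push-update loop on every commit.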
Scale Machine Learning from zero to millions of users (April 2020) - Julien SIMON
This document discusses scaling machine learning models from initial development to production deployment for millions of users. It outlines several options for scaling models from a single instance to large distributed systems, including using Amazon EC2 instances with automation, Docker clusters on ECS/EKS, or the fully managed SageMaker service. SageMaker is recommended for ease of scaling training and inference with minimal infrastructure management required.
In an increasingly competitive marketplace, speed and business agility are paramount. And integration between customer-facing systems and back-end applications is more crucial than ever.
At this event, you'll learn how open source software built by communities, like Apache Camel, Docker, Kubernetes, OpenShift Origin, and Fabric8, can help organizations integrate services and establish effective continuous integration and delivery (CI/CD) pipelines.
Journey to Docker Production: Evolving Your Infrastructure and Processes - Br... - Docker, Inc.
DevOps in the Real World is far from perfect, and we're all somewhere on the path to one day writing that "Amazing-Hacker-News-Post about your chat-bot fully-automated micro-service infrastructure." But until then, how can you *really* start using containers today, in meaningful ways that impact your productivity and your customers'? This session is designed for practitioners who are looking for ways to get started now with Docker and Swarm in production. No Docker 101 here; this is about helping you be successful on your way to Dockerizing your production systems. Attendees will get tactics, example configs, real working infrastructure designs, and see the (sometimes messy) internals of Docker in production today.
20111110 How Puppet Fits Into Your Existing Infrastructure and Change Managem... - garrett honeycutt
Puppet can help with change management by using its environments and version control features. Environments represent different stages like development, testing, and production. Changes are made on branches in version control and merged to trunk/master after testing. Tags mark versions to deploy to each environment. Documentation and gates between environments ensure changes meet requirements before moving forward.
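The branch, merge, and tag workflow described above can be sketched with git. The branch name, tag, and environment names (development/testing/production) are illustrative, not taken from the talk:

```shell
# Make the change on a topic branch, not directly on master.
git checkout -b fix_ntp_config
# ...edit the Puppet manifests...
git commit -am "Correct ntp server list"

# After testing, merge back to master.
git checkout master
git merge --no-ff fix_ntp_config

# Tag the version that is cleared to move forward.
git tag -a 1.4.2 -m "Release 1.4.2"

# Each Puppet environment tracks a tag; promoting a change means moving
# that environment's checkout to the newer tag once it passes its gate.
git checkout 1.4.2   # e.g. in the 'testing' environment's working copy
```

The gates between environments are then just a policy about when an environment's checkout is allowed to advance to a newer tag.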
This document discusses the CMake build system and its components CTest, CDash, and CPack. It provides an overview of CMake, comparing it to other build systems like Autotools and explaining its advantages such as being fast, cross-platform, and having wide industry adoption. Key features of CMake discussed include its use of CMakeLists.txt files, built-in rules for common targets, custom targets, macros, and finding and using external libraries.
Automation: The Good, The Bad and The Ugly with DevOpsGuys - AppD Summit EuropeAppDynamics
A cornerstone of the DevOps philosophy, investment in automation at all stages across the SDLC has increased over recent years. Automation promises velocity and reduced errors, helps foster repeatable processes, and removes the need for long hours on dull, repetitive tasks. So what’s not to like? The downside of automation is that unless applied at the right place in your SDLC it can make a bad process worse. Automation also raises questions around job security, the need for re-skilling in other areas, and tool sprawl if different teams each choose their preferred technology. This session will outline:
-A short chronology of where automation has impacted the modern software stack
-Where it makes the most sense to automate (by identifying your key constraints)
-Best practices for adopting automation and how to identify where it’s working — and where it isn’t
For more information, visit: www.appdynamics.com
DevOpsGuys - DevOps Automation - The Good, The Bad and The UglyDevOpsGroup
DevOpsGuys - DevOps Automation - The Good, The Bad and The Ugly gives an overview of the strengths and weaknesses of DevOps automation, tips on developing your automation strategy, and a high level overview of automation options across the DevOps toolchain.
Release Management with Visual Studio Team Services and Office Dev PnPPetter Skodvin-Hvammen
Learn about the capabilities of Visual Studio Team Services:
– how you can setup continuous builds whenever a change is committed to the source repository
– how to setup scheduled builds and deploys
– how to target deployments for your dev, test, uat and prod environments
– how to manage release security and use approval workflows
Also learn how you can use Office Dev PnP PowerShell to support rapid and automated deployments, and about other alternatives out there.
Application Delivery Patterns for Developers - Technical 401Amazon Web Services
Every developer has gone through the frustration of creating new features, fixing bugs, or refactoring beautiful code, and then waiting for it to reach the promised land of production. Come and learn how to get your changes into the hands of your customers with more speed, reliability, security and quality.
We will dive deep into architectures for continuous delivery pipelines, apply lean principles, and build intelligence into your pipeline.
Speaker: Shiva Narayanaswamy, Solutions Architect, Amazon Web Services
Featured Customer - REA Group
CMake, CTest, and CPack are open source build, test, and install tools. CMake generates native makefiles and workspaces that can be used to build a project across platforms. CTest can run tests and integrate with the CDash dashboard for continuous integration. CPack creates professional installers for software distribution.
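A minimal CMakeLists.txt exercising all three tools might look like this; the project and source file names are illustrative:

```cmake
# Illustrative sketch: one target, one test, one package
cmake_minimum_required(VERSION 3.10)
project(hello VERSION 1.0 LANGUAGES CXX)

add_executable(hello main.cpp)

# CTest: register a smoke test, then run it with `ctest`
enable_testing()
add_test(NAME smoke COMMAND hello)

# CPack: declare what to install, then build an installer with `cpack`
install(TARGETS hello DESTINATION bin)
set(CPACK_PACKAGE_VERSION ${PROJECT_VERSION})
include(CPack)
```

The same file drives all three: `cmake` generates the native build files, `ctest` runs the registered tests (and can submit results to CDash), and `cpack` produces the installer.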
Integrating Security into DevOps and CI / CD Environments - Pop-up Loft TLV 2017Amazon Web Services
AWS serverless architecture components such as Amazon S3, Amazon SQS, Amazon SNS, CloudWatch Logs, DynamoDB, Amazon Kinesis, and Lambda can be tightly constrained in their operation. However, it may still be possible to use some of them to propagate payloads that could be used to exploit vulnerabilities in some consuming endpoints or user-generated code. This session explores techniques for enhancing the security of these services, from assessing and tightening permissions in IAM to integrating further tools and mechanisms for inline and out-of-band payload analysis that are more typically applied to traditional server-based architectures, and generalising these techniques to APIs for all AWS services.
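"Tightening permissions in IAM" in practice means scoping each consumer's role to exactly the actions and resources it touches. Below is a hedged sketch of a least-privilege policy for a Lambda function consuming a single SQS queue; the region, account ID, and queue name are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "sqs:ReceiveMessage",
        "sqs:DeleteMessage",
        "sqs:GetQueueAttributes"
      ],
      "Resource": "arn:aws:sqs:us-east-1:123456789012:orders-queue"
    }
  ]
}
```

Listing a single queue ARN rather than `"Resource": "*"` is the kind of constraint that limits how far a malicious payload can propagate even if a consuming endpoint is exploited.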
This document discusses integration in the age of DevOps. It describes how microservices help solve the problem of decoupling services and teams to move quickly at scale. Apache Camel is presented as a solution for integration that allows for reliable and distributed integration through mechanisms like messaging. Kubernetes and Docker are discussed as platforms that help develop and run microservices locally and at scale by providing automation, configuration, isolation and service discovery capabilities.
Revolutionize DevOps lifecycle with Amazon CodeCatalyst and DevOps Guru at De...Vadym Kazulkin
This document summarizes Amazon CodeCatalyst and DevOps Guru, which help revolutionize the DevOps lifecycle. Amazon CodeCatalyst allows developers to create serverless projects that include code, development environments, CI/CD pipelines, and issue/report tracking. DevOps Guru uses machine learning to detect operational issues in services like DynamoDB, API Gateway, and Lambda by analyzing metrics to find anomalies and reduce human intervention. It provides both reactive insights for existing issues and proactive insights to predict future problems.
This presentation is a part of meetup session delivered in the Microsoft User Group - Chandigarh.
In this meetup we looked into how to deploy and manage Virtual Machines in Microsoft Azure cloud.
This was an advanced session targeted more towards an IT Pro audience, though developers were also welcome.
We covered creating virtual machines via ARM templates and Virtual Machine Scale Sets, with a live demo of Autoscale.
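The Autoscale part of such a demo is usually a Microsoft.Insights autoscale setting attached to the scale set. Below is a trimmed, illustrative fragment of one profile, not a complete or deployable template (resource URIs and several required properties are omitted):

```json
{
  "profiles": [
    {
      "name": "cpu-based",
      "capacity": { "minimum": "2", "maximum": "10", "default": "2" },
      "rules": [
        {
          "metricTrigger": {
            "metricName": "Percentage CPU",
            "statistic": "Average",
            "timeGrain": "PT1M",
            "timeWindow": "PT5M",
            "timeAggregation": "Average",
            "operator": "GreaterThan",
            "threshold": 75
          },
          "scaleAction": {
            "direction": "Increase",
            "type": "ChangeCount",
            "value": "1",
            "cooldown": "PT5M"
          }
        }
      ]
    }
  ]
}
```

Read as: when average CPU across the scale set exceeds 75% over a 5-minute window, add one instance, then wait 5 minutes before scaling again.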
Docker is the developer-friendly container technology that enables creation of your application stack: OS, JVM, app server, app, database and all your custom configuration. So you are a Java developer but how comfortable are you and your team taking Docker from development to production? Are you hearing developers say, “But it works on my machine!” when code breaks in production? And if you are, how many hours are then spent standing up an accurate test environment to research and fix the bug that caused the problem?
This workshop/session explains how to package, deploy, and scale Java applications using Docker.
AWS Summit 2013 | India - Running High Churn Development & Test Environments,...Amazon Web Services
The flexible and pay-as-you-go nature of AWS means that developers can spin up compute resources quickly and shut them down when not required. Learn about rapid deployment of applications to AWS as part of your development and testing cycle. Development and testing are resource-hungry functions that require numerous environments, and the AWS Cloud allows you to create these environments quickly. Hear real-world examples of existing customers that have benefited from using AWS for their development and testing.
Oscon London 2016 - Docker from Development to ProductionPatrick Chanezon
Docker revolutionized how developers and operations teams build, ship, and run applications, enabling them to leverage the latest advancements in software development: the microservice architecture style, the immutable infrastructure deployment style, and the DevOps cultural model.
Existing software layers are not a great fit to leverage these trends. Infrastructure as a service is too low level; platform as a service is too high level; but containers as a service (CaaS) is just right. Container images are just the right level of abstraction for DevOps, allowing developers to specify all their dependencies at build time, building and testing an artifact that, when ready to ship, is the exact thing that will run in production. CaaS gives ops teams the tools to control how to run these workloads securely and efficiently, providing portability between different cloud providers and on-premises deployments.
Patrick Chanezon offers a detailed overview of the latest evolutions to the Docker ecosystem enabling CaaS: standards (OCI, CNCF), infrastructure (runC, containerd, Notary), platform (Docker, Swarm), and services (Docker Cloud, Docker Datacenter). Patrick ends with a demo showing how to do in-container development of a Spring Boot application on a Mac running a preconfigured IDE in a container, provision a highly available Swarm cluster using Docker Datacenter on a cloud provider, and leverage the latest Docker tools to build, ship, and run a polyglot application architected as a set of microservices—including how to set up load balancing.
Similar to KnowledgeHut - Switching On DevOps
A talk delivered at the Elabor8 Lunch 'n Learn in March 2019. I talk about how I used entrepreneurial thinking when working in a large corporate environment, as well as how I moved from there to running a startup.
This document discusses remote working and running distributed teams. It outlines different types of remote work including fully remote companies. The advantages listed are flexibility, a global talent pool, reduced interruptions, and environmental benefits. Challenges discussed include communication, isolation, time zones, and ensuring a sense of belonging for remote employees. Solutions proposed are onboarding processes, virtual meetings, communication channels, and emphasizing well-being. Overall the document provides an overview of considerations for effectively managing remote and distributed teams.
People are Weird: Overcoming Resistance to Change and Achieving Continuous De...Shaw Innes
Prior to adopting a DevOps mindset, development and deployment at Tatts were being held up by slow, manual processes. DevOps was introduced to improve continuous delivery and break down silos in the business. In this presentation I will explain how resistance to change and people's unique quirks were some of the biggest challenges in the process, and how they were overcome:
- Changing the way leadership communicated with staff and applying new language to the process
- Creating an environment where it’s encouraged to make and learn from ‘good mistakes’
- How DevOps was successfully scaled to become the way of working across many systems
What we've learnt and what we want to put in practice from recent conferences. A summary by Neil Frawley, Sam Thwaites and Shaw Innes about a trip to the 2016 DevOps Enterprise Summit in San Francisco.
Presented at the January 2017 Brisbane DevOps Meetup Group
Salt & Pepper Calamari: Cooking up DevOps with Chef and Octopus DeployShaw Innes
This document appears to be an agenda for an event discussing DevOps practices. It includes an introduction, an overview of challenges, a demonstration, bonus content, thanks to sponsors, and a question period. The agenda touches on DevOps tools and processes while providing both information and interactive elements.
The document discusses treating infrastructure as code by building pipelines that are testable, repeatable, destroyable, and reviewable. This involves using a toolchain to manage infrastructure in a way that addresses the problems of complexity, fragility, and misalignment that currently exist. The goal is to provide a solution and vision for more effectively developing and managing infrastructure.
3. ABOUT ME
My journey at Tatts
Developer
DevOps Engineer
DevOps Strategist
Engineering Manager
https://linkedin.com/in/shawinnes
https://twitter.com/shawinnes
Our Achievements
Introduced CI (TeamCity)
Introduced CD (Octopus)
Introduced IAC (Chef, Automation)
5. MY INTERPRETATION OF DEVOPS
• Culture, Automation, Lean, Metrics, Sharing
• Tools help, but they’re not that important, it’s the mindset
• It’s not a team, job title, or a role, it’s the practices*
• It’s ALL about the culture
• Don’t hire brilliant jerks & heroes, hire collaborators
• Watch this YouTube video
* sometimes you compromise on things to achieve bigger goals in the long run…
6. BACKGROUND
• SCM Team
• Environment Management Team (EMT)
• CI Automation
• Build Automation
• Git trunk-based development
• Deployment Automation
• Big Brick Wall
7. THE PROBLEM
• Theory of Constraints
• We knew what to tackle next
• Lack of knowledge & skills
8. PREVIOUS / EXISTING ATTEMPTS
• Ops manually building and maintaining
• Microsoft SCCM
• Simple scripting
9. HEAD WINDS
• No record of configuration
• Maybe word documents
• Regulated environment
• An insane number of new tools
• No public cloud available for use
• Negative mindsets
• Middle managers
10. TAIL WINDS
• World-class on-premises data centres
• Executive support
• Positive mindset (a small group of dedicated followers)
• Minimal budgetary constraints
12. PROOF OF CONCEPT - PACKER
• To solve these problems
• VMWare template sprawl
• Building OS Images manually / using MS tool
• No version control
• Demo on laptop – Packer, Vagrant, VMWare, Windows VM, Chef
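A laptop demo like that is typically driven by a single Packer template wiring the VMware builder, a Chef provisioner, and a Vagrant post-processor together. The sketch below is illustrative only; the ISO, checksum, credentials, and run list are placeholders, not the actual Tatts template:

```json
{
  "builders": [
    {
      "type": "vmware-iso",
      "iso_url": "./iso/windows-server-2012r2.iso",
      "iso_checksum": "sha256:PLACEHOLDER",
      "communicator": "winrm",
      "winrm_username": "Administrator",
      "winrm_password": "PLACEHOLDER",
      "shutdown_command": "shutdown /s /t 10 /f"
    }
  ],
  "provisioners": [
    {
      "type": "chef-solo",
      "cookbook_paths": ["cookbooks"],
      "run_list": ["recipe[base]"]
    }
  ],
  "post-processors": [
    { "type": "vagrant", "output": "windows-base.box" }
  ]
}
```

Because the template lives in a text file, it solves both problems on the slide at once: images are rebuilt from source instead of sprawling, and every change is version controlled.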
13. TATTSCLOUD
• Used existing VM infrastructure
• VMWare V-Realize Automation
• VMWare V-Realize Orchestrator
• Adopted cloud-like patterns to maximise re-use
• Used an integration approach to minimise product lock-in
• Adopted CI/CD processes to manage all configuration
• Test-driven infrastructure
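"Test-driven infrastructure" in a Chef shop usually means something like Test Kitchen plus InSpec: assert the desired state of a provisioned VM, then converge until the tests pass. A hedged sketch follows; the specific checks are examples, not the actual Tatts suite:

```ruby
# Illustrative InSpec control -- example checks, not the real Tatts tests
control 'base-image' do
  title 'Base image is manageable after provisioning'

  # The Octopus Tentacle agent should be installed and running
  describe service('OctopusDeploy Tentacle') do
    it { should be_installed }
    it { should be_running }
  end

  # 10933 is the default Tentacle listen port
  describe port(10933) do
    it { should be_listening }
  end
end
```

Running checks like these in the CI/CD pipeline is what lets image changes be reviewed and verified like any other code change.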
15. EXAMPLE OF PATTERNS - LAYERING (DIAGRAM)
• Hypervisor-specific Configuration *: box / vmdk / ami
• Base Operating System: Windows / Linux (patched), VM tools
• Tier 1 Components: AV, SCCM, SOE, Octopus
• Customisation Dependencies: .NET runtimes, Java, folder structures, services
* VMWare (Fusion, Desktop, ESXi), VirtualBox, AWS
16. AND NOW... ?
• Started September 2015
• Current State (Post IAC Pod)
• Transformed way of working
• TheLott API story
17. KEY MESSAGES
• Start Small
• Develop Patterns
• Cloud patterns even if you’re not using cloud
• Container patterns even if you’re not using containers
• Treat it like a product
• Maximise Re-use
• Don’t underestimate complexity
• Don’t underestimate organisational change management
• Remember to take a break every so often to reflect and marvel at what’s been achieved
Editor's Notes
Today I’m going to share some lessons about how we switched on infrastructure automation at Tatts Group…
Hi I’m Shaw and I worked at Tatts Group for around seven years. During that time I held a number of roles. I started as a developer in the Lotteries terminal team where we built the software used in lotteries agencies in every state except for Western Australia. It was during this time that we introduced a bit of a skunkworks automated CI build process.
After a couple of years I moved into the UBET web team as a DevOps Engineer so I could help get some repeatable processes for their builds and deployment. UBET web was to be one of the first continuously delivered products at Tatts.
I then moved into the enterprise agility team where I became a DevOps Strategist – a title we made up to ensure we were looking at the bigger picture, not just the day-to-day. During this time our team worked to transform the entire organization’s build and deployment processes to use fully automated build, test and deployment (where appropriate).
It was during this time that we introduced the concept of infrastructure as code (IAC). Over the next two years we transformed the way the company built and managed infrastructure, and that’s what I’m going to talk about today.
But first, a bit about Tatts. There are a lot of people who haven’t heard of Tatts, but you’ll know their Lotteries brands such as powerball, oz lotto and instant scratch-its. They also run a wagering brand, UBET which has retail and online products for gambling on sports and horse racing, as well as a radio station. There is a gaming services division which monitors poker machines in multiple states and also has equipment support contracts for retail outlets and national telecommunications providers. Finally there is George Squared, a small charitable division providing technology solutions to help charities raise funds.
Tatts Group is a 175+ year-old company which has gone through a long history of growth, mergers and acquisitions. As a result of this growth the company had a variety of systems and processes, and the challenges that come with this.
During my time at Tatts I had the privilege of working in, or consulting with, all four of these business units, and in the wider business. This gave me a broad understanding of the cross-cutting challenges facing our 400 person strong IT department and allowed us to work on solutions which could be applied across the board.
Before I get into what we did at Tatts, I'd like to share my thoughts on what DevOps is, and isn't. Despite the fact I had two job titles with DevOps in them, I think that's an anti-pattern; in reality everyone should be "thinking" DevOps: Culture, Automation, Lean, Metrics, Sharing, or CALMS. I found that as we were rolling out our agility and DevOps transformation, we spent much more time on culture and people than on tools and technologies.
There’s a great talk by Adam Jacob (pictured) from Chef where he introduces the concept of DevOps Kung Fu where it’s more of a way of thinking and mindset than a specific tool. Tools help, but they’re not that important, it’s so much more about the mindset, and this is why if you get the right attitude and culture – your people will be able to achieve anything!
Prior to me moving into the UBET web team to set up some continuous delivery processes, most of our software builds and source code management work was undertaken by a separate team (who incidentally sat in our Ops space). This team eventually evolved into the Environment Management Team (EMT) with a wider remit to also maintain our various large test environments. These poor guys were so under the pump to be doing software builds and releases, branching, environment management and who knows what else.
The UBET team had already determined that EMT was a bottleneck and so they were trying to remove their reliance upon EMT. The problem was that they hadn’t automated anything and so they were basically losing a developer's time for a couple of days every 2 weeks to manually build the software and deploy it to a test environment.
So when I joined that team the first thing we did was to automate the build process by introducing TeamCity. I guess the tools specifically don't matter, this is just what we used and again, it's more about the mindset than the tools. This would at least give more timely feedback of merge or compilation issues. To make this easier we decided to adopt Git version control at this time.
The ultimate goal would be to deploy the UBET website to production via fully automated pipelines, but the first step was to just get it deployable to a test environment without human intervention. For this we chose to use Octopus Deploy.
So now we had pretty much got to the point where any test branch or master branch would be automatically deployed to a virtual host in the test environment. There were probably 25 developers working on this project at the time, which meant that at least once a day, as they committed changes, a 250 MB build would be deployed to their test environment.
And then things blew up… in the funniest way possible. We filled up all the test environments with all these test builds. Oops.
As we had been working through the various changes up to this point we had been applying the theory of constraints to each problem as we saw it. We’d improved the flow behind the infrastructure by automating the build, deployment and improving version control practices.
Infrastructure provisioning was our next constraint. Up until this point we had been using a single test environment, with a couple of long-lived servers. Now we wanted to be able to scale that up, and ultimately use any automation to eventually provision our production environments.
What we had to tackle next was infrastructure automation, and it was going to be a challenge because it meant stepping out of the development teams where we had built up some great positive support. The developers and testers were loving their new way of working, that they could push a change and have it available in an isolated environment for a tester to work with.
The infrastructure teams, including EMT, had very little exposure to automation, version control or DevOps concepts. A few forward-thinking people in those teams did hear the pain-points of the development teams though.
Those forward-thinkers who were focused on trying to solve the problems of their “customers”, the developers, tried a few approaches. The first attempts were to just do things faster, which we all know wasn’t going to work long term – this just resulted in frustrated people.
Another group started to build automation based on the SCCM task runner. This was a noble attempt, but the SCCM task runner was designed for running a couple of tasks on a managed machine; it wasn't meant to automate the provisioning of an 80-VM test environment. If any part of the process failed, the whole process failed and you would end up with a mess to clean up. Unfortunately it was very common for these task sequences to fail part-way through.
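The contrast with a declarative tool is that each resource converges to a desired state, so a failed run can simply be re-run rather than cleaned up. A minimal, illustrative Chef recipe (resource names from Chef's Windows support; not code from the Tatts cookbooks):

```ruby
# Illustrative Chef recipe -- each resource is idempotent, so re-running
# after a mid-run failure is safe: done steps are skipped, not redone.
windows_feature 'Web-Server' do
  action :install          # installs IIS only if it is missing
end

directory 'C:\\app' do
  action :create           # no-op if the directory already exists
end

service 'W3SVC' do
  action [:enable, :start] # ensures the service is enabled and running
end
```

This idempotence is precisely what a linear task sequence lacks: a sequence that dies at step 40 of 80 leaves half-built state, while a converge just picks up where reality diverges from the recipe.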
There were also pockets of ad-hoc automation occurring, but again these were generally flaky and not managed or consistently applied. These were also dependent upon who was doing the work.
There were a few things working against us moving to a new way of delivering infrastructure. Because much of the existing deployed infrastructure had been manually configured, we had minimal record of what had actually been built, and more importantly what had since changed. There were Confluence articles and Word documents for some things, but most were outdated.
Although I hated hearing this excuse, many of the systems managed by Tatts are under some level of regulatory scrutiny. This sometimes meant we weren’t able to take the easiest option, though it usually just meant having conversations with the regulators who were generally excited about progress – they just wanted to understand any impacts changes would have on their ability to maintain integrity of the systems. In reality, the automation of these processes would greatly improve the integrity of everything.
We were about to introduce a huge number of new tools, some of which were familiar to those of us from a development background, but definitely not to the infrastructure and ops teams. I remember one meeting where we were talking to some of the infrastructure teams about version control and checkins and checkouts and thankfully someone stopped me and pointed out that they didn’t know what we were going on about. This was a good reminder that DevOps is about sharing, and having understanding and empathy for your colleagues.
Now a couple of years after that conversation and almost everyone in the organisation is comfortable with version control and we have ops teams managing DNS via git and automated deployment.
Automation, continuous delivery and IAC are all really easy with public cloud; it was practically built for these concepts. But as I mentioned earlier about the regulatory impacts, the use of public cloud was one area in which our hands were tied to some extent. It wasn't that we were unable to use public cloud, but for various reasons we just hadn't got to the stage of seeking approval. Regardless, there were operational reasons why we wouldn't move our entire workload to the cloud, so any investment in automation of legacy VM infrastructure was going to pay off anyway.
There were a few negative people in various teams across the company, mostly due to feeling threatened by automation or changing the way they work. Many people associate their identity with what they do – and so if someone is an infrastructure engineer and you question the way they do their job, suddenly they have an existential crisis. Understanding what makes different people tick goes a long way to minimising these fears, but it’s an imperfect art and an area where we tripped up a couple of times.
On the upside, we also had some great things going for us.
Due to the scale and importance of system integrity and availability, Tatts had invested a huge amount of capital into their data centres and they were world class. Some of the nicest data centres I’ve ever seen and with great enterprise software and support. This meant we had access to work with some of the engineers at places like VMWare and Chef when we were trying to bend their products to our puzzling will.
We had support for these initiatives from the top of the organisation. Mandy Ross, the CIO, was 100% behind these efforts because she understood the benefits. Executive support is one of the things people always ask about at conferences and meetups. We were really lucky to have someone who we didn’t need to convince.
Once we started introducing infrastructure as code and some other things, the supporters started to come out of the woodwork and we built a really good team of advocates and guilds of people who were keen to participate in a community to make this stuff a success. In fact, the infrastructure as code guild was one of the most vibrant (and passionate) communities of practice with regular meetings to discuss any problems and to come up with solutions.
Finally, one of the benefits of being a company with $4.7B market cap is that budget isn’t as tight as at some other places. The main benefit of this wasn’t so much in our ability to purchase things, but that we were given the freedom of being able to stand up a dedicated R&D project team for almost two years to work on piloting these ideas and to work with other teams to distil what would work best for the whole company.
So, we used a few general approaches for our incremental build of these practices.
We ran proofs of concept, we built our own internal self-service cloud offering called ”TattsCloud”, we tried to establish patterns and abstractions so that people could apply them to other similar use cases, and, leading on from this, we attempted to maximise re-use of anything we built.
We had some “DevOps Jam” sessions with some of our new friends in the infrastructure teams about common problems we were all sharing. One of these was around Windows VM template creation and management. The natural tools for managing these images were not version controllable and as such there were manually copied versions of configurations and templates all over the place.
I had been running a couple of side projects and doing research into solutions for scalable infrastructure automation. Although our infrastructure was all on-premise, I looked at tools and patterns in use by public cloud providers like AWS and Azure and identified a few tools which might be worth looking into.
The Tatts tech stack was very Windows-heavy at the time so it was crucial that any solutions would work on Windows, but we also anticipated that we would eventually want to adopt more open source products running on Linux, so we didn’t want to select a Windows-only solution.
Another factor in the selection of tools and processes for infrastructure automation was that we wanted to create a set of practices where the development experience would match production as closely as possible. It had been an ongoing problem where development, test, and production were inconsistent and we would have configuration drift resulting in protracted release processes.
A major keystone in the whole process was Chef. For those who haven’t come across this tool before, it’s a configuration management tool that lets you define your server infrastructure as configuration files. There are other similar tools such as Puppet and Ansible.
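For anyone who hasn’t seen Chef before, here’s a minimal sketch of what a recipe looks like. The resource types (`windows_feature`, `template`, `windows_service`) are standard Chef Infra resources, but the feature, file, and service names are illustrative only – they aren’t taken from our actual cookbooks:

```ruby
# Hypothetical Chef recipe: install IIS, drop a templated config file,
# and keep the web service enabled and running.
windows_feature 'IIS-WebServerRole' do
  action :install
end

template 'C:\\inetpub\\wwwroot\\web.config' do
  source 'web.config.erb'
  variables(environment: node.chef_environment)
  notifies :restart, 'windows_service[W3SVC]'
end

windows_service 'W3SVC' do
  action [:enable, :start]
end
```

Because the recipe describes desired state rather than a sequence of manual steps, running it twice is safe – which is exactly the property that made configuration drift so much less of a problem.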
So I put together a pilot using the tools shown on the slide, using Vagrant and Packer to do development of base images so that we would have a good developer/contributor experience. Because I knew there would be some resistance to changing things here (“that won’t work here”), I built a replica of our VMware vSphere setup, with management servers, VM hosts, etc. Then, on my laptop, I did a full end-to-end demonstration of how we could use a configuration file to build a Windows VM template, have it automatically added to my vSphere setup, and then stand up a new VM based on that template.
There was a lot going on at once, the fan on my laptop was going crazy! But in the end the demo was a success and people were willing to give it a go, and then we started talking about learning git, and packer, and vagrant… and all the other tools we needed to adopt.
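To give a flavour of what that configuration file looked like, here is a cut-down sketch in the style of a Packer (v1.x JSON) template. The `vmware-iso` builder and `powershell` provisioner are real Packer concepts, but every value below is a placeholder, not our actual configuration:

```json
{
  "builders": [
    {
      "type": "vmware-iso",
      "iso_url": "http://example.internal/isos/windows-server.iso",
      "iso_checksum": "sha256:PLACEHOLDER",
      "communicator": "winrm",
      "winrm_username": "Administrator",
      "vm_name": "win-base-template"
    }
  ],
  "provisioners": [
    {
      "type": "powershell",
      "inline": ["Install-WindowsFeature Web-Server"]
    }
  ]
}
```

The key point is that the whole template build is described in one reviewable, version-controllable file instead of a sequence of clicks in a console.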
Once we’d proven some value in this automation thing, we somehow managed to convince our CTO and CIO to lend us a few more people to start up an agile team to start building automation around our existing VMWare infrastructure and build what we dreamed would be an internal self-service cloud.
Because we wanted to maximise portability and re-use, though, we didn’t make use of the VMware products exactly as intended. We had some strong opinions about how we wanted to manage configuration, through code, and the platform was lacking in this respect. So we ended up building glue between a few of our existing tools rather than building heaps of logic into the VMware platform.
One of the other aspects which drove our decision to integrate systems rather than heavily configure the existing products was that we wanted to use a CI/CD process for all configuration changes, and to build automated testing into a pipeline to increase the quality of contributions to the infrastructure artefacts. We were able to make use of our existing GitHub and TeamCity systems to provide quality control along the way, only pushing the final configuration into the VMware platform at the end of the process.
This diagram shows roughly how it all hung together.
The green boxes are configuration or other artefacts which we wanted to version control. These were the source of truth for everything. If anyone wanted to make a change they could do local development on their workstation using Vagrant and other tools. Once they were happy with the changes they would submit a pull request through GitHub, which would be peer-reviewed (not ”approved”) by the community. The accepted pull request would then be built or checked by TeamCity, depending on what it was. For example, Chef cookbooks which passed automated testing would be pushed into our Chef server; if they failed they would never reach the server, which reduced the chances of bad scripts getting into real environments.
In the use case I mentioned earlier with the Packer proof of concept, we got to the stage where all of our Windows and Linux (yes, Linux now!) VMware templates were generated 100% automatically each month and patched with the latest vendor patches. These were then automatically tested and pushed into our VMware clusters for consumption by users. Each month the obsolete templates were also cleaned up by the process, eliminating the previous template sprawl and clutter problem.
With all of this set up, we were then able to use the VMware vRA platform to provide a self-service request form which allowed a user to request a specific type of VM into any environment, from development through test and performance test into production. They were able to select from a defined list of roles and cookbooks and set some parameters on the deployment. About 15-20 minutes later they’d receive an email notifying them that their requested VM was ready to go.
One of the other really cool things we did was to integrate SolarWinds IPAM (IP Address Management) into our self-service pipeline. The VMware platform normally manages IP addresses itself, but it’s quite restrictive in the way it does this and requires the entire address range to be managed by VMware. Due to the evolution of the infrastructure at Tatts this wasn’t feasible, and we didn’t want to create a huge amount of work restructuring the entire VM farm. Instead, the VM request workflow would request a free IP from SolarWinds depending on which cluster and network zone the VM was going into. This meant the requestor of a VM didn’t need to worry about IP addresses at all; they just specified the zone and environment, and the platform took care of the rest.
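The shape of that lookup step can be sketched roughly as follows. This is a hypothetical illustration of mapping environment and zone to an allocation subnet – it is not the actual SolarWinds API integration, and all names and addresses are made up:

```ruby
# Hypothetical mapping from (environment, zone) to the subnet the IPAM
# system should allocate from. In the real workflow, the chosen subnet
# fed a SolarWinds IPAM request that reserved a free address and handed
# it to the VM provisioning platform; here we just resolve the subnet.
SUBNET_BY_ZONE = {
  %w[prod dmz]      => "10.10.20.0/24",
  %w[prod internal] => "10.10.30.0/24",
  %w[test internal] => "10.20.30.0/24"
}.freeze

def subnet_for(environment, zone)
  SUBNET_BY_ZONE.fetch([environment, zone]) do
    raise ArgumentError, "no subnet mapped for #{environment}/#{zone}"
  end
end

puts subnet_for("prod", "dmz")  # => 10.10.20.0/24
```

The requestor supplies only the two human-meaningful inputs; everything network-specific is resolved by the platform.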
As I mentioned earlier we liked to develop simple, re-usable, patterns and abstractions as part of our work. This is an example of one. On the left, in purple are the four abstraction layers we were using when building up a destination server. On the right hand side in blue are concrete examples to illustrate what I mean by layering.
By using a pattern like this it was easier for us to re-use automation scripts and processes across different VM tools such as Vagrant, VMware, and AWS. It also meant we were following similar patterns across different operating systems.
This helped to reduce the maintenance burden and cognitive load on people who were maintaining pipelines. Imagine if we had used all the native Windows tools on Windows, and the native Linux tools on Linux – the result would have been two totally separate code bases. Instead we ended up with only a tiny amount of bootstrapping required for each operating system, and then we used tools like Packer and Chef to do the rest. Obviously the specifics of what was loaded on the different servers varied, but the pattern was the same, and as a result it made the transition from Windows to Linux more of a step than a leap.
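To make the layering idea concrete, here’s a small sketch of how a deployment run-list might be composed from generic shared layers plus a role-specific tail. The layer names and cookbook names are illustrative assumptions, not the actual layers or cookbooks we used:

```ruby
# Four illustrative layers, most generic first. Everything up to
# :runtime is shared across roles and operating systems; only the
# later layers vary per role.
LAYERS = %i[base_os hardening runtime application].freeze

def compose_run_list(role)
  # Each layer contributes cookbooks; the first two are common to all
  # roles, the rest come from the role definition itself.
  contributions = {
    base_os:     ["recipe[base::default]"],
    hardening:   ["recipe[security::baseline]"],
    runtime:     role.fetch(:runtime_cookbooks),
    application: role.fetch(:app_cookbooks)
  }
  LAYERS.flat_map { |layer| contributions.fetch(layer) }
end

# A hypothetical Windows API server role; a Linux role would follow the
# same pattern with different runtime and application cookbooks.
api_server = {
  runtime_cookbooks: ["recipe[iis::default]"],
  app_cookbooks:     ["recipe[lotteries_api::deploy]"]
}

puts compose_run_list(api_server)
```

Because the composition happens at deployment time, one generic base template serves every role, instead of baking a separate template for each permutation.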
We started the infrastructure as code POD in September 2015, and over the next 18-24 months we had almost a full-time group of 3-6 people working on building TattsCloud and the associated supporting tools. The team was involved in working with customer teams and advocating for the use of the new way of working. They were performing training and coaching, accepting feature requests from customer teams and showcasing their progress fortnightly.
Now almost all VM instances are deployed using TattsCloud from development through to production. The VM templates are exclusively managed by the pipeline.
The way that infrastructure teams and development teams work together has changed totally, to the point where they are both contributing to infrastructure code and chef cookbooks. The same templates and cookbooks are used to build development servers and then to build production servers. This means there is a massively reduced risk of inconsistency as work progresses through the environments.
Finally, just before I left Tatts there was a great story of the lotteries team needing to scale up some of their API infrastructure beyond 8 old physical servers which were on ageing hardware and had been carefully hand-crafted and cared for over the years.
With a bit of coaching and encouragement the lotteries developers worked with some of the infrastructure engineers to define new API servers including network and load balancer automation. Over a period of a few weeks they built and destroyed hundreds of iterations of these machines in the test environments until they were confident that they could deploy a VM free from human hands. They eventually succeeded in this task and rolled out 30 VMs to production.
A few weeks later there was a need to scale up even further and so with no further code or configuration changes they just submitted a request for 30 more VMs… a couple of hours later the job was done and the new VMs were automatically added to the load balancers. I can’t even imagine how long or painful it would have been previously to request and provision an additional 30 VMs – it definitely wouldn’t have been done in hours.
I’d like to leave a few points for you to think about.
Start small. This stuff is quite easy to sell once you get some wins on the board, so just start with a big pain point that people are experiencing and try to do something small to improve that. Once you’ve built some support by solving people’s problems they’ll be more engaged in being part of the change.
Patterns allow you to scale adoption without too much pain. We looked at the patterns used by cloud and container systems for inspiration, for example minimising dependencies to maximise reusability. The layering approach I talked about earlier was also inspired by container technology: rather than build 100 VM templates with every permutation of configuration and environment, why not build a very generic base OS template and customise it at deployment time? Less complexity, less sprawl. The downside was slightly longer build times, but that wasn’t such a big deal in the end.
If you’re building large-scale systems or integrations to do infrastructure as code, treat the whole thing as a product. It’s unreasonable to expect to build it and hand it over to a support team and happy days. Like any complex software system, your infrastructure as code pipelines and ecosystem require care and feeding. Consider the formation of a tools team to maintain and evolve the systems – they should be a blended team of systems and software people to ensure that the voices of all customers are heard – a true Dev+Ops team.
I would also say don’t underestimate the complexity of this task. Of course the benefits are huge, but there are a lot of moving parts, and one of the great challenges we faced was with organisational change. You’re changing people’s identities and pushing them outside their comfort zones. Some people will love this and become raving fans; others will resist with all their might. Don’t underestimate the importance of managing this change: if you invest the time into doing it properly it will be a much smoother ride. Having said that, sometimes you just need to get on with it and show people what’s possible before they’ll be open to accepting the reality.
And finally, take the time to retrospect on what HAS improved. It’s really easy when you’re in an enterprise with lots of legacy processes and systems to always be focusing on what can be improved and what needs to be fixed. As I was preparing to leave Tatts I reflected upon what had been achieved in my time there, and it was quite amazing. Not only had we implemented some impressive technical solutions, we had also totally changed the mindsets and ways of working of an entire technology department.