Beyond Continuous Delivery at ThoughtWorks North America Away Day - Chris Hilton
The document discusses techniques for going beyond continuous delivery including modular development, dependency management, infrastructure as code, semi-fluid dependencies, cloneable pipelines, pre-flight pipelines, quantum pipelines, evergreen trunks, extreme integration, and cloud IDEs that bring together development, QA, and operations. These techniques aim to enable continuous integration and delivery through automated testing and deployment with each code change.
Working with micro-services is arguably the best part of OSGi development. However, everyone agrees that tracking service dependencies with the bare-bones OSGi API is not ideal. So, you pick one of the available dependency managers: either Declarative Services, Felix Dependency manager, Blueprint or iPojo.
But how do you pick the right one? Easy! After this shoot-out you’ll know all about the performance, usability and other aspects of the existing dependency managers. We show the strengths and weaknesses of the implementations side-by-side. How usable is the API? What about performance, does it scale beyond trivial amounts of services? Does it matter which OSGi framework you run the dependency manager in?
Make up your mind with the facts presented in this session.
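The core job shared by all of the dependency managers above is the same: a component declares which services it needs, and the framework activates it once they are available. A minimal illustration of that idea, sketched in Python rather than Java for brevity (this is not the OSGi API; all names here are made up):

```python
# Hypothetical, minimal illustration of declarative service binding in the
# spirit of OSGi Declarative Services (not the real API): components declare
# the services they require, and a registry wires them up once those
# services appear.

class Registry:
    def __init__(self):
        self._services = {}       # service name -> implementation
        self._waiting = []        # components waiting for dependencies

    def register(self, name, impl):
        self._services[name] = impl
        self._activate_ready()

    def add_component(self, component):
        self._waiting.append(component)
        self._activate_ready()

    def _activate_ready(self):
        for comp in list(self._waiting):
            if all(dep in self._services for dep in comp.requires):
                comp.activate({d: self._services[d] for d in comp.requires})
                self._waiting.remove(comp)

class Greeter:
    requires = ("log",)           # declared dependency, resolved by the registry

    def __init__(self):
        self.active = False

    def activate(self, deps):
        self.log = deps["log"]
        self.active = True

registry = Registry()
greeter = Greeter()
registry.add_component(greeter)   # dependency missing: stays inactive
registry.register("log", print)   # dependency arrives: component activates
```

The dependency managers compared in the talk differ in how this declaration is expressed (annotations, XML, an API) and in how efficiently the wiring scales, which is exactly what the shoot-out measures.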
This document discusses the motivation, design, and implementation of a distributed high-availability Asterisk application using Stasis and ARI over Kafka. The current setup uses AGI and AMI in an evolved monolithic architecture on a single Asterisk server. The new approach uses ARI and modularizes the system. Kamailio dispatches calls and observes Asterisk; a Stasis app handles SIP and media, while a call controller manages call logic and routing. ARI events are sent to Kafka, and commands are sent back. This allows for transparent server farms, easy scaling, and restart safety while meeting demands for high availability, performance, scalability, and continuous deployment.
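The event/command loop at the heart of this design can be sketched in a few lines. Here Kafka is replaced by plain in-memory queues so the sketch is self-contained; `StasisStart` and `ChannelDestroyed` are real ARI event types, but the handler logic and command format are illustrative assumptions:

```python
# Hedged sketch of the ARI-over-Kafka loop described above: events arrive on
# one stream (a Kafka topic in the talk; a plain list here) and commands flow
# back on another. Handler logic and command shapes are illustrative.
from collections import deque

commands = deque()                # stands in for the command topic back to Asterisk

def on_stasis_start(event):
    # A new call entered the Stasis app: ask Asterisk to answer it.
    commands.append({"op": "answer", "channel": event["channel"]})

def on_channel_destroyed(event):
    commands.append({"op": "cleanup", "channel": event["channel"]})

HANDLERS = {
    "StasisStart": on_stasis_start,          # real ARI event type
    "ChannelDestroyed": on_channel_destroyed,
}

def dispatch(event):
    handler = HANDLERS.get(event["type"])
    if handler:                   # unknown events are ignored, which helps restart safety
        handler(event)

for ev in [{"type": "StasisStart", "channel": "PJSIP/100-1"},
           {"type": "ChannelDestroyed", "channel": "PJSIP/100-1"}]:
    dispatch(ev)
```

Because any controller instance can consume from the topic and produce commands back, the Asterisk servers become an interchangeable farm, which is what enables the scaling and restart-safety claims.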
SITREP - Asterisk REST. The first steps are done, now what? - CommCon 2019 - Jöran Vinzens
- The current Asterisk infrastructure is complex, outdated, and difficult to maintain. It uses custom patches and an old version of chan_sip.
- The team wants to update components, simplify interactions, and eliminate custom code to make the system more secure, scalable, and able to utilize new Asterisk features.
- They plan to use Asterisk REST Interface (ARI) to create a new interface and decouple Asterisk from the call controller for improved scalability. This will involve a step-by-step "soft" migration rather than a full rewrite.
An Asterisk administrator discusses upgrading an Asterisk PBX from version 11 to version 16. This involved building new Debian packages, porting configurations, updating dialplans, testing features, and addressing issues like crashes, logging problems, and high system loads. Through testing and troubleshooting, the upgrade was eventually successful and the system could handle the same call load as the previous version. However, ongoing work is still needed to optimize logging and reduce continuous CPU loads.
Blasting Through the Clouds - Automating Cloud Foundry with Concourse CI - Fabian Keller
Cloud Foundry has an extremely high release velocity, with new versions of a typical deployment becoming available multiple times every week. It is important for operators to deploy these releases in a timely manner in order to keep up with security patches and feature improvements. Commonly there is not just one Cloud Foundry deployment to keep up to date, but several stages that must be upgraded in a specific order, for example from a sandbox to a development to a production environment.
Automation is key to keeping up with Cloud Foundry's release velocity, and Concourse CI is the continuous thing-doer of choice for this honorable task. In this talk we'll first get to know Concourse CI basics and then see how we can leverage Concourse to automate staged platform updates for Pivotal Cloud Foundry. With pcf-automation being sunsetted in favor of PCF Automation, we'll have a look at how we can tailor upgrade pipelines to suit different needs, all while keeping the thrust high to blast through the clouds!
Talk given at the Cloud Foundry Meetup Stuttgart in May 2019.
- The document discusses ROS 2 features including new Quality of Service settings, code quality improvements, security features, and performance testing tools.
- ROS 2 Dashing introduces new QoS policies including deadline, lifespan, and liveliness to allow finer-grained control over ROS communication.
- Code quality has improved with fixes to memory leaks and data races. Security testing and secure communication capabilities are also discussed.
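The deadline policy mentioned above is the easiest of the new QoS settings to picture: a subscriber expects a new sample at least every deadline period and is notified when that expectation is violated. A plain-Python sketch of that check (not the rcl/rmw API; names are illustrative):

```python
# Illustrative sketch (not the ROS 2 client library API) of what the deadline
# QoS policy enforces: a new sample must arrive within `deadline` seconds of
# the previous one, otherwise a missed-deadline event is raised.

class DeadlineMonitor:
    def __init__(self, deadline):
        self.deadline = deadline
        self.last_stamp = None
        self.missed = 0

    def on_sample(self, stamp):
        if self.last_stamp is not None and stamp - self.last_stamp > self.deadline:
            self.missed += 1      # the gap since the previous sample was too long
        self.last_stamp = stamp

mon = DeadlineMonitor(deadline=0.1)
for t in [0.00, 0.05, 0.30, 0.35]:   # the 0.05 -> 0.30 gap misses the deadline
    mon.on_sample(t)
```

Lifespan and liveliness follow the same pattern, but bound how long a sample stays valid and how long a publisher may stay silent, respectively.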
This document discusses various tools and techniques for improving the quality of robotics software, including ROS 2 code. It covers using compiler instrumentation like AddressSanitizer and ThreadSanitizer to detect memory bugs and concurrency issues. It also discusses annotating code with thread safety annotations, fuzz testing ROS 2 to find crashes, and integrating these techniques into continuous integration systems to catch issues early. The goal is to help the robotics community build more robust, secure software.
Microservices Manchester: Testing Microservices: Pain or Opportunity? By Davi... - OpenCredo
Testing Microservices is hard! Or is it really? One of the precepts of TDD is that if something is hard to test, then potentially the design itself is at fault. With a sweep through design options for Microservices, it can be shown that testing Microservices does not have to be hard, and can become as straightforward as any other test. This does require a radically different design philosophy, which this talk will review and show applied.
Come prepared to discover why we should blame Plato for bad software, and why Microservices are really about data, not processes.
About David Dawson
David is CEO of Simplicity Itself, helping their clients adopt Microservices and cloud-native architectures.
www.simplicityitself.io
Going FaaSter, Functions as a Service at Netflix - Yunong Xiao
The document discusses Netflix's use of serverless computing via its own Function as a Service (FaaS) platform. Some key points:
- Netflix built its own FaaS platform, running functions at scale in containers on its Titus container platform for portability and efficiency.
- The platform handles operations concerns so developers can focus on business logic. It provides a full runtime API and handles updates, metrics, and management automatically.
- Netflix developed tools like NEWT to improve the developer experience with one-click setup, local development and debugging, testing, and CI/CD integration for fast and reliable software development.
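The "platform handles operations so developers write only business logic" idea can be pictured as a wrapper the platform supplies around every function. This is a hypothetical sketch under that assumption, not Netflix's actual runtime API:

```python
# Hypothetical sketch of the FaaS idea above: the developer writes only the
# business logic; a platform-supplied wrapper adds metrics and error
# handling. None of these names are Netflix's actual API.
import time

metrics = {"invocations": 0, "errors": 0, "total_seconds": 0.0}

def faas_handler(fn):
    def wrapped(event):
        metrics["invocations"] += 1
        start = time.perf_counter()
        try:
            return {"status": 200, "body": fn(event)}
        except Exception as exc:
            metrics["errors"] += 1
            return {"status": 500, "body": str(exc)}
        finally:
            metrics["total_seconds"] += time.perf_counter() - start
    return wrapped

@faas_handler
def greet(event):                 # the only code a developer writes
    return f"hello {event['name']}"

ok = greet({"name": "netflix"})
bad = greet({})                   # missing key -> caught and reported as a 500
```

Because the wrapper is owned by the platform, updates to metrics, error handling, or the runtime itself roll out to every function without developer involvement, which is the operational win the talk describes.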
Making the LAMP Stack of Manufacturing - for Make Hardware Innovation Workshop - Nick Pinkston
This is my presentation at the Make: Hardware Innovation Workshop about how we can apply the analogies of the LAMP stack and programming to a future system of manufacturing automation.
10 ways to shoot yourself in the foot with kubernetes, #9 will surprise you! ... - Laurent Bernaille
Kubernetes is a very powerful and complicated system, and many users don’t understand the underlying systems. Come learn how your users can abuse container runtimes, overwhelm your control plane, and cause outages - it’s actually quite easy!
In the last year, we have containerized hundreds of applications and deployed them in large scale clusters (more than 1000 nodes). The journey was eventful and we learned a lot along the way. We'll share stories of our ten favorite Kubernetes foot guns, including the dangers of cargo culting, rolling updates gone wrong, the pitfalls of initContainers, and nightmarish daemonset upgrades. The talk will present solutions we adopted to avoid or work around some of these problems and will finally show several improvements we plan to deploy in the future.
Similar to the Kubecon talk with the same title with a few new incidents.
This document discusses the history of the oldest Django project, which was originally created for a regional news company with newspapers, TV stations, and magazines. It details the challenges of porting the project to newer versions of Django over many years as Django continued to evolve, including two major porting attempts. Key lessons learned include the importance of preparation, testing, and making deployment automated. The project was later open sourced and the codebase simplified, cutting around 40,000 lines of code.
KKBOX is a music streaming service founded in 2004 that uses PHP and handles around 4000 API requests per second. It uses an event-driven asynchronous architecture and tools like GitLab for code hosting and reviews, GitLab CI for testing, and Slack for communication. The document discusses KKBOX's technologies and processes for code management, testing, deployment, and communication between teams.
Tech Days 2015: Ada 2012 and Spark Crazyflie and Railway Demo - AdaCore
This document summarizes presentations given by Eric Perlade on using Ada 2012 and SPARK 2014 for safety-critical drone and railway signaling software. It describes reimplementing the stabilization system of the Crazyflie drone firmware in SPARK to prove absence of runtime errors. It also outlines plans to reimplement the entire drone firmware without C using Ada 2012 and SPARK. Additionally, it discusses a demonstration of using SPARK 2014 to model a railway signaling system and prove absence of collisions.
Monitoring a cloud CI system using Jenkins as an example / Alexander Akbashev (HERE T... - Ontico
This document discusses monitoring a Jenkins continuous integration (CI) system using cloud services. It begins by outlining some common issues that can occur in Jenkins like compilation or test failures. It then evaluates the default Jenkins monitoring capabilities and proposes designing a custom monitoring system using events, FluentD for processing, and InfluxDB for storage. Examples are provided of plugins developed to analyze build failures and improve node utilization. The presentation concludes with a discussion of dashboards used for daily monitoring of the Jenkins CI system.
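The storage step in that design boils down to turning each build event into a point InfluxDB can ingest. The line protocol format (`measurement,tags fields timestamp`) is real; the measurement, tag, and field names below are illustrative, not the plugin's actual schema:

```python
# Small sketch of the events -> InfluxDB step: formatting a Jenkins build
# event as InfluxDB line protocol. Tag and field names are illustrative.

def to_line_protocol(measurement, tags, fields, timestamp_ns):
    tag_part = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))

    def fmt(v):
        if isinstance(v, bool):          # bool before int: bool is an int subclass
            return "true" if v else "false"
        if isinstance(v, int):
            return f"{v}i"               # integer fields carry an 'i' suffix
        if isinstance(v, str):
            return '"' + v + '"'
        return str(v)

    field_part = ",".join(f"{k}={fmt(v)}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_part} {field_part} {timestamp_ns}"

line = to_line_protocol(
    "jenkins_build",
    {"job": "app-deploy", "node": "linux-agent-3"},
    {"duration_ms": 93000, "failed": False},
    1560000000000000000,
)
```

With events flattened to this shape, build duration and failure-rate dashboards become simple InfluxDB queries grouped by the `job` and `node` tags.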
What is Digital Rebar Provision (and how RackN extends)? - rhirschfeld
Walks through how Digital Rebar Provision rethinks bare metal automation, going beyond a simple OS install to an integrated workflow system for building the data center underlay.
Includes video of the presentation.
"Today it's crystal clear why we need unit tests. Even integration and acceptance tests are quite common but who is making sure that your pages are working in production environment? I'd like to show in detail how smoke tests will help achieving this goal and why you should try to burn down your production server."
This is a hands-on talk about how we use excessive smoke tests in our continuous deployment to make sure a freshly rolled-out release can actually go live.
The document describes an automated continuous integration and continuous deployment (CI/CD) pipeline using a blue-green deployment strategy. It begins by showing a basic CI/CD pipeline and then introduces blue-green deployment to reduce downtime when deploying new changes. It demonstrates how blue-green deployment works step-by-step and discusses how it helps minimize downtime, preserve the last known good deployment, enable robust infrastructure, and allow for parallel pipelines. The document then provides recommendations for implementing blue-green deployment, including using virtualization, automating the process, and incorporating security testing. It emphasizes securing the entire CI/CD pipeline, not just the final application.
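The blue-green flow described above reduces to three moves: deploy to the idle color, health-check it, and only then flip the router. A minimal sketch of that control logic (environment names and version strings are illustrative):

```python
# Minimal sketch of the blue-green strategy: the idle environment receives
# the new version, and traffic flips only after a passing health check, so
# the last known good deployment keeps serving on failure.

class BlueGreen:
    def __init__(self):
        self.live = "blue"
        self.versions = {"blue": "v1", "green": None}

    @property
    def idle(self):
        return "green" if self.live == "blue" else "blue"

    def deploy(self, version, healthy=True):
        target = self.idle
        self.versions[target] = version      # live environment is untouched
        if not healthy:                      # failed health check: no traffic flip
            return False
        self.live = target                   # the router switch is a single atomic step
        return True

bg = BlueGreen()
bg.deploy("v2")                  # green gets v2 and becomes live
bg.deploy("v3", healthy=False)   # blue gets v3 but traffic stays on green
```

The near-zero downtime comes from the fact that the only user-visible step is the router flip; the slow parts (install, warm-up, health checks) all happen against the idle color.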
Practical virtual network functions with Snabb (SDN Barcelona VI)Igalia
By Andy Wingo.
SDN and Network Programmability Meetup in Barcelona (VI)
21 June 2017
https://www.meetup.com/es-ES/SDN-and-Network-Programmability-Meetup-in-Barcelona
/events/239667457/?eventId=239667457
Breaking down your build: Architectural patterns for a more efficient pipelin... - Abraham Marin-Perez
The document discusses architectural patterns for more efficient software pipelines, including breaking down monoliths, building microservices, refactoring techniques, and restructuring patterns. Some key restructuring patterns covered are decoupling APIs from implementations, using horizontal and vertical slices, treating libraries as services, reducing fan-out, and managing configuration as a service. The goal is to seek simplicity and only run necessary processes in the pipeline.
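The fan-out point is the most quantifiable of these patterns: a change to one module triggers rebuilds of everything that transitively depends on it, so high-fan-out modules dominate pipeline cost. A small sketch of that computation (module names are made up):

```python
# Sketch of why reducing fan-out matters for pipeline cost: a change to one
# module forces rebuilds of all its transitive dependents. Names are made up.

DEPENDS_ON = {                    # module -> modules it depends on
    "api": ["core"],
    "web": ["api"],
    "batch": ["core"],
    "core": [],
}

def rebuild_set(changed):
    """All modules needing a rebuild after `changed` changes (including it)."""
    dirty = {changed}
    grew = True
    while grew:                   # propagate until no new dependents appear
        grew = False
        for mod, deps in DEPENDS_ON.items():
            if mod not in dirty and any(d in dirty for d in deps):
                dirty.add(mod)
                grew = True
    return dirty

# Touching "core" rebuilds everything; touching "web" rebuilds only "web".
```

Patterns like decoupling APIs from implementations work precisely by shrinking these rebuild sets: depending on a stable API module instead of an implementation keeps implementation changes out of most modules' dependency chains.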
Iceoryx is an open-source middleware developed under Eclipse that provides real-time data transport capabilities. It can be used as an alternative to ROS 2's Fast-RTPS and Connext middleware implementations. Iceoryx uses shared memory and message queues for high-performance data transport between processes. However, it currently has some limitations, including a single point of failure if the central RouDi daemon crashes, fixed memory mapping, and lack of support for request/response calls and quality-of-service features.
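The shared-memory transport is what lets iceoryx move payloads between processes without copying them through sockets. A rough analogue using Python's standard library (this is `multiprocessing.shared_memory`, not iceoryx's API; the payload is illustrative):

```python
# Rough stdlib analogue of shared-memory transport (not iceoryx's API): one
# side writes into a named shared segment, the other attaches by name and
# reads the same bytes; no serialization or socket copy in between.
from multiprocessing import shared_memory

writer = shared_memory.SharedMemory(create=True, size=64)
payload = b"sensor-frame-0042"
writer.buf[:len(payload)] = payload        # "publish": write in place

reader = shared_memory.SharedMemory(name=writer.name)   # "subscribe": attach
received = bytes(reader.buf[:len(payload)])

reader.close()
writer.close()
writer.unlink()                  # someone must own segment cleanup, as RouDi does
```

The cleanup line also hints at why RouDi is a single point of failure: some central party has to own segment allocation and teardown, and if it dies, so does the transport.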
The document provides an overview of income and spending trends in Mohave and La Paz Counties in Arizona based on 2010 Census data. It reports that the combined income for the two counties was $4.7 billion in 2010. It then breaks down expenditures by category, such as 12.4% ($582 million) spent on groceries, 6.5% ($305 million) on auto purchases, and 34.1% ($1.6 billion) on housing. The largest population and per capita income figures are also listed for each county from the 2010 Census.
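The category figures quoted above can be cross-checked against the $4.7 billion combined income; each dollar amount is, to within rounding, the stated percentage of the total:

```python
# Cross-check of the spending figures above: each reported dollar amount
# should be roughly its stated share of the $4.7 billion combined income.

total_income = 4.7e9
categories = {
    "groceries": (0.124, 582e6),
    "auto purchases": (0.065, 305e6),
    "housing": (0.341, 1.6e9),
}

for name, (share, reported) in categories.items():
    computed = share * total_income
    # allow a few percent of slack for rounding in the reported figures
    assert abs(computed - reported) / reported < 0.03, name
```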
The COVID-19 pandemic has had a significant impact on the world economy. Many countries experienced sharp drops in GDP and rises in unemployment due to widespread lockdowns and travel restrictions. Although vaccines have allowed many economies to reopen, the long-term effects of the pandemic on sectors such as tourism and travel are still unclear.
Diabetes is a chronic disease that requires lifelong treatment and monitoring by a physician. There are several potential complications of diabetes, including heart disease, kidney disease, eye complications, nerve damage, and foot complications. Left unmanaged, diabetes can lead to serious health issues such as heart attack, stroke, kidney failure, blindness, and lower limb amputations. It is important for those with diabetes to work closely with their healthcare team to manage blood sugar, blood pressure, and cholesterol levels, and to prevent or treat any complications through medication, lifestyle changes, and regular screening exams.
General Continuous Delivery for Agile Practitioners Meetup May 2014 - Chris Hilton
A generalized version of the presentation on Continuous Delivery given at the Agile Practitioners meetup at Gap headquarters in San Francisco on May 28, 2014.
Delivery Engines: Software & Spaceflight - Max Lincoln
The document discusses the importance of balancing investment in new features with investment in the delivery engine or technical quality of the system. It states that focusing only on velocity can detract from customer experience and that resources need to be allocated to both new features and improving the delivery engine. Management must find the right balance between the two areas of investment.
(ARC402) Deployment Automation: From Developers' Keyboards to End Users' Scre... - Amazon Web Services
Some of the best businesses today are deploying their code dozens of times a day. How? By making heavy use of automation, smart tools, and repeatable patterns to get process out of the way and keep the workflow moving. Come to this session to learn how you can do this too, using services such as AWS OpsWorks, AWS CloudFormation, Amazon Simple Workflow Service, and other tools. We'll discuss a number of different deployment patterns, and what aspects you need to focus on when working toward deployment automation yourself.
Forward Networks - Networking Field Day 13 presentation - Andrew Wesbecher
On November 17th, 2016, Forward Networks conducted its first public unveiling of its Network Assurance platform at Networking Field Day 13. Visit https://www.forwardnetworks.com/ for more details.
This document discusses continuous delivery using Jenkins, Docker, and Spring Boot. It defines continuous delivery as getting changes safely and quickly into production. It describes how continuous integration and automated testing can help achieve continuous delivery. It then explains how using Docker can help address issues like environment configuration differences. The document outlines a continuous delivery pipeline from code checkout through deployment to production and testing. It provides an example of building a Docker image and running a container mapped to a port.
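The build-and-run step from such a pipeline can be sketched as command generation, so it can be inspected without a Docker daemon. The image name and port mapping below are illustrative, not taken from the talk:

```python
# Sketch of the pipeline's Docker step as command generation: building an
# image and running a container with a port mapped to the host. The image
# name and port are illustrative.

def docker_build_cmd(image, tag, context="."):
    return ["docker", "build", "-t", f"{image}:{tag}", context]

def docker_run_cmd(image, tag, host_port, container_port):
    # -p maps host_port on the host to container_port inside the container
    return ["docker", "run", "-d",
            "-p", f"{host_port}:{container_port}", f"{image}:{tag}"]

build = docker_build_cmd("demo-app", "1.0.0")
run = docker_run_cmd("demo-app", "1.0.0", 8080, 8080)
# e.g. subprocess.run(build, check=True) would execute the build for real
```

Generating the exact same commands for every environment is the point the talk makes about Docker: the image, not the host configuration, carries the environment, which removes the "works on my machine" class of configuration drift.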
This document discusses moving from continuous integration (CI) to continuous delivery and deployment (CD&D). It recommends standardizing the build, test, and deployment process through CI to ensure solid, reproducible steps. Continuous deployment takes CI a step further by automatically deploying code changes. The document provides an overview of tools and processes for implementing continuous integration, delivery, and deployment both externally using cloud providers and in-house using software, quality, and delivery/deployment factories. It emphasizes the benefits of native packaging for continuous deployment.
Continuous Delivery - Voxxed Days Thessaloniki 21.10.2016Rafał Leszko
The document discusses continuous delivery using Jenkins, Docker, and Spring Boot. It describes continuous delivery as getting changes into production quickly and safely. It then explains how continuous integration, automated testing, and using a continuous delivery pipeline with Docker can help achieve this. Key points are that Docker allows applications to be packaged and run the same way anywhere, and treating servers as "cattle not pets" allows easy replacement and consistency across environments.
Forward Networks - Networking Field Day 13 presentationForward Networks
On November 17th, 2016, Forward Networks conducted its first public unveiling of its Network Assurance platform at Networking Field Day 13. Visit https://www.forwardnetworks.com/ for more details.
EuroPython 2019: Modern Continuous Delivery for Python DevelopersPeter Bittner
Deployment automation, cloud platforms, containerization, short iterations to develop and release software—we’ve progressed a lot. And finally it’s official: Kubernetes and OpenShift are the established platforms to help us do scaling and zero downtime deployments with just a few hundred lines of YAML. It’s a great time.
Can we finally put all our eggs into one basket? Identify the cloud platform that fits our needs, and jump on it? That could well backfire: Vendor lock-in is the new waterfall, it slows you down. In future you’ll want to jump over to the next better platform, in a matter of minutes. Not months.
This talk is about The Art of Writing deployment pipelines that will survive Kubernetes, OpenShift and the like. It’s for Python developers and Kubernetes enthusiasts of all levels – no domain specific knowledge required, all you need to understand will be explained. You’ll learn how to separate application-specific and deployment-specific configuration details, to maximize your freedom and avoid vendor lock-in.
Come see a demo of a Django project setup that covers everything from local development to automatic scaling, flexible enough to be deployed on any of your favorite container platforms. Take home a working, future-proof setup for your Python applications.
See the original presentation at https://slides.com/bittner/modern-continuous-delivery/
Build and Deploy Cloud Native Camel Quarkus routes with Tekton and KnativeOmar Al-Safi
In this talk, we will leverage all cloud native stacks and tools to build Camel Quarkus routes natively using GraalVM native-image on Tekton pipeline and deploy these routes to Kubernetes cluster with Knative installed. We will dive into the following topics in the talk: - Introduction to Camel - Introduction to Camel Quarkus - Introduction to GraalVM Native Image - Introduction to Tekon - Introduction to Knative - Demo shows how to deploy end to end a Camel Quarkus route which include the following steps: - Look at whole deployment pipeline for Cloud Native Camel Quarkus routes - Build Camel Quarkus routes with GraalVM native-image on Tekton pipeline. - Deploy Camel Quarkus routes to Kubernetes cluster with Knative Targeted Audience: Users with basic Camel knowledge
Last update to the DevOps anti-patterns talk that IMO deserves separate upload. It was about anti patterns captured consulting several projects on their DevOps adoption. There are few common pitfalls we can see repeating again and again over DevOps culture discovery. This talk is my experience summary there
This document discusses monitoring a Jenkins continuous integration (CI) system using cloud services. It begins by outlining common problems that can occur in Jenkins like compilation failures, test failures, and network issues. It then evaluates monitoring Jenkins out of the box and with plugins. The document proposes building custom monitoring by collecting Jenkins event data and sending it to InfluxDB using Python and FluentD for storage and visualization. It provides code examples of collecting build queue and failure data and sending it to FluentD. Finally, it discusses how the monitoring data was used to address issues like infrastructure problems, slow compilations, and node utilization.
On the road to Continuous Delivery and/or DevOps? Just want to deliver better software faster? Pipelines are a key to achieving these goals because going fast and still producing quality output requires better visibility, coordination, and control. There is confusion out there on how best to use pipelines, and where they should be used. I’ll separate the ideal from the reality so you can figure out your next steps. During this talk we will demonstrate how ElectricFlow can provide visibility to all stakeholders, coordinate multiple people and automations, and control what gets released into production environments.
Implement a disaster recovery solution for your on-prem SQL with Azure? Easy!Marco Obinu
Slides presented at SQL Saturday 980 Plovdiv, talking about the different architectures you can implement to protect your on-premises SQL Server workloads on Azure for DR purposes.
Building a Service Mesh with NGINX Owen Garrett.pptxPINGXIONG3
This document discusses building a service mesh with NGINX. It notes that operating distributed applications is difficult due to issues like slow and unreliable calls between services, distributed fault finding, and continuous updates occurring in production. It reviews existing approaches like using an NGINX proxy per pod or a simple mesh. A full service mesh provides more capabilities but also more complexity. The document outlines NGINX's plans to build a service mesh focused on hybrid applications, with lightweight and performant data and control planes using open source projects like SPIRE and OpenTracing where possible.
Sam Newman is a technologist at ThoughtWorks. This talk from FlowCon 2014 goes into the nitty gritty of managing build, test and release of microservices and also covers the often ignored tradeoff between testing before deployment, and testing afterwards.
stackconf 2021 | Continuous Security – integrating security into your pipelinesNETWAYS
In the world of continuous delivery and cloud native, the boundaries between what is our application and what constitutes infrastructure is becoming increasing blurred. Our workloads, the containers they ship in, and our platform configuration is now often developed and deployed by the same teams, and development velocity is the key metric to success. This presents us with a challenge which the previous models of security as a final external gatekeeper step cannot keep up with. To ensure our apps and platforms are secure, we need to integrate security at all stages of our pipelines and ensure that our developers and engineering teams have tools and data with enable them to make decisions about security on an ongoing basis. In this session I will talk through the problem space, look at the kinds of security issues we need to consider, and look at where the integration points are to build in security as part of our CI/CD process.
The Jenkins open source continuous integration server now provides a “pipeline” scripting language which can define jobs that persist across server restarts, can be stored in a source code repository and can be versioned with the source code they are building. By defining the build and deployment pipeline in source code, teams can take full control of their build and deployment steps. The Docker project provides lightweight containers and a system for defining and managing those containers. The Jenkins pipeline and Docker containers are a great combination to improve the portability, reliability, and consistency of your build process.
This session will demonstrate Jenkins and Docker in the journey from continuous integration to DevOps.
Canary Deployments for Kubernetes (KubeCon 2018 North America)Nail Islamov
https://kccna18.sched.com/event/GrUI/custom-deployment-strategies-for-kubernetes-nail-islamov-atlassian
Many tech companies are using continuous deployments (CD) to deliver changes to their users faster and more frequently. One of the challenges with automated deployments is making them safe by detecting and quickly rolling back in the event of a bad release. Standard CD practices include using canary and blue-green deployments; unfortunately, Kubernetes only supports the "rolling update" deployment strategy out of the box, which can only prevent trivial failures. Thanks to extensibility of Kubernetes, it is possible to build custom advanced deployment strategies while reusing Kubernetes core concepts. Nail Islamov will give an overview of how Deployment, ReplicaSet and Pod objects work together along with Service and Ingress, and will provide examples of implementing blue-green and canary deployments reusing these concepts by introducing extra CRD resources.
Similar to Beyond Continuous Delivery TW Away Day June 2013 (20)
Fueling AI with Great Data with Airbyte WebinarZilliz
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
Monitoring and Managing Anomaly Detection on OpenShift.pdfTosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
Your One-Stop Shop for Python Success: Top 10 US Python Development Providersakankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/temporal-event-neural-networks-a-more-efficient-alternative-to-the-transformer-a-presentation-from-brainchip/
Chris Jones, Director of Product Management at BrainChip , presents the “Temporal Event Neural Networks: A More Efficient Alternative to the Transformer” tutorial at the May 2024 Embedded Vision Summit.
The expansion of AI services necessitates enhanced computational capabilities on edge devices. Temporal Event Neural Networks (TENNs), developed by BrainChip, represent a novel and highly efficient state-space network. TENNs demonstrate exceptional proficiency in handling multi-dimensional streaming data, facilitating advancements in object detection, action recognition, speech enhancement and language model/sequence generation. Through the utilization of polynomial-based continuous convolutions, TENNs streamline models, expedite training processes and significantly diminish memory requirements, achieving notable reductions of up to 50x in parameters and 5,000x in energy consumption compared to prevailing methodologies like transformers.
Integration with BrainChip’s Akida neuromorphic hardware IP further enhances TENNs’ capabilities, enabling the realization of highly capable, portable and passively cooled edge devices. This presentation delves into the technical innovations underlying TENNs, presents real-world benchmarks, and elucidates how this cutting-edge approach is positioned to revolutionize edge AI across diverse applications.
Northern Engraving | Nameplate Manufacturing Process - 2024Northern Engraving
Manufacturing custom quality metal nameplates and badges involves several standard operations. Processes include sheet prep, lithography, screening, coating, punch press and inspection. All decoration is completed in the flat sheet with adhesive and tooling operations following. The possibilities for creating unique durable nameplates are endless. How will you create your brand identity? We can help!
In the realm of cybersecurity, offensive security practices act as a critical shield. By simulating real-world attacks in a controlled environment, these techniques expose vulnerabilities before malicious actors can exploit them. This proactive approach allows manufacturers to identify and fix weaknesses, significantly enhancing system security.
This presentation delves into the development of a system designed to mimic Galileo's Open Service signal using software-defined radio (SDR) technology. We'll begin with a foundational overview of both Global Navigation Satellite Systems (GNSS) and the intricacies of digital signal processing.
The presentation culminates in a live demonstration. We'll showcase the manipulation of Galileo's Open Service pilot signal, simulating an attack on various software and hardware systems. This practical demonstration serves to highlight the potential consequences of unaddressed vulnerabilities, emphasizing the importance of offensive security practices in safeguarding critical infrastructure.
The Microsoft 365 Migration Tutorial For Beginner.pptxoperationspcvita
This presentation will help you understand the power of Microsoft 365. However, we have mentioned every productivity app included in Office 365. Additionally, we have suggested the migration situation related to Office 365 and how we can help you.
You can also read: https://www.systoolsgroup.com/updates/office-365-tenant-to-tenant-migration-step-by-step-complete-guide/
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
What is an RPA CoE? Session 1 – CoE VisionDianaGray10
In the first session, we will review the organization's vision and how this has an impact on the COE Structure.
Topics covered:
• The role of a steering committee
• How do the organization’s priorities determine CoE Structure?
Speaker:
Chris Bolin, Senior Intelligent Automation Architect Anika Systems
HCL Notes und Domino Lizenzkostenreduzierung in der Welt von DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU und die Lizenzen nach dem CCB- und CCX-Modell sind für viele in der HCL-Community seit letztem Jahr ein heißes Thema. Als Notes- oder Domino-Kunde haben Sie vielleicht mit unerwartet hohen Benutzerzahlen und Lizenzgebühren zu kämpfen. Sie fragen sich vielleicht, wie diese neue Art der Lizenzierung funktioniert und welchen Nutzen sie Ihnen bringt. Vor allem wollen Sie sicherlich Ihr Budget einhalten und Kosten sparen, wo immer möglich. Das verstehen wir und wir möchten Ihnen dabei helfen!
Wir erklären Ihnen, wie Sie häufige Konfigurationsprobleme lösen können, die dazu führen können, dass mehr Benutzer gezählt werden als nötig, und wie Sie überflüssige oder ungenutzte Konten identifizieren und entfernen können, um Geld zu sparen. Es gibt auch einige Ansätze, die zu unnötigen Ausgaben führen können, z. B. wenn ein Personendokument anstelle eines Mail-Ins für geteilte Mailboxen verwendet wird. Wir zeigen Ihnen solche Fälle und deren Lösungen. Und natürlich erklären wir Ihnen das neue Lizenzmodell.
Nehmen Sie an diesem Webinar teil, bei dem HCL-Ambassador Marc Thomas und Gastredner Franz Walder Ihnen diese neue Welt näherbringen. Es vermittelt Ihnen die Tools und das Know-how, um den Überblick zu bewahren. Sie werden in der Lage sein, Ihre Kosten durch eine optimierte Domino-Konfiguration zu reduzieren und auch in Zukunft gering zu halten.
Diese Themen werden behandelt
- Reduzierung der Lizenzkosten durch Auffinden und Beheben von Fehlkonfigurationen und überflüssigen Konten
- Wie funktionieren CCB- und CCX-Lizenzen wirklich?
- Verstehen des DLAU-Tools und wie man es am besten nutzt
- Tipps für häufige Problembereiche, wie z. B. Team-Postfächer, Funktions-/Testbenutzer usw.
- Praxisbeispiele und Best Practices zum sofortigen Umsetzen
Generating privacy-protected synthetic data using Secludy and MilvusZilliz
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdfChart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
2. Continuous Delivery
• Frequent, automated releases
• Every check-in is a potential release
• Every change triggers feedback
• Feedback must be received as soon
as possible
• Automate almost everything
• Build quality in
13. Cloneable Pipelines
[Slide diagrams: a pipeline assembled from modules (Base VM, IT scripts, environment scripts, and application code) flowing through isolation tests, integration tests with another app's environment scripts, staging, and production. Alongside, a dependency graph: App WAR 3.6 depends on A JAR 2.3 (fluid range :2.0+) and B JAR 1.4 (fluid range :1.0+), and both of those depend on Common JAR 4.3 (fluid range :4.0+).]
50. Beyond Continuous Delivery
Chris Hilton
chilton@thoughtworks.com
@dirtyagile
Graphics: Matthew Tobiasz
mtobiasz@thoughtworks.com
Editor's Notes
Hi, I’m Chris Hilton from ThoughtWorks. How many of you have heard of Continuous Delivery? I’m going to be talking about some of the work I have done, am doing, and want to do, and then about the possible evolution of release management beyond the standard Continuous Delivery model. There are a lot of technology and process innovations that got me interested in Continuous Delivery, but they also had implications far beyond just CD. This presentation explores some of those ideas. I’ll start with some basic concepts rooted in the here and now, then build on top of those toward farther-out concepts. This builds up to a far-future “grand vision”, if you will, but I think it’s more important just to explore the concepts along the way. I have a lot of high-level material to get through, so I’m not going to get too bogged down in details. Much of this hasn’t been done yet, so the devil is in the details, but those could be long discussions in themselves. I don’t want to discourage questions; I just want to keep things at the level of solvable problems.
These are concepts I assume everyone here is pretty familiar with. For the purposes of this discussion, I’ll be assuming cloud resources are cheap and unlimited. I’m going to push these assumptions to ridiculous limits, but I’m going to go with them for the sake of these thought experiments.
As an alternative to monolithic applications, this is an example of a simple modular application wired together with a dependency management system. Each module can build and unit test independently, and pulls in needed dependencies through the dependency management system. A more realistic example can look like this…
This is a moderately complex application at a big retailer. There are more complicated ones. But let’s keep it simple…
…and stick to discussing this instead. So what can you do with dependency management and modular applications that you can’t do with a monolithic build? Chained builds: each module kicks off builds of its downstream modules. Parallel building: instead of one monolithic sequential build, build modules in parallel. Minimal builds (the “impact zone”): build only what’s been updated. Maven and Ivy handle dependency management, and Jenkins plugins handle chained builds. Tim Brown did some of this at GAP, and I did all of this at my previous job.
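The minimal-build idea above can be pictured as a reverse walk of the dependency graph. This is a hypothetical sketch (the module names and graph mirror the slides’ App.war/A.jar/B.jar/common.jar example), not any specific Maven, Ivy, or Jenkins feature:

```python
from collections import defaultdict

# Hypothetical module dependency graph: module -> modules it depends on.
DEPS = {
    "common.jar": [],
    "a.jar": ["common.jar"],
    "b.jar": ["common.jar"],
    "app.war": ["a.jar", "b.jar"],
}

def impact_zone(changed):
    """Return every module that (transitively) depends on `changed`."""
    dependents = defaultdict(set)
    for mod, deps in DEPS.items():
        for d in deps:
            dependents[d].add(mod)
    zone, frontier = set(), {changed}
    while frontier:
        nxt = set()
        for mod in frontier:
            for child in dependents[mod]:
                if child not in zone:
                    zone.add(child)
                    nxt.add(child)
        frontier = nxt
    return zone

# A change to common.jar triggers chained builds of everything downstream;
# a.jar and b.jar have no edge between them, so they can build in parallel.
print(sorted(impact_zone("common.jar")))  # → ['a.jar', 'app.war', 'b.jar']
print(sorted(impact_zone("b.jar")))       # → ['app.war']
```

Only the impact zone rebuilds, and modules at the same depth in the zone are candidates for parallel building.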
Now, with infrastructure as code, we can add infrastructure modules and include those in our dependency tree, ideally with tests. They can also build and test independently. So we have a base VM module; on top of that we add some basic IT setup (monitoring, security, etc.), then add some environment scripts. The isolation tests module here does the deploy and runs isolated tests like functional tests, acceptance tests, etc. This could be multiple modules; I just grouped them for simplicity.
Building on that, we can integrate with another app and run integration tests.
And we can even extend our dependency tree all the way to production. So I’m going to say a few possibly controversial things here. 1. Point out the pipelines in the graph: pipelines are abstractions of dependency graphs. A useful concept, and I’ll keep using the term, but kind of bullshit. You can talk about pipelines of pipelines, but that’s pretty much just another way of saying dependencies, and it can encourage monoliths. 2. Everything is a unit. 3. Building on that, all tests are unit tests for the right size of unit.
Next, I want to talk about semi-fluid dependencies and how they can keep every module up-to-date but always in a working state. Semi-fluid dependencies are a combination of static and fluid dependencies. Static dependencies are stable but hard to maintain manually. Fluid dependencies keep things up-to-date easily, but also break things easily. Perversely, fluid dependencies also make it easier for third parties to commit code to your project than for your own developers. At Expedia, fluid dependencies were causing outages for the web team several times a week. With a team size of around 150 developers, that was $25-50K in lost productivity every week. Semi-fluid dependencies try to get the best of both worlds. Every dependency has both a static and (possibly) a fluid dependency. Developers use the static dependencies. An automated system uses the fluid dependencies, runs tests, and updates the static dependency. So when a new version of common.jar is published…
The automated system builds A.jar with the new version; it passes, so the static dependency is updated. The build for B.jar also runs, but it doesn’t pass, so the static dependency remains. I’m assuming a Jenkins-like build here, so App.war also builds, attempting to incorporate the new version of A.jar, but fails with a dependency version conflict. At this point, developers can still build and work on every module, because each has static dependencies on known good versions, though work on A.jar and B.jar now depends on different versions of common.jar, and work on A.jar and common.jar won’t be incorporated into App.war. Most of the time, like 95%, new dependencies at Expedia were incorporated automatically, but the rest have to be resolved manually. That can be done in two ways: update B.jar to work with the new version of common.jar, or publish a new version of common.jar that works with B.jar.
Here, we published a new version of common.jar with a fix. The A.jar build passes and updates its static dependency, B.jar does the same, and App.war updates the static versions for both of its dependencies. I implemented something like this at Expedia, saving them a couple of million in lost productivity per year. There’s still some confusion and back-and-forth that has to go on to fix things, so I’ll talk more about eliminating the introduction of errors later, but first we need to talk about…
Cloneable pipelines. With all of the pipeline infrastructure well-defined in source code, it should be trivial to create a copy of the release pipeline. rPath does some of this for infrastructure already.
Clone the source code repository as well, such as with git, and you have a personal pipeline that is an exact copy of the official pipeline. Commit to your repo, let your pipeline run, and push to master when it succeeds. No more “it works on my machine”, and your code will be better tested. With most pre-flight build concepts, you run just the module build and tests before the change is accepted for commit. Using a cloneable pipeline, we can accept only changes that work for the entire pipeline. Unlike before, when common.jar breaks the build for B.jar, this time it happens in a separate pipeline instance and the commit is never pushed into the trunk.
When a change works all the way through the pipeline, it is committed. This is something we are working towards using Electric Commander at GAP. It’s also been wanted in one form or another at four of the last five places I’ve worked. The fifth one was Shaw.
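The pre-flight gate might be sketched like this. `preflight_push` and the toy stages are hypothetical, standing in for a full cloned pipeline run (build, isolation tests, integration tests):

```python
# Sketch of a pre-flight gate: a change reaches trunk only after an exact
# clone of the full pipeline passes with that change applied.

trunk = ["c1", "c2"]  # known-good commits

def run_cloned_pipeline(base, change, stages):
    """Run every stage against base + change in a throwaway pipeline clone."""
    candidate = base + [change]
    return all(stage(candidate) for stage in stages)

def preflight_push(change, stages):
    if run_cloned_pipeline(trunk, change, stages):
        trunk.append(change)   # only green changes ever reach trunk
        return True
    return False               # developer fixes and retries; trunk stays green

# Toy stages: the second (integration) stage rejects the commit named "bad".
stages = [lambda c: True, lambda c: "bad" not in c]
print(preflight_push("good", stages), trunk)  # → True ['c1', 'c2', 'good']
print(preflight_push("bad", stages), trunk)   # → False ['c1', 'c2', 'good']
```

The point of the design is that the gate runs the same pipeline definition as trunk, so a green pre-flight run is evidence about the real pipeline, not a weaker proxy.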
We’re getting closer to a pipeline that can never be “red”. I don’t really like the name “pristine trunks”, but I haven’t come up with better wording yet. Not a great diagram, but the black line is trunk and the brown and blue lines are pre-flight pipelines for different developers. The first brown pipeline runs and successfully commits. The blue pipeline runs with a later change and also successfully commits. While that pipeline was running, another brown pipeline runs and fails, so that change is not committed. Another blue pipeline also runs and commits. And the brown pipeline is corrected and eventually commits. There’s still some opportunity for redness here, as testing is running while new commits are being made (a common problem for developers). We need a way for ordered entry of changes.
Another not so great diagram. Here, I’m saying trunk is already at change n and we know it’s green. Change n+1 comes in and a pre-flight pipeline kicks off with that change. What happens when the next change comes in? We could wait for this pipeline to complete, but that won’t scale with many changes coming in and/or a long-running pipeline.
With cloneable pipelines, we can do better. Instead of waiting, we kick off pipelines both with and without the in-flight n+1 change to start testing the n+2 change: a so-called quantum pipeline.
When the n+1 pipeline completes successfully, we can “collapse the wave” and abort the now-unneeded pipeline that assumed the n+1 change would fail.
And here’s a more complicated example with 3 changes in flight at once. When n+1 completes, abort half of the pipelines. When n+2 fails, abort the associated n+3 pipeline. n+3 completes and successfully commits. We now have only changes that succeed through the entire pipeline being committed to trunk. It should be impossible for trunk to ever be “red”, at least within the length of the cloned pipeline. What’s more, these successful pipelines are exact copies of the trunk pipeline, so the artifacts from these builds can be used directly without being rebuilt in a trunk pipeline.
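The bookkeeping behind these diagrams can be sketched in a few lines. This is a minimal Python sketch, with hypothetical function names, of the two core ideas: enumerate one speculative pipeline per assumption about the earlier in-flight changes, then “collapse the wave” by aborting the runs whose assumption turned out wrong.

```python
from itertools import product

def speculative_runs(pending):
    """For an ordered queue of pending changes, enumerate one pipeline run
    per assumption about which earlier in-flight changes pass (True) or
    fail (False)."""
    runs = []
    for i, change in enumerate(pending):
        for assumption in product([True, False], repeat=i):
            # Each run pairs a change with its assumed outcomes for the
            # changes queued ahead of it.
            runs.append((change, dict(zip(pending[:i], assumption))))
    return runs

def collapse(runs, change, passed):
    """'Collapse the wave': once `change` actually finishes, drop its own
    run and abort every speculative run that assumed the opposite outcome."""
    return [(c, a) for c, a in runs
            if c != change and a.get(change, passed) == passed]

runs = speculative_runs(["n+1", "n+2", "n+3"])
assert len(runs) == 7            # 1 run for n+1, 2 for n+2, 4 for n+3
runs = collapse(runs, "n+1", passed=True)
assert len(runs) == 3            # n+2's surviving run plus 2 n+3 runs
```

As the slide notes, the number of speculative runs doubles with each additional in-flight change, which is exactly why the shared build service below matters.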
With cloneable pipelines, I can recreate pipelines for dependent projects and prevent breaks to downstream projects. Say I’m log4j and want to make sure my changes don’t break tomcat. Log4j could recreate the tomcat pipeline as part of its own pipeline and reject changes based on results that include the tomcat pipeline. Not a lot to say here, but I think there’s opportunity to communicate better across these types of boundaries with some of these concepts.
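One way to picture this: the cloned downstream pipeline becomes just another gate appended to the upstream one. A toy sketch, with made-up stage names standing in for the log4j and tomcat pipelines:

```python
def run_pipeline(stages, artifact):
    """Run each stage against the candidate artifact; green iff all pass."""
    return all(stage(artifact) for stage in stages)

def with_downstream(upstream_stages, downstream_stages):
    """Clone a dependent project's pipeline and append it as a final gate,
    so an upstream change is rejected if it would break the downstream build."""
    def downstream_gate(artifact):
        return run_pipeline(downstream_stages, artifact)
    return upstream_stages + [downstream_gate]

# Hypothetical stages: log4j's own build/tests plus a cloned tomcat pipeline.
log4j = [lambda a: True, lambda a: True]
tomcat = [lambda a: a != "breaking-change"]
pipeline = with_downstream(log4j, tomcat)

assert run_pipeline(pipeline, "safe-change") is True
assert run_pipeline(pipeline, "breaking-change") is False
```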
With so many pipelines running, won’t this take a lot of time? One way to cut down on build times would be to have a central build service and use the “power of the swarm”. The bottom build of common.jar refers to the build service and builds normally, returning the built jar. The middle build runs the same build of common.jar, but this time the build service returns the previously built jar, as the code and dependencies are exactly the same. The top build runs with one new file changed; the build service compiles that one file, adds it to the previously built jar to make a new jar artifact, then returns that jar to the running build. Basically, all those pipelines are doing very similar work and could benefit from some level of sharing. A build service could do detailed analysis of individual compilable units and their dependencies and optimize the required work. And the build service infrastructure should be well-defined and reproducible for developers, to support “airplane mode”.
Next, I want to talk about extreme integration. Not a great term, but it’s an extension of extreme continuous integration. If you aren’t familiar with extreme continuous integration, it’s a concept where, as you are programming, a background process continuously runs your unit tests whenever you pause typing, so you get instant feedback without even needing to explicitly run them. Similarly, extreme integration applies the same automatic feedback at pipeline scale:
- Continuously run tests in a personal pipeline
- Continuously integrate from trunk into an integration pipeline
- Auto-commit to trunk when all is green
When all of this works and multiple developers are working together in trunk, it’s almost as if the code is a Google Wave.
(Break this up into multiple slides later: personal pipeline, integrating into extreme branch, integrating into trunk.)
Why develop locally at all? Why not have a cloud IDE? Everyone works on modules, code, and infrastructure remotely through a web-based front end. Here’s a really simple mock-up of what I mean.
Create project, personal pipeline red
Create test-driven infrastructure test, personal pipeline red
Create platform, personal pipeline red
Add tomcat package, personal pipeline green, auto-merged to project pipeline
Create jar project
Create class
Enter test, personal pipeline red
Enter code
Add dependency with dynamic range, personal pipeline green, auto-merge to project pipeline
Create war project
Create cucumber test, personal pipeline red
Create JSP file
Add dependency on jar (automatically adds dynamic range), personal pipeline green, auto-merge to project
Add test-driven infrastructure test for war, personal pipeline red
Create deployment script, personal pipeline green, auto-merge to project pipeline.
Use it to wire dependencies together. It has automatic setup and provisioning for all the environments and pipelines needed. It has continuous delivery built in; not something users even need to think about.
Some object to working remotely, but I think these are solvable problems, such as having push-button access to hotspots and saving VMs or even whole environments for investigating problems. Also, when I say cloud IDE, this doesn’t preclude the system actually being based locally.
Kodingen.com is doing a little bit of this around code, but I don’t know too much about it, and I think there’s a lot more to be done as far as controlling infrastructure and pipelines with something like this.
These topics are a bit tacked on, but they are somewhat related things I have been thinking about.
Somewhat like public bug tracking, customers could write public BDD tests. Developers could flesh out the steps, and a personal pipeline would let the reporter and product team know when the test is passing.
Compare functionality across products by having public BDD tests run against multiple products.
Public development: give public access to code, but not to the immune system.