Brian Moyles and Gareth Bowles from Netflix describe the continuous integration system that lets us build and deploy the Netflix streaming service fast and at scale.
CI and CD at Scale: Scaling Jenkins with Docker and Apache Mesos - Carlos Sanchez
In this presentation Carlos Sanchez will share his experience running Jenkins at scale, using Docker and Apache Mesos to create one of the biggest (if not the biggest) Jenkins clusters to date.
By taking advantage of Apache Mesos, the Jenkins platform is dynamically scaled to run jobs across hundreds of Jenkins masters, on Docker containers distributed across the Mesos cluster. Jenkins slaves are dynamically created based on load, using the Jenkins Mesos and Docker plugins, running in containers distributed across multiple hosts, and isolating job execution.
This presentation will allow a better understanding of Apache Mesos and the challenges of running Docker containerized and distributed applications, particularly JVM ones, by sharing a real world use case, including good and bad decisions and how they affected the development.
From Monolith to Docker Distributed Applications - Carlos Sanchez
Docker is revolutionizing the way people think about applications and deployments. It provides a simple way to run and distribute Linux containers for a variety of use cases, from lightweight virtual machines to complex distributed microservice architectures. But migrating an existing Java application to a distributed microservice architecture is no easy task, requiring a shift in software development, networking, and storage to accommodate the new architecture. This presentation provides insights into the experience of the speaker and his colleagues in creating a Jenkins platform based on distributed Docker containers running on Apache Mesos and Marathon and applicable to all types of applications, especially Java- and JVM-based ones.
Divide and Conquer: Easier Continuous Delivery using Micro-Services - Carlos Sanchez
Docker has revolutionized the way people think about applications and deployments. It provides a simple way to run and distribute Linux containers for a variety of use cases, from lightweight virtual machines to complex distributed micro-services architectures.
Containers make it possible to run services in isolation with a minimal performance penalty, increased speed, easier configuration and less complexity, making them ideal for continuous integration and continuous delivery workloads. But testing a distributed micro-services architecture is no easy task, requiring a shift in mindset and tooling to accommodate the new architecture.
We will provide insight on our experience creating a Jenkins platform based on distributed Docker containers running on Apache Mesos and Marathon, applicable to all types of applications, but especially Java- and JVM-based ones.
The DevOps movement, like its Agile predecessor, is focused on improving the communication and collaboration between the development and operations teams responsible for different aspects of an app throughout its lifecycle. While successful DevOps initiatives start and end with organizational and cultural change, there are also common practices and tools that enable and support DevOps. In this session you will learn about the DevOps practice of Configuration as Code: managing and maintaining application configuration as versionable assets. This session will focus on the practice of Configuration as Code, while demonstrating a few of the popular tools available today, including Opscode Chef, PowerShell DSC (Desired State Configuration) and others. If you are interested in implementing a DevOps initiative in your organization, then this session is a must-see.
Testing Distributed Micro Services. Agile Testing Days 2017 - Carlos Sanchez
Docker is revolutionizing the way people think about applications and deployments. It provides a simple way to run and distribute Linux containers for a variety of use cases, from lightweight virtual machines to complex distributed micro-services architectures.
Containers make it possible to run services in isolation with a minimal performance penalty, increased speed, easier configuration and less complexity, making them ideal for continuous integration and continuous delivery workloads.
But testing a distributed micro-services architecture is no easy task, requiring a shift in mindset and tooling to accommodate the new architecture.
We will provide insight on our experience creating a Jenkins platform based on distributed Docker containers running on Apache Mesos and Marathon, applicable to all types of applications, but especially Java- and JVM-based ones.
What’s New in Docker - Victor Vieux, Docker, Inc.
It’s the first breakout after the keynote and you need to know more about all the latest and greatest Docker announcements. We've got you covered! In this session, Victor Vieux will go deeper into what's new with Docker, demo the latest features and answer your questions.
Using Containers for Building and Testing: Docker, Kubernetes and Mesos. FOSD... - Carlos Sanchez
Building and testing is a great use case for containers, both due to the dynamic and isolation aspects, but running on just one machine is not enough, and the setup quickly needs to scale to a cluster. But which cluster technology should be used? Docker Swarm? Apache Mesos? Kubernetes? How do they compare? All of them can be used to dynamically run a cluster of containers.
The Jenkins platform is an example of dynamically scaling by using several Docker cluster and orchestration platforms, using containers to run build agents and jobs, and also isolate job execution.
This talk will cover these main container clusters, outlining the pros and cons, the current state of the art of the technologies and Jenkins support.
The presentation will allow a better understanding of using Docker in the main Docker cluster/orchestration platforms out there (Docker Swarm, Apache Mesos, Kubernetes), sharing my experience and helping people decide which one to use, going through Jenkins examples and current support.
Securing Containers, One Patch at a Time - Michael Crosby, Docker, Inc.
Responsible disclosure is a key ingredient of any solid security strategy. In this session, Docker maintainer Michael Crosby will explain the ins and outs of CVE-2016-9962: how it was discovered, how it could even happen in the first place, and how it was addressed. A vertiginous abseil at the boundaries of the kernel, in the fascinating land of system calls and randomized address space. You will think twice before leaking a file descriptor again.
How to Improve Your Image Builds Using Advanced Docker Build - Docker, Inc.
Nicholas Dille, Haufe-Lexware + Docker Captain -
Docker continues to be the standard tool for building container images. For more than a year, Docker has shipped with BuildKit as an alternative image builder, providing advanced features for secret and cache management. These features help to make image builds faster and more secure. In this session, Docker Captain Nicholas Dille will teach you how to use BuildKit features to your advantage.
Seven Habits of Highly Effective Jenkins Users (2014 edition!) - Andrew Bayer
What plugins, tools and behaviors can help you get the most out of your Jenkins setup without all of the pain? We'll find out as we go over a set of Jenkins power tools, habits and best practices that will help with any Jenkins setup.
This talk will give you tips and tricks for better build-time performance and smaller images. The most important take-away: use multi-stage Dockerfiles and enable BuildKit.
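As a rough illustration of that take-away (the Go application, image tags and paths below are invented for the example, not taken from the talk), a multi-stage Dockerfile might look like:

```dockerfile
# syntax=docker/dockerfile:1
# Build stage: use the full toolchain image only for compiling
FROM golang:1.20 AS build
WORKDIR /src
COPY . .
RUN go build -o /out/app .

# Final stage: ship only the compiled binary in a small runtime image
FROM alpine:3.18
COPY --from=build /out/app /usr/local/bin/app
ENTRYPOINT ["/usr/local/bin/app"]
```

Enabling BuildKit (e.g. `DOCKER_BUILDKIT=1 docker build .`; it is on by default in recent Docker releases) lets independent stages build in parallel and skips stages the requested target doesn't need, while the final image contains only the runtime stage.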
(Click 2nd slide for video) Deploy PHP apps faster in 2017. This talk focuses on how PHP developers can use simple Ansible scripts to rapidly configure new dev and production servers from scratch, and deploy their apps. No more "snowflake servers"!
This is a general introduction to DevOps essentials and Ansible, with a few extras for PHP developers, including some best practice tips and overview of two major Ansible-based PHP projects, Drupal-VM and Trellis (modern WordPress setup).
The Golden Ticket: Docker and High Security Microservices by Aaron Grattafiori - Docker, Inc.
True microservices are more than simply bolting a REST interface on your legacy application, packing it in a Docker container and hoping for the best. Security is a key component when designing and building out any new architecture, and it must be considered from top to bottom. Oompa Loompas might not be considered "real" microservices, but Willy Wonka still has them locked down tight!
In this talk, Aaron will briefly touch on the idea and security benefits of microservices before diving into practical and real world examples of creating a secure microservices architecture. We'll start with designing and building high security Docker containers, using and examining the latest security features in Docker (such as User Namespaces and seccomp-bpf) as well as some typically forgotten security principles. Aaron will end by exploring related challenges and solutions in the areas of network security, secrets management and application hardening. Finally, while this talk is geared towards microservices, it should prove informational for all Docker users, building a PaaS or otherwise.
You Don't Have to Start Over! A Practical Guide for Adopting Docker in the En... - Docker, Inc.
So you are looking to adopt Docker, but receive feedback such as "our development pipeline won't support containers" or "the applications aren't microservices, so I don't see a benefit." You are not alone; these and other statements are common misconceptions when considering using Docker in the enterprise. Perhaps a real enterprise use case, with some tips on objection handling, would support your goal of adopting Docker in your organization? In this presentation, Chris Ciborowski, CEO and Principal Consultant at Nebulaworks and Docker Captain, will discuss ways that you can leverage Docker in existing enterprise environments, providing tangible benefits to both developers and operations teams and accelerating DevOps adoption. He will also provide a few insider tips on objection handling learned while working on container adoption with enterprise clients.
From Monolith to Docker Distributed Applications - Carlos Sanchez
Docker is revolutionizing the way people think about applications and deployments. It provides a simple way to run and distribute Linux containers for a variety of use cases, from lightweight virtual machines to complex distributed micro-services architectures.
Containers make it possible to run services in isolation with a minimal performance penalty, increased speed, easier configuration and less complexity, making them ideal for continuous integration and continuous delivery workloads. But migrating an existing application to a distributed microservices architecture is no easy task, requiring a shift in software development, networking and storage to accommodate the new architecture.
We will provide insight on our experience creating a Jenkins platform based on distributed Docker containers running on Apache Mesos and Marathon, applicable to all types of applications, but especially Java- and JVM-based ones.
Ansible is a tool for configuration management. The big difference from Chef and Puppet is that Ansible doesn't need a master server or a special client on the managed servers. It works entirely over SSH, and the configuration is written in YAML.
These slides give a short introduction & motivation for Ansible.
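To make the agentless, YAML-over-SSH model concrete, here is a minimal playbook sketch (the `webservers` host group and the nginx example are illustrative assumptions, not taken from the slides):

```yaml
# site.yml - everything below runs over plain SSH; no agent on the target hosts
- hosts: webservers
  become: yes
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present

    - name: Ensure nginx is running and enabled at boot
      service:
        name: nginx
        state: started
        enabled: yes
```

Run it with `ansible-playbook -i inventory site.yml`; Ansible connects to each host in the `webservers` group over SSH and applies the tasks, with no master server involved.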
Continuous Deployment with Jenkins on Kubernetes - Matt Baldwin
Google Senior Software Engineer Evan Brown's presentation from the March 18, 2016 Seattle Kubernetes meetup hosted by StackPointCloud. Evan shows how you deploy Jenkins into Kubernetes, then takes us through CD and canary deployments. Join us in Seattle: http://www.meetup.com/Seattle-Kubernetes-Meetup/
Webinar: Development Swarm Cluster with Docker Compose V3 - Codefresh
Docker 1.13 introduced a new version of Compose that simplifies deployment. In our last webinar, Alexei Ledenev (Chief Researcher at Codefresh) walked us through the new features in Compose V3 that developers can use for deployment. In case you missed it, we recorded it for you to view on demand. During the session, you’ll learn how to quickly create a multi-node Swarm cluster on your laptop (without needing to install and manage additional VMs).
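As a rough sketch of what such a Compose V3 file looks like (the nginx service and replica count are illustrative assumptions, not from the webinar):

```yaml
# docker-compose.yml, Compose file format version 3.
# The "deploy" section is honored by "docker stack deploy" in swarm mode
# and ignored by plain "docker-compose up".
version: "3"
services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"
    deploy:
      replicas: 3
      restart_policy:
        condition: on-failure
```

On a swarm (initialized with `docker swarm init`), deploying it with `docker stack deploy -c docker-compose.yml demo` schedules the three replicas across the cluster's nodes.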
An Integrated Pipeline for Private and Public Clouds with Jenkins, Artifactor... - VMware Tanzu
This presentation was delivered jointly with a hands-on demo. The presentation briefly discusses how Cloud Foundry enables organizations to continuously deliver high-quality software and highlights an integrated development process built with Jenkins, Artifactory and Cloud Foundry.
The Netflix API has undergone a transformation since its inception in 2008. It has transitioned from being a public API with a generic RESTful interface to a platform for creating highly optimized, device-centric APIs that are critical to delivering the Netflix streaming experience on over 1000 different device types.
This talk covers the design principles that shaped the transformation of the API as well as the technology that powers it, enabling rapid user experience iteration and bringing Netflix streaming to almost 38 million subscribers around the world.
Latest version of the Netflix Cloud Architecture story, given at Gluecon on May 23rd, 2012. Gluecon rocks, and lots of Van Halen references were added for the occasion. The tradeoff between a developer-driven, high-functionality AWS-based PaaS and an operations-driven, low-cost, portable PaaS is discussed. The three sections cover the developer view, the operator view and the builder view.
SpringOne Platform 2016
Speaker: Amit Gupta; Product Manager, Pivotal.
Find out how Cloud Foundry does continuous integration, from a GitHub pull request against a small repository to an official final release. See how we're striving to raise the bar for open source projects when it comes to rigor, automation, and transparency of our CI. We’ll talk about how we:
- integrate work from community contributors and core Foundation contributors, spread across multiple teams and continents;
- test at multiple layers, from fast, tightly-scoped unit tests to full-blown deployments and acceptance tests across multiple IaaSes; and
- keep the full end-to-end process transparent to the community: not just the source code, but also the build pipelines and the discussions that surround artifact promotion.
The audience will come away with strategies for continuously integrating and deploying their own Cloud Foundry installations or other distributed systems.
Using PaaS for Continuous Delivery (Cloud Foundry Summit 2014) - VMware Tanzu
Technical Track presented by Elisabeth Hendrickson at Pivotal.
With continuous delivery, you release frequently and with very little, or no, manual intervention. That requires three things: fully automated tests; a continuous integration server that executes those tests and can promote successful deployments; and an automated deployment mechanism with zero downtime. PaaSes are a perfect fit for this. Cloud Foundry makes zero-downtime automated deployments straightforward. Further, cloud-based CI services such as CloudBees work well with Cloud Foundry. In this talk, Elisabeth explains how to achieve continuous delivery with Cloud Foundry using one of our own applications (docs.cloudfoundry.org) as an example.
Continuous Integration: SaaS vs Jenkins in Cloud - Ideato
After the rise of cloud computing and Docker, is it still preferable to adopt the classic Continuous Integration SaaS offerings over a Jenkins system in the cloud?
This talk presents a real use case at Ideato: the migration from a SaaS (Travis) to a Jenkins system in the cloud, exploiting on-demand capabilities through the Amazon Web Services cloud and containerization through Docker.
Taking into account the technical aspects of the implementation, as well as those with a potential economic impact such as the lack of automation and the setup time, we will show the strengths and weaknesses of this system and how it can be applied to a range of projects. Finally, we will list a number of recently released products that can further evolve the current system.
A high level view of how Netflix culture, open source technology, and custom software can build a continuous delivery pipeline to allow multiple deployments a day.
Netflix designed a massive-scale, cloud-based media transcoding system from scratch for processing professionally produced studio content. We bucked the common industry trend of vertical scaling and instead designed a horizontally scaled elastic system using AWS to meet the unique scale and time constraints of our business. Come hear how we designed this system, how it continues to get less expensive for Netflix, and how AWS represents a transformative opportunity in the wider media-owning industry.
Continuous Delivery at Netflix, and beyond – Mike McGarr
A talk I gave on how Netflix delivers code to production, some of the enabling factors and recommendations for how to implement continuous delivery in your organization.
Building a Scalable CI Platform using Docker, Drone and Rancher – Shannon Williams
In these slides from our online meetup from August of 2015, we discussed using Docker, Drone and Rancher to build a scalable CI/CD platform.
Over the course of the session we discussed and demonstrated:
• General Benefits of Docker for CI/CD
• Why we decided to use Drone
• How we deployed Drone with Docker Compose and Rancher
• How we automate deployment of testing environments with Rancher
• The latest news on Rancher and Docker
You can view the recording here: https://youtu.be/86u8pVESbPQ
Docker moves very fast, with an edge channel released every month and a stable release every 3 months. Patrick will talk about how Docker introduced Docker EE and a certification program for containers and plugins with Docker CE and EE 17.03 (from March), the announcements from DockerCon (April), and the many new features planned for Docker CE 17.05 in May.
This talk covers what's new in Docker and what's next on the roadmap.
Slides from DockerCon SF 2015 –
Docker at Lyft: Speeding up development w/ Matthew Leventi
Talk description: Learn how Docker enables Lyft to increase developer productivity across our engineering organization. We'll go through a local development model that decreases our developer onboard time, and keeps our teams focused on delivering product goals. We'll also talk about how we use Docker to test changes to our servers and allow QA testing of our mobile clients. You'll come out of the talk with techniques and reasons for integrating docker not just in the cloud but also onto developer's laptops.
Docker containers have been making inroads into the Windows and Azure world, offering container-based counterparts to traditional Azure IaaS & PaaS services that can be more responsive, cost-effective, and agile. In this session for the Charlotte Azure User Group, we will take an in-depth look at the intersection of Docker and Azure, and how Docker is empowering next-gen Azure services.
Here's the link to CAG meetup for the event - https://www.meetup.com/Charlotte-Microsoft-Azure/events/fpftgmyxjbjb/
Docker - Demo on PHP Application Deployment – Arun Prasath
Docker is an open-source project to easily create lightweight, portable, self-sufficient containers from any application. The same container that a developer builds and tests on a laptop can run at scale, in production, on VMs, bare metal, OpenStack clusters, public clouds and more.
In this demo, I will show how to build an Apache image from a Dockerfile and deploy a PHP application from an external folder using custom configuration files.
(ARC402) Deployment Automation: From Developers' Keyboards to End Users' Scre... – Amazon Web Services
Some of the best businesses today are deploying their code dozens of times a day. How? By making heavy use of automation, smart tools, and repeatable patterns to get process out of the way and keep the workflow moving. Come to this session to learn how you can do this too, using services such as AWS OpsWorks, AWS CloudFormation, Amazon Simple Workflow Service, and other tools. We'll discuss a number of different deployment patterns, and what aspects you need to focus on when working toward deployment automation yourself.
KVM and Docker LXC Benchmarking with OpenStack – Boden Russell
Passive benchmarking with Docker LXC and KVM using OpenStack hosted in SoftLayer. These results provide initial insight as to why LXC as a technology choice offers benefits over traditional VMs, and seek to answer the typical initial LXC question – "why would I consider Linux Containers over VMs?" – from a performance perspective.
Results here provide insight as to:
- Cloudy ops times (start, stop, reboot) using OpenStack.
- Guest micro benchmark performance (I/O, network, memory, CPU).
- Guest micro benchmark performance of MySQL; OLTP read, read / write complex and indexed insertion.
- Compute node resource consumption; VM / Container density factors.
- Lessons learned during benchmarking.
The tests here were performed using OpenStack Rally to drive the OpenStack cloudy tests and various other linux tools to test the guest performance on a "micro level". The nova docker virt driver was used in the Cloud scenario to realize VMs as docker LXC containers and compared to the nova virt driver for libvirt KVM.
Please read the disclaimers in the presentation, as this is only intended to be the "tip of the iceberg".
Continuous Delivery the hard way with Kubernetes – Luke Marsden
This talk shows three increasingly advanced levels of continuous delivery with Kubernetes and GitLab (as an example), arguing for a continuous delivery architecture which has an explicit _Release Manager_ component. We then propose Flux, the open source project which powers the _Deploy_ feature of Weave Cloud, as an implementation of that idea. This approach is the precursor to GitOps.
Virtualizing Apache Spark and Machine Learning with Justin Murray – Databricks
This talk explains the reasons why virtualizing Spark, in-house or elsewhere, is a requirement in today’s fast-moving and experimental world of data science and data engineering. Different teams want to spin up a Spark cluster “on the fly” to carry out some research and quickly answer business questions. They are not concerned with the availability of the server hardware – or with what any other team might be doing on it at the time. Virtualization provides the means of working within your own sandbox to try out the new query or Machine Learning algorithm. Deep performance test results will be shown that demonstrate that Spark and ML programs perform equally well on virtual machines just like native implementations do. An early introduction is given to the best practices you should adhere to when you do this.
Best Practices for Running Kafka on Docker Containers – BlueData, Inc.
Docker containers provide an ideal foundation for running Kafka-as-a-Service on-premises or in the public cloud. However, using Docker containers in production environments for Big Data workloads using Kafka poses some challenges – including container management, scheduling, network configuration and security, and performance.
In this session at Kafka Summit in August 2017, Nanda Vijyaydev of BlueData shared lessons learned from implementing Kafka-as-a-Service with Docker containers.
https://kafka-summit.org/sessions/kafka-service-docker-containers
RTP NPUG: Ansible Intro and Integration with ACI – Joel W. King
Ansible is one of the newer and more exciting automation toolsets for networking. Ansible (unlike Puppet and Chef) is agentless, which makes it significantly easier to automate existing devices that may not have an agent installed – such as many networking devices.
Networks are evolving from hundreds or thousands of individual devices to the Software-Defined Network paradigm of a single fabric under a central controller. The GUI on top of an SDN controller isn’t sufficient and will still need automation.
This presentation describes how Ansible can add value to configuration management of a Cisco Application Centric Infrastructure (ACI) infrastructure.
Similar to Building Cloud Tools for Netflix with Jenkins (20)
3. What We Build
- Large number of loosely coupled Java web services
- Common code in libraries that can be shared across apps
- Each service is “baked”: installed onto a base Amazon Machine Image and created as a new AMI ...
- ... which is then deployed into a Service Cluster (a set of Auto Scaling Groups running a particular service)
9. Jenkins Statistics
- 1600 job definitions, 50% SCM-triggered
- 2000 builds per day
- Common Build Framework updates trigger 800 rebuilds; by scaling up to 20 cloud slaves we can complete the flood of new builds in 30 minutes
- 2 TB of build data
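As a sanity check on those numbers: with the 4 executors per standard slave mentioned later in the talk, and assuming builds average about three minutes (an assumption, not a figure from the slides), the 30-minute flood time falls out directly:

```python
# Back-of-the-envelope check of the build-flood numbers above.
# Assumption (not from the slides): builds average ~3 minutes each.
rebuilds = 800
slaves = 20
executors_per_slave = 4          # standard slaves run 4 simultaneous builds
minutes_per_build = 3

concurrent = slaves * executors_per_slave      # 80 builds running at once
waves = -(-rebuilds // concurrent)             # ceil(800 / 80) = 10 waves
total_minutes = waves * minutes_per_build      # 10 * 3 = 30 minutes
print(total_minutes)                           # 30
```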
10. Jenkins Architecture
[Architecture diagram: a single Jenkins master (Red Hat Linux, 2x quad-core x86_64, 26 GB RAM) in the Netflix data center; standard slave groups of x86_64 build nodes (Amazon Linux, m1.xlarge) running in an AWS us-west-1 VPC; ~40 custom slaves on miscellaneous O/S and architectures, maintained by product teams; and ad-hoc slaves in the Netflix data center and office.]
11. Other Uses of Jenkins
- Monitoring of our test and production Cassandra clusters
- Automated integration tests, including bake and deploy
- Production bake and deployment
- Housekeeping of the build/deploy infrastructure:
  - Reap unreferenced artifacts in Artifactory
  - Disable Jenkins jobs with no recent successful builds
  - Mark Jenkins builds as permanent if they are used by an active deployment in prod or test
  - Alert owners when slaves get disconnected
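The housekeeping tasks above are implemented at Netflix as Jenkins jobs and system Groovy scripts. As a rough illustration of one of them, here is a plain-Python sketch of the "disable jobs with no recent successful builds" decision logic; the job records and the 90-day threshold are invented for the example.

```python
from datetime import datetime, timedelta

# Hypothetical staleness cutoff; the real threshold is not stated in the talk.
STALE_AFTER = timedelta(days=90)

def jobs_to_disable(jobs, now):
    """Return names of enabled jobs whose last successful build is too old."""
    stale = []
    for job in jobs:
        if not job["enabled"]:
            continue                                # already disabled, skip
        last_success = job.get("last_success")      # None = never succeeded
        if last_success is None or now - last_success > STALE_AFTER:
            stale.append(job["name"])
    return stale

now = datetime(2012, 6, 1)
jobs = [
    {"name": "api-build", "enabled": True, "last_success": datetime(2012, 5, 30)},
    {"name": "old-proto", "enabled": True, "last_success": datetime(2011, 9, 1)},
    {"name": "retired",   "enabled": False, "last_success": None},
]
print(jobs_to_disable(jobs, now))   # ['old-proto']
```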
12. Jenkins Scaling Challenges
- A flood of simultaneous builds can quickly exhaust all build executors and clog the pipeline
- A flood of simultaneous builds can hammer the rest of the infrastructure (especially Artifactory)
- Making global changes to all jobs
- Some plugins don’t scale to our number of jobs/builds
- Hard to test every job before upgrading the master or plugins
- The large amount of state encapsulated in build data makes restoring from backup time-consuming
13. Netflix Extensions to Jenkins
- Job DSL plugin: allows jobs to be set up with minimal definition, using templates and a Groovy-based DSL
- Housekeeping and maintenance processes implemented as Jenkins jobs and system Groovy scripts
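To illustrate the "minimal definition" idea behind the Job DSL plugin: a job declares only the fields that vary, and a shared template supplies the rest. The real plugin uses a Groovy-based DSL inside Jenkins; this Python sketch only shows the template-expansion concept, and all field names are invented.

```python
# Hypothetical shared defaults every job inherits (names invented).
TEMPLATE = {
    "scm": "perforce",
    "build_framework": "cbf",          # Common Build Framework
    "ant_targets": ["clean", "build"],
    "publish_to": "artifactory",
}

def expand_job(minimal):
    """Merge a minimal job declaration over the shared template."""
    job = dict(TEMPLATE)
    job.update(minimal)                # declared fields override defaults
    return job

# A job only needs to say what makes it different.
job = expand_job({"name": "api-server", "ant_targets": ["clean", "release"]})
print(job["publish_to"], job["ant_targets"])
```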
15. The DynaSlave Plugin: Genesis
- Original build fleet: 15 VMs on datacenter hardware, 8 GB RAM, single vCPU, 2 executors per node
- Many jobs build on SCM change; changes to our common build framework create a massive thundering herd, since everything depends on it
- Ask for more VMs? Modify CBF less frequently?
16. The DynaSlave Plugin: What We Wanted
- Leverage our extensive AWS infrastructure, tooling, and experience
- No manual fiddling with machines once they launch
- Quick and easy maintenance of a fixed pool of slave nodes that can grow/shrink to meet build demand
17. The DynaSlave Plugin: What We Have
- Exposes a new endpoint in Jenkins that EC2 instances in the VPC use for registration
- Allows a slave to name itself, label itself, and tell Jenkins how many executors it can support
- EC2 == ephemeral: disconnected nodes that are gone for > 30 mins are reaped
- Sizing handled by EC2 ASGs; tweaks passed through via user data (labels, names, etc.)
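A standalone sketch of the two behaviours above: a slave registers itself (name, labels, executor count), and nodes gone for more than 30 minutes are reaped. The actual DynaSlave plugin is a Jenkins extension; this model only mirrors its bookkeeping, and the field names are invented.

```python
from datetime import datetime, timedelta

REAP_AFTER = timedelta(minutes=30)     # cutoff stated on the slide

class SlavePool:
    def __init__(self):
        self.slaves = {}   # name -> {"labels", "executors", "last_seen"}

    def register(self, name, labels, executors, now):
        """Handle a slave's self-registration call to the new endpoint."""
        self.slaves[name] = {"labels": labels, "executors": executors,
                             "last_seen": now}

    def heartbeat(self, name, now):
        self.slaves[name]["last_seen"] = now

    def reap(self, now):
        """Drop slaves gone longer than the cutoff (EC2 == ephemeral)."""
        gone = [n for n, s in self.slaves.items()
                if now - s["last_seen"] > REAP_AFTER]
        for name in gone:
            del self.slaves[name]
        return gone

pool = SlavePool()
t0 = datetime(2012, 6, 1, 12, 0)
pool.register("buildnode01", ["standard"], 4, t0)
pool.register("buildnode02", ["c-builds"], 2, t0)
pool.heartbeat("buildnode01", t0 + timedelta(minutes=45))
print(pool.reap(t0 + timedelta(minutes=50)))   # ['buildnode02']
```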
18. The DynaSlave Plugin: What’s Next
- Dynamic resource management: have Jenkins respond to build demand and manage its own slave pools
- Slave groups: allow us to create specialized pools of build nodes, isolated from the general population
- Refresh mechanism for slave tools (JDKs, Ant versions, etc.)
- Enhanced security/registration of nodes
- Give it back to the community (watch techblog.netflix.com!)
Abstract: Over the last couple of years Netflix’s streaming service has become almost completely cloud-based, using Amazon’s AWS. This talk will delve into our build and deployment architecture, detailing the evolution of our continuous integration systems, which helped prepare us for the cloud move.
We work on the Engineering Tools team at Netflix. Both of us came a long way to be here.
Our team is all about creating tools and systems for our engineers to use to build, test and deploy their apps to the cloud (and the DC if they reaaaaally have to :)).
I’ll give an overview of our continuous integration system and how Jenkins fits into it, then Brian will talk about how we’ve extended Jenkins and some of the challenges we’ve found running it at such a large scale.
To get to the cloud, we rearchitected the Netflix streaming service into many individual modules implemented as web services, usually web applications or shared libraries (jars).
Our team was responsible for creating a set of easy-to-use tools to simplify and automate the build of the applications and shared libraries.
We were also responsible for building the base machine image, creating the architecture for automating the assembly (aka baking, nothing to do with Qwikster!) of the individual application images, and building the web-based tool used to deploy and manage the application clusters. But we’ll concentrate on our build process for this talk.
Note that a key aspect of using so many shared services is that each service team has to rebuild often in order to pick up changes from the other services they depend on. This is the CONTINUOUS part of continuous integration, and is where Jenkins comes in.
Here are a few details on how we build all those cloud services.
We wrote a Common Build Framework, based on Ant with some custom Groovy scripting, that’s used by all our development teams to build different kinds of libraries and apps.
For the continuous integration server to run all those builds, we picked Jenkins because it’s very feature-rich, easy to extend, and has a very active community.
We use Perforce for our version control system, as it’s arguably the best centralized VCS available. But we’re making increasing use of Git; for example, our many open-sourced projects are all hosted on GitHub, and we use Jenkins to build them.
We publish library JARs and application WAR files to the Artifactory binary repository. This gives us access to the build metadata and allows us to add Ivy to Ant to abstract the build and runtime jars into a dynamic dependency graph, so each project only has to know about its immediate dependencies.
Unlike many shops, we don’t use Jenkins plugins to do build tasks such as publishing to Artifactory; these are implemented in our common build framework to give us finer-grained control over functionality without having to patch a bunch of plugins.
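The Ivy point above, that each project declares only its immediate dependencies while the full graph is resolved transitively, can be sketched as a simple graph traversal; the dependency data here is invented for illustration.

```python
# Hypothetical immediate-dependency declarations (what each Ivy file knows).
deps = {
    "api-server": ["platform-lib", "netty"],
    "platform-lib": ["commons-util"],
    "netty": [],
    "commons-util": [],
}

def resolve(project, graph):
    """Return the full transitive dependency set from immediate deps only."""
    seen = set()
    stack = list(graph[project])
    while stack:
        dep = stack.pop()
        if dep not in seen:
            seen.add(dep)
            stack.extend(graph.get(dep, []))   # walk that dep's own deps
    return seen

print(sorted(resolve("api-server", deps)))   # ['commons-util', 'netty', 'platform-lib']
```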
Here is all you need to do in Jenkins to set up a typical project’s build job. You just tell Jenkins where to find the source code and add in the Common Build Framework, then specify what targets to call from your Ant build file.
And here is most of a typical project’s Ant and Ivy files. You can see the Ant code simply pulls in one of the standard framework entry points, like library, webapplication, etc.
Then the Ivy file specifies what needs to get built and what the dependencies are. We have some extra Groovy code added to our Ant scripts that can drive Ant targets based on the Ivy artifact definitions. This helps make the build definition declarative and yet flexible.
Yes, XML makes your eyes bleed, and there is a lot of redundancy here. But at least it’s small and manageable.
Let’s take a closer look at how we use Jenkins as the core of our build infrastructure, plus a few other interesting uses we’ve come up with.
*** The other 50% of jobs are manual or run on a fixed schedule. ***
Our Jenkins master runs on a physical server in our data center. The master provides the UI for defining build jobs, plus controlling and monitoring their execution.
Slave servers are used to execute the actual builds. Our standard slaves can each run 4 simultaneous builds. Custom slave groups are set up for requirements such as C/C++ builds or jobs with high CPU or memory needs.
We vary the number of slaves from 15 to 30 depending on demand. This is currently a manual operation, but we’re working on autoscaling.
Our cloud slaves are set up in an AWS Virtual Private Cloud (VPC), which provides common network access between our data center and AWS. Amazon’s us-west-1 region is physically located close to our data center, so latency is not an issue.
Ad-hoc slaves in our DC or office are used by individual teams if they need an O/S variant other than those on our standard slaves, or a specific tool or licensed app.
We keep our standard slaves updated by maintaining a common set of tools (JDKs, Ant, Groovy, etc.) on the master and syncing the tools to the slaves when they are restarted. Custom slaves can also use this mechanism if they choose.
At its heart, Jenkins is just a really nice job scheduler, so we’ve found lots of other uses for it. Here are some of the main ones; in the interest of time I’m not going to describe each one in detail, but please hit us up with questions if you’re interested.
Housekeeping jobs usually use system Groovy scripts for access to the Jenkins runtime. We’re looking at posting some of these to the public scripts repository.
Now I’ll hand it over to Brian, who is going to talk about some scalability challenges and how we’re addressing them.
We’ve run into a number of scaling challenges as we’ve evolved our build pipeline: thundering-herd problems; modifying and managing 1600 jobs; making sure those 1600 jobs work from Jenkins version to Jenkins version and plugin version to plugin version; and so on.
Our goal, of course, is to have one-button build/test/deploy with as little human intervention as possible, and to make the developer’s life as pain-free as we can. All of these get in our way.
We’ve enhanced Jenkins with a few plugins and odd jobs:
- We’re working on a job DSL that will allow us to create job templates and simplify the process of configuring new jobs.
- We’ve got a number of housekeeping and maintenance jobs running via Jenkins and system Groovy scripts, doing things from disabling builds that have consistently failed for a long period with no intervention (abandoned jobs) to enforcing consistency in job configuration.
And we created the DynaSlave plugin, our cloud-based army of build nodes, to directly address one of our scalability problems: executor exhaustion and deep build queues during thundering herds/build storms.
When we started the project, our build node fleet was a set of virtual machines in our datacenter.
As I mentioned, when we change the build framework, everything tries to rebuild (which sounds crazy but is a good thing: *continuous integration*. The sooner we can find a problem, the sooner we can fix it).
We could’ve bounded our changes or restricted them to off hours, but at Netflix there isn’t really such a thing as off hours, and you’re bound to get in someone’s way! We could’ve deployed more VMs, but that involves other teams and leaves us with excess capacity and wasted resources during lulls...
Plus we had this great platform built on top of AWS and EC2. Why not leverage that?
We get to take advantage of our tooling and our experience with the service, we can add and remove capacity on demand, and maybe even make Jenkins master of its own domain and let it control the build node population directly.
At the time we started building this (mid-2011), no plugin we found could maintain a small fixed fleet of AWS resources for us. Plugins seemed to take aim at using EC2 for nothing but spikes in demand, whereas we wanted to forklift the whole fleet into the cloud.
We put together a plugin that accomplished some of those goals. The DynaSlave plugin currently allows an EC2 node to launch and register itself with Jenkins, totally hands-free. The slaves can tell Jenkins details about what they want to be, what they can build, and so on. We can tailor nodes to specific needs and create custom pools of nodes with different instance sizes. The plugin, today, has no idea these nodes are even in EC2: pool sizing is managed by AWS ASGs and our cloud management tools like Asgard, our Amazon management console (soon to be open sourced!).
We’re not done, though. We have a number of enhancements in the pipeline, but one of the bigger bits is dynamic resource management.
We’re still doing some things manually, like controlling the pool size. If someone wants to make a change to our framework, they have to remember to scale the pool up (but not too big, as that can kill other systems by proxy), and they have to remember to scale down after the event. That is EXTRA tedious, as resizing ASGs will swat away nodes that are still executing jobs.
Jenkins knows what the queue looks like and how many slaves are doing work, so we want to make the plugin intelligent enough to manage its own pools. When it scales the pool down, Jenkins can pause nodes that are idle and make sure those are the ones pulled by the ASG, as well as bleed off traffic from busy nodes that need to be reaped.
We’re planning on giving this back, so keep an eye on our blog at techblog.netflix.com for announcements to that effect.
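The dynamic resource management described here can be sketched as two small pieces of logic: a queue-driven target pool size, and idle-first victim selection for scale-down so the ASG never swats a node mid-build. The numbers and field names are assumptions for illustration.

```python
import math

# Hypothetical parameters; the 15-30 pool range is mentioned in the talk,
# the 4 executors per slave matches the standard slave description.
EXECUTORS_PER_SLAVE = 4
MIN_SLAVES, MAX_SLAVES = 15, 30

def desired_pool_size(queued_builds, running_builds):
    """Enough slaves for current work, clamped to the fixed pool bounds."""
    needed = math.ceil((queued_builds + running_builds) / EXECUTORS_PER_SLAVE)
    return max(MIN_SLAVES, min(MAX_SLAVES, needed))

def scale_down_victims(slaves, target):
    """Pick idle slaves first when shrinking the pool toward `target`."""
    idle = [s["name"] for s in slaves if s["busy_executors"] == 0]
    return idle[: max(0, len(slaves) - target)]

print(desired_pool_size(queued_builds=800, running_builds=80))   # 30 (capped)
print(desired_pool_size(queued_builds=0, running_builds=10))     # 15 (floor)
```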
Here are some places to look for more info.
Adrian’s presentations on Slideshare are a great resource if you want to know more about our cloud architecture in general.
We’re hiring!