The document describes the process of migrating a legacy monolithic system called San Diego to a microservices architecture with continuous deployment. It discusses how the system was originally complex and difficult to work with. The approach taken was to use a strangler pattern, building API-first services for each domain object that could be deployed independently. This allowed deploying new features continuously using Docker containers, load balancing, and automation. The results were improved performance, velocity, and job satisfaction for the team. Lessons learned include gaining team acceptance for change and ensuring stability of the new systems.
Getting out of the Job Jungle with Jenkins - Sonatype
Damien Coraboeuf, Multipharma, Clear2Pay
Implementing a CI/CD solution based on Jenkins has become very easy. Dealing with multiple feature, staging and release branches? Not so much. Having to handle that for multiple teams and multiple projects becomes a real challenge. This presentation shows a solution to scale to several thousands of jobs, used by dozens of different development and test teams, 24 hours a day, 7 days a week, on a worldwide schedule.
I will talk about the challenges that we’ve met, and how we’ve put in place a scalable and on-demand solution, secure and simple to use.
This is a real-life, real-scale story of making CI/CD a day-to-day reality by allowing development and test teams to consider automation as a simple and customisable service.
http://www.meetup.com/BruJUG/events/228994900/
During this session, you will be presented with a solution to the problem of scaling continuous delivery in Jenkins when your organisation has to deal with thousands of jobs: a self-service approach based on "pipeline as code" principles.
Codecoon is the next generation hosting portal from the punkt.de GmbH. In this talk we explain how we implemented the portal and its components using TYPO3 Flow, Opscode Chef, Vagrant and Sinatra. We give a detailed insight in why we used which technologies and which developer itches we want to tackle.
JUC Europe 2015: Scaling Your Jenkins Master with Docker - CloudBees
By Christophe Muller, Wyplay
Christophe will share his experience at Wyplay, a provider of connected TV middleware, with Jenkins scaling and best practices. At Wyplay, the Jenkins master server had grown to include 34 attached slaves, 78 plugins and 527 jobs. That scale alone would not have been a problem, but the team faced major issues with performance, reliability, upgradability and security.
Because of these issues, project teams started to build their own master servers and on several of them, the issues (particularly security) were even greater than on the original Jenkins master server.
The Wyplay team decided to migrate to a system where they could easily generate and manage new masters for each project. They migrated the 500 original jobs to this new infrastructure. Christophe will explain in his talk how Wyplay used Docker to support the migration to the new architecture. In addition to this master infrastructure, Wyplay also makes use of Docker for launching dedicated slaves for small jobs using the Docker plugin. Tips on how this infrastructure can be enhanced will be provided.
We know how complicated it is to have a stable grid, and how hard it is to maintain over time with enough capabilities to cover most browsers and platforms. Internally, we found that ~75% of our tests were executed in Firefox/Chrome, and the remaining were executed in Safari/IE. We decided to develop a tool where docker-selenium nodes are created, used and disposed on demand. For Safari/IE, we just forward the tests to Sauce Labs/BrowserStack.
Zalenium is an OSS extension that scales your local grid up and down dynamically with Docker containers. It uses docker-selenium to run tests in Firefox/Chrome, and when a different browser is needed, tests get redirected to a cloud testing service. Result: our test suites run faster, since most tests run on local Firefox/Chrome nodes, and we make smarter use of the cloud testing service we pay for.
Diego Molina – Software Engineer in Test, Zalando SE
Leo Gallucci – Software Engineer, Tools and Infrastructure, Zalando SE
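As a rough illustration of the routing idea above (not Zalenium's actual code), a few lines of Python can model the decision of sending Firefox/Chrome tests to local docker-selenium nodes and everything else to a paid cloud grid. The endpoint names are illustrative:

```python
# Hypothetical sketch of Zalenium-style routing: local containers for the
# common browsers, cloud grid (e.g. Sauce Labs / BrowserStack) for the rest.
LOCAL_BROWSERS = {"firefox", "chrome"}

def route_test(browser: str) -> str:
    """Return where a test for `browser` should run."""
    if browser.lower() in LOCAL_BROWSERS:
        return "local-grid"      # disposable docker-selenium container
    return "cloud-provider"      # forwarded to the paid cloud service

suite = ["chrome", "firefox", "safari", "ie", "chrome"]
routes = [route_test(b) for b in suite]
local_share = routes.count("local-grid") / len(routes)  # 0.6 for this suite
```

With the ~75% Firefox/Chrome split mentioned above, most test traffic never leaves the local grid, which is exactly where the cost saving comes from.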
DockerCon EU 2015: Continuous Integration with Jenkins, Docker and Compose - Docker, Inc.
Presented by Sandro Cirulli, Platform Tech Lead, Oxford University Press
Oxford University Press (OUP) recently started the Oxford Global Languages (OGL) initiative (http://www.oxforddictionaries.com/words/oxfordlanguages), which aims at providing language resources for digitally underrepresented languages. In August 2015 OUP launched two African-language websites, for Zulu (http://zu.oxforddictionaries.com) and Northern Sotho (http://nso.oxforddictionaries.com). The backend of these websites is based on an API that retrieves data in RDF from a triple store and delivers it to the frontend as JSON-LD.
The entire micro-service infrastructure for development, staging, and production runs on Docker containers in Amazon EC2 instances. In particular, we use Jenkins to rebuild the Docker image for the API based on a Python Flask application and Docker Compose to orchestrate the containers. A typical CI workflow is as follows:
- a developer commits code to the codebase
- Jenkins triggers a job to run unit tests
- if the unit tests are successful, the Docker image of the Python Flask application is rebuilt and the container is restarted via Docker Compose
- if the unit tests or the Docker build fail, the monitor view shows the Jenkins job in red and displays the name of the possible culprit who broke the build.
A demo of this CI workflow is available at http://www.sandrocirulli.net/continuous-integration-with-jenkins-docker-and-compose
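The pass/fail gate in the workflow above can be sketched in a few lines of Python (a simplified model for illustration, not OUP's actual Jenkins job):

```python
def ci_status(tests_passed: bool, build_ok: bool, committer: str) -> str:
    """Decide what the monitor view should show for one commit."""
    if not tests_passed:
        return f"RED: unit tests failed (possible culprit: {committer})"
    if not build_ok:
        return f"RED: Docker build failed (possible culprit: {committer})"
    # In the real job, `docker build` and `docker-compose up -d` run here.
    return "GREEN: image rebuilt, container restarted via Compose"
```

The key property is that the rebuild and restart only ever happen behind the green path, so a broken commit never replaces a running container.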
JUC Europe 2015: Scaling of Jenkins Pipeline Creation and Maintenance - CloudBees
By Damien Coraboeuf, Clear2Pay
In a large company where several dozen projects and branches each need their own pipelines, you cannot afford to maintain all the jobs manually. For security reasons and to limit the Jenkins knowledge required, Clear2Pay does not open Jenkins job configurations on the centralised master. Instead, the Clear2Pay team offers project teams a "shopping list" that they can use to automatically generate their own pipelines for all branches, without requiring the Jenkins administration team to intervene. Projects just update a jenkins.properties file in the SCM branch and the pipeline for that branch is updated accordingly. This allows the number of projects to scale, each getting its own pipeline on the Jenkins master, without anyone having to administer hundreds of jobs.
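A minimal sketch of the "shopping list" mechanism in Python; the jenkins.properties key used here (`pipeline.stages`) is invented for illustration and not Clear2Pay's real schema:

```python
def parse_properties(text: str) -> dict:
    """Parse simple key=value lines, ignoring blanks and # comments."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            key, _, value = line.partition("=")
            props[key.strip()] = value.strip()
    return props

def generate_jobs(project: str, branch: str, props: dict) -> list:
    """Expand a branch's declared stages into per-branch job names."""
    stages = [s.strip() for s in props.get("pipeline.stages", "build").split(",")]
    return [f"{project}/{branch}/{stage}" for stage in stages]

example = """
# this branch's shopping list
pipeline.stages = build, test, package
"""
jobs = generate_jobs("payments", "release-1.2", parse_properties(example))
```

A seed job on the master would run something like this against every branch's properties file, so pipelines track the SCM rather than manual configuration.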
Voxxed Luxembourg 2016: Jenkins 2.0 and Pipeline as Code - Damien Duportal
Born as Hudson in 2004 (cf. http://kohsuke.org/2011/01/11/bye-bye-hudson-hello-jenkins/), the Jenkins project has just reached a major milestone: Jenkins 2.0 (cf. https://groups.google.com/forum/#!msg/jenkinsci-dev/vbXK7JJekFw/BlEvO0UxBgAJ)!
This major release manages to reconcile support for existing setups with a transition to more modern continuous deployment practices.
Among the new features, Pipeline-as-Code and Docker integration are two elements from which you will be able to draw many benefits.
If you are interested in a concrete example of migrating from Jenkins 1.x to a Docker- and Pipeline-based workflow with Jenkins 2.0, this session is for you!
The running example will be a typical Java/Maven project, stored in a Git repository, with tests and analyses chained across multiple jobs, which we will move into a Jenkins Pipeline configured through a file in the Git repository, in continuous-delivery mode via Docker.
Atlassian faces the same issues as any other software company in the world. The battle for continuous integration glory is fought every day, and at stake is nothing less than our development and delivery speed. Join us to find out how we do it at Atlassian, powered by Bamboo. Because in the Game of Codes, you win... or you die.
An introduction to continuous integration and how it differs from continuous delivery and deployment. It shows the main benefits you should expect from incorporating CI practices into your project, and how to do it with Drone.
Pimp your Continuous Delivery Pipeline with Jenkins workflow (W-JAX 14) - CloudBees
Continuous delivery pipelines are, by definition, workflows with parallel job executions, join points, retries of jobs (Selenium tests are fragile) and manual steps (validation by a QA team). Come and discover how the new workflow engine of Jenkins CI and its Groovy-based DSL will give another dimension to your continuous delivery pipelines and greatly simplify your life.
Sample workflow groovy script used in this presentation: https://gist.github.com/cyrille-leclerc/796085e19d9cec4a71ef
Jenkins workflow syntax reference card: https://github.com/cyrille-leclerc/workflow-plugin/blob/master/SYNTAX-REFERENCE-CARD.md
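One of the workflow features named above, retrying fragile Selenium tests a bounded number of times, can be modeled outside the Groovy DSL as a plain retry helper. This Python sketch is illustrative only:

```python
def retry(times: int, action):
    """Re-run `action` until it succeeds or the attempts are exhausted."""
    last_error = None
    for _ in range(times):
        try:
            return action()
        except Exception as err:
            last_error = err
    raise last_error

attempts = {"n": 0}

def flaky_selenium_test():
    # Fails twice, then passes -- mimicking a fragile UI test.
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("stale element reference")
    return "passed"

result = retry(3, flaky_selenium_test)  # succeeds on the third attempt
```

The Jenkins workflow engine provides this same semantic as a built-in `retry` step, alongside parallel branches and manual input gates.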
Using Docker to Develop, Test and Run Maven Projects - Wouter Danes, NLJUG
Docker recently hit version 1.0 and is being picked up around the world by Ops teams to ease running their applications. Docker can also play a big role in easing the development of applications. In this talk I will address how to use Docker to:
- create a more scalable build environment using Jenkins and Docker;
- integration-test your software using Maven and Docker;
- package your software and run the images in different environments.
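As a sketch of the first point, building inside a container rather than on the build host, here is the kind of `docker run` invocation such a setup assembles, modeled in Python. The image tag and mount paths are examples, not a prescribed setup:

```python
def maven_in_docker(project_dir: str, goal: str = "package",
                    image: str = "maven:3-jdk-8") -> list:
    """Build the argv for running a Maven goal inside a throwaway container."""
    return ["docker", "run", "--rm",
            "-v", f"{project_dir}:/usr/src/app",   # mount the checkout
            "-w", "/usr/src/app",                  # build from the mount
            image, "mvn", goal]

cmd = maven_in_docker("/home/ci/myapp", goal="verify")
```

Because the JDK and Maven live in the image, every Jenkins slave that can run Docker becomes a valid build node with zero extra provisioning.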
CI and CD Across the Enterprise with Jenkins (devops.com, Nov 2014) - CloudBees
Delivering value to the business faster thanks to Continuous Delivery and DevOps is the new mantra of IT organizations. In this webinar, CloudBees will discuss how Jenkins, the most popular open source Continuous Integration tool, allows DevOps teams to implement Continuous Delivery.
You will learn how to:
* Orchestrate Continuous Delivery pipelines with the new workflow feature,
* Scale Jenkins horizontally in your organization using Jenkins Operations Center by CloudBees,
* Implement end-to-end traceability with Jenkins, Puppet and Chef.
http://devops.com/news/ci-and-cd-across-enterprise-jenkins/
https://github.com/CloudBees-community/vagrant-puppet-petclinic
Short presentation about Docker and some usage scenarios for Web Development, Operations and Continuous Delivery. This talk was held at the TYPO3 Camp Stuttgart in 2015.
At Wyplay, the Jenkins master server had grown to include 34 attached slaves, 78 plugins and 527 jobs. That would not have been a problem if there were no major issues - but that was not the case. The team had issues related to performance, reliability, ability to upgrade and security. Because of these issues, project teams started to build their own master servers and on several of them, the issues (particularly security) were even greater than on the original Jenkins master server.
The Wyplay team decided to migrate to a system where they could easily generate and manage new masters for each project.
In this talk, I explain how Wyplay used Docker and the official Jenkins image from CloudBees to support the migration to the new multi-master architecture. I also briefly present how it could evolve using orchestration tools and a better separation between Ansible scripts and Dockerfile specifications.
Nathen Harvey, Chef
Automation at scale is the foundation of every successful high velocity organization.
Automation requires dynamic infrastructure that is managed as code. Modern infrastructure code means bringing the lessons from software development to your infrastructure: automation is managed in version control systems, tests drive code development, and code moves through a continuous pipeline from the workstation to the production environment. What will this look like in five years? We will see continued improvement in the way teams work together toward common goals, build more operable applications, and embrace complexity while improving ease-of-use.
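One concrete reading of "tests drive code development" for infrastructure is validating a desired-state description before it is ever applied. The state keys below are invented for illustration, not any particular tool's schema:

```python
def validate_state(state: dict) -> list:
    """Return a list of problems found in a desired-state description."""
    errors = []
    if not state.get("package"):
        errors.append("package name required")
    if not isinstance(state.get("service_enabled"), bool):
        errors.append("service_enabled must be true or false")
    return errors

good = {"package": "nginx", "service_enabled": True}
bad = {"service_enabled": "yes"}
```

A CI pipeline would run checks like this on every commit to the infrastructure repository, failing fast long before anything reaches production.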
DevOps Friendly Doc Publishing for APIs & Microservices - Sonatype
Mandy Whaley, Cisco
Microservices create an explosion of internal and external APIs. These APIs need great docs. Many organizations end up with a jungle of wiki pages, Swagger docs and API consoles, and maybe a few secret documents trapped in a chat room somewhere… Keeping docs updated and in sync with code can be a challenge.
We’ve been working on a project at Cisco DevNet to help solve this problem for engineering teams across Cisco. The goal is to create a forward-looking developer and API doc publishing pipeline that:
Has a developer-friendly editing flow
Accepts many API spec formats (Swagger, RAML, etc.)
Supports long-form documentation in Markdown
Is CI/CD-pipeline friendly, so that code and docs stay in sync
Is flexible enough to be used by a wide scope of teams and technologies
We have many interesting lessons learned about tooling and how to solve documentation challenges for internal- and external-facing APIs. We have found that solving this doc publishing flow is a key component of building modern infrastructure. This is most definitely a culture + tech + ops + dev story, and we look forward to sharing it with the DevOpsDays community.
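The intake step implied by the format list above can be sketched as naive format sniffing; a hedged Python illustration (a real pipeline would use proper parsers for each spec type):

```python
def classify_doc(text: str) -> str:
    """Very rough format sniffing for the doc-publishing intake step."""
    head = text.lstrip()
    if head.startswith("#%RAML"):                 # RAML files begin with this tag
        return "raml"
    if head.startswith(("swagger:", "openapi:")) \
            or '"swagger"' in head or '"openapi"' in head:
        return "openapi"
    return "markdown"                             # fall back to long-form docs
```

Routing each input to the right renderer this way is what lets one pipeline accept specs and prose side by side.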
Continuous Everyone: Engaging People Across the Continuous Pipeline - Sonatype
Jayne Groll, DevOps Institute
Culture is undoubtedly one of the most critical aspects of any DevOps initiative. While much emphasis is placed on the automation of the deployment pipeline, there is also a need for a “Continuous People Pipeline”. Continuous People Pipelines help individuals and teams recognize their contribution to the value stream, provide realistic approaches and milestones for ongoing communication and collaboration and can be the basis for shared accountabilities and meaningful metrics. Most importantly, people pipelines help increase trust, flow, feedback and connection across IT silos.
This session will provide insight on the value, creation and support of Continuous People Pipelines. It will help attendees understand some of the human dynamics of change that must be considered – cultural debt, adoption models, acceptance curves, collaboration, immersion and conflict management. At the end of this session, leaders will take away some innovative strategic and tactical ideas for overcoming silo constraints and creating a collaborative culture that excites, engages and unifies people towards common business goals.
Starting and Scaling DevOps in the Enterprise - Sonatype
Gary Gruver, Gruver Consulting
In my role, I get to meet lots of different companies, and I realized quickly that DevOps means different things to different people. They all want to do "DevOps" because of all the benefits they are hearing about, but they are not sure exactly what DevOps is, where to start, or how to drive improvements over time. They are hearing a lot of great ideas about DevOps, but they struggle to get everyone to agree on a common definition and on what changes they should make. It is like five blind men describing an elephant. In large organizations, this lack of alignment on DevOps improvements impedes progress and leads to a lack of focus.
This session is intended to help structure and align those improvements by providing a framework that large organizations and their executives can use to understand the DevOps principles in the context of their current development processes, and to gain alignment across the organization for successful implementation.
DevOps and Continuous Delivery Reference Architectures - Volume 2 - Sonatype
Continuous Delivery Reference Architectures, including Sonatype Nexus and other popular DevOps tools. Derek E. Weeks (@weekstweets), VP and DevOps Advocate, Sonatype.
Continuous Delivery and DevOps reference architectures include many common tool choices. The most common tools we find in these reference architectures are: Eclipse, Git, CloudBees Jenkins / Atlassian Bamboo, Sonatype Nexus, Atlassian JIRA, SonarQube, Puppet, Chef, Rundeck, Maven / Ant / Gradle, Subversion (SVN), JUnit, LiveRebel and ServiceNow.
Hidden Speed Bumps on the Road to "Continuous" - Sonatype
As a companion piece for our '2015 State of the Software Supply Chain Report' this ebook explores the hidden complexities in modern software development by drawing analogies to a traditional supply chain. This is a real eye-opener for anyone who cares about development speed, efficiency and quality.
Characterizing and Contrasting Kuhn-tey-ner Awr-kuh-streyt-ors - Sonatype
Lee Calcote, SolarWinds
Running a few containers? No problem. Running hundreds or thousands? Enter the container orchestrator. Let’s take a look at the characteristics of the four most popular container orchestrators and what makes them alike, yet unique.
Swarm
Nomad
Kubernetes
Mesos+Marathon
We’ll take a structured look at these container orchestrators, contrasting them across these categories:
Genesis & Purpose
Support & Momentum
Host & Service Discovery
Scheduling
Modularity & Extensibility
Updates & Maintenance
Health Monitoring
Networking & Load-Balancing
High Availability & Scale
Britt Treece, PhishMe
Terraform is a tool that enables you to easily orchestrate potentially complex infrastructure. The simplicity of the tool also allows you to code yourself into a corner. This talk aims to offer practical techniques to avoid common hurdles that often result in a refactor.
Cloud Computing is a broad term that describes a diverse and rapidly expanding set of on-demand services. The availability of these services does not mean they are simple to use or easily integrated with each other or with your infrastructure. Terraform provides a common interface for these services and allows for the expression of your infrastructure as code. Terraforming Your Infrastructure will get you started with Terraform and help you avoid common hurdles that are encountered as your configurations get more advanced. We will…
learn how Terraform simplifies infrastructure management.
demonstrate practical techniques to avoid common problems.
deploy single and multi-provider configurations using Terraform.
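One technique that helps keep larger configurations manageable is that Terraform also accepts JSON-formatted configuration (*.tf.json), so repetitive fragments can be generated programmatically. The resource values below are illustrative, not a real deployment:

```python
import json

def aws_instance(name: str, ami: str, instance_type: str) -> dict:
    """Return one resource block in Terraform's JSON configuration syntax."""
    return {"resource": {"aws_instance": {name: {
        "ami": ami,
        "instance_type": instance_type,
    }}}}

config = aws_instance("web", "ami-0abc1234", "t2.micro")
tf_json = json.dumps(config, indent=2)  # write this out as e.g. main.tf.json
```

Generating fragments this way avoids hand-copying near-identical resource blocks, one of the corners the talk warns you can code yourself into.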
Tools & Techniques for Addressing Component Vulnerabilities for PCI Compliance - Sonatype
Recent revisions to the Payment Card Industry (PCI) guidelines now require organizations to address potential vulnerabilities caused by use of open source components in their applications.
Continuous Delivery Pipeline - Patterns and Anti-patterns - Sonatype
Juni Mukherjee, CI/CD Consultant, LifeLock
Continuous Delivery (CD) is important for a business to be sustainable. However, CD is not a discipline in its own right (not yet), and the science behind it is rarely taught in schools.
The intended audience for this talk are engineers, architects and technical managers who are starting out to build Continuous Delivery Pipelines, or are seeking to improve ROI on their existing investments.
Every company aspires to sustainably flow its ideas into the hands of its customers and reduce time to market. This talk goes to the heart of this burning topic and provides technical recipes that the audience can take away.
This talk would cover:
a) Domain Driven Design (DDD) for CD, based on concepts authored by Eric Evans
The Continuous Delivery Pipeline can be modeled as a domain.
b) How the CD Pipeline, along with its assets, can be orchestrated with Jenkins
The Continuous Delivery Pipeline domain can be orchestrated with Jenkins 2.0, aka Pipeline-as-code. Each box in the model could be authored as a stage in Jenkinsfile.
c) Pipeline patterns and anti-patterns
There are some trends that are consistently observed in the industry.
d) KPIs to measure ROI from the Pipeline
“Show me the money!” This is the “Jerry Maguire” moment, where the ROI from the pipeline is demonstrated.
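As a rough illustration of point (b), each box in the pipeline domain model can become a stage in a declarative Jenkinsfile (the stage names and shell commands below are hypothetical, not from the talk):

```groovy
// Hypothetical Jenkinsfile sketch -- each "box" in the pipeline
// domain model is authored as one stage.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'mvn -B clean package' }
        }
        stage('Test') {
            steps { sh 'mvn -B verify' }
        }
        stage('Deploy') {
            when { branch 'master' }
            steps { sh './deploy.sh staging' }
        }
    }
}
```

Because the pipeline lives in the repository as code, it is versioned, reviewed, and evolved alongside the application it delivers.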
As SDN technologies become more mainstream, it is imperative to replicate the success of DevOps techniques from the IT world to bridge a gap that few envisioned in the first place: the gap between the application/service layer and the network layer.
This presentation was given in the DevNet Zone at Cisco Live, San Francisco, 2014.
There is No Server: Immutable Infrastructure and Serverless ArchitectureSonatype
Erlend Oftedal, Blank
Immutable infrastructure and serverless architectures have very interesting security properties. This talk will give an introduction to immutable infrastructure and serverless architecture and try to highlight some of the properties of such architectures. Next we will look at the positive effects this can have on the security of our systems, but also highlight some of the negative aspects and potential problems.
At the conclusion of this session, we hope to have shed some light on the positive and negative security effects of such architectures.
Compliance as Code - Using the Open Source InSpec testing FrameworkSonatype
George Miranda, Chef
Compliance rules are notoriously applied differently between various organizations, sometimes between various auditors in the same organization. Compliance rules typically start as written policy as part of a regulatory body of concerns. That policy is then translated in discussions, in meetings, and at implementation based on the understanding of those involved. The lack of consistency creates procedural loopholes that may leave us unaware of vulnerabilities due to a lack of clarity about what’s being inspected on our systems.
Compliance as Code aims to deliver tangible, repeatable, and executable code that clearly states exactly how that policy is translated for your organization. Once compliance checks are expressed as code, a number of possibilities open up such as shifting compliance to the left of the software development lifecycle. If we enable developers to easily scan for compliance violations early and often, we stop having to go back to the drawing board right before we’re ready to release into production. Code is the collaborative lingua franca of DevOps. This session explores the open-source InSpec testing framework and how to use it to drive a culture of creating Compliance as Code.
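To give a flavor of what an executable compliance check looks like, here is a minimal InSpec control (the control ID, title, and values are illustrative assumptions, not taken from the session):

```ruby
# Hypothetical InSpec control -- ID, title and values are illustrative.
control 'sshd-01' do
  impact 1.0
  title 'Disallow root login over SSH'
  desc 'Written policy "no direct root access" expressed as executable code.'

  # InSpec's built-in sshd_config resource parses /etc/ssh/sshd_config.
  describe sshd_config do
    its('PermitRootLogin') { should cmp 'no' }
  end
end
```

Running `inspec exec` against a target machine then reports pass or fail for each `describe` block, turning written policy into a repeatable, auditable test.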
Meta Infrastructure as Code: How Capital One Automated Our Automation Tools w...Sonatype
George Parris III, Capital One
In many companies, the cornerstone of the continuous integration and continuous deployment strategy is a handful of well-known pieces of automation software that are vital to the way software is built today using agile methodologies. All too often, though, someone with some infrastructure experience simply spins up a server, installs the packages, and iterates on that same install for the following years, which leaves the team on shaky ground every time a change has to be made.
On the Online Account Opening project at Capital One, we’ve strived to keep our entire infrastructure as immutable as possible. In doing so, we decided to apply that principle to our core CI/CD automation tools as well. By using config as code, implementing a useful backup and testing strategy, and utilizing some AWS capabilities, we’re able to make that happen.
Akash Mahajan, Appsecco
Ansible offers a flexible approach to building a SecOps pipeline. System hardening can become just another software project. Using it we can do secure application deployment, configuration management and continuous monitoring. Security can be codified & attack surfaces reduced by using Ansible.
Who is this talk for?
This talk and demo are relevant and useful for any practitioner of DevSecOps.
It introduces the concepts of declarative security
Showcases one of the tools (Ansible) to embrace DevSecOps in a friction-free, no-expense-required manner
Implements security architecture principles using a structured language (YAML) as part of the framework (playbooks) which is ‘Infrastructure As Code’
Gives a clear roadmap on how to find the best practices for security hardening
Covers how continuous monitoring can be applied for security
Technical Requirements
While 30 minutes is too short for attendees to follow along hands-on, the following would be required to do so:
- A modern Linux distribution with Python and Ansible installed
- Basic idea of running commands on the Linux command line
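To give a flavor of "hardening as just another software project," a minimal Ansible playbook might look like the following sketch (the host group and values are illustrative assumptions, not from the talk):

```yaml
# Hypothetical hardening playbook -- host group and values are
# illustrative, not from the talk.
- name: Baseline SSH hardening
  hosts: webservers
  become: true
  tasks:
    - name: Ensure OpenSSH server is present
      ansible.builtin.package:
        name: openssh-server
        state: present

    - name: Disable password authentication
      ansible.builtin.lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^#?PasswordAuthentication'
        line: 'PasswordAuthentication no'
      notify: Restart sshd

  handlers:
    - name: Restart sshd
      ansible.builtin.service:
        name: sshd
        state: restarted
```

Because the playbook is declarative YAML, it doubles as documentation of the security architecture and can be re-run idempotently as part of continuous monitoring.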
The Unrealized Role of Monitoring & Alerting w/ Jason HandSonatype
In today’s world, a company must be a “Learning Organization” in order to be successful and innovative. Learning from both failure and success, in order to implement small incremental improvements is critical. But until you implement and apply new information, you haven’t truly “learned” anything and you certainly haven’t improved.
According to the 2015 Monitoring Survey, most companies leverage metrics from monitoring and logging purely for performance analytics and trending. If high availability and reliability are important, they also leverage metrics to alert on fault and anomaly detection. Despite these “best practices”, the metrics are primarily only used as context to keep things “running” or return them back to “normal” if there’s a problem. Rarely is that data used as a method to identify areas of improvement once services have been restored. When an outage occurs to your system, you will absolutely repair and restore services as best you know how, but are you paying attention to the data from the recovery efforts? What were operators seeing during diagnosis and remediation? What were their actions? What was going on with everyone, including conversations? A step-by-step replay of exactly what took place during that outage.
This “old-view” perspective on the purpose of monitoring, logging, and alerting leaves the full value of metrics unrealized. It fails to address what’s important to the overall business objective and it lacks any hope of seeking out innovation or disruption of the status quo.
This talk will illustrate how to identify if your company is making the best use of metrics and ways to not only learn from failure, but to become a “Learning Company”.
Shows an excerpt of the PERFORM 2014 Conference's Hands-On Training on Automated Deployments. Tells the why and the how and differentiates between agent-based and agentless solutions, such as Chef, Puppet or Ansible. Goes into greater detail on the Ansible host automation tool.
The road to continuous deployment (PHPCon Poland 2016)Michiel Rook
As presented at PHPCon Poland 2016
It's a situation many of us are familiar with: a large legacy application, limited or no tests, slow & manual release process, low velocity, no confidence.... Oh, and management wants new features, fast.
But how to proceed? Using examples and lessons learned from a real-world case, I'll show you how to strangle the legacy application with a modern service architecture and build a continuous deployment pipeline to deliver value from the first sprint. On the way, we take a look at testing strategies and various (possibly controversial!) tips and best practices.
The road to continuous deployment (DomCode September 2016)Michiel Rook
DomCode Meetup September 2016
It's there. The mountain. A giant monolith of code that blocks everything we do. But there's no rewriting it. What can we do? How can we continue? Don't worry, because this month's speaker Michiel Rook is here to help! He's going to speak from his experience tweaking, refactoring and replacing a...less than stellar...code base to Continuous Deployment.
It's a situation many of us are familiar with: a large legacy application, limited or no tests, slow & manual release process, low velocity, no confidence.... Oh, and management wants new features, fast.
But how to proceed? Using examples and lessons learned from a real-world case, I'll show you how to strangle a legacy application with a modern service architecture and build a continuous integration and deployment pipeline to deliver value from the first sprint. On the way, we’ll take a look at the process, automated testing, monitoring, master/trunk based development and various (possibly controversial!) tips and best practices.
The road to continuous deployment: a case study (DPC16)Michiel Rook
Dutch PHP Conference 2016
It's a situation many of us are familiar with: a large legacy application, limited or no tests, slow & manual release process, low velocity, no confidence.... Oh, and management wants new features, fast.
But how to proceed? Using examples and lessons learned from a real-world case, I'll show you how to strangle the legacy application with a modern service architecture and build a continuous deployment pipeline to deliver value from the first sprint. On the way, we take a look at testing strategies and various (possibly controversial!) tips and best practices.
The road to continuous deployment: a case study - Michiel Rook - Codemotion A...Codemotion
It's a situation many of us are familiar with: a large legacy application, limited or no tests, slow & manual release process, low velocity, no confidence.... Oh, and management wants new features, fast. Using examples and lessons learned from a real-world case, I'll show you how to strangle a legacy application with a modern service architecture and build a continuous integration and deployment pipeline to deliver value from the first sprint. On the way, we’ll take a look at the process, automated testing, monitoring, master/trunk based development and various tips and best practices.
Adam Polak: Microservices-based architecture is a huge topic. We all know the theory, but what does it look like in practice? How do you manage several independent applications? What about communication between them? During this presentation I will share our experiences, the problems we encountered, and the solutions that made the whole process of adopting this architecture significantly easier.
Continuous Delivery and Automated Operations on k8s with keptnAndreas Grabner
Slide deck from the Vienna DevOps & Security Meetup. This talk is about keptn, an open-source, event-driven control plane for continuous delivery and automated operations on Kubernetes.
THE PROJECT MANAGEMENT YACHT: How the World's Greatest Problem Solvers Disrup...Rod King, Ph.D.
This presentation describes the tool of the Project Management Yacht as well as its platform of the Vision Dashboard. Although not well known, the Vision Dashboard is the tacit framework used by visionaries such as Steve Jobs, Albert Einstein, and Genrich Altshuller. When the Project Management Yacht is systematically and deeply used with the Vision Dashboard, a user can rapidly discover how to disrupt Red Ocean industries while generating extraordinary profit.
In a DEV world where everything is built automatically (with Jenkins, Gitlab, Maven...), the developers still struggle to integrate some operational tasks in their build pipelines.
Although tools like Ansible, Puppet, and Rundeck are being used by more and more companies, some DBAs need (or want) to keep control over their legacy scripts. How can developers convince the DBAs to implement some useful REST endpoints without much development effort?
In this session I will introduce some ideas for integration between applications, ORDS and operational scripts that will help cooperation between developers and DBAs.
Faster PHP apps using Queues and WorkersRichard Baker
PHP apps typically perform tasks in a synchronous manner: resizing an image, say, or sending a push notification. For most applications this works well, but as apps grow or experience increased traffic, each task adds extra milliseconds to a request, leaving users waiting.
A common solution is to defer these tasks to the background using a cron task. However, there is a better way. Job queues not only help to decouple your application and improve resilience but will also cut request times.
In this talk we’ll explore some common queue systems: the features and tradeoffs of each solution, what to queue, refactoring existing code into jobs, and running workers. By the end you’ll be ready to build your next app one job at a time.
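The queue/worker pattern described here is language-agnostic; below is a minimal in-process sketch in Python (the talk itself concerns PHP, and real apps would use a broker such as Redis or RabbitMQ rather than an in-process queue; the job names are hypothetical):

```python
import queue
import threading

# The "request handler" enqueues a job and returns immediately;
# a background worker performs the slow part later.
jobs = queue.Queue()
results = []

def worker():
    while True:
        job = jobs.get()
        if job is None:                       # sentinel: shut the worker down
            break
        results.append(f"resized {job}")      # stand-in for the slow task
        jobs.task_done()

t = threading.Thread(target=worker)
t.start()

# Request handlers enqueue work without waiting for it to finish.
for image in ["a.png", "b.png"]:
    jobs.put(image)

jobs.join()                                   # wait until the backlog drains
jobs.put(None)
t.join()
print(results)  # ['resized a.png', 'resized b.png']
```

The request path only pays the cost of `put()`; everything expensive happens in the worker, which is exactly why queues cut request times.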
Real-World DevOps — 20 Practical Developers Tips for Tightening Your Operatio...VictorSzoltysek
DevOps continues to be a confusing space; between the plethora of lingo and technologies, it’s often nothing more than a relabelling of “Release Engineering”, with the same long feedback loops, tiresome ticket wait times, and nightmarish merge-hell conflicts.
We will do things the right way by implementing a modern, scrappy, and at times controversial “Day-One” DevOps solution for a typical Java Spring Boot front-end app, including setting up a full end-to-end CI/CD pipeline.
Simple, easy-to-implement strategies will be provided that you can immediately start applying to your project, regardless of the programming languages used.
Bring the light in your Always FREE Oracle CloudDimitri Gielis
Presentation done at APEX@Home and ODTUG KScope 2020 online.
It shows extra information on using the Always FREE Oracle Cloud:
- Moving your data and APEX app to the Always Free Autonomous Cloud
- Getting more storage by using Advanced Compression
- Performance & Uptime Monitoring
- Setup Production from Always Free Autonomous Cloud
The Journey from Monolith to Microservices: a Guided AdventureVMware Tanzu
SpringOne Platform 2016
Speaker: Mike Gehard; Senior Software Engineer, Pivotal
Are you starting a new application and wondering whether to go with a monolith or take the microservices path? Do you have an existing application that is getting too big to deliver business value with a predictable velocity? Ever wonder how to regain the agility you had when an application was smaller?
The current discussions around application architecture with microservices seem like an all or nothing journey without any stops along the way to catch your breath. This talk outlines questions to ask yourself to drive decisions along the way. It also demonstrates one possible path for future growth, complete with intermediate stops along the way where you can pause to evaluate your next step. This path avoids implementing too much complexity early in the process.
At the end of the journey you will not only have ideas to guide your own path, but tools that you can use to make the journey easier and less costly.
40 DevSecOps Reference Architectures for you. See what tools your peers are using to scale DevSecOps and how enterprises are automating security into their DevOps pipeline. Learn what DevSecOps tools and integrations others are deploying in 2019 and where your choices stack up as you consider shifting security left.
30+ Nexus Integrations to Accelerate DevOpsSonatype
No single tool can deliver on the promise of DevOps. Instead it’s a collection of tools, easily integrated, tightly managed, and effectively automated. Learn how Nexus integrates with more DevOps tools you use everyday.
DevOps and All the Continuouses w/ Helen BealSonatype
DevOps promises to make better software faster and more safely and many organizations begin by practicing Continuous Integration and moving on to Continuous Delivery and sometimes even extending as far as Continuous Deployment - but this is only the tip of the iceberg.
DevOps demands a fundamental shift in the way we work and requires all participants in an organization to live its principles. It’s much more than a tool chain.
When you are delivering software in an Agile manner in fortnightly sprints, are you still funding in an annual manner? Are you adhering to The Third Way? That is, are you practicing Continuous Experimentation? Continuous Learning? How are you doing Continuous Testing? Are you including security in that? Have you had Continuous Improvement in your organization for years? When does Continuous Everything turn into Continuous Apathy?
A Small Association's Journey to DevOps w/ Edward RuizSonatype
Small and medium-size businesses are under the same pressure to innovate-at-speed as large corporations. They face these challenges with shoestring IT budgets and limited staff who are stretched thin and forced to wear multiple hats. These limits are particularly acute in the world of nonprofit associations. But with the right vision and culture, even small teams can successfully implement a DevOps philosophy and bust the barriers to high-speed IT innovation.
In this presentation, I will recount our small membership association’s transformative journey to DevOps and share the lessons we learned along the way. I will offer first-hand experiences and practical ideas on how to cultivate a collaborative team culture to realize faster deployment cycles while improving build quality and delighting customers with great software.
What's My Security Policy Doing to My Help Desk w/ Chris SwanSonatype
Operational data mining gives us a rich source of data for the third devops way - continual learning by experimentation. It also shows us just how damaging those 90 day password resets can be. This talk will look at what can go wrong, and the renewed fight to fix the problem at the root.
Static Analysis For Security and DevOps Happiness w/ Justin CollinsSonatype
Justin Collins, Brakeman Security
It is not enough to have fast, automated code deployment. We also need some level of assurance the code being deployed is stable and secure. Static analysis tools that operate on source code can be an efficient and reliable method for ensuring properties about the code - such as meeting basic security requirements. Automated static analysis security tools help prevent vulnerabilities from ever reaching production, while avoiding slow, fallible manual code reviews.
This talk will cover the benefits of static analysis and strategies for integrating tools with the development workflow.
Automated Infrastructure Security: Monitoring using FOSSSonatype
Madhu Akula, Automation Ninja
We can see attacks happening in real time using a dashboard. By collecting logs from various sources, we will monitor and analyse them. Using data gleaned from the logs, we can apply defensive rules against the attackers. We will use AWS for managing and securing the infrastructure discussed in our talk.
For most network engineers who monitor the perimeter for malicious content, it is very important to respond to an imminent threat originating from outside the boundaries of their network. Having to crunch through all the logs that the various devices (firewalls, routers, security appliances etc.) spit out, correlating that data and in real time making the right choices can prove to be a nightmare. Even with the solutions already available in the market.
Having experienced this myself in several cases, as part of internal DevOps and incident response teams, I want to create a space for interested folks to design, build, customise and deploy their very own FOSS-based centralised visual attack-monitoring dashboard. This setup would be able to perform real-time analysis using the trusted ELK stack and visually denote what popular attack hotspots exist on a network.
Docker Inside/Out: The 'Real' Real-World of Stacking Containers in pro...Sonatype
Daniël van Gils, Cloud 66
So you’ve already containerized the shit out of your code, broken down monoliths, microserviced the hell out of your app and have run some awesome workloads in your local, dev and test environments. It’s all looking good, but now what?
Running Docker commands is one thing, but maintaining containers in production is a whole other ballgame. So during this talk I’ll show you the REAL wild world of Docker in production. With the added benefit of talking to and observing how over 900 of our customers have been using Docker in production, I’ll be presenting some of these data points and sharing our observations on how to get it right.
My aim? I want to turn the conversation on its head and dispel some of the ‘silver bullet’ assumptions flying around by taking an inside-out approach to building with Docker. The idea is to provide you with a framework for how to get your code into containers, streamline the Docker build flow and avoid common pitfalls when moving from dev to live environments.
Because remember, Docker will NOT, and I repeat, will not solve your bad dev and ops behaviours. So don’t end up with a ‘hot mess’ (more on that later), and attend my talk to get container smart.
I, For One, Welcome Our New Robot OverlordsSonatype
Mykel Alvis, Cotiviti Labs
Infrastructure-as-code is how we build deployments now. If your infrastructure cannot be automated using code, then you might want to reconsider your life choices. However, even if it can be described as code, the process by which that infra-code is delivered is often of low quality. Having your infrastructure describable with poorly managed source code is only a slight step up from not managing it at all.
Poor testing and bad release-management practices make many of the efforts of automation hard to replicate. The resulting codebases are often a mass of poorly organized scripting and binaries committed to source control, much of which has never been validated except by a set of Mk I eyeballs.
Cotiviti has taken what turns out to be a radical approach to this problem, although in retrospect it should not have been so radical. Our approach uses artifact repositories, formal release mechanisms, and enforced testing gatekeepers to ensure the quality of the generated result. Because the approach is well-regimented, it is trivially easy to automate. Because the automation applies tests automatically, it tends to produce very high-quality artifacts. Because the output was delivered through a formal release process, it is repeatable. Because everything in that artifact is code, it is reviewable and auditable. And when any of these are untrue, we have a solid path toward remediating that problem.
I have done bespoke deliveries by hand in the past. I’m never going back to my not-having-automation ways again.
Matt Williams, Datadog
Just as we got the hang of monitoring our server-based applications, they take away the server. How do you monitor something that doesn’t exist? What metrics matter most in a serverless world? In this session, we will look at how applications are different in an AWS Lambda-based world and how to monitor them. Join us as we work our way through the stack and demonstrate how to capture the health and performance of your services.
The focus of this session is not tool specific. Attendees will learn production tested lessons and leave with frameworks they can implement with their serverless workloads regardless of the platforms and tools they use.
Dave Mangot, Solarwinds Cloud Companies
In this session, we will look at the ways that Operations can deliver business value. Long ago Operations was a cost center; now it's a strategic differentiator. We used to think our job was to work with technology; now we realize it's to deliver value to the business.
We'll examine some principles that are signs of a mature DevOps practice, and use examples from the Librato move from EC2 Classic to VPC, and the next-generation platform we built in the process, to demonstrate how adherence to those principles allows us to deliver value to the business faster and more reliably than ever before.
Warner Moore, CoverMyMeds
Everyone’s concerned about security from your peers to your board, but what does that mean to building software? This presentation will explore techniques for embedding security into your Software Development LifeCycle using automation and aligning to your existing practices for building software. Better yet, many of these automation techniques align to DevOps culture and practices. Building secure software doesn’t mean slowing down delivery or adding meaningless paperwork – it can complement your favorite ways to build software!
Multi Security Checkpoints on DevOps PlatformSonatype
Hasan Yasar, Carnegie Mellon University
“Software security” often evokes negative feelings amongst developers because it is linked with challenges and uncertainty in rapid releases. The burgeoning concepts of DevOps can be applied to increase the security of developed applications, and applying these DevOps principles can have a big impact on resiliency and security at multiple checkpoints. This talk explains how, with a live demo.
Taking a Selfie - Just Try to Resist! Doing Forensics the DevSecOps WaySonatype
Brandon Sherman, Twilio
You can’t physically touch your computing environment anymore, so how do you capture a forensic image? In this talk, learn how to take a selfie of an EC2 instance. Selfie is a tool that can jump in with an incident responder type role, trigger snapshots of a suspect instance, and copy those snapshots to a safe place. Of course, this can be automated. Did you even have to ask?
Into the Box Keynote Day 2: Unveiling amazing updates and announcements for modern CFML developers! Get ready for exciting releases and updates on Ortus tools and products. Stay tuned for cutting-edge innovations designed to boost your productivity.
Navigating the Metaverse: A Journey into Virtual EvolutionDonna Lenk
Join us for an exploration of the Metaverse's evolution, where innovation meets imagination. Discover new dimensions of virtual events, engage with thought-provoking discussions, and witness the transformative power of digital realms.
Globus Connect Server Deep Dive - GlobusWorld 2024Globus
We explore the Globus Connect Server (GCS) architecture and experiment with advanced configuration options and use cases. This content is targeted at system administrators who are familiar with GCS and currently operate—or are planning to operate—broader deployments at their institution.
Enhancing Project Management Efficiency: Leveraging AI Tools like ChatGPTJay Das
With the advent of artificial intelligence (AI) tools, project management processes are undergoing a transformative shift. By using tools like ChatGPT and Bard, organizations can empower their leaders and managers to plan, execute, and monitor projects more effectively.
In 2015, I used to write extensions for Joomla, WordPress, phpBB3, etc and I ...Juraj Vysvader
In 2015, I used to write extensions for Joomla, WordPress, phpBB3, etc. I didn't get rich from it, but it did have 63K downloads (powering possibly tens of thousands of websites).
Check out the webinar slides to learn more about how XfilesPro transforms Salesforce document management by leveraging its world-class applications. For more details, please connect with sales@xfilespro.com
If you want to watch the on-demand webinar, please click here: https://www.xfilespro.com/webinars/salesforce-document-management-2-0-smarter-faster-better/
May Marketo Masterclass, London MUG May 22 2024.pdfAdele Miller
Can't make Adobe Summit in Vegas? No sweat because the EMEA Marketo Engage Champions are coming to London to share their Summit sessions, insights and more!
This is a MUG with a twist you don't want to miss.
Exploring Innovations in Data Repository Solutions - Insights from the U.S. G...Globus
The U.S. Geological Survey (USGS) has made substantial investments in meeting evolving scientific, technical, and policy driven demands on storing, managing, and delivering data. As these demands continue to grow in complexity and scale, the USGS must continue to explore innovative solutions to improve its management, curation, sharing, delivering, and preservation approaches for large-scale research data. Supporting these needs, the USGS has partnered with the University of Chicago-Globus to research and develop advanced repository components and workflows leveraging its current investment in Globus. The primary outcome of this partnership includes the development of a prototype enterprise repository, driven by USGS Data Release requirements, through exploration and implementation of the entire suite of the Globus platform offerings, including Globus Flow, Globus Auth, Globus Transfer, and Globus Search. This presentation will provide insights into this research partnership, introduce the unique requirements and challenges being addressed and provide relevant project progress.
Top Features to Include in Your Winzo Clone App for Business Growth (4).pptxrickgrimesss22
Discover the essential features to incorporate in your Winzo clone app to boost business growth, enhance user engagement, and drive revenue. Learn how to create a compelling gaming experience that stands out in the competitive market.
First Steps with Globus Compute Multi-User EndpointsGlobus
In this presentation we will share our experiences around getting started with the Globus Compute multi-user endpoint. Working with the Pharmacology group at the University of Auckland, we have previously written an application using Globus Compute that can offload computationally expensive steps in the researcher's workflows, which they wish to manage from their familiar Windows environments, onto the NeSI (New Zealand eScience Infrastructure) cluster. Some of the challenges we have encountered were that each researcher had to set up and manage their own single-user globus compute endpoint and that the workloads had varying resource requirements (CPUs, memory and wall time) between different runs. We hope that the multi-user endpoint will help to address these challenges and share an update on our progress here.
Code reviews are vital for ensuring good code quality. They serve as one of our last lines of defense against bugs and subpar code reaching production.
Yet, they often turn into annoying tasks riddled with frustration, hostility, unclear feedback, and a lack of standards. How can we improve this crucial process?
In this session we will cover:
- The Art of Effective Code Reviews
- Streamlining the Review Process
- Elevating Reviews with Automated Tools
By the end of this presentation, you'll have the knowledge to organize and improve your code review process.
Climate Science Flows: Enabling Petabyte-Scale Climate Analysis with the Eart...Globus
The Earth System Grid Federation (ESGF) is a global network of data servers that archives and distributes the planet’s largest collection of Earth system model output for thousands of climate and environmental scientists worldwide. Many of these petabyte-scale data archives are located in proximity to large high-performance computing (HPC) or cloud computing resources, but the primary workflow for data users consists of transferring data, and applying computations on a different system. As a part of the ESGF 2.0 US project (funded by the United States Department of Energy Office of Science), we developed pre-defined data workflows, which can be run on-demand, capable of applying many data reduction and data analysis to the large ESGF data archives, transferring only the resultant analysis (ex. visualizations, smaller data files). In this talk, we will showcase a few of these workflows, highlighting how Globus Flows can be used for petabyte-scale climate analysis.
Experience our free, in-depth three-part Tendenci Platform Corporate Membership Management workshop series! In Session 1 on May 14th, 2024, we began with an Introduction and Setup, mastering the configuration of your Corporate Membership Module settings to establish membership types, applications, and more. Then, on May 16th, 2024, in Session 2, we focused on binding individual members to a Corporate Membership and Corporate Reps, teaching you how to add individual members and assign Corporate Representatives to manage dues, renewals, and associated members. Finally, on May 28th, 2024, in Session 3, we covered questions and concerns, addressing any queries or issues you may have.
For more Tendenci AMS events, check out www.tendenci.com/events
Innovating Inference - Remote Triggering of Large Language Models on HPC Clus...Globus
Large Language Models (LLMs) are currently the center of attention in the tech world, particularly for their potential to advance research. In this presentation, we'll explore a straightforward and effective method for quickly initiating inference runs on supercomputers using the vLLM tool with Globus Compute, specifically on the Polaris system at ALCF. We'll begin by briefly discussing the popularity and applications of LLMs in various fields. Following this, we will introduce the vLLM tool, and explain how it integrates with Globus Compute to efficiently manage LLM operations on Polaris. Attendees will learn the practical aspects of setting up and remotely triggering LLMs from local machines, focusing on ease of use and efficiency. This talk is ideal for researchers and practitioners looking to leverage the power of LLMs in their work, offering a clear guide to harnessing supercomputing resources for quick and effective LLM inference.
The Road to Continuous Deployment
2. THE ROAD TO CONTINUOUS
DEPLOYMENT - A CASE STUDY
MICHIEL ROOK
@michieltcs
4. THE SYSTEM - SAN DIEGO
▸ ... or the Big Ball Of Mud
▸ Large legacy monolith
▸ Generates significant income
▸ Slow & complex
▸ Technical debt
@michieltcs
5. SAN DIEGO (architecture diagram)
Load balancers / Varnish route traffic for the ITBANEN, INTERMEDIAIR and NATIONALEVACATUREBANK sites to multiple San Diego frontend instances, which call multiple San Diego backend instances. The backends share a MySQL database, Memcache, Solr, FTP and external services.
@michieltcs
6. THE SYSTEM - SAN DIEGO
▸ Infrequent, manual releases
▸ Fragile tests
▸ Frequent outages & issues
▸ Frustrated team
▸ Low confidence modifying existing code
@michieltcs
28. PIPELINE AS CODE
node {
    stage 'Run tests'
    sh "phpunit"
    sh "behat"

    stage 'Build docker image'
    sh "phing build"
    sh "docker build -t jobservice:${env.BUILD_NUMBER} ."
    sh "docker push jobservice:${env.BUILD_NUMBER}"

    stage 'Deploy acceptance'
    sh "ansible-playbook -e BUILD=${env.BUILD_NUMBER} -i acc deploy.yml"

    stage 'Deploy production'
    sh "ansible-playbook -e BUILD=${env.BUILD_NUMBER} -i prod deploy.yml"
}
@michieltcs
29-35. DEPLOYING
Each deployment step, with the command or Ansible task that implements it:
▸ PULL IMAGE: docker pull
▸ START NEW CONTAINER: docker run
▸ WAIT FOR PORT: wait_for: port=8080 delay=5 timeout=15
▸ SMOKE TESTS / HEALTH CHECKS: uri: url=http://localhost:8080/_health status_code=200 timeout=30
▸ ADD NEW CONTAINER TO LB: template: src=haproxy.cfg.j2 dest=/etc/haproxy/haproxy.cfg, then service: name=haproxy state=reloaded
▸ REMOVE OLD CONTAINER FROM LB: the haproxy.cfg.j2 template is rendered again without the old container, and haproxy is reloaded
▸ STOP OLD CONTAINER: docker stop, then docker rm
@michieltcs
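The deployment steps above can be sketched as a single Ansible play. This is a minimal illustration, not the original deploy.yml: the jobservice image name, the port, the container naming scheme, the previous_build variable and the haproxy.cfg.j2 path are assumptions carried over from the slides, and it uses Ansible's docker_image/docker_container modules alongside the wait_for, uri, template and service tasks shown.

```yaml
# deploy.yml -- illustrative sketch of the rolling deploy described above.
# Image name, port and variable names are assumptions, not the original playbook.
- hosts: app
  tasks:
    - name: Pull the new image
      docker_image:
        name: "jobservice:{{ BUILD }}"
        source: pull

    - name: Start the new container
      docker_container:
        name: "jobservice-{{ BUILD }}"
        image: "jobservice:{{ BUILD }}"
        published_ports:
          - "8080:8080"

    - name: Wait for the service port
      wait_for: port=8080 delay=5 timeout=15

    - name: Smoke test the health endpoint
      uri:
        url: http://localhost:8080/_health
        status_code: 200
        timeout: 30

    - name: Point the load balancer at the new container
      template: src=haproxy.cfg.j2 dest=/etc/haproxy/haproxy.cfg
      notify: reload haproxy

    - name: Stop and remove the old container (previous_build is hypothetical)
      docker_container:
        name: "jobservice-{{ previous_build }}"
        state: absent

  handlers:
    - name: reload haproxy
      service: name=haproxy state=reloaded
```

Rendering the HAProxy template and reloading covers both the "add new" and "remove old" load balancer steps, since the template lists only the containers that should receive traffic.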
39. RESULTS
▸ Total build time per service < 10 minutes
▸ Significantly improved page load times
▸ Improved audience stats (time on page, pages per session, session duration, traffic, SEO ranking, etc.)
▸ Increased confidence and velocity
▸ Experimented with new tech/stacks (Angular, JVM, event sourcing)
▸ More fun
@michieltcs
40. LESSONS LEARNED
▸ Team acceptance
▸ Change is hard
▸ Mentality / discipline
▸ Docker stability/orchestration
▸ Issues with traffic between Amazon <-> on-premise datacenter
@michieltcs
41. LESSONS LEARNED
▸ Experience with new tech
▸ Stability of build pipelines
▸ Business alignment
▸ Limit feature toggles
▸ Keep focus on replacing legacy application
@michieltcs
44. Turn-key Continuous Deployment
▸ Zero downtime deployments
▸ Modern, autoscaling infrastructure with built-in monitoring
▸ Pipeline in five minutes