Continuous Delivery in Practice (extended)Tzach Zohar
Extended version of a previously uploaded presentation:
10 practical field-proven tips for building a continuously delivered service, based on Kenshoo's experience with its RTB service - a critical, high throughput, highly available component, serving millions of requests per minute in under 50 milliseconds.
From coding practices to test automation, from monitoring tools to feature A/B testing - the entire development chain should be focused around removing blockers and manual steps between your code and your clients, without ever settling for quality. Join to see what makes our clients and developers happy and effective.
The document discusses designing software for testability in production. It recommends removing staging environments and running automated tests and a subset of code in production to gain confidence in software quality. Specific techniques include using sandbox accounts to test functionality without affecting real users, running integration tests that create and delete sandbox environments, and monitoring databases and logs in real-time to detect issues. Designing systems with isolation, well-defined interfaces, and the ability to restrict access and trace effects can help make testing in production safer and more effective. Tools like API documentation generators and mocks can also help improve quality.
Security Implications for a DevOps TransformationDevOps.com
If your organization is undergoing a DevOps transformation, you’re probably thinking about where security fits in. All too often, we tack on security testing at the end of the delivery process, which means significant problems go undetected until development is complete. As we adopt DevOps principles and practices, we enable a natural solution to this problem: ensure that security experts are involved throughout the delivery process.
In this webinar, DevOps.com and Puppet defined a reference implementation of DevOps from the ground up, by illustrating how the software delivery process evolves at a hypothetical startup. Once we've laid a technical foundation for DevOps, we discussed the implications for security. We also discussed:
Benefits for and challenges to security during a DevOps transformation
How to craft a DevOps-ready security practice
Refinements of a standard DevOps workflow to address security needs
It is always tough to test a complex API comprehensively. The additional level of complexity brings us to the question “How can we validate that our API is working as intended?”
In this talk I will explain how to use test driven development for APIs to solve this problem and even further how TDD can drive an API Design towards a more usable design. I will outline my practical approach with an implementation example based on django. And finally I will give you a brief summary of my lessons learned using this approach in customer projects.
This document outlines a structured approach to debugging distributed systems. It begins with observing and documenting the problem. The next steps involve creating a minimal reproducer, debugging the client and server sides, and checking DNS, routing, and network connections. Traffic and messages should also be inspected. The process concludes by wrapping up findings and conducting a post-mortem analysis. Key challenges in distributed systems like concurrency, lack of a global clock, and independent failures are discussed.
The document discusses software architecture in a DevOps world. It defines software architecture and DevOps, and explains how applying DevOps principles like gradual changes, customer orientation, automation, ownership, collaboration, experimentation and continuous improvement can help architects work with DevOps teams. The document provides examples of how each principle can be applied to software architecture. It emphasizes that software architecture should focus on business needs, involve developers, and evolve incrementally rather than being designed upfront.
Skills Matter DevSecOps eXchange Forum 2022 - Software architecture in a DevO...Bert Jan Schrijver
The document discusses how software architects can work with DevOps teams by applying DevOps principles to software architecture. Some key points made:
- DevOps principles like gradual changes, customer orientation, automation, ownership, collaboration, experimentation and continuous improvement should guide architectural decisions and processes.
- The architecture should start simply and evolve iteratively based on feedback from developers and customers.
- Automation, infrastructure as code, and measuring architectural decisions are important.
- The team should own the architecture collaboratively and the architect must be accountable and involved.
Continuous Delivery in Practice (extended)Tzach Zohar
Extended version of a previously uploaded presentation:
10 practical field-proven tips for building a continuously delivered service, based on Kenshoo's experience with its RTB service - a critical, high throughput, highly available component, serving millions of requests per minute in under 50 milliseconds.
From coding practices to test automation, from monitoring tools to feature A/B testing - the entire development chain should be focused around removing blockers and manual steps between your code and your clients, without ever settling for quality. Join to see what makes our clients and developers happy and effective.
The document discusses designing software for testability in production. It recommends removing staging environments and running automated tests and a subset of code in production to gain confidence in software quality. Specific techniques include using sandbox accounts to test functionality without affecting real users, running integration tests that create and delete sandbox environments, and monitoring databases and logs in real-time to detect issues. Designing systems with isolation, well-defined interfaces, and the ability to restrict access and trace effects can help make testing in production safer and more effective. Tools like API documentation generators and mocks can also help improve quality.
Security Implications for a DevOps TransformationDevOps.com
If your organization is undergoing a DevOps transformation, you’re probably thinking about where security fits in. All too often, we tack on security testing at the end of the delivery process, which means significant problems go undetected until development is complete. As we adopt DevOps principles and practices, we enable a natural solution to this problem: ensure that security experts are involved throughout the delivery process.
In this webinar, DevOps.com and Puppet defined a reference implementation of DevOps from the ground up, by illustrating how the software delivery process evolves at a hypothetical startup. Once we've laid a technical foundation for DevOps, we discussed the implications for security. We also discussed:
Benefits for and challenges to security during a DevOps transformation
How to craft a DevOps-ready security practice
Refinements of a standard DevOps workflow to address security needs
It is always tough to test a complex API comprehensively. The additional level of complexity brings us to the question “How can we validate that our API is working as intended?”
In this talk I will explain how to use test driven development for APIs to solve this problem and even further how TDD can drive an API Design towards a more usable design. I will outline my practical approach with an implementation example based on django. And finally I will give you a brief summary of my lessons learned using this approach in customer projects.
This document outlines a structured approach to debugging distributed systems. It begins with observing and documenting the problem. The next steps involve creating a minimal reproducer, debugging the client and server sides, and checking DNS, routing, and network connections. Traffic and messages should also be inspected. The process concludes by wrapping up findings and conducting a post-mortem analysis. Key challenges in distributed systems like concurrency, lack of a global clock, and independent failures are discussed.
The document discusses software architecture in a DevOps world. It defines software architecture and DevOps, and explains how applying DevOps principles like gradual changes, customer orientation, automation, ownership, collaboration, experimentation and continuous improvement can help architects work with DevOps teams. The document provides examples of how each principle can be applied to software architecture. It emphasizes that software architecture should focus on business needs, involve developers, and evolve incrementally rather than being designed upfront.
Skills Matter DevSecOps eXchange Forum 2022 - Software architecture in a DevO...Bert Jan Schrijver
The document discusses how software architects can work with DevOps teams by applying DevOps principles to software architecture. Some key points made:
- DevOps principles like gradual changes, customer orientation, automation, ownership, collaboration, experimentation and continuous improvement should guide architectural decisions and processes.
- The architecture should start simply and evolve iteratively based on feedback from developers and customers.
- Automation, infrastructure as code, and measuring architectural decisions are important.
- The team should own the architecture collaboratively and the architect must be accountable and involved.
TDC 2021 - Better software, faster: Principles of Continuous Delivery and DevOpsBert Jan Schrijver
This document discusses principles of Continuous Delivery (CD) and DevOps. It defines CD, Continuous Integration, and Continuous Deployment. The goal of CD is to have software ready for release at any time by building and testing it frequently. CD relies on principles like keeping everything in version control, automating processes, and having all team members share responsibility for software delivery. CD is enabled by practices like uniform build pipelines, test automation, and deploying changes frequently through production-like environments. CD requires a cultural shift and collaboration between development and operations.
This document discusses strategies for implementing continuous integration (CI) at scale. It describes the challenges of long build times when integrating code from many committers across a large codebase. Various CI strategies are evaluated, including multiple single jobs builds, pipelined builds, staged team commits, parallel jobs builds, and using a commit gate to check build status before committing code. The best approach depends on factors like code modularity, testability, and team distribution. Continuous integration, testing, and deployment are important, but one must consider build time, resource usage, and understandability of the system.
This document outlines a structured approach for debugging distributed systems. It begins with observing and documenting what is known about the problem. The next steps involve creating a minimal reproducer, debugging the client and server sides, and checking DNS, routing, and network connections. Traffic and messages should be inspected, with a focus on eliminating potential issues on the client side first. The process concludes by wrapping up with documentation of findings, impacts, and lessons learned to prevent future issues. Several tools are recommended for each step to aid in debugging distributed systems effectively.
The document discusses continuous integration and automated builds. It defines continuous integration as decreasing integration times to speed up software delivery. It outlines 5 steps to continuous integration: using a source control repository, automating builds, integrating continuously rather than just daily, using tools like Team Foundation Server, MSBuild and CruiseControl.NET, and following a development process of frequent check-ins and merges. Benefits include reduced risk and improved team collaboration and morale. Anti-patterns include delaying check-ins and ignoring broken builds.
2016 04-25 continuous integration at google scaleJohn Micco
Google runs continuous integration at an enormous scale, with over 30,000 developers submitting changes 30,000 times per day. They developed techniques like just-in-time scheduling to efficiently run the 150+ million tests per day in a linear rather than quadratic time complexity. This allows them to provide fast feedback to developers while keeping compute costs reasonable as the number of changes, tests, and developers grows enormously over time.
DevQAOps - Surviving in a DevOps WorldWinston Laoh
Talk given by Winston Laoh at the QA/LA meetup hosted at Q Los Angeles. The goal of the presentation was to inform and persuade test related engineers on how to integrate into DevOps organizations.
Today’s cutting edge companies have software release cycles measured in days instead of months. This agility is enabled by the DevOps practice of continuous delivery, which automates building, testing, and deploying all code changes. This automation helps you catch bugs sooner and accelerates developer productivity. In this session, we’ll share best practices (including ones followed internally at Amazon) and how you can bring them to your company by using open source and AWS services.
Speaker: Raghuraman Balachandran, Solutions Architect, Amazon India
Development is inherently collaborative. So why aren't you doing code review? This session discusses the importance of collaboration around your source code, the impact code review can have on development teams, and offers guidance on how to get started.
Atlassian Speaker: Matt Quail
Customer Speaker: Patrick Coleman of Dash
Key Takeaways:
* Peer code review explained
* Benefits and approaches to effective code review
Continuous Integration and Continuous Delivery (CI/CD) to release higher quality products faster. However, building a CI/CD pipeline can be challenging, and you can’t fully achieve successful CI/CD without a decent automation strategy.
Github Copilot and tools that help us code better are cool. But I’m lucky if I spend 90 minutes a day writing code. We really need to optimize the hours we spend reviewing code, updating tickets and tracing where our code is deployed. Learn how I save an hour a day streamlining non-coding tasks.
This talk is unique because 99% of developer productivity tools and hacks are about coding faster, better, smarter. And yet the vast majority of our time is spent doing all of this other stuff. After I started focusing on optimizing the 10 hours I spend every day on non-coding tasks, I found I my productivity went up and my frustration at annoying stuff went way down. I cover how to save time by reducing cognitive load and by cutting menial, non-coding tasks that we have to perform 10-50 times every day. For example:
Bug or hotfix comes through and you want to start working on it right away so you create a branch and start fixing. What you don’t do is create a Jira ticket but then later your boss/PM/CSM yells at your due to lack of visibility. I share how I automated ticket creation in Slack by correlating Github to Jira.
You have 20 minutes until your next meeting and you open a pull request and start a review. But you get pulled away half way through and when you come back the next day you forgot everything and have to start over. Huge waste of time. I share an ML job I wrote that tells me how long the review will take so I can pick PRs that fit the amount of time I have.
You build. You ship it. You own it. Great. But after I merge my code I never know where it actually is. Did the CI job fail? Is it release under feature flag? Did it just go GA to everyone? I share a bot I wrote that personally tells me where my code is in the pipeline after it leaves my hands so I can actually take full ownership without spending tons of time figuring out what code is in what release.
A quick guide through the wonders of teamwork with distributed version control systems, dependency management, build automation, and continuous integration and delivery.
This document discusses the DevOps movement and related concepts. It provides background on how development and operations teams historically worked separately ("Devs vs Ops") and the problems that caused. DevOps aims to break down barriers between teams through practices like automation, continuous integration/delivery, infrastructure as code, and collaboration between teams from the beginning of a project. The document outlines problems DevOps aims to solve and gives examples of tools and approaches for bringing development and operations cultures together.
Extreme Programming - to the next-levelLars Thorup
This document discusses taking extreme programming principles to the next level by turning traditional practices "up to 11". It explores several ideas for achieving faster feedback including mob programming, continuous deployment, hypothesis-driven user stories, shared product ownership, monitoring-driven development, and continuous learning. The author asks the reader to consider which ideas may work to implement in their organization this month.
This document summarizes Lars Thorup's presentation on fast end-to-end tests. It discusses the drawbacks of traditional end-to-end tests like being slow, imprecise, complicated and fragile. Unit tests are faster but rely on mocked dependencies. The presentation demonstrates a way to automatically generate mocks to test applications from end-to-end very quickly while avoiding the downsides of mocks. The demo shows a Node.js app with front-end and back-end tested at almost 100 end-to-end tests per second.
QA in DevOps: Transformation thru Automation via JenkinsTatyana Kravtsov
This document outlines the agenda for a Jenkins World Tour 2015 presentation in Washington D.C. on QA in DevOps through automation using Jenkins. The presentation discusses the definition of DevOps and provides a 10 step process to DevOps transformation focusing on continuous integration, automated testing, code quality metrics, environment testing, and automated reporting. The presenter is Tanya Kravtsov, founder of the DevOpsQA NJ Meetup group.
The New York Times: Sustainable Systems, Powered by PythonAll Things Open
The document discusses Python tools and techniques for building sustainable systems at the New York Times. It describes applications they build using a Python-based microservices framework called Photon for ingesting and managing photos. It then summarizes several Python packages they use for installation and running applications, ensuring code quality, adding features, providing resilient runtimes, and generating automated documentation.
QA Fest 2017. Диана Пинчук. Разработка мульти- платформенного мобильного SDK:...QAFest
Мобильная индустрия развивается быстрыми темпами, и многие тестировщики сталкивались со спецификой тестирования мобильных приложений. Но кроме полноценных мобильных продуктов некоторые компании разрабатывают SDK, которые используются другими разработчиками. В докладе Вы услышите об особенностях тестирования мобильного SDK, почему Вашего тестирования никогда не будет достаточно, и убедительные доводы о том, почему разработчики - лучшие друзья тестировщиков :)
This document discusses continuous delivery, which aims to build, test, and release software faster through frequent integration and deployment. The goals are quality, speed, and reducing the time it takes to deploy changes from development to production through practices like test-driven development, continuous integration, automated testing, and deployment pipelines. It provides an overview of tools to support continuous delivery processes.
AgileLINC Continous Slides by Daniel HarpBarry Gavril
This document discusses continuous integration practices. It defines continuous integration as an attitude rather than a tool. Key practices of continuous integration include: maintaining a single source repository, automating builds, keeping builds fast, making builds self-testing through automated tests, committing to the mainline daily to avoid branches, triggering a build on every commit through a build server, making every commit potentially shippable, ensuring binary integrity, testing in a clone of production, making the latest executable easily available, and enabling everyone to see what's happening. The document emphasizes focusing on organizational, architectural and process changes over tools when adopting continuous integration.
It has been said that Scrum doesn’t fix problems, it just makes them visible. Fixing the problems is the responsibility of the team. If you don’t have interest in solving the problems, Scrum just causes you pain without benefit. Continuous delivery is similar.
The talk will go through the pains encountered and reliefs implemented by a team aiming for continuous delivery. Many of the implemented practices are valuable on their own but continuous delivery brings the pain so visible it can’t be ignored. We will also go through concrete tools used to accelerate the process towards faster feedback, confidence and predictability given by continuous delivery.
Continuous Delivery on a Modern Web StackLuke Crouch
With the modern web, in a single hour, we can deploy fully-functioning applications with baseline deployment, reliability, scalability, monitoring, and analytics infrastructure that used to take weeks of heavy lifting. This talk is a whirlwind tour using modern, cloud-based, turn-key web development services to go from "Code Zero" to a continuously delivered & monitored basic web application, including:
* GitHub for code hosting
* Travis CI for continuous integration
* Heroku for deployment
* New Relic for monitoring
* Google Analytics
By the end of this talk, you will have a basic understanding of how (mostly free) Cloud? services can continuously deliver a simple web application.
TDC 2021 - Better software, faster: Principles of Continuous Delivery and DevOpsBert Jan Schrijver
This document discusses principles of Continuous Delivery (CD) and DevOps. It defines CD, Continuous Integration, and Continuous Deployment. The goal of CD is to have software ready for release at any time by building and testing it frequently. CD relies on principles like keeping everything in version control, automating processes, and having all team members share responsibility for software delivery. CD is enabled by practices like uniform build pipelines, test automation, and deploying changes frequently through production-like environments. CD requires a cultural shift and collaboration between development and operations.
This document discusses strategies for implementing continuous integration (CI) at scale. It describes the challenges of long build times when integrating code from many committers across a large codebase. Various CI strategies are evaluated, including multiple single jobs builds, pipelined builds, staged team commits, parallel jobs builds, and using a commit gate to check build status before committing code. The best approach depends on factors like code modularity, testability, and team distribution. Continuous integration, testing, and deployment are important, but one must consider build time, resource usage, and understandability of the system.
This document outlines a structured approach for debugging distributed systems. It begins with observing and documenting what is known about the problem. The next steps involve creating a minimal reproducer, debugging the client and server sides, and checking DNS, routing, and network connections. Traffic and messages should be inspected, with a focus on eliminating potential issues on the client side first. The process concludes by wrapping up with documentation of findings, impacts, and lessons learned to prevent future issues. Several tools are recommended for each step to aid in debugging distributed systems effectively.
The document discusses continuous integration and automated builds. It defines continuous integration as decreasing integration times to speed up software delivery. It outlines 5 steps to continuous integration: using a source control repository, automating builds, integrating continuously rather than just daily, using tools like Team Foundation Server, MSBuild and CruiseControl.NET, and following a development process of frequent check-ins and merges. Benefits include reduced risk and improved team collaboration and morale. Anti-patterns include delaying check-ins and ignoring broken builds.
2016 04-25 continuous integration at google scaleJohn Micco
Google runs continuous integration at an enormous scale, with over 30,000 developers submitting changes 30,000 times per day. They developed techniques like just-in-time scheduling to efficiently run the 150+ million tests per day in a linear rather than quadratic time complexity. This allows them to provide fast feedback to developers while keeping compute costs reasonable as the number of changes, tests, and developers grows enormously over time.
DevQAOps - Surviving in a DevOps WorldWinston Laoh
Talk given by Winston Laoh at the QA/LA meetup hosted at Q Los Angeles. The goal of the presentation was to inform and persuade test related engineers on how to integrate into DevOps organizations.
Today’s cutting edge companies have software release cycles measured in days instead of months. This agility is enabled by the DevOps practice of continuous delivery, which automates building, testing, and deploying all code changes. This automation helps you catch bugs sooner and accelerates developer productivity. In this session, we’ll share best practices (including ones followed internally at Amazon) and how you can bring them to your company by using open source and AWS services.
Speaker: Raghuraman Balachandran, Solutions Architect, Amazon India
Development is inherently collaborative. So why aren't you doing code review? This session discusses the importance of collaboration around your source code, the impact code review can have on development teams, and offers guidance on how to get started.
Atlassian Speaker: Matt Quail
Customer Speaker: Patrick Coleman of Dash
Key Takeaways:
* Peer code review explained
* Benefits and approaches to effective code review
Continuous Integration and Continuous Delivery (CI/CD) to release higher quality products faster. However, building a CI/CD pipeline can be challenging, and you can’t fully achieve successful CI/CD without a decent automation strategy.
Github Copilot and tools that help us code better are cool. But I’m lucky if I spend 90 minutes a day writing code. We really need to optimize the hours we spend reviewing code, updating tickets and tracing where our code is deployed. Learn how I save an hour a day streamlining non-coding tasks.
This talk is unique because 99% of developer productivity tools and hacks are about coding faster, better, smarter. And yet the vast majority of our time is spent doing all of this other stuff. After I started focusing on optimizing the 10 hours I spend every day on non-coding tasks, I found I my productivity went up and my frustration at annoying stuff went way down. I cover how to save time by reducing cognitive load and by cutting menial, non-coding tasks that we have to perform 10-50 times every day. For example:
Bug or hotfix comes through and you want to start working on it right away so you create a branch and start fixing. What you don’t do is create a Jira ticket but then later your boss/PM/CSM yells at your due to lack of visibility. I share how I automated ticket creation in Slack by correlating Github to Jira.
You have 20 minutes until your next meeting and you open a pull request and start a review. But you get pulled away half way through and when you come back the next day you forgot everything and have to start over. Huge waste of time. I share an ML job I wrote that tells me how long the review will take so I can pick PRs that fit the amount of time I have.
You build. You ship it. You own it. Great. But after I merge my code I never know where it actually is. Did the CI job fail? Is it release under feature flag? Did it just go GA to everyone? I share a bot I wrote that personally tells me where my code is in the pipeline after it leaves my hands so I can actually take full ownership without spending tons of time figuring out what code is in what release.
A quick guide through the wonders of teamwork with distributed version control systems, dependency management, build automation, and continuous integration and delivery.
This document discusses the DevOps movement and related concepts. It provides background on how development and operations teams historically worked separately ("Devs vs Ops") and the problems that caused. DevOps aims to break down barriers between teams through practices like automation, continuous integration/delivery, infrastructure as code, and collaboration between teams from the beginning of a project. The document outlines problems DevOps aims to solve and gives examples of tools and approaches for bringing development and operations cultures together.
Extreme Programming - to the next-levelLars Thorup
This document discusses taking extreme programming principles to the next level by turning traditional practices "up to 11". It explores several ideas for achieving faster feedback including mob programming, continuous deployment, hypothesis-driven user stories, shared product ownership, monitoring-driven development, and continuous learning. The author asks the reader to consider which ideas may work to implement in their organization this month.
This document summarizes Lars Thorup's presentation on fast end-to-end tests. It discusses the drawbacks of traditional end-to-end tests like being slow, imprecise, complicated and fragile. Unit tests are faster but rely on mocked dependencies. The presentation demonstrates a way to automatically generate mocks to test applications from end-to-end very quickly while avoiding the downsides of mocks. The demo shows a Node.js app with front-end and back-end tested at almost 100 end-to-end tests per second.
QA in DevOps: Transformation thru Automation via JenkinsTatyana Kravtsov
This document outlines the agenda for a Jenkins World Tour 2015 presentation in Washington D.C. on QA in DevOps through automation using Jenkins. The presentation discusses the definition of DevOps and provides a 10 step process to DevOps transformation focusing on continuous integration, automated testing, code quality metrics, environment testing, and automated reporting. The presenter is Tanya Kravtsov, founder of the DevOpsQA NJ Meetup group.
The New York Times: Sustainable Systems, Powered by PythonAll Things Open
The document discusses Python tools and techniques for building sustainable systems at the New York Times. It describes applications they build using a Python-based microservices framework called Photon for ingesting and managing photos. It then summarizes several Python packages they use for installation and running applications, ensuring code quality, adding features, providing resilient runtimes, and generating automated documentation.
QA Fest 2017. Диана Пинчук. Разработка мульти- платформенного мобильного SDK:...QAFest
Мобильная индустрия развивается быстрыми темпами, и многие тестировщики сталкивались со спецификой тестирования мобильных приложений. Но кроме полноценных мобильных продуктов некоторые компании разрабатывают SDK, которые используются другими разработчиками. В докладе Вы услышите об особенностях тестирования мобильного SDK, почему Вашего тестирования никогда не будет достаточно, и убедительные доводы о том, почему разработчики - лучшие друзья тестировщиков :)
This document discusses continuous delivery, which aims to build, test, and release software faster through frequent integration and deployment. The goals are quality, speed, and reducing the time it takes to deploy changes from development to production through practices like test-driven development, continuous integration, automated testing, and deployment pipelines. It provides an overview of tools to support continuous delivery processes.
AgileLINC Continous Slides by Daniel HarpBarry Gavril
This document discusses continuous integration practices. It defines continuous integration as an attitude rather than a tool. Key practices of continuous integration include: maintaining a single source repository, automating builds, keeping builds fast, making builds self-testing through automated tests, committing to the mainline daily to avoid branches, triggering a build on every commit through a build server, making every commit potentially shippable, ensuring binary integrity, testing in a clone of production, making the latest executable easily available, and enabling everyone to see what's happening. The document emphasizes focusing on organizational, architectural and process changes over tools when adopting continuous integration.
It has been said that Scrum doesn’t fix problems, it just makes them visible. Fixing the problems is the responsibility of the team. If you don’t have interest in solving the problems, Scrum just causes you pain without benefit. Continuous delivery is similar.
The talk will go through the pains encountered and reliefs implemented by a team aiming for continuous delivery. Many of the implemented practices are valuable on their own but continuous delivery brings the pain so visible it can’t be ignored. We will also go through concrete tools used to accelerate the process towards faster feedback, confidence and predictability given by continuous delivery.
Continuous Delivery on a Modern Web StackLuke Crouch
With the modern web, in a single hour, we can deploy fully-functioning applications with baseline deployment, reliability, scalability, monitoring, and analytics infrastructure that used to take weeks of heavy lifting. This talk is a whirlwind tour using modern, cloud-based, turn-key web development services to go from "Code Zero" to a continuously delivered & monitored basic web application, including:
* GitHub for code hosting
* Travis CI for continuous integration
* Heroku for deployment
* New Relic for monitoring
* Google Analytics
By the end of this talk, you will have a basic understanding of how (mostly free) Cloud? services can continuously deliver a simple web application.
TYPO3 Camp Stuttgart 2015 - Continuous Delivery with Open Source ToolsMichael Lihs
In diesem Talk beschreibe ich die Continuous Integartion Pipeline von punkt.de und deren Entstehen. Es wird motiviert, warum es sich lohnt, eine solche Pipeline zu implementieren und welche Tools wir dafür verwendet haben. Neben der Beschreibung von Git, Jenkins, Chef, Vagrant, Behat und Surf geht es auch um Integration der einzelnen Tools in eine Deployment Kette.
Continuous Delivery leveraging on Docker CaaS by Adrien BlindDocker, Inc.
At Societe Generale GBIS, time to market & quality matters; hence we do love continuous delivery. In this context, we’re considering the Container as a Service pattern: artifacts produced by the continuous integration chain would become self-sufficient “dockerized” application modules, onboarding both code and subsequent system requirements; then, a CaaS cloud would enable to host these containers. In this talk, I’ll present our usecase and current findings, considering both technical & operational aspects. We’ll talk about software factories, immutable IT, registries, containers configuration, API-driven infrastructure, DevOps roles shifts. Finally, we’ll discuss pros/cons of this solution toward regular IaaS and PaaS.
Microservices = Death of the Enterprise Service Bus (ESB)?Kai Wähner
Microservices are the next step after SOA: Services implement a limited set of functions. Services are developed, deployed and scaled independently. Continuous Integration and Continuous Delivery control deployments. This way you get shorter time to results and increased flexibility.
Microservices have to be independent regarding build, deployment, data management and business domains. A solid Microservices design requires single responsibility, loose coupling and a decentralized architecture. A Microservice can to be closed or open to partners and public via APIs.
This session discusses the requirements, best practices and challenges for creating a good Microservices architecture, and if this spells the end of the Enterprise Service Bus (ESB).
Key messages of the talk:
• Microservices = SOA done right
• Integration is key for success – the product name does not matter
• Real time event correlation is the game changer
The document discusses using cloud computing resources for continuous integration (CI) builds to address bottlenecks and improve build times. It notes that moving CI builds to the cloud provides on-demand servers, predictable and fast build times due to elastic computing capacity, and only paying for resources when used. Some considerations for moving to the cloud include making existing infrastructure accessible and transitioning control of the build systems.
SonarQube is an open source tool that can be used to automate code quality metrics through continuous inspection of codebases to identify issues, aggregate metrics across projects, and provide dashboards and reports. It supports multiple languages including C# and JavaScript, and can be integrated into continuous integration pipelines through plugins. Using SonarQube allows teams to focus QA efforts by prioritizing the most important issues, track improvements over time, and gain insights into code quality trends across an organization.
Beyond the Release: CI That Transforms OrganizationsSauce Labs
When DevOps talk meets DevOps tactics, companies are finding that Continuous Integration (CI) is the make or break point. And implementing CI is one thing, but making it healthy and sustainable takes a little bit more consideration.
In this webinar, Chris Riley (DevOps Analyst) and Andy Pemberton (CloudBees) will show you how Jenkins and Sauce Labs can work together to build a comprehensive CI tool set to help you release faster, at a higher quality and with more visibility.
This document provides an introduction to DevOps concepts including continuous integration, continuous delivery, infrastructure as code, and configuration management. It discusses the need for DevOps to improve processes like manual setups, lack of change tracking, and long release cycles. Key DevOps practices include infrastructure as code, configuration management, continuous integration, and continuous delivery. The document demonstrates a continuous delivery pipeline using Gocd.
DevOps purists may chafe at the DevSecOps term given that security and other important practices are supposed to already be an integral part of routine DevOps workflows. But the reality is that security often gets more lip service than thoughtful and systematic integration into open source software sourcing, development pipelines, and operations processes--in spite of an increasing number of threats.
In this session, we’ll look at successful practices that distributed and diverse teams use to iterate rapidly. We’ll discuss how a container platform can serve as the foundation for DevSecOps in your organization. We'll also consider the risk management associated with integrating components from a variety of sources--a consideration that open source software has had to deal with since the beginning. Finally, we'll show ways by which automation and repeatable trusted delivery of code can be built directly into a DevOps pipeline.
One of the challenges faced by many web development based projects is the integration of source code for multiple releases during parallel development. The task to build and test the multiple versions of source code can eat out the quality time and limit the efficiency of the development/QA team. The case study focuses to resolve the issues of extensive effort consumed in build and deployment process from multiple branches in source repository and aim at Identification of source code integration issues at the earliest stage. This can further be enhanced to limit the manual intervention by integration of build system with test automation tool.
The above can be achieved by using different CI tools (like Hudson/Bamboo/TeamCity/CruiseControl etc) for continuous build preparation and its integration with any test automation suite. The case study specifies the use of CI-Hudson tool for continuous integration using ANT tool for build preparation and further invoking the automation test suite developed using selenium. It also discusses the limitations and challenges of using such an integration system for testing a web based application deployed on Apache Tomcat server. It also details additional plugins available to enhance such an integration of multiple systems and what can be achieved using the above integration.
DevOps Automation and Maturity using FlexDeploy, webMethods demo: Kellton Web...Kellton Tech Solutions Ltd
This document discusses DevOps maturity and automation using the FlexDeploy platform. It provides an overview of FlexDeploy's build automation, deployment automation, and release pipeline orchestration capabilities. FlexDeploy allows comprehensive automation of the software development lifecycle from build through deployment. It offers out-of-the-box integrations with common tools and the ability to eliminate scripting. The document also highlights challenges with traditional approaches to deploying webMethods assets and how the FlexDeploy webMethods plugin streamlines continuous integration and deployment for webMethods environments.
Building an automated database deployment pipeline.
This presentation will cover:
1) Understand the technology and process requirements to work towards automation step-by-step in your release pipeline.
2) Learn about the organizational changes necessary to support process modifications.
3) Appreciate why these changes are necessary in support of modern development and deployment methodologies. To find out more go to http://www.red-gate.com/delivery/
Training Bootcamp - MainframeDevOps.pptxNashet Ali
Cloud Migration services from your on-premise environment can sometimes be very simple and other times an extremely complicated project to implement. For either scenario, there are always considerations to bear in mind when doing so. This course has been designed to highlight these topics to help you ask the right questions to aid in a successful Cloud migration.
Within this course, we look at how timing plays an important part in your project's success and why phased deployments are important. Security is also examined where we focus on a number of key questions that you should have answers to from a business perspective before your Cloud migration. One of the biggest decisions is your chosen public cloud vendor, how do you make the decision between the available vendors, what should you look for when selecting you will host your architecture, this course dives into this question to help you finalize your choice.
Understanding the correct deployment model is essential, it affects how you architect your environment and each provides different benefits, so gaining the knowledge. I look at how you can break this question down to help you with your design considerations. We also cover service readiness from your on-premise environment and how to align these to the relevant Cloud services. Your design will certainly be different from your on-premise solution, I discuss the best approach when you start to think about your solution design, some of the dos and some of the don’ts.
Once you have your design, it’s important to understand how you are actually going to migrate your services ensuring optimum availability and minimal interruption to your customer base, for example looking at Blue/Green and Canary deployments. Cloud migration allows for some great advantages within your business continuity plans, as a result, I have included a lecture to discuss various models that work great within the Cloud.
Course Objectives
By completing this course you will:
Have greater visibility of some of the key points of a cloud migration
Be able to confidently assess the requirements for your migration
Intended Audience
This course has been designed for anyone who works or operates in business management, business strategy, technical management, and technical operations.
Prerequisites
For this course, it's assumed that you have a working knowledge of cloud computing and cloud principles.
What You Will Learn about Cloud Migration
Introduction - This provides an introduction to the trainer and covers the intended audience. We will also look at what lectures are included in the course, and what you will gain as a student from attending the course.
Time Management – How time plays an important part in successful cloud migration. We discuss the key points to allow time for and how to use it to plan a phased migration.
Security – This lecture will give you the ability to ask the key security questions to the business before performing a migration to the Cloud.
Software Delivery in 2016 - A Continuous Delivery ApproachGiovanni Toraldo
The speech "Software Delivery in 2016" was held by Giovanni Toraldo (Lead Developer at ClouDesire) on July 1st 2016 in Pisa, Italy.
Event: Apericoder
Organizer: Coders TUG
Ci of js and apex using jasmine, phantom js and drone io df14Kevin Poorman
This document discusses using continuous integration and continuous delivery for Salesforce development. It introduces the concepts of CI and CD and describes using Grunt, Drone.io, Jasmine, Istanbul and Ant together in an opinionated stack. Grunt is used to define tasks. Jasmine is used for JavaScript testing. Ant is used for Apex tests and deploying to orgs. Drone.io automates running builds and deploying code changes to development and QA orgs after code is committed.
The Continuous delivery Value @ codemotion 2014David Funaro
System Crash, failure data migration, partial update: issues that no one would ever want to meet during the deploy and ... hoping for the best is not enough.
The deployment activity is important as those that precede it. The Continuous Delivery will give you low risk, cheap, fast, predictable delivery and ... soundly.
This document discusses the benefits of continuous delivery and deployment. It notes that without proper processes, deployments can fail due to crashes, failed migrations, or interrupted updates when introducing new features. Continuous delivery uses tools and methodologies to make releases low risk, fast, predictable, and ensure smooth deployments. The document outlines some of the key aspects of continuous delivery like source code management, continuous integration, automated deployments, monitoring, and root cause analysis. It discusses how these practices can help make software releases cheaper, more frequent, rapid, and reduce stress and errors compared to traditional release processes.
Building an Automated Database Deployment PipelineGrant Fritchey
The pace of business accelerates fairly continuously and application development moves right with it. But we’re still trying to deploy databases the same way we did 10 years ago. This session addresses the need for changes in organizational structure, process and technology necessary to arrive at a nimble, fast, automatable and continuous database deployment process. We’ll use actual customer case studies to illustrate both the common methods and the unique context that led to a continuous delivery process that is best described as a pipeline. You will learn how to customize common practices and tool sets to build a database deployment pipeline unique to your environment in order to speed your own database delivery while still protecting your organization’s most valuable asset, it’s data.
Continuous Delivery with Jenkins declarative pipeline XPDays-2018-12-08Борис Зора
When you start your journey with µServices, you should be confident with your delivery lifecycle. In case of mistake, you should be able to navigate to appropriate tag in vcs to reproduce the bug with a test & go though pipeline within 3 hours to production with high confidence of quality.
We will discuss set of tools that could help you to achieve this within 3 months on your project. It does not include system decoupling suggestions. And in the same time, if you decide to break down monolith, it is better to do with dev & devOps best practices.
Security Implications for a DevOps TransformationDeborah Schalm
DevOps aims to break down silos between development and operations teams through collaboration, automation, and continuous delivery. While this provides benefits, it can also introduce security risks if security is not properly included. The presentation discusses five key aspects of a DevOps transformation and their security implications. It argues that DevOps and security are not mutually exclusive if security is incorporated through collaboration, automated testing of security requirements, and accelerating remediation of vulnerabilities.
Delivery Pipelines as a First Class Citizen @deliverAgile2019ciberkleid
In this talk, we will cover important elements for successful CI and CD. We will discuss how these elements make CI and CD much simpler, and hence more attainable. We will cover some best practices / recommendations to include in your application pipelines. We will look at a sample implementation of a pipeline leveraging modern tools. Finally, we will discuss some forthcoming ideas for making it even easier to declaratively enable CI and CD for applications.
This document outlines best practices for continuous operations in DevOps. It discusses challenges with traditional system administration approaches being slower than development teams. The document recommends a DevOps approach where system administrators have development skills and focus on automation to deploy code within minutes. It provides an overview of continuous integration/delivery pipelines and infrastructure as code using AWS Elastic Beanstalk as an example. The document concludes by offering additional DevOps training.
The webinar discusses enabling continuous performance testing with Jenkins CI/CD pipelines. It introduces SOASTA and CloudBees as partners that offer a complete cloud-based service for continuous performance testing and continuous delivery with Jenkins. The webinar agenda includes building performance tests, connecting tests to Jenkins, establishing performance baselines, executing tests in parallel with CD pipelines in Jenkins Workflow, and reviewing performance and functional test results.
Using PaaS for Continuous Delivery (Cloud Foundry Summit 2014)VMware Tanzu
Technical Track presented by Elisabeth Hendrickson at Pivotal.
With continuous delivery, you release frequently and with very little, or no, manual intervention. That requires three things: fully automated tests; a continuous integration server that executes those tests and can promote successful deployments; and an automated deployment mechanism with zero downtime. PaaS's are a perfect fit for this. Cloud Foundry makes zero-downtime automated deployments straightforward. Further, cloud-based CI services such as Cloudbees work well with Cloud Foundry. In this talk, Elisabeth explains how to achieve continuous delivery with Cloud Foundry using one of our own applications (docs.cloudfoundry.org) as an example.
Digital Marketing Trends in 2024 | Guide for Staying AheadWask
https://www.wask.co/ebooks/digital-marketing-trends-in-2024
Feeling lost in the digital marketing whirlwind of 2024? Technology is changing, consumer habits are evolving, and staying ahead of the curve feels like a never-ending pursuit. This e-book is your compass. Dive into actionable insights to handle the complexities of modern marketing. From hyper-personalization to the power of user-generated content, learn how to build long-term relationships with your audience and unlock the secrets to success in the ever-shifting digital landscape.
Monitoring and Managing Anomaly Detection on OpenShift.pdfTosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/temporal-event-neural-networks-a-more-efficient-alternative-to-the-transformer-a-presentation-from-brainchip/
Chris Jones, Director of Product Management at BrainChip , presents the “Temporal Event Neural Networks: A More Efficient Alternative to the Transformer” tutorial at the May 2024 Embedded Vision Summit.
The expansion of AI services necessitates enhanced computational capabilities on edge devices. Temporal Event Neural Networks (TENNs), developed by BrainChip, represent a novel and highly efficient state-space network. TENNs demonstrate exceptional proficiency in handling multi-dimensional streaming data, facilitating advancements in object detection, action recognition, speech enhancement and language model/sequence generation. Through the utilization of polynomial-based continuous convolutions, TENNs streamline models, expedite training processes and significantly diminish memory requirements, achieving notable reductions of up to 50x in parameters and 5,000x in energy consumption compared to prevailing methodologies like transformers.
Integration with BrainChip’s Akida neuromorphic hardware IP further enhances TENNs’ capabilities, enabling the realization of highly capable, portable and passively cooled edge devices. This presentation delves into the technical innovations underlying TENNs, presents real-world benchmarks, and elucidates how this cutting-edge approach is positioned to revolutionize edge AI across diverse applications.
zkStudyClub - LatticeFold: A Lattice-based Folding Scheme and its Application...Alex Pruden
Folding is a recent technique for building efficient recursive SNARKs. Several elegant folding protocols have been proposed, such as Nova, Supernova, Hypernova, Protostar, and others. However, all of them rely on an additively homomorphic commitment scheme based on discrete log, and are therefore not post-quantum secure. In this work we present LatticeFold, the first lattice-based folding protocol based on the Module SIS problem. This folding protocol naturally leads to an efficient recursive lattice-based SNARK and an efficient PCD scheme. LatticeFold supports folding low-degree relations, such as R1CS, as well as high-degree relations, such as CCS. The key challenge is to construct a secure folding protocol that works with the Ajtai commitment scheme. The difficulty, is ensuring that extracted witnesses are low norm through many rounds of folding. We present a novel technique using the sumcheck protocol to ensure that extracted witnesses are always low norm no matter how many rounds of folding are used. Our evaluation of the final proof system suggests that it is as performant as Hypernova, while providing post-quantum security.
Paper Link: https://eprint.iacr.org/2024/257
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
Digital Banking in the Cloud: How Citizens Bank Unlocked Their MainframePrecisely
Inconsistent user experience and siloed data, high costs, and changing customer expectations – Citizens Bank was experiencing these challenges while it was attempting to deliver a superior digital banking experience for its clients. Its core banking applications run on the mainframe and Citizens was using legacy utilities to get the critical mainframe data to feed customer-facing channels, like call centers, web, and mobile. Ultimately, this led to higher operating costs (MIPS), delayed response times, and longer time to market.
Ever-changing customer expectations demand more modern digital experiences, and the bank needed to find a solution that could provide real-time data to its customer channels with low latency and operating costs. Join this session to learn how Citizens is leveraging Precisely to replicate mainframe data to its customer channels and deliver on their “modern digital bank” experiences.
GraphRAG for Life Science to increase LLM accuracyTomaz Bratanic
GraphRAG for life science domain, where you retriever information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers
Building Production Ready Search Pipelines with Spark and MilvusZilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
Dandelion Hashtable: beyond billion requests per second on a commodity serverAntonios Katsarakis
This slide deck presents DLHT, a concurrent in-memory hashtable. Despite efforts to optimize hashtables, that go as far as sacrificing core functionality, state-of-the-art designs still incur multiple memory accesses per request and block request processing in three cases. First, most hashtables block while waiting for data to be retrieved from memory. Second, open-addressing designs, which represent the current state-of-the-art, either cannot free index slots on deletes or must block all requests to do so. Third, index resizes block every request until all objects are copied to the new index. Defying folklore wisdom, DLHT forgoes open-addressing and adopts a fully-featured and memory-aware closed-addressing design based on bounded cache-line-chaining. This design offers lock-free index operations and deletes that free slots instantly, (2) completes most requests with a single memory access, (3) utilizes software prefetching to hide memory latencies, and (4) employs a novel non-blocking and parallel resizing. In a commodity server and a memory-resident workload, DLHT surpasses 1.6B requests per second and provides 3.5x (12x) the throughput of the state-of-the-art closed-addressing (open-addressing) resizable hashtable on Gets (Deletes).
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
7. Why Do You Want CI?
● Code Repository
● Small Commits Pushed Frequently
● Automated Tests
● Frequent Automated Builds
● Build Notification
● Test in a Production Clone
● Easy Access to Deliverables
8. Advantages
● Idempotent
● Early Detection of
o Failures
o Code Conflicts
● Immediate Testing
o TDD
o BDD
o Regression
● Immediate Availability of Current Code Base
o Demos
o Testing