The document discusses techniques for achieving zero downtime deployments. It begins with an introduction and overview before covering specific methods such as blue-green deployments, canary releases, and rolling deployments. It also provides details on tools that can be used and considerations for deploying to web servers and databases. The document advocates combining different techniques into a hybrid 1/10/100 approach for deploying code changes to environments in a phased manner to minimize risk.
Zero downtime deployments with Laravel Envoy - Tung Nguyen
This document discusses zero downtime deployments using Laravel Envoy. It begins with fundamentals of deployment including definitions, best practices, and tools. Common deployment tools mentioned include Capistrano, Ansible, and Envoy. The document then introduces Laravel Envoy, describing it as a tool to define common tasks on remote servers using Blade syntax. Key points about Envoy include installing it globally, writing tasks in an Envoy.blade.php file, and running tasks via the envoy command. The document concludes with an overview of how Envoy enables zero downtime deployments through stories for setup, deploy, and rollback.
- AWS OpsWorks for Chef Automate provides a fully managed Chef Automate server on AWS to help with infrastructure configuration management.
- It allows users to easily create an AWS managed Chef server in about 10 minutes to define infrastructure using code.
- The service handles backups, security updates, and Chef software updates automatically so users can focus on writing cookbooks and recipes.
Configuration Management in the Cloud - AWS Online Tech Talks - Amazon Web Services
Learning Objectives:
- Learn how to use AWS OpsWorks, AWS CodeDeploy, and AWS CodePipeline to build a reliable and consistent development pipeline
- Understand continuous integration and delivery for Infrastructure as Code
- Learn how to get started with these services.
Amazon EC2 Systems Manager for Hybrid Cloud Management at Scale - Amazon Web Services
Amazon EC2 Systems Manager provides capabilities for automated management of systems at scale across AWS and on-premises environments. It includes components such as Run Command, State Manager, Inventory, Maintenance Windows, Patch Manager, and Automation. These capabilities enable organizations to remotely and securely manage servers, address configuration drift, simplify patching processes, and define automation workflows. Amazon EC2 Systems Manager helps reduce costs and complexity compared to traditional management approaches.
This document provides tips and best practices for using AWS Elastic Load Balancers (ELBs). It covers topics like load testing ELBs, using SSL with ELBs, CNAME records, balancing traffic both within and across availability zones, L4 load balancing support, internal ELBs, ELB logging, stickiness, blue/green deployments using ELBs, connection draining, using the ELB CLI for continuous integration/continuous delivery, auto scaling with ELB metrics, using CloudFront in front of ELBs, and some limitations around microservices support. The overall message is that ELBs are generally easy to use but have some limitations, so it's important to understand how to configure them properly.
The document discusses options for optimizing server performance including using alternative databases like MariaDB instead of MySQL, implementing caching at the page level and web server level using techniques like mod_pagespeed, using Nginx as a web server or reverse proxy, and load balancing. It promotes using these advanced techniques to achieve wicked fast website performance.
This document provides an overview of a presentation on developing highly scalable applications. The presentation covers core concepts of scalability and availability, techniques for vertical and horizontal scaling, capacity planning, code examples for clustering, load balancing, and lazy loading, and frameworks like Hazelcast and JCache. It is delivered by Afkham Azeez, who has experience with Apache projects, web services, and architecture roles. The agenda includes scaling techniques, planning capacity, and a Q&A session.
Sascha Möllering gave a presentation on deploying applications to the AWS cloud. He began with an overview of AWS services like EC2, S3, RDS and explained how to initially create a simple cloud service with one instance each for a web application and database. He then described how to improve the architecture by separating components, adding redundancy and elasticity using services like ELB, autoscaling and read replicas. Sascha demonstrated deploying a sample application built with JHipster and Docker to AWS Elastic Beanstalk, which handles running the containers and mapping environment variables for the database connection.
(DVO205) Monitoring Evolution: Flying Blind to Flying by Instrument - Amazon Web Services
Today, AdRoll runs its infrastructure by instrumentation: constantly asking empirical questions, analyzing data for answers, and designing new features with instrumentation in mind to understand how functionality will work upon release. AdRoll’s development methodology did not start out this way, however. It took a cultural shift and many new tools and processes to adopt this approach. In this session, AdRoll and Datadog will discuss how to evolve your organization from a state of “flying blind” to a culture focused on monitoring and data-based decisions. Session sponsored by Datadog.
Scaling Wix with microservices and multi cloud - 2015 - Aviran Mordo
Many small startups build their systems on top of a traditional toolset like Tomcat, Hibernate, and MySQL. These systems are used because they facilitate easy development and fast progress, but many of them are monolithic and have limited scalability. So as a startup grows, the team is confronted with the problem of how to evolve the system and make it scalable. Facing the same dilemma, Wix.com grew from 0 to 70 million users in just a few years, running into interesting challenges along the way, like performance and availability. Traditional performance solutions, such as caching, would not help due to a very long tail problem which causes caching to be highly inefficient. And because every minute of downtime means customers lose money, the product needed to have near 100% availability. Solving these issues required some interesting and out-of-the-box thinking, and this talk will discuss some of these strategies: building a highly performant, highly available and highly scalable system; and leveraging microservices architecture and multi-cloud platforms to help build a very efficient and cost-effective system.
This document summarizes an event-driven architecture presentation using Java. It discusses using Apache Kafka/Amazon Kinesis for messaging, Docker for containerization, Vert.x for reactive applications, Apache Camel/AWS Lambda for integration, and Google Protocol Buffers for data serialization. It covers infrastructure components, software frameworks, local and AWS deployment, and integration testing between Kinesis and Kafka. The presentation provides resources for code samples and Docker images discussed.
This document introduces React on Rails, which allows using React, Redux, and React-Router within Ruby on Rails views. It discusses using Webpack and NPM to manage front-end assets, integrating React components with Rails, supporting features like hot reloading and server rendering, and sharing Redux stores between components. React on Rails provides helpers, configuration, and documentation to facilitate building JavaScript-rich UIs with Rails.
How to Troubleshoot & Optimize Database Query Performance for Your Application - Dynatrace
How to Troubleshoot & Optimize Database Query Performance for Your Application
According to the recent DZone Performance Guide, “database performance problems are the most challenging to fix” with manual firefighting and lack of actionable insights being the top monitoring challenges. When these three issues converge on your application delivery chain it can mean a long time and a lot of effort to find and fix critical issues.
Is it really your database that's slow? Or is it the way your OR-Mapper or code accesses the database? A misconfigured connection pool on one of your servers? Or a missing table index, a full tablespace, or simply an I/O issue?
In this webinar we show you how Dynatrace AppMon extends traditional APM through its new Database Agent, providing a view that both Developers and DBAs can trust and use to identify:
• Problematic SQL Queries, unprepared statements or misconfigured connection pools
• Performance impacting database sessions, slow queries, waits and locks on your database instances
• Optimizations by looking at the Execution Plans of your application-specific SQL queries
• Query patterns like n+1 being implemented in your application (illustrated in the sketch after this section)
Eliminate wasted cycles by bridging the Dev-DBA collaboration gap with a consistent view based on app-focused database access metrics, database instance system and performance metrics, and execution plans for your critical SQL queries. Buck the trends that DZone is seeing! Get tips you can use right away.
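The n+1 query pattern called out above is easy to see in miniature. Here is a minimal illustration in Python using sqlite3; the customers/orders schema is invented for the example, not taken from the webinar:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
    INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO orders VALUES (1, 1, 9.99), (2, 1, 19.99), (3, 2, 4.50);
""")

# The n+1 pattern: one query for the list, then one query per row.
customers = conn.execute("SELECT id, name FROM customers").fetchall()
for cid, name in customers:
    totals = conn.execute(
        "SELECT total FROM orders WHERE customer_id = ?", (cid,)
    ).fetchall()  # executed once per customer: n extra round trips

# The fix: a single joined query replaces the n+1 round trips.
rows = conn.execute("""
    SELECT c.name, o.total
    FROM customers c JOIN orders o ON o.customer_id = c.id
""").fetchall()
```

Against an in-memory database the difference is invisible; over a network, the per-row round trips are exactly the kind of pattern a database-aware APM view surfaces.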
- The document discusses various technical considerations for running Exchange in a large environment with over 130,000 mailboxes spread across 132 Exchange servers in multiple data centers. It covers topics like TCP connections, TCP keepalive time, LDAP policies, .NET garbage collection, filtering, and using PowerShell for automation.
- It provides recommendations like setting the TCP keepalive time lower to prevent idle connections, modifying LDAP policies to prevent forced disconnections, monitoring .NET garbage collection, and using server-side filtering in PowerShell for better performance. The goal is to optimize the environment to handle the scale while preventing issues like port exhaustion or server overload.
Chef is a configuration management tool that turns infrastructure into code. It allows automating how systems are built, deployed, and managed. With Chef, infrastructure is versioned, tested, and repeatable like application code. The document provides an overview of key Chef concepts including the Chef server, nodes, organizations, environments, cookbooks, roles, and data bags. It also describes the basic Chef environment and components like the workstation, Chef client, and knife tool.
Alfresco DevCon 2018: SDK 3 Multi Module project using Nexus 3 for releases a... - Martin Bergljung
In this talk you will learn how to set up an Alfresco SDK 3.0 multi module project that could be used in a larger consulting project context. Extension modules will be standalone and versioned and released independently in the Nexus 3 Repository Manager. The talk also includes a look at defining a Parent POM and an Aggregator POM for your SDK 3 project solution.
Unbreakable SharePoint 2013 with SQL Server Always On Availability Groups (HA... - serge luca
SharePoint 2013 High Availability and Disaster Recovery with SQL Server Always On Availability Groups (HA and DR) - SharePoint Saturday Helsinki - Serge Luca (SharePoint MVP) and Isabelle Van Campenhoudt (SQL Server MVP); ShareQL, Belgium
Last year, Nurun and Walmart Canada launched the first responsively designed enterprise e-commerce website created for a large Canadian retailer. Built on a new platform with the Play Framework, Scala and Akka at its core, this foundation has proven itself in terms of flexibility, developer productivity, performance and scalability. We’ll share some of the insights we’ve gained in creating a best of breed solution that scales to Walmart’s needs—now and into the future.
This is the slide deck which was used for a talk 'Change Data Capture using Kafka' at Kafka Meetup at Linkedin (Bangalore) held on 11th June 2016.
The talk describes the need for CDC and why it's a good use case for Kafka.
Ingest and Stream Processing - What will you choose? - Pat Patterson
This document discusses ingestion and stream processing options. It provides an overview of common streaming patterns and components, including producers, Kafka, and various streaming engines and destinations. Spark Streaming is highlighted as being highly used for its high throughput, SQL support, and ease of transition from batch. The document also discusses other streaming engines like Storm, Flink, and Kafka Streams, noting their strengths and weaknesses. Finally, it introduces StreamSets Data Collector as a tool for building data pipelines.
Serverless design considerations for Cloud Native workloads - Tensult
We have built a news website with more than a billion views per month and we are sharing the learnings from that experience covering Serverless architectures, Design considerations, and Gotchas.
- An internal competition at Pariveda challenged teams to build a scalable e-commerce site on AWS and Azure to handle high volumes of search, add to cart, and order requests.
- The winning team's Azure solution was able to meet the performance SLAs at a lower cost than their AWS implementation by using .NET and optimizing for asynchronous writes and scaling out instances instead of up.
- Key lessons included scaling out instead of up for network-bound problems, capturing metrics to identify bottlenecks, reusing existing tools when possible, and automating deployments for reliability and fast retesting. While no single cloud won, either AWS or Azure could succeed with the right architecture.
This document discusses different solutions for efficiently identifying fields in Salesforce objects that have not been used for a long time. Solution 1 involves downloading all data via the API and processing it locally, which is inefficient. Solution 2 uses the API to query data in batches, but has high API usage and long duration. Solution 3 refines the query between batches to optimize records retrieved. Solution 4 executes the query as anonymous Apex on the server for faster processing of more records in one roundtrip, with optimized network usage and API calls. Code examples are provided to implement Solutions 3 and 4.
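The query refinement in Solution 3 amounts to keyset pagination: the last Id seen in one batch becomes the lower bound of the next query, so every round trip skips rows already processed. A rough sketch in Python, where run_query is a hypothetical stand-in for the real Salesforce query call and the SOQL string is illustrative only:

```python
def run_query(soql: str) -> list[dict]:
    """Hypothetical stand-in for the actual Salesforce query API call."""
    raise NotImplementedError

def fetch_in_refined_batches(batch_size: int = 2000) -> list[dict]:
    records: list[dict] = []
    last_id = ""
    while True:
        # Refine the query between batches: only fetch ids above the
        # last one seen, ordered so the next bound is the final row.
        batch = run_query(
            f"SELECT Id, LastActivityDate FROM Account "
            f"WHERE Id > '{last_id}' ORDER BY Id LIMIT {batch_size}"
        )
        if not batch:
            return records
        records.extend(batch)
        last_id = batch[-1]["Id"]
```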
Save 90% on Your Containerized Workloads - August 2017 AWS Online Tech Talks - Amazon Web Services
Learning Objectives:
- Learn how to run containers on a managed cluster of Amazon EC2 instances
- Learn how to then use EC2 Spot with your containers to significantly reduce the cost of running your applications, or grow your application's compute capacity and throughput for the same budget
- Understand the use cases best suited for running containerized workloads on EC2 Spot
Containers lend themselves to flexible and portable application deployments, and when used with the Amazon EC2 Container Service (or other schedulers), provide a simple and effective way to manage fleets of instances and containers at scale. Amazon EC2 Spot instances allow you to utilize spare Amazon EC2 computing capacity for a fraction of the cost. This webinar will help architects, engineers and developers understand when and how to run your containerized environment on EC2 Spot instances, saving up to 90% over On-Demand EC2.
Coordinating Micro-Services with Spring Cloud Contract - Omri Spector
This document discusses coordinating APIs between producer and consumer applications using Spring Cloud Contract (SCC). It begins with an example of how separate development of APIs by each team can lead to misaligned implementations. It then demonstrates how SCC allows teams to define API contracts, generate test stubs, and validate implementations without copying contracts or requiring early integration. The key benefits of SCC are that it establishes contract artifacts, enables consumer-driven API design, and provides two-way validation of implementations against a shared contract. Various SCC workflows and uses of contract repositories are also outlined.
This document discusses five methods for migrating workloads to the cloud: 1) Manual data migration, 2) Offline media transfer, 3) Internet transfer of virtual disk images, 4) Software agent-based data replication, and 5) Full server failover using software agents. It provides advantages and considerations for each method, and explains how to implement the fourth and fifth methods which use software agents to replicate data over time without impacting production systems.
1. Traditional database development faces issues like lack of source control, tedious deployment scripts, and manual processes.
2. DevOps principles like continuous integration, static code analysis, and automation can help address these issues. Database changes can be tracked in source control and deployed automatically.
3. There are different approaches to database deployment like state-based using DACPAC files or migration-based using incremental scripts stored in source control. Tools like SSDT, ReadyRoll, and Flyway support these approaches.
This document discusses challenges faced by organizations in managing their infrastructure and applications, and how Chef and related tools can help address those challenges. It outlines Chef's approach of treating infrastructure as code and using automation to enable continuous delivery of infrastructure and applications. This allows for faster innovation, better quality/compliance, and rapid time to value. Key aspects covered include infrastructure as code, automation of the development stack, enabling DevOps workflows, and integrating security and compliance into the software delivery pipeline.
The document discusses database change management and maintaining multiple database environments. It recommends having at least a production and staging database to test changes before deploying to production. Developers record database changes as SQL scripts committed to source control. An automated process then executes change scripts to update databases as needed, keeping environments in sync. This allows individual developers and testers to maintain their own databases while also updating shared databases.
Continuous Deployment of your Application @SpringOne - ciberkleid
Spring Cloud Pipelines is an opinionated framework that automates the creation of structured continuous deployment pipelines.
In this presentation we’ll go through the contents of the Spring Cloud Pipelines project. We’ll start a new project for which we’ll have a deployment pipeline set up in no time. We’ll deploy to Cloud Foundry and check if our application is backwards compatible so that we can roll it back on production.
Improving Batch-Process Testing Techniques with a Domain-Specific Language - Dr. Spock
The document proposes using a domain-specific language (DSL) to improve testing of batch processes. It discusses challenges in batch process testing and principles for good test automation. The document then describes two case studies where DSLs were used to simplify test setup and writing for batch systems at a bank. An internal DSL using Selenium simplified visual testing, while an external DSL with Spring Remoting provided faster and more precise batch execution control. Both approaches made test automation easier but required effort to prepare isolated test environments.
SpringOne Platform 2017
Marcin Grzejszczak, Pivotal; Cora Iberkleid, Pivotal
"“I have stopped counting how many times I’ve done this from scratch” - was one of the responses to the tweet about starting the project called Spring Cloud Pipelines. Every company sets up a pipeline to take code from your source control, through unit testing and integration testing, to production from scratch. Every company creates some sort of automation to deploy its applications to servers. Enough is enough - time to automate that and focus on delivering business value.
In this presentation we’ll go through the contents of the Spring Cloud Pipelines project. We’ll start a new project for which we’ll have a deployment pipeline set up in no time. We’ll deploy to Cloud Foundry and check if our application is backwards compatible so that we can roll it back on production."
The document discusses Aviran Mordo's presentation on Wix's journey towards continuous delivery. Some key points:
- Wix has transitioned from traditional waterfall development to continuous delivery, deploying changes around 60 times per day.
- This was enabled by adopting DevOps practices like test-driven development, feature toggles, A/B testing, automated deployments, and monitoring.
- Tools like App-Info, New Relic, and custom deployment tools were crucial for implementing continuous delivery at Wix's scale across multiple data centers and cloud providers.
- Transitioning required cultural changes, empowering developers, and embracing risk and failure to improve continuously. Wix now develops and replaces infrastructure
Continuous Deployment of your Application @JUGtoberfest - Marcin Grzejszczak
Spring Cloud Pipelines provides an opinionated template for continuous deployment pipelines that is based on best practices. It aims to solve the problem of having to create deployment pipelines from scratch for each new project. The pipelines support various automation servers like Concourse and Jenkins, and include steps for building, testing, and deploying applications. They promote practices like failing fast, standardized deployments, and testing rollbacks to enable techniques like zero-downtime deployments.
This document provides guidance on upgrading SQL Server instances from older versions to SQL Server 2012. It discusses allowable upgrade paths, pre-upgrade tasks like running SQL Best Practice Analyzer and SQL Upgrade Advisor to identify issues. Two main upgrade strategies are covered: in-place upgrade which replaces the existing instance, and side-by-side upgrade which installs SQL 2012 on a new instance. Testing the upgrade, estimating downtime, and developing rollback plans are also recommended steps in the upgrade process. Post-upgrade tasks include configuring logins, jobs, and other settings in the new SQL 2012 environment.
In our recent webinar hosted by Mike Current, a member of the Hyland Upgrade Council, and Mark Hamilton, DataBank's Infrastructure Engineer, we expanded on how upgrading OnBase offers the ability to not only gain enhancements and fixes, but also radically improve the security, stability and architecture of your entire OnBase environment.
In this presentation you will...
1. Learn the formula for upgrade success with actionable items to work through right away
2. Understand the team needed to get the job done and how DataBank can step in to help
3. Learn the importance of establishing a test environment, and more
You can also watch the full webinar here: http://info.databankimx.com/Upgrade-Webinar-RCD.html
Download the Hyland 3rd Party Compatibility Matrix from slide #25 here: http://info.databankimx.com/rs/167-SSD-475/images/Third%20Party%20Product%20Compatibility%20Matrix.pdf
Case Study: Credit Card Core System with Exalogic, Exadata, Oracle Cloud Mach... - Hirofumi Iwasaki
This document discusses Rakuten Card's migration of its core credit card processing systems from an aging mainframe architecture to a new architecture based on Oracle Exadata, Exalogic, and Oracle Cloud Machine. The migration involved converting terabytes of data from legacy formats to Oracle Database, reimplementing software from Japanese COBOL to Java EE, and deploying the new systems with no downtime or issues. The new standardized architecture provides improved performance, scalability, portability, and security compared to the old vendor-locked mainframe systems. Overall the migration was completed on schedule and the new systems have been successfully operating in production.
This document discusses challenges with online patching in Oracle E-Business Suite release 12.2.5. It begins with an overview of the 12.2 architecture and how it enables features like file system editioning and database edition-based redefinition to allow patching while the application is online. It then covers the online patching cycle in detail and discusses options for developing custom code to be either fully or runtime compliant. The document concludes with lessons learned around areas like database object grants, the DB_Domain parameter, executing autoconfig, and administering application nodes. It also discusses some common challenges seen with online patching and useful utilities for monitoring and diagnosing issues.
This document discusses job scheduling, SQL Database, and pricing on the Azure PaaS. It describes how to create scheduled web jobs using the Azure scheduler portal by setting the job type, schedule, and action. It also discusses monitoring web jobs, DTUs and eDTUs in SQL Database, and how to determine the number needed. The document provides an overview of migration from Oracle and SQL Server databases to Azure SQL Database using tools like SSMA and SqlPackage.exe.
The document discusses several high availability and disaster recovery options for SQL Server including failover clustering, database mirroring, log shipping, and replication. It provides examples of how different companies have implemented these technologies depending on their requirements. Key factors that influence architecture choices are downtime tolerance, deployment of technologies, and operational procedures. The document also covers SQL Server upgrade processes and how to move databases to a new datacenter while maintaining high availability.
DOES SFO 2016 - Avan Mathur - Planning for Huge Scale - Gene Kim
Installing one CI server or configuring a deployment pipeline for a specific application might be easy enough. However, as enterprises look to scale their DevOps adoption and optimize their software delivery practices across the organization (to support additional teams, product lines, application releases, processes and infrastructure) -- software delivery pipeline(s) need to scale to support enterprise workloads.
For some enterprises, this means having a pipeline that can withstand the velocity and throughput of thousands of product releases, supporting tens of thousands of developers and distributed teams, hundreds of thousands of infrastructure nodes, multitudes of inter-dependent application components, or millions of builds and test-cases.
This scale poses unique challenges and implications for your pipeline design. This talk covers best practices for analyzing and (re)designing your software delivery pipeline – regardless of your chosen tool-set or technologies. Obtain tips and tools for ensuring your pipelines and DevOps infrastructure have the right architecture and feature-set to support your software production as it scales, while also ensuring manageability, governance, security, and compliance.
Learn best practices for how to:
1) Plan for scale: how to project for the types of performance indicators/vectors you’d need to scale across.
2) How to design your pipeline and supporting infrastructure and operations (such as data retention, artifact retrieval, monitoring, etc.).
3) Design your pipeline workflows and processes to allow reusability and standardization across the organization, while also enabling flexibility to support the needs of specific teams/apps.
4) Design your pipeline in a way that enables fast rollout: easy onboarding of thousands of applications, across hundreds of teams
5) Incorporate security access controls, approval gates and compliance checks as part of your pipeline and have them standard across all releases
6) Ensure your architecture supports HA, DR and business continuity.
Deploying to and Configuring WebSphere Application Server with UrbanCode Deploy - IBM DevOps
Integrating middleware configuration into your application delivery lifecycle can be difficult and usually requires painful manual processes and constant surveillance.
But, there is hope! IBM UrbanCode Deploy has a new and improved middleware configuration plugin for WebSphere Application Server that provides automated updates to WebSphere as part of the application deployment process. Instead of wrestling with manual changes, join us in this session to learn how this plugin can help you update, manage and configure multiple WebSphere instances automatically, and automate application deployments on top of them every time.
Deploying to and Configuring WebSphere Application Server with UrbanCode Deploy - Claudia Ring
The document discusses how the WebSphere Application Server - Configure plug-in for IBM UrbanCode Deploy can be used to automate configuration management for WebSphere Application Server. It describes how the plug-in discovers WebSphere configuration, templates it, and applies configuration across environments. The plug-in supports simplifying configuration data, using tokens and snippets, live configuration comparison, and WebSphere migration. A demo is shown promoting dynamic cluster configuration from a development to quality assurance environment. Resources and prerequisites for using the plug-in are also provided.
Continuous Delivery of Cloud Applications: Blue/Green and Canary Deployments - Praveen Yalagandula
Continuous delivery is becoming increasingly critical, however, its implementation remains a hard problem many enterprises struggle with. Canary upgrades and Blue/Green deployment are the two commonly used patterns to implement continuous delivery. In Canary upgrades, a small portion of the production traffic is sent to the new version under test. In Blue/Green deployments, all the traffic is switched to the new version.
We will show how to fully automate the above steps to achieve true continuous delivery in K8s. We will show how to use analytics to express and automate application evaluation and ML-based traffic switching without any downtime.
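At its simplest, the canary pattern is weighted routing between the two versions, and blue/green is the special case where the weight jumps straight to 100%. A minimal sketch in Python; the backend URLs and the 5% weight are illustrative assumptions, not taken from the talk:

```python
import random

STABLE = "http://app-v1.internal"  # illustrative backend URLs
CANARY = "http://app-v2.internal"
CANARY_WEIGHT = 0.05  # send 5% of production traffic to the new version

def pick_backend() -> str:
    """Choose a backend for one request, canary-weighted."""
    return CANARY if random.random() < CANARY_WEIGHT else STABLE

# Blue/green is the degenerate case: once the new version passes its
# evaluation, flip CANARY_WEIGHT from 0.05 to 1.0 in a single step.
```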
Similar to Always On - Zero Downtime releases
Presentation at Gene Kim's DevOps Enterprise Summit 2021. Anders presents a journey from monolithic applications to microservices, from on-premise hosting to public cloud, and from 3 production deployments per year to 30+ per day.
DevOps journey at Scania - Visiting Migrationsverket - Anders Lundsgård
Scania underwent a DevOps journey that included:
1. Moving from manual deployments 12 times per year to continuous delivery with 30+ deploys per day.
2. Establishing autonomous teams of 3-8 people that fully own their microservices.
3. Shifting infrastructure to code to improve availability, durability, security and capacity.
Presentation about Cloud Security at Scania 2019, given at the yearly auto:CODE we.CONECT conference in Berlin. It covers which needs have driven the Cloud movement, and how to further improve agility with empowered feature teams that work securely and autonomously in the AWS Cloud.
The Cloud Journey in an Enterprise - IDC Multicloud - Stockholm November 20, ... - Anders Lundsgård
Public presentation about Scania's Cloud migration: why Scania goes for public cloud and how we organize and utilize cloud computing. New content is on slides 18-20, which show how we separate 'Migration projects' from 'Greenfield projects'.
The Cloud Journey in an Enterprise - CoDe-Conf - Copenhagen October 11, 2018 - Anders Lundsgård
Public presentation about Scania's Cloud migration: why Scania goes for public cloud and how we organize and utilize cloud computing. New content is (among other details from the latest learnings) an example of serverless code hosted on AWS.
The Cloud journey in an Enterprise - Delivery of Things World - Berlin April ... - Anders Lundsgård
This document summarizes the cloud journey of Scania Connected Services. It discusses how Scania moved to microservice architectures and autonomous teams to enable continuous delivery of 30+ deploys per day (up from 2-3 deploys per year previously). It also outlines how Scania organized its engineering teams between feature teams and delivery engineering teams to support over 1000 engineers. Finally, it discusses the rules of play and roles needed to operate securely in the cloud at scale, including centralizing some services while empowering feature teams.
The DevOps journey in an Enterprise - CoDe-Conf, Stockholm September 14, 2017 - Anders Lundsgård
The presentation about the DevOps transformation at Scania Connected Services: a journey that involves breaking down a big monolithic application into smaller services and moving from an on-prem hosting solution to the cloud.
BizDevOps Transformation, Metrics and Microservices at Scania, June 2017 in L... - Anders Lundsgård
Presentation made by Anders Lundsgård and Jonatan Mossberg from Scania Connected Services about our BizDevOps transformation. It was held at the TechXL8 conference in London, on the Cloud and DevOps World track.
The DevOps journey in an Enterprise - Continuous Lifecycle London 2016 - Anders Lundsgård
Presentation about the DevOps movement at Scania. Conference: Continuous Lifecycle London 2016-05-03. http://continuouslifecycle.london/
By Anders Lundsgård (@anderslundsgard) and Mattias Järnhäll (@mattiasjarnhall)
1) Scania uses DevOps practices like continuous integration, infrastructure as code, and microservices to improve delivery speed and allow autonomous teams despite a large codebase and many engineers.
2) Key learnings included finding end user feedback, working on the main branch, avoiding database backups for dev/test, not blaming code, recognizing the value of operations staff, and preventing a "hero culture" through practices like documentation and version control.
3) Scania's DevOps transformation involved moving from a monolithic architecture to microservices, treating infrastructure as code, and empowering feature teams to own delivery of their code through the entire pipeline.
An agile journey - Scania Connected Services at Meetup Go Agile - Stockholm (... - Anders Lundsgård
An agile journey from Scania with tips on working practices and pitfalls, both cultural and technical. Arranged by the Meetup group Go Agile! - Stockholm at the 3 office, 2015-08-12.
DevOps @ Scania - Trust and some code - NFI Testforum 2015 - Anders Lundsgård
Presentation about the DevOps movement at Scania by Anders Lundsgård and Mattias Järnhäll. The presentation was held in Stockholm on the 15th of April at NFI Testforum 2015.
Suzanne Lagerweij - Influence Without Power - Why Empathy is Your Best Friend... - Suzanne Lagerweij
This is a workshop about communication and collaboration. We will experience how we can analyze the reasons for resistance to change (exercise 1) and practice how to improve our conversation style and be more in control and effective in the way we communicate (exercise 2).
This session will use Dave Gray’s Empathy Mapping, Argyris’ Ladder of Inference and The Four Rs from Agile Conversations (Squirrel and Fredrick).
Abstract:
Let’s talk about powerful conversations! We all know how to lead a constructive conversation, right? Then why is it so difficult to have those conversations with people at work, especially those in powerful positions that show resistance to change?
Learning to control and direct conversations takes understanding and practice.
We can combine our innate empathy with our analytical skills to gain a deeper understanding of complex situations at work. Join this session to learn how to prepare for difficult conversations and how to improve our agile conversations in order to be more influential without power. We will use Dave Gray’s Empathy Mapping, Argyris’ Ladder of Inference and The Four Rs from Agile Conversations (Squirrel and Fredrick).
In the session you will experience how preparing and reflecting on your conversation can help you be more influential at work. You will learn how to communicate more effectively with the people needed to achieve positive change. You will leave with a self-revised version of a difficult conversation and a practical model to use when you get back to work.
Come learn more on how to become a real influencer!
Collapsing Narratives: Exploring Non-Linearity • a micro report by Rosie Wells - Rosie Wells
Insight: In a landscape where traditional narrative structures are giving way to fragmented and non-linear forms of storytelling, there lies immense potential for creativity and exploration.
'Collapsing Narratives: Exploring Non-Linearity' is a micro report from Rosie Wells.
Rosie Wells is an Arts & Cultural Strategist uniquely positioned at the intersection of grassroots and mainstream storytelling.
Their work is focused on developing meaningful and lasting connections that can drive social change.
Please download this presentation to enjoy the hyperlinks!
This presentation by Professor Alex Robson, Deputy Chair of Australia’s Productivity Commission, was made during the discussion “Competition and Regulation in Professions and Occupations” held at the 77th meeting of the OECD Working Party No. 2 on Competition and Regulation on 10 June 2024. More papers and presentations on the topic can be found at oe.cd/crps.
This presentation was uploaded with the author’s consent.
XP 2024 presentation: A New Look to Leadership - samililja
Presentation slides from the XP2024 conference, Bolzano, IT. The slides describe a new view of leadership and combine it with anthro-complexity (aka Cynefin).
Carrer goals.pptx and their importance in real life - artemacademy2
Career goals serve as a roadmap for individuals, guiding them toward achieving long-term professional aspirations and personal fulfillment. Establishing clear career goals enables professionals to focus their efforts on developing specific skills, gaining relevant experience, and making strategic decisions that align with their desired career trajectory. By setting both short-term and long-term objectives, individuals can systematically track their progress, make necessary adjustments, and stay motivated. Short-term goals often include acquiring new qualifications, mastering particular competencies, or securing a specific role, while long-term goals might encompass reaching executive positions, becoming industry experts, or launching entrepreneurial ventures.
Moreover, having well-defined career goals fosters a sense of purpose and direction, enhancing job satisfaction and overall productivity. It encourages continuous learning and adaptation, as professionals remain attuned to industry trends and evolving job market demands. Career goals also facilitate better time management and resource allocation, as individuals prioritize tasks and opportunities that advance their professional growth. In addition, articulating career goals can aid in networking and mentorship, as it allows individuals to communicate their aspirations clearly to potential mentors, colleagues, and employers, thereby opening doors to valuable guidance and support. Ultimately, career goals are integral to personal and professional development, driving individuals toward sustained success and fulfillment in their chosen fields.
Mastering the Concepts Tested in the Databricks Certified Data Engineer Assoc... - SkillCertProExams
• For a full set of 760+ questions, go to
https://skillcertpro.com/product/databricks-certified-data-engineer-associate-exam-questions/
• SkillCertPro offers detailed explanations to each question which helps to understand the concepts better.
• It is recommended to score above 85% in SkillCertPro exams before attempting a real exam.
• SkillCertPro updates exam questions every 2 weeks.
• You will get lifetime access and lifetime free updates
• SkillCertPro assures 100% pass guarantee in first attempt.
This presentation by OECD, OECD Secretariat, was made during the discussion “Competition and Regulation in Professions and Occupations” held at the 77th meeting of the OECD Working Party No. 2 on Competition and Regulation on 10 June 2024. More papers and presentations on the topic can be found at oe.cd/crps.
This presentation was uploaded with the author’s consent.
This presentation, created by Syed Faiz ul Hassan, explores the profound influence of media on public perception and behavior. It delves into the evolution of media from oral traditions to modern digital and social media platforms. Key topics include the role of media in information propagation, socialization, crisis awareness, globalization, and education. The presentation also examines media influence through agenda setting, propaganda, and manipulative techniques used by advertisers and marketers. Furthermore, it highlights the impact of surveillance enabled by media technologies on personal behavior and preferences. Through this comprehensive overview, the presentation aims to shed light on how media shapes collective consciousness and public opinion.
2. Always On
• What?
– New versions of system components can be deployed at any time without causing downtime for the end user
• Why?
– Enables fast turnaround time
– Production deployments can be made at any time, preferably in office hours when the contributors are present
• How?
– Automated deployment (a sketch follows below)
– Load-balanced deploys for web servers
– Database changes should be independent of the application version
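The automated-deployment sketch referenced above: the pattern tools like Envoy and Capistrano use is a releases directory plus a current symlink that is switched atomically, so the web server never serves a half-copied release. This is a minimal sketch in Python with an illustrative directory layout, not the deck's actual tooling:

```python
import os
import shutil
import time

APP_ROOT = "/var/www/app"                    # illustrative layout
RELEASES = os.path.join(APP_ROOT, "releases")
CURRENT = os.path.join(APP_ROOT, "current")  # what the web server serves

def _switch(release: str) -> None:
    # Build the new symlink aside, then rename it over "current":
    # rename(2) is atomic on POSIX, so requests see old or new, never half.
    tmp_link = CURRENT + ".tmp"
    if os.path.lexists(tmp_link):
        os.remove(tmp_link)
    os.symlink(release, tmp_link)
    os.replace(tmp_link, CURRENT)

def deploy(build_dir: str) -> str:
    release = os.path.join(RELEASES, time.strftime("%Y%m%d%H%M%S"))
    shutil.copytree(build_dir, release)
    _switch(release)
    return release  # keep this path so a rollback can _switch() back to it
```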
3. Prerequisites for Always On
• A desire to increase quality and long-term maintainability
• A desire to reduce the time between production deploys
• Versioning of code and database scripts
• Tooling for automated deployment
– Special tooling for relational databases is a must (see the sketch below)
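At its core, that special database tooling is a versioned migration runner in the style of Flyway: record which scripts have been applied in a tracking table, and run the remaining ones in order. A minimal sketch in Python using sqlite3; the schema_version table and the zero-padded V*.sql naming convention are assumptions of the sketch:

```python
import sqlite3
from pathlib import Path

def migrate(db_path: str, migrations_dir: str = "migrations") -> None:
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_version (script TEXT PRIMARY KEY)"
    )
    applied = {row[0] for row in conn.execute("SELECT script FROM schema_version")}

    # Scripts named V001__add_orders.sql, V002__..., so lexical sort = run order.
    for script in sorted(Path(migrations_dir).glob("V*.sql")):
        if script.name in applied:
            continue  # already run against this database
        conn.executescript(script.read_text())
        conn.execute("INSERT INTO schema_version VALUES (?)", (script.name,))
        conn.commit()
```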
7. Updating the web servers
Setup: end users → load balancer → Web1 / Web2 → database server. End users don't experience any downtime:
1. Bring down web server #1 from the load balancer
2. Install new version of application on server #1
3. Toggle the load balancer
4. Install new version of application on server #2
5. Activate load balancer
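The five steps are straightforward to automate once the load balancer exposes an API for taking backends in and out of rotation. A hedged orchestration sketch in Python: LoadBalancer and the deploy-app command are hypothetical stand-ins for your actual load balancer API and install step:

```python
import subprocess

class LoadBalancer:
    """Hypothetical wrapper around a load balancer's admin API."""
    def disable(self, server: str) -> None: ...  # drain, remove from rotation
    def enable(self, server: str) -> None: ...   # add back into rotation

def install(server: str, version: str) -> None:
    # Stand-in for the real install step (rsync, package manager, etc.).
    subprocess.run(["ssh", server, f"deploy-app {version}"], check=True)

def rolling_deploy(lb: LoadBalancer, servers: list[str], version: str) -> None:
    # Mirrors the slide: each server is drained, upgraded, and re-enabled
    # while the remaining servers keep serving end users.
    for server in servers:
        lb.disable(server)
        install(server, version)
        lb.enable(server)
```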
8. Updating the database
Setup: web server → database server, with separate read and write paths. End users don't experience any downtime:
1. Add new schema
2. Write to both schemas
3. Backfill historical data
4. Read from new schema
5. Remove writes to old schema
6. Remove old schema (probably after days or weeks)
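In application code, the dual-write phase (steps 2 to 5) typically sits behind a small data-access layer with two toggles. A sketch in Python with sqlite3, assuming a hypothetical move of a users.email column into a new user_contacts table; all names are invented for illustration:

```python
import sqlite3

WRITE_NEW = True   # flipped on at step 2: write to both schemas
READ_NEW = False   # flipped on at step 4, once the backfill is complete

conn = sqlite3.connect("app.db")

def save_email(user_id: int, email: str) -> None:
    conn.execute("UPDATE users SET email = ? WHERE id = ?", (email, user_id))
    if WRITE_NEW:
        # The new schema added in step 1, written in parallel with the old.
        conn.execute(
            "INSERT OR REPLACE INTO user_contacts (user_id, email) VALUES (?, ?)",
            (user_id, email),
        )
    conn.commit()

def load_email(user_id: int) -> str:
    table, key = ("user_contacts", "user_id") if READ_NEW else ("users", "id")
    row = conn.execute(
        f"SELECT email FROM {table} WHERE {key} = ?", (user_id,)
    ).fetchone()
    return row[0]
```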
9. What does this mean to me?
• It's easy!
– A database change should always be backward compatible with the old code
• Examples of what is not doable
– Changing a stored procedure so that it changes its interface (in and out parameters)
• Solution: Create a new version of the stored procedure (see the sketch below)
– Migrating data to a new structure that is dependent on new data access
• Solution: Have parallel writes/reads, as in the slide 8 sequence above
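The stored-procedure solution translates directly into versioned procedure names: deploy a _v2 procedure alongside the original so servers still running the old code keep working mid-deploy, and drop the original only after every caller has moved on. A hedged illustration with T-SQL-flavoured DDL wrapped in Python; the procedure, table, and parameter names are invented:

```python
# The new interface ships under a new name; GetOrders keeps its old contract.
CREATE_PROC_V2 = """
CREATE PROCEDURE GetOrders_v2
    @CustomerId INT,
    @Status     VARCHAR(20)   -- new parameter the original proc lacked
AS
BEGIN
    SELECT Id, Total FROM Orders
    WHERE CustomerId = @CustomerId AND Status = @Status;
END
"""

def upgrade(cursor) -> None:
    # Deploy order: create GetOrders_v2 first, release the application
    # code that calls it, and only days or weeks later drop GetOrders.
    cursor.execute(CREATE_PROC_V2)
```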