Spring, Spring Boot and Spring Cloud are tools that allow developers to speed up the creation of new business features. But a new feature is only useful if it's in production. Companies spend a lot of time and resources on building their own deployment pipelines using a plethora of technologies. Spring Cloud Pipelines provides an opinionated way for getting your features to production in a fast, reliable, reproducible and fully automated way.
A brief history of automation in Software Engineering | Georg Buske
In this talk we will discuss different levels of automation and what automation has in common with DevOps, product maturity and machine learning. We will show how automation enables fast feedback, and finally, looking at an example of an observable, continuously deployable system, we will show how automation can make your team more productive (while delivering more stable software and decreasing time to market).
Do you have a healthy CI/CD pipeline? Do releases simply flow through? CI, CD, PRs, Pipelines, Releases, Deployments and all that jazz.
Whether you're new to Continuous Delivery or a hardened traveller down that road, this session has something for you. We’ll start with an exploration of branching strategy (releaseflow.org) before walking through a healthy continuous delivery configuration.
We’ll watch a code change make its way through a pipeline to production and discuss how we can apply such practices to our everyday work.
Achieving Full Stack DevOps at Colonial Life | DevOps.com
In an ever more competitive marketplace, organizations have turned to Agile and DevOps practices to deliver software innovations to market more quickly and with high quality. Across industries, companies are making heavy investments in tools and process improvements around automated build, test, continuous integration and delivery, and release automation and orchestration. However, despite these investments, many organizations are still struggling to bring the necessary speed and quality to their software delivery. In many cases, this is because Agile and DevOps improvements have not been applied to the entire software stack and are often limited to application code delivery.
This webinar will explore the transformation that Colonial Life made in bringing DevOps to the entire software stack. Specifically, beyond automating and accelerating the validation and delivery of application code, this webinar will focus on the critical role that data and the database play in modern software delivery and the tools and processes that can bring the same automation to database code.
After this webinar, you will understand:
* What holds organizations back despite an Agile application development process
* The benefits of automating the validation and deployment of database changes
* A template for bringing DevOps to the entire software stack
Docker is a tool that didn't exist 2 years ago. Yet I am convinced that we will hear about it for a long time. We will almost certainly use containers to test and deploy our applications.
This talk is about the reasons to start using Docker in your daily work as a programmer, tester, sysadmin or IT professional.
Lightweight continuous delivery for small schools | Charles Fulton
In a continuous delivery environment web application updates are pushed out fast and frequently. Implementing that environment requires many different pieces: version control, automated testing, and automated deployment. It’s a lot to wrap your head around, but there are tangible benefits for small schools, including new opportunities to collaborate among institutions or with student developers.
In this presentation we will demonstrate how to build a lightweight continuous integration and delivery stack using free and open source tools: GitLab for version control, GitLab CI and Docker for testing, and Docker and Capistrano for deployment. We will walk through how each piece is separately important and how combining them creates a simple yet powerful deployment strategy. We will also describe concrete examples of how we are using these tools to share application development with students and each other.
Are you sick of Merge Hell? Do your feature branches go rogue? Do you spend more time fiddling with your Version Control System than doing actual development work? Then Trunk Based Development might be for you. Facebook does it. Google does it. Instead of messing with multiple branches, just use your master branch. Always. In addition to giving you an overview of how Trunk Based Development works, where it shines and where the pitfalls are, this talk will also cover the necessary techniques to succeed with it, such as Branch by Abstraction, Feature Toggles and backwards-compatible Database Migrations.
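One of the techniques named above, feature toggles, is what lets unfinished work live on trunk without being merged from a branch. As a minimal, hedged sketch (the toggle names and checkout functions here are invented for illustration, not from any specific talk):

```python
# Minimal feature-toggle sketch: unfinished work is committed to trunk
# but stays disabled in production until the flag is flipped.

TOGGLES = {
    "new_checkout_flow": False,  # still under development
    "fast_search": True,         # fully rolled out
}

def is_enabled(name: str) -> bool:
    """Return whether a feature toggle is on (unknown toggles are off)."""
    return TOGGLES.get(name, False)

def new_checkout(cart):
    # New implementation, merged to trunk but hidden behind the flag.
    return {"flow": "new", "items": list(cart)}

def legacy_checkout(cart):
    # Existing behavior, still what production users see.
    return {"flow": "legacy", "items": list(cart)}

def checkout(cart):
    if is_enabled("new_checkout_flow"):
        return new_checkout(cart)
    return legacy_checkout(cart)
```

Flipping the flag swaps implementations at runtime, with no branch merge involved; real systems typically load toggles from config or a toggle service rather than a module-level dict.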
Developing and releasing software in a team setting can be messy. With many developers working on the same code base, we need a workflow that allows a team to develop in parallel and allows for new functionality to be safely integrated into our environments and applications. In order to achieve such a workflow, leveraging a branching strategy is a must. There are, however, many to choose from. In this talk, we'll be discussing Trunk-Based Development, a branching strategy that we leverage extensively here at Nebulaworks.
Key Takeaways:
* Learn about the various benefits that we get from leveraging Trunk-Based Development.
* We will talk about general best practices that should be followed when developing new functionality.
* We will be discussing the release process (how and when to leverage release candidate branches and git tags).
* We will walk through the Trunk-Based Development process in a demo where we develop a simple Python app!
OPNFV CI and Challenges: How we solved them - if we solved them at all! | Fatih Degirmenci
OPNFV is a carrier-grade, integrated, open source platform to accelerate the introduction of new NFV products and services. It aims to build the platform by integrating components from different upstream projects such as OpenStack, OpenDaylight, Open vSwitch, KVM and so on. Apart from integrating different components, OPNFV aims to identify gaps in these components and fix them directly upstream. OPNFV sees CI/CD as a solution to its challenges, providing a foundation for developing, integrating and testing OPNFV faster and more efficiently throughout its release cycles.
QA Strategies for Testing Legacy Web Apps | Rainforest QA
Paul Miles, Software Development Manager at NPR, discusses QA strategies and tools his team uses to address the challenge of maintaining legacy products at NPR.
In this presentation, he covers:
- How to effectively strategize what types of tests to add to legacy software
- What cost-effective tools and testing strategies you can adopt in your organization
- Approaches for incorporating testing into your organization’s build pipelines
- How to foster a testing-centric culture in your organization
OSDC 2018 | Migrating to the cloud by Devdas Bhagat | NETWAYS
This is an experience report of a migration from self-hosted services to running in the cloud. While there have been plenty of business case studies showing the benefits of a cloud migration, there are very few reports on the IT side of the migration. This talk covers the migration of Spilgames (a small Dutch games publisher) from a self-hosted OpenStack and hardware-based infrastructure to Google Cloud, the challenges, and the tooling (and lack thereof). This migration is still a work in progress, and the talk will cover as much detail as possible.
QCon'17 talk: CI/CD at scale - lessons from LinkedIn and Mockito | Szczepan Faber
Learn how continuous deployment can improve your organization's productivity. Learn about the challenges, differences and similarities of CD at LinkedIn (a large-scale enterprise) and Mockito (an OSS software library with a huge user base).
More details: http://bit.do/qcon-cd-abstract
Google slides: http://bit.do/qcon-cd-gslides
Presentation abstract as in QCon session catalog:
LinkedIn and Mockito are two different use cases of implementing continuous delivery at scale. Yet the challenges, benefits and impact on the engineering culture are very similar.
In 2015, LinkedIn’s flagship application adopted a continuous delivery model we called 3x3: deploy to production 3 times a day, with a 3-hour maximum time from commit to production. At LinkedIn scale - hundreds of engineers building products for 500M users - implementing 3x3 was really hard. How did 3x3 change LinkedIn's engineering culture, and what have we learned along the way?
Mockito is a top 3 Java library with ~2M users. Even with that large user base, since 2014, the Mockito project has taken the surprising approach of publishing a new version of the library from every single pull request. This approach is challenging and innovative in the Java community, and Mockito leverages Shipkit to ship every change to production. Why did the Mockito team adopt continuous delivery in 2014, and what have we learned to date?
Join and learn from Szczepan Faber, the maker of Mockito framework since 2007, and the tech lead of LinkedIn Development Tools since 2015.
We discuss things to be taken into account when deciding on a policy for your CI/CD pipelines. This might include Git workflows, testing approaches, and shipping strategies.
A deep dive into Jenkins Continuous Integration: how you can enable your team to collaborate more, run tests, and configure the robots to do all the things for you. Also covering caveats around automation, testing on real devices, USB hub woes and more.
Video and slides synchronized, mp3 and slide download available at URL http://bit.ly/2Gtedjh.
Szczepan Faber talks about two different use cases of implementing continuous delivery at scale: LinkedIn and Mockito. Yet the challenges, benefits and impact on the engineering culture are very similar. Filmed at qconsf.com.
Szczepan Faber is a Tech Lead for LinkedIn Development Tools, responsible for developer productivity at LinkedIn. Previously, he was a core engineer of Gradle 1.x and 2.x and instructed numerous classes on build automation. He created the Mockito framework in 2007, with a currently estimated user base of 2M, and has been giving classes on automated testing ever since.
From naive to agile - software engineering approach | Stayman Hou
Explains how a software development process or a dev team can evolve from a naive approach (developing everything in prod) to an agile approach. You still deliver changes fast, but more reliably this time.
Using Crowdsourced Testing to Turbocharge your Development Team | Rainforest QA
Developer-owned QA testing is becoming more common as many organizations shift to leaner development processes and eschew traditional QA strategies.
This presentation discusses how crowdsourced testing can help teams offload repetitive testing work and streamline Agile testing processes. It also demonstrates how Rainforest Developer Experience (DevX) allows developers to increase productivity and minimize testing time with workflow-native crowdsourced testing.
Interested in seeing how Rainforest has helped companies save dev time and QA spend? Check out these success stories!
Guru: http://hubs.ly/H06lwC60
America's Test Kitchen: http://hubs.ly/H06lCX50
A deck from the first CDIsrael meetup, presenting our CD flow at Snyk, focusing on our testing framework. A day in the life of a developer - code, test, publish, deploy, monitor.
Have you ever wondered what the best way would be to test emails? Or how you would go about testing a messaging queue?
Making sure your components interact correctly with each other is both a tester's and a developer's concern. Join us to get a better understanding of what you should test and how, both manually and automated.
This session is the first ever in which we will have two units working together to give you a nuanced insight on all aspects of integration testing. We’ll start off exploring the world of integration testing, defining the terminology, and creating a general understanding of what phases and kinds of testing exist. Later on we’ll delve into integration test automation, ranging from database integration testing to Selenium UI testing and even as far as LDAP integration testing.
We have a wide variety of demos prepared where we will show you how easy it is to test various components of your infrastructure. Some examples:
- Database testing (JPA)
- Arquillian, exploring container testing, EJB testing and more
- Email testing
- SOAP testing using SoapUI
- LDAP testing
- JMS testing
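The demos above are Java-centric (JPA, Arquillian, SoapUI). As a language-neutral illustration of the email-testing idea, here is a hedged Python sketch: instead of sending through a live SMTP server, the test captures the message object the application would send and asserts on its headers and body (the `build_welcome_email` function is hypothetical application code, invented for this example):

```python
from email.message import EmailMessage

def build_welcome_email(user_email: str, username: str) -> EmailMessage:
    """Hypothetical app code: construct the welcome mail it would send."""
    msg = EmailMessage()
    msg["From"] = "noreply@example.com"
    msg["To"] = user_email
    msg["Subject"] = "Welcome!"
    msg.set_content(f"Hello {username}, thanks for signing up.")
    return msg

def test_welcome_email():
    # Integration-style check without a real SMTP server:
    # verify the headers and content of the message that would go out.
    msg = build_welcome_email("alice@example.com", "alice")
    assert msg["To"] == "alice@example.com"
    assert msg["Subject"] == "Welcome!"
    assert "thanks for signing up" in msg.get_content()
```

The same shape works for queues: capture what would be published and assert on it, reserving a real broker or mail server for a smaller set of end-to-end tests.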
Continuous Deployment of your Application @SpringOne | ciberkleid
Spring Cloud Pipelines is an opinionated framework that automates the creation of structured continuous deployment pipelines.
In this presentation we’ll go through the contents of the Spring Cloud Pipelines project. We’ll start a new project for which we’ll have a deployment pipeline set up in no time. We’ll deploy to Cloud Foundry and check if our application is backwards compatible so that we can roll it back on production.
DevOpsDays Tel Aviv DEC 2022 | Building A Cloud-Native Platform Brick by Bric... | Haggai Philip Zagury
The overwhelming growth of technologies in the Cloud Native foundation overtook our toolbox and completely changed (well, really enhanced) the Developer Experience.
In this talk, I will share my personal journey from the operator's to the developer's chair and the practices that helped me along the way as a Cloud-Native Dev ;)
IFG for SAP Integration, webinar on Automated Testing | Daniel Graversen
This is the presentation from the IFG webinar about Automating the testing of SAP Interfaces.
We show why testing is really important, and why we are now at a point where it makes a lot of sense to automate testing in a much shorter time frame.
SpringOne Platform 2017
Marcin Grzejszczak, Pivotal; Cora Iberkleid, Pivotal
"“I have stopped counting how many times I’ve done this from scratch” - was one of the responses to the tweet about starting the project called Spring Cloud Pipelines. Every company sets up a pipeline to take code from your source control, through unit testing and integration testing, to production from scratch. Every company creates some sort of automation to deploy its applications to servers. Enough is enough - time to automate that and focus on delivering business value.
In this presentation we’ll go through the contents of the Spring Cloud Pipelines project. We’ll start a new project for which we’ll have a deployment pipeline set up in no time. We’ll deploy to Cloud Foundry and check if our application is backwards compatible so that we can roll it back on production."
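The backwards-compatibility check mentioned above (can the new version be rolled back in production?) can be illustrated with a simple rule: a new release may add fields to a response, but must not drop any field the previous version's consumers rely on. A hedged sketch, with invented field names and unrelated to how Spring Cloud Pipelines actually implements the check:

```python
# Fields that v1 consumers of the API depend on (illustrative).
OLD_RESPONSE_FIELDS = {"id", "name", "price"}

def is_backwards_compatible(new_response: dict) -> bool:
    """A new version may add fields, but must not drop any v1 field;
    dropping one would break consumers if we roll back alongside them."""
    return OLD_RESPONSE_FIELDS.issubset(new_response.keys())

# Added a field: still compatible, safe to roll back.
v2 = {"id": 1, "name": "widget", "price": 9.99, "currency": "EUR"}

# Dropped "price": incompatible, rollback would break v1 consumers.
v3 = {"id": 1, "name": "widget", "currency": "EUR"}
```

In a pipeline, a stage like this would run the previous version's contract tests against the newly built artifact and fail the build before anything reaches production.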
Continuous Integration Testing Techniques to Improve Chef Cookbook Quality | Josiah Renaudin
Chef, Puppet, and other tools that implement “infrastructure as code” are great for configuration management and automated deployments, but it is difficult to test these infrastructure scripts before putting them into production. Since infrastructure as code is a relatively new technology, methodologies for its testing are not yet standardized. Glen Buckholz shares a way to solve the two major problems with testing Chef scripts—[1] capturing a start state similar to your target environment, and [2] rolling back to the starting state when your script fails. Development techniques are typically ad-hoc with most developers creating a personal method of testing in their own environment or circumstance. Glen shows how to use established continuous integration (CI) techniques to allow an automated platform to more quickly generate test results and automatically stage the code to the Chef server. By linking together established CI and testing techniques, we can hold Chef code development to the same mature standard as application programming.
[DPE Summit] How Improving the Testing Experience Goes Beyond Quality: A Deve... | Roberto Pérez Alcolea
It is well known that organizations connect software testing with software quality: making sure that the code does what it is supposed to do.
Unfortunately, many organizations believe that testing is a slow process that causes stagnation in a project. Organizations say that due to a slow testing process they are not able to meet set milestones, but it doesn't have to be this way.
The testing stage is also part of the developer experience, and making sure that engineers stay productive and keep delivering software not only fast but with confidence is crucial.
In this talk, we will explore a few approaches that we are taking to deliver a more consistent and delightful testing experience for JVM engineers at Netflix. The end goal: speed up engineers' feedback loop by letting them run tests locally as often as possible.
Weave GitOps 2022.09 Release: A Fast & Reliable Path to Production with Progr... | Weaveworks
Weave GitOps 2022.09 Features Launch Event
The latest release of Weave GitOps introduces new features enabling progressive delivery, policy as code, and accelerated application onboarding.
Weave GitOps is the leading full-stack GitOps platform to automate trusted application delivery and secure infrastructure operations on premise, in the cloud and at the edge. Trusted by Customers, including Deutsche Telekom and The Department of Defense, Platform and Application Teams, Weave GitOps unlocks the benefits of increased efficiency and compliance, while boosting deployment velocity and confidence.
Join us where we’ll do a live demo of Weave GitOps showcasing:
- Advanced Deployment Patterns—Progressive Delivery has never been easier
- Multi-tenancy and Application Portability—More collaboration and control
- Strengthened GitOps Security—If you can code it, you can secure it.
Author: Izzet Mustafaiev, Java Solutions Architect.
In today's fast-changing world we need to spend less and less time on routine activity, and more on creativity and on bringing something new to move forward.
These slides bring together some trending ideas and approaches for delivering software in a modern fashion: microservices architecture, containerization, automation, and continuous integration/deployment/delivery.
There is a demo application built with the depicted approach: https://github.com/webdizz/bootiful-apps.
Ninth episode of the MuleSoft Meetup in Milan. We talk with Paolo Petronzi about automation and CI/CD, and then with Luca Bonaldo, our MuleSoft Mentor in Italy, about best practices for batch processing.
Continuous Load Testing with CloudTest and Jenkins | SOASTA
Two key challenges to continuous load testing are provisioning a test system to handle the load and accessing load generators to drive the traffic.
In this webinar from SOASTA & CloudBees, you will learn how to:
- Build realistic automated web performance tests and run them in Jenkins
- Architect and launch a test environment that auto-provisions in the cloud
- Manage a load generation grid to drive load tests in a lights-out mode
- Establish a performance baseline in your daily Jenkins reports
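The last point, establishing a performance baseline, boils down to comparing a latency percentile from each run against a stored threshold and failing the build on regression. A hedged sketch of that gate (the sample numbers, tolerance, and baseline are made up, and real tools compute this for you):

```python
def percentile(samples, p):
    """Nearest-rank percentile of a list of latency samples (ms)."""
    ordered = sorted(samples)
    k = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[k]

def check_against_baseline(samples, baseline_p95_ms, tolerance=1.10):
    """Pass only if p95 latency stays within 10% of the stored baseline."""
    p95 = percentile(samples, 95)
    return p95 <= baseline_p95_ms * tolerance, p95

# Example run: 20 response times from a load test (ms), invented data.
run = [120, 130, 110, 140, 125, 135, 128, 122, 150, 115,
       118, 132, 127, 145, 121, 138, 129, 124, 142, 119]
ok, p95 = check_against_baseline(run, baseline_p95_ms=150)
```

In a daily Jenkins job this comparison would run after the load test and mark the build unstable when `ok` is false, which is what turns load testing from a one-off exercise into a continuous gate.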
“I have stopped counting how many times I’ve done this from scratch” - was one of the responses to the tweet about starting the project called Spring Cloud Pipelines. Every company sets up a pipeline to take code from your source control, through unit testing and integration testing, to production from scratch. Every company creates some sort of automation to deploy its applications to servers. Enough is enough - time to automate that and focus on delivering business value.
In this presentation we’ll go through the contents of the Spring Cloud Pipelines project. We’ll start a new project for which we’ll have a deployment pipeline set up in no time. We’ll deploy to Cloud Foundry (but we could also do it with Kubernetes) and check if our application is backwards compatible so that we can roll it back in production.
Modern highly available applications are nowadays built as a set of small, focused and discrete microservices. It's a trending concept that opens up, and solves, questions such as maintenance, scaling, live deployments, security and fault tolerance.
4. Lars Rosenquist
● Platform Architect at Pivotal
○ Field organisation
○ Helping customers become great software companies
● Developing software professionally since 1998
○ Financial, governmental, commercial
○ Java, Spring, Cloud Foundry
● Twitter: @larsrosenquist
● Email: lrosenquist@pivotal.io
About me
5. About you
How is CI/CD set up in your organization?
● Do you have a pipeline for every piece of software you build?
● How much effort (time, steps) does it cost to set up a new pipeline?
● Is it standardized, based on best practices? Or does every app have its own custom pipeline?
● Do your pipeline(s) make manual testing obsolete?
● Who’s doing CI? Who’s doing CD?
○ CI - Build and run tests
○ CD - The confidence to deploy to production in a fully automated way
● When to deploy to production? Office hours? Evenings/weekends?
7. Challenges with build server
● Setting up a build server can be hard
○ Jenkins, (XML) jobs or (Groovy) pipelines
○ Concourse, (yaml) pipelines
● Lots of work in automating jobs and tests
○ Different pipeline for each app
○ Maintenance headache
● Pipeline setup is a chore. Why not automate it?
CI/CD is hard (1/2)
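As a sketch of the Concourse approach mentioned above, a pipeline is declared as YAML. The repository URL, image and task details below are illustrative, not from the talk:

```yaml
# Hypothetical minimal Concourse pipeline: one git resource, one job that
# runs the Maven unit tests. Set it with: fly set-pipeline -p app -c pipeline.yml
resources:
- name: repo
  type: git
  source:
    uri: https://github.com/example/app.git
    branch: master
jobs:
- name: build-and-test
  plan:
  - get: repo
    trigger: true
  - task: unit-tests
    config:
      platform: linux
      image_resource:
        type: registry-image
        source: {repository: maven}
      inputs:
      - name: repo
      run:
        path: sh
        args: ["-c", "cd repo && mvn -q test"]
```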
8. Challenges on test environments
● Dependent services and applications
○ Available in test environment? Correct version?
● Dependent test data (sets)
○ Available in test environment?
○ Setup? Cleanup?
● Multiple teams in same environment
○ Wait until environment is ‘free’
● And then again for staging and production
CI/CD is hard (2/2)
9. We move to a microservices architecture
● Increases number of applications
● Increases number of dependencies
● Increases complexity
● Increases (wait) time
● Increases costs
And then?
13. We need
● Standardized and automated way of building our applications
● Standardized and automated way of deploying our applications
● Standardized and automated way of doing various kinds of tests
How do we fix this?
15. What does a good CI/CD pipeline look like?
A good pipeline
● Consistent, automated and repeatable steps
○ Build, test, deploy
○ Guarantee of success against a given set of tests
● Testing
○ Rollback testing
○ Backwards compatibility on DB schema changes
○ Use stubs for dependencies
● Zero downtime updates
○ Rolling, blue/green deployments
16. But that’s not all
Also use pipelines for
● Create new uSVC
○ App scaffolding (project structure, Spring/Boot/other libraries)
■ E.g. start.spring.io via curl
○ Build pipeline
○ Tracker/JIRA
○ Wiki/Confluence pages, etc.
● Managing your platform(s)
● Your use case here
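The scaffolding idea above (start.spring.io via curl) can be sketched as a small shell helper. The app name and dependency list are placeholders, and the command is only printed, not executed:

```shell
#!/bin/sh
# Sketch of project scaffolding via the start.spring.io API.
# scaffold_cmd builds (but does not run) the curl command that downloads
# a generated Maven project as a tarball. Names here are illustrative.
scaffold_cmd() {
  app="$1"   # e.g. payment-service
  deps="$2"  # comma-separated Spring Initializr dependency ids
  printf 'curl -s https://start.spring.io/starter.tgz -d artifactId=%s -d dependencies=%s -o %s.tgz\n' "$app" "$deps" "$app"
}
scaffold_cmd payment-service web,actuator
```

A pipeline job could run the emitted command and then push the unpacked project to a fresh git repository, which is where the build-pipeline and wiki steps from the slide would hook in.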
18. Provide a common way of running, configuring and
deploying applications
Solve for:
- Creation of a common deployment pipeline
- Propagation of good testing & deployment
practices
- Speed up the time required to deploy a feature to
production
The goal
19. What it is...
- An opinionated pipeline to continuously deploy applications
to either Cloud Foundry or Kubernetes
- A mechanism to encourage/enforce best practices such as
increased automated test coverage, versioned database
schemas, and contract-based APIs
- Templates and scripts for easily creating standardized
pipelines for Concourse or Jenkins, and integrating with a
git source code repo and a maven artifact repo
- Easily extensible / customizable
20. What it isn’t...
- Your typical Spring project
- Annotations to add to your code
- Libraries to add to your application
- Turnkey solution
- A silver bullet or golden hammer
21. How do I use it?
- https://github.com/spring-cloud/spring-cloud-pipelines
- Treat the project as a template for your pipeline
- Download the repository and use it to init a new
git project
- Modify your new project to suit your needs
- We have our opinions
- But you have yours
- More important: standardize/automate!
22.
23. Anatomy of an opinionated
deployment pipeline
So how does it work?
28. Build
● Local build environment
● No dependencies or other apps
● CICD tool worker
Environments
29. Test
● Remote deployment environment
● Not production-like
● Shared with multiple teams
● Dependencies or other apps may or may not be present, so use stubs
Environments
30. Stage
● Remote deployment environment
● (Tries to be) production-like
● Shared with multiple teams
● Dependencies or other apps present, so no stubs
○ Correct state
○ Correct version
● Need to wait for time slot
○ Until environment is in correct state
○ No one else is using it when you run a (load) test
Environments
31. Prod
● Where our customers go
● Definitely production-like
○ Probably the only one that is. ;)
● Do you verify production?
● Collect metrics?
Environments
33. Automated testing
● Who does this in a structured and automated way?
● Who knows the testing pyramid?
● If not structured and automated, how then?
Tests
35. Unit tests
● Executed during build phase
● No dependencies
● Fast
● A lot of them
● Do you write before or after writing your code?
Tests
36. Integration tests
● Executed during build phase
● Integrated, but stubbed database/HTTP endpoints
● More expensive, so fewer of them than unit tests
Tests
37. Smoke tests
● Executed on a deployed application
● Only primary, most critical features
● Executed against an application surrounded by stubs
Tests
38. End to end tests
● Executed on multiple deployed applications
● Only primary, critical usage scenarios
● Executed against multiple applications and all of their dependencies
Tests
39. Performance tests
● Can your application(s) handle a certain load within parameters
○ Throughput
○ Response time
● Run against test (stubbed)
● or stage (end to end)
Tests
40. So about end to end testing
End to end testing
● End to end testing is supposed to be prod-like
● But in reality, most E2E testing is not (dependencies, versions, data, etc.)
● False sense of security/trust (doesn’t really protect against issues)
● Very complex to maintain
● In the end doesn’t work
● Is it just to check off responsibility/blame?
43. Getting rid of end to end testing
Benefits
● Replace with contract testing and stubs (e.g. Spring Cloud Contract)
● No need to deploy additional applications
● Stubs are the same as used in integration tests
● Stubs tested against the application that produces them (use Spring Cloud Contract)
● Tests will be a lot faster -> faster pipeline -> faster to production
● No waiting for other teams or preparation of testing environments
● Less resource (VM) usage
44. Getting rid of end to end testing
Drawbacks
● Your end to end test will be production
● First time applications will communicate will be production
● Do you really trust your tests enough?
● Can you detect in production if something goes wrong?
45. Use contract testing to replace end to end
Replace with contract testing and stubs, only if
● Your microservice architecture is mature
○ Well defined bounded contexts
● Your contract testing is mature
○ Proper scenarios in place
○ Builds trust
● You have KPI/monitoring in place on PROD
○ Prometheus/Graphite/Grafana/Seyren/etc.
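As an illustration of the contract-testing approach above, a Spring Cloud Contract definition can be written in YAML; the endpoint, fields and values below are hypothetical, not from the talk:

```yaml
# Hypothetical contract: consumers get a stub for PUT /fraudcheck, and
# Spring Cloud Contract verifies the producer actually honors it.
description: should mark a high loan amount as possible fraud
request:
  method: PUT
  url: /fraudcheck
  headers:
    Content-Type: application/json
  body:
    clientId: "1234567890"
    loanAmount: 99999
response:
  status: 200
  headers:
    Content-Type: application/json
  body:
    fraudCheckStatus: "FRAUD"
```

The same file drives both sides: the producer build fails if the real endpoint diverges, and consumers run their integration tests against a stub generated from it.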
47. Pipeline steps
Basic layout
● Test application in isolation
● Test backwards compatibility so the application can be rolled back in case of failures
● Test deployed version of the application
● User acceptance/performance test on deployed environment (or use…;))
● Deploy to production
48. Build and upload
● Build and publish to artifact repository
● Generate stubs for REST interfaces
● Build and publish Docker image (if Kubernetes)
Steps
49. API compatibility check
● Make sure API changes don’t break current production version (if available)
● Test against API contracts of -1 version (if available)
Steps
50. Deploy to test platform
● Cloud Foundry space or Kubernetes namespace
● Application deployed in isolation with necessary stubs
● Test database migration (if applicable)
○ Flyway, Liquibase
Steps
51. Smoke test new version on test platform
● Execute against stubbed application in isolation
● Best practice
○ A few critical use cases
■ Is primary functionality working?
○ Keep relatively small/limited
■ Want to keep it quick
Steps
52. Deploy rollback on test platform
● Deploy current production version
● Maintain migrated database (to check backwards compatibility)
Steps
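The rollback-with-migrated-database step above relies on backwards-compatible (expand/contract) migrations. A hypothetical pair of Flyway scripts illustrates the idea; table and column names are made up:

```sql
-- V2__add_email_column.sql
-- Expand: additive and nullable, so the previous app version still runs
-- against the migrated schema and can be rolled back to safely.
ALTER TABLE customer ADD COLUMN email VARCHAR(255);

-- V3__drop_legacy_contact.sql (shipped in a later release)
-- Contract: remove the old column only once no deployable version reads it.
ALTER TABLE customer DROP COLUMN legacy_contact;
```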
54. Deploy to staging platform for end to end test
● Deploy application alongside other microservices
● No stubs
● Test database migration
● Or use contract testing and stubs!
Steps
55. End to end test new version on staging platform
● Execute against non-stubbed application
● Best practice
○ Hard to maintain, so
■ Only a few critical use cases
■ Keep relatively small/limited
○ Use contract testing and stubs instead
Steps
56. Deploy to production platform
● Tag in git
● Blue-green deployment to new version
● Great success!
● Or is it?
Steps
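The blue-green step above can be sketched as a dry run that prints the Cloud Foundry CLI commands it would issue. The app, host and domain names are placeholders, and the `-current` suffix naming convention is an assumption, not the project's actual scheme:

```shell
#!/bin/sh
# Dry-run sketch of a blue-green deployment on Cloud Foundry: push the new
# version without a route, map the public route to it, then unmap the old one.
# Nothing is executed; the plan is only printed.
blue_green_plan() {
  app="$1"; version="$2"; host="$3"; domain="$4"
  new="${app}-${version}"
  echo "cf push ${new} --no-route"
  echo "cf map-route ${new} ${domain} --hostname ${host}"
  echo "cf unmap-route ${app}-current ${domain} --hostname ${host}"
}
blue_green_plan myapp 1.1.0 myapp apps.example.com
```

Keeping the previous version around (unmapped, not deleted) is what makes the "Or is it?" moment survivable: a rollback is just mapping the route back.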
Be clear about what portion of the pipe dreams/vision/circle of code this is addressing - maybe add a slide?
Spring and opinions
Spring projects provide opinions through annotations, the framework…
Spring Boot, JPA, Integration… all provide opinions through annotations
SCP provides the opinions through pipelines
SCP provides template pipelines in Jenkins and Concourse
Concept based on one or two years’ worth of real enterprise work
Maturity: in active development
Blog post by Marcin: Oct 2015
Future: will be partially replaced by Spinnaker, pipelining will be done by Spinnaker; jobs will remain
Spinnaker will call Jenkins jobs to run tests, etc. Spinnaker will orchestrate jobs
Marcin modified the surefire plugin configuration in Maven to verify that the smoke or end-to-end tests live under the smoke or e2e package. Meaning that when running the smoke profile, Maven runs tests in the smoke package only, etc. In Gradle, there is an inclusion pattern that accomplishes the same effect
Could integrate with JUnit annotations…??
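The surefire convention described in the notes above can be sketched as a Maven profile; the profile id and package pattern are illustrative:

```xml
<!-- Hypothetical profile: `mvn verify -Psmoke` picks up only the tests that
     live under a smoke package, mirroring the package-per-test-type idea. -->
<profile>
  <id>smoke</id>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-plugin</artifactId>
        <configuration>
          <includes>
            <include>**/smoke/**/*Tests.java</include>
          </includes>
        </configuration>
      </plugin>
    </plugins>
  </build>
</profile>
```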
SCP in concrete
Concourse background: Pivotal-built CI/CD tool, born out of the need for CI/CD for Cloud Foundry - for managing many services and microservices
Can use Jenkins as well
In Concourse, there is a jenkins.yml file that is the manifest - you can think of it as the blueprint - of how the pipeline should look
You can modify it - provides a set of opinions but you can change it