This document provides an overview of Jenkins Pipeline, including what it is, how to get started, best practices, and advanced features. Pipeline allows configuring Jenkins jobs using code instead of the UI. Jobs are defined with Groovy scripts that can leverage features like branches, libraries, variables, and more. The document covers pipeline syntax, working with source control, error handling, testing scripts, and customizing build reporting. It emphasizes code review, simplicity, and avoiding inefficient practices that could impact the master node.
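The overview above describes defining Jenkins jobs as Groovy code in a Jenkinsfile rather than through the UI. A minimal declarative Jenkinsfile sketch might look like the following; the stage names and build commands are illustrative placeholders, not taken from this document:

```groovy
// Minimal declarative Jenkinsfile sketch. The Gradle commands are
// hypothetical examples of what a project's build and test steps could be.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh './gradlew assemble'   // hypothetical build command
            }
        }
        stage('Test') {
            steps {
                sh './gradlew test'
            }
            post {
                always {
                    // publish JUnit-format test reports so failures show in the UI
                    junit 'build/test-results/**/*.xml'
                }
            }
        }
    }
}
```

Because this file lives in source control alongside the application, it can be code-reviewed like any other change, which is one of the practices the document emphasizes.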
Slides from my presentation to the Sydney Jenkins Meetup on Declarative Pipeline. Video of the presentation available at https://www.youtube.com/watch?v=3R5xh4oeDg0&feature=youtu.be
** DevOps Training: https://www.edureka.co/devops **
This Edureka tutorial on "Jenkins pipeline Tutorial" will help you understand the basic concepts of a Jenkins pipeline. Below are the topics covered in the tutorial:
1. The need for Continuous Delivery
2. What is Continuous Delivery?
3. Features before the Jenkins Pipeline
4. What is a Jenkins Pipeline?
5. What is a Jenkinsfile?
6. Pipeline Concepts
7. Hands-On
Check our complete DevOps playlist here (includes all the videos mentioned in the video): http://goo.gl/O2vo13
Pipeline as code - new feature in Jenkins 2 (Michal Ziarnik)
What is pipeline as code in continuous delivery/continuous deployment environment.
How to set up Multibranch pipeline to fully benefit from pipeline features.
Jenkins master-node concept in Kubernetes cluster.
TYPO3 Camp Stuttgart 2015 - Continuous Delivery with Open Source Tools (Michael Lihs)
In this talk I describe punkt.de's continuous integration pipeline and how it came about. I motivate why implementing such a pipeline is worthwhile and which tools we used for it. Alongside descriptions of Git, Jenkins, Chef, Vagrant, Behat and Surf, the talk also covers integrating the individual tools into a deployment chain.
Conducted workshop on Jenkins : Pipeline As Code by Shreyas Chaudhari and Inder Pal Singh at ThoughtWorks, Pune on 16th March, 2019 in VodQA Pune 2019.
An Open-Source Chef Cookbook CI/CD Implementation Using Jenkins Pipelines (Steffen Gebert)
This document discusses implementing continuous integration and continuous delivery (CI/CD) for Chef cookbooks using Jenkins pipelines. It introduces Jenkins pipelines and how they can be used to test, version, and publish Chef cookbooks. Key steps include linting, dependency resolution, test-kitchen testing, version bumping, and uploading to the Chef Server. The jenkins-chefci cookbook automates setting up Jenkins with the necessary tools to run pipelines defined in a shared Groovy library for cookbook CI/CD.
Jenkins is a Continuous Integration (CI) server written in Java. It provides continuous integration services for software development and can be run from the command line or deployed in a web application server. Jenkins Pipeline is a suite of plugins that supports implementing and integrating continuous delivery pipelines into Jenkins.
This document discusses how Docker and Jenkins can be used together for continuous integration. It describes how Docker provides repeatable environments that reduce unintended variations, and how Jenkins pipelines allow for frequent integration that detects issues earlier. Specific examples show how Docker images can standardize environments for building, testing, and reproducing bugs, while multi-branch Jenkins pipelines enable testing proposed changes before merging.
This document summarizes a Jenkins pipeline for testing and deploying Chef cookbooks. The pipeline is configured to automatically scan a GitHub organization for any repositories containing a Jenkinsfile. It will then create and manage multibranch pipeline jobs for each repository and branch. The pipelines leverage a shared Jenkins global library which contains pipeline logic to test and deploy the Chef cookbooks. This allows for standardized and reusable pipeline logic across all Chef cookbook repositories.
Jenkins days workshop pipelines - Eric Long (ericlongtx)
This document provides an overview of a Jenkins Days workshop on building Jenkins pipelines. The workshop goals are to install Jenkins Enterprise, create a Jenkins pipeline, and explore additional capabilities. Hands-on exercises will guide attendees on installing Jenkins Enterprise using Docker, creating their first pipeline that includes checking code out of source control and stashing files, using input steps and checkpoints, managing tools, developing pipeline as code, and more advanced pipeline steps. The document encourages attendees to get involved with the Jenkins and CloudBees communities online and on Twitter.
In this talk, I will discuss our experiences at Mollie with setting up the Jenkins Continuous Integration server for all our PHP projects. The talk will be aimed at developers with little or no experience with CI.
This document discusses Jenkins 2.0 and its new "pipeline as code" feature. Pipeline as code allows automation of continuous delivery pipelines by describing the stages in a textual pipeline script stored in version control. This enables pipelines to be more flexible, reusable and survive Jenkins restarts. The document provides examples of pipeline scripts for common tasks like building, testing, archiving artifacts and running in parallel. It also discusses how pipelines can be made more reusable by defining shared library methods.
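The tasks listed above (building, testing, archiving artifacts, running in parallel) can be sketched in the scripted pipeline style that Jenkins 2 introduced. The `make` targets below are hypothetical placeholders for a project's real commands:

```groovy
// Scripted-pipeline sketch of the common tasks mentioned above.
// All shell commands are illustrative, not from the original document.
node {
    stage('Checkout') {
        checkout scm                       // fetch the branch that triggered the build
    }
    stage('Build') {
        sh 'make build'
    }
    stage('Test') {
        // run independent test suites concurrently
        parallel(
            unit:        { sh 'make test-unit' },
            integration: { sh 'make test-integration' }
        )
    }
    stage('Archive') {
        archiveArtifacts artifacts: 'dist/**', fingerprint: true
    }
}
```

Because the script is stored in version control, the same pipeline definition survives Jenkins restarts and can be reused across branches.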
Building an Extensible, Resumable DSL on Top of Apache Groovy (jgcloudbees)
Presented at: https://apacheconeu2016.sched.org/event/8ULR
In 2014, a few Jenkins hackers set out to implement a new way of defining continuous delivery pipelines in Jenkins. Dissatisfied with chaining jobs together, configured in the web UI, the effort started with Apache Groovy as the foundation and grew from there. Today the result of that effort, named Jenkins Pipeline, supports a rich DSL with "steps" provided by Jenkins plugins, built-in auto-generated documentation, and execution resumability, which allows Pipelines to continue executing while the master is offline.
In this talk we'll take a peek behind the scenes of Jenkins Pipeline. Touring the various constraints we started with, whether imposed by Jenkins or Groovy, and discussing which features of Groovy were brought to bear during the implementation. If you're embedding, extending or are simply interested in the internals of Groovy this talk should have plenty of food for thought.
In this presentation, I covered how I've migrated Android project from old Jenkins (Freestyle jobs, 1st Jenkins instance) to new Jenkins (Multibranch pipeline, 2nd Jenkins instance).
Also, it covers a Jenkins Shared Library usage and integration tests on pipeline code.
At the end, I'm covering pros/cons of final result and what difficulties I faced during migration.
This document discusses Jenkins Pipelines, which allow defining continuous integration and delivery (CI/CD) pipelines as code. Key points:
- Pipelines are defined using a Groovy domain-specific language (DSL) for stages, steps, and environment configuration.
- This provides configuration as code that is version controlled and reusable across projects.
- Jenkins plugins support running builds and tests in parallel across Docker containers.
- Notifications can be sent to services like Slack on failure.
- The Blue Ocean UI in Jenkins focuses on visualization of pipeline runs.
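The bullet points above can be combined into a single declarative sketch: a Docker-based agent, environment configuration, and a Slack notification on failure. The image name and channel are hypothetical, and the `slackSend` step requires the Slack Notification plugin to be installed:

```groovy
// Illustrative declarative pipeline tying together the points above.
pipeline {
    agent {
        docker { image 'node:20-alpine' }   // run all steps inside a container
    }
    environment {
        CI = 'true'                          // exposed to every sh step
    }
    stages {
        stage('Test') {
            steps {
                sh 'npm ci && npm test'      // hypothetical project commands
            }
        }
    }
    post {
        failure {
            // provided by the Slack Notification plugin
            slackSend channel: '#builds', message: "Build failed: ${env.BUILD_URL}"
        }
    }
}
```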
Jenkins is a continuous integration server that detects code changes, runs automated builds and tests, and can deploy code. It supports defining build pipelines as code to make them version controlled and scalable. Popular plugins allow Jenkins pipelines to integrate with tools for testing, reporting, notifications, and deployments. Pipelines can define stages, run steps in parallel, and leverage existing Jenkins functionality.
This document discusses continuous delivery and the new features of Jenkins 2, including pipeline as code. Jenkins 2 introduces the concept of pipeline as a new type that allows defining build pipelines explicitly as code in Jenkinsfiles checked into source control. This enables pipelines to be versioned, more modular through shared libraries, and resumed if interrupted. The document provides examples of creating pipelines with Jenkinsfiles that define stages and steps for builds, tests and deployments.
The document discusses 7 habits of highly effective Jenkins users. It recommends using long-term support releases, breaking up large jobs into smaller modular jobs, and defining Jenkins tasks programmatically using scripts and pipelines rather than manually configuring through the UI. Key plugins are also discussed like Pipeline, Job DSL, and others that help automate Jenkins configuration and integration.
BSD/macOS Sed and GNU Sed both support additional features beyond POSIX Sed, such as extended regular expressions with -E/-r, but using only POSIX features ensures portability. GNU Sed defaults allow some non-POSIX behaviors, so --posix is recommended for strict POSIX compliance. The most portable Sed scripts use only basic regular expressions and features defined in the POSIX specification.
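The portability point above can be made concrete with a small example. Both commands below rewrite `foo123` to `num`; the first uses only POSIX basic regular expressions, while the second relies on `-E` (extended regular expressions), which GNU and BSD sed both support but which historically was not required by POSIX:

```shell
# POSIX BRE: portable everywhere (note [0-9][0-9]* instead of +)
echo 'foo123' | sed 's/foo[0-9][0-9]*/num/'      # prints: num

# ERE via -E: works on GNU and BSD sed, but is an extension
echo 'foo123' | sed -E 's/foo[0-9]+/num/'        # prints: num
```

Sticking to the first form is the safest choice for scripts that must run on unknown systems.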
GraphQL is a query language for APIs that was created by Facebook in 2012. It allows clients to define the structure of the data required, and exactly the data they need from the server. This prevents over- and under-fetching of data. GraphQL has grown in popularity with the release of tools like Apollo and GraphQL code generation. GraphQL can be used to build APIs that integrate with existing backend systems and databases, with libraries like Express GraphQL and GraphQL Yoga making it simple to create GraphQL servers.
Continuous Delivery with Jenkins declarative pipeline XPDays-2018-12-08 (Борис Зора)
When you start your journey with µServices, you should be confident in your delivery lifecycle. In case of a mistake, you should be able to navigate to the appropriate tag in VCS to reproduce the bug with a test, and go through the pipeline to production within 3 hours with high confidence in quality.
We will discuss a set of tools that could help you achieve this within 3 months on your project. It does not include system decoupling suggestions. At the same time, if you decide to break down a monolith, it is better to do so with dev and DevOps best practices.
The document outlines Julien Pivotto's presentation on building pipelines at scale using Jenkins and Puppet. It discusses how Puppet can be used to define Jenkins job configurations and pipelines for applications and infrastructure to allow easy deployment of new pipelines. It also covers alternative approaches using Jenkins plugins to define pipelines through Groovy scripts to reduce complexity compared to Puppet management.
Let's go HTTPS-only! - More Than Buying a CertificateSteffen Gebert
1) The document discusses various ways to secure a website and its users, including getting an SSL certificate, setting up HTTPS, and ensuring strong security practices with headers and configurations.
2) It describes Let's Encrypt as a free and easy way to get SSL certificates with automated renewal, and quality-testing services like Qualys SSL Labs to check SSL configuration.
3) Additional security best practices discussed include HTTP headers like HSTS, CSP, and HPKP to prevent vulnerabilities and protect against MITM attacks. Regular testing and integrating checks into development processes are recommended.
Gr8conf EU 2013 Speed up your development: GroovyServ and Grails Improx Plugin (Yasuharu Nakano)
The document discusses how to speed up development of Groovy and Grails applications using GroovyServ and the Improx plugin. GroovyServ launches Groovy faster by pre-invoking it as a server. The Improx plugin allows running Grails tests and commands from an IDE by connecting to an interactive Grails shell via TCP/IP. This avoids restarting the JVM for each test and provides autocompletion. Demonstrations show how these tools improve development workflow by making Groovy and test execution much faster.
When the daily duty of delivering working software is done with ease, and also secondary tasks like working tests are done well, there’s always more work waiting at the ‘nice to have’ priority level. Things like code style, valid Composer files, updated dependencies or various other meta data that isn’t at all mission critical, but always provides a certain level of annoyance if not maintained properly.
I’ll show you our way of dealing with such a situation when maintaining about 140 distinct repositories of PHP software. At this scale automation is the only choice, and we not only do it for testing, but for these maintenance tasks as well. We have created a single point of attack from where we can influence all our repositories and their code, and we do it in a way that is not as intrusive as pre-commit or pre-receive hooks: by using pull requests.
This document discusses several Java build tools: Ant, Maven, and Gradle. It provides information on how to set up and use Ant and Maven, including setting environment variables, available tasks in Ant, Maven's build lifecycle and repositories, and differences between Ant and Maven such as Maven's conventions and declarative nature. Gradle is also briefly mentioned as another build tool.
This is the presentation I gave in Java.IL at June 19th 2016.
It's targeted for people who have some experience with Maven and want to learn some of the inner workings and how to be more effective with it.
Continuous Integration with Open Source Tools - PHPUgFfm 2014-11-20 (Michael Lihs)
Presentation about open source tools to set up continuous integration and continuous deployment. Covers Git, Gitlab, Chef, Vagrant, Jenkins, Gatling, Dashing, TYPO3 Surf and some other tools. Shows some best practices for testing with Behat and Functional Testing.
Atlanta Jenkins Area Meetup October 22nd 2015 (Kurt Madel)
Jenkins Workflow is a game-changing way to write automation jobs with Jenkins. Workflows can support anything from simple, one-step hello-world jobs to the most complex parallel pipelines. Best of all, they support manual/automated intervention (e.g. approvals), and workflows survive Jenkins master restarts. Combining Jenkins Workflow with Docker can seriously reduce friction in your DevOps efforts. Come learn how.
This document provides an overview of Jenkins, an open-source tool for continuous integration and continuous delivery. It discusses key Jenkins concepts like architecture, pipelines, and shared libraries. Jenkins allows integrating multiple stages of development through continuous integration and delivery. It has a master-slave architecture and supports defining automated build processes through pipelines implemented as code.
Codifying the Build and Release Process with a Jenkins Pipeline Shared Library (Alvin Huang)
These are my slides from my Jenkins World 2017 talk, detailing a war story of migrating 150-200 Freestyle jobs for build and release into ~10-line Jenkinsfiles that heavily leverage Jenkins Pipeline Shared Libraries (https://jenkins.io/doc/book/pipeline/shared-libraries/)
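The "~10-line Jenkinsfile" pattern described above is possible because the pipeline logic moves into a shared library. A sketch of what such a thin Jenkinsfile could look like, with a hypothetical library name `build-lib` and step `standardBuild` (not taken from the talk):

```groovy
// Jenkinsfile (entire file) - the repository-specific part shrinks to a
// few lines once the pipeline logic lives in a shared library.
// 'build-lib' and 'standardBuild' are hypothetical names.
@Library('build-lib') _

standardBuild(
    language: 'java',
    deployTarget: 'staging'
)
```

The `standardBuild` step would be defined once, as `vars/standardBuild.groovy` in the library repository, so a fix to the build logic propagates to every project on its next run.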
eZ Publish 5: from zero to automated deployment (and no regressions!) in one ... (Gaetano Giunta)
1. The workshop will cover Docker, managing environments, database changes, and automated deployments for eZPublish websites.
2. A Docker stack is proposed that includes containers for Apache, MySQL, Solr, PHP, and other tools to replicate a production environment for development. Configuration and code are mounted as volumes.
3. Managing environments involves storing settings in the code repository and using symlinks to deploy different configurations. Database changes should be managed via migration scripts rather than connecting directly to a shared database.
4. Automating deployments is important and involves tasks like updating code, the database, caches and reindexing content. The same deployment script should be used for development and production. Testing websites is also recommended.
Presentation that I gave to the Groovy Users of Minnesota group on May 11, 2010. Using Cucumber, cuke4duke, and Groovy together for acceptance test-driven development.
This document discusses how Vagrant was implemented at Wingify Engineering to help establish a DevOps culture. Previously, Wingify had issues with environments not matching production, difficult setups, and isolation between devs and ops. Vagrant provided developers similar environments to production using the same OS and configuration management. It simplified management through tools like "vagrant up", "vagrant ssh", and "vagrant destroy". This improved testing and reduced issues in production. It also improved collaboration by allowing ops to test configurations and devs to better understand infrastructure. Overall Vagrant helped establish closer alignment between devs and ops through shared responsibility of infrastructure.
Building Efficient Parallel Testing Platforms with Docker (Laura Frank Tacho)
We often use containers to maintain parity across development, testing, and production environments, but we can also use containerization to significantly reduce time needed for testing by spinning up multiple instances of fully isolated testing environments and executing tests in parallel. This strategy also helps you maximize the utilization of infrastructure resources. The enhanced toolset provided by Docker makes this process simple and unobtrusive, and you’ll see how Docker Engine, Registry, and Compose can work together to make your tests fast.
Efficient Parallel Testing with Docker by Laura Frank (Docker, Inc.)
Fast and efficient software testing is easy with Docker. We often use containers to maintain parity across development, testing, and production environments, but we can also use containerization to significantly reduce the time needed for testing by spinning up multiple instances of fully isolated testing environments and executing tests in parallel. This strategy also helps you maximize the utilization of infrastructure resources. The enhanced toolset provided by Docker makes this process simple and unobtrusive, and you’ll see how Docker Engine, Registry, Machine, and Compose can work together to make your tests fast.
CT Software Developers Meetup: Using Docker and Vagrant Within A GitHub Pull ...E. Camden Fisher
This was a talk given at the second CT Software Developers Meetup (http://www.meetup.com/CT-Software-Developers-Meetup/). It covers how NorthPage is using Docker and Vagrant with a home grown Preview tool to increase the efficiency of the GitHub Pull Request Workflow.
This document discusses using version control and continuous integration to deploy code. It recommends developing code locally, using distributed version control like Git, and deploying to a testing environment before production. The continuous integration workflow involves multiple developers sharing code through a central version control repository. Each code push is verified by automated builds to avoid integration issues and catch problems early. The document provides an example deployment script for Codeship that checks out code, installs dependencies, builds assets, commits changes, and pushes to the production remote. It also discusses testing deployments using Assertible and lessons learned around caching packages, installing dependencies, and using forceful Git pushes for deployment.
Production Ready WordPress - WC Utrecht 2017Edmund Turbin
This document discusses deploying code using version control and continuous integration. It recommends developing code locally, using distributed version control like Git, and deploying to a testing environment before production. The solution involves setting up continuous integration where multiple developers can share code and have automated builds and testing on each push. A workflow is proposed where code is developed locally, stored in a central version control repository, built through continuous integration pipelines, and deployed to servers. Tools like Composer, NPM, Gulp and Assertible are demonstrated in an example deployment script for automated testing and deployment.
Continuous Integration is a software development practice where developers regularly merge their work into a central repository. This triggers an automated build and test of the code. If the build fails, developers are immediately notified. There are typically five stages of adopting Continuous Integration - from just committing code occasionally to triggering automated builds and tests with every commit and deploying to production. Jenkins is an open source tool that supports Continuous Integration. It allows developers to easily set up CI/CD pipelines with features like automated testing, code quality reporting, deployment to staging environments and more.
Continuous Integration is a software development practice where developers regularly merge their work into a central repository. When code is committed, an automated build is triggered to check that new code does not break the existing code base. There are typically five stages of adopting Continuous Integration: 1) a few manual commits and builds, 2) nightly automated builds, 3) builds triggered with every commit, 4) code quality metrics added to builds, 5) automated deployment to staging environments. Continuous Integration helps catch bugs early in the development process and ensures code quality.
Aim of this presentation is not to make you masters in Java 8 Concurrency, but to help you guide towards that goal. Sometimes it helps just to know that there is some API that might be suitable for a particular situation. Make use of the pointers given to search more and learn more on those topics. Refer to books, Java API Documentation, Blogs etc. to learn more. Examples and demos for all cases discussed will be added to my blog www.javajee.com.
The document discusses various best practices for writing JavaScript code, including placing scripts at the bottom of pages, using meaningful variable and function names, avoiding global variables, and optimizing loops to minimize DOM access. It also covers JavaScript language features like namespaces, data types, and self-executing functions. Finally, it mentions tools for linting, minifying, and bundling code as well as popular integrated development environments for JavaScript development.
The document discusses continuous integration and continuous delivery workflows. It provides definitions and best practices for continuous integration, continuous delivery, and continuous deployment. It also summarizes the key concepts and components of ThoughtWorks Go including pipelines, stages, jobs, tasks, agents, and the value stream map for visualization. Pros and cons are listed for ThoughtWorks Go compared to Jenkins, noting Go's strengths in end-to-end visualization, fan-in support, and security configuration while Jenkins has more plugins and flexibility.
Fast and efficient software testing is easy with Docker. We often
use containers to maintain parity across development, testing, and production environments, but we can also use containerization to significantly reduce time needed for testing by spinning up multiple instances of fully isolated testing environments and executing tests in parallel. This strategy also helps you maximize the utilization of infrastructure resources. The enhanced toolset provided by Docker makes this process simple and unobtrusive, and you’ll see how Docker Engine, Registry, and Compose can work together to make your tests fast.
August Webinar - Water Cooler Talks: A Look into a Developer's WorkbenchHoward Greenberg
The webinar covered tools and techniques used by several developers in their work with Domino and XPages. Howard Greenberg discussed using SourceTree and BitBucket for version control of XPages applications. Jesse Gallagher presented his toolchain including Eclipse, Maven, and Jenkins for plugin and application development. Serdar Basegmez outlined his development environment including configuring Eclipse to develop OSGi plugins for the Domino runtime. All emphasized the importance of source control, testing, and documentation in their processes.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/temporal-event-neural-networks-a-more-efficient-alternative-to-the-transformer-a-presentation-from-brainchip/
Chris Jones, Director of Product Management at BrainChip , presents the “Temporal Event Neural Networks: A More Efficient Alternative to the Transformer” tutorial at the May 2024 Embedded Vision Summit.
The expansion of AI services necessitates enhanced computational capabilities on edge devices. Temporal Event Neural Networks (TENNs), developed by BrainChip, represent a novel and highly efficient state-space network. TENNs demonstrate exceptional proficiency in handling multi-dimensional streaming data, facilitating advancements in object detection, action recognition, speech enhancement and language model/sequence generation. Through the utilization of polynomial-based continuous convolutions, TENNs streamline models, expedite training processes and significantly diminish memory requirements, achieving notable reductions of up to 50x in parameters and 5,000x in energy consumption compared to prevailing methodologies like transformers.
Integration with BrainChip’s Akida neuromorphic hardware IP further enhances TENNs’ capabilities, enabling the realization of highly capable, portable and passively cooled edge devices. This presentation delves into the technical innovations underlying TENNs, presents real-world benchmarks, and elucidates how this cutting-edge approach is positioned to revolutionize edge AI across diverse applications.
Must Know Postgres Extension for DBA and Developer during MigrationMydbops
Mydbops Opensource Database Meetup 16
Topic: Must-Know PostgreSQL Extensions for Developers and DBAs During Migration
Speaker: Deepak Mahto, Founder of DataCloudGaze Consulting
Date & Time: 8th June | 10 AM - 1 PM IST
Venue: Bangalore International Centre, Bangalore
Abstract: Discover how PostgreSQL extensions can be your secret weapon! This talk explores how key extensions enhance database capabilities and streamline the migration process for users moving from other relational databases like Oracle.
Key Takeaways:
* Learn about crucial extensions like oracle_fdw, pgtt, and pg_audit that ease migration complexities.
* Gain valuable strategies for implementing these extensions in PostgreSQL to achieve license freedom.
* Discover how these key extensions can empower both developers and DBAs during the migration process.
* Don't miss this chance to gain practical knowledge from an industry expert and stay updated on the latest open-source database trends.
Mydbops Managed Services specializes in taking the pain out of database management while optimizing performance. Since 2015, we have been providing top-notch support and assistance for the top three open-source databases: MySQL, MongoDB, and PostgreSQL.
Our team offers a wide range of services, including assistance, support, consulting, 24/7 operations, and expertise in all relevant technologies. We help organizations improve their database's performance, scalability, efficiency, and availability.
Contact us: info@mydbops.com
Visit: https://www.mydbops.com/
Follow us on LinkedIn: https://in.linkedin.com/company/mydbops
For more details and updates, please follow up the below links.
Meetup Page : https://www.meetup.com/mydbops-databa...
Twitter: https://twitter.com/mydbopsofficial
Blogs: https://www.mydbops.com/blog/
Facebook(Meta): https://www.facebook.com/mydbops/
inQuba Webinar Mastering Customer Journey Management with Dr Graham HillLizaNolte
HERE IS YOUR WEBINAR CONTENT! 'Mastering Customer Journey Management with Dr. Graham Hill'. We hope you find the webinar recording both insightful and enjoyable.
In this webinar, we explored essential aspects of Customer Journey Management and personalization. Here’s a summary of the key insights and topics discussed:
Key Takeaways:
Understanding the Customer Journey: Dr. Hill emphasized the importance of mapping and understanding the complete customer journey to identify touchpoints and opportunities for improvement.
Personalization Strategies: We discussed how to leverage data and insights to create personalized experiences that resonate with customers.
Technology Integration: Insights were shared on how inQuba’s advanced technology can streamline customer interactions and drive operational efficiency.
Dandelion Hashtable: beyond billion requests per second on a commodity serverAntonios Katsarakis
This slide deck presents DLHT, a concurrent in-memory hashtable. Despite efforts to optimize hashtables, that go as far as sacrificing core functionality, state-of-the-art designs still incur multiple memory accesses per request and block request processing in three cases. First, most hashtables block while waiting for data to be retrieved from memory. Second, open-addressing designs, which represent the current state-of-the-art, either cannot free index slots on deletes or must block all requests to do so. Third, index resizes block every request until all objects are copied to the new index. Defying folklore wisdom, DLHT forgoes open-addressing and adopts a fully-featured and memory-aware closed-addressing design based on bounded cache-line-chaining. This design offers lock-free index operations and deletes that free slots instantly, (2) completes most requests with a single memory access, (3) utilizes software prefetching to hide memory latencies, and (4) employs a novel non-blocking and parallel resizing. In a commodity server and a memory-resident workload, DLHT surpasses 1.6B requests per second and provides 3.5x (12x) the throughput of the state-of-the-art closed-addressing (open-addressing) resizable hashtable on Gets (Deletes).
zkStudyClub - LatticeFold: A Lattice-based Folding Scheme and its Application...Alex Pruden
Folding is a recent technique for building efficient recursive SNARKs. Several elegant folding protocols have been proposed, such as Nova, Supernova, Hypernova, Protostar, and others. However, all of them rely on an additively homomorphic commitment scheme based on discrete log, and are therefore not post-quantum secure. In this work we present LatticeFold, the first lattice-based folding protocol based on the Module SIS problem. This folding protocol naturally leads to an efficient recursive lattice-based SNARK and an efficient PCD scheme. LatticeFold supports folding low-degree relations, such as R1CS, as well as high-degree relations, such as CCS. The key challenge is to construct a secure folding protocol that works with the Ajtai commitment scheme. The difficulty, is ensuring that extracted witnesses are low norm through many rounds of folding. We present a novel technique using the sumcheck protocol to ensure that extracted witnesses are always low norm no matter how many rounds of folding are used. Our evaluation of the final proof system suggests that it is as performant as Hypernova, while providing post-quantum security.
Paper Link: https://eprint.iacr.org/2024/257
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdfChart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
High performance Serverless Java on AWS- GoTo Amsterdam 2024Vadym Kazulkin
Java is for many years one of the most popular programming languages, but it used to have hard times in the Serverless community. Java is known for its high cold start times and high memory footprint, comparing to other programming languages like Node.js and Python. In this talk I'll look at the general best practices and techniques we can use to decrease memory consumption, cold start times for Java Serverless development on AWS including GraalVM (Native Image) and AWS own offering SnapStart based on Firecracker microVM snapshot and restore and CRaC (Coordinated Restore at Checkpoint) runtime hooks. I'll also provide a lot of benchmarking on Lambda functions trying out various deployment package sizes, Lambda memory settings, Java compilation options and HTTP (a)synchronous clients and measure their impact on cold and warm start times.
"NATO Hackathon Winner: AI-Powered Drug Search", Taras KlobaFwdays
This is a session that details how PostgreSQL's features and Azure AI Services can be effectively used to significantly enhance the search functionality in any application.
In this session, we'll share insights on how we used PostgreSQL to facilitate precise searches across multiple fields in our mobile application. The techniques include using LIKE and ILIKE operators and integrating a trigram-based search to handle potential misspellings, thereby increasing the search accuracy.
We'll also discuss how the azure_ai extension on PostgreSQL databases in Azure and Azure AI Services were utilized to create vectors from user input, a feature beneficial when users wish to find specific items based on text prompts. While our application's case study involves a drug search, the techniques and principles shared in this session can be adapted to improve search functionality in a wide range of applications. Join us to learn how PostgreSQL and Azure AI can be harnessed to enhance your application's search capability.
Northern Engraving | Nameplate Manufacturing Process - 2024Northern Engraving
Manufacturing custom quality metal nameplates and badges involves several standard operations. Processes include sheet prep, lithography, screening, coating, punch press and inspection. All decoration is completed in the flat sheet with adhesive and tooling operations following. The possibilities for creating unique durable nameplates are endless. How will you create your brand identity? We can help!
Northern Engraving | Modern Metal Trim, Nameplates and Appliance PanelsNorthern Engraving
What began over 115 years ago as a supplier of precision gauges to the automotive industry has evolved into being an industry leader in the manufacture of product branding, automotive cockpit trim and decorative appliance trim. Value-added services include in-house Design, Engineering, Program Management, Test Lab and Tool Shops.
How information systems are built or acquired puts information, which is what they should be about, in a secondary place. Our language adapted accordingly, and we no longer talk about information systems but applications. Applications evolved in a way to break data into diverse fragments, tightly coupled with applications and expensive to integrate. The result is technical debt, which is re-paid by taking even bigger "loans", resulting in an ever-increasing technical debt. Software engineering and procurement practices work in sync with market forces to maintain this trend. This talk demonstrates how natural this situation is. The question is: can something be done to reverse the trend?
In our second session, we shall learn all about the main features and fundamentals of UiPath Studio that enable us to use the building blocks for any automation project.
📕 Detailed agenda:
Variables and Datatypes
Workflow Layouts
Arguments
Control Flows and Loops
Conditional Statements
💻 Extra training through UiPath Academy:
Variables, Constants, and Arguments in Studio
Control Flow in Studio
Freshworks Rethinks NoSQL for Rapid Scaling & Cost-EfficiencyScyllaDB
Freshworks creates AI-boosted business software that helps employees work more efficiently and effectively. Managing data across multiple RDBMS and NoSQL databases was already a challenge at their current scale. To prepare for 10X growth, they knew it was time to rethink their database strategy. Learn how they architected a solution that would simplify scaling while keeping costs under control.
Main news related to the CCS TSI 2023 (2023/1695)Jakub Marek
An English 🇬🇧 translation of a presentation to the speech I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on Communications and signalling systems on Railways, which was held in Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). Attended by around 500 participants and 200 on-line followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
2. Agenda
• What is Jenkins Pipeline?
• Getting Started
• Using SCM with Pipeline
• Pipeline Examples & Best Practices
• Shared Libraries
• Vars & Steps & Helper Functions
• Developing and Testing Pipeline scripts
• Advanced Reporting
• Q & A
3. What is Jenkins Pipeline?
• A new way to design/create/write Jenkins Jobs
• No more Job Config UI
• All Code
• Replaces Job Config UI Build steps with code
• Uses the Groovy CPS library to compile the code
• Allows program state to be saved to disk as the script runs, so jobs survive restarts
• Groovy is the syntax of the pipeline script
• Not efficient, due to the nature of the CPS compiler
• Doesn’t support every fancy syntax feature of the Groovy language
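As a rough sketch of the idea, a freestyle job's build steps expressed as a scripted pipeline might look like this (the agent label, commands and paths are placeholders, not from the original deck):

```groovy
// Minimal scripted pipeline: the build steps of a freestyle job, as code.
node('linux') {                      // run on an agent labeled 'linux'
    stage('Checkout') {
        checkout scm                 // check out the SCM configured for the job
    }
    stage('Build') {
        sh 'make build'              // placeholder build command
    }
    stage('Archive') {
        archiveArtifacts artifacts: 'build/**', fingerprint: true
    }
}
```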
4. Example of a FreeStyle Job:
• Copies Artifacts from another Job
• Executes some Shell Command
• Injects some properties from a shell step into the build
• Triggers another job
• Conditional statements as well
• Maybe a checkout & build
These build steps are how you design/configure your Jenkins jobs
5. Creating a Pipeline Job
The job config just no longer has all those build steps you can add – only a place for your code
6. Source Control for Pipeline
• Pipeline scripts can be inline
• Best practice is to put them in source control, which also forces code review…
• Can live in source control alongside the product the job is for
• Can be in a separate source control project just for pipeline code
• This is the option we took: our shared library, external resource files and the pipeline scripts themselves are all in one git repo
• Loading from source control forces the pipeline scripts to execute in the Groovy Sandbox
7. What is a groovy sandbox?
• Referenced all over the Jenkins interface
• Outside the sandbox, a Groovy pipeline script runs in the master Jenkins JVM space without restriction, and has access to everything
• Recommended to always use the Groovy sandbox
• Rogue scripts can and will take down your Jenkins master
• Some methods may require Admin Approval
• Admin Approval also tells you if a function an engineer wants to use is dangerous, and you can deny it
• Admin Approvals can be a pain in the current design… but worth it!
9. Examples of how to write Pipeline scripts
• Copy Artifacts
• Execute Shell
• Conditional
• Archive Artifacts
• File I/O
• Test Results
• Trigger Other Jobs
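The list above could be sketched in one script as follows; job names and paths are placeholders, and the `copyArtifacts` and `junit` steps assume the Copy Artifact and JUnit plugins are installed:

```groovy
node {
    // Copy artifacts from another job (Copy Artifact plugin)
    copyArtifacts projectName: 'upstream-job', selector: lastSuccessful()

    // Execute a shell command and capture its output
    def rev = sh(script: 'git rev-parse --short HEAD', returnStdout: true).trim()

    // Conditional logic is plain Groovy
    if (rev) {
        echo "Building revision ${rev}"
    }

    // Archive artifacts and publish test results (JUnit plugin)
    archiveArtifacts artifacts: '**/target/*.jar'
    junit '**/target/surefire-reports/*.xml'

    // Trigger another job
    build job: 'downstream-job', wait: false
}
```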
10. Basics of Pipeline Scripts:
• Node blocks
• Determine where to run this part of the job: which Jenkins agent/node to take an executor on
• Stage blocks
• Organize segments of your pipeline script
• Allow for easy code readability, and feed the job's dashboard while the job is running
• You can easily see which part of the pipeline script is being executed
• Checkpoint statements
• Almost like “saving your work”: identifies a good place in the script to save, so that if you restart the pipeline at a later time you can resume from this point
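A minimal sketch of these three building blocks together (note that `checkpoint` is a CloudBees-proprietary step and must sit outside any `node` block; labels and commands are placeholders):

```groovy
node('build-agent') {                 // node block: pick where this runs
    stage('Compile') {                // stage block: visible on the dashboard
        sh './gradlew assemble'
    }
}

// Checkpoint: a restart point (CloudBees feature, outside the node block)
checkpoint 'after-compile'

node('test-agent') {
    stage('Test') {
        sh './gradlew test'
    }
}
```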
13. Error Handling in Pipeline Scripts
• error(“Job Failed”)
• Stops your pipeline script from executing as if it was aborted
• Try/Catch blocks
• Can handle errors and report them yourself
• Allow you to handle someone “aborting the build”, then gracefully clean up and report
• Notify people
• Email and Slack are all supported in pipeline
• Groovy Postbuild plugin
• Gives you access to manager.buildFailure() – marks the build red
• Gives you access to manager.buildUnstable() – marks the build yellow
• Allows you to control the result
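The error-handling points above can be sketched like this (the email address is a placeholder, the `mail` step assumes the core Mailer plugin, and catching the abort exception class may require script approval):

```groovy
node {
    try {
        stage('Build') {
            sh 'make'                         // may fail
        }
    } catch (org.jenkinsci.plugins.workflow.steps.FlowInterruptedException e) {
        // Thrown when someone aborts the build: clean up gracefully
        echo 'Build was aborted, cleaning up...'
        throw e
    } catch (err) {
        currentBuild.result = 'FAILURE'       // control the result yourself
        mail to: 'team@example.com',          // notify people
             subject: "Build failed: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
             body: "See ${env.BUILD_URL}"
        error("Job Failed: ${err}")           // stop the pipeline, as if aborted
    }
}
```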
14. Snippet Generator
For plugins installed:
• Helps you understand and learn the syntax to invoke them in your pipeline
• You provide how you would invoke them via the Job Config UI (the old way to define Jenkins jobs) and it shows you the pipeline equivalent!
15. Global Variables available to your scripts
These are recognized in your scripts and have methods/variables available to you
• env
• env.NODE_NAME
• env.WORKSPACE
• env.BUILD_URL
• env.JOB_URL
• params
• The build’s parameters, if it’s a parameterized build
• currentBuild
• Next slide shows examples
• scm
• manager
• Groovy PostBuild Plugin available to Pipeline Scripts
• Shared Library Vars
• Ones you create, will discuss later when we look at Shared Libraries
• Other Plugins you may have installed that contribute here
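A short illustration of the `env`, `params` and `currentBuild` globals (the `DEPLOY_ENV` parameter name is hypothetical):

```groovy
node {
    // env: environment of the running build
    echo "Running on ${env.NODE_NAME} in ${env.WORKSPACE}"
    echo "Build page: ${env.BUILD_URL}"

    // params: build parameters, if this is a parameterized job
    if (params.DEPLOY_ENV) {
        echo "Deploying to ${params.DEPLOY_ENV}"
    }

    // currentBuild: inspect and control the running build
    currentBuild.description = "Target: ${params.DEPLOY_ENV ?: 'default'}"
    echo "Result so far: ${currentBuild.currentResult}"
}
```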
17. One pipeline script for multiple jobs?
• Found I had a lot of jobs that were similar
• As I developed the pipeline scripts, there was a lot in common
• Too much in common for shared libraries to really apply, as it would be an entire function of the job
• Prepared environment = perfect solution
18. How to leverage the Prepared Environment?
• The items in the prepared environment come in via the env variable
• Can reference them and create what you need
• Then all your jobs can reference the same pipeline script
• Makes it easy to maintain and modify
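One way this could look: a single script shared by many jobs, driven by properties each job injects into its environment (the `COMPONENT_NAME` and `COMPONENT_REPO` variable names are hypothetical):

```groovy
// One pipeline script referenced by many jobs; each job's prepared
// environment distinguishes what it builds.
node {
    def component = env.COMPONENT_NAME   // hypothetical injected property
    def gitUrl    = env.COMPONENT_REPO   // hypothetical injected property

    stage("Build ${component}") {
        git url: gitUrl
        sh "./build.sh ${component}"     // placeholder build command
    }
}
```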
19. Shared Libraries
• Used to store helper functions that may be common or used across pipeline
scripts
• Used to store global variables used throughout your pipeline projects
• Used to create your own “steps” for pipeline scripts to leverage
• Helper functions go in the /src folder and include a package declaration, e.g. package com.ibm.shared;
• Steps and variables for direct pipeline access go in the /vars folder
• Import and allow shared library access in a pipeline script with the @Library annotation at the top
• Access these shared libraries by instantiating the class
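A sketch of what that looks like in a pipeline script; the library name `my-shared-library` and the no-argument constructor are assumptions, while `com.ibm.shared` and `JenkinsJobRef` come from the deck:

```groovy
// Load the shared library configured under this name in Jenkins
@Library('my-shared-library') _

// Import a helper class from the library's /src folder
import com.ibm.shared.JenkinsJobRef

node {
    def ref = new JenkinsJobRef()   // instantiate the class to use it
    // ... helper methods and /vars steps are now available
}
```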
20. /vars/jobinfo.groovy
• Note the lowercase name of the groovy file
• In pipeline scripts you can then access these things as: jobinfo.PIPELINE_CONFIG.jobURL
• JenkinsJobRef is a class in our shared library with jobURL and remotePath instance variables that allow access to these items
• We wanted one place to reference these hardcoded strings in the event they were to change
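The original slide showed the file itself; a hypothetical reconstruction might look like the following (the URLs are placeholders, and the map-style constructor assumes `JenkinsJobRef` exposes plain Groovy properties):

```groovy
// /vars/jobinfo.groovy -- lowercase file name; accessible in
// pipeline scripts simply as `jobinfo`.
import com.ibm.shared.JenkinsJobRef

// One place for hardcoded references, in case they ever change
@groovy.transform.Field
def PIPELINE_CONFIG = new JenkinsJobRef(
    jobURL:     'https://jenkins.example.com/job/pipeline-config/',  // placeholder
    remotePath: 'git@git.example.com:team/pipeline-config.git'       // placeholder
)
```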
21. /vars/nodescript.groovy
• We step out and execute JavaScript resource files with Node
• Created a “step” helper to be used in pipelines
• Call it from a pipeline within a nodescript {} block
• Where you can set variables to determine what to do
22. /vars/nodescript.groovy
• Allows the pipeline script to overwrite variables if it sets them in the block
• Checks out the resources project
• Copies down the JavaScript files
• Executes them and returns the result
23. Pipeline Tips
• Pipeline is not efficient
• Use Strict typing even though Groovy doesn’t necessarily require it
• Not meant to do heavy parsing or to hold complex logic
• You may be tempted to write it this way because the Groovy programming language allows it
• Declarative Pipeline
• A new type of pipeline script that helps limit this
• The pipeline script runs in the master’s JVM space; node blocks step the “steps” out to your Jenkins agent executor, but the main pipeline script uses your master’s CPU/memory
• Heavy logic can impact the master, so it is recommended to step it out to other processes: shell, Node, external Groovy scripts
• You will notice that for loops and if blocks in Pipeline are not efficient and can take a long time to execute because of the CPS compiler
• If you must have these, you can put @NonCPS on those functions, and they will be compiled normally
• CODE REVIEW!!!!
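A small illustration of the @NonCPS tip: loop-heavy parsing moves into an annotated helper that runs as normally compiled Groovy (the log file name is a placeholder; never call pipeline steps like `sh` or `node` inside a @NonCPS method):

```groovy
// Loop-heavy parsing is slow under CPS; @NonCPS compiles it normally.
@NonCPS
def countErrors(String text) {
    int errors = 0
    text.readLines().each { line ->
        if (line.contains('ERROR')) { errors++ }
    }
    return errors
}

node {
    def text = readFile 'output.log'   // placeholder file from an earlier step
    echo "Found ${countErrors(text)} error lines"
}
```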
24. What doesn’t work?
• Nested Stages/Parallel blocks of stages
• Blue Ocean I think is looking to solve this
• Fancy Groovy programming
• Stick to the basics, go simple
• Shared Libraries cannot use Pipeline steps
• If you see a crazy exception and stack trace, you likely have something unsupported in your script
• Google is useful in this case, but carefully go through your pipeline script
• Pipeline scripts don’t automatically AND their result with the results of things you do; unless there is a fatal error, you need to control the result yourself
25. Test and Develop Pipeline Scripts
• Created a JobsUnderDevelopment Folder
• Engineers have their own sub folder within that space
• Keeps all Dev Jobs separate
• Engineers define their folder config to point to the SCM branch to load the shared libraries from
• Then any jobs executing in that folder will use the dev stream of the shared libraries
• Pipeline scripts of production jobs can be copied into these folders as well and pointed to the SCM branch to load the development pipeline code
• Allows multiple engineers to work and test off their SCM branches before delivering pipeline script changes
26. Advanced Reporting
• Customize the build pages of your jobs
• Does require Admin Approval of various functions to do this
• We modify the build description of our jobs first when they run
27. Shared Library helper function: setDescriptionWithTestResult
• Lots of my pipelines use this, so I made it a shared library function
• Call it from the pipeline script
• Returns a String summary
• Can be used in the notification email
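The deck doesn't show the helper's body; one plausible sketch, assuming the JUnit plugin's `TestResultAction` and the `currentBuild.rawBuild` accessor (which itself needs Admin Approval, as the slide notes):

```groovy
// /vars/setDescriptionWithTestResult.groovy -- hypothetical sketch of a
// helper that writes a test summary into the build description.
def call() {
    def action = currentBuild.rawBuild.getAction(hudson.tasks.junit.TestResultAction)
    def summary = action ?
        "Tests: ${action.totalCount - action.failCount - action.skipCount} passed, " +
        "${action.failCount} failed, ${action.skipCount} skipped" :
        'No test results recorded'
    currentBuild.description = summary   // shows on the build page
    return summary                       // reusable in a notification email
}
```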
30. Sort items on the build page
• Our parallel blocks write out what is going on
• So you know where it is throughout the build
• But at the end of the job we want to clean that up