This document discusses using Docker and Jenkins for continuous integration. It introduces B1 Systems and their areas of expertise including virtualization, configuration management, and cloud technologies. It then describes how Docker is used to build and deploy applications into containers and how Fig, GitLab, Jenkins, and Puppet are integrated to provide continuous integration, collaboration on code, and configuration management capabilities. Use cases are presented for automatically testing Puppet modules and integrating/testing a simple web application.
Simplify your Jenkins Projects with Docker Multi-Stage Builds (Eric Smalling)
This is a talk I presented at Jenkins World 2017.
Abstract:
When building Docker images we often use multiple build steps and Dockerfiles to keep the image size down. Using multi-stage Docker builds we can eliminate this complexity, bringing all of the instructions back into a single Dockerfile while still keeping those images nice and small.
One of the most challenging things about building images is keeping the image size down. Each instruction in the Dockerfile adds a layer to the image, and you need to remember to clean up any artifacts you don’t need before moving on to the next layer. To write a really efficient Dockerfile, you have traditionally needed to employ shell tricks and other logic to keep the layers as small as possible and to ensure that each layer has the artifacts it needs from the previous layer and nothing else. It was actually very common to have multiple Jenkins pipeline steps and/or projects with unique Dockerfiles for different elements of the final build. Maintaining multiple sets of instructions to build your image is complicated and error-prone.
With multi-stage builds, you use multiple FROM statements in your Dockerfile. Each FROM instruction can use a different base, and each of them begins a new stage of the build. You can selectively copy artifacts from one stage to another, leaving behind everything you don’t want in the final image and simplifying both the Dockerfile and the Jenkins configuration needed to produce your images.
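To make this concrete, here is a minimal sketch of what such a multi-stage Dockerfile can look like. The Go project layout, image tags, and paths are illustrative assumptions, not taken from the talk:

```dockerfile
# Stage 1: build the binary in a full SDK image (stage is named "build")
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN go build -o /out/app ./cmd/app

# Stage 2: copy only the compiled artifact into a minimal runtime image;
# everything else from the build stage is left behind
FROM alpine:3.19
COPY --from=build /out/app /usr/local/bin/app
ENTRYPOINT ["/usr/local/bin/app"]
```

A single `docker build` on this file replaces what previously required separate build and packaging Dockerfiles wired together by Jenkins steps.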
IBM Index 2018 Conference Workshop: Modernizing Traditional Java Apps with D... (Eric Smalling)
Slides from my 2.5 hour hands-on workshop covering Docker basics, the Docker MTA program and how it applies to legacy Java applications and some tips on running those apps in containers in production.
Build, Publish, Deploy and Test Docker images and containers with Jenkins Wor... (Docker, Inc.)
This lightning talk will show you how simple it is to apply CI to the creation of Docker images, ensuring that each time the source is changed, a new image is created, tagged, and published. I will then show how easy it is to then deploy containers from this image and run tests to verify the behaviour.
Continuous Integration/Deployment with Docker and Jenkins (Francesco Bruni)
“Continuous Integration doesn’t get rid of bugs, but it does make them dramatically easier to find and remove” M. Fowler
Jenkins and Docker are cool technologies. Here's how they serve in a continuous-integration-based process and how they can be exploited to deliver new versions of the same software.
The slides present the whole process along with real code snippets.
The Jenkins open source continuous integration server now provides a “pipeline” scripting language which can define jobs that persist across server restarts, can be stored in a source code repository and can be versioned with the source code they are building. By defining the build and deployment pipeline in source code, teams can take full control of their build and deployment steps. The Docker project provides lightweight containers and a system for defining and managing those containers. The Jenkins pipeline and Docker containers are a great combination to improve the portability, reliability, and consistency of your build process.
This session will demonstrate Jenkins and Docker in the journey from continuous integration to DevOps.
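As an illustration of the combination described above, a pipeline defined in source control can run its build inside a Docker container. The stage names, image tag, and commands below are hypothetical, a sketch of a declarative Jenkinsfile rather than the session's actual example:

```groovy
// Illustrative Jenkinsfile: the pipeline definition is versioned with the code
pipeline {
    // run every stage inside a Maven container, so the build
    // environment is portable and reproducible
    agent { docker { image 'maven:3.9-eclipse-temurin-17' } }
    stages {
        stage('Build') {
            steps { sh 'mvn -B package' }
        }
        stage('Test') {
            steps { sh 'mvn -B test' }
        }
    }
}
```

Because the Jenkinsfile lives in the repository, the build steps are versioned, reviewed, and restored across server restarts along with the code they build.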
Docker for any type of workload and any IT Infrastructure (Docker, Inc.)
This presentation discusses the different types of workloads typical enterprises are required to run, which use cases exist for containerizing them, and how leading-edge workload orchestration can be used to deploy, run, and manage containerized workloads of various types on scale-out infrastructures such as on-premises clusters, public clouds, or hybrid clouds.
Introduction to Docker. Docker is an open-source framework that provides "container virtualization". It does not need a hypervisor; it works directly with the kernel. It requires x64 Linux with kernel 3.8+ to provide virtualization.
Amazon Web Services supports Docker containers through the Elastic Beanstalk service and the newer Elastic Container Service (ECS). AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services. ECS is a highly scalable, high-performance container management service that supports Docker.
What is Docker and why should you care? A Docker container is like a lightweight virtual machine. It gives you the main benefit of a virtual machine, isolation of your application, without the drawbacks: having to ship an entire operating system with your application, slow startup times, and difficult interaction with the host.
In this presentation you will learn why Docker and containerization are the future of DevOps and how to use them efficiently. You will learn how to build, run, and link containers, and what volumes are and what they are used for. You will also learn about some of the many orchestration solutions that exist for managing a cluster of containers, both locally and in the cloud.
DockerCon SF 2015: Enabling Microservices @Orbitz (Docker, Inc.)
The slides from Steve Hoffman and Rick Fast's presentation at DockerCon SF 2015.
Talk Description:
In this talk we will discuss how we enabled decomposition of one of our 250+ system components into a continuously deployed microservice cluster.
This includes building a standardized Docker server composed of various local companion services alongside the Docker daemon, including: dynamic service discovery via Consul, a log relay to a centralized Elasticsearch cluster, and forwarding/batching of Dropwizard metrics to Graphite.
Building on this, we'll cover our Jenkins-driven automated pipeline for building Docker images and rolling deployments via Ansible, using static placement on existing infrastructure while prototyping dynamic placement using Docker + Apache Mesos.
Docker is a relatively new technology, but it is based on solid underpinnings of the Linux kernel. It can provision instances in a fraction of the time of a traditional virtual machine. This makes it a great candidate for development teams that want consistent test benches for their developers. Bring a laptop, set up your own disposable Docker environments, and make your development a pleasurable experience.
Docker Continuous Delivery Workshop slides from the Docker Training & Workshop for DevOps and Continuous Delivery at OSS Festival 2014, Thailand, on October 11, 2014.
If you're not familiar with Docker yet, here is your chance to catch up: a quick overview of the open-source Docker Engine and its associated services delivered through the Docker Hub. Jérôme will also discuss the new features of Docker 1.0 and briefly explain how you can run and maintain Docker on Azure. In addition, an Azure team member will demonstrate how to deploy Docker to Azure. The presentation will be followed by a Q&A session!
Java Developer Intro to Environment Management with Vagrant, Puppet, and Dock... (Lucas Jellema)
Creating and managing environments for development and R&D activities can be cumbersome. Quickly spinning up databases and web servers, using physical resources in a smart way, installing application components, and having all the elements talk to each other can take a lot of time. This session takes you by the hand and introduces you to Vagrant and Oracle VM VirtualBox for quickly provisioning VMs in which Docker containers run platform components, applications, and microservices—all set up by use of Puppet and interacting with Git(Hub). You’ll start from zero on your laptop and end with both local and public cloud environments in which to develop, test, and run various types of applications. Lean governance and evolution of the environments are discussed too.
Pipeline as code - new feature in Jenkins 2 (Michal Ziarnik)
What is pipeline as code in a continuous delivery/continuous deployment environment.
How to set up a Multibranch Pipeline to fully benefit from pipeline features.
The Jenkins master-node concept in a Kubernetes cluster.
stackconf 2020 | Infrastructure as Software by Paul Stack (NETWAYS)
In this talk, Paul will demonstrate why writing infrastructure in general-purpose programming languages is a better choice for infrastructure management. Pulumi is an open-source tool that allows users to write their infrastructure code in TypeScript, Python, .NET, or Go.
General-purpose languages allow infrastructure code to have integrated testing and compile-time checks, make it possible to create infrastructure APIs, and are more suited to infrastructure management than DSLs, JSON, or YAML. In addition, he will demonstrate how to build infrastructure that manages serverless, Kubernetes, PaaS, and IaaS systems across multiple cloud providers.
Rancher and Kubernetes power the majority of modern applications in production. But the automation chain that lets you deliver code with peace of mind starts much further upstream, thanks to open-source tooling.
On the agenda:
- Commit code: with GitLab and its collaboration tools
- Build image: more reliability with SLE Base Container Images
- Store in registry: archiving and vulnerability scanning with Harbor
- Test & go: continuous delivery with GitOps and Rancher Fleet
A presentation of a software factory: from code to production, using SUSE & Rancher components and third parties such as GitLab and Harbor.
A replay of the webinar is available at https://youtu.be/WuG716Io7sw
.docker: How to deploy Digital Experience in a container, drinking a cup of ... (ICON UK EVENTS Limited)
Matteo Bisi / Factor-y srl
Andrea Fontana / SOWRE SA
Docker is one of the best technologies on the market for installing, running, and deploying applications faster and more securely than ever before. In this session you will see how to deploy a complete digital experience inside containers, enabling you to deploy a Portal in the time it takes to drink a cup of coffee. We will start with a deep overview of Docker: what Docker is, where you can find it, what a container is, and why you should use a container instead of a complete virtual machine. After the overview, we will look at how to install IBM software inside a container using Dockerfiles that run the setup with a silent setup script. In the last part, we will talk about possible uses of this configuration in real-world scenarios such as staging or development environments, or in a WebSphere Portal farm setup.
Three years ago, Meetic chose to rebuild its backend architecture using microservices and an event-driven strategy. As we moved away from our old legacy application, testing features gradually became a pain, especially when those features rely on multiple changes across multiple components. Whatever the number of applications you manage, unit testing is easy, as is functional testing on a microservice: a good Gherkin framework and a set of Docker containers can do the job. The real challenge lies in end-to-end testing, even more so when a feature can involve up to 60 different components.
To solve that issue, Meetic is building a Kubernetes strategy around testing. To do such a thing we need to:
- Be able to generate a docker container for each pull-request on any component of the stack
- Be able to create a full testing environment in the simplest way
- Be able to launch automated test on this newly created environment
- Have a clean-up process to destroy testing environments after tests
To separate the various testing environments, we chose to use Kubernetes Namespaces, each containing a variant of the Meetic stack. But when it comes to Kubernetes, managing multiple namespaces can be hard. YAML configuration files need to be shared in a way that each person or automated job can access and modify them without impacting others.
This is typically why Meetic chose to develop its own tool to manage namespaces through a CLI tool, or a REST API on which we can plug a friendly UI.
In this talk we will tell you the story of our CI/CD evolution to satisfy the need to create a Docker container for each new pull request. And we will show you how to make end-to-end testing easier using Blackbeard, the tool we developed to manage namespaces, inspired by Helm.
Building Distributed Systems without Docker, Using Docker Plumbing Projects -... (Patrick Chanezon)
Docker provides an integrated and opinionated toolset to build, ship and run distributed applications. Over the past year, the Docker codebase has been refactored extensively to extract infrastructure plumbing components that can be used independently, following the UNIX philosophy of small tools doing one thing well: runC, containerd, swarmkit, hyperkit, vpnkit, datakit and the newly introduced InfraKit.
This talk will give an overview of these tools and how you can use them to build your own distributed systems without Docker.
Patrick Chanezon & David Chung, Docker & Phil Estes, IBM
Basic Idea
Develop a build system that leverages Docker to implement a continuous integration/deployment (CI/CD) pipeline. A git commit must kick off packaging of a Docker image and provisioning it in a VM.
A git commit starts a build of a Docker image, which is then run and provisioned in a virtual machine. After every commit, a series of test cases is run on the code to ensure its correctness. Once all the test cases pass, the image is updated on the Docker Hub registry, and a VM is provisioned which can then run the software directly (after pulling the image from Docker Hub).
This entire process ensures that the most recent version of the code is available to the person using the software, and it speeds up the overall process by a factor of two to three.
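The commit-triggered flow described above can be sketched as a post-commit build script. The image name, test command, and registry are placeholder assumptions, not taken from the project:

```shell
#!/bin/sh
# Illustrative CI hook run after each git commit (all names are placeholders)
set -e
docker build -t example/app:latest .               # package the commit into an image
docker run --rm example/app:latest ./run-tests.sh  # run the test suite inside the container
docker push example/app:latest                     # update the Docker Hub registry
# a VM can now pull example/app:latest and run the software directly
```

In practice this script would be wired to the repository's post-commit or webhook mechanism so every push triggers a fresh build, test, and publish cycle.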
Building specialized container-based systems with Moby: a few use cases
This talk will explain how you can leverage the Moby project to assemble your own specialized container-based system, whether for IoT, cloud or bare metal scenarios. We will cover Moby itself, the framework and tooling around the project, as well as many of its components: LinuxKit, InfraKit, containerd, SwarmKit, Notary. Then we will present a few use cases and demos of how different companies have leveraged Moby and some of the Moby components to create their own container-based systems.
"Docker best practice", Stanislav Kolenkin (senior DevOps, DataArt)
Docker best practice
About Docker: best practices for writing Dockerfiles; the problem of having too many layers in images and approaches to optimizing layers; the multi-stage build feature; approaches to container and host security; and approaches to debugging and monitoring.
Docker - Demo on PHP Application deployment (Arun Prasath)
Docker is an open-source project to easily create lightweight, portable, self-sufficient containers from any application. The same container that a developer builds and tests on a laptop can run at scale, in production, on VMs, bare metal, OpenStack clusters, public clouds and more.
In this demo, I will show how to build an Apache image from a Dockerfile and deploy a PHP application from an external folder using custom configuration files.
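A sketch of the kind of Dockerfile the demo describes, assuming an official PHP+Apache base image and `app/` and `config/` source folders (the image tag and paths are assumptions, not taken from the demo):

```dockerfile
# Illustrative: serve a PHP app from an external folder with custom config
FROM php:8.2-apache
# replace the default vhost with a custom configuration file
COPY config/000-default.conf /etc/apache2/sites-available/000-default.conf
# copy the application source into Apache's document root
COPY app/ /var/www/html/
EXPOSE 80
```

Building this with `docker build` and running it with a port mapping (e.g. `-p 8080:80`) would serve the PHP application from the container.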
Hitchhiker's guide to Cloud-Native Build Pipelines and Infrastructure as Code (Robert van Mölken)
As more and more application deployments move to the cloud, the scale and complexity become harder to manage. Instead of a handful of large instances, you might have many smaller instances, so there are many more things to provision. Because of this, cloud vendors provide API abstractions of their compute, storage, network, and other platform services. In this talk I present a guide to provisioning these services, such as a Kubernetes cluster, using infrastructure as code, and to deploying your applications through cloud-native build pipelines. Get to know the concepts behind these DevOps practices and hear which tools to use, like Terraform and Oracle Container Pipelines, to automate these laborious tasks on Oracle Cloud Infrastructure.
Le nuove frontiere dell'AI nell'RPA con UiPath Autopilot™UiPathCommunity
In questo evento online gratuito, organizzato dalla Community Italiana di UiPath, potrai esplorare le nuove funzionalità di Autopilot, il tool che integra l'Intelligenza Artificiale nei processi di sviluppo e utilizzo delle Automazioni.
📕 Vedremo insieme alcuni esempi dell'utilizzo di Autopilot in diversi tool della Suite UiPath:
Autopilot per Studio Web
Autopilot per Studio
Autopilot per Apps
Clipboard AI
GenAI applicata alla Document Understanding
👨🏫👨💻 Speakers:
Stefano Negro, UiPath MVPx3, RPA Tech Lead @ BSP Consultant
Flavio Martinelli, UiPath MVP 2023, Technical Account Manager @UiPath
Andrei Tasca, RPA Solutions Team Lead @NTT Data
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
1. Continuous Integration using Docker & Jenkins
LinuxCon Europe 2014 October 13-15, 2014
Mattias Giese
Solutions Architect
B1 Systems GmbH
giese@b1-systems.de
B1 Systems GmbH - Linux/Open Source Consulting, Training, Support & Development
2. Introducing B1 Systems
founded in 2004
operating both nationally and internationally
more than 60 employees; low employee turnover
Provider for IBM, SUSE, Oracle & HP
vendor-independent (hardware and software)
Focus:
Consulting
Support
Development
Training
Operations
Solutions
B1 Systems GmbH
Continuous Integration using Docker &
Jenkins 2 / 43
3. Areas of Expertise
Virtualization (XEN, KVM & RHEV)
Systems management (Spacewalk, Red Hat Satellite, SUSE Manager)
Configuration management (Puppet & Chef)
Monitoring (Nagios & Icinga)
IaaS Cloud (OpenStack & SUSE Cloud)
High availability (Pacemaker)
Shared Storage (GPFS, OCFS2, DRBD & CEPH)
File Sharing (ownCloud)
Packaging (Open Build Service)
Providing on-site systems administration and/or development
4. Partners
5. Deployment Stack
6. Deployment Stack – Technologies Used
Docker – an open platform for developers and sysadmins to build, ship, and run distributed applications
Fig – a Docker orchestration tool
GitLab – Open Source software to collaborate on code
Jenkins – an Open Source continuous integration system
Puppet/r10k – an Open Source configuration management system to define the state of an IT infrastructure and to automatically enforce this state
7. Docker – Build, Ship and Run Applications
8. What is Docker?
an open platform for developers and sysadmins
Open Source Engine to standardize LXC
build, ship and run (distributed) applications
easy to use
create and share images
chroot on steroids
all you need is inside the container
not a Virtual Server – less overhead
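The "all you need is inside the container" point is easiest to see in a Dockerfile. A minimal sketch (the base image and package are illustrative choices, not taken from the deck):

```dockerfile
# Build an image that carries the service and all of its dependencies
FROM debian:wheezy
RUN apt-get update && \
    apt-get install -y memcached && \
    apt-get clean                       # clean up to keep the layer small
EXPOSE 11211
CMD ["memcached", "-u", "daemon"]
```

`docker build -t memcached-demo .` produces the image; `docker run -d memcached-demo` starts a container from it.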
9. Technologies Used
Linux Containers (LXC)
chroot
use of Linux Kernel features:
cgroups
kernel namespaces
. . .
10. Features
can run any distribution
“if it will run on Linux it will run in Docker”
limited to the same architecture as the host
all you need is inside the container
libraries
dependencies
. . .
11. Fig – A Docker Orchestration Tool
simple orchestration tool for Docker
easy to deploy and use
helps to define and control a multi-container service
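Such a multi-container service is defined in a single fig.yml. A sketch of what that could look like (service names, images and the password are illustrative assumptions):

```yaml
# fig.yml – two linked services managed as one unit; `fig up` starts both
web:
  build: .            # image built from the Dockerfile in this directory
  ports:
    - "80:80"         # host:container port mapping
  links:
    - db              # makes the db container reachable from web
db:
  image: mysql
  environment:
    MYSQL_ROOT_PASSWORD: example
```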
12. GitLab – Collaboration on Code
13. Features
completely free and Open Source
manage and browse Git repositories
keep your code secure on your own server
manage access permissions
perform code review and merge requests
hooks
much more!
14. Gitlab 1/3
15. Gitlab 2/3
16. Gitlab 3/3
17. Jenkins – Continuous Integration
18. What is Jenkins?
Jenkins is a server-based system for continuous integration running in a servlet container (such as Apache Tomcat).
Jenkins supports SCM tools including AccuRev, CVS, Subversion, Git, Mercurial, Perforce, ClearCase and RTC, and can execute Apache Ant and Apache Maven builds.
Builds can be started by various means, including being triggered by a commit in a version control system.
Jenkins monitors executions of repeated jobs, such as building a software project or jobs run by cron.
Jenkins is written in Java and released as free software under the MIT License.
19. Jenkins 1/3
20. Jenkins 2/3
21. Jenkins 3/3
22. Jenkins Plugins
Docker Buildstep – allows adding various docker commands to a job as build steps.
Docker publish – provides the ability to build projects with a Dockerfile and publish them to the Docker registry.
Git – allows the use of Git as a build SCM.
Gitlab – a build trigger that makes GitLab think Jenkins is a GitLab CI.
Build Pipeline Plugin – provides a Build Pipeline View of upstream and downstream connected jobs that typically form a build pipeline.
Downstream-Ext Plugin – supports extended configuration for triggering downstream builds.
Publish Over SSH Plugin – transfers files and data secured by SSH.
Chuck Norris ;) . . .
23. Jenkins – Build Pipeline Plugin
provides a Build Pipeline View of upstream and downstream connected jobs that typically form a build pipeline
offers the ability to define manual triggers for jobs that require intervention prior to execution, e.g. an approval process outside of Jenkins
24. Jenkins – Downstream-Ext Plugin
This plugin supports extended configuration for triggering downstream builds:
triggers the build only if the downstream job has SCM changes
triggers the build if the upstream build result is better than, equal to or worse than a given result (SUCCESS, UNSTABLE, FAILURE, ABORTED)
for Matrix (alias multi-configuration) jobs, you can decide which part of the job should trigger the downstream job: parent only, configurations only, or both
25. Jenkins – Publish Over SSH Plugin
SCP – send files over SSH (SFTP)
execute commands on a remote server
username and password or public key authentication
encrypts passwords & passphrases in configuration files and the UI
SFTP transfer/SSH exec as a build step during the build process
runs before a (Maven) project build, or after a build whether the build was successful or not
send files directly from the artifacts directory of the build that is being promoted ("promotion aware")
optionally override the authentication credentials for each server in the job configuration
optionally retry if the transfer of files fails
enable the command/script to be executed in a pseudo TTY
26. Puppet – Configuration Management
27. What is Puppet?
a configuration management system to define a certain state of an IT infrastructure
developed since 2005 by Puppet Labs
describes resources and their state in manifests
uses its own declarative language
distributes these manifests through a server program called master
Agents on the target systems enforce the desired state.
System-specific information is discovered using facter for a dynamic configuration.
Agents also send a report on the actions taken back to the Puppet master.
Puppet's open API can send and receive data to/from third-party tools.
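Manifests declare the desired state rather than the commands to reach it. A minimal sketch (the resource names echo the memcached module used in the later use case, but this is not code from the deck):

```puppet
# init.pp – declare the state, let the agent enforce it
package { 'memcached':
  ensure => installed,
}

service { 'memcached':
  ensure  => running,
  enable  => true,
  require => Package['memcached'],  # ordering: install before start
}
```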
28. Puppet – r10k
deployment helper for Puppet modules (internal/Puppet Forge)
uses a cache directory to preserve space
may use a so-called Puppetfile for complex deployment needs (similar to a Gemfile)
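A Puppetfile pins where each module comes from, much as a Gemfile pins Ruby gems. A sketch of what one could look like (the module names and the internal Git URL are hypothetical):

```ruby
# Puppetfile – module sources for r10k to deploy
forge 'https://forge.puppetlabs.com'

mod 'puppetlabs/stdlib'       # fetched from the public Puppet Forge

mod 'memcached',              # fetched from an internal Git server
  :git => 'https://gitlab.example.com/puppet/puppet-memcached.git'
```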
29. Use Case 1: Automatic Testing of a Puppet Module
30. Use Case 1 – Automatic Testing of a Puppet Module
31. Use Case 1 – Prerequisites
Requirements:
base Docker containers for every supported OS of a module, built as separate jobs
Preparation:
1 git push to the development branch
2 GitLab triggers the Jenkins webhook
3 Jenkins merges the dev branch into the test branch
32. Use Case 1 – puppet-memcached-units
1 Jenkins creates a new container.
2 r10k deploys all puppet code.
3 Simple syntax and style (lint) checks are run.
4 rspec-puppet is run.
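The four steps above can be sketched as a single build-step script. This is an illustration, not the deck's actual job configuration: the paths follow the acceptance job on a later slide, the exact tool invocations (`r10k`, `puppet parser`, `puppet-lint`, `rspec`) are assumptions, and the `run` helper only prints each command so the sketch stays self-contained:

```shell
#!/bin/sh
# Dry-run sketch of the puppet-memcached-units build step.
set -e
PUPPETENV=/puppet/environments/dev
PUPPETMODULE=memcached

run() { echo "+ $*"; }   # print instead of execute, for illustration

run r10k deploy environment dev                                                  # step 2: deploy puppet code
run puppet parser validate "$PUPPETENV/modules/$PUPPETMODULE/manifests/init.pp"  # step 3: syntax check
run puppet-lint "$PUPPETENV/modules/$PUPPETMODULE/manifests"                     # step 3: style (lint) check
run rspec "$PUPPETENV/modules/$PUPPETMODULE/spec"                                # step 4: rspec-puppet
```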
34. Use Case 1 – Job: puppet-memcached-acceptance
1 Jenkins starts a fresh container from a puppet-enabled base
image
2 r10k deploys all needed Puppet code
3 puppet apply is run with the specified module:
PUPPETENV=/puppet/environments/dev
puppet apply --debug --modulepath $PUPPETENV/modules $PUPPETENV/modules/$PUPPETMODULE/test/init.pp
36. Use Case 2: Integration/Acceptance Testing of a Simple Webapp
37. Use Case 2: Integration/Acceptance Testing of a Simple Webapp
simple use case for a multi-tier app
httpd + webapp and mysql
need to be linked together
should be automatically deployed if possible
two containers: owncloud and mysql
automatic rebuild on change
after changes, integration testing should be done
if integration tests succeed, deploy app on staging host
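The two-container setup described above maps naturally onto a Fig service definition. A sketch of what it could look like (image names, database name and password are placeholders, not the deck's actual configuration):

```yaml
# fig.yml – owncloud web tier linked to a mysql backend
owncloud:
  image: owncloud       # httpd + webapp
  ports:
    - "80:80"
  links:
    - mysql             # makes the database reachable from the webapp
mysql:
  image: mysql
  environment:
    MYSQL_ROOT_PASSWORD: changeme
    MYSQL_DATABASE: owncloud
```

A Jenkins job could then rebuild and restart the whole stack after each change with `fig build && fig up -d`.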
38. Jenkins Job Overview
39. Docker Build Pipeline 1
40. Docker Build Pipeline 2
41. Next Steps
42. Next Steps
Most Docker image jobs are identical:
implement the flow/job DSL plugin
create a dynamic test matrix for Puppet modules
multi-configuration jobs may help with that
use Packer for building base Docker images (Puppet provisioner)
integrate with CoreOS/Project Atomic/OpenStack
implement a better global orchestration scheme (Terraform, Ansible, SaltStack maybe?)
integrate with clustered configuration through Serf/Consul or others
automatic handoff to QA, tests with real-world data in staging environments
easy push-button tagging of Docker images to 'production'
may be implemented through 'promoted builds'
43. Thank you for your attention!
For further information, please contact:
info@b1-systems.de or +49 (0)8457 - 931096