This document discusses infrastructure automation using tools like Ansible and Docker to help startups scale their infrastructure. It provides an introduction to concepts like infrastructure as code, DevOps, configuration management tools including Ansible, containers, and Docker. It then dives deeper into Ansible architecture, modules, inventory, roles, and playbooks. Finally, it covers Docker architecture, the build-ship-run model, and provides a simple "Hello World" example of running a Docker container. The presenter is introduced as an expert in these topics working to promote open source adoption.
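To make the playbook concept above concrete, here is a minimal illustrative Ansible playbook; the inventory group name and package are assumptions, not taken from the document:

```yaml
# site.yml - minimal illustrative playbook: install and start nginx
- name: Configure web servers
  hosts: webservers        # inventory group (assumed to exist)
  become: true             # escalate privileges for package install
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present
    - name: Ensure nginx is running and enabled at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

Run with something like `ansible-playbook -i inventory site.yml`.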
PuppetConf 2016: Keynote: Pulling the Strings to Containerize Your Life - Sco... (Puppet)
Here are the slides from Scott Coulton's PuppetConf 2016 presentation called Pulling the Strings to Containerize Your Life. Watch the videos at https://www.youtube.com/playlist?list=PLV86BgbREluVjwwt-9UL8u2Uy8xnzpIqa
We Need to Talk: How Communication Helps Code (Docker, Inc.)
To build a successful open source project requires more than just code. As Docker and many other household-name projects show, communication is also an essential ingredient in growing a project to greatness. This introvert-friendly talk will help you level up your development game by highlighting three tools and techniques: user research, InnerSource, and documentation. First, I'll help you apply some basic user research practices to refine your project purpose, vision, and value proposition. Then I'll talk about the role of documentation and effective storytelling in generating interest and feedback from broad development audiences. Next, I'll move on to InnerSource: what it is, how it works, and how it can improve your team's communication and collaboration habits. For this, I'll share real-world examples (including some from Zalando) of how InnerSource enabled teams to develop more effectively and efficiently. Finally, I’ll offer some examples of open-source projects (including Docker) that demonstrate how great communication leads to great software. Ideally, you’ll come away inspired to integrate more communication into your development processes.
The Beam Vision for Portability: "Write once, run anywhere" (Knoldus Inc.)
This session is all about a modern way to define and execute data processing pipelines with Apache Beam, an open-source unified programming model. We will talk about the Apache Beam vision and the benefits of the Beam Portability framework, which achieves the vision that a developer can use their favourite programming language with their preferred execution backend.
Your developers just walked into your cube and said: "Here's the new app, I built it with Docker, and it's ready to go live." What do you do next? In this session, we'll talk about what containers are and what they are not. And we'll step through a series of considerations that need to be examined when deploying containerized workloads - VMs or containers? Bare metal or cloud? What about capacity planning? Security? Disaster recovery? How do I even get started?
Watch the webinar here: https://codefresh.io/docker-based-pipelines-with-codefresh/
Most people think that Docker adoption means deploying Docker images. In this webinar, we will see the alternative way of adopting Docker in a Continuous Integration Pipeline, by packaging all build tools inside Docker containers. This makes it very easy to use different tool versions on the same build and puts an end to version conflicts in build machines. We will use Codefresh as a CI/CD solution as it fully supports pipelines where each build step is running on its own container image.
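As a rough sketch of this idea, a pipeline where each step runs in its own container image might look like the following Codefresh YAML; the step names, images, and commands are illustrative assumptions, so check the Codefresh documentation for the exact schema:

```yaml
# codefresh.yml sketch: build tools packaged as container images
version: "1.0"
steps:
  run_tests:
    title: Run unit tests
    image: node:18              # the test toolchain lives in this image
    commands:
      - npm ci
      - npm test
  build_image:
    title: Build Docker image
    type: build
    image_name: my-org/my-app   # illustrative image name
    tag: ${{CF_BRANCH_TAG_NORMALIZED}}
```

Because the tool versions are pinned per step image, two pipelines can use different Node versions on the same build machine without conflicts.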
Sign up for FREE Codefresh account (120 builds/month) at Codefresh.io/codefresh-signup
Multi-cloud CI/CD with failover powered by K8s, Istio, Helm, and Codefresh (Codefresh)
View the full webinar here: https://codefresh.io/multi-cloud-cicd-kubernetes-failover-across-clouds/
Multi-cloud Kubernetes is all about mitigating risk between hosting providers. In this webinar, we'll leverage Kubernetes as our universal cloud API, stand up clusters in Google, Amazon, and Azure, set up multi-deploy so our application runs in several locations, and demonstrate failover should one cloud fail.
We'll stand up and manage our clusters, then use Istio, Helm, and Codefresh to do a multi-cloud Canary rollout to each cloud.
Come ready to see:
- Continuous Delivery to multiple Kubernetes providers
- Cluster creation on multiple clouds from a single interface
- How to create failover rules
- A practical guide on how to set it up for yourself
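As a sketch of the canary traffic-splitting piece, an Istio VirtualService that sends a small share of traffic to a canary subset could look like this; the host and subset names are illustrative assumptions:

```yaml
# Illustrative Istio VirtualService: 90/10 stable/canary split
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-app
spec:
  hosts:
    - my-app.example.com
  http:
    - route:
        - destination:
            host: my-app
            subset: stable    # defined in a matching DestinationRule
          weight: 90
        - destination:
            host: my-app
            subset: canary
          weight: 10
```

Gradually raising the canary weight, per cluster, is what turns this into a multi-cloud canary rollout.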
Get a free Codefresh account today (that's 120 builds/month!) at https://codefresh.io/codefresh-signup/
Tales of Training: Scaling CodeLabs with Swarm Mode and Docker-Compose (Docker, Inc.)
Why is any "code lab workshop" or live demo always such a challenge?
A wise sysadmin once told me: “Get your hands dirty with production to learn”.
So I want to tell you a story of getting my hands dirty by creating a code lab environment treated as production.
This story will show that we can build a reproducible environment for code-lab workshops by using the Docker “tools”: the Engine, Swarm Mode, Docker-Compose, Moby, LinuxKit.
Following the spirit of “Play With Docker”, but generalized to any collection of services, this Codelab toolkit has been used in code-lab workshops of 120+ people.
That path was not a free lunch, but the lessons learned will give you an idea of how a training environment can be built efficiently with Compose and Swarm Mode, by treating it as a “production” platform and tackling the plumbing's “youth” limitations for the benefit of your use case.
As a trainer, I never learned so much as when building something in order to teach it to others: this is the story I want to tell you, the tale of using Docker as a tool of MASSIVE KNOWLEDGE SHARING, which is the root of growing our industry together.
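The kind of replicated codelab stack described above can be sketched as a Compose file deployed to Swarm; the image name and replica count are illustrative assumptions:

```yaml
# docker-compose.yml sketch for a replicated codelab service on Swarm
version: "3.8"
services:
  codelab:
    image: my-org/codelab:latest   # illustrative workshop image
    deploy:
      replicas: 120                # roughly one instance per attendee
      restart_policy:
        condition: on-failure      # self-heal crashed attendee sessions
    ports:
      - "8080:80"
```

Deployed with `docker stack deploy -c docker-compose.yml codelab`, Swarm schedules the replicas across the cluster and restarts failed ones.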
Knative Makes Developers Incredible on Serverless (Daniel Oh)
What enables your developers to develop, deploy, and manage modern serverless workloads? With Knative, developers can create and deploy their own serverless workloads on Kubernetes wherever they want, and develop functions in their language of choice. For example, OpenWhisk has rich support for preferred serverless languages such as Python and Java. If those languages don't suit your needs, you can deploy your own app container to act as your function. Because containers only spin up for a function when it is in use, resource usage can be minimized during idle times.
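A minimal Knative Service manifest illustrates the model, here using Knative's public hello-world sample image; scale-to-zero when idle is the default behaviour:

```yaml
# Illustrative Knative Service: scales to zero when idle
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go  # sample image
          env:
            - name: TARGET
              value: "World"
```

Applying this with `kubectl apply -f service.yaml` gives you a routed, autoscaled endpoint without writing Deployment, Service, or HPA objects by hand.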
Node.js Rocks in Docker for Dev and Ops (Bret Fisher)
DockerCon 2019 session
Learn the best practices of managing Node and JavaScript projects when developing, testing, and operating containers from Docker Captain Bret Fisher, who's been building and deploying Node apps in containers since the early days of the Docker project.
This session will take you on a journey, starting with local development of Node and js-specific projects and how to optimize your Docker Desktop and Compose configs for "the best of both worlds" with js and Docker. You'll see examples of cutting-edge features like macOS bind-mount performance enhancements, and multi-stage image targeting.
Then Bret will walk you through examples of optimizing your builds, testing, and CI/CD of Node with new features like test stages in multi-stage builds.
Finally, you'll get some examples around Node in production orchestration, and how you can optimize your cluster updates for zero-downtime scenarios on Kubernetes and Swarm using Node connection management techniques.
Node apps rock in containers, so come join Bret for a fun ride through the best parts and learn solutions for the problems that you'll need to solve along the way.
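As a hedged sketch of the multi-stage pattern with a dedicated test stage mentioned above, where the file names and npm scripts are assumptions about your project:

```dockerfile
# Illustrative multi-stage Dockerfile for a Node app with a test stage
FROM node:18-slim AS base
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .

# CI can target just this stage: docker build --target test .
FROM base AS test
RUN npm test

# Production stage: drop dev dependencies, run the server
FROM base AS prod
ENV NODE_ENV=production
RUN npm prune --omit=dev
CMD ["node", "server.js"]
```

Building with `--target test` runs the suite during the image build, while the default build produces only the lean production stage.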
Securing the Software Supply Chain with TUF and Docker - Justin Cappos and Sa... (Docker, Inc.)
If you want to compromise millions of machines and users, software distribution and software updates are an excellent attack vector. Using public-key cryptography to sign your packages is a good starting point, but as we will see, it still leaves you open to a variety of attacks. This is why we designed TUF, a secure software update framework. TUF helps to handle key revocation securely, limits the impact a man-in-the-middle attacker may have, and reduces the impact of repository compromise. We will discuss TUF's protections and integration into Docker's Notary software, and demonstrate new techniques that could be added to verify other parts of the software supply chain, including the development, build, and quality assurance processes.
In this session, we will learn what Docker is and why it was needed. We will also take a look at the benefits of Docker and the concept behind containerization.
We will learn some core Docker concepts such as Docker images, Dockerfiles, Docker Hub, etc.
I will also show Docker commands in action in the terminal, and we will take a look at an actual Dockerfile used in an open-source project.
Finally, we will take a high-level look at the Docker architecture and understand how things work in Docker and how commands flow through it.
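For a sense of the kind of Dockerfile discussed, here is a minimal illustrative one; the app file and dependency list are placeholders, not from the session:

```dockerfile
# Minimal illustrative Dockerfile for a Python app
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```

A typical command flow is `docker build -t my-app .` to produce the image, then `docker run my-app` to start a container from it.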
DCSF19: How to Build Your Containerization Strategy (Docker, Inc.)
Lee Namba, Docker
The Docker Enterprise container platform helps organizations deploy and manage applications faster and it secures the application pipeline at a lower cost than traditional application delivery models. But it takes more than just great technology to achieve the desired results. The organization and culture of your enterprise directly impacts what you transform, how it’s done, and who does it. Success requires a strategy for how you will govern the container platform environment, how to assess your application estate, what your delivery pipeline will look like, and how to ensure developers, operators, security teams and others play nicely together. In this talk I will cover topics such as different types of workloads (legacy, microservices, FaaS, big data and more), how your org chart can influence whether you deploy CaaS (Containers as a Service) vs CLaaS (Clusters as a Service), how "shifting left" can determine if you can outsource, centralized vs distributed CI/CD and how containers play a role, transforming your pets into cattle, how giant whale balloons are used for onboarding, and a prescriptive and comprehensive methodology for successfully deploying containers into your enterprise.
Back to the Future: Containerize Legacy Applications (Docker, Inc.)
People typically think of Docker for microservices and try to make the smallest container they can. There are tremendous benefits to a microservices model but those are not the only apps that qualify for containers. Traditional, homegrown, monolithic apps are also great candidates for Docker - why? By containerizing these apps, many of the same agility, portability, security and cost savings benefits can be applied to the hundreds (if not thousands) of apps in your datacenters. But where to begin? Attend this session to learn how to approach modernizing traditional apps (MTA), considerations, the available tools and possibilities.
In this session we will look at creating infrastructure with Terraform, an IaC tool that manages infrastructure efficiently.
After this, we will see how to perform end-to-end testing on code written with Terraform using Terratest, a Go library that helps write automated tests for IaC.
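A minimal Terraform configuration of the sort a Terratest suite would exercise might look like this; the provider, region, and AMI id are placeholders:

```hcl
# Illustrative Terraform config: a single AWS EC2 instance
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI id
  instance_type = "t3.micro"
  tags = {
    Name = "terratest-example"
  }
}
```

A Terratest test typically applies such a configuration, asserts on its outputs, and destroys the resources afterwards.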
Building CI/CD Pipelines with Jenkins and Kubernetes (Janakiram MSV)
Learn how to configure CI/CD pipelines with Jenkins and Kubernetes. We will show you how to automate deployments from source code to production clusters.
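A declarative Jenkinsfile along these lines might look like the following sketch; plugin availability (Docker Pipeline), image names, manifest paths, and kubeconfig setup on the agent are assumptions:

```groovy
// Illustrative declarative Jenkinsfile: build in a container, deploy to K8s
pipeline {
  agent { docker { image 'node:18' } }   // requires the Docker Pipeline plugin
  stages {
    stage('Build & Test') {
      steps {
        sh 'npm ci && npm test'
      }
    }
    stage('Deploy') {
      steps {
        // assumes kubectl and cluster credentials are available on the agent
        sh 'kubectl apply -f k8s/deployment.yaml'
      }
    }
  }
}
```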
Instant Developer Onboarding with Self-Contained Repositories (Yshay Yaacobi)
Slides from my talk on "Instant developer onboarding with self-contained repositories".
https://sched.co/l9yG
Code examples on:
https://github.com/Yshayy/self-contained-repositories
Conference recordings will be added once they are public.
Ever since the “CloudNative revolution” took over our development environment (devenv), we have never been more challenged (or more excited). With Kubernetes, Docker (Containerd) & many other microservice-related technologies, we have a handful of technologies to master before we write the first line of code.
[HKOSCon x COSCUP 2020][20200801][Ansible: From VM to Kubernetes] (Wong Hoi Sing Edison)
By using Ansible for DevOps, we can manage VM and Docker image provisioning, Kubernetes and CephFS provisioning, and even Kubernetes Pod runtime management.
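A sketch of what an Ansible task driving Kubernetes can look like, assuming the kubernetes.core collection is installed; the manifest path is illustrative:

```yaml
# Illustrative playbook: apply a Kubernetes manifest via Ansible
- hosts: localhost
  tasks:
    - name: Apply a Kubernetes manifest (needs the kubernetes.core collection)
      kubernetes.core.k8s:
        state: present
        src: deployment.yaml   # illustrative path to a manifest file
```

The same playbook can mix tasks like this with ordinary package and service modules, which is what lets one tool span VMs and clusters.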
PuppetConf 2016: Why Network Automation Matters, and What You Can Do About It... (Puppet)
Here are the slides from Rick Sherman's PuppetConf 2016 presentation called Why Network Automation Matters, and What You Can Do About It. Watch the videos at https://www.youtube.com/playlist?list=PLV86BgbREluVjwwt-9UL8u2Uy8xnzpIqa
Introduction to Docker at the Azure Meet-up in New York (Jérôme Petazzoni)
This is the presentation given at the Azure New York Meet-Up group, September 3rd.
It includes a quick overview of the Open Source Docker Engine and its associated services delivered through the Docker Hub. It also covers the new features of Docker 1.0, and briefly explains how to get started with Docker on Azure.
A Kernel of Truth: Intrusion Detection and Attestation with eBPF (oholiab)
"Attestation is hard" is something you might hear from security researchers tracking nation states and APTs, but it's actually pretty true for most network-connected systems!
Modern deployment methodologies mean that disparate teams create workloads for shared worker-hosts (ranging from Jenkins to Kubernetes and all the other orchestrators and CI tools in-between), meaning that at any given moment your hosts could be running any one of a number of services, connecting to who-knows-what on the internet.
So when your network-based intrusion detection system (IDS) opaquely declares that one of these machines has made an "anomalous" network connection, how do you even determine if it's business as usual? Sure you can log on to the host to try and figure it out, but (in case you hadn't noticed) computers are pretty fast these days, and once the connection is closed it might as well not have happened... Assuming it wasn't actually a reverse shell...
At Yelp we turned to the Linux kernel to tell us whodunit! Utilizing the Linux kernel's eBPF subsystem - an in-kernel VM with syscall hooking capabilities - we're able to aggregate metadata about the calling process tree for any internet-bound TCP connection by filtering IPs and ports in-kernel and enriching with process tree information in userland. The result is "pidtree-bcc": a supplementary IDS. Now whenever there's an alert for a suspicious connection, we just search for it in our SIEM (spoiler alert: it's nearly always an engineer doing something "innovative")! And the cherry on top? It's stupid fast with negligible overhead, creating a much higher signal-to-noise ratio than the kernel's firehose-like audit subsystem.
This talk will look at how you can tune the signal-to-noise ratio of your IDS by making it reflect your business logic and common usage patterns, get more work done by reducing MTTR for false positives, use eBPF and the kernel to do all the hard work for you, accidentally load test your new IDS by not filtering all RFC-1918 addresses, and abuse Docker to get to production ASAP!
As well as looking at some of the technologies that the kernel puts at your disposal, this talk will also tell pidtree-bcc's road from hackathon project to production system and how focus on demonstrating business value early on allowed the organization to give us buy-in to build and deploy a brand new project from scratch.
DevSecCon London 2019: A Kernel of Truth: Intrusion Detection and Attestation... (DevSecCon)
Matt Carroll
Infrastructure Security Engineer at Yelp
OSDC 2016: rkt and Kubernetes: What's New with Container Runtimes and Orches... (NETWAYS)
Application containers are changing some of the fundamentals of how Linux is used in the server environment. rkt is a daemon-free container runtime with a focus on security. rkt is also an implementation of the App Container (appc) runtime specification, which defines the concept of a pod: a grouping of multiple containerized applications in a single execution unit. Pods are also used as the abstraction within Kubernetes, and having rkt work natively with pods makes it uniquely suited as a Kubernetes container runtime engine. With different application container runtimes on Linux to choose from (including Docker, kurma and rkt) this session will cover the differences. It will also dive into use cases for rkt under Kubernetes.
Deploy Multinode GitLab Runner in openSUSE 15.1 Instances with Ansible Automa...Samsul Ma'arif
Implementing Continuous Integration/Continuous Delivery/Deployment (CI/CD) is one of the DevOps practices. As a DevOps Engineer in a software house company, I manage the tools that help software developers deliver software to clients. By implementing CI/CD, software delivery can be much faster than any traditional, manual deployment.
K8s in 3h - Kubernetes Fundamentals TrainingPiotr Perzyna
Kubernetes (K8s) is an open-source system for automating deployment, scaling, and management of containerized applications. This training helps you understand key concepts within 3 hours.
Encrypted Traffic in Egypt - an attempt to understandAhmed Mekkawy
UPDATE: The video is added after the last slide.
After the OONI report about internet censorship in Egypt, I'm publishing some technical logs that discuss the existence of an active DPI with MITM capabilities in Egypt.
Securing Governmental Public Services with Free/Open Source Tools - Egyptian ...Ahmed Mekkawy
There's a common misconception that Free/Open Source Software is significantly less secure than proprietary software. This was proven wrong on many occasions, and the speaker was behind one of them in a nation-level project: the Egyptian elections, for seven consecutive rounds, from the March 2011 referendum until the 2014 presidential elections.
My presentation at the OpenData day organized by GDG in Mushtarak TechHub in Cairo, Egypt, on March 12th, 2016. It mainly discusses why governments should open up their data, and the benefits for the government, NGOs, businesses, and individuals.
Spirula Systems and the Egyptian Open Source AssociationAhmed Mekkawy
My presentation at the ALECSO organization meeting in January 2013 on free and open source software, covering Spirula Systems and the Egyptian Open Source Association, OpenEgypt.
My presentation in "ICT in our lives" in Faculty of Commerce, Alexandria University, Dec 2013
Trying to understand IT industry trends in light of game theory, with a focus on cloud computing and FOSS.
Why Cloud Computing has to go the FOSS wayAhmed Mekkawy
This presentation traces the trends of the software industry to reach the conclusion that cloud computing as a concept is inevitable, and that having open clouds is inevitable as well.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Connector Corner: Automate dynamic content and events by pushing a buttonDianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
DevOps and Testing slides at DASA ConnectKari Kakkonen
Slides by me and Rik Marselis from the DASA Connect conference on 30.5.2024. We discuss what testing is, what agile testing is, and finally what testing in DevOps looks like. We wrapped up with a lovely workshop in which participants explored different ways to think about quality and testing in different parts of the DevOps infinity loop.
2. - 2 -
The presenter
● Ahmed Mekkawy AKA linuxawy.
● CEO | Founder of Spirula Systems.
● Co-founder of OpenEgypt.
● Free Software Foundation (FSF) member.
● Independent consultant at MCIT.
● Advisory board member at Mushtarak.
● One of the authors of the Egyptian government's FOSS
adoption strategy.
Intro
BG
Ansible
Docker
3. - 3 -
Who is this for?
● Entrepreneurs with a technical background, who need to
make wise decisions.
● Developers, to get closer to operations and DevOps.
● SysAdmins/SysOps, to get closer to developers and
DevOps.
● Entry-level DevOps engineers.
4. - 4 -
Prerequisites
● A background in development or system administration.
● Awareness of Linux systems.
● Familiarity with the Linux command line.
5. - 5 -
Infrastructure as Code
● Definition
● Unlocked potential:
● Dynamic infrastructure
● Minimizing cycle time
● Environment versioning – through source control
● Testing your code/environment
6. - 6 -
DevOps?
● DevOps (a clipped compound of "development" and
"operations") is a culture, movement or practice that
emphasizes the collaboration and communication of both
software developers and other information-technology (IT)
professionals while automating the process of software
delivery and infrastructure changes. It aims at establishing
a culture and environment where building, testing, and
releasing software, can happen rapidly, frequently, and
more reliably. - Wikipedia
7. - 7 -
DevOps Culture
● The opposite of DevOps is despair — Gene Kim
● Technology has become more reliable than our
management and our processes.
● People > Process > Tools.
● DevOps is not a job title, nor a product, but rather a culture
and practices.
● It's an extension of « you operate what you build » to
« everyone's involved ». Everyone includes more than devs
and SysOps. Business people are in, too.
● Everyone involved knows how the entire system works, and
is clear about the underlying business value they bring to
the table. Availability becomes the problem for the entire
organization, not just for the SysOps.
8. - 8 -
DevOps Culture
● DevOps is not a technology problem. DevOps is a business
problem.
● Waterfall
● Complete isolation between Devs, SysOps, Business
department.
● Each new release has destabilizing influence.
● DevOps
● Devs and Ops are a single team.
● « us » instead of « them ».
● Emphasizing people and process over tools.
● Allows tight alignment of operations with business needs
and thus with customer needs.
9. - 9 -
DevOps Automation
● Why automation?
● SysAdmin POV :
● Handle growing scale
● Counter increasing failures
● Ensure server consistency
● Stop repeating tasks
● Design for failure
● No more server documentation (yaaay!)
● Developers POV :
● Automation is fun
● Environment versioning
● You understand how the production environment impacts
your code, hence you write more efficient code.
10. - 10 -
DevOps Automation
● Entrepreneur POV :
● Decrease operation overhead with scale
● Move among infrastructure providers
● Be agile on the infrastructure level
● Disaster Recovery
● Rapid Growth
● Slashdot Effect / Reddit Hug of Death
12. - 12 -
DevOps Automation
● Why not?
● Oopses here are bad, really bad.
● It can be tempting to do risky things.
● Knowing how to automate doesn't mean that you know
what to automate.
● Knowing what to automate doesn't mean that you know
how to automate.
● With great power comes great responsibility.
14. - 14 -
Cloud
● On-demand computing: (I|P|S)aaS, the aaS part is what
matters.
● Why? Too late to ask that now.
● Its impact from the infrastructure POV: resulting IT systems
become more complex and scalable
15. - 15 -
Configuration management
● Concept
● Since when
● Push vs. Pull
17. - 17 -
Containers
● OS level virtualization
● Containers vs. Virtualization
● Most known container engines:
● FreeBSD jails
● Solaris Zones
● Virtuozzo / OpenVZ
● LXC
● Docker
18. - 18 -
Microservices
● A software architecture style
● Application is broken down into lots of tiny services.
● The service does a single function.
● Each service is elastic, resilient, composable, minimal,
and complete.
● Services can be implemented using different
programming languages and environments.
● Services communicate using APIs
● Unix philosophy : Do one thing and do it well.
20. - 20 -
What is Ansible?
● Named after the fictional instantaneous hyperspace
communication system featured in Ender's Game.
● Feb 2012: Michael DeHaan – author of Cobbler –
started the Ansible project after working at Puppet Labs.
● Design goals:
● Minimal
● Consistent
● Secure
● Highly reliable
● Low learning curve
● Commercially supported (Ansible Tower - GUI).
21. - 21 -
Why Ansible?
● Agentless: uses plain SSH.
● Idempotent: safe to re-run.
● Modular: large number of contributed modules.
● Simple
● Easy to use:
● YAML syntax
● JSON output
● It's Python :)
● FOSS, naturally.
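Idempotency deserves a moment: re-running the same play converges to the same state instead of repeating actions. As a toy, hedged sketch (the package set and function below are purely illustrative, not Ansible's implementation):

```python
# Toy illustration of idempotence: ensuring a desired state twice
# changes the system at most once.
def ensure_present(packages, pkg):
    """Return (packages, changed) after ensuring pkg is 'installed'."""
    if pkg in packages:
        return packages, False      # already in the desired state
    return packages | {pkg}, True   # converge and report a change

state = {"openssh-server"}
state, changed1 = ensure_present(state, "nginx")  # changed1 is True
state, changed2 = ensure_present(state, "nginx")  # changed2 is False
print(changed1, changed2)
```

This is why `state=installed` (a desired state) is the idiom, rather than "install" (an action).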
23. - 23 -
Modules
● ansible all -i hosts -s -m shell -a 'apt-get install nginx'
● ansible all -i hosts -s -m apt -a 'pkg=nginx state=installed update_cache=true'
● Note: it's 'state', not 'change' – modules describe the desired state, not an action.
● You can write your own in any language, but please use Python.
● https://docs.ansible.com/ansible/modules_by_category.html
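As a hedged, minimal sketch of the contract a custom module must honor (greatly simplified; the function and message below are illustrative): Ansible runs the module and reads a single JSON object from stdout, with at least a "changed" key.

```python
#!/usr/bin/env python
# Minimal sketch of a module's output contract: one JSON object on
# stdout, with "changed" telling Ansible whether anything was modified.
import json

def run_module(name):
    # Purely illustrative: nothing on the system is touched,
    # so changed is False.
    return {"changed": False, "msg": "Hello %s" % name}

if __name__ == "__main__":
    print(json.dumps(run_module("world")))
```

Real modules also read their arguments from Ansible and usually use the `AnsibleModule` helper from `ansible.module_utils.basic`, which handles argument parsing and JSON output for you.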
26. - 26 -
Dynamic Inventory
● Getting the inventory from another system:
● LDAP
● Cobbler
● OpenStack
● EC2
● … etc
● https://docs.ansible.com/ansible/intro_dynamic_inventory.html
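A dynamic inventory source is just an executable that emits JSON when Ansible calls it with `--list`. A hedged sketch of the expected shape (group names and hostnames below are made up):

```python
#!/usr/bin/env python
# Sketch of the dynamic inventory contract: Ansible runs this script
# with --list and expects a JSON mapping of groups to hosts and vars.
import json
import sys

def inventory():
    return {
        "webservers": {
            "hosts": ["web1.example.com", "web2.example.com"],
            "vars": {"http_port": 80},
        },
        "dbservers": {"hosts": ["db1.example.com"]},
    }

if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "--list":
        print(json.dumps(inventory()))
```

You would then point Ansible at the script instead of a static file, e.g. `ansible all -i ./inventory.py -m ping`.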
27. - 27 -
Roles
● An organized set of tasks together with everything they need:
rolename
- defaults
- files
- handlers
- meta
- templates
- tasks
- vars
● Each of those directories contains a main.yml, except files
and templates.
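As a hedged, minimal sketch (the package, template, and handler names are illustrative, not from the slides), a role's tasks/main.yml might look like:

```yaml
---
# Hypothetical rolename/tasks/main.yml – the entry point Ansible
# loads when the role is applied.
- name: Install nginx
  apt: pkg=nginx state=installed update_cache=true

- name: Deploy nginx config from a template
  template: src=nginx.conf.j2 dest=/etc/nginx/nginx.conf
  # assumes a matching "Restart Nginx" handler in handlers/main.yml
  notify:
    - Restart Nginx
```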
28. - 28 -
Files
● Files to be copied to the servers as-is.
● No main.yml here.
● Example: startup scripts.
29. - 29 -
Templates
● Files to be copied to the server after substituting
variables, or applying some minor logic (e.g. loops).
● Python's Jinja2 template engine.
● No main.yml here either.
● Simple use:
echo {{ ip_forward }} > /proc/sys/net/ipv4/ip_forward
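Under the hood this is ordinary Jinja2. A minimal stand-alone sketch of the rendering step (assuming the jinja2 package, which Ansible itself depends on; the template string is illustrative):

```python
# Sketch of what the template machinery does: render a Jinja2
# template string against a dict of variables.
from jinja2 import Template

tmpl = Template("net.ipv4.ip_forward = {{ ip_forward }}")
print(tmpl.render(ip_forward=1))  # net.ipv4.ip_forward = 1
```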
30. - 30 -
Templates
● Advanced use:
{% for service in outgoing %}
{{ '##'|e }} {{ service.name }}
{{ '##'|e }} {{ '=' * service.name|length }}
iptables -A OUTPUT -p {{ service.protocol | default('tcp') }} {{ '-d ' + service.destination if service.destination is defined else '' }} --dport {{ service.port }} -j ACCEPT
iptables -A INPUT -p {{ service.protocol | default('tcp') }} {{ '-s ' + service.destination if service.destination is defined else '' }} --sport {{ service.port }} {{ '' if service.protocol is defined and service.protocol == 'udp' else '! --syn ' }} -j ACCEPT
{% endfor %}
31. - 31 -
Handlers
● Just like a task, but triggered from within another task.
● Handlers only run if the notifying task reports a change. Think: events.
tasks:
  - name: Install Nginx
    apt: pkg=nginx state=installed update_cache=true
    notify:
      - Start Nginx

handlers:
  - name: Start Nginx
    service: name=nginx state=started
32. - 32 -
Meta
● Role metadata, including dependencies on other roles.
---
author: your name
description: what this role does
company: your_company
license: GPLv2
min_ansible_version: 1.2
dependencies:
  - { role: ssl }
34. - 34 -
Variables
● Declaration (in override order):
● command line
● playbook file
● group_vars
● role (vars)
● role (defaults)
● facts.
● https://docs.ansible.com/ansible/playbooks_variables.html
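A hedged sketch of that override order, with made-up file contents (the variable name is illustrative):

```yaml
# roles/nginx/defaults/main.yml – weakest: a role default
http_port: 80

# group_vars/all.yml – overrides the role default
http_port: 8080

# command line – strongest, wins over everything:
#   ansible-playbook site.yml -i hosts -e "http_port=9090"
```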
35. - 35 -
Spirula's practices
● Treat your CM code as code:
● Use version control.
● Write meaningful commit messages.
● Reuse code.
● Have a testing procedure.
● Keep your /etc/ansible/hosts empty, so you have to
define an inventory file explicitly on each run.
● Keep your variable definitions clean:
● Don't define vars in the playbook file.
● Keep your project vars in the playbook's group_vars, in
all.yml, or in a group file if needed.
36. - 36 -
Ansible Galaxy
● A community hub for contributing, downloading, and reviewing
Ansible roles.
ansible-galaxy install Spirula.common
● https://galaxy.ansible.com
40. - 40 -
Build Ship Run
● Build images using Dockerfiles.
● Ship images to different environments (testing, staging,
production).
● Run and scale your containers on different platforms.
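As a hedged sketch of the build step (base image, package, and tag are illustrative), a minimal Dockerfile might look like:

```dockerfile
# Build with: docker build -t myorg/web .
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y nginx
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```

Shipping is then `docker push myorg/web` to a registry, and running is `docker run -d -p 80:80 myorg/web` on any host with the Docker engine.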
41. - 41 -
Docker 'Hello world'
● Install Docker engine on your Linux platform.
● Run :
$ docker run -it ubuntu:14.04 /bin/bash
● Docker will run a /bin/bash process inside a container and
give you control of this process.