This document introduces Docker containers and provides examples of using Docker for networking containers across virtual machines. It discusses setting up a GRE tunnel between two VMs to connect their Docker interfaces and allow containers running on different VMs to communicate. Specific commands are provided to configure the Docker and overlay networks on each VM, establish the GRE tunnel, and run a sample container to test the connectivity.
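The GRE setup described above can be sketched as a short command sequence. This is a hedged sketch, not the document's exact commands: the VM addresses (10.0.0.1/10.0.0.2), the per-VM Docker subnets, and the device name `gre1` are assumptions, and everything requires root.

```shell
# Sketch: connect the docker0 bridges of two VMs with a GRE tunnel.
# Assumed: VM1 = 10.0.0.1 (containers on 172.17.1.0/24),
#          VM2 = 10.0.0.2 (containers on 172.17.2.0/24).
# Run on VM1; mirror the addresses on VM2.

# Each VM needs a non-overlapping Docker subnet (daemon option):
#   dockerd --bip=172.17.1.1/24

# Create the GRE tunnel endpoint towards VM2.
ip tunnel add gre1 mode gre remote 10.0.0.2 local 10.0.0.1
ip link set gre1 up

# Route the other VM's container subnet through the tunnel.
ip route add 172.17.2.0/24 dev gre1

# Test from a container: it should now reach containers on VM2.
docker run --rm busybox ping -c 1 172.17.2.2
```

The key idea is that each host simply routes the other host's container subnet over the tunnel; Docker itself is unaware of the overlay.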
Kurento is a media server for real-time video communication that needed to test its software under many scenarios and environments. Its CI infrastructure originally used many virtual machines, each configured differently for testing. This led to high costs, configuration difficulties, and slow development cycles. By using Docker, Kurento was able to define environments as reusable images and run tests in isolated containers on fewer virtual machines. This simplified infrastructure management and sped up development.
Cleaning Up the Dirt of the Nineties - How New Protocols are Modernizing the Web - Steffen Gebert
This document summarizes recent developments in web protocols, including HTTP/2, QUIC, and Multipath TCP (MPTCP). HTTP/2 modernized HTTP by introducing binary framing, multiplexing, header compression and server push. QUIC aims to replace TCP with UDP to reduce latency during connection setup. MPTCP leverages multiple network paths simultaneously for increased throughput and resilience.
At LinkedIn we run lots of Java services on Linux boxes. Java and Linux are a perfect pair. Except when they're not; then there's fireworks. This talk describes 5 situations we encountered where Java interacted with normal Linux behavior to create stunningly sub-optimal application behavior like minutes-long GC pauses. We'll deep dive to show What Java Got Wrong, why Linux behaves the way it does, and how the two can conspire to ruin your day. Finally we'll examine actual code samples showing how we fixed or hid the problems.
NovaProva, a new generation unit test framework for C programs - Greg Banks
I wrote NovaProva because I was sick of writing unit tests using the venerable but clunky CUnit library. CUnit looked easy when I started writing unit tests, but I soon discovered its limitations and ended up screaming in frustration. Meanwhile, over 18 months the Cyrus IMAP server project has gone from 0 unit tests to 461, of which 277 are written in C with CUnit. So that's a big itch!
BSD/macOS Sed and GNU Sed both support additional features beyond POSIX Sed, such as extended regular expressions with -E/-r, but using only POSIX features ensures portability. GNU Sed defaults allow some non-POSIX behaviors, so --posix is recommended for strict POSIX compliance. The most portable Sed scripts use only basic regular expressions and features defined in the POSIX specification.
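The portability point can be illustrated with the same substitution written both ways. The extended-regex form below is the GNU/BSD extension; the escaped-group form is the POSIX-portable equivalent.

```shell
s='hello-world'

# GNU/BSD extension: -E enables extended regexes (not in POSIX Sed).
# echo "$s" | sed -E 's/(hello)-(world)/\2-\1/'

# POSIX-portable: basic regular expressions with escaped groups.
echo "$s" | sed 's/\(hello\)-\(world\)/\2-\1/'
# -> world-hello
```

Sticking to basic regular expressions and escaped `\(...\)` groups keeps the script working on BSD/macOS, GNU, and busybox Sed alike.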
Kselftest is a developer-friendly test framework and set of tests for the Linux kernel. It aims to help developers test the kernel more easily and frequently. Tests are contained in the kernel source code and can be built, run, installed, and packaged quickly and easily. Writing new tests and using the test code as documentation makes kselftest a good starting point for new kernel developers.
This document provides an overview of developing cloud native Java applications. It discusses:
- Using microservices and containers to build distributed and scalable applications.
- Key principles of cloud native design like designing for distribution, resilience, and automation.
- Tools for building microservices like Java EE, Dropwizard Metrics, Hystrix, and MicroProfile.
- Techniques for configuration, communication, diagnostics, and resiliency when developing microservices.
- Examples of using technologies like Docker, Kubernetes, Payara Server, ActiveMQ, and PostgreSQL in a microservices architecture.
The document provides a comprehensive but concise introduction to developing cloud native applications using microservices and Java technologies.
BKK16-409 VOSY Switch Port to ARMv8 Platforms and ODP Integration - Linaro
Virtual Open Systems has developed VOSYSwitch, a high-performance user space networking virtual switch solution enabling NFV, based on the open source packet processing framework SnabbSwitch. In this talk, the experience of porting VOSYSwitch from x86 to ARMv8 will be shared, along with the integration of ODP as a driver layer for the available hardware resources. In addition to this presentation, a live demonstration will showcase chained VNFs connected through VOSYSwitch, where an OpenFastPath web server is implemented behind an ODP enabled packet filtering firewall. The targeted platforms are Freescale (NXP) LS2085A and Cavium's ThunderX.
Remix of two other open source presentations along with my own content, 40 slides set to play at 20 seconds auto-timed (similar to Pecha-Kucha style timing). This was delivered via Caribbean Tech Dev forum's monthly Google Hangout in November 2015, and video can be viewed at https://www.youtube.com/watch?v=xANrsSin_-0
1) Sockets provide an interface between an application process and the transport layer to allow communication between processes locally or remotely. The main types of sockets are TCP, UDP, and raw sockets.
2) Socket programming in Java uses I/O streams, readers/writers, and socket classes like ServerSocket and Socket to create TCP and UDP sockets for communication. TCP sockets use streams for reliable connection-oriented communication while UDP uses datagram packets for unreliable connectionless communication.
3) The document discusses how two friends could use TCP or UDP sockets in Java to chat between their computers after their messaging apps were uninstalled, providing examples of one-way and two-way communication implementations.
The Docker network overlay driver relies on several technologies: network namespaces, VXLAN, Netlink and a distributed key-value store. This talk will present each of these mechanisms one by one along with their userland tools and show hands-on how they interact together when setting up an overlay to connect containers.
The talk will continue with a demo showing how to build your own simple overlay using these technologies.
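The kind of hand-rolled overlay the talk demonstrates can be sketched with `ip netns` and a VXLAN device. This is a hedged sketch under assumed addresses (hosts 192.168.0.10/192.168.0.20, overlay subnet 10.10.0.0/24), requires root, and mimics only the data-plane part of what the Docker overlay driver sets up:

```shell
# A network namespace stands in for a container's netns.
ip netns add demo

# Point-to-point VXLAN device towards the other host (VNI 42).
ip link add vx0 type vxlan id 42 \
    remote 192.168.0.20 local 192.168.0.10 dstport 4789

# Move the VXLAN device into the namespace and address it.
ip link set vx0 netns demo
ip netns exec demo ip addr add 10.10.0.1/24 dev vx0
ip netns exec demo ip link set vx0 up

# With 10.10.0.2/24 configured the same way on the other host,
# the two namespaces can reach each other over the overlay:
ip netns exec demo ping -c 1 10.10.0.2
```

The real driver adds a bridge per overlay network, learns peer MACs via Netlink notifications, and stores membership in the key-value store; the VXLAN encapsulation itself is exactly this.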
Understanding the Linux kernel memory model - SeongJae Park
SeongJae Park introduces himself and his work contributing to the Linux kernel memory model documentation. He developed a guaranteed contiguous memory allocator and maintains the Korean translation of the kernel's memory barrier documentation. The document discusses how the increasing prevalence of multi-core processors requires careful programming to ensure correct parallel execution given relaxed memory ordering. It notes that compilers and CPUs optimize for instruction throughput over programmer goals, and memory accesses can be reordered in ways that affect correctness on multi-processors. Understanding the memory model is important for writing high-performance parallel code.
Drupal is a free and open source content management framework that can be used to build a variety of library applications and websites. It has a large community that contributes code and modules. The presentation provided an overview of Drupal and discussed how it has been implemented in many libraries for tasks like digital repositories, intranets, and online catalogs. Drupal 7 is the upcoming version that includes improvements like better testing, a new database abstraction layer, and an updated field API. Resources for learning more about developing with Drupal were also recommended.
The document provides an agenda for a workshop on RabbitMQ. It introduces RabbitMQ and its core concepts like exchanges, queues, bindings and message passing. It then outlines 5 exercises demonstrating key RabbitMQ patterns including hello world, work queues, publish/subscribe, routing and topics. Environmental setup using Docker is also covered.
This document discusses debugging CPython processes with gdb. It begins by stating the goals of making gdb a useful option for debugging and highlighting common issues. It then discusses why debugging is important for large open source projects like OpenStack. Several typical problems that gdb can help with are described, such as hung processes, stepping into native code, and interpreter crashes. Advantages of gdb over pdb are outlined. Prerequisites for using gdb to debug Python are covered, including ensuring gdb has Python support and obtaining CPython debugging symbols. Various gdb commands for debugging Python processes are demonstrated, such as attaching to a process, printing tracebacks and locals, and executing Python code in the debugged process. Finally, some common gotchas around virtual environments are highlighted.
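A minimal version of the attach-and-traceback workflow looks like this. It assumes the CPython debug symbols and the `python-gdb.py` helper extensions are installed (package names vary by distribution), and `myservice.py` is a hypothetical process name:

```shell
# Attach gdb to a running (possibly hung) CPython process.
gdb -p "$(pgrep -f myservice.py)"

# Inside gdb, the CPython helpers add py-* commands:
#   (gdb) py-bt        # Python-level traceback of the current thread
#   (gdb) py-locals    # local variables of the current Python frame
#   (gdb) py-list      # source listing around the current line
#   (gdb) thread apply all py-bt   # tracebacks for every thread
```

Unlike pdb, this works on a process that is wedged inside native code, since gdb operates on the interpreter itself rather than cooperating with it.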
The document provides an overview of snaps and snapd:
- Snaps are packages that provide application sandboxing and confinement using interfaces and security policies. They work across distributions and allow automatic updates.
- Snapcraft is used to build snaps by defining parts and plugins in a yaml file. Snaps are mounted at runtime rather than unpacked.
- Snapd is the daemon that installs, removes, and updates snaps. It manages security interfaces and confinement policies between snaps.
- The store publishes snaps to channels of different risk levels. Snapd installs the revisions specified by the store for each channel.
JDO 2019: Tips and Tricks from Docker Captain - Łukasz Lach - PROIDEA
The document provides tips and tricks for using Docker including:
1) Installing Docker on Linux in an easy way allowing choice of channel and version.
2) Setting up a local Docker Hub mirror for caching and revalidating images.
3) Using docker inspect to find containers that exited with non-zero codes or show commands for running containers.
4) Organizing docker-compose files with extensions, environment variables, anchors and aliases for well structured services.
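Tip 3 above can be sketched with the standard `docker ps` and `docker inspect` filters. This is a hedged sketch needing a running Docker daemon; the container name `web` is a placeholder:

```shell
# Containers that exited with a non-zero status.
docker ps -a --filter 'status=exited' \
    --format '{{.Names}}\t{{.Status}}' | grep -v 'Exited (0)'

# The exact entrypoint and arguments a container was started with.
docker inspect --format '{{.Path}} {{json .Args}}' web
```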
Continuous Integration: SaaS vs Jenkins in Cloud - Ideato
After the spread of cloud computing and Docker, is it still preferable to adopt the classic Continuous Integration SaaS offerings over a Jenkins system in the cloud?
The talk aims to show a real use case at Ideato: migrating from a SaaS such as Travis to a Jenkins system in the cloud, exploiting on-demand capabilities via the Amazon Web Services cloud and containerization via Docker.
Taking into account the technical aspects of the implementation, as well as those with a potential economic impact such as the lack of automation and the setup times, the talk presents the strengths and weaknesses of this system and how it can be applied to a range of projects. Finally, it lists a number of recently released products that could further evolve the current system.
The document discusses different approaches to packaging Java applications including fat JARs, thin JARs, skinny JARs, and hollow JARs. It analyzes the packaging of sample "Hello World" applications using Spring Boot, WildFly Swarm, Eclipse Vert.x, and Dropwizard. Fat JARs package the entire application and dependencies together but can become very large, while thinner approaches package dependencies externally to reduce size and improve redeployment speeds for container-based applications.
FOSS, and Linux in particular, provides an excellent OS when it comes to hacking gadgets. This presentation, created a couple of years back, presents GNU/Linux as the unconventional OS that makes this all possible!
This document discusses OpenFlow and Software Defined Networking (SDN). It provides an overview of OpenFlow including its history and how it works. OpenFlow allows the separation of the control plane from the data plane in networks by using a common protocol that can configure heterogeneous physical switches. The document also describes an OpenFlow switch implementation developed in Erlang including the OpenFlow protocol library and common switch logic.
Introductory seminar on Docker and its components (networks and Compose in particular). Focused on going through some basic concepts, mention some more advanced topics, and introduce a practical workshop held on the same evening.
This document outlines a process for volunteers to join a U-boot source code cleanup project. It explains that U-boot contains old code that could benefit from standardization. Volunteers would use git and mailing lists to identify unclean code, fix issues found by checkpatch.pl, and submit cleanup patches for review. The goal is to organize contributions to improve code quality and make U-boot easier for new developers to understand and modify.
This chapter discusses iptables, the program used to configure Linux firewalls using Netfilter. Iptables provides more features than older programs like ipchains and allows packets to be filtered through built-in chains. The document explains how packets flow differently with Netfilter compared to older implementations, traversing a single chain rather than multiple chains. It also provides the basic syntax for iptables commands.
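The basic syntax the chapter refers to follows one pattern: `iptables -A <chain> <match criteria> -j <target>`. A minimal stateful ruleset (requires root) might look like this:

```shell
# Allow replies to connections we initiated.
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Allow new inbound SSH, then default-drop everything else on INPUT.
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -P INPUT DROP

# List the INPUT chain with packet counters, numeric addresses.
iptables -L INPUT -v -n
```

Because packets traverse a single built-in chain per hook point under Netfilter, rule order within the chain is what determines the outcome.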
This document summarizes the new features and improvements in JDK 11, including:
- JEP 181 which introduces nests to allow private member access between logically related classes;
- JEP 309 which extends class files to support dynamic constants;
- Improvements to Aarch64 intrinsics, the HTTP client API, lambda parameters syntax, Unicode support and single file program launching.
Docker 1 0 1 0 1: a Docker introduction, actualized for the stable release of... - Jérôme Petazzoni
If you're not familiar yet with Docker, here is your chance to catch up. This presentation includes a quick overview of the Open Source Docker Engine, and its associated services delivered through the Docker Hub. Recent features are listed, as well as a glimpse at what's next in the Docker world.
This presentation was given during OSCON, at a meet-up hosted by New Relic, with co-presentations from CoreOS and Rackspace OnMetal.
Dockerizing a Symfony2 application. Why is Docker so cool? What is Docker? What are containers? How do they work? What is the Docker ecosystem? And how do you dockerize your web application (for example, one based on the Symfony2 framework)?
ExpoQA 2017 Using docker to build and test in your laptop and Jenkins - ElasTest Project
This document discusses using Docker to build and test applications on laptops and in Jenkins. It begins with an introduction to the author and their background/expertise. It then covers virtualization and containers, including VirtualBox, Vagrant, and Docker. The main Docker concepts, such as images, containers, and registries, are defined. Hands-on examples are provided for running basic Docker commands, managing the lifecycle of containers, exposing network services, and managing Docker images. Building a simple Python web application image is demonstrated as a first example of creating a custom Docker image.
This document introduces Docker and provides an overview of its key concepts and capabilities. It explains that Docker allows deploying applications into lightweight Linux containers that are isolated but share resources and run at native speeds. It describes how Docker uses namespaces and cgroups for isolation and copy-on-write storage for efficiency. The document also outlines common Docker workflows for building, testing, and deploying containerized applications both locally and in production environments at scale.
Introduction to Docker, December 2014 "Tour de France" Edition - Jérôme Petazzoni
Docker, the Open Source container Engine, lets you build, ship and run, any app, anywhere.
This is the presentation which was shown in December 2014 for the "Tour de France" in Paris, Lille, Lyon, Nice...
Presentation about Docker from the Java User Group in Ostrava, CZ (23rd of November 2015). Presented by Martin Damovsky (@damovsky).
Demos are available at https://github.com/damovsky/jug-ostrava-docker
Introduction to Docker, December 2014 "Tour de France" Bordeaux Special Edition - Jérôme Petazzoni
Docker, the Open Source container Engine, lets you build, ship and run, any app, anywhere.
This is the presentation which was shown in December 2014 for the last stop of the "Tour de France" in Bordeaux. It is slightly different from the presentation which was shown in the other cities (http://www.slideshare.net/jpetazzo/introduction-to-docker-december-2014-tour-de-france-edition), and includes a detailed history of dotCloud and Docker and a few other differences.
Special thanks to https://twitter.com/LilliJane and https://twitter.com/zirkome, who gave me the necessary motivation to put together this slightly different presentation, since they had already seen the other presentation in Paris :-)
A Gentle Introduction to Docker and Containers - Docker, Inc.
This document provides an introduction to Docker and containers. It outlines that Docker is an open source tool that makes it easy to deploy applications by using containers. Containers allow applications to be isolated for easier management and deployment. The document discusses how Docker builds on existing container technologies and provides a standardized way to build, share, and run application containers.
MIT Licensed - Reuse freely, but attribute "Hamilton Turner"
An introduction to the Docker container engine. Focuses on how to use Docker and implications of Docker for Cloud-based services. Shows multiple examples of rapidly starting complex environments using Docker. Very minor discussion on how Docker works technically.
Presentation source is available at https://github.com/hamiltont/intro-to-docker
Introduction to Docker at the Azure Meet-up in New York - Jérôme Petazzoni
This is the presentation given at the Azure New York Meet-Up group, September 3rd.
It includes a quick overview of the Open Source Docker Engine and its associated services delivered through the Docker Hub. It also covers the new features of Docker 1.0, and briefly explains how to get started with Docker on Azure.
Adrian Otto from Rackspace will present "Docker 102", This includes a summary of Docker 101 as a refresher from the August session, and builds upon that by discussing who should use a registry, and what options are available for keeping them private. We will discuss best practices for keeping your production environments evergreen with updated operating system environments, library dependencies, and maintaining an immutable infrastructure.
Using Docker to build and test on your laptop and in Jenkins (Micael Gallego)
Docker is changing the way we create and deploy software. This presentation is a hands-on introduction to using Docker to build and test software, both on your laptop and on your Jenkins CI server.
Introduction to Docker at Glidewell Laboratories in Orange County (Jérôme Petazzoni)
In this presentation we will introduce Docker, and how you can use it to build, ship, and run any application, anywhere. The presentation included short demos, links to further material, and of course Q&As. If you are already a seasoned Docker user, this presentation will probably be redundant; but if you have started to use Docker and are still struggling with some of its facets, you'll learn something!
Running the Oracle SOA Suite Environment in a Docker Container (Guido Schmutz)
The document discusses running the Oracle SOA Suite environment in a Docker container. It begins with an introduction to Docker and its benefits over virtual machines. It then demonstrates various Docker commands like run, logs, images, ps to launch and manage containers. It also covers building custom images using Dockerfiles. The document provides examples to showcase common Docker tasks like committing changes to an image, pulling images, stopping and removing containers.
This document discusses Docker, an open source project that automates the deployment of applications inside software containers. It begins by describing common problems in application deployment and how virtual machines address some issues but introduce overhead. It then summarizes the history and rapid growth of Docker since its launch in 2013. The rest of the document dives into technical aspects of Docker like how images and containers work, comparisons to virtual machines, security considerations, the Docker workflow, and how Docker relates to DevOps and continuous delivery practices.
Introduction to automated environment management with Docker Containers - for... (Lucas Jellema)
(presented at the AMIS Platform SIG session on October 1st 2015, Nieuwegein, The Netherlands)
Creating and managing environments for development and r&d activities can be cumbersome. Quickly spinning up databases and web servers, using physical resources in a smart way, installing application components and having everything talk to each other can take a lot of time. This presentation introduces Docker - the key aspects of build, ship and run. It discusses the main concepts and typical actions.
Next, it takes you by the hand and introduces you to Vagrant and Virtual Box for quickly provisioning VMs in which Docker containers run platform components, applications and microservices - all environments fine tuned using Puppet and interacting with Git(Hub). We start from zero on your laptop and end with local environments in which to develop, test and run various types of applications.
The presentation spends some time on Oracle's position regarding Docker and containers.
Why everyone is excited about Docker (and you should too...) - Carlo Bonamic... (Codemotion)
In less than two years Docker went from first line of code to major Open Source project with contributions from all the big names in IT. Everyone is excited, but what's in for me - as a Dev or Ops? In short, Docker makes creating Development, Test and even Production environments an order of magnitude simpler, faster and completely portable across both local and cloud infrastructure. We will start from Docker main concepts: how to create a Linux Container from base images, run your application in it, and version your runtimes as you would with source code, and finish with a concrete example.
This document discusses containerization and the Docker ecosystem. It provides a brief history of containerization technologies and an overview of Docker components like Docker Engine, Docker Hub, and Docker Inc. It also discusses developing with Docker through concepts like Dockerfiles, images, and Fig for running multi-container apps. More advanced topics covered include linking containers, volumes, Docker Machine for provisioning, and clustering with Swarm and Kubernetes.
Here are the answers to your questions about the OpenDaylight controller:
p1- When you ping between H1 and H2, the ODL controller installs rules in the switches to route the traffic between both virtual machines, which lets the communication be established. The feature that lets you visualize this traffic flow is odl-dlux-core, which includes the controller's web graphical interface.
p2- The information shown at that URL is the network topology detected by the ODL controller, includ…
This document presents an introduction to Software Defined Networking (SDN) and data-plane programming (P4). It explains the problems of conventional networks and how SDN and P4 enable greater programmability, flexibility, and agility in the network. It also summarizes key concepts such as OpenFlow, open source SDN controllers, and common SDN use cases.
1) The document provides an agenda and instructions for a hands-on tutorial on OVS/NFV basics using Open vSwitch, Linux containers, Docker, and virtual private networks.
2) It describes how to access two provided virtual machines and configure port mirroring with Open vSwitch to monitor network traffic between VMs.
3) Instructions are given for installing Linux containers on the VMs, configuring network interfaces and scripts, and testing connectivity between containers using GRE tunnels.
4) The tutorial also covers installing and configuring Docker containers on the VMs, creating virtual networks between them using GRE tunnels, and deploying example containers from Docker Hub.
From Monday January 9 to Friday January 13, three major events will take place at INICTEL-UNI. Admission to these events is free! If you have not registered yet, do it now!
Official guests: Dr. Guisela Orjeda (Director, Concytec), Dr. Carlos Valdez (Vice Minister, MTC), Ambassador Néstor Popolizio (Vice Minister, MRE), among others.
The document explains the basics of symmetric and asymmetric cryptography, including examples of how to encrypt and decrypt files and messages using the DES, AES, and RSA algorithms with the OpenSSL tool. It also covers generating and using public and private keys for encryption and digital signatures between two people, Bob and Alice.
Security is a major concern in computer networking, which faces increasing threats as the commercial Internet and related economies continue to grow. Virtualization technologies enabling scalable Cloud services pose further challenges to the security of computer infrastructures, demanding novel mechanisms combining the best of breed to counter certain types of attacks. Our work aims to explore advances in Cyber Threat Intelligence (CTI) in the context of Software Defined Networking (SDN) architectures. While CTI represents a recent approach to combating threats based on reliable sources, by sharing information and knowledge about computer criminal activities, SDN is a recent trend in architecting computer networks based on modularization and programmability principles. In this dissertation, we propose IntelFlow, an intelligent detection system for SDN that follows a proactive approach, using OpenFlow to deploy countermeasures to the threats learned through a distributed intelligence plane. We show through a proof-of-concept implementation that the proposed system is capable of delivering a number of benefits in terms of effectiveness, altogether contributing to the security of modern computer network designs.
The document proposes an architecture called IntelFlow that aims to integrate cyber threat intelligence into software defined networks. IntelFlow would introduce a knowledge plane that receives threat intelligence from various sources and allows the Bro IDS to query this intelligence. The knowledge plane would then export OpenFlow rules to implement countermeasures. The document outlines IntelFlow's components, how it would map intelligence indicators to OpenFlow flows, and presents initial results from a proof-of-concept showing IntelFlow can detect attacks faster than reactive approaches and successfully mitigated a DDoS attack in testing. Future work will further evaluate IntelFlow's effectiveness against other attacks.
1. Docker: Introduction and initial experiences with a lightweight and agile virtualization technology
Javier Richard Quinto Ancieta (MSc Candidate)
Raphael Vicente Rosa (PhD Candidate)
Prof. Christian Esteve Rothenberg
10/12/2014
Department of Computer Engineering and Industrial Automation (DCA)
Faculty of Electrical and Computer Engineering (FEEC)
University of Campinas (UNICAMP)
2. Agenda
• Introduction to Linux Containers
• Docker: Basics (theory and examples)
• Experiences at the INTRIG Lab @ FEEC/UNICAMP
• Hands-on examples
– Docker and Networking
Docker & OVS, tunneling, etc.
9. Solution to the deployment problem:
the Linux container
• Units of software delivery (ship it!)
• run anything
– if it can run on the host, it can run in the container
– i.e., if it can run on a Linux kernel, it can run
11. What’s a Container?
• High level approach: it's a lightweight VM
– own process space
– own network interface
– can run stuff as root
– can have its own /sbin/init
• (different from the host)
12. What’s a Container?
• Low level approach: it's chroot on steroids
– can also not have its own /sbin/init
– container = isolated process(es)
– share kernel with host
– no device emulation (neither HVM nor PV)
13. Separation of Concerns
Developer
• inside my container:
– my code
– my libraries
– my package manager
– my app
– my data
Operations
• outside the container:
– logging
– remote access
– network configuration
– monitoring
14. How does it work?
• Isolation with
namespaces
– pid
– mnt
– net
– uts
– ipc
– user
• Isolation with cgroups
– memory
– cpu
– blkio
– devices
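These isolation primitives can be inspected from any Linux shell, without Docker or root; a quick look:

```shell
# Every process's namespace memberships are exposed as symlinks under
# /proc/<pid>/ns; two processes in the same namespace share the same inode.
ls -l /proc/self/ns

# cgroup membership of the current process (hierarchy names differ between
# cgroup v1 and v2 systems).
cat /proc/self/cgroup
```

Comparing `/proc/1/ns/net` with `/proc/self/ns/net` inside a container is an easy way to see the isolation at work.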
16. Containers: Why is this a hot topic?
• LXC (Linux Containers) have been around for years
• Lightweight, fast, disposable... virtual environments
– boot in milliseconds
– just a few MB of intrinsic disk/memory usage
– bare metal performance is possible
• The new way to build, ship, deploy, run your apps!
• Everybody* wants to deploy containers now
• Tools like Docker made containers very easy to use
20. Docker-what?
The Big Picture
• Open Source engine to commoditize LXC
• using copy-on-write for quick provisioning
• allowing to create and share images
• standard format for containers
• (stack of layers; 1 layer = tarball+metadata)
• standard, reproducible way to easily build
• trusted images (Dockerfile, Stackbrew...)
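The "1 layer = tarball + metadata" idea can be illustrated with a toy layer built by hand (the file names and JSON fields below are illustrative only, not Docker's exact on-disk format):

```shell
# Build a one-file "layer": a filesystem tree packed as a tarball...
mkdir -p layer/etc
echo "hello from a layer" > layer/etc/motd
tar -C layer -cf layer.tar .

# ...plus a metadata stub recording how the layer was produced.
printf '{"id":"demo-layer","created_by":"echo hello > /etc/motd"}\n' > layer.json

# The layer's contents are just tar entries:
tar -tf layer.tar
```

An image is then a stack of such layers, applied in order with copy-on-write.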
21. Docker-what?
Under the hood
• rewrite of dotCloud internal container engine
– original version: Python, tied to dotCloud's internal stuff
– released version: Go, legacy-free
• the Docker daemon runs in the background
– manages containers, images, and builds
– HTTP API (over UNIX or TCP socket)
– embedded CLI talking to the API
• Open Source (GitHub public repository + issue tracking)
• user and dev mailing lists
• FreeNode IRC channels #docker, #docker-dev
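The daemon's HTTP API can be exercised directly; a minimal sketch, assuming the default socket path /var/run/docker.sock (it degrades to a message when no daemon is running):

```shell
# Query the same endpoint the CLI uses for `docker version`, straight over
# the daemon's UNIX socket (curl >= 7.40 for --unix-socket).
SOCK=/var/run/docker.sock
if [ -S "$SOCK" ] && command -v curl >/dev/null 2>&1; then
  curl --silent --unix-socket "$SOCK" http://localhost/version
else
  echo "no Docker daemon socket at $SOCK"
fi
```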
22. Docker-what?
The Ecosystem
• Docker Inc. (formerly dotCloud Inc.)
– ~30 employees, VC-backed
– SaaS and support offering around Docker
• Docker, the community
– more than 300 contributors, 1500 forks on GitHub
– dozens of projects around/on top of Docker
– x100k trained developers
23. Working with Docker
• On your servers (Linux)
– Packages (Ubuntu, Debian, Fedora, Gentoo…)
– Single binary install (Golang)
– Easy provisioning on Rackspace, Digital Ocean,
EC2…
• On your dev env (Linux, OS X, Windows)
– Vagrantfile (describe machine config reqs)
– boot2docker (25 MB VM image)
– Natively (if you run Linux)
27. Dockerizing Applications: A "Hello world"
$ sudo docker run ubuntu:14.04 /bin/echo 'Hello world'
– First App
$ sudo docker run -t -i ubuntu:14.04 /bin/bash
– Inside a container (Comparison with outside world)
$ sudo docker run -d ubuntu:14.04 /bin/sh -c "while true; do echo hello world; sleep 1; done"
– Something about id/name, logs, stop
28. Working with Containers
$ sudo docker .... [version]
– What docker client can do
– More: https://docs.docker.com/reference/commandline/cli/
$ sudo docker run -d -P training/webapp python app.py
– Let’s inspect, stop, rm, top, ps, …
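The inspection commands above can be strung together; a guarded sketch that is a no-op on machines without a reachable Docker daemon (image name taken from the slide):

```shell
# Run a container, inspect it, then tear it down. The guard also covers the
# case where the image cannot be pulled.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1 \
   && CID=$(docker run -d -P training/webapp python app.py 2>/dev/null); then
  DOCKER_OK=yes
  docker ps                        # running containers, with the published port
  docker inspect "$CID" | head -20 # JSON metadata: IP, volumes, config...
  docker top "$CID"                # processes running inside the container
  docker stop "$CID" && docker rm "$CID"
else
  DOCKER_OK=no
  echo "docker unavailable; commands shown for reference only"
fi
```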
29. Working with Images
$ sudo docker images
– Repo, tag, id, ...
$ sudo docker search ...
– Or https://hub.docker.com/
Creating your images:
$ sudo docker pull training/sinatra
$ sudo docker run -t -i training/sinatra /bin/bash
# gem install json
$ sudo docker commit -m="Added json gem" -a="Kate Smith" ID ouruser/sinatra:v2
30. Working with Images
Dockerfiles:
# This is a comment
FROM ubuntu:14.04
MAINTAINER Kate Smith <ksmith@example.com>
RUN apt-get update && apt-get install -y ruby ruby-dev
RUN gem install sinatra
$ sudo docker build -t="ouruser/sinatra:v2" .
– More: https://docs.docker.com/reference/builder
Tagging:
$ sudo docker tag ID ouruser/sinatra:devel
31. Managing Data in Containers
• A simple mount:
$ sudo docker run -d -P --name web -v /webapp training/webapp python app.py
• Or
$ sudo docker run -d -P --name web -v /src/webapp:/opt/webapp training/webapp python app.py
• And then:
$ sudo docker run -d --volumes-from web --name db training/postgres
• Removing a container and its volumes:
$ sudo docker rm -v web
32. Linking Containers
• Links:
– Env Variables
– /etc/hosts updates
$ sudo docker run -d --name db training/postgres
$ sudo docker run -d -P --name web --link db:db training/webapp /bin/bash
34. The Docker Workflow 1/2
• Work in dev environment
(local machine or container)
• Other services (databases etc.) in containers
(and behave just like the real thing!)
• Whenever you want to test "for real":
– Build in seconds
– Run instantly
35. The Docker Workflow 2/2
• Satisfied with your local build?
– Push it to a registry (public or private)
– Run it (automatically!) in CI/CD
– Run it in production
– Happiness!
• Something goes wrong? Rollback painlessly!
37. Authoring images
with run/commit
1) docker run ubuntu bash
2) apt-get install this and that
3) docker commit <containerid> <imagename>
4) docker run <imagename> bash
5) git clone git://.../mycode
6) pip install -r requirements.txt
7) docker commit <containerid> <imagename>
8) repeat steps 4-7 as necessary
9) docker tag <imagename> <user/image>
10) docker push <user/image>
38. Authoring images
with run/commit
• Pros
– Convenient, nothing to learn
– Can roll back/forward if needed
• Cons
– Manual process
– Iterative changes stack up
– Full rebuilds are boring, error-prone
39. Authoring images
with a Dockerfile
FROM ubuntu
RUN apt-get -y update
RUN apt-get install -y g++
RUN apt-get install -y make wget
EXPOSE 80
CMD ["/bin/ping"]
docker build -t AUTHOR/DOCKER-NAME .
40. Authoring images
with a Dockerfile
• Minimal learning curve
• Rebuilds are easy
• Caching system makes rebuilds faster
• Single file to define the whole environment!
42. File-level Backups
• Use volumes
$ docker run --name mysqldata -v /var/lib/mysql busybox true
$ docker run --name mysql --volumes-from mysqldata mysql
$ docker run --rm --volumes-from mysqldata mysqlbackup tar -cJf- /var/lib/mysql | stream-it-to-the-cloud.py
• Of course, you can use anything fancier than tar
(e.g. rsync, tarsnap...)
43. Data-level backups
• Use links
$ docker run --name mysql mysql
$ docker run --rm --link mysql:db mysqlbackup
$ mysqldump --all-databases | stream-it-to-the-cloud.py
• Can be combined with volumes
– put the SQL dump on a volume
– then backup that volume with file-level tools
(previous slide)
44. Logging for legacy apps
• Legacy = let me write to eleventy jillion arbitrary files in /var/lib/tomcat/logs!
• Solution: volumes
$ docker run --name logs -v /var/lib/tomcat/logs busybox true
$ docker run --name tomcat --volumes-from logs my_tomcat_image
– Inspect logs:
$ docker run --rm --volumes-from logs ubuntu bash
– Ship logs to something else:
$ docker run --name logshipper --volumes-from logs sawmill
45. Remote Access
• If you own the host: SSH to host + nsenter
https://github.com/jpetazzo/nsenter
• If you don't own the host: SSH in the container
https://github.com/phusion/baseimage-docker
• More on that topic (“do I need SSHD in containers?”):
http://blog.docker.com/2014/06/why-you-dont-need-to-run-sshd-in-docker/
• In the future:
– run separate SSH container
– log into that
– “hop” onto the target container
46. Orchestration
• There's more than one way to do it (again!)
– describe your stack in files
(Fig, Maestro-NG, Ansible and other CMs)
– submit requests through an API
(Mesos)
– implement something that looks like a PAAS
(Flynn, Deis, OpenShift)
– the “new wave”
(Kubernetes, Centurion, Helios...)
– OpenStack
47. Do you (want to) use OpenStack?
• Yes
– if you are building a PaaS, keep an eye on Solum
(and consider contributing)
– if you are moving VM workloads to containers, use Nova
(that's probably what you already have; just enable the Docker
driver)
– otherwise, use Heat
(and use Docker resources in your Heat templates)
• No
– go to next slide
48. Are you looking for a PaaS?
• Yes
– CloudFoundry (Ruby, but increasing % Go)
– Deis (Python, Docker-ish, runs on top of CoreOS)
– Dokku (a few hundred lines of Bash!)
– Flynn (Go, bleeding edge)
– OpenShift geard (Go)
• Choose wisely (or go to the next slide)
– http://blog.lusis.org/blog/2014/06/14/paas-for-realists/
“I don’t think ANY of the current private PaaS solutions are a fit right now.”
49. How many Docker hosts do you have?
• Only one per app or environment
– Fig
• A few (up to ~10)
– Maestro-NG
– your favorite CM (e.g. Ansible has a nice Docker module)
• A lot
– Mesos
– have a look at (and contribute to) the “new wave”
(Centurion, Helios, Kubernetes...)
50. Performance: measure things
• cgroups give us per-container...
– CPU usage
– memory usage (fine-grained: cache and resident set size)
– I/O usage (per device, reads vs writes, in bytes and in ops)
• cgroups don't give us...
– network metrics (have to do tricks with network namespaces)
https://github.com/google/cadvisor
http://jpetazzo.github.io/2013/10/08/docker-containers-metrics/
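Reading these counters is just reading files; a sketch for the calling process's own cgroup (paths differ between cgroup v1 and v2, hence the fallbacks):

```shell
CG=/sys/fs/cgroup
if [ -f "$CG/memory/memory.usage_in_bytes" ]; then
  # cgroup v1: each controller (memory, cpu, blkio...) has its own subtree
  MSG="memory (v1): $(cat "$CG/memory/memory.usage_in_bytes") bytes"
elif [ -f "$CG/cgroup.controllers" ]; then
  # cgroup v2: unified hierarchy; available controllers listed in one file
  MSG="cgroup v2 controllers: $(cat "$CG/cgroup.controllers")"
else
  MSG="no cgroup filesystem mounted at $CG"
fi
echo "$MSG"
```

For a Docker container the same files sit under the container's cgroup directory, which is what tools like cAdvisor read.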
52. Most images already available
• We can build our own Dockerfiles
– Some images are not verified
– Some of them old
– Custom configs
• Old x86 machines (Core 2 Duo, 4 GB RAM, 250 GB disk)
– Monitoring containers performance
– Following Apps recommendations
54. Outside world: 2015
• Start deploying our private repository
• Container configuration management
– Documentation
• No need for CM platform right now!
• Security configs required for production
– Under analysis!
56. Hands-on #1
# Access to the VM recently started:
$ ssh ubuntu@192.168.122.179
# To start each VM remotely:
$ sudo virsh net-start default
$ sudo virsh start ubuntu1
$ sudo virsh start ubuntu2
# Use different terminals to access each VM
From terminal 1
$ ssh ubuntu@192.168.123.2
From terminal 2
$ ssh ubuntu@192.168.123.3
57. # First, we check if docker is up
$ sudo ps aux |grep docker
root 624 0.0 1.5 328816 11536 ? Ssl 17:30 0:00 /usr/bin/docker -d
# If docker is not up, start it with:
$ sudo service docker start
# In each VM, search and pull the pre-configured container from docker hub
KVM-1: $ docker search intrig/tutorial
KVM-2: $ docker search intrig/tutorial
KVM-1: $ docker pull intrig/tutorial
KVM-2: $ docker pull intrig/tutorial
# Check that the image was correctly downloaded from the Docker Hub repository
KVM-1: $ docker images
KVM-2: $ docker images
Initiating Docker
58. # Virtual Machine 1 (KVM-1)
# Resetting the docker interface
sudo ip link set docker0 down
sudo brctl delbr docker0
sudo brctl addbr docker0
# Assigning an IP to the Docker interface
sudo ip addr add 172.16.1.1/24 dev docker0
sudo ip link set docker0 up
sudo ovs-vsctl add-br br0
# Creating a GRE tunnel
sudo ovs-vsctl add-port br0 gre0 -- set interface gre0 type=gre options:remote_ip=192.168.123.3
# Adding the bridge 'br0' as an interface to the bridge 'docker0'
sudo brctl addif docker0 br0
GRE Tunnel in Docker 1/4
59. # Virtual Machine 2 (KVM-2)
# Resetting the docker interface
sudo ip link set docker0 down
sudo brctl delbr docker0
sudo brctl addbr docker0
# Assigning an IP to the Docker interface
sudo ip addr add 172.16.1.2/24 dev docker0
sudo ip link set docker0 up
sudo ovs-vsctl add-br br0
# Creating a GRE tunnel
sudo ovs-vsctl add-port br0 gre0 -- set interface gre0 type=gre options:remote_ip=192.168.123.2
# Adding the bridge 'br0' as an interface to the bridge 'docker0'
sudo brctl addif docker0 br0
GRE Tunnel in Docker 2/4
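The two per-VM recipes above differ only in the local docker0 address and the remote VM's IP, so they can be captured in one helper that prints the commands to run as root on each VM (setup_gre is a hypothetical name; the addresses are the ones used in this tutorial):

```shell
# Print the bridge/OVS/GRE commands for one VM, given the local docker0
# address and the remote VM's IP. Run the printed output as root on that VM.
setup_gre() {
  local docker_ip=$1 remote_ip=$2
  cat <<EOF
ip link set docker0 down
brctl delbr docker0
brctl addbr docker0
ip addr add ${docker_ip} dev docker0
ip link set docker0 up
ovs-vsctl add-br br0
ovs-vsctl add-port br0 gre0 -- set interface gre0 type=gre options:remote_ip=${remote_ip}
brctl addif docker0 br0
EOF
}

setup_gre 172.16.1.1/24 192.168.123.3   # KVM-1
setup_gre 172.16.1.2/24 192.168.123.2   # KVM-2
```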
60. GRE Tunnel in Docker 3/4
# Virtual Machine 1 (KVM-1)
# Start container 1
$ docker run -i -t --privileged --name=container1 --hostname=container1 --publish 127.0.0.1:2222:22 intrig/tutorial:v3 /bin/bash
# If you get the prompt shown below, the container started successfully. Configure an IP address for it:
root@container1:/#
root@container1:/# ifconfig eth0 172.16.1.11 netmask 255.255.255.0
root@container1:/# route add default gw 172.16.1.1
61. GRE Tunnel in Docker 4/4
# Virtual Machine 2 (KVM-2)
# Start container 2
$ docker run -i -t --privileged --name=container2 --hostname=container2 --publish 127.0.0.1:2222:22 intrig/tutorial:v3 /bin/bash
# If you get the prompt shown below, the container started successfully. Configure an IP address for it:
root@container2:/#
root@container2:/# ifconfig eth0 172.16.1.12 netmask 255.255.255.0
root@container2:/# route add default gw 172.16.1.2
62. Testing GRE Tunnel
# Testing the connectivity between the containers
# From container1 to container2
Container1:/# ping 172.16.1.12
# From container2 to container1
Container2:/# ping 172.16.1.11
# Copy the 'iperf' binary from each KVM host into its container's
# filesystem (AUFS storage driver assumed)
KVM1: sudo cp /usr/bin/iperf /var/lib/docker/aufs/diff/<ID-docker1>/usr/bin/
KVM2: sudo cp /usr/bin/iperf /var/lib/docker/aufs/diff/<ID-docker2>/usr/bin/
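Writing into /var/lib/docker/aufs/diff/ only works with the AUFS storage driver and depends on Docker's internal layout. On Docker versions that support host-to-container copies, `docker cp` does the same job regardless of storage driver; a sketch:

```shell
# On KVM-1: copy iperf from the host into the running container
docker cp /usr/bin/iperf container1:/usr/bin/iperf

# On KVM-2: same for container2
docker cp /usr/bin/iperf container2:/usr/bin/iperf
```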
63. Testing GRE Tunnel
# Measure the throughput using iperf
# On the container with IP 172.16.1.12, launch an iperf server
# listening on TCP port 5001
$ sudo iperf -s
# On the other container (172.16.1.11), launch an iperf client
# connecting to 172.16.1.12 on TCP port 5001
$ sudo iperf -c 172.16.1.12
What can you say about the bandwidth?
66. # Virtual Machine 1 (KVM-1)
# Delete the container created in the last exercise
$ docker stop container1
$ docker rm container1
# Create two containers with the network mode set to none.
# Each container ID is stored in a shell variable.
$ C1=$(docker run -d --net=none -t -i --privileged --name=container1 --hostname=container1 intrig/tutorial:v3 /bin/bash)
$ C2=$(docker run -d --net=none -t -i --privileged --name=container2 --hostname=container2 intrig/tutorial:v3 /bin/bash)
Docker with Open vSwitch
and GRE Tunnel 2/7
Starting Containers
67. # Virtual Machine 2 (KVM-2)
# Delete the container created in the last exercise
$ docker stop container2
$ docker rm container2
# Create two containers with the network mode set to none.
# Each container ID is stored in a shell variable.
$ C3=$(docker run -d --net=none -t -i --privileged --name=container3 --hostname=container3 intrig/tutorial:v3 /bin/bash)
$ C4=$(docker run -d --net=none -t -i --privileged --name=container4 --hostname=container4 intrig/tutorial:v3 /bin/bash)
Docker with Open vSwitch
and GRE Tunnel 3/7
Starting Containers
68. Docker with Open vSwitch
and GRE Tunnel 4/7
# Virtual Machine 1 (KVM-1)
# To find the PID of each created container, we use the script findPID.sh
$ ./findPID.sh $C1
The PID of the created container is, for example, 6485. Do the same for $C2.
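The contents of findPID.sh are not shown in these slides. A plausible one-line equivalent (an assumption, not necessarily the script's actual implementation) uses docker inspect:

```shell
# Print the host-side PID of the container's init process
# (this is what a findPID.sh script most likely wraps)
docker inspect --format '{{.State.Pid}}' "$C1"
```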
# Bind the containers to an Open vSwitch interface
$ sudo ./ovswork-1.sh br2 $C1 1.0.0.1/24 1.0.0.255 1.0.0.254 10
$ sudo ./ovswork-1.sh br2 $C2 1.0.0.2/24 1.0.0.255 1.0.0.254 20
# Virtual Machine 2 (KVM-2)
# Bind the containers to an Open vSwitch interface
$ sudo ./ovswork-1.sh br2 $C3 1.0.0.3/24 1.0.0.255 1.0.0.254 10
$ sudo ./ovswork-1.sh br2 $C4 1.0.0.4/24 1.0.0.255 1.0.0.254 20
Binding Containers to an Open vSwitch Interface
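The ovswork-1.sh script is likewise not reproduced in the slides. The usual technique (as in the pipework tool) is to create a veth pair, attach one end to the OVS bridge with a VLAN tag, and move the other end into the container's network namespace. A hedged sketch of what such a script might do; the argument order matches the calls above, but the variable names and eth1 device name are assumptions:

```shell
#!/bin/bash
# Sketch: attach a container to an OVS bridge on a given VLAN.
# Usage: ovswork.sh BRIDGE CONTAINER_ID IP/PREFIX BROADCAST GATEWAY VLAN
BRIDGE=$1; CID=$2; IP=$3; BCAST=$4; GW=$5; VLAN=$6
PID=$(docker inspect --format '{{.State.Pid}}' "$CID")

# Expose the container's network namespace to `ip netns`
mkdir -p /var/run/netns
ln -sf "/proc/$PID/ns/net" "/var/run/netns/$PID"

# Create a veth pair; put the host end on the OVS bridge, tagged with the VLAN
ip link add "veth-h-$PID" type veth peer name "veth-c-$PID"
ovs-vsctl add-port "$BRIDGE" "veth-h-$PID" tag="$VLAN"
ip link set "veth-h-$PID" up

# Move the container end into the namespace and configure it as eth1
ip link set "veth-c-$PID" netns "$PID"
ip netns exec "$PID" ip link set "veth-c-$PID" name eth1
ip netns exec "$PID" ip addr add "$IP" broadcast "$BCAST" dev eth1
ip netns exec "$PID" ip link set eth1 up
ip netns exec "$PID" ip route add default via "$GW"
```

The `tag=` value makes OVS treat the port as a VLAN access port, which is why the last argument (10 or 20) controls which containers can reach each other.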
69. Docker with Open vSwitch
and GRE Tunnel 5/7
Initiating Docker
# Using different terminals, start container1, container2, container3, and container4
From terminal 1:
$ docker start -a -i container1
From terminal 2:
$ docker start -a -i container2
From terminal 3:
$ docker start -a -i container3
From terminal 4:
$ docker start -a -i container4
70. Docker with Open vSwitch
and GRE Tunnel 6/7
Testing connectivity
# From container1 (Terminal 1)
Container1$ ping 1.0.0.3 -c 2
Container1$ ping 1.0.0.4 -c 2
# From container3 (Terminal 3)
Container3$ ping 1.0.0.1 -c 2
Container3$ ping 1.0.0.2 -c 2
Which pings succeed, and why?
root@container1:/# ping 1.0.0.3
PING 1.0.0.3 (1.0.0.3) 56(84) bytes of data.
64 bytes from 1.0.0.3: icmp_seq=1 ttl=64 time=2.52 ms
64 bytes from 1.0.0.3: icmp_seq=2 ttl=64 time=0.651 ms
71. Docker with Open vSwitch
and GRE Tunnel 7/7
Testing connectivity with iperf
# Measure the throughput using iperf
# On container3 (1.0.0.3), launch an iperf server listening on TCP port 5001
$ sudo iperf -s
# On container1 (1.0.0.1), launch an iperf client connecting to 1.0.0.3 on TCP port 5001
$ sudo iperf -c 1.0.0.3
What can you say about the bandwidth?
# Virtual Machine 1 (KVM-1)
$ sudo ovs-vsctl show
$ sudo ovs-ofctl show br2
$ sudo ovs-ofctl dump-flows br2
# Virtual Machine 2 (KVM-2)
$ sudo ovs-vsctl show
$ sudo ovs-ofctl show br2
$ sudo ovs-ofctl dump-flows br2