JDO 2019: Tips and Tricks from Docker Captain - Łukasz Lach (PROIDEA)
The document provides tips and tricks for using Docker including:
1) Installing Docker on Linux the easy way, with a choice of release channel and version.
2) Setting up a local Docker Hub mirror for caching and revalidating images.
3) Using docker inspect to find containers that exited with non-zero codes or to show the commands of running containers.
4) Organizing docker-compose files with extensions, environment variables, anchors and aliases for well structured services.
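Tip 4 can be sketched with a compose file that uses an extension field, an anchor, and aliases; the service and image names here are illustrative, not from the talk:

```yaml
# Hypothetical docker-compose.yml: x-defaults is an extension field,
# &defaults an anchor, and <<: *defaults merges it into each service.
version: "3.7"

x-defaults: &defaults
  restart: unless-stopped
  environment:
    TZ: "${TZ:-UTC}"          # taken from the host environment, with a default
  logging:
    driver: json-file
    options:
      max-size: "10m"

services:
  web:
    <<: *defaults             # merge the shared keys
    image: nginx:alpine
  worker:
    <<: *defaults
    image: my-worker:latest   # hypothetical image name
```

The shared keys live in one place, so changing a logging option or default environment variable touches a single block instead of every service.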
The document discusses deploying a Rails application to Amazon EC2. It explains that the goals are to launch an EC2 instance, connect to it, set up the environment, deploy the application, and profit. It then outlines the plan to launch an instance, connect to it, install necessary packages like Ruby, Rails, and Nginx, configure Nginx and Unicorn, deploy the application using Capistrano, and start the Unicorn process.
Ondřej Šika: Docker, Traefik and CI - Have all the branches you work on deployed... (Develcz)
The document describes setting up Docker, Traefik, and CI/CD pipelines. It includes a docker-compose.yml configuration file for Traefik that sets up port forwarding and SSL termination. It also includes a .gitlab-ci.yml file that defines a deploy job that builds a Docker image, pushes it to Docker Hub, and deploys it to a server using Traefik routing.
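A deploy job of the kind described might look like the sketch below; the registry variables are standard GitLab CI ones, but the host, Traefik labels, and routing rule are assumptions:

```yaml
# Hypothetical .gitlab-ci.yml deploy job: build, push, then run the
# image on a remote host with Traefik labels for per-branch routing.
deploy:
  stage: deploy
  script:
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
    - docker -H ssh://deploy@example.com run -d
        --label "traefik.enable=true"
        --label "traefik.http.routers.app.rule=Host(`$CI_COMMIT_REF_SLUG.example.com`)"
        $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
```

Because the router rule is derived from the branch slug, every branch gets its own hostname, which is the point of the talk's title.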
PuppetConf 2017: Use Puppet to Tame the Dockerfile Monster - Bryan Belanger, A... (Puppet)
You want to create an application? Great. Download a Docker image and install all your stuff. Sounds like a lot of work, huh? Wait, you also need to be able to patch your container too? That Dockerfile will become a Frankenfile! Well, guess what: Puppet has an answer for you. Using Docker, Puppet, and Jenkins we will show you how you can: 1. Put all your code in an easy-to-use project. 2. Give yourself a powerful toolkit for configuration. 3. Automate your builds. 4. Allow your project to automate security updates and patches.
Raphaël Pinson's talk on "Configuration surgery with Augeas" at PuppetCamp Geneva '12. Video at http://youtu.be/H0MJaIv4bgk
Learn more: www.puppetlabs.com
This document provides an overview of the Perl programming language. It discusses scalars, arrays, hashes, references, subroutines, packages, warnings, strictness, password encryption, file handling, command line arguments, and process management in Perl. Specific topics covered include using @ and $ prefixes for arrays and scalars, accessing elements of arrays and hashes, defining and calling subroutines, using crypt() for password hashing, opening/writing/reading files, and using fork() to create child processes. Examples of code are provided to demonstrate many of these Perl concepts.
This document summarizes the steps to build and run a Docker container for Nginx. It describes creating a Dockerfile that installs Nginx on Ubuntu, builds the image, runs a container from the image mounting a local directory, and commits changes to create a new image version. Key steps include installing Nginx, exposing ports 80 and 443, committing a container to create a new image with added files, and using Docker commands like build, run, commit, diff and inspect.
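The Dockerfile described could be as small as the sketch below; the Ubuntu tag is an assumption, since the summary does not name a version:

```dockerfile
# Minimal sketch: Nginx on Ubuntu, ports 80 and 443 exposed.
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y nginx && rm -rf /var/lib/apt/lists/*
EXPOSE 80 443
CMD ["nginx", "-g", "daemon off;"]
```

Typical usage along the lines of the summary: `docker build -t mynginx .`, then `docker run -d -p 8080:80 -v "$PWD/site:/var/www/html" mynginx`, and after changing files inside the container, `docker diff <container>` followed by `docker commit <container> mynginx:v2` to capture a new image version.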
The document describes UBIC, a toolkit for writing daemons, init scripts, and services in Perl. It provides common classes that handle tasks like starting, stopping, and monitoring services that simplify writing init scripts. Services can be organized hierarchically and non-root users can run services. The toolkit also provides HTTP status endpoints and watchdog functionality to restart services that fail. UBIC sees widespread use at Yandex across many packages, clusters, and hosts.
Making environment for_infrastructure_as_code - Soshi Nemoto
The document provides instructions for setting up an environment for infrastructure as code using tools like Vagrant, Ansible, and Fabric. It details steps to install the necessary tools, create a Vagrant machine, edit configuration files to configure the Vagrant IP address and SSH keys, and then provides a test command to validate the Fabric deployment is working properly.
This document discusses Composer, an open source tool for dependency management in PHP. It describes what Composer is, how to install it, how to define dependencies in a composer.json file and composer.lock file, how Composer generates autoload files and installs vendor libraries, and some common Composer commands. It also provides information on joining the community and lists sources for more documentation on Composer.
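A dependency definition of the kind described is just a small JSON file; the package and version constraint below are illustrative:

```json
{
  "require": {
    "monolog/monolog": "^2.0"
  }
}
```

Running `composer install` resolves this against Packagist, writes the exact versions to composer.lock, installs the libraries under vendor/, and generates vendor/autoload.php, which the application requires once to autoload everything.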
A journey through the years of UNIX and Linux service management - Lubomir Rintel
This document provides a history of Unix and Linux service management from the early days of /etc/init through the development of systemd. It describes the issues with early init systems like limitations in flexibility, lack of monitoring, and inconsistencies. It then discusses how various operating systems attempted to address these problems through tools like SMF, launchd, upstart, and others. Finally, it provides an overview of how systemd comprehensively solves the issues through features like unit files, control groups, journald logging, and integration with the Linux kernel.
This document provides instructions for installing various development tools on Mac OSX, including Xcode, command line tools, Homebrew, Ruby, Python, VirtualBox, Vagrant, Chef, and Ansible. It describes downloading and installing each tool, and in some cases providing additional configuration steps or notes on cleaning up existing installations. The overall purpose is to set up a standard development environment with common utilities.
Linux Containers (LXC) @Open Source Camp Moldova 2018
LXC (Linux Containers) is an operating-system-level virtualization method for running multiple isolated Linux systems (containers) on a control host using a single Linux kernel. https://en.wikipedia.org/wiki/LXC
A massive attack was revealed that exploited the Shellshock vulnerability in QNAP NAS devices. The attackers deployed a payload that patched the vulnerability, armed the devices for DDoS attacks, and installed a scanner to search for more vulnerable devices. Over 500 compromised devices were detected. The payload installed a backdoor that could control the armed devices for DDoS attacks through IRC commands.
Vagrant allows users to configure and manage virtual machine environments through files and commands. It uses configuration files to define VMs and provisioning tools to automate software installation. Key features include:
- Managing virtual machines from a Vagrantfile configuration
- Provisioning VMs through tools like Chef and Ansible
- Accessing VMs through SSH using the vagrant command
- Installing plugins to add functionality like AWS integration
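The bullets above fit in a few lines of Vagrantfile; the box name, IP address, and playbook path are assumptions for illustration:

```ruby
# Hypothetical Vagrantfile: define the VM and wire in Ansible provisioning.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/bionic64"
  config.vm.network "private_network", ip: "192.168.56.10"
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "playbook.yml"   # hypothetical playbook file
  end
end
```

From there, `vagrant up` creates and provisions the VM, `vagrant ssh` logs in, and something like `vagrant plugin install vagrant-aws` adds the AWS integration mentioned above.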
This document discusses setting up a network bridge without Docker. It provides a Vagrantfile to configure a virtual machine environment with Ubuntu 18.04, along with tools like Go and Docker installed. Instructions are given to create a bridge between two network namespaces called RED and BLUE using IP addresses in the 11.11.11.0/24 range. Tests show that hosts can ping each other within this network but not across the real interface and IP range of the host machine. Additional routing and IP configuration is needed to allow outside communication.
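The RED/BLUE bridge described can be reproduced with `ip` commands roughly like these (run as root; the namespace names and 11.11.11.0/24 range follow the summary, the veth names are assumptions):

```shell
# Two namespaces joined by veth pairs through a Linux bridge.
ip netns add RED
ip netns add BLUE
ip link add br0 type bridge && ip link set br0 up
ip link add veth-red  type veth peer name veth-red-br
ip link add veth-blue type veth peer name veth-blue-br
ip link set veth-red  netns RED
ip link set veth-blue netns BLUE
ip link set veth-red-br  master br0 && ip link set veth-red-br  up
ip link set veth-blue-br master br0 && ip link set veth-blue-br up
ip netns exec RED  ip addr add 11.11.11.2/24 dev veth-red
ip netns exec RED  ip link set veth-red up
ip netns exec BLUE ip addr add 11.11.11.3/24 dev veth-blue
ip netns exec BLUE ip link set veth-blue up
ip netns exec RED ping -c 1 11.11.11.3   # works inside the bridge
```

As the summary notes, this only connects the two namespaces to each other; reaching the host's real interface additionally needs an address on br0, a default route in each namespace, and NAT or forwarding rules on the host.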
Ansible is an IT automation tool that can provision and configure servers. It works by defining playbooks that contain tasks to be run on target servers. Playbooks use YAML format and modules to automate configuration changes. Vagrant and Ansible can be integrated so that Ansible playbooks are run as part of the Vagrant provisioning process to automate server setup. The document provides an introduction and examples of using Ansible playbooks with Vagrant virtual machines to install and configure the Apache HTTP server.
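A playbook for the Apache example might look like this minimal sketch (task names and the apt-based install are assumptions consistent with an Ubuntu guest):

```yaml
# Hypothetical playbook.yml: install and start Apache on all hosts.
- hosts: all
  become: true
  tasks:
    - name: Install Apache
      apt:
        name: apache2
        state: present
        update_cache: true
    - name: Ensure Apache is running and enabled
      service:
        name: apache2
        state: started
        enabled: true
```

When referenced from a Vagrantfile's `config.vm.provision "ansible"` block, this runs automatically as part of `vagrant up`, which is the integration the summary describes.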
Make container without_docker_6-overlay-network_1 - Sam Kim
How does communication between containers work in a distributed environment? In parts 3 and 4 we built a virtual network inside a single host. In part 6, building on that, we make it possible for hosts in a distributed environment to communicate over a virtual network. This approach covers the vxlan-based overlay network setup that CNIs such as Kubernetes flannel actually use.
agri inventory - nouka data collector / yaoya data convertor - Toshiaki Baba
This document provides instructions for setting up and using an agri inventory system called nouka and yaoya. Nouka collects data from servers using commands and sends it to the naya data store, which uses fluentd and MongoDB. It explains the components, data formats, and provides steps to get the required software and set up the system.
The document provides instructions for demonstrating message queues using Apache Kafka and RabbitMQ. It explains how to start the required servers, create topics and producers, and process streaming data using Kafka Streams. It also covers starting RabbitMQ, sending and receiving messages, exchange types for routing, and different client types.
PuppetCamp SEA 1 - Version Control with Puppet - Walter Heck
Choon Ming Goh, System Administrator at OnApp Malaysia, gave a presentation on how OnApp implements version control. Since they have quite a few repositories, this is all puppetised and that is quite a nice way of doing version control.
This document provides an overview of shell scripting concepts in Linux. It discusses shell types including login shells and interactive shells. It covers how shells are invoked and how variables, commands, and processes work within the shell environment. Key topics include variable setting and substitution, command grouping and substitution, shell scripts, functions, parameters, and conditionally running commands based on return values.
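Several of the concepts listed above fit in a few lines of portable shell; the variable and function names are illustrative:

```shell
#!/bin/sh
# Sketch of variable substitution, a function with a parameter,
# command substitution, and conditionally running on return values.
greeting="hello"
name=${USER:-world}            # substitution with a default value

upcase() {                     # function taking one positional parameter
    printf '%s\n' "$1" | tr '[:lower:]' '[:upper:]'
}

msg=$(upcase "$greeting")      # command substitution captures the output
echo "$msg"

true  && echo "runs because the previous command returned 0"
false || echo "runs because the previous command returned non-zero"
```

The `&&` and `||` operators are the simplest form of running commands conditionally on return values; `if` statements test the same exit status explicitly.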
The document discusses Nouka, an open source inventory management tool for Linux. Nouka consists of three parts - Nouka data collector, Naya data store, and Yaoya data converter. Nouka data collector runs commands periodically on Linux machines and sends the results to Naya data store. Naya uses Fluentd and MongoDB to store the collected data. Yaoya then converts and outputs the data in various formats like JSON, CSV for analysis. Overall, Nouka provides an automatic and periodic way to collect and centralize inventory data from Linux machines.
Exploring Async PHP (SF Live Berlin 2019) - dantleech
(note slides are missing animated gifs and video)
As PHP programmers we are used to waiting for network I/O; in general we may not even consider any other option. But why wait? Why not jump on board the Async bullet-train, experience life in the fast lane, and give Go and NodeJS a run for their money? This talk will aim to make the audience aware of the benefits, opportunities, and pitfalls of asynchronous programming in PHP, and guide them through the native functionality, frameworks, and PHP extensions through which it can be facilitated.
1. The document discusses debugging FreeBSD kernels through various tools and techniques such as kgdb(1), ddb(4), ktrace(1), and kdump(1).
2. Common issues like kernel crashes and hangs can be debugged using tools that examine CPU registers, step through code, and analyze kernel traces.
3. Effective debugging requires understanding kernel data structures and configuration options for enabling debugging features.
Puppet Camp Phoenix 2015: Managing Files via Puppet: Let Me Count The Ways (B... - Puppet
The document discusses various ways to manage files and lines within files using Puppet, including using the file, concat, augeas, file_line, inifile, datacat, and template resources and functions. It provides examples of managing entire files, specific lines, using static content or templates, and leveraging other modules to manage files and configurations.
Puppet can be used effectively and at scale without running as root. In many organizations, particularly large ones, different teams are responsible for different pieces of the infrastructure. In my case, I am on a team responsible for installation, configuration, upkeep, and monitoring of an application, but we are denied root access. Despite this, we have a rich Puppet infrastructure that saves us time and reduces configuration drift. I will present our model for success in this kind of limited environment, including recipes for using Puppet as non-root and some encouraging words and ideas for those who want to implement Puppet, but the rest of their organization isn't ready yet.
Spencer Krum
Systems Admin, UTI Worldwide
Spencer is a Linux and application administrator with UTI Worldwide, a shipping and logistics firm. He lives and works in Portland. He has been using Linux and Puppet for years. Spencer is co-authoring (with William Van Hevelingen and Ben Kero) the second edition of Pro Puppet by James Turnbull and Jeff McCune, which should be available from Apress in alpha/beta E-Book in time for Puppet Conf '13. He enjoys hacking, tennis, StarCraft, and Hawaiian food.
The document provides instructions on Docker practice including prerequisites, basic Docker commands, running containers from images, committing container changes to new images, logging into Docker Hub and pushing images.
It begins with prerequisites of having Ubuntu 18.04 or higher and installing the latest Docker engine and Docker compose. It then explains that Docker runs processes in isolated containers and uses layered images.
The document demonstrates basic commands like docker version, docker images, docker pull, docker search, docker run, docker ps, docker stop, docker rm and docker rmi. It also shows how to commit container changes to a new image with docker commit, tag and push images to Docker Hub. Other topics covered include docker exec, docker save/load, docker
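The commit-tag-push workflow above is a short command sequence; the container, image, and account names below are illustrative:

```shell
# Run a container, change it, capture the result as a new image,
# then publish it to Docker Hub.
docker run -it --name work ubuntu:18.04 bash   # make changes, then exit
docker commit work myaccount/myimage:v1        # container -> new image
docker tag myaccount/myimage:v1 myaccount/myimage:latest
docker login                                   # Docker Hub credentials
docker push myaccount/myimage:v1
docker save -o myimage.tar myaccount/myimage:v1   # save/load moves images
docker load -i myimage.tar                        # without any registry
```

`docker save`/`docker load` is the registry-free counterpart to push/pull, which is why the two are usually taught together.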
Preparation study for Docker Event
Mulodo Open Study Group (MOSG) @Ho chi minh, Vietnam
http://www.meetup.com/Open-Study-Group-Saigon/events/229781420/
This document provides an agenda and overview for a hands-on lab on using DPDK in containers. It introduces Linux containers and how they use fewer system resources than VMs. It discusses how containers still use the kernel network stack, which is not ideal for SDN/NFV usages, and how DPDK can be used in containers to address this. The hands-on lab section guides users through building DPDK and Open vSwitch, configuring them to work with containers, and running packet generation and forwarding using testpmd and pktgen Docker containers connected via Open vSwitch.
How to make debian package from scratch (linux)Thierry Gayet
- The document discusses two methods for creating Debian packages: using dpkg-deb or dpkg-buildpackage.
- It provides step-by-step instructions for creating the package directory structure, metadata files, building and installing the package, and verifying installation.
- An alternative method using dh_make is also presented, which simplifies the process by automatically generating basic packaging files and directories.
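The dpkg-deb route from the steps above can be sketched as follows; the package name, version, and payload script are illustrative:

```shell
# Build a trivial .deb with dpkg-deb: directory tree, control file,
# payload, build, install, verify.
mkdir -p hello_1.0-1/DEBIAN hello_1.0-1/usr/local/bin
cat > hello_1.0-1/DEBIAN/control <<'EOF'
Package: hello
Version: 1.0-1
Section: utils
Priority: optional
Architecture: all
Maintainer: Example Maintainer <maint@example.com>
Description: Trivial example package
EOF
install -m 0755 hello.sh hello_1.0-1/usr/local/bin/hello  # hypothetical script
dpkg-deb --build hello_1.0-1          # produces hello_1.0-1.deb
sudo dpkg -i hello_1.0-1.deb          # install the package
dpkg -l hello                         # verify installation
```

The dh_make alternative mentioned above generates the debian/ skeleton for you, after which `dpkg-buildpackage` drives the build instead of calling dpkg-deb directly.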
DCEU 18: Tips and Tricks of the Docker Captains - Docker, Inc.
Brandon Mitchell - Solutions Architect, BoxBoat
Docker Captain Brandon Mitchell will help you accelerate your adoption of Docker containers by delivering tips and tricks on getting the most out of Docker. Topics include managing disk usage, preventing subnet collisions, debugging container networking, understanding image layers, getting more value out of the default volume driver, and solving the UID/GID permission issues with volumes in a way that allows images to be portable from any developer laptop and to production.
Running Docker in Development & Production (DevSum 2015) - Ben Hall
This document provides an overview of Docker containers and how to use Docker for development and production environments. It discusses Docker concepts like images, containers, and Dockerfiles. It also demonstrates how to build images, run containers, link containers, manage ports, and use Docker Compose. The document shows how Docker can be used to develop applications using technologies like ASP.NET, Node.js, and Go. It also covers testing, deploying to production, and optimizing containers for production.
This document discusses Docker, including:
1. Docker is a platform for running and managing Linux containers that provides operating-system-level virtualization without the overhead of traditional virtual machines.
2. Key Docker concepts include images (immutable templates for containers), containers (running instances of images that have mutable state), and layers (the building blocks of images).
3. Publishing Docker images to registries allows them to be shared and reused across different systems. Volumes and networking allow containers to share filesystems and communicate.
Running Docker in Development & Production (#ndcoslo 2015) - Ben Hall
The document discusses running Docker in development and production. It covers:
- Using Docker containers to run individual services like Elasticsearch or web applications
- Creating Dockerfiles to build custom images
- Linking containers together and using environment variables for service discovery
- Scaling with Docker Compose, load balancing with Nginx, and service discovery with Consul
- Clustering containers together using Docker Swarm for high availability
Automate drupal deployments with linux containers, docker and vagrant - Ricardo Amaro
This document discusses strategies for automating Drupal deployments using Linux containers, Vagrant, and Docker. It begins with an overview of virtual machines and their disadvantages compared to containers. It then covers using Linux containers (LXC), Vagrant, and Docker to build and deploy containerized Drupal environments that can be easily reproduced and deployed across different systems. The document provides examples of building Drupal containers using LXC, Vagrant, and Docker that take advantage of their portability and reproducibility.
Bryan McLellan discusses moving from VMware virtualization to KVM/libvirt virtualization. He found that using tools like ubuntu-vm-builder, Puppet, and libvirt provided a more homogeneous, automated, and well-documented virtual infrastructure compared to his previous manual VMware configuration. While early versions of KVM/libvirt lacked some enterprise features, the technologies continue to improve and provide capabilities like live migration and hotplugging.
This document summarizes a Docker workshop that covers:
1. Running Docker containers, including starting containers interactively or detached, checking statuses, port forwarding, linking containers, and mounting volumes.
2. Building Docker images, including committing existing containers or building from a Dockerfile, and using Docker build context.
3. The official Docker Hub for finding and using common Docker images like Redis, MySQL, and Jenkins. It also covers tagging and pushing images to private Docker registries.
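The tag-and-push step the workshop covers looks like this (a sketch: the registry hostname and repository namespace are made up, and the commands only execute if docker and a local redis:7 image are present):

```shell
# Re-tag a public image under a private registry's namespace and push it.
if command -v docker >/dev/null 2>&1 \
   && docker image inspect redis:7 >/dev/null 2>&1; then
  # The tag encodes the target registry host:port in the image name:
  docker tag redis:7 registry.example.com:5000/myteam/redis:7
  # Pushing requires connectivity and credentials for that registry:
  docker push registry.example.com:5000/myteam/redis:7 \
    || echo "push failed (no access to registry.example.com)"
else
  echo "docker or redis:7 image unavailable; commands shown for illustration"
fi
```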
Code testing and Continuous Integration are just the first steps in a source-code-to-production process. Combined with infrastructure-as-code tools such as Puppet, the whole process can be automated, and tested!
Vagrant is a well-known tool for creating development environments in a simple and consistent way. Since we adopted it in our organization we have experienced several benefits: lower project setup times, better shared knowledge among team members, fewer wtf moments ;-)
In this session I'd like to share our experience, including but not limited to:
- advanced vagrantfile configuration
- vm configuration tips for dev environment: performance, debug, tuning
- our wtf moments
- puphpet/phansible: hot or not?
- tips for sharing a box
These are the notes of a presentation I gave to our IT dept., people who know a lot about VMs! They include a description of the differences between a VM and a container, why someone would want to use Docker, how it works (at 30,000 feet), some hints about the hub and orchestration, some Dockerfile examples (Jenkins slave, Jenkins master, Sinopia server, etc.) and finally some new features Docker is going to propose in the future, plus how I intend to mix configuration tools, such as Ansible, with Docker.
1. This document provides instructions for creating a headless Ubuntu/XFCE container using Docker and Dockerfish.
2. It shows how to install Dockerfish, pull the Ubuntu image, create and run a container, install XFCE and VNC, and export the container configuration to a new reusable image.
3. Key steps include cloning the Dockerfish repo, pulling the Ubuntu image, running the container with XFCE and VNC installed, checking the container IP, and using Dockerfish to commit the container configuration to a new image called devel/headless-ubuntu-vnc.
The document discusses UBIC, a toolkit for writing daemons, init scripts, and services in Perl. It provides several key classes for common service tasks like starting, stopping, and getting the status of services. These classes standardize service management and make services more robust. UBIC sees wide use at Yandex across many packages, clusters, and hosts to manage services.
The document discusses building a lightweight Docker container for Perl by starting with a minimal base image like BusyBox, copying just the Perl installation and necessary shared libraries into the container, and setting Perl as the default command to avoid including unnecessary dependencies and tools from a full Linux distribution. It provides examples of Dockerfiles to build optimized Perl containers from Gentoo and by directly importing a tarball for minimal size and easy distribution.
This document discusses Docker and provides an introduction and overview. It introduces Docker concepts like Dockerfiles, commands, linking containers, volumes, port mapping and registries. It also discusses tools that can be used with Docker like Fig, Baseimage, Boot2Docker and Flynn. The document provides examples of Dockerfiles, commands and how to build, run, link and manage containers.
Docker allows applications and their dependencies to be packaged into standardized units called containers that can run on any computing environment regardless of the underlying infrastructure. Containers leverage and share the host operating system's kernel to run as isolated processes, which improves performance and reduces overhead compared to virtual machines. Dockerfiles define the build instructions for container images, while Docker Compose allows defining and running multi-container applications with a single configuration file.
Null Bangalore | Pentesters Approach to AWS IAMDivyanshu
# Abstract:
- Learn real-world methods for auditing AWS IAM (Identity and Access Management) as a pentester. We begin with a brief discussion of IAM, then cover typical misconfigurations and their potential exploits to reinforce an understanding of IAM security best practices.
- Gain actionable insights into AWS IAM policies and roles, using hands on approach.
# Prerequisites:
- Basic understanding of AWS services and architecture
- Familiarity with cloud security concepts
- Experience using the AWS Management Console or AWS CLI.
- For hands on lab create account on [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
# Scenario Covered:
- Basics of IAM in AWS
- Implementing IAM Policies with Least Privilege to Manage S3 Bucket
- Objective: Create an S3 bucket with least privilege IAM policy and validate access.
- Steps:
- Create S3 bucket.
- Attach least privilege policy to IAM user.
- Validate access.
- Exploiting IAM PassRole Misconfiguration
 - Allows a user to pass a specific IAM role to an AWS service (EC2), typically used for service access delegation. The exploit then abuses the PassRole misconfiguration to gain unauthorized access to sensitive resources.
- Objective: Demonstrate how a PassRole misconfiguration can grant unauthorized access.
- Steps:
- Allow user to pass IAM role to EC2.
- Exploit misconfiguration for unauthorized access.
- Access sensitive resources.
- Exploiting IAM AssumeRole Misconfiguration with Overly Permissive Role
- An overly permissive IAM role configuration can lead to privilege escalation by creating a role with administrative privileges and allow a user to assume this role.
- Objective: Show how overly permissive IAM roles can lead to privilege escalation.
- Steps:
- Create role with administrative privileges.
- Allow user to assume the role.
- Perform administrative actions.
- Differentiation between PassRole vs AssumeRole
Try at [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
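The least-privilege S3 step above can be sketched with the AWS CLI. This is a hedged example: the bucket name, user name and exact action list are illustrative, and the `aws` calls only run if the CLI is installed and configured.

```shell
# Write a least-privilege policy scoped to one bucket and its objects.
cat > /tmp/s3-least-priv.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetObject", "s3:PutObject"],
      "Resource": [
        "arn:aws:s3:::demo-least-priv-bucket",
        "arn:aws:s3:::demo-least-priv-bucket/*"
      ]
    }
  ]
}
EOF
if command -v aws >/dev/null 2>&1; then
  # Create the bucket, then attach the inline policy to the IAM user:
  aws s3api create-bucket --bucket demo-least-priv-bucket || true
  aws iam put-user-policy --user-name demo-user \
      --policy-name s3-least-priv \
      --policy-document file:///tmp/s3-least-priv.json \
    || echo "put-user-policy failed (check credentials)"
else
  echo "aws CLI not installed; policy written to /tmp/s3-least-priv.json"
fi
```

Validation (the last step) is then a matter of confirming the user can list/read/write that bucket and nothing else.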
Use PyCharm for remote debugging of WSL on a Windows machineshadow0702a
This document serves as a comprehensive step-by-step guide on how to effectively use PyCharm for remote debugging of the Windows Subsystem for Linux (WSL) on a local Windows machine. It meticulously outlines several critical steps in the process, starting with the crucial task of enabling permissions, followed by the installation and configuration of WSL.
The guide then proceeds to explain how to set up the SSH service within the WSL environment, an integral part of the process. Alongside this, it also provides detailed instructions on how to modify the inbound rules of the Windows firewall to facilitate the process, ensuring that there are no connectivity issues that could potentially hinder the debugging process.
The document further emphasizes the importance of checking the connection between the Windows and WSL environments, providing instructions on how to ensure that the connection is working and ready for remote debugging.
It also offers an in-depth guide on how to configure the WSL interpreter and files within the PyCharm environment. This is essential for ensuring that the debugging process is set up correctly and that the program can be run effectively within the WSL terminal.
Additionally, the document provides guidance on how to set up breakpoints for debugging, a fundamental aspect of the debugging process which allows the developer to stop the execution of their code at certain points and inspect their program at those stages.
Finally, the document concludes by providing a link to a reference blog. This blog offers additional information and guidance on configuring the remote Python interpreter in PyCharm, providing the reader with a well-rounded understanding of the process.
Software Engineering and Project Management - Software Testing + Agile Method...Prakhyath Rai
Software Testing: A Strategic Approach to Software Testing, Strategic Issues, Test Strategies for Conventional Software, Test Strategies for Object -Oriented Software, Validation Testing, System Testing, The Art of Debugging.
Agile Methodology: Before Agile – Waterfall, Agile Development.
Home security is of paramount importance in today's technology-dependent world, and using technology to make homes safer and controllable from anywhere matters for the occupants' safety. In this paper we present a low-cost, AI-based home security system. The system has a user-friendly interface, allowing users to start model training and face detection with simple keyboard commands. Our goal is an innovative home security system based on facial recognition technology. Unlike traditional systems, this system trains on and saves images of friends and family members, scans that folder to recognize familiar faces, and provides real-time monitoring. If an unfamiliar face is detected, it promptly sends an email alert, ensuring a proactive response to potential security threats.
Build the Next Generation of Apps with the Einstein 1 Platform.
Rejoignez Philippe Ozil pour une session de workshops qui vous guidera à travers les détails de la plateforme Einstein 1, l'importance des données pour la création d'applications d'intelligence artificielle et les différents outils et technologies que Salesforce propose pour vous apporter tous les bénéfices de l'IA.
Blood finder application project report (1).pdfKamal Acharya
Blood Finder is an emergency-time app where a user can search for blood banks as
well as registered blood donors around Mumbai. The application also gives its
users the opportunity to become registered donors: the user enrols via a donor
request from the application itself, and the admin can then register the user
as a donor after some formalities with the organization.
A special feature of this application is that the user does not have to
register or sign in to search for blood banks and blood donors; installing the
application on the mobile is enough.
The purpose of the application is to save the user's time when searching for
blood of the needed blood group during an emergency.
It is an Android application developed in Java and XML with SQLite database
connectivity, and it provides most of the basic functionality required of an
emergency-time application. All details of blood banks and blood donors are
stored in the SQLite database.
The application gives the user all the information about blood banks and blood
donors, such as name, number, address and blood group, rather than searching
for it on different websites and wasting precious time. The application is
effective and user friendly.
Open Channel Flow: fluid flow with a free surfaceIndrajeet sahu
Open Channel Flow: This topic focuses on fluid flow with a free surface, such as in rivers, canals, and drainage ditches. Key concepts include the classification of flow types (steady vs. unsteady, uniform vs. non-uniform), hydraulic radius, flow resistance, Manning's equation, critical flow conditions, and energy and momentum principles. It also covers flow measurement techniques, gradually varied flow analysis, and the design of open channels. Understanding these principles is vital for effective water resource management and engineering applications.
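For reference, Manning's equation mentioned above can be written as follows (SI units, with the usual open-channel symbols):

```latex
% Manning's equation for mean velocity in uniform open-channel flow (SI):
V = \frac{1}{n}\, R^{2/3}\, S^{1/2}, \qquad R = \frac{A}{P}
% V: mean velocity (m/s); n: Manning roughness coefficient;
% R: hydraulic radius (m), flow area A over wetted perimeter P;
% S: slope of the energy grade line (dimensionless)
```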
2. What’s a Container
[Diagram: container-creating software (Docker/Warden, LXC, MINCS) runs processes on top of the Linux kernel; kernel features (namespaces, cgroups, netfilter/netlink) isolate each container's processes, image layers and network from the root namespace]
A container is just a process running on the same host as the containerising software, such as Docker. What makes that process a "container" is that it is isolated from the rest of the host by features the Linux kernel provides. Docker is only one piece of software that creates and controls such isolated processes (containers).
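The "just an isolated process" point is easy to verify: every Linux process, containerised or not, already has namespace memberships listed under /proc (no root needed):

```shell
# List the namespaces the current shell belongs to; container runtimes
# simply start processes whose entries here differ from the host's.
ls -l /proc/$$/ns
# Typical entries: mnt, pid, net, uts, ipc, user, ...
```

Running the same listing inside a container (e.g. via docker exec) shows different namespace inode numbers for the isolated process.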
5. Docker tries to keep each container in the state declared by its configuration
[Diagram: container lifecycle: stop → initialising → running]
The configuration says "this container should start with these settings". If Docker fails to realise that configuration, for example because it cannot bind the requested port, it goes back and reruns the initialisation process.
7. Docker fails to start because of a single container
[Diagram: five containers publishing ports 7000, 7000, 7001, 8080 and 80. One container's binding fails because port 7000 conflicts; another container's initialisation fails because of a network error. With --restart set, both cycle between stop and initialising infinitely.]
While containers loop like this, even "$ docker ps" may stop responding: "I can't docker ps" makes it look as if the Docker control plane itself is dead.
8. If docker doesn’t respond
1. check whether dockerd is restarting repeatedly
2. if yes, remove the existing resources
(/var/lib/docker/volumes, networks...)
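A minimal sketch of the two steps (paths and the unit name are the common defaults; adjust for your distribution):

```shell
# Step 1: is dockerd up, or stuck in a restart loop?
if pgrep -x dockerd >/dev/null 2>&1; then
  echo "dockerd is running"
else
  echo "dockerd is not running; a restart loop shows up as repeated"
  echo "'Starting up' lines in: journalctl -u docker --since '10 min ago'"
  # Step 2 (last resort, destroys state): these are the directories to
  # move aside before restarting the daemon.
  ls -d /var/lib/docker/volumes /var/lib/docker/network 2>/dev/null \
    || echo "(no docker state directories found on this host)"
fi
```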
9. What’s MINCS
❖ Minimum Container Shellscripts
❖ Consists entirely of pure shell scripts
❖ Total size: about 3,250 lines
Being so small, it is easier to understand the basic concepts of how a container is created and managed than with other container management tools.
10. How to install MINCS
$ git clone https://github.com/mhiramat/mincs.git
It was supposed to finish with this one line…..
12. The unshare command needs to be replaced with another implementation
~:$ git clone https://github.com/mirror/busybox.git
~:$ cd busybox
~/busybox:$ make config
(make config requires the user to fill in many configuration items…)
~/busybox:$ make install
~/busybox:$ mv _install/bin /bin/busybox
13. Change the minc-exec script as follows
diff --git a/libexec/minc-exec b/libexec/minc-exec
index 834b4e0..a5a1b8c 100755
--- a/libexec/minc-exec
+++ b/libexec/minc-exec
@@ -174,4 +174,4 @@ cd /
 UNSHARE_OPT=
 # Enter new namespace and exec command
 [ "$MINC_NOPRIV" ] && UNSHARE_OPT=--map-root-user
-$IP_NETNS unshare $UNSHARE_OPT -iumpf $LIBEXEC/`basename $0` "$@"
+$IP_NETNS busybox unshare $UNSHARE_OPT -iumpf $LIBEXEC/`basename $0` "$@"
14. 1. Try to create a container
$ sudo ./minc bash
vagrant@vagrant-ubuntu-trusty:~/mincs$ sudo ./minc bash # <- enter the container
mount: warning: /tmp/minc9215-334yCm/root/proc/sys seems to be mounted read-only.
mount: warning: /tmp/minc9215-334yCm/root/proc/sysrq-trigger seems to be mounted read-only.
mount: warning: /tmp/minc9215-334yCm/root/proc/irq seems to be mounted read-only.
mount: warning: /tmp/minc9215-334yCm/root/proc/bus seems to be mounted read-only.
root@vagrant-ubuntu-trusty:/# echo test >> test
root@vagrant-ubuntu-trusty:/# cat /test
test # <- the /test file exists inside the container
root@vagrant-ubuntu-trusty:/# exit
exit # <- leave the container
vagrant@vagrant-ubuntu-trusty:~/mincs$ cat /test
cat: /test: No such file or directory # <- no /test file on the host: the directory tree is separated
15. 2. Try to use image management
vagrant@vagrant-ubuntu-trusty:~$ sudo mincs/marten import ubuntu.tar.gz
mincs/marten: 1: mincs/marten: jq: not found # <- the jq package is needed
vagrant@vagrant-ubuntu-trusty:~$ sudo apt-get install jq
vagrant@vagrant-ubuntu-trusty:~$ sudo mincs/marten import ubuntu.tar.gz
Importing image: ubuntu
jq: error: Cannot index number with string
parse error: Invalid numeric literal # <- a bug: it can't import images that contain multiple layers
# https://github.com/mhiramat/mincs/issues/8
vagrant@vagrant-ubuntu-trusty:~$ sudo mincs/marten import ubuntu_latest.tar.gz
Importing image: ubuntu
9d2e5c12a9428108649812c24645eba52c030507a74c891984b3fb7f218d7690
………….
9177e32309d14441f30648db6ba1641800c79d959d63dddc0ab7da673cd6acd9
9d2e5c12a9428108649812c24645eba52c030507a74c891984b3fb7f218d7690
vagrant@vagrant-ubuntu-trusty:~$ sudo mincs/marten images # <- it works
16. 3. Try to create a container from the images
vagrant@vagrant-ubuntu-trusty:~$ sudo mincs/marten images
ID SIZE NAME
06bd4c05b6dc 20K (noname)
72a988653a4a 84K (noname)
891a3a3af630 138M (noname)
9177e32309d1 16K (noname)
9d2e5c12a942 16K ubuntu
vagrant@vagrant-ubuntu-trusty:~$ sudo mincs/minc -r ubuntu bash
mount: special device overlayfs does not exist # <- needs a fix; overlayfs only exists since kernel 3.18
To reuse this, run: mincs/minc -t 3c94cdd1629d
vagrant@vagrant-ubuntu-trusty:~/mincs$ uname -r
3.13.0-24-generic # <- too old for overlayfs
vagrant@vagrant-ubuntu-trusty:~$ sudo apt-get install linux-generic-lts-vivid linux-headers-generic-lts-vivid
vagrant@vagrant-ubuntu-trusty:~$ reboot
17. 3. Try to create a container from the images
root@vagrant-ubuntu-trusty:/home/vagrant/mincs# ./minc -r ubuntu
mount: wrong fs type, bad option, bad superblock on overlayfs,
missing codepage or helper program, or other error
(for several filesystems (e.g. nfs, cifs) you might
need a /sbin/mount.<type> helper program)
In some cases useful info is found in syslog - try
dmesg | tail or so
root@vagrant-ubuntu-trusty:/home/vagrant/mincs# sudo dmesg | tail -f
[ 1383.505546] overlayfs: failed to resolve
'/var/lib/mincs/images/9d2e5c12a9428108649812c24645eba52c030507a74c891984b3fb7f218d7690/root:/var
/lib/mincs/images/9177e32309d14441f30648db6ba1641800c79d959d63dddc0ab7da673cd6acd9/root:/var/lib/
mincs/images/06bd4c05b6dcfa6e669d02f4150b7842166a97ce536fbb0a98f66d2c4566c37e/root:/var/lib/mincs
/images/72a988653a4a1802b617429efccfb972f0693fa6665fed9d27d912cc23590670/root:/var/lib/mincs/imag
es/891a3a3af630e0853915722c47dc1a7002d2ea0218273456a12014fca609fc7d/root': -2
[ 1383.508533] overlayfs: missing upperdir or lowerdir or workdir
With overlayfs we cannot stack multiple base images on kernels older than 4.0;
since kernel 4.0, multiple images can be passed as lowerdir.
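On a kernel of 4.0 or later, the multi-layer case works by passing colon-separated lowerdirs, which is exactly what MINCS needs for stacked images. A sketch with throwaway directories; the mount itself needs root on a >= 4.0 kernel, so it is only printed here:

```shell
lower1=$(mktemp -d); lower2=$(mktemp -d)   # two image layers
upper=$(mktemp -d);  work=$(mktemp -d)     # writable layer + overlay workdir
merged=$(mktemp -d)                        # mount point
# lowerdir entries are stacked left-to-right from top to bottom:
# files in $lower2 shadow files in $lower1.
echo mount -t overlay overlay \
  -o "lowerdir=$lower2:$lower1,upperdir=$upper,workdir=$work" "$merged"
```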
18. In a nutshell
Create a container with no additional image:
1. rebuild the latest busybox
2. correct minc-exec a little
Import a docker image:
1. the image should be a single layer (multi-layer images are OK once merged into one)
Create a container from an image imported from docker:
1. the kernel should be updated to 3.18 or later
2. merge multiple images into one if the kernel version is less than 4 ← I added this:
https://github.com/ukinau/mincs/commit/d94eb4fed4626e2f934a3ddc44912e8c2b28b269
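Conceptually, the merge in the linked commit amounts to flattening the layer stack: extract each layer in order so later layers overwrite earlier files. A hedged sketch (the layer file names are hypothetical, not what marten actually stores):

```shell
merged=$(mktemp -d)
# Extract bottom-to-top; a file present in several layers ends up with
# the topmost layer's version, matching overlayfs semantics.
for layer in base-layer.tar mid-layer.tar top-layer.tar; do
  if [ -f "$layer" ]; then
    tar -xf "$layer" -C "$merged"
  fi
done
echo "flattened single-layer root: $merged"
```

A real implementation also has to honour whiteout files (.wh.*) that mark deletions in upper Docker layers; plain extraction ignores them.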
19. Good articles
Slides by the original developer:
- http://www.slideshare.net/mhiramat/mincs-containers-in-the-shell-script
Why multiple lower layers can't be used in overlayfs before kernel 4.0:
- http://queforum.com/unix-linux-basics/1008603-linux-how-use-multiple-lower-layers-overlayfs.html
- http://stackoverflow.com/questions/31044982/how-to-use-multiple-lower-layers-in-overlayfs
Multiple lower layers supported in overlayfs since kernel 4.0:
- https://www.kernel.org/doc/Documentation/filesystems/overlayfs.txt