Shipyard is a management tool for Docker servers that allows users to view and manage containers running on Docker hosts. This document outlines how to securely set up Shipyard 2.0.10 with TLS on a CoreOS server. It describes generating certificates, configuring Docker to use the certificates, and installing Shipyard by running its Docker images and linking them to a database container. When complete, Shipyard can be securely accessed via its web interface.
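The summary above mentions generating certificates and pointing Docker at them; a minimal sketch of that flow follows. The filenames, CN values, and daemon flags are illustrative assumptions, not taken from the original guide.

```shell
# Create a private CA, then issue a server certificate signed by it
openssl genrsa -out ca-key.pem 4096
openssl req -new -x509 -days 365 -sha256 -key ca-key.pem \
    -subj "/CN=docker-ca" -out ca.pem
openssl genrsa -out server-key.pem 4096
openssl req -new -key server-key.pem -subj "/CN=coreos-host" -out server.csr
openssl x509 -req -days 365 -sha256 -in server.csr \
    -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server-cert.pem

# The Docker daemon is then started with TLS enabled, e.g. via a systemd
# drop-in on CoreOS:
#   dockerd --tlsverify --tlscacert=ca.pem \
#           --tlscert=server-cert.pem --tlskey=server-key.pem \
#           -H tcp://0.0.0.0:2376
```

Clients (including the Shipyard containers) then present a certificate signed by the same CA when connecting to port 2376.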
The document describes the process of setting up OpenStack Swift object storage. It includes installing and configuring Swift packages on both storage and proxy nodes, generating ring files to map objects to storage devices, and registering the Swift service with Keystone for authentication. Key steps are installing Swift packages, adding storage devices to the ring, distributing ring files, and configuring the proxy server and authentication filter.
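The ring-generation step described above can be sketched with `swift-ring-builder`; the partition power, replica count, and device address below are placeholder values.

```shell
# Build the account ring: 2^10 partitions, 3 replicas, 1h minimum between moves
swift-ring-builder account.builder create 10 3 1
# Add one storage device (region 1, zone 1, placeholder IP and device name)
swift-ring-builder account.builder add r1z1-10.0.0.51:6202/sdb1 100
swift-ring-builder account.builder rebalance
# Repeat for container.builder and object.builder, then distribute the
# generated *.ring.gz files to every storage and proxy node
```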
Delve Labs was present during the GoSec 2016 conference, where our lead DevOps engineer presented an overview of the current options available for securing Docker in production environments.
https://www.delve-labs.com
This document provides an introduction to Docker Swarm, which allows multiple Docker hosts to be clustered together into a single virtual Docker host. It discusses key components of Docker Swarm including managers, nodes, services, discovery services, and scheduling. It also provides steps for creating a Swarm cluster, deploying services, and considering high availability and security aspects.
This document discusses security mechanisms in Docker containers, including control groups (cgroups) to limit resources, namespaces to isolate processes, and capabilities to restrict privileges. It covers secure computing modes like seccomp that sandbox system calls. Linux security modules like AppArmor and SELinux are also mentioned, along with best practices for the Docker daemon and container security overall.
This document discusses Docker containers and orchestration tools. It begins with an overview of Docker containers and how they differ from traditional virtual machines. It then covers four Docker tools: Docker Machine for provisioning hosts, Docker Compose for defining multi-container applications, Docker Swarm for orchestrating containers across a cluster of nodes, and Docker Network for container networking. It demonstrates using these tools together to deploy a sample application across a three node Docker swarm cluster.
This document provides instructions for configuring a Squid proxy server on CentOS. It discusses obtaining information about the system like the OS distribution, hardware architecture, and installed application versions. It also outlines basic Squid configuration steps like backing up the default configuration file, checking the port Squid listens on, and ensuring the log file location is set correctly before starting Squid. Configuring access controls and caching policies would be covered in more depth in subsequent sections.
Deploying applications to Windows Server 2016 and Windows Containers, by Ben Hall
Delivered at NDC London 2017 on 20th January.
Sponsored by Katacoda.com, an interactive learning platform for Docker and Cloud Native technologies.
This document introduces Docker Swarm for clustering Docker hosts into a single virtual host. It discusses using Swarm with Consul and an overlay network. Key points:
- Docker Swarm turns a pool of Docker hosts into a single virtual host with a standard API.
- Consul provides service discovery, key-value storage, and health checking.
- An overlay network allows containers on different hosts to communicate, with networking defined by Docker but implemented by the hosts' kernels.
Swarm in a nutshell
• Exposes several Docker Engines as a single virtual Engine
• Serves the standard Docker API
• Extremely easy to get started
• Batteries included but swappable
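The pieces listed above can be wired together roughly as follows, in the style of the classic (pre-swarm-mode) setup the text describes; the IPs and image choices are illustrative.

```shell
# Start Consul for discovery (single-server bootstrap, placeholder host)
docker run -d -p 8500:8500 --name consul progrium/consul -server -bootstrap

# Start a Swarm manager that registers against Consul
docker run -d -p 4000:4000 swarm manage -H :4000 \
    --advertise 10.0.0.10:4000 consul://10.0.0.10:8500

# On each host, join the pool
docker run -d swarm join --advertise=10.0.0.11:2375 consul://10.0.0.10:8500

# Create an overlay network spanning the cluster (via the manager endpoint)
docker -H tcp://10.0.0.10:4000 network create -d overlay mynet
```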
Real World Lessons on the Pain Points of Node.js Applications, by Ben Hall
The document discusses several pain points experienced with Node.js applications and solutions for resolving them. It covers creating a strong foundation by upgrading to Node.js v5, locking down NPM dependencies, handling errors properly with try/catch blocks and promises, deploying applications using Docker for scaling, addressing security issues, and using tools like debug and profilers to improve performance.
Docker provides containerization capabilities while Ansible provides automation and configuration capabilities. Together they are useful DevOps tools. Docker allows building and sharing application environments while Ansible automates configuration and deployment. Key points covered include Docker concepts like images and containers, building images with Dockerfiles, and using Docker Compose to run multi-container apps. Ansible is described as a remote execution and configuration tool using YAML playbooks and roles to deploy applications. Their complementary nature makes them good DevOps partners.
This document provides an overview of Docker Swarm and how to set up and use a Docker Swarm cluster. It discusses key Swarm concepts, initializing a cluster, adding nodes, deploying services, rolling updates, draining nodes, failure scenarios, and the Raft consensus algorithm used for leader election in Swarm mode. The document walks through examples of creating a Swarm, adding nodes, deploying a service, inspecting and scaling services, rolling updates, and draining nodes. It also covers failure scenarios for nodes and managers and how the Swarm handles them.
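The lifecycle that summary walks through maps onto a short swarm-mode command sequence; the addresses, names, and image tags here are placeholders.

```shell
docker swarm init --advertise-addr 10.0.0.10              # create the cluster
docker swarm join --token <worker-token> 10.0.0.10:2377   # run on each worker
docker service create --name web --replicas 3 -p 80:80 nginx
docker service inspect --pretty web
docker service scale web=5                                # scale out
docker service update --update-parallelism 1 --image nginx:1.25 web  # rolling update
docker node update --availability drain node-2            # drain a node
```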
An introduction to Docker native clustering: Swarm.
Deployment and configuration, with Consul integration, for a production-like cluster serving a web application with multiple containers on multiple hosts. #dockerops
This document provides an overview and agenda for a two day Docker training course. Day one covers Docker introduction, installation, working with containers and images, building images with Dockerfiles, OpenStack integration, and Kubernetes introduction. Day two covers Docker cluster, Kubernetes in more depth, Docker networking, DockerHub, Docker use cases, and developing platforms with Docker. The document also includes sections on Docker basics, proposed cluster implementation strategies, and Kubernetes concepts and design principles.
The age of orchestration: from Docker basics to cluster management, by Nicola Paolucci
The container abstraction hit the collective developer mind with great force and created a space of innovation for the distribution, configuration and deployment of cloud-based applications. Now that this new model has established itself, work is moving towards orchestration and coordination of loosely coupled network services. There is an explosion of tools in this arena at varying degrees of stability, but the momentum is huge.
On that premise, this session will delve into a selection of the following topics:
- Two minute Docker intro refresher
- Overview of the orchestration landscape (Kubernetes, Mesos, Helios and Docker tools)
- Introduction to Docker's own ecosystem orchestration tools (machine, swarm and compose)
- Live demo of cluster management using a sample application.
A basic understanding of Docker is suggested to fully enjoy the talk.
Linux Administration Tutorial | Configuring A DNS Server In 10 Simple Steps |..., by Edureka!
This Linux administration tutorial is ideal for those who want to learn how to configure a Bind DNS server in Linux. The following topics have been covered in this video:
1. What is DNS?
2. How Does DNS Server Work?
3. Configuring Bind DNS Server In 10 Steps.
This document provides information about configuring and using the Squid caching proxy server. It discusses Squid versions and improvements between versions, how to configure access control lists and ports in Squid's configuration file squid.conf, and provides a sample configuration file with ACL rules and cache directory settings. Advantages discussed include improved caching and access control capabilities.
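A squid.conf excerpt in the spirit of that description might look like this (the network range, ports, and cache sizes are illustrative):

```
http_port 3128
acl localnet src 192.168.1.0/24
http_access allow localnet
http_access deny all
cache_dir ufs /var/spool/squid 1024 16 256
access_log /var/log/squid/access.log
```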
Running High Performance & Fault-tolerant Elasticsearch Clusters on Docker, by Sematext Group, Inc.
This document discusses running Elasticsearch clusters on Docker containers. It describes how Docker containers are more lightweight than virtual machines and have less overhead. It provides examples of running official Elasticsearch Docker images and customizing configurations. It also covers best practices for networking, storage, constraints, and high availability when running Elasticsearch on Docker.
This document provides an agenda and overview of Docker Machine and Docker Swarm. It discusses how Docker Machine allows managing Docker hosts on various platforms and distributions. It then explains how Docker Swarm exposes multiple Docker engines as a single virtual engine with built-in service discovery and scheduling. The document demonstrates how to set up a Docker Swarm cluster using the hosted discovery service and covers Swarm scheduling strategies, constraints, and container affinities.
This document discusses Amazon EC2 Container Service (ECS) and its benefits for container management. It provides an overview of ECS components like container instances, clusters, task definitions, and services. It also demonstrates how to use the ECS CLI to register task definitions, run tasks, and manage clusters. Examples are given of companies like Coursera using ECS for its benefits of scalability, flexibility, and ease of managing containers compared to traditional virtual servers. ECS can be used along with other AWS services like Lambda, ELB, and more to build flexible container-based architectures.
This document discusses Docker Swarm, a clustering and orchestration tool for Docker. It provides instructions for setting up a Swarm cluster using either a hosted discovery service or your own discovery service like Etcd. It also covers resource management using memory and CPU limits, port mapping, constraints to control where containers run, rescheduling policies, and the two step Swarm scheduler process of filtering nodes and selecting the best placement.
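In classic Swarm, the resource limits, constraints, affinities, and rescheduling policies mentioned above were expressed on `docker run`; the labels and values below are examples.

```shell
# Memory/CPU limits plus a port mapping
docker run -d -m 512m --cpu-shares 512 -p 80:80 nginx
# Constraint filter: only schedule on nodes labelled storage=ssd
docker run -d -e constraint:storage==ssd redis
# Affinity filter: co-locate with an existing container
docker run -d -e affinity:container==web-1 redis
# Reschedule the container if its node fails
docker run -d -e reschedule:on-node-failure nginx
```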
Scaling Next-Generation Internet TV on AWS With Docker, Packer, and Chef, by bridgetkromhout
This document discusses how DramaFever scaled their internet TV platform on AWS using Docker, Packer, and Chef. It describes how they built Docker images for consistent development and deployment, used Packer to build AMIs for consistent server provisioning, and implemented Chef recipes to define server configurations. The tools helped them achieve faster development cycles, consistent environments, and improved ability to automatically scale their infrastructure on AWS.
Security is often an afterthought, configured and applied at the last minute before rolling out a new system. Instaclustr has deployed Cassandra for customers with many different requirements.
From deployments in Heroku requiring total public access through to private data centres, we will walk you through securing Cassandra the right way.
The document provides steps to dockerize a WordPress application. It involves installing Docker, creating a Dockerfile to define the WordPress application environment, building a Docker image from the Dockerfile, running the image as a container and configuring WordPress. Key steps include creating a Dockerfile to install Apache, MySQL, PHP and WordPress, building an image from the Dockerfile, running the image as a container and mapping ports, and configuring WordPress inside the container.
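One hedged sketch of the run-and-configure steps, using the official images rather than a single hand-built Dockerfile (the container names and password are placeholders):

```shell
docker run -d --name db \
    -e MYSQL_ROOT_PASSWORD=secret -e MYSQL_DATABASE=wordpress mysql:5.7
docker run -d --name wp -p 8080:80 --link db:mysql \
    -e WORDPRESS_DB_HOST=db -e WORDPRESS_DB_USER=root \
    -e WORDPRESS_DB_PASSWORD=secret wordpress
# WordPress's web installer is then completed at http://localhost:8080
```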
Get hands-on with security features and best practices to protect your containerized services. Learn to push and verify signed images with Docker Content Trust, and collaborate with delegation roles. Intermediate to advanced Docker experience is recommended; participants will be building and pushing with Docker during the workshop.
Led By Docker Security Experts:
Riyaz Faizullabhoy
David Lawrence
Viktor Stanchev
Experience Level: Intermediate to advanced level Docker experience recommended
Running Docker in Development & Production (#ndcoslo 2015), by Ben Hall
The document discusses running Docker in development and production. It covers:
- Using Docker containers to run individual services like Elasticsearch or web applications
- Creating Dockerfiles to build custom images
- Linking containers together and using environment variables for service discovery
- Scaling with Docker Compose, load balancing with Nginx, and service discovery with Consul
- Clustering containers together using Docker Swarm for high availability
Docker Networking - Common Issues and Troubleshooting Techniques, by Sreenivas Makam
This document discusses Docker networking components and common issues. It covers Docker networking drivers such as bridge, host, and overlay, along with Docker daemon access and configuration behind firewalls. It also discusses container networking best practices like using user-defined networks instead of links, connecting containers to multiple networks, and connecting managed services to unmanaged containers. The document is intended to help troubleshoot Docker networking issues.
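The user-defined-network practice called out above looks roughly like this (the network and container names are illustrative):

```shell
docker network create --driver bridge appnet       # user-defined network, not --link
docker run -d --name web --network appnet nginx
docker network connect appnet existing-container   # attach a running container too
docker network inspect appnet                      # see attached containers and IPAM
```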
Overview of Docker 1.11 features (covers the Docker release summary up to 1.11, runc/containerd, DNS load balancing, IPv6 service discovery, labels, and macvlan/ipvlan).
The document discusses OpenShift security context constraints (SCCs) and how to configure them to allow running a WordPress container. It begins with an overview of SCCs and their purpose in OpenShift for controlling permissions for pods. It then describes issues running the WordPress container under the default "restricted" SCC due to permission errors. The document explores editing the "restricted" SCC and removing capabilities and user restrictions to address the errors. Alternatively, it notes the "anyuid" SCC can be used which is more permissive and standard for allowing the WordPress container to run successfully.
JDO 2019: Tips and Tricks from Docker Captain - Łukasz Lach, by PROIDEA
The document provides tips and tricks for using Docker including:
1) Installing Docker on Linux in an easy way allowing choice of channel and version.
2) Setting up a local Docker Hub mirror for caching and revalidating images.
3) Using docker inspect to find containers that exited with non-zero codes or show commands for running containers.
4) Organizing docker-compose files with extensions, environment variables, anchors and aliases for well structured services.
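Tip 4 can be sketched with a compose fragment; the extension field, anchor names, and services are illustrative (compose file format 3.4+):

```yaml
version: "3.4"
x-defaults: &defaults          # extension field holding shared settings
  restart: unless-stopped
  logging:
    driver: json-file
services:
  web:
    <<: *defaults              # merge the shared settings via a YAML alias
    image: example/web
  worker:
    <<: *defaults
    image: example/worker
```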
ContainerDayVietnam2016: Dockerize a small business, by Docker-Hanoi
This document discusses how Docker can transform development and deployment processes for modern applications. It outlines some of the challenges of developing and deploying applications across different environments, and how Docker addresses these challenges through containerization. The document then provides examples of how to dockerize a Rails and Python application, set up an Nginx reverse proxy with Let's Encrypt, and configure a Docker cluster for continuous integration testing.
This document summarizes a presentation about running .NET applications on Docker containers. It discusses getting started with Docker, differences between Windows and Linux containers, building .NET and Node.js applications as Docker images, deploying containers to production environments, and the future of Docker integration with desktop applications and Microsoft technologies. Examples are provided of Dockerfile instructions for .NET and Node.js applications and using Docker Compose to run multi-container applications.
The Nova driver for Docker has been maturing rapidly since its mainline removal in Icehouse. During the Juno cycle, substantial improvements have been made to the driver, and greater parity has been reached with other virtualization drivers. We will explore these improvements and what they mean to deployers. Eric will additionally showcase deployment scenarios for the deployment of OpenStack itself inside and underneath of Docker for powering traditional VM-based computing, storage, and other cloud services. Finally, users should expect a preview of the planned integration with the new OpenStack Containers Service effort to provide automation of advanced containers functionality and Docker-API semantics inside of an OpenStack cloud.
Note that the included Heat templates are NOT usable. See the linked Heat resources for viable templates and examples.
Dockers & kubernetes detailed - Beginners to Geek, by wiTTyMinds1
Docker is a platform for building, distributing and running containerized applications. It allows applications to be bundled with their dependencies and run in isolated containers that share the same operating system kernel. Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. It groups Docker containers that make up an application into logical units for easy management and discovery. Docker Swarm is a native clustering tool that can orchestrate and schedule containers on machine clusters. It allows Docker containers to run as a cluster on multiple Docker hosts.
A guide to deploying an initial Docker Swarm mode network and then incorporating Asterisk into that swarm. Commands, a discussion of host mode vs overlay networking, and the basics of a deployable Docker Swarm mode Stack file are all covered.
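A minimal Stack file in that spirit might publish SIP in host mode to keep media off the ingress mesh; the image name and port values are placeholders.

```yaml
version: "3.4"
services:
  asterisk:
    image: example/asterisk
    ports:
      - target: 5060
        published: 5060
        protocol: udp
        mode: host        # host mode bypasses the routing mesh for media
    deploy:
      replicas: 1
```

Deployed with `docker stack deploy -c docker-compose.yml asterisk`.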
This document provides an introduction to Docker and the need for orchestration tools when deploying multi-container applications. It discusses how Docker solves the problem of portability for software artifacts and defines key Docker concepts like images, containers, and registries. It also introduces orchestration tools like Docker Compose and Docker Swarm that automate deployment of interdependent services across clusters. The document argues for guidelines on Docker use at organizations to address questions around containerization strategies and orchestration platforms.
Azure Bootcamp 2016 - Docker Orchestration on Azure with Rancher, by Karim Vaes
This document discusses Docker orchestration on Azure using Rancher. It begins with an introduction to Docker concepts like containers, images and the Docker workflow. It then demonstrates deploying a Rancher server on Azure, adding nodes, upgrading a sample application, enabling cross-region networking, auto-scaling services, and using a Docker volume plugin to connect to Azure File Storage for persistent storage. The document includes code samples and step-by-step demonstrations of these Rancher and Docker capabilities on Azure.
AtlasCamp 2015: The age of orchestration: From Docker basics to cluster manag..., by Atlassian
Nicola Paolucci, Atlassian
Containers hit the collective developer mind with great force the past two years and created a space of fervent innovation. Now work is moving towards orchestration. In this session we'll cover an overview of the container orchestration landscape, give an introduction to Docker's own tools - machine, swarm and compose - and show a (semi)live demo of how they work in practice.
Create and use a Dockerized Aruba Cloud server - CloudConf 2017, by Aruba S.p.A.
Docker can be used to provision and manage virtual servers hosted on the Aruba Cloud platform. The docker-machine driver for Aruba Cloud allows users to create, start, stop, and remove Docker-enabled virtual servers using docker-machine commands. Virtual servers on Aruba Cloud include Smart and Pro options and can be created from templates like Ubuntu or CentOS in different sizes.
1. The document summarizes the topics covered in an advanced Docker workshop, including Docker Machine, Docker Swarm, networking, services, GitLab integration, IoT applications, Moby/LinuxKit, and a call to action to learn more about Docker on their own.
2. Specific topics included how to create Docker Machines on Azure, build a Swarm cluster, configure networking and services, integrate with GitLab for continuous integration/delivery, develop IoT applications using Docker on Raspberry Pi, and introduce Moby and LinuxKit for building customized container-based operating systems.
3. The workshop concluded by emphasizing business models, microservices, infrastructure as code, container design, DevOps, and
Similar to How To Securely Set Up Shipyard 2.0.10 with TLS on CoreOS (20)
The document discusses 10 essential Laravel packages that make common functionality easier to implement without having to code it from scratch. These include packages for generators, IDE integration, testing, validation, debugging, authentication, authorization, forms, optimization, and administration. Using these packages can save significant development time.
This document provides instructions for installing Ruby on Rails, Nginx, and Passenger on Ubuntu. It details downloading RVM and using it to install Ruby 1.9.3 and the Rails gem. Passenger is then installed as a module for Nginx to interface it with Rails. The Nginx configuration file is edited to enable Passenger and set the document root for a new Rails app. Finally, a new Rails app is generated and Nginx started to serve the application.
This document provides steps to create an SSL certificate for Nginx on Ubuntu. It involves generating a private key, creating a certificate signing request (CSR) with the key, and using the CSR to generate a self-signed certificate. The certificate and key are then configured for a virtual host in Nginx to encrypt website traffic.
This document provides instructions for adding swap space on an Ubuntu server. It explains that swap space is used when RAM is full and inactive memory pages are moved to a slower hard drive. It describes how to check for existing swap, determine an appropriate swap size based on available disk space and RAM, create a swap file, enable the swap file, and ensure the swap file is mounted on boot by adding it to fstab. Following these steps creates a persistent 2GB swap file on the server to supplement the 1GB of RAM when needed.
The document introduces the MEAN stack, which combines MongoDB, Express.js, Angular.js, and Node.js into a full-stack web development framework. It discusses installing and setting up a MEAN.js boilerplate project, which provides a sample application with user authentication and articles features to demonstrate the modular architecture. The document also describes using the MEAN.js generator to quickly scaffold additional modules and components.
This document discusses how to fix 403 Forbidden errors in Nginx by checking the error logs. It explains how to find the error logs using lsof and tail commands. Common causes of 403 errors are then outlined, such as incorrect directory settings if the directory listing is not enabled or the index file is wrong, and improper file permissions preventing Nginx from accessing files. The key is to identify the specific error by monitoring the logs.
This document provides instructions for deploying a WordPress application on Ubuntu 14.04 using either a control panel or command line tools. Users can launch a pre-configured WordPress server image with a single click or command. Once the server is ready, the WordPress installation process requires setting up basic info and accounts. After DNS is configured, the site will be ready to use at the assigned domain or IP address.
This document discusses how to deploy a MariaDB Galera cluster on Ubuntu 14.04. It requires installing MariaDB, Galera, and Rsync on at least 3 Ubuntu nodes. Specific configuration files are edited to set the cluster address, node addresses, and other settings. The MySQL services are restarted and tests run to validate the cluster is functioning properly with data replicated across all nodes.
This document provides instructions for mitigating the OpenSSL Heartbleed bug on CentOS or Ubuntu systems. It describes updating to the latest OpenSSL version to fix the vulnerability, verifying the OpenSSL version, regenerating certificates with new keys, and restarting services that use SSL certificates such as Apache. Following these steps ensures systems are protected from the Heartbleed bug.
Ruby on Rails is a web application framework that runs on the Ruby programming language. This document provides instructions for installing Ruby on Rails on Ubuntu, which involves updating the system, installing RVM to compile and install the latest versions of Ruby and Rails, and then verifying the installation by checking the rails command output.
This document provides steps to run Nginx in a Docker container on Ubuntu 16.04. It explains how to install Docker, pull the Nginx image, run the Nginx container with port mapping and in detached mode, and serve a custom web page from the host directory mapped to the container. Running Nginx in a container allows it to be portable across systems and modular to compose into distributed applications.
This document provides instructions on how to install and configure Varnish as a web accelerator in front of Apache on Ubuntu. It describes downloading and installing Varnish from its repositories, configuring Apache to listen on a different port than the default 80 so Varnish can listen on 80, editing the Varnish configuration files to set parameters and define Apache as the backend, and restarting the services so traffic is routed through Varnish. It also provides a command to check Varnish caching status and metrics.
CentOS 7 was released shortly after Red Hat Enterprise Linux 7 and includes similar new features such as Systemd and Docker support. It also allows upgrading from CentOS 6 to 7 automatically without installation media by downloading and using upgrade tools, requiring a reboot but not an entire reinstallation. The upgrade assistant analyzes systems for potential issues but does not perform the upgrade itself, which requires a separate upgrade tool after importing keys from CentOS repositories. Testing showed the upgrade process worked well for clean virtual machines and remote servers.
In this article we will be showing you the complete deployment steps of Clojure Web Application on Ubuntu 14.04.
Presented by VEXXHOST, provider of Openstack based Public and Private Cloud Infrastructure
https://vexxhost.com/
The purpose of OpenVPN is simple; it allows connecting to other devices within one secure network. It allows to keep online data safe by tunneling them through encrypted servers. So if you’re looking for a reliable, easy-to-use system that is adaptable enough to deal with any operating system, then OpenVPN is a no-brainer.
Presented by VEXXHOST, provider of Openstack based Public and Private Cloud Infrastructure
https://vexxhost.com/
How To Setup Highly Available Web Servers with Keepalived & Floating IPs on U...VEXXHOST Private Cloud
In this guide, we will show you to use keepalived to set up a highly available web service on Ubuntu 16.04 by using a floating IP address that can be moved between two capable web servers. The keepalived daemon can be used to monitor services or systems and to automatically failover to a standby if their’s any problems occur. If the primary server goes down, the floating IP will be moved to the second server automatically, allowing service to resume by the help of floating IP that we are gonna use in this tutorial.
Presented by VEXXHOST, provider of Openstack based Public and Private Cloud Infrastructure
https://vexxhost.com/
In this tutorial, we will explain how to get your own GitHub instance running on your own Ubuntu 12.04 VPS. Ubuntu 12.04 is recommended because of some incompatibilities between Python and Ruby on other Linux distributions. Also, make sure you have at least 1GB RAM memory on your VPS. Our first step is to install some required packages and dependencies.
Presented by VEXXHOST, provider of Openstack based Public and Private Cloud Infrastructure
https://vexxhost.com/
Exploiting Artificial Intelligence for Empowering Researchers and Faculty, In...Dr. Vinod Kumar Kanvaria
Exploiting Artificial Intelligence for Empowering Researchers and Faculty,
International FDP on Fundamentals of Research in Social Sciences
at Integral University, Lucknow, 06.06.2024
By Dr. Vinod Kumar Kanvaria
Strategies for Effective Upskilling is a presentation by Chinwendu Peace in a Your Skill Boost Masterclass organisation by the Excellence Foundation for South Sudan on 08th and 09th June 2024 from 1 PM to 3 PM on each day.
A review of the growth of the Israel Genealogy Research Association Database Collection for the last 12 months. Our collection is now passed the 3 million mark and still growing. See which archives have contributed the most. See the different types of records we have, and which years have had records added. You can also see what we have for the future.
A workshop hosted by the South African Journal of Science aimed at postgraduate students and early career researchers with little or no experience in writing and publishing journal articles.
Executive Directors Chat Leveraging AI for Diversity, Equity, and InclusionTechSoup
Let’s explore the intersection of technology and equity in the final session of our DEI series. Discover how AI tools, like ChatGPT, can be used to support and enhance your nonprofit's DEI initiatives. Participants will gain insights into practical AI applications and get tips for leveraging technology to advance their DEI goals.
Physiology and chemistry of skin and pigmentation, hairs, scalp, lips and nail, Cleansing cream, Lotions, Face powders, Face packs, Lipsticks, Bath products, soaps and baby product,
Preparation and standardization of the following : Tonic, Bleaches, Dentifrices and Mouth washes & Tooth Pastes, Cosmetics for Nails.
A Strategic Approach: GenAI in EducationPeter Windle
Artificial Intelligence (AI) technologies such as Generative AI, Image Generators and Large Language Models have had a dramatic impact on teaching, learning and assessment over the past 18 months. The most immediate threat AI posed was to Academic Integrity with Higher Education Institutes (HEIs) focusing their efforts on combating the use of GenAI in assessment. Guidelines were developed for staff and students, policies put in place too. Innovative educators have forged paths in the use of Generative AI for teaching, learning and assessments leading to pockets of transformation springing up across HEIs, often with little or no top-down guidance, support or direction.
This Gasta posits a strategic approach to integrating AI into HEIs to prepare staff, students and the curriculum for an evolving world and workplace. We will highlight the advantages of working with these technologies beyond the realm of teaching, learning and assessment by considering prompt engineering skills, industry impact, curriculum changes, and the need for staff upskilling. In contrast, not engaging strategically with Generative AI poses risks, including falling behind peers, missed opportunities and failing to ensure our graduates remain employable. The rapid evolution of AI technologies necessitates a proactive and strategic approach if we are to remain relevant.
How To Securely Set Up Shipyard 2.0.10 with TLS on CoreOS
• Shipyard is a management tool for Docker servers.
• Docker is a cutting-edge piece of software used for containerization.
• Shipyard allows you to see which containers each of your servers is running, so you can start or stop existing containers or create new ones.
• Once you’ve set up Shipyard on your server, you can access it using a graphical interface, a command-line interface, or an API.
• Shipyard lacks some of the advanced features of other Docker orchestration tools, but it’s very simple to set up, free to
use, and you can manage and host it yourself.
• It also lets you manage resource allocation to specific containers and manage containers across multiple Docker hosts.
• However, it’s important to ensure that your Docker server and Shipyard system are secure, especially if they are being
used in production.
• In this article, we are going to walk through installing Shipyard 2.0.10 on a single CoreOS server and securing Docker with a TLS certificate, ensuring that only authorized clients may connect to it.
• TLS stands for Transport Layer Security, a protocol used to encrypt data as it travels from the client to the server and back again.
• Here, we’ll use it to encrypt our connection to the Docker host, and Docker’s connection to Shipyard.
Prerequisites:
• In order to set up Shipyard 2.0.10 with TLS on CoreOS, we need to make sure the following prerequisites are complete.
• First of all, set up one CoreOS Droplet with at least 1 GB of RAM (more is recommended) and choose the latest stable version of CoreOS.
• Log in to your server using your SSH key, as all CoreOS servers require SSH key authentication, then set up a fully qualified domain name (FQDN) or subdomain for your Docker host.
• Now let’s start by setting up Docker to use certificates for authentication.
1) Creating the Server Certificate:
• CoreOS comes with OpenSSL, a utility that can be used to generate and sign certificates.
• Let’s create a Certificate Authority that we can use to sign server and client certificates.
• First, create and move to a directory called ‘dockertls’, so it’s easy to remember where the files are.
$ mkdir ~/dockertls
$ cd ~/dockertls
• Then create an RSA private key using the command below, which will prompt you to create a passphrase for your key.
$ openssl genrsa -aes256 -out private-key.pem 4096
• In this command, genrsa tells OpenSSL to generate a private RSA key, -out private-key.pem specifies the name of the file we want to generate, and the last argument, 4096, is the length of the key in bits.
• It’s recommended to keep this at a high number like 4096.
• Next, generate a new certificate and sign it with the private key we just created. You’ll need to enter the same passphrase
you chose when creating the key.
$ openssl req -new -x509 -sha512 -days 365 -key private-key.pem -out myca.pem
• Here OpenSSL will also ask for some required information, like the FQDN of your server and the country your organization is based in. Try to answer these questions as accurately as possible. This is the last step in creating our self-signed Certificate Authority, or CA.
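• The same two steps can also be scripted non-interactively. The sketch below is illustrative only: the passphrase (pass:changeme) and the -subj fields are placeholder values you should replace with your own, and it runs in a throwaway directory so it will not touch an existing ~/dockertls.

```shell
# Non-interactive sketch of the CA creation above.
# pass:changeme and the -subj values are placeholders -- use your own.
cd "$(mktemp -d)"
openssl genrsa -aes256 -passout pass:changeme -out private-key.pem 4096
openssl req -new -x509 -sha512 -days 365 \
    -key private-key.pem -passin pass:changeme \
    -subj "/C=US/ST=NY/O=Example Org/CN=test.com" \
    -out myca.pem
# Inspect the resulting CA certificate:
openssl x509 -in myca.pem -noout -subject -dates
```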
• After creating the CA, we will create a server certificate for use with the Docker daemon.
• The following two commands generate a signing request; be sure to replace test.com with the domain or subdomain you are using for Docker.
$ openssl genrsa -out docker-1-key.pem 4096
$ openssl req -subj "/CN=test.com" -sha512 -new -key docker-1-key.pem -out docker.csr
• Finally, sign with the CA’s private key. You’ll need to enter the key passphrase again.
$ openssl x509 -req -days 365 -sha256 -in docker.csr -CA myca.pem -CAkey private-key.pem -CAcreateserial -out final-server-cert.pem
• This will create a file in the current directory called final-server-cert.pem, which is the server certificate that will be
used on the Docker host.
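• Before wiring the certificate into Docker, it is worth confirming that it really chains back to the CA. The self-contained sketch below reruns the steps above in a throwaway directory (with passphrase-free 2048-bit keys purely for brevity) and then performs the check with openssl verify; against your real files in ~/dockertls you would only need the final command.

```shell
cd "$(mktemp -d)"
# Compressed rerun of the CA and server-certificate steps (demo only:
# no passphrase, 2048-bit keys -- keep 4096 bits and a passphrase for real).
openssl genrsa -out private-key.pem 2048
openssl req -new -x509 -sha512 -days 365 -key private-key.pem \
    -subj "/CN=Demo CA" -out myca.pem
openssl genrsa -out docker-1-key.pem 2048
openssl req -subj "/CN=test.com" -sha512 -new -key docker-1-key.pem -out docker.csr
openssl x509 -req -days 365 -sha256 -in docker.csr -CA myca.pem \
    -CAkey private-key.pem -CAcreateserial -out final-server-cert.pem
# The actual check -- prints "final-server-cert.pem: OK" on success:
openssl verify -CAfile myca.pem final-server-cert.pem
```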
2) Creating the Client Certificate:
• After creating the server certificate, we need to create a client certificate.
• This will be used whenever we try to connect to the Docker host.
• It proves that the client’s certificate was actually issued and signed by our personal CA.
• Therefore, only authorized clients will be allowed to connect and send commands to Docker.
• First, create another signing request for the client using the commands below.
$ openssl genrsa -out client-key.pem 4096
$ openssl req -subj '/CN=client' -new -key client-key.pem -out docker-client.csr
• We need to create a config file which specifies that the resulting certificate can actually be used for client authentication.
$ echo extendedKeyUsage = clientAuth > client.cnf
• This creates a file called client.cnf with the content extendedKeyUsage = clientAuth, without needing to use a text editor.
• Next, sign the client with the CA key.
$ openssl x509 -req -days 365 -sha512 -in docker-client.csr -CA myca.pem -CAkey private-key.pem -CAcreateserial -out client.pem -extfile client.cnf
Signature ok
subject=/CN=client
Getting CA Private Key
Enter pass phrase for private-key.pem:
• Now that we have a CA, a server certificate, and a client certificate set up, let’s move on to the next step.
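• To confirm that the clientAuth extension actually made it into a certificate, you can ask OpenSSL to list the certificate’s purposes. The sketch below is kept self-contained for demonstration, so it self-signs the client CSR with its own key rather than the CA key; the -purpose check at the end works the same way on the client.pem you created above.

```shell
cd "$(mktemp -d)"
echo extendedKeyUsage = clientAuth > client.cnf
openssl genrsa -out client-key.pem 2048
openssl req -subj '/CN=client' -new -key client-key.pem -out docker-client.csr
# Self-signed here only to keep the demo short; in the real setup the CSR
# is signed by the CA as shown above.
openssl x509 -req -days 365 -sha512 -in docker-client.csr \
    -signkey client-key.pem -extfile client.cnf -out client.pem
# "SSL client : Yes" in the output confirms the extension took effect:
openssl x509 -in client.pem -noout -purpose
```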
3) Configuring Docker and CoreOS:
• In this step, we’ll configure the Docker daemon to use our certificates by modifying the startup options for Docker.
• CoreOS uses systemd to manage services.
• Let’s start by editing the Docker unit file. There’s an option for the systemctl command that will help us by duplicating the
actual unit file instead of modifying the original directly.
• Open the Docker unit file for editing using systemctl as shown.
$ sudo systemctl edit --full docker
• This will open the file for editing (in vim by default). Find the line that begins with ExecStart=/usr/lib/coreos/dockerd and add the TLS options shown below after --host=fd://, so the full line reads:
ExecStart=/usr/lib/coreos/dockerd daemon --host=fd:// --tlsverify --tlscacert=/home/core/dockertls/myca.pem --tlscert=/home/core/dockertls/final-server-cert.pem --tlskey=/home/core/dockertls/docker-1-key.pem -H=0.0.0.0:2376 $DOCKER_OPTS $DOCKER_CGROUPS $DOCKER_OPT_BIP $DOCKER_OPT_MTU $DOCKER_OPT_IPMASQ
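• As an aside, running systemctl edit docker without the --full flag achieves the same result through a drop-in file rather than a full copy of the unit. A sketch of the equivalent drop-in, assuming the same certificate paths as above (the empty ExecStart= line is required to clear the original before overriding it):

```ini
# /etc/systemd/system/docker.service.d/10-tls.conf
[Service]
ExecStart=
ExecStart=/usr/lib/coreos/dockerd daemon --host=fd:// --tlsverify --tlscacert=/home/core/dockertls/myca.pem --tlscert=/home/core/dockertls/final-server-cert.pem --tlskey=/home/core/dockertls/docker-1-key.pem -H=0.0.0.0:2376 $DOCKER_OPTS $DOCKER_CGROUPS $DOCKER_OPT_BIP $DOCKER_OPT_MTU $DOCKER_OPT_IPMASQ
```

After writing a drop-in, run sudo systemctl daemon-reload followed by sudo systemctl restart docker to apply it.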
• Here in this configuration --tlsverify simply turns on TLS verification so that only authorized clients may connect.
• --tlscacert specifies the location of our CA’s certificate.
• --tlscert specifies the server certificate location.
• --tlskey specifies the server key location and -H=0.0.0.0:2376 means that Docker will listen for connections
from anywhere, but it still will not allow any connections that don’t have an authorized client key or certificate.
• Now reload the Docker daemon after saving and closing the file, so that it will use our new configuration.
$ sudo systemctl restart docker
$ sudo systemctl status docker
• Once the Docker service is up and running, run the command below to test our TLS verification.
$ docker --tlsverify --tlscacert=myca.pem --tlscert=client.pem --tlskey=client-key.pem -H=test.com:2376 info
• You will get some basic system information about your Docker host.
• This means you just secured your Docker host with TLS.
• If you get an error, check the logs using systemctl status docker.
• We can now access the Docker host from anywhere, as long as we connect using a valid client certificate and key. We can generate and sign as many client certificates as we want for use in a cluster.
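• Instead of repeating the --tlsverify flags on every invocation, the Docker client also honours a set of environment variables. A minimal sketch, assuming you copy myca.pem, client.pem and client-key.pem to the client machine under the names Docker expects (ca.pem, cert.pem and key.pem inside $DOCKER_CERT_PATH):

```shell
# Docker looks for ca.pem, cert.pem and key.pem in $DOCKER_CERT_PATH,
# so copy myca.pem, client.pem and client-key.pem there under those names.
mkdir -p ~/.docker
export DOCKER_HOST=tcp://test.com:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=~/.docker
# With the certs in place, a plain `docker info` now behaves like the
# long --tlsverify command above:
# docker info
```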
4) Installing Shipyard:
• In this step, we will install Shipyard.
• Once you have Docker running, it is quite easy to install Shipyard because it ships as Docker images.
• All you need to do is pull the images from the Docker registry and run the necessary containers.
• First we will create a data volume container to hold Shipyard’s database data.
• This container won’t do anything by itself; it is a convenient label for the location of all of Shipyard’s data.
$ docker create --name shipyard-rethinkdb-data shipyard/rethinkdb
• RethinkDB is the database engine Shipyard uses to keep track of real-time data from Docker.
• Now that the data volume container is created, we can launch the database server for Shipyard and link the two together.
$ docker run -it -d --name shipyard-rethinkdb --restart=always --volumes-from shipyard-rethinkdb-data -p 127.0.0.1:49153:8080 -p 127.0.0.1:49154:28015 -p 127.0.0.1:29015:29015 shipyard/rethinkdb
• This command also ensures that RethinkDB will only listen on localhost. This is a good way to secure this database because it
means no one will be able to access it from outside the server.
• We’ll be using Shipyard version 2.0.10 because it’s the easiest to configure with Docker TLS.
• The following command will start a new container that runs Shipyard and links it to the RethinkDB container, allowing
them to communicate.
$ docker run -it -p 8080:8080 -d --restart=always --name shipyard --link shipyard-rethinkdb:rethinkdb shipyard/shipyard:2.0.10
5) Accessing Shipyard Web:
• Once you have completed your Shipyard setup, open your web browser to
visit http://test.com:8080 or http://your_server_ip:8080 to access the Shipyard control panel. You can
log in with the default username admin and password shipyard.
• Shipyard will prompt you to add a new engine to the cluster. Click the green + ADD button.
• You will be presented with some options to fill in, such as the name of the new engine and its certificate and key.
• Once you have entered the required information, click the ADD button at the bottom of the page.
• If everything is configured correctly, the Shipyard dashboard will show CPU and RAM stats, along with events on its right side.
Conclusion:
• Shipyard is up and running with secured TLS on CoreOS.
• You should also be able to configure additional servers with Docker and connect them to your Shipyard instance for
management.
• You’ve also learned how to connect to your Shipyard instance using the GUI, and how to deploy new containers on your Docker host over secured TLS using both the command line and the GUI.
• It helps you manage your containers and cluster of hosts safely and securely.
• You can also add a client key and certificate to your local machine so you can remotely manage your Docker cluster from
anywhere.
• That’s all; we hope you found this article helpful.
• Feel free to get back to us in case of any issues.