Presented by Luke Marsden at Software Circus Amsterdam
Microservices are smashing monolithic databases into lots of pieces, and CI/CD is making it more and more challenging to test those pieces consistently. This talk explores the problem space and dives into detailed examples, weighing the pros and cons of both ephemeral data stores and storage orchestration.
Behind the scenes with Docker volume plugins (ClusterHQ)
A story of cross-company and cross-continent open-source collaboration.
Last June at DockerCon in San Francisco, Solomon announced experimental support for Docker volume plugins, making it possible to build add-ons to Docker that manage persistent storage. The run-up to that announcement was a frenetic 9 months of cross-company and cross-timezone collaboration. At ClusterHQ, we were deeply involved in building the interface, and we also made our own Docker volume plugin Flocker available for migrating data volumes between nodes in a cluster. In this talk, I'll share stories on what it was like working with the Docker team and others in the ecosystem to build this API. We'll cover how you, too, can take advantage of Docker volume plugins to run stateful containers. I'll guide you through the Docker plugin model and show off some of the existing plugins so you can see how to enable stateful containers for your own use cases.
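The plugin model described above is, at its core, a small JSON-over-HTTP protocol: Docker POSTs to endpoints such as /VolumeDriver.Create and /VolumeDriver.Mount and expects a response with an Err field (plus a Mountpoint for mount and path requests). As a rough illustration of the request/response shapes only, here is a toy in-memory driver's dispatch logic (a sketch, not a real plugin: real drivers listen on a Unix socket under /run/docker/plugins/ and actually provision storage):

```python
class InMemoryVolumeDriver:
    """Toy driver mimicking the shape of Docker's volume-plugin protocol.

    A real plugin is an HTTP server; here we dispatch already-decoded
    JSON bodies directly so the protocol shapes are easy to see.
    """

    def __init__(self, root="/var/lib/toy-volumes"):
        self.root = root
        self.volumes = {}  # volume name -> mountpoint

    def handle(self, endpoint, body):
        name = body.get("Name")
        if endpoint == "/Plugin.Activate":
            # Handshake: tell Docker which plugin APIs we implement.
            return {"Implements": ["VolumeDriver"]}
        if endpoint == "/VolumeDriver.Create":
            self.volumes[name] = "%s/%s" % (self.root, name)
            return {"Err": ""}
        if endpoint == "/VolumeDriver.Mount":
            if name not in self.volumes:
                return {"Err": "no such volume: %s" % name}
            return {"Mountpoint": self.volumes[name], "Err": ""}
        if endpoint == "/VolumeDriver.Path":
            return {"Mountpoint": self.volumes.get(name, ""), "Err": ""}
        if endpoint == "/VolumeDriver.Unmount":
            return {"Err": ""}
        if endpoint == "/VolumeDriver.Remove":
            self.volumes.pop(name, None)
            return {"Err": ""}
        return {"Err": "unsupported endpoint: %s" % endpoint}
```

A Flocker-style driver would replace the dictionary with calls that move or attach real volumes before returning the mountpoint.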
Discovering Docker Volume Plugins and Apps using VirtualBox (Clinton Kitson)
There are right and wrong ways to use containers with persistent applications. Lucky for you, doing it the right way is much easier nowadays with Docker Volume Plugins. This talk focuses on basic education, mostly through live demos, showing how you can take advantage of these new capabilities to expand how you leverage containers.
A presentation focused on the latest storage API from Docker, integrating with the EMC {code} project REX-Ray to provide container storage from EBS volumes.
Stephen Nguyen, a Developer Evangelist for ClusterHQ, reviews how volumes work and the benefits of letting Flocker orchestrate your volumes. (Video coming soon.)
Scaling Docker Containers using Kubernetes and Azure Container Service (Ben Hall)
This document discusses scaling Docker containers using Kubernetes and Azure Container Service. It begins with an introduction to containers and Docker, including how containers improve dependency and configuration management. It then demonstrates building and deploying containerized applications using Docker and discusses how to optimize Docker images. Finally, it introduces Kubernetes as a tool for orchestrating containers at scale and provides an example of deploying a containerized application on Kubernetes in Azure.
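One image-optimization technique that talks like this commonly demonstrate is the multi-stage build: compile in a heavyweight SDK image, then copy only the finished artifact into a minimal runtime image. A sketch under stated assumptions (the base images, paths, and choice of the Go toolchain are illustrative, not taken from the slides):

```dockerfile
# Build stage: full SDK image with compilers and dev dependencies.
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN go build -o /app ./...

# Runtime stage: only the compiled binary is copied across,
# so the shipped image contains no compiler or source code.
FROM gcr.io/distroless/static
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

The final image is typically tens of megabytes instead of the gigabyte-scale SDK image, which speeds up both pushes and pod scheduling.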
Discovering Volume Plugins with Applications using Docker Toolbox and VirtualBox (Clinton Kitson)
There are right and wrong ways to use containers with persistent applications. Lucky for you, doing it the right way is much easier nowadays with Docker Volume Plugins. This talk focuses on basic education, mostly through live demos, showing how you can take advantage of these new capabilities to expand how you leverage containers. We will walk through a workflow that you can reproduce yourself using boot2docker.
This document discusses Docker, including:
1. Docker is a platform for running and managing Linux containers that provides operating-system-level virtualization without the overhead of traditional virtual machines.
2. Key Docker concepts include images (immutable templates for containers), containers (running instances of images that have mutable state), and layers (the building blocks of images).
3. Publishing Docker images to registries allows them to be shared and reused across different systems. Volumes and networking allow containers to share filesystems and communicate.
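The image/container/layer relationship in the list above can be modeled in miniature: an image is an ordered stack of read-only layers, and starting a container just adds one writable layer on top, with reads falling through to the layers below. A toy model using Python's ChainMap (this is the concept only, not how Docker stores layers on disk):

```python
from collections import ChainMap

# Read-only image layers: each maps a file path to its content.
base_layer = {"/etc/os-release": "debian", "/bin/sh": "<binary>"}
app_layer = {"/app/server.py": "print('hi')"}

# An "image" is just the ordered stack of its layers.
image = ChainMap(app_layer, base_layer)

def start_container(image):
    # A "container" adds an empty writable layer on top; the
    # image layers underneath are shared and never modified.
    return image.new_child({})

c = start_container(image)
c["/tmp/state"] = "mutable"  # the write lands only in the container layer

assert c["/etc/os-release"] == "debian"  # reads fall through to lower layers
assert "/tmp/state" not in image         # the image stays immutable
```

This is also why many containers from one image are cheap: they share every read-only layer and each pay only for what they write.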
This document provides an overview and comparison of Docker, Kubernetes, OpenShift, Fabric8, and Jube container technologies. It discusses key concepts like containers, images, and Dockerfiles. It explains how Kubernetes provides horizontal scaling of Docker through replication controllers and services. OpenShift builds on Kubernetes to provide a platform as a service with routing, multi-tenancy, and a build/deploy pipeline. Fabric8 and Jube add additional functionality for developers, with tools, libraries, logging, and pure Java Kubernetes implementations respectively.
This document provides an introduction to Kubernetes and Container Network Interface (CNI). It begins with an introduction to the presenter and their background. It then discusses the differences between VMs and containers before explaining why Kubernetes is needed for container orchestration. The rest of the document details the architecture of Kubernetes, including the master node, worker nodes, pods, labels, replica sets, deployments, services, and how to build a Kubernetes cluster. It concludes with a brief introduction to CNI and a call for questions.
From a SCALE 13x session on 2015-02-22. Overview of Docker, Swarm, and a demonstration of docker-machine for easily bootstrapping container environments and Swarm clusters.
Cloud native applications are popular these days – applications that run in the cloud reliably and scale almost arbitrarily. They follow three key principles: they are built and composed as microservices; they are packaged and distributed in containers; and the containers are executed dynamically in the cloud. Kubernetes is an open-source cluster manager for the automated deployment, scaling and management of cloud native applications. In this hands-on session we will introduce the core concepts of Kubernetes and then show how to build, package and operate a cloud native showcase application on top of Kubernetes, step by step. Throughout this session we will be using an off-the-shelf MIDI controller to demonstrate and visualize the concepts and to remote-control Kubernetes. This session was presented at ContainerCon Europe 2016 in Berlin. #qaware #cloudnativenerd #LinuxCon #ContainerCon
This document provides an overview of Docker including:
- Docker allows building applications once and deploying them anywhere reliably through containers that provide resource isolation.
- Key Docker components include images, resource isolation using cgroups and namespaces, filesystem isolation using layers, and networking capabilities.
- Under the hood, Docker utilizes cgroups for resource accounting, namespaces for isolation, security features like capabilities and AppArmor, and UnionFS for the layered filesystem.
- The Docker codebase includes components for the daemon, API, image and container management, networking, and integration testing. Commonly used packages include libcontainer for namespaces and cgroups and packages for security, mounting, and networking.
Kubernetes Architecture and Introduction – Paris Kubernetes Meetup (Stefan Schimanski)
The document provides an overview of Kubernetes architecture and introduces how to deploy Kubernetes clusters on different platforms like Mesosphere's DCOS, Google Container Engine, and Mesos/Docker. It discusses the core components of Kubernetes including the API server, scheduler, controller manager and kubelet. It also demonstrates how to interact with Kubernetes using kubectl and view cluster state.
A small introduction to help you get started with Kubernetes as a user. It explains the main concepts like pods, deployments and services, and gives some hints to help you use the kubectl command.
These slides were presented in Grenoble Docker meetup in November 2017.
This document provides an overview of using Kubernetes to scale microservices. It discusses the challenges of scaling, monitoring, and discovery for microservices. Kubernetes provides a solution to these challenges through its automation of deployment, scaling, and management of containerized applications. The document then describes Kubernetes architecture and components like the master, nodes, pods, services, deployments and secrets which allow Kubernetes to provide portability, self-healing and a declarative way to manage the desired state of applications.
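The "declarative desired state" idea mentioned above is concrete enough to sketch: a controller repeatedly compares the desired state against what is actually running and takes steps toward convergence, which is also where self-healing comes from. A toy reconcile pass (an illustration of the pattern only, not Kubernetes code; the pod-naming scheme is invented):

```python
def reconcile(desired_replicas, running):
    """One pass of a toy replica controller.

    `running` is a set of pod names; returns the converged set.
    Real controllers watch the API server and act incrementally
    rather than converging in one call.
    """
    running = set(running)
    # Scale up: create numbered pods until the count matches.
    i = 0
    while len(running) < desired_replicas:
        candidate = "web-%d" % i
        if candidate not in running:
            running.add(candidate)
        i += 1
    # Scale down: delete arbitrary surplus pods.
    while len(running) > desired_replicas:
        running.pop()
    return running

pods = reconcile(3, set())       # scale up from nothing
assert len(pods) == 3
pods = reconcile(2, pods)        # convergence works in both directions
assert len(pods) == 2
```

If a node dies and takes a pod with it, the next reconcile pass sees the shortfall and recreates the pod, which is the essence of self-healing.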
This document discusses Kube-AWS, which is a tool for deploying Kubernetes clusters on AWS. It outlines the design goals of creating artifacts that are secure, reproducible, and auditable. It then demonstrates "under the hood" how Kube-AWS works by initializing a cluster configuration, rendering assets, deploying the cluster, exporting the deployment details, and making changes to reproduce the cluster. Recent work is noted along with future plans.
Overview of Kubernetes and its use as a DevOps cluster management framework.
Problems with deployment via kube-up.sh, and improving Kubernetes on AWS via a custom CloudFormation template.
Practical Docker for OpenStack (Juno Summit - May 15th, 2014) (Erica Windisch)
This document discusses using Docker containers with OpenStack. It describes installing the Nova Docker compute driver plugin to enable launching and managing Docker containers via the OpenStack Nova API. The plugin allows spawning Docker containers from images in Glance and supports basic container operations. However, some Nova features like live migration and advanced Docker capabilities are not yet supported. Using Docker with Nova provides an alternative to Heat for container orchestration with OpenStack.
Kubernetes on AWS allows users to deploy and manage Kubernetes clusters on the AWS cloud infrastructure. It provides tools to create clusters across multiple AWS availability zones for high availability. Users can define Kubernetes objects like pods, services, and deployments using kubectl, and utilize AWS services like EBS volumes for persistent storage. The presentation demonstrated setting up a Kubernetes cluster on AWS using kube-up.sh, along with examples of using EBS volumes in pods through persistent volume claims. It also showed monitoring and managing applications running on the Kubernetes cluster deployed on AWS.
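The EBS flow described there follows the standard Kubernetes pattern: a PersistentVolumeClaim requests capacity, the cluster binds it to an EBS-backed PersistentVolume, and a pod mounts the claim by name. A minimal sketch (the names, image, and sizes are placeholders, not taken from the presentation):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes: ["ReadWriteOnce"]   # an EBS volume attaches to one node at a time
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: db
spec:
  containers:
  - name: postgres
    image: postgres:9.6
    volumeMounts:
    - name: data
      mountPath: /var/lib/postgresql/data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: data-claim
```

Because the pod references the claim rather than a specific volume, the data survives pod rescheduling: a replacement pod re-mounts the same claim.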
KubeCon EU 2016: A Practical Guide to Container Scheduling (KubeAcademy)
Containers are at the forefront of a new wave of technology innovation but the methods for scheduling and managing them are still new to most developers. In this talk we'll look at the kind of problems that container scheduling solves and at how maximising efficiency and maximising QoS don't have to be exclusive goals. We'll take a behind the scenes look at the Kubernetes scheduler: How does it prioritize? What about node selection and external dependencies? How do you schedule based on your own specific needs? How does it scale and what’s in it both for developers already using containers and for those that aren't? We’ll use a combination of slides, code, demos to answer all these questions and hopefully all of yours.
Sched Link: http://sched.co/6BZa
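The scheduler questions posed above (node selection, prioritization) boil down to a two-phase loop: filter out infeasible nodes, then score the survivors and bind the pod to the best one. A stripped-down sketch of that shape (the real scheduler has many more predicates and priority functions; the field names here are invented for illustration):

```python
def schedule(pod, nodes):
    """Pick a node for `pod`: filter by fit, then score by spare CPU.

    pod:   {"cpu": millicores, "selector": {label: value}}
    nodes: list of {"name": ..., "cpu_free": millicores, "labels": {...}}
    """
    def feasible(node):
        # Predicate phase: enough CPU, and every selector label matches.
        fits = node["cpu_free"] >= pod["cpu"]
        matches = all(node["labels"].get(k) == v
                      for k, v in pod.get("selector", {}).items())
        return fits and matches

    candidates = [n for n in nodes if feasible(n)]
    if not candidates:
        return None  # no feasible node: the pod stays Pending
    # Priority phase: least-loaded scoring, prefer the most free CPU.
    return max(candidates, key=lambda n: n["cpu_free"])["name"]
```

Swapping the scoring function is how you'd encode your own needs, e.g. bin-packing for efficiency versus spreading for QoS, the trade-off the talk discusses.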
I am glad to share the presentation from the Kubernetes Pune meetup held on 29 July 2017. It drew a great response from the Pune community.
This document discusses using Docker to create a cluster networking setup with Weave. It begins by explaining the goals of automatically assigning unique IPs to containers without DHCP or an overlay network. It then describes setting up Weave to serve as the network bridge for Docker containers, so they are assigned IPs on the Weave overlay network and can communicate across hosts. The document outlines starting Weave, configuring Docker to use the Weave bridge, and ensuring containers on different hosts can connect. It concludes that this approach allows using native ports and container IPs without port management.
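The core trick described above, giving every container a unique IP without DHCP, is an IP-address-management (IPAM) problem: carve up a subnet and hand each container the next free address. A single-host sketch of the idea (Weave-style systems additionally coordinate address ranges across hosts, which this toy allocator does not attempt):

```python
import ipaddress

class SubnetAllocator:
    """Hand out unique container IPs from one subnet, no DHCP needed."""

    def __init__(self, cidr):
        # hosts() yields usable addresses, skipping network/broadcast.
        self.hosts = ipaddress.ip_network(cidr).hosts()
        self.allocated = {}  # container id -> ip

    def allocate(self, container_id):
        ip = str(next(self.hosts))
        self.allocated[container_id] = ip
        return ip

alloc = SubnetAllocator("10.32.0.0/24")
assert alloc.allocate("web") == "10.32.0.1"
assert alloc.allocate("db") == "10.32.0.2"
```

With every container directly addressable on the overlay subnet, services can listen on their native ports and no host-port mapping is required, which is exactly the benefit the document concludes with.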
The document introduces Docker and how to use it to containerize applications. It discusses building a Docker image from a Dockerfile that includes an app and its dependencies. It shows how to run the containerized app and interact with it. Finally, it discusses potential uses of Docker like deploying apps, continuous integration testing, and sharing development environments through containers.
Build Your Own CaaS (Container as a Service) (HungWei Chiu)
In these slides, I introduce Kubernetes and show by example what CaaS is and what it can provide.
Besides that, I also introduce how to set up continuous integration and continuous deployment for the CaaS platform.
This document discusses integrating Docker containers with OpenStack Heat for orchestration. It describes Docker and Heat separately, then introduces a Docker plugin for Heat that allows Heat templates to directly control Docker containers. The plugin enables Heat to orchestrate Docker containers similarly to how it orchestrates virtual machines. A demo is provided of launching a WordPress+MySQL stack on Docker containers using Heat's orchestration capabilities.
The Nova driver for Docker has been maturing rapidly since its removal from mainline in Icehouse. During the Juno cycle, substantial improvements have been made to the driver, and greater parity has been reached with other virtualization drivers. We will explore these improvements and what they mean to deployers. Eric will additionally showcase scenarios for deploying OpenStack itself inside and underneath Docker to power traditional VM-based computing, storage, and other cloud services. Finally, users should expect a preview of the planned integration with the new OpenStack Containers Service effort to provide automation of advanced container functionality and Docker-API semantics inside of an OpenStack cloud.
Note that the included Heat templates are NOT usable. See the linked Heat resources for viable templates and examples.
Docker orchestration using CoreOS and Ansible - Ansible IL 2015 (Leonid Mirsky)
The last couple of years have seen an increasing interest in Docker and related technologies. One of these technologies is CoreOS, a new operating system built from the ground up for running Docker containers at scale.
In this talk we will learn about CoreOS's main concepts and tools. We will get our hands dirty as we work together toward the goal of running a CoreOS cluster on AWS (using Ansible) and running Docker containers on it.
The talk will conclude with a discussion on the place of Ansible (and configuration management tools in general) in the "next-generation" stack.
This document discusses various options for deploying solid state drives (SSDs) in the data center to address storage performance issues. It describes all-flash arrays that use only SSDs, hybrid arrays that combine SSDs and hard disk drives, and server-side flash caching. Key points covered include the performance benefits of SSDs over HDDs, different types of SSDs, form factors, deployment architectures like all-flash arrays from vendors, hybrid arrays, server-side caching software, virtual storage appliances, and hyperconverged infrastructure systems. Choosing the best solution depends on factors like performance needs, capacity, data services required, and budget.
Puppet – Make stateful apps easier than stateless (Starcounter)
Stateful apps are considered hard and impractical. The truth is the opposite! With the right technology, you can develop a thick-client SPA whose state is controlled entirely on the server. Forget countless lines of glue code and callback hell. Welcome to the DRY world of JSON-Patch and PuppetJS!
A talk I gave about Pets versus Cattle and the pros and cons of this approach going forward. TL;DR: having more cattle than pets will make the datacenter more efficient, shift the burden of uptime towards more of a DevOps role, and provide a smoother development and deployment model. Let's do this!
Why should I care about stateful containers?
1. Why should I care about stateful containers?
Luke Marsden
CTO, ClusterHQ
@lmarsden @clusterhq
2. Microservices are smashing up monolithic databases
Many more database instances, many more flavours
[Diagram: copies of the app across environments — prod, staging in cloud 2, DR in cloud 3, hosted CI, dev laptop]
17.
Servers | Volumes | What is it?
Pets    | Pets    | just crap
Cattle  | Pets    | storage orchestration
Cattle  | Cattle  | ephemeral stateful containers

Pet ~= HA provided by infrastructure/platform
Cattle ~= HA provided by application
18. Pros & cons
Storage orchestration
• Run any database/stateful app
• Storage layer responsible for data resilience
• Ops independent of database types
• Works within a storage zone
Ephemeral stateful containers
• Requires distributed database(s)
• Database-defined data resilience
• Ops requires understanding each data service
• Can span storage zones
40. Try Flocker today
For stateful microservices
luke@clusterhq.com
clusterhq.com
github.com/clusterhq/flocker
We are hiring in Bristol (1 hour hop from Schiphol) + SF Bay Area!
clusterhq.com/careers
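The slide above points at Flocker as one storage-orchestration option. As a hedged sketch (not taken from the deck itself): with a volume plugin installed, the Docker CLI of that era could attach plugin-managed volumes roughly like this. The volume and image names are illustrative, and it assumes the Flocker agent is running on the hosts.

```shell
# create a named volume managed by the Flocker driver
# (the "docker volume" subcommand arrived in Docker 1.9)
docker volume create --driver flocker --name pgdata

# run a database container whose data lives in the plugin-managed volume
docker run -d --name db \
  --volume-driver flocker \
  -v pgdata:/var/lib/postgresql/data \
  postgres:9.4
```

If the container is later rescheduled onto another node, a driver like Flocker can reattach or migrate the named volume — that is the "storage orchestration" approach, as opposed to relying on a distributed database to replicate its own data.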
Editor's Notes
hi i’m luke from clusterhq and today i’m going to try and convince you that it’s worth considering deploying databases and other stateful workloads in containers (if you’re not already!) and talk about some ways of doing that.
but first let me tell you a story
1. In the beginning there were monolithic apps.
They typically had one database and we knew where it was.
It was in that rack over there and it was powered by Oracle or MySQL or whatever.
The data was in one flavour and it was in one place.
2. But now we’re building applications as microservices using containers. Each microservice handles its own data and that data now comes in lots of different flavours.
Developers are being encouraged to “use the best tool for the job” for each microservice. You might have some MongoDB, ElasticSearch, MySQL, PostgreSQL, Cassandra, Redis.
So an app that would have been built as a monolith is now being built as 30 separate components.
So as microservices and containers spread, we’re seeing at least an order of magnitude more data services popping up in enterprises all over the planet.
It gets worse: there isn't just one instance of your app.
4. You’ll want an entire staging copy of your app.
5. You have microservices popping up ephemerally on developers’ laptops.
6. You want a DR plan for what happens if Amazon goes south.
7. And you want continuous integration.
8. You’ve got all these copies of your microservices, all with separate but related silos of data — it’s lots of different parts to manage.
9. So what I want you to take away from this slide is that microservices are smashing monolithic databases into a large number of different types of databases.
10. Folks doing microservices therefore have many more database instances, in many more flavours (different types of databases). and they need to be able to deploy these data services alongside their applications throughout all stages of their SDLC and across different environments, and they need to be able to manage them.
so the first objection i normally hear when i talk about stateful containers is: aren't containers meant to be stateless?
after all, aren’t we all building 12-factor apps now? we don’t put data on the filesystem of the application containers, that means we can scale our apps by spinning up many of them, and scale them down just by blowing them away.
that’s true and 12 factor is great.
but doesn’t mean that applications don’t have data. quite the opposite in fact.
applications are a set of microservices, microservices are sets of containers and data services.
every application has data at its core.
it’s just that 12-factor convinced us to think of data services as external to our applications.
but that was then and this is now.
but i believe the platforms we’re building to support microservices and containers should not be limited to just stateless apps. here are some reasons to embrace the stateful container…
the first problem is that if you don’t support stateful containers, but you need to run databases (and who doesn’t) then you end up with not one but two platforms.
if you manage your stateless parts in one way, say with mesos or k8s or swarm,
and your stateful components separately, perhaps using virtualization or databases on bare metal,
then you end up with two or more separate sets of systems to manage.
as a devops team, we don't want to have to talk to more than one system with more than one API just to stand up a new production database for a microservice.
i want to just be able to include the database in my docker compose file or marathon/k8s manifest and deploy it on the prod cluster! and then as far as possible have the database look after itself.
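That wish can be sketched concretely. A hypothetical docker-compose file (v1-era syntax, current when this talk was given) that declares the database next to the app and hands its data volume to a plugin; the service names, images and volume name are made up for illustration, and it assumes the Flocker plugin is installed on the hosts:

```yaml
# docker-compose.yml (v1-era syntax) — illustrative sketch only
web:
  image: myapp:latest          # hypothetical application image
  links:
    - db
db:
  image: postgres:9.4
  volume_driver: flocker       # hand the volume to the Flocker plugin
  volumes:
    - postgres-data:/var/lib/postgresql/data
```

One `docker-compose up` then deploys the stateful and stateless parts together, which is the single-platform story the note is arguing for.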
the other reason - apart from not wanting to have multiple platforms, is that containers promise that your app will run in the same exact environment on your laptop as it will in production.
but that’s problematic when you introduce third party — or separately-managed — data services because now it’s not the same version of, for example, mysql running there, in fact it’s not even necessarily *mysql*.
for example, you can get something that looks a bit like mysql or postgresql from AWS or GCE
but they’re not actually mysql or postgresql, they’re patched, or could even be completely different software, so they might behave differently at runtime to how the db ran on your laptop, and they can only be provisioned via different APIs, which brings me on to…
the promise of containers is portability!
the ability to run an app or a microservice locally on your laptop, and then be able to run that exact same app in a production environment.
this whole movement around docker, containers, mesos, k8s, swarm is that you should be able to deploy apps in the same way where-ever they’re running.
that’s why we’re all trying to layer this portable infrastructure on top of different clouds: it means you can have the same service APIs accessible on top of any infrastructure, be that bare metal, vSphere, AWS or GCE!
by deploying on this it means you don’t get tied into any specific cloud APIs.
i believe in a single platform for stateful and stateless components, and so what we’re working on at clusterhq is making it easy to spin up stateful services under your choice of platform with off the shelf open source components.
ok, so who’s in? who thinks we should consider trying to run our entire apps in containers on a portable infrastructure, including the data services?
next i want to look at the historical evolution of infrastructure architecture, and from that try to derive some options for *how* we should run stateful containers.
so who’s heard of cattle vs. pets?
<describe idea>
pretty obvious when applied to stateless apps
ain’t got no time for individual pet app servers or VMs…
but less obvious when applied to stateful components - arguably your data “is” a pet. you certainly care about your production database! so there are two approaches to dealing with the fact that shit happens - which i will outline…
on the left we’ve got the truth table of servers & volumes
back in the 90s, and before that even, we had computers with disks in them.
they were pet servers with pet data.
we cared about the servers and their RAID arrays and we nursed them back to health if they were sick.
some storage companies came along and invented storage boxes. and then when vmware started taking off, it was because it allowed us to take these pet servers and, by virtualising them, basically have vmware start automatically looking after them. so when a computer broke, the vmware cluster would bring back the VMs that were on that host. this relied on expensive shared storage hardware, but it was worth it because now when a disk fails or a cpu fries or a cleaner trips over a power cable, you don’t get paged in the middle of the night. vmware would just spin up the vm on another node.
then the cloud happened
abstracted away virtualisation behind a service boundary
and notice that VMs stopped being HA
at first EC2 supported only ephemeral storage, but users couldn’t hack it - there was much demand for data which persists beyond the lifetime of a VM.
so they invented EBS which looks after your storage independently from the instances
EBS tries to keep your data safe, whereas your EC2 VMs can be killed at any time.
storage and compute are fundamentally quite different types of things!
coming back into the datacenter, remember we had SANs and VMs..
<explain the diagram>
there has recently been a shift away from big expensive SANs towards what people are calling “hyperconverged” or “software defined storage” and this is starting to become mature and popular in the enterprise.
if you set up VMware vSAN or OpenStack with Ceph or ScaleIO it will look something like this
note that this could be an implementation strategy for a cloud!
it’s worth shouting out to distributed databases here
they treat both servers and volumes as ephemeral, which is a difficult thing to do.
they have two important attributes: they allow write workloads to scale out, which is hard, and they treat everything down to the bottom layer - servers and volumes alike - as cattle for purposes of resilience. so you can take cassandra, for example, and run it straight on unreliable hardware and unreliable storage. this is great if your database supports it and does a good job of it, but you only need to google “aphyr call me maybe” to see this doesn’t always work that well.
it’s also possible to run databases, including distributed databases, on top of software defined storage or cloud storage or SANs. then, if you want to run multiple databases on a pool of servers - some which are good at automatic replication and failover, and some which are less good at that - you can run them all on top of SDS. so if a computer fails in this scenario, SDS will keep the data in that db or shard safe. but at the moment there’s a mismatch - non-distributed databases won’t come back automatically; you’d have to go in manually to bring them back.
now with our product flocker, it’s starting to become possible to run databases in containers underneath container schedulers like mesos, k8s or docker swarm. and in this mode, both distributed and singleton databases can safely coexist on top of this reliable storage layer, we can still get the benefits of scale-out we get from the distributed databases, but we can also get consistent data safety from the underlying replicated block storage layer whichever databases we’re running.
so here’s the truth table for pets versus cattle for compute and storage
to define our terms, by pet we mean <explain>
by cattle we mean <explain>
so we’ve already agreed that if both servers and volumes are pets that’s crap.
if servers are cattle but volumes are pets, i’m going to call that storage orchestration (in fact adrianco coined this term)
and if servers are cattle and volumes are cattle as well, which is apparently how netflix operates, then let’s call this ephemeral stateful containers.
i’m not going to bother defending pets, but i do want to take a minute to compare and contrast storage orch with ephemeral stateful containers.
<read the slides>
note on spanning storage zones - you may want to use a combination of these approaches as you look at different types of failure modes. within a storage zone, a more common failure such as a node going down, you may want to handle by using storage orchestration like flocker. across storage zones or even regions, like if EBS goes down in us-east-1a or you lose a whole zone, you’ll need to rely on replication performed at the database layer. and because of the latencies involved in doing cross-region replication, you may need to relax your consistency requirements to eventual consistency or something like that in those really large scale deployments.
fairly straightforward.
<explain>
this worked well enough for netflix, so if you’re ok using local storage and you really trust every data service you need to operate, you can do it this way.
this is an entirely viable strategy and it will work for people who are careful about their choice of databases.
but if you’re looking for operational consistency in the way you manage your stateful containers from an ops perspective, you’ll want to look at storage orchestration which you can do with flocker. for the rest of this talk i’m going to talk about how to do this — treating the volumes like pets while you treat the servers like cattle. we’re going to look at how flocker, the project we’re working on at clusterhq, helps you connect reliable storage to containers on unreliable compute, and some of the beneficial use cases you can get from that.
so why should data be pets when servers are cattle?
well applications are fundamentally lightweight, they scale out, they can come and go.
data is heavy.
when you’re building a platform, and you want developers to be able to throw applications at it - including the stateful bits, like the microservices we saw at the beginning - it’s far simpler to be able to assume some level of resilience in the storage layer than to require that every single data service provides availability itself.
in fact a distributed scale-out block storage solution (EBS, ceph, scaleio) often does a better job than *all* the databases you want to run on a platform.
so what does flocker do?
<describe problem with docker>
<describe solution with flocker>
<describe diagram>
so keep in mind here a fundamental principle of stateless and stateful things. stateless things can scale out - great - got more load on your web tier? throw more instances at it. this is basically a solved problem.
but with stateful things, each singleton database, or each shard of a distributed database, reads and writes to a single filesystem. that filesystem should only be mounted in one place at a time, so there must only be one instance of it. if it dies, the container framework should bring it back up. so in mesos it’s a task, in k8s it’s a service with a replication controller set to 1 copy. and swarm is still working on rescheduling tasks from failed nodes.
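as a sketch, the k8s side of that could look something like this - a singleton database expressed as a replication controller pinned to exactly one replica (names and image are illustrative):

```yaml
# hypothetical kubernetes manifest: a singleton database as a
# replication controller with exactly one replica, so at most
# one instance ever mounts the database filesystem
apiVersion: v1
kind: ReplicationController
metadata:
  name: db
spec:
  replicas: 1            # the one-instance invariant lives here
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: postgres
        image: postgres:9.4
```

if the node running the pod dies, the replication controller brings it back up elsewhere - which is exactly when you need the volume to follow it.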
but that doesn’t mean you can’t scale out data services if you choose the right software. remember earlier i spoke about distributed databases - from the platform’s point of view each shard of a distributed database is a separate container, with its own independent filesystem.
with flocker, it becomes possible to run any stateful containers in your container cluster and manage them sensibly, because your containers don’t get detached from the data they’re operating on.
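for example, with the docker volume plugin interface, wiring a container to a flocker-managed volume can be a one-line change in a compose file (the volume name here is an assumption for illustration):

```yaml
# hypothetical compose v1 snippet: the database's volume is managed
# by the flocker volume driver instead of local host disk, so if the
# container is rescheduled, flocker moves the data with it
db:
  image: postgres:9.4
  volume_driver: flocker
  volumes:
    - postgres-data:/var/lib/postgresql/data
```

the key design point is that the named volume, not the host, is the source of truth, so the container and its data stay attached to each other.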
so now i want to talk about some use cases
so to summarize, at clusterhq we’re connecting the container universe to the storage universe.
because while we believe that ephemeral stateful containers are doable
we believe that the operational benefits of being able to treat containers and their data as atomic units, moving them around and failing them over, are worth it.
and the customers we speak to are more often than not using EBS or SAN or SDS and so why not connect your storage to your containers?
thanks! so i’d just like to encourage you to try flocker today: you can go to our website now and spin up a 3-node environment which will let you play with it in minutes, then try out our integrations with docker swarm, compose and mesos, with k8s coming soon.
also, we’re hiring in bristol, which is just a 1-hour hop from schiphol, in sunny england - so if you’re interested in coming and helping build the data layer for containers, get in touch!
questions?