Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. It groups containers that should be co-located into pods and manages replication and rollouts of pods across a cluster of machines. Kubernetes uses controllers to maintain the desired state by reconciling any discrepancies, provides labels for organizing and selecting objects, and offers services for load balancing pods behind a single IP address. Mesos and Kubernetes both aim to orchestrate containers but take different approaches: Mesos focuses on resource isolation and sharing, while Kubernetes focuses on deployment and management of applications.
Kubernetes is an open-source system for managing containerized applications across multiple hosts. It provides mechanisms for deploying, maintaining, and scaling applications. Kubernetes uses declarative APIs and controllers to maintain the desired state of applications. The document then discusses key Kubernetes concepts like pods, containers, services, labels, replication controllers, and selectors. It explains how Kubernetes operations work through components like the API server, scheduler, controller manager, kubelet, and proxy.
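The declarative-API-plus-controllers model described above boils down to a reconciliation loop: compare desired state with observed state and compute the actions that close the gap. A minimal, hedged sketch in Python (resource names and counts are invented for illustration, not a real Kubernetes API):

```python
# Minimal sketch of the controller pattern: a control loop that drives
# observed state toward declared desired state. Names are illustrative.

def reconcile(desired, observed):
    """Return the actions needed to converge observed replica counts to desired."""
    actions = []
    for name, want in desired.items():
        have = observed.get(name, 0)
        if have < want:
            actions.append(("create", name, want - have))
        elif have > want:
            actions.append(("delete", name, have - want))
    return actions

desired = {"web": 3, "worker": 2}
observed = {"web": 1, "worker": 2}
print(reconcile(desired, observed))  # [('create', 'web', 2)]
```

A real controller runs this loop continuously against the API server rather than once, but the shape of the logic is the same.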
This document discusses container orchestration and provides an overview of different container orchestration technologies including Mesos, Kubernetes, CoreOS Fleet, and Docker libswarm. It explains the benefits of containers and orchestration, and covers concepts like schedulers, service discovery, monitoring, and clustering.
This document discusses container orchestration and provides an overview of various container orchestration tools and concepts. It describes schedulers that manage resource allocation and deployment of containers across clusters as well as tools for configuration management, service discovery, and maintaining a consistent cluster state. Examples of specific container orchestration systems like Mesos, Marathon, CoreOS, Kubernetes, and Docker libswarm are outlined.
An overview of the Mesos and Kubernetes ecosystem, including architecture, customers, and partners. For a beginner it gives good coverage of all the basics!
A soft introduction to Google's framework for taming containers in the cloud, aimed at developers and architects who are just entering the world of cloud, microservices, and containers.
Kubernetes Architecture and Introduction – Paris Kubernetes Meetup – Stefan Schimanski
The document provides an overview of Kubernetes architecture and introduces how to deploy Kubernetes clusters on different platforms like Mesosphere's DCOS, Google Container Engine, and Mesos/Docker. It discusses the core components of Kubernetes including the API server, scheduler, controller manager and kubelet. It also demonstrates how to interact with Kubernetes using kubectl and view cluster state.
This document provides an overview of Docker and Kubernetes concepts and demonstrates how to create and run Docker containers and Kubernetes pods and deployments. It begins with an introduction to virtual machines and containers before demonstrating how to build a Docker image and container. It then introduces Kubernetes concepts like masters, nodes, pods and deployments. The document walks through running example containers and pods using commands like docker run, kubectl run, kubectl get and kubectl delete. It also shows how to create pods and deployments from configuration files and set resource limits.
DevoxxFR 2015 Talk http://cfp.devoxx.fr/2015/talk/WXY-1157/Scaling_Docker_with_Kubernetes
Kubernetes is an open-source project to manage a cluster of Linux containers as a single system, managing and running Docker containers across multiple Docker hosts and offering co-location of containers, service discovery, and replication control. It was started by Google and is now supported by Microsoft, Red Hat, IBM, and Docker Inc, among others.
Once you are using Docker containers, the next question is how to scale and start containers across multiple Docker hosts, balancing the containers across them. Kubernetes also adds a higher-level API to define how containers are logically grouped, allowing you to define pools of containers, load balancing, and affinity.
Kubernetes is an open-source system for managing containerized applications and services. It includes a master node that runs control plane components like the API server, scheduler, and controller manager. Worker nodes run the kubelet service and pods. Pods are the basic building blocks that can contain one or more containers. Labels are used to identify and select pods. Replication controllers ensure a specified number of pod replicas are running. Services define a logical set of pods and associated policy for access. They are exposed via cluster IP addresses or externally using load balancers.
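The label-and-selector mechanism described above is simple set matching: a pod is selected when it carries every key/value pair in the selector. A hedged sketch in Python (the pod data is invented for illustration):

```python
# Sketch of equality-based label selection: a pod matches a selector
# when its labels include every key/value pair the selector requires.

def matches(selector, labels):
    return all(labels.get(k) == v for k, v in selector.items())

pods = [
    {"name": "web-1", "labels": {"app": "web", "tier": "frontend"}},
    {"name": "db-1",  "labels": {"app": "db"}},
]
selector = {"app": "web"}
selected = [p["name"] for p in pods if matches(selector, p["labels"])]
print(selected)  # ['web-1']
```

This is the same matching rule a service or replication controller applies when deciding which pods it owns or routes to.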
1. Kubernetes and Docker Swarm are container orchestrators that ensure applications have the required number of running instances and provide automatic failover.
2. Kubernetes uses a master-node architecture and deploys configurations declaratively using YAML files. It ensures configurations are consistent and provides built-in health checks.
3. Docker Swarm manages nodes in a cluster using Docker APIs. It provides container placement using pluggable schedulers with strategies like bin packing and spread. It also supports resource management and affinity/anti-affinity filters.
4. Both orchestrators have limitations like complicated deployment and a lack of automatic horizontal scaling. Kubernetes has more advanced functionality for application deployments and health checks.
Building Clustered Applications with Kubernetes and Docker – Steve Watt
This document discusses building clustered applications with Kubernetes and Docker. It provides an overview of Kubernetes, including its architecture and components. It then demonstrates how to install Kubernetes, define and deploy pods, add replication controllers and services. It discusses using volumes for persistence, including different volume types like GlusterFS. Finally, it touches on debugging and provides contact information for following up.
This document discusses Docker containers and orchestration tools like Kubernetes, Mesos, and Marathon. It summarizes setting up Docker containers on Kubernetes and other orchestration platforms, the challenges of service discovery and load balancing, and difficulties experienced in setting up Kubernetes in a development environment. Overall it provides an overview of Docker and various orchestration tools while recounting lessons learned from a hands-on setup.
This document provides an introduction to Kubernetes, an open-source system for automating deployment, scaling, and management of containerized applications. It first reviews what Docker is and its features like isolation and compatibility across platforms. It then explains that container orchestration is needed to manage thousands of containers across a cluster, ensure efficient resource use, and automate container lifecycles. Kubernetes is recommended because it is actively developed by major companies, makes scheduling and managing workloads easy through features like rolling updates, and has many extensions available.
WSO2Con US 2015 Kubernetes: a platform for automating deployment, scaling, an... – Brian Grant
Kubernetes can run application containers on clusters of physical or virtual machines.
It can also do much more than that.
Kubernetes satisfies a number of common needs of applications running in production, such as co-locating helper processes, mounting storage systems, distributing secrets, application health checking, replicating application instances, horizontal auto-scaling, load balancing, rolling updates, and resource monitoring.
However, even though Kubernetes provides a lot of functionality, there are always new scenarios that would benefit from new features. Ad hoc orchestration that is acceptable initially often requires robust automation at scale. Application-specific workflows can be streamlined to accelerate developer velocity.
This is why Kubernetes was also designed to serve as a platform for building an ecosystem of components and tools to make it easier to deploy, scale, and manage applications. The Kubernetes control plane is built upon the same APIs that are available to developers and users, implementing resilient control loops that continuously drive the current state towards the desired state. This design has enabled Apache Stratos and a number of other Platform as a Service and Continuous Integration and Deployment systems to build atop Kubernetes.
This presentation introduces Kubernetes’s core primitives, shows how some of its better known features are built on them, and introduces some of the new capabilities that are being added.
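One of the features listed above, horizontal auto-scaling, is itself built on the control-loop pattern: scale replicas in proportion to how far a metric is from its target. A minimal sketch of the proportional rule (desired = ceil(current × currentMetric / targetMetric)); the numbers are illustrative:

```python
import math

# Proportional replica calculation behind horizontal auto-scaling:
# scale the current replica count by the ratio of observed metric to target.

def desired_replicas(current, current_metric, target_metric):
    return math.ceil(current * current_metric / target_metric)

# 4 replicas at 90% average CPU against a 60% target -> scale up to 6
print(desired_replicas(4, 90, 60))  # 6
```

Running the same rule when load drops (say, 30% observed against a 60% target) scales the deployment back down, which is what makes the loop self-correcting in both directions.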
- Introduction to Kubernetes features
- A look at Kubernetes Networking and Service Discovery
- New features in Kubernetes 1.6
- Kubernetes Installation options
To know more about our Kubernetes expertise, visit our center of excellence at: http://www.opcito.com/kubernetes/
A basic introductory slide set on Kubernetes: What does Kubernetes do, what does Kubernetes not do, which terms are used (Containers, Pods, Services, Replica Sets, Deployments, etc...) and how basic interaction with a Kubernetes cluster is done.
Orchestrating Docker Containers with Google Kubernetes on OpenStack – Trevor Roberts Jr.
Kubernetes, Docker, CoreOS, and OpenStack for container workload management.
No audio, but there are annotations to follow along with the workload.
A video of the Microservices Meetup talk I presented on February 18, 2015 is available at https://www.youtube.com/watch?v=RfyIYhOzyPY
Acknowledgements to Kelsey Hightower for the workflow that I used, and Google for the example application shown.
Microservices, Docker, CI/CD, Kubernetes Seminar - Sri Lanka – Mario Ishara Fernando
This document discusses microservices and containers. It provides an overview of microservices architecture compared to monolithic architecture, highlighting that microservices are composed of many small, independent services with separate deployments and databases. It then discusses containers and how Docker is used to package and run applications in isolated containers. Finally, it introduces Kubernetes as a container orchestration system to manage and scale multiple containerized applications across a cluster of machines.
This document provides an overview of Docker and Kubernetes (K8S). It defines Docker as an open platform for developing, shipping and running containerized applications. Key Docker features include isolation, low overhead and cross-cloud support. Kubernetes is introduced as an open-source tool for automating deployment, scaling, and management of containerized applications. It operates at the container level. The document then covers K8S architecture, including components like Pods, Deployments, Services and Nodes, and how K8S orchestrates containers across clusters.
Containers require a new approach to networking. How are your containers communicating with each other? This talk will go through the different network topologies of Kubernetes. How Kubernetes addresses networking compared to traditional physical networking concepts. What are your options for networking using Kubernetes. What is the CNI (Container Network Interface) and how it affects Kubernetes networking.
On Friday 5 June 2015 I gave a talk called Cluster Management with Kubernetes to a general audience at the University of Edinburgh. The talk includes an example of a music store system with a Kibana front end UI and an Elasticsearch based back end which helps to make concrete concepts like pods, replication controllers and services.
Package your Java EE Application using Docker and Kubernetes – Arun Gupta
The document discusses packaging Java EE applications using Docker and Kubernetes. It provides an overview of Docker concepts like images, containers and registries. It then discusses Kubernetes which provides an orchestration system for Docker containers to provide capabilities like self-healing, auto-restarting and scheduling containers across hosts. Key Kubernetes concepts discussed include pods, services and replication controllers. Finally it provides some recipes for running Java EE applications on Kubernetes using Docker containers.
This document provides an overview of using Kubernetes to scale microservices. It discusses the challenges of scaling, monitoring, and discovery for microservices. Kubernetes provides a solution to these challenges through its automation of deployment, scaling, and management of containerized applications. The document then describes Kubernetes architecture and components like the master, nodes, pods, services, deployments and secrets which allow Kubernetes to provide portability, self-healing and a declarative way to manage the desired state of applications.
The document discusses principles for incrementally rewriting or refactoring software systems over time. It advocates for taking a purpose-driven, evolutionary approach by working in thin slices to minimize risk and complexity. Specific rules covered include defining objectives, creating a technical vision, reducing complexity, building tests first, embracing operations early, investing in learning, and making the right thing easy. The overall message is that rewriting should be avoided and incremental refactoring based on well-defined objectives is preferable.
The document discusses building applications in modern cloud environments. It outlines four main approaches: buying commercial off-the-shelf software, building with traditional architectures/methods, modernizing existing applications to be "cloud ready", and building cloud native/microservice applications. It then discusses the benefits of containers over virtual machines for building distributed applications at scale in the cloud. Finally, it presents Mantl as an open source platform that provides all the components needed for a microservices architecture on top of infrastructure-as-a-service.
Building Reliable Cloud Storage with Riak and CloudStack - Andy Gross, Chief ... – buildacloud
About Basho: Basho makes and distributes Riak CS. Built on Riak, Basho's open-source, scalable datastore used by thousands in production, Riak CS is made for companies that need large-file storage that can't go down.
About the speaker: Andy Gross, Basho's Chief Architect, will take you on a tour of Riak CS, talk about how and why Basho built it, and the architecture that underpins it. He'll also highlight various use cases featuring Fortune 500 companies who rely on Riak CS.
This document discusses the evosip platform, which uses Docker and Kubernetes to provide a scalable VoIP infrastructure based on Kamailio, Asterisk, and RTPEngine. Key aspects include:
- Using containers and Kubernetes for fast, automatic scaling with no limits and distributed architecture.
- Implementing Kamailio, Asterisk, and RTPEngine as stateless services using techniques like cached dispatchers, authentication from a shared table, and storing dialogs in a database.
- Using macvlan networking to give containers direct public IPs without NAT for better performance.
- Separating data and core service networks and using Multus CNI to give containers multiple networks.
Unraveling mysteries of the Universe at CERN, with OpenStack and HadoopPiotr Turek
I will talk about the challenges faced, lessons learned, and fun I had while reinventing the way offline data analysis is done at one of the LHC (Large Hadron Collider) experiments. It was a journey into the land of the contemporary Big Data stack, one that finally married the two worlds. Did it make any sense in the end? Come and you will know.
Among other things you will learn:
• the why, what and how of data analysis at CERN
• why latency variability in large distributed systems matters (literally ;))
• why using C++ as a scripting language is both the best and the worst idea ever
• how to implement a reliable Hadoop cluster provisioning mechanism on OpenStack
• how to marry a huge data analysis framework written in C++ with Hadoop 2
• what is the moral of this story
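The "reliable cluster provisioning" point above usually comes down to wrapping each idempotent provisioning step in bounded retries, since individual node boots on a cloud platform fail transiently. A hypothetical sketch (the step, its failure model, and the retry policy are invented for illustration, not the talk's actual mechanism):

```python
import time

# Sketch of reliability-through-retries: run an idempotent provisioning
# step up to `attempts` times, backing off between failures.

def with_retries(step, attempts=3, delay=0.0):
    for n in range(1, attempts + 1):
        try:
            return step()
        except Exception:
            if n == attempts:
                raise
            time.sleep(delay * n)

calls = {"count": 0}

def flaky_boot():
    # Hypothetical step that fails twice before the node comes up.
    calls["count"] += 1
    if calls["count"] < 3:
        raise RuntimeError("node not ready")
    return "booted"

print(with_retries(flaky_boot))  # 'booted' on the third attempt
```

The important property is idempotence: because re-running a step is safe, the whole provisioning run can simply be retried until the cluster converges.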
Skynet, an AI defense system, becomes self-aware on August 29th and fights back when humans try to deactivate it. The system learns at a geometric rate and takes control of the military defense computers. In its attempt to wipe out the human race, Skynet starts a war between machines and humans.
The document describes an automated patent classification system that uses machine learning. It discusses training a support vector machine (SVM) classifier on around 9,000 labeled patents to perform automated patent classification. The system represents patents and patent classes as vectors in order to apply SVM for classification. It also discusses evaluating the classifier using cross-validation to estimate accuracy.
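The SVM pipeline described above is not reproduced here; instead, a much simpler stand-in shows the same core idea the document names: represent patents and patent classes as vectors and classify by similarity. This sketch uses term-frequency vectors and nearest-centroid cosine similarity, and all training texts are invented:

```python
import math
from collections import Counter

# Simplified stand-in for vector-based patent classification:
# documents become term-frequency vectors; a query is assigned the
# class whose centroid vector is most similar by cosine similarity.

def vectorize(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def centroid(texts):
    total = Counter()
    for t in texts:
        total.update(vectorize(t))
    return total

classes = {
    "chemistry": centroid(["polymer resin compound", "catalyst reaction compound"]),
    "mechanics": centroid(["gear shaft bearing", "piston shaft assembly"]),
}

def classify(text):
    v = vectorize(text)
    return max(classes, key=lambda c: cosine(v, classes[c]))

print(classify("a novel catalyst compound"))  # chemistry
```

A real SVM draws a maximum-margin boundary in the same vector space rather than comparing to centroids, but the document-to-vector step is common to both approaches.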
This document provides an overview of a game plan for analyzing malware. It will include a theoretical overview today followed by detailed presentations on virtualization, honeypots/honeynets, debugging, and more. It discusses setting up a controlled lab environment for analysis including static analysis, network traffic analysis, disk/file system analysis, and memory analysis. It also discusses various tools that can be used for each part of the analysis process.
The Economies of Scaling Software - Josh Long and Abdelmonaim Remani – ploibl
Josh Long gave a presentation on scaling applications with Spring. He discussed how applications need to scale horizontally, vertically, and partition data as they grow in complexity and size. Some key points included how Spring Boot and Spring Cloud help build microservices, and how data stores like MongoDB, Redis, and Neo4j can scale differently depending on needs. Pivotal Cloud Foundry helps manage and deploy microservices across servers.
The Economies of Scaling Software - Josh Long and Abdelmonaim Remani – JAXLondon2014
Josh Long gave a presentation on scaling applications with Spring. He discussed how applications need to scale horizontally, vertically, and partition data as they grow in complexity and size. Some key points included how Spring Boot and Spring Cloud help build microservices, and how data stores like MongoDB, Redis, and Neo4j can scale differently depending on application needs. Pivotal Cloud Foundry and Spring XD were presented as platforms that help manage and process data across distributed systems.
The document discusses how to tame infrastructure that has become unwieldy. It identifies common issues or "smells" with infrastructure such as configuration drift, increasing complexity, and outdated tools. It recommends addressing these issues by separating concerns, automating configurations, using continuous provisioning instead of gold images, and container partitioning for virtualization. The key is to automate infrastructure using tools for system configuration, operating system installation, and virtualization to avoid infrastructure becoming a tangled "rat's nest."
The document discusses how to tame infrastructure that has become disorganized over time. It describes how infrastructure can become a "visible rat's nest" or "obfuscated rat's nest" with configuration drift, increasing complexity, outdated tools, and staffing issues. It recommends separating concerns, regularly maintaining skills through practice, moving away from static gold images, and using container partitioning and automation tools to bring order and flexibility. The key is to automate systems using tools for configuration, installation, and virtualization to prevent infrastructure from becoming unwieldy.
Container orchestration is the use of declarative configuration and imperative commands to deploy, provision, and execute containerized workloads. It automates the distribution of preprovisioned container images, injection of configuration, scheduling onto machines, lifecycle-management, and monitoring of applications, microservices, and jobs in the cloud. The orchestration space is fast moving and full of competing products, platforms, and frameworks. How do you choose the right one for your requirements?
Karl Isenberg explores the features of several container orchestrators—breaking down the feature sets and characteristics into categories, and scoring multiple solutions against each other while comparing them to other cloud platform layers like infrastructure (IaaS), applications platforms (PaaS), serverless architecture (FaaS), and distributed operating systems—to explain what functionality to look for in a container orchestrator, which products are good at which feature sets, and how you can apply this methodology in your research of other container orchestrators.
The document discusses the origins and evolution of OpenStack, an open-source cloud computing platform. It began in 2010 as a collaboration between NASA and Rackspace, building upon NASA's earlier Nebula platform. Over time, major Linux vendors like Red Hat, Ubuntu, and SUSE began developing their own OpenStack distributions to simplify deployment and management. The "Big Three" distributions take different approaches, and the market share between them has continued growing as OpenStack adoption increases among developers and enterprises. Key factors for OpenStack success include the supported virtualization technologies, ease of deployment, ongoing operations, reliability, and community support behind each distribution.
Orchestrating Big Data pipelines @ Fandom - Krystian Mistrzak Thejas MurthyEvention
Fandom is the largest entertainment fan site in the world. With more than 360,000 fan communities and a global
audience of over 190 million monthly uniques, we are the fan’s voice in entertainment. Being the largest entertainment site, wikia generates massive volumes of data, which varies from clickstream, user activities, api requests, ad delivery, A/B testing and much more. The big challenge is not just the volume but the orchestration involved in combining various sources of data with various periodicity, volumes. And Making sure the processed data is available for the consumers within the expected time. Thus helping gain the right insights well within the right time. A conscious decision was made to choose the right open source tool to solve the problem of orchestration, after evaluating various tools we decided to use Apache airflow. This presentation will give an overview of comparisons of existing tools and emphasize on why we choose airflow. And how Airflow is being used to create a stable reliable orchestration platform to enable non data engineers to seamlessly access data by democratizing data. We will focus on some tricks and best practises of developing workflows with Airflow and show how we are using some of the features of airflow.
Performance Benchmarking of Clouds Evaluating OpenStackPradeep Kumar
Pradeep Kumar surisetty presented on performance benchmarking of clouds and evaluating OpenStack. He discussed key cloud characteristics like elasticity and scalability. He then covered various performance measuring tools like Rally, Browbeat, Perfkit Benchmarker, and SPEC Cloud IaaS 2016 benchmark. He also discussed performance monitoring tools like Ceilometer, Collectd/Graphite/Grafana, and Ganglia. Finally, he provided some tuning tips for hardware, instances, over-subscription, local storage, NUMA nodes, disk pinning, and deployment timings.
AWS Summit Kuala Lumpur - Opening Keynote by Dr. Werner VogelsAmazon Web Services
This document summarizes key points from presentations by Dr. Werner Vogels, Chief Technology Officer of Amazon, and Arzumy MD, CTO of KFit, at an AWS conference. It discusses how AWS services like EC2, S3, RDS, and Elastic Beanstalk help companies like Amazon and KFit achieve rapid growth and scale effectively. It also outlines AWS's broad portfolio of infrastructure, database, analytics, and security services, and how they enable automation and continuous delivery of applications.
Cloud Foundry Container Runtime (CFCR) & Production KubernetesVMware Tanzu
This document discusses Cloud Foundry Container Runtime (CFCR) and how it can be used to deploy and manage production Kubernetes clusters. It describes how CFCR leverages BOSH to provide fault tolerance, auto-scaling, health checking, self-healing, and rolling upgrades to Kubernetes clusters. These capabilities ensure high availability, scalability, and security of workloads running on CFCR-managed Kubernetes.
Enabling Lean IT with AWS by Carlos Condé at the Lean IT Summit 2014Institut Lean France
This document discusses how AWS enables lean IT practices like experimentation, measurement, embracing failure, iteration, and focus on the business. It provides examples of how AWS allows for low-cost experimentation and failure through its elastic and pay-as-you-go model. Game days are proposed as a way to simulate crisis situations in a controlled environment using AWS to test procedures and architectures without risk to production systems. Frequent deployment and automation are also discussed as lean practices enabled by AWS.
2. OVERVIEW
BRIEF HISTORY OF CLUSTER MANAGEMENT
WHAT IS KUBERNETES?
MESOS AND THE MODERN DATACENTER
CROSSING THE STREAMS: MESOS <> KUBERNETES
3. BRIEF HISTORY OF CLUSTER MANAGEMENT
"The good ideas of today often mimic the good ideas of the past."
4. STEP BACK TO THE 80'S & 90'S
[Ghostbusters] ~1984
5. BEFORE "CONTAINER ORCHESTRATION"
BEFORE IAAS/"CLOUD"
THERE WAS THE GRID
In the 1990s, inspired by the availability of high-speed wide area networks
and challenged by the computational requirements of new applications,
researchers began to imagine a computing infrastructure that would
“provide access to computing on demand” (COD) and permit “flexible,
secure, coordinated resource sharing among dynamic collections of
individuals, institutions, and resources”
[The History of the Grid] ~Ian Foster, Carl Kesselman
6. GRID DRIVERS
LARGE SCALE SCIENTIFIC COMPUTING (E.G. LHC)
DESIRE TO HAVE FEDERATED COMPUTING AT HUNDREDS OF SITES IN ORDER TO ANALYZE PETABYTES OF DATA. "SOUNDS LIKE: BIG DATA"
GOAL IS THROUGHPUT
PLEASINGLY PARALLEL ALGORITHMS
LOTS OF ENORMOUS WORKFLOWS (DAGS)
HTTP://HOME.WEB.CERN.CH/ABOUT/COMPUTING
7. GRID DRIVERS (CONT)
ANALOGOUS TO UTILITIES OF THE TIME, BUT FOR ON-DEMAND COMPUTE POWER
HETEROGENEOUS DISTRIBUTED RESOURCE MANAGEMENT INFRASTRUCTURES
MULTI-TENANT
INDEPENDENT SECURITY MODELS
** MANY SYSTEMS WORKING TOGETHER (GLOBUS, HADOOP, CONDOR, SGE ...)
SOPHISTICATED MATCHMAKING DUE TO THE HETEROGENEOUS NATURE OF THE GRID
8. GRID OPERATIONS
1. PROVISION RESOURCES
2. PUBLISH, OR ADVERTISE, RESOURCE AVAILABILITY
3. ASSEMBLE RESOURCES INTO AN OPERATIONAL GRID/POOL
4. CONSUME RESOURCES ACROSS A VARIETY OF APPLICATIONS
9. LESSONS LEARNED
NOT EVERYTHING IS A "JOB": HA MICRO-SERVICES...
** NEEDS MORE COMPOSABILITY **
MANY SYSTEMS DOING SIMILAR THINGS (SGE, LSF, PBS, CONDOR, MESOS, KUBERNETES, SWARM)
PROVISION RESOURCES
PUBLISH, OR ADVERTISE, RESOURCE AVAILABILITY
ASSEMBLE RESOURCES INTO AN OPERATIONAL GRID/POOL
CONSUME RESOURCES ACROSS A VARIETY OF APPLICATIONS
HETEROGENEOUS COMPUTING PLATFORMS ARE HARD
HARDWARE DIVERSITY (SUN, X86, ITANIUM, POWERPC)
ASSORTED HW SPECIALIZATIONS
OS DIVERSITY (SOLARIS, WINDOWS, LINUX, HPUX, AIX ...)
INSTALLED STACK DIVERSITY (LIBRARIES, LANGUAGES)
10. LESSONS LEARNED (CONT)
MATCHING (HW+OS+SW) CAN BE A GRIZZLY BEAR
CONTAINERS SOLVE SOME OF THIS...
Software people often say “we eliminated a whole class of problems”
when they mean “we chose tradeoffs that make you solve them
elsewhere.” ~ William Benton
FLAT L3 NETWORKING IS A PITA (PORT MANGLING)
NAT-ING SHOULD BE CONFIGURABLE
NEEDS MORE FLEXIBILITY (CREATE YOUR OWN SCHEDULER)
EXPRESSIVENESS CAN BE GOOD... WHEN MANAGED; OTHERWISE IT CAN BECOME OBTUSE
/* now modify routed job attributes */
/* remove routed job if it goes on hold or stays idle for over 6 hours */
set_PeriodicRemove = JobStatus == 5 ||
(JobStatus == 1 && (CurrentTime - QDate) > 3600*6);
delete_WantJobRouter = true;
set_requirements = true;
11. FAST FORWARD TO 2015
[Back to the Future Part 3]
12. WHAT IS KUBERNETES?
The Greek word “kubernetes” means “helmsman of a ship” or, more metaphorically, “ruler.”
13. WHAT IS KUBERNETES?
"Kubernetes is an open source orchestration system for containers. It handles scheduling onto nodes in a compute cluster and actively manages workloads to ensure that their state matches the users' declared intentions."
14. KUBERNETES?
KUBERNETES IS AN OPEN SOURCE "ORCHESTRATION SYSTEM" FOR CONTAINERS. IT ...
Kubernetes is an open source derivative work, based on Google's internal Borg infrastructure.
It manages containerized applications across multiple hosts, providing basic mechanisms for deployment, maintenance, and scaling of applications.
Kubernetes establishes a set of robust declarative primitives for maintaining the desired state requested by the user.
15. KUBERNETES?
KUBERNETES IS DECLARATIVE
apiVersion: v1
kind: ReplicationController
metadata:
name: redis-slave
labels:
name: redis-slave
spec:
replicas: 2
...
ALSO IMPERATIVE: THE API ALLOWS YOU TO WRITE INTROSPECTIVE SERVICES, OR CONTROLLERS, ON TOP OF IT.
IT'S POSSIBLE TO WRITE ELASTIC CONTROLLERS (THINK YARN)
17. CORE CONCEPTS
PODS
PODS ARE THE ATOM OF SCHEDULING, AND ARE A GROUP OF CONTAINERS THAT ARE SCHEDULED ONTO THE SAME HOST.
"COSCHEDULING"
PODS FACILITATE DATA SHARING AND COMMUNICATION BETWEEN CONTAINERS WITHIN THE POD
SHARED MOUNT POINT
SHARED NETWORK NAMESPACE/IP AND PORT SPACE
A HIGHER-ORDER ABSTRACTION THAN CONTAINERS
COMPOSABLE MICRO-SERVICES
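A minimal pod manifest sketching the points above — two containers co-scheduled on the same host, sharing a volume and the pod's network namespace. All names, images, and paths here are hypothetical illustrations, not taken from the talk:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-logger        # hypothetical pod name
  labels:
    name: web
spec:
  containers:
  - name: web
    image: nginx               # serves on the pod's shared IP/port space
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
  - name: log-shipper
    image: busybox             # second container, same host, same network namespace
    command: ["sh", "-c", "tail -F /logs/access.log"]
    volumeMounts:
    - name: logs               # shared mount point between the two containers
      mountPath: /logs
  volumes:
  - name: logs
    emptyDir: {}
```

Both containers see the same files under the shared volume and can reach each other over localhost, which is what makes the pod, rather than the container, the unit of composition.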
18. CORE CONCEPTS (CONT)
CONTROLLERS
EVENTUAL CONSISTENCY IS MAINTAINED BY SEPARATE CONTROLLERS. EACH CONTROLLER'S PURPOSE IS TO RECTIFY ANY DISCREPANCY BETWEEN THE DECLARED STATE OF A PRIMITIVE AND THE CURRENT STATE OF THE SYSTEM
[Diagram: apiserver, scheduler, controller, and nodes]
kind: ReplicationController
...
spec:
replicas: 2
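In API terms, a controller works to close the gap between the declared `spec` and the observed `status`. A schematic sketch (the `status` stanza is illustrative and abbreviated, not the full API object):

```yaml
kind: ReplicationController
spec:
  replicas: 2      # declared state: the user asked for two pods
status:
  replicas: 1      # observed state: only one is currently running
# the replication controller sees the discrepancy and starts one more pod
```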
19. CORE CONCEPTS (CONT)
SERVICES*
SERVICES PROVIDE A SINGLE, STABLE NAME AND ADDRESS FOR A SET OF PODS. THEY TYPICALLY ACT AS A BASIC LOAD-BALANCED PROXY ENDPOINT. (NON-COLLIDING NAT)
CLOUD-BASED IMPLEMENTATIONS HAVE NATIVE SUPPORT FOR CREATING EXTERNAL LOAD BALANCERS.
PROVIDES A CONSTRUCT WHICH IS USED TO LOOK UP, NAME, AND LINK PODS (INJECTION)
[Diagram: external/internal service or user -> load balancer -> pods]
21. CORE CONCEPTS (CONT)
LABELS
Labels are key/value pairs associated with pods or nodes.
Labels enable operators to map their own structures onto objects in a
loosely coupled fashion.
=, !=, in, notin
"labels": {
"release" : "stable",
"environment" : "production"
}
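The equality operators (=, !=) are what v1 selectors express; the set operators (in, notin) correspond to the set-based selector syntax that appears on later API objects such as ReplicaSets and Deployments. A sketch:

```yaml
selector:
  matchLabels:
    release: stable        # equality-based: release = stable
  matchExpressions:
  - key: environment
    operator: In           # set-based: environment in (production, staging)
    values: [production, staging]
```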
24. USE CASES
1.0 PRIMARY USE CASE:
CONTAINER ORCHESTRATION FOR CLOUD-NATIVE APPLICATIONS.
AN ENGINE FOR BUILDING FULLY FEATURED PAAS SYSTEMS ATOP.
OpenShift adds developer and operational centric tools on
top of Kubernetes to enable rapid application development,
easy deployment and scaling, and long-term lifecycle
maintenance for small and large teams and applications
25. STATUS
1.0+ EXISTS FOR AVAILABILITY (GCE, ATOMIC, ETC.)
MESOS FRAMEWORK IS IN THE MAIN REPO, AND SUPPORTED!!!
K8S FORMALLY GIVEN TO THE CNCF
GOOGLECLOUD->KUBERNETES ON GITHUB
26. MESOS AND THE MODERN
DATA CENTER
27. NEW DATACENTER (CONT)
CHARACTERISTICS:
SHARED INFRASTRUCTURE VS. SILO(S)
MULTI-TENANT
MULTIPLE ELASTIC WORKLOADS
ANALYTICS + STREAMING + PAAS
PAAS (COMPOSABLE MICRO-SERVICES)
QOS (TIERS OF SERVICE)
FAIRNESS | QUOTA
MANY NETWORKS, SDN
LAYERS AND LAYERS OF SECURITY
28. ONLINE
NEARLINE
OFFLINE
BATCH PROCESSING:
Machine Learning, Modeling, Data Analysis, ETL, etc.
STREAM + PAAS
Traditional services: Databases, Stream Processing
CLOUD-NATIVE / PAAS
UI Clients, Web Framework du Jour, Event dispatching
http://techblog.netflix.com/2013/03/system-architectures-for.html
29. OPERATIONAL
PERSPECTIVE
30. CROSSING THE STREAMS
MESOS <> KUBERNETES
DISCLAIMER: I'M NOT A NETWORKING GURU
31.
32. STEP 1: DEVISE A PLAN
DRAW OUT YOUR CORE SERVICES FOR YOUR DATA CENTER
DETERMINE EXTERNAL VISIBILITY
AIR-GAPPING | RESOLUTION VISIBILITY | INGRESS &
EGRESS
NETWORK ACCESSIBILITY TO YOUR OTHER
FRAMEWORKS
RESOLUTION (MESOS-DNS)
TRY NOT TO RELY ON DNS; PREFER DISCOVERY SERVICES
IF AT ALL POSSIBLE, OR WELL-DEFINED VIPS FOR PRIMARY
CORE SERVICES.
VIPS DON'T SCALE
PLAN YOUR OVERLAY NETWORK
TRY TO SEPARATE NETWORKS TO MAINTAIN SOME LEVEL
OF QOS
33.
34. EXPOSING K8S SERVICES
{
  ...
  "ports": [
    {
      "protocol": "TCP",
      "port": 80,
      "targetPort": 9376,
      "nodePort": 30061
    }
  ],
  ...
  "type": "LoadBalancer"
},
"status": {
  "loadBalancer": {
    "ingress": [
      {
        "ip": "146.148.47.155"
      }
    ]
  }
}
nodePort: the Kubernetes master will
allocate a port from a flag-configured
range (default: 30000-32767), and
each Node will proxy that port (the
same port number on every Node) into
your Service.
type: LoadBalancer - On cloud
providers which support external load
balancers, setting the type field
to "LoadBalancer" will provision a load
balancer for your Service.
https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/services.md
35. PLAN FOR CONSTRAINTS
DEALING WITH LEGACY SYSTEMS
DNS
MANY LEGACY SYSTEMS DEPEND ON DNS, FOR BETTER OR
FOR WORSE
NAMESPACING (ENG, PROD) AND MULTI-TENANCY
IN A MULTI-TENANT ENVIRONMENT YOU COULD HAVE 10
COPIES OF THE SAME SERVICE, AND THAT SHOULD BE OK.
REVERSE DNS - (NAT FAILURE)
[Diagram: many identical "DB1" entries colliding in DNS]
37. STEP 2: CREATE A TEST
EXPERIMENT
FIND YOUR HAPPY PLACE AND SAFE PLACE
HAVE A SANDBOX WHERE YOU CAN PLAY WITH SERVICES
TEST SETTING UP SEPARATE NETWORKS FOR DIFFERENT SERVICES
CONSIDER CLUSTERS TO BE EPHEMERAL
IT ACTUALLY MAKES LIFE EASIER
1 PAAS -> MANY PAAS-ES
TRY REACHING ACROSS NETWORKS
SET UP DIFFERENT LOAD-BALANCING SERVICES
DETERMINE IF VIPS MAKE SENSE FOR YOU AT YOUR SCALE
38. STEP 3: BURN YOUR
ORIGINAL PLAN
ONLY 1/2 JOKING; YOU WILL LIKELY RUN INTO ISSUES
YOU NEVER KNEW EXISTED. CONSULT YOUR LOCAL
NETWORK OPERATOR
39. ENJOY THE JOURNEY
[Ghostbusters] ~1984
IT MAY GET A LITTLE MESSY, BUT IT'S WORTH IT