VMware and Pivotal’s Pivotal Container Service (PKS) is a container management platform that provides a Kubernetes container orchestration service. PKS runs Kubernetes clusters on vSphere and VMware Cloud Foundation, provides high availability, security, and multi-tenancy capabilities, and integrates deeply with NSX for network and security services.
Building Developer Pipelines with PKS, Harbor, Clair, and Concourse
VMware Tanzu
SpringOne Platform 2017
Thomas Kraus, VMware; Merlin Glynn, VMware
Today's developer needs to rapidly build and deploy code in a consistent, predictable, and declarative manner. This session will illustrate how companies can leverage PKS, Kubernetes, Harbor, Clair, and Concourse to achieve these goals. It will provide a solution overview for developing, building, and deploying applications using container technologies from VMware and Pivotal, along with a brief review of each technology discussed. It will then propose an end-to-end solution that combines these technologies to provide a better developer experience, and will conclude with a demonstration of a development workflow used to initially develop and then update an application running on PKS and Kubernetes.
Basics of Kubernetes on BOSH: Run Production-grade Kubernetes on the SDDC
Matt McNeeney
This document provides an overview of running production-grade Kubernetes on VMware's Software-Defined Data Center (SDDC) using BOSH and Pivotal Container Service (PKS). It begins with introductions and discusses the benefits of the SDDC for abstracting hardware resources. BOSH is introduced as a tool for deploying and managing distributed systems that provides capabilities for bundled releases, integration, and consistent deployments. Kubernetes is summarized as an open-source platform for container orchestration. Kubo and PKS are presented as solutions for deploying Kubernetes on BOSH that address challenges of configuration, tenancy, and isolation across teams. PKS provisions BOSH-managed Kubernetes environments through a service broker to provide each team
Pivotal Container Service (PKS) at SF Cloud Foundry Meetup
Cornelia Davis
Overview of Pivotal Container Service (PKS), built on the open source Cloud Foundry Container Runtime (CFCR). Covers what Kubernetes is, how PKS presents a complete platform that includes Kubernetes and much more, and key cloud principles.
Presented at the San Francisco-Bay Area Cloud Foundry meetup.
Pivotal Container Service (PKS) provides an enterprise-grade Kubernetes platform that can be deployed on any cloud infrastructure using the open source BOSH tool. PKS handles operations tasks like provisioning and upgrading Kubernetes clusters, integrates with VMware technologies for networking and security, and provides a centralized control plane for managing multiple clusters and tenants. It aims to deliver the benefits of Kubernetes to enterprises by adding capabilities for high availability, multi-tenancy, security and automation.
The document discusses VMware Enterprise PKS, a turnkey solution for deploying and managing Kubernetes clusters in production environments. It addresses common challenges in running Kubernetes at scale, such as complexity, networking, storage, monitoring, logging, and security. VMware Enterprise PKS provides capabilities like on-demand cluster provisioning, integration with NSX-T for networking and security, persistent storage options, and monitoring and logging tools from the VMware portfolio. The solution aims to simplify operations of Kubernetes and provide an enterprise-grade platform for running containerized workloads.
Kube Your Enthusiasm - Paul Czarkowski
VMware Tanzu
This document provides an overview of container platforms and Kubernetes concepts. It discusses hardware platforms, infrastructure as a service (IaaS), container as a service (CaaS), platform as a service (PaaS), and function as a service (FaaS). It then covers Kubernetes architecture and resources like pods, services, volumes, replica sets, deployments, and stateful sets. Examples are given of using kubectl to deploy and manage applications on Kubernetes.
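As a sketch of the kind of resource such kubectl examples manage, the snippet below builds a minimal Kubernetes Deployment manifest as plain data, the shape `kubectl apply -f` consumes. The app name and image (`hello-app`, `nginx:1.25`) are hypothetical placeholders, not from the talk.

```python
# Minimal sketch of a Kubernetes Deployment manifest built as plain data.
# The name and image are hypothetical; a real manifest would be serialized
# to YAML and applied with `kubectl apply -f`.

def make_deployment(name: str, image: str, replicas: int = 3) -> dict:
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            # The selector must match the pod template's labels.
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [
                        {"name": name, "image": image,
                         "ports": [{"containerPort": 80}]}
                    ]
                },
            },
        },
    }

deployment = make_deployment("hello-app", "nginx:1.25")
print(deployment["spec"]["replicas"])  # 3
```

The selector/template label pairing shown here is the detail such intro talks usually stress: a Deployment only manages pods whose labels match its selector.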
Cloud-Native Operations with Kubernetes and CI/CD
VMware Tanzu
Operations practices have historically lagged behind development. Agile and Extreme Programming have become common practice for development teams. In the last decade, the DevOps and SRE movements have brought these concepts to operations, borrowing heavily from Lean principles such as Kanban and Value Stream Mapping. So, how does all of this play out if we’re using Kubernetes?
In this class, Paul Czarkowski, Principal Technologist at Pivotal, will explain how Kubernetes enables a new cloud-native way of operating software. Attend to learn:
● what cloud-native operations are;
● how to build a cloud-native CI/CD stack; and
● how to deploy and upgrade an application from source to production on Kubernetes.
Presenter:
Paul Czarkowski, Principal Technologist, Pivotal Software
PKS: The What and How of Enterprise-Grade Kubernetes
VMware Tanzu
SpringOne Platform 2017
Cornelia Davis, Pivotal; Fred Melo, Pivotal
Because of its well-thought-out and powerful abstractions, robust and cloud-native architecture, and the vibrant community around it, the use of Kubernetes for containerized workloads has surged. And while Kubernetes is theoretically ready to run applications in production, its actual viability depends heavily on how Kubernetes itself is managed. In this session Cornelia and Fred will cover the role of the container orchestration system in your IT landscape, and they’ll dive under the covers to show how PKS provides the enterprise-class Kubernetes services you need to trust your most critical workloads to it. Yes, technical details revealed!
The document describes the twelve-factor app methodology for building software-as-a-service applications. The twelve factors are: codebase, dependencies, configuration, backing services, build-release-run, processes, port binding, concurrency, disposability, logs, admin processes, and dev/prod parity. The methodology advocates designing apps that are optimal to deploy on modern cloud platforms by separating an app from its infrastructure, using declarative formats for setup automation, and enabling continuous deployment for maximum agility.
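Of the twelve factors, configuration is the easiest to show in code: config lives in the environment, not in the codebase, which is what keeps dev and prod in parity. A minimal sketch; the variable names `ORDERS_DB_URL` and `ORDERS_PORT` are hypothetical:

```python
import os

# Twelve-factor apps read configuration from the environment rather than
# from files baked into the codebase. ORDERS_DB_URL and ORDERS_PORT are
# hypothetical variable names, with safe defaults for local development.
def get_config() -> dict:
    return {
        "db_url": os.environ.get("ORDERS_DB_URL", "sqlite:///dev.db"),
        "port": int(os.environ.get("ORDERS_PORT", "8080")),
    }

os.environ["ORDERS_PORT"] = "9000"  # in production the platform sets this
cfg = get_config()
print(cfg["port"])  # 9000
```

Because the same build reads its settings from the environment, the same artifact moves unchanged through build-release-run, which is the agility the methodology is after.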
Application Modernization with PKS / Kubernetes
Paul Czarkowski
This document discusses strategies for modernizing applications and replatforming them using Pivotal Container Service (PKS). It outlines how companies have different options for packaging and running workloads, such as containers, microservices, serverless functions, and monolithic applications; PKS aims to provide the right runtime for each workload type. The document compares container orchestrators, application platforms, and serverless functions, noting that PKS aims to push workloads higher in the platform hierarchy for more flexibility and less enforcement of standards while lowering development complexity and improving operational efficiency. It provides recommendations for getting started with migrating workloads to PKS, such as lifting and shifting applications with minimal modernization, leveraging platform capabilities, and fully modernizing
Run Stateful Apps on Kubernetes with VMware PKS - Highlight WebLogic Server
Simone Morellato
The document discusses running Oracle WebLogic Server applications on Kubernetes and VMware PKS. It provides an overview of Kubernetes, PKS, and WebLogic Server challenges in containerization due to state management needs. It then describes how Kubernetes StatefulSets address these challenges by providing stable network identities and preserving state across container restarts. The document concludes with a demo of deploying WebLogic Server on PKS and lists five reasons why this approach is better than traditional deployment methods in terms of developer productivity, application monitoring, elasticity, multi-cloud support, and patching/upgrades.
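The stable identities a StatefulSet guarantees can be sketched directly: ordinal pod names and predictable per-pod DNS entries that survive restarts, which is what lets stateful software like WebLogic find its peers. The service and namespace names below are hypothetical:

```python
# Sketch of the identities a Kubernetes StatefulSet guarantees: pods are
# named <set>-0, <set>-1, ... and each gets a stable DNS entry of the form
# <pod>.<headless-service>.<namespace>.svc.cluster.local. Names here
# ("weblogic", "wls-svc", "prod") are hypothetical.
def statefulset_identities(name: str, service: str,
                           namespace: str, replicas: int) -> list:
    return [
        {
            "pod": f"{name}-{i}",
            "dns": f"{name}-{i}.{service}.{namespace}.svc.cluster.local",
        }
        for i in range(replicas)
    ]

ids = statefulset_identities("weblogic", "wls-svc", "prod", 3)
print(ids[0]["pod"])  # weblogic-0
```

Unlike a Deployment, where replacement pods get new random names, these identities are re-attached to the same ordinal (and its persistent volume) after a restart.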
Zero-downtime deployment of Micro-services with Kubernetes
Wojciech Barczyński
A talk on deployment strategies with Kubernetes, covering Kubernetes configuration files and the actual implementation of your service in Go.
You will find demos for recreate, rolling updates, blue-green, and canary deployments.
Source and demos can be found on GitHub: https://github.com/wojciech12/talk_zero_downtime_deployment_with_kubernetes
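Of the strategies the talk demos, canary is the one worth a toy model: a small, fixed fraction of traffic is routed to the new version while the rest stays on the stable one. A sketch under that assumption; the version labels are hypothetical:

```python
import random

# Toy model of canary routing: each request is sent to the new version
# with probability canary_weight. The labels "v1-stable" and "v2-canary"
# are hypothetical.
def pick_version(canary_weight: float, rng: random.Random) -> str:
    return "v2-canary" if rng.random() < canary_weight else "v1-stable"

rng = random.Random(42)  # seeded so the simulation is repeatable
sample = [pick_version(0.10, rng) for _ in range(1000)]
canary_share = sample.count("v2-canary") / len(sample)
print(round(canary_share, 2))  # close to the 10% weight
```

In a real cluster the same split is done by the service mesh or ingress rather than in application code; the point is that the canary share is a single tunable knob that can be raised gradually or dropped to zero on errors.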
DevOps lifecycle with Kabanero, Appsody, Codewind, Tekton
Winton Winton
This document discusses how IBM's Cloud Pak for Applications and associated DevOps Add-On can help organizations with application modernization, development, and deployment. It provides an integrated platform for both traditional and cloud-native applications using containers and Kubernetes. The DevOps Add-On includes UrbanCode DevOps tools to automate deployments across platforms and orchestrate releases through the development pipeline. This allows consistent processes for both modernized and existing applications.
Tectonic Summit 2016: Brandon Philips, CTO of CoreOS, Keynote
CoreOS
The document discusses CoreOS's expertise across the technology stack for container-based applications. This includes Linux, container engines, container image specifications, clustered databases like etcd, cloud independence, identity federation, and more. CoreOS is focused on open standards through initiatives like the Open Container Initiative and ensuring technologies like Kubernetes, rkt, and etcd can scale to power large production deployments.
OSDC 2018 | Highly Available Cloud Foundry on Kubernetes by Cornelius Schumacher
NETWAYS
This document discusses running Cloud Foundry on Kubernetes to provide highly available cloud platforms. It begins with an overview of cloud computing models and introduces Cloud Foundry. It then discusses deploying Cloud Foundry using Kubernetes primitives like pods, services, and stateful sets for high availability. The document demonstrates how to install Cloud Foundry on Kubernetes using Helm charts and configure for high availability. It shows the components have been made highly available to prevent downtime during failures or upgrades. Finally, it provides a demo of deploying a sample application on Cloud Foundry on Kubernetes under chaotic conditions to showcase the high availability.
Installing and using Kubernetes is hard, but operating Kubernetes is even harder! This BOF is for Kubernetes operators to get together and discuss our day-to-day operations, and for people new to Kubernetes to learn more about how to operate it.
Kubernetes 1.21 included 51 enhancements, including 13 features graduating to stable and 15 graduating to beta. Major themes included CronJobs graduating to stable, immutable secrets and configmaps, dual-stack IPv4/IPv6 support, graceful node shutdown, and the persistent volume health monitor. The 1.22 release timeline was also outlined, with enhancements freeze on May 13th and code freeze on July 8th, targeting August 4th for release. Various SIG updates provided information on enhancements for API machinery, apps, auth, CLI, cloud providers, instrumentation, network, node, scheduling and storage.
Kubecon US 2019: Kubernetes Multitenancy WG Deep Dive
Sanjeev Rampal
This document provides an overview and agenda for a presentation on secure multitenancy in Kubernetes. It discusses what Kubernetes multitenancy is, available solutions, architectural models for multitenancy including namespace grouping and virtual Kubernetes clusters. It also covers community initiatives for multitenancy control plane including tenant controllers and hierarchical namespaces. The document outlines benchmarking categories and a proposed baseline reference implementation for multitenancy including control plane, data plane, and network isolation techniques.
Modern DevOps practices involve deploying applications to platforms, from basic IaaS to PaaS to serverless functions. But who runs those platforms, and how? At Pivotal we build and operate platforms, and we run those platforms on a platform designed to run complex distributed systems called BOSH, which was inspired by Google Borg. Paul will talk through a couple of successful patterns for deploying and operating platforms, as well as how to help your business determine which platform[s] are right for them and how to successfully get the business to adopt those platforms.
OpenStack Days SV: Building Highly Available Services Using Kubernetes
Allan Naim
This document discusses Google Cloud Platform's Kubernetes and how it can be used to build highly available services. It provides an overview of Kubernetes concepts like pods, labels, replica sets, volumes, and services. It then describes how Kubernetes Cluster Federation allows deploying applications across multiple Kubernetes clusters for high availability, geographic scaling, and other benefits. It outlines how to create clusters, configure the federated control plane, add clusters to the federation, deploy federated services and backends, and perform cross-cluster service discovery.
The document discusses how Kubernetes and the 12 factors of cloud applications relate. It provides an overview of each of the 12 factors and examples of how they can be implemented using Kubernetes. Key takeaways include designing stateless applications, keeping environments similar between development and production, and preferring managed services for persistence. The document encourages decoupling infrastructure complexity from application code and ensuring applications can scale and are monitored properly.
DCEU 18: Designing a Global Centralized Container Platform for a Multi-Cluste...
Docker, Inc.
The document summarizes Robert Bosch GmbH's efforts to design and implement a centralized container platform and global image repository for its large, multicluster enterprise environment. It established standardized Docker clusters across its 280+ sites worldwide and deployed multiple Docker Trusted Registries for image distribution. This centralized approach replaced decentralized environments, reduced costs and efforts for users, and ensured security and compliance. Lessons learned included the need for central management of the multi-component system and additional security tools for container environments.
When we think about establishing a Kubernetes capability for our organization, our instinct, or perhaps just habit, might lead us to stand up a single cluster that will then be a shared resource across numerous tenants. Kubernetes offers namespaces that are intended to carve up the capacity across different users or groups of users. And while this may work well in some scenarios, it does impose certain constraints and limitations on its use. For example, it is well understood that the multitenancy in Kubernetes is soft, meaning it does not guard against deliberately malicious attacks from one tenant to another.
If, instead, we align tenant boundaries to Kubernetes clusters, effectively creating many single-tenant clusters, we can not only avoid certain limitations but also gain some significant advantages. Add a control plane for managing these sets of clusters and we have a powerful solution built on decades of maturity in machine virtualization.
In this session we will present both models, multi-tenant clusters and multi-clusters and study the tradeoffs of each.
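The two models being contrasted can be sketched as data: tenants as namespaces inside one shared cluster (soft isolation) versus one cluster per tenant coordinated by a management control plane (hard isolation). The tenant names below are hypothetical:

```python
# Sketch of the two tenancy models: namespaces in a shared cluster vs.
# one single-tenant cluster per tenant. Tenant and cluster names are
# hypothetical.
tenants = ["payments", "inventory", "analytics"]

# Model 1: one shared cluster, a namespace per tenant. Isolation is soft:
# all tenants share one control plane and one set of nodes.
shared = {
    "cluster": "shared-01",
    "namespaces": {t: f"ns-{t}" for t in tenants},
}

# Model 2: a dedicated cluster per tenant. Isolation is hard (separate
# control planes and nodes), at the cost of a management control plane
# to provision and upgrade the fleet.
multi = {t: {"cluster": f"cluster-{t}"} for t in tenants}

print(len(shared["namespaces"]), len(multi))  # 3 3
```

The tradeoff is visible even in this toy form: model 1 has one cluster to operate but every boundary is a namespace policy; model 2 multiplies the clusters to manage but makes each tenant boundary a real machine-level boundary.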
The document provides an introduction to Red Hat OpenShift, including:
- An overview of the differences between virtual machines and container technologies like Docker.
- The evolution of container technologies and standards like Kubernetes, CRI, and CNI.
- Why Kubernetes is used for container orchestration and why Red Hat OpenShift is a popular Kubernetes distribution.
- Key features of Red Hat OpenShift like source-to-image builds, integrated monitoring, security, and log aggregation with EFK.
Migrating from Self-Managed Kubernetes on EC2 to a GitOps Enabled EKS
Weaveworks
Did your company start down the path of building a cloud native platform using Kubernetes with the goal of enabling developers to innovate faster and increase productivity, but then run into challenges keeping it operating in an optimal way?
In this session, Weaveworks will discuss how to migrate from self-managed Kubernetes on EC2 to a GitOps-managed Shared Services Platform (SSP) on EKS. An SSP built on EKS and managed with Weave GitOps provides developers and operators with common workflows to update both applications and infrastructure. With every change in version control, full audit trails are available and security is enforced, while at the same time enabling easier rollbacks and faster mean-time-to-recovery (MTTR). In short, a Weave GitOps-managed SSP increases developer velocity while boosting stability.
● How to operate a hybrid Kubernetes architecture, using managed EKS in the AWS Cloud and EKS-Distro on premises.
● How to structure your infrastructure repository to efficiently manage multiple teams.
● How to use Kubernetes RBAC to provide secure cluster multi-tenancy.
● How to use GitOps to promote releases across a hybrid set of independent clusters.
● How to accomplish data and operational sovereignty.
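The GitOps promotion model described above reduces to a reconcile step: compare the desired state recorded in version control with the observed state of a cluster and compute the actions needed to converge. A minimal sketch; the app names and versions are hypothetical:

```python
# Toy GitOps reconcile step: diff desired state (from the Git repository)
# against observed state (from the cluster) and emit converging actions.
# App names and versions are hypothetical.
def reconcile(desired: dict, observed: dict) -> list:
    actions = []
    for app, version in desired.items():
        if app not in observed:
            actions.append(("install", app, version))
        elif observed[app] != version:
            actions.append(("upgrade", app, version))
    for app, version in observed.items():
        if app not in desired:
            actions.append(("delete", app, version))
    return actions

desired = {"web": "1.4.0", "api": "2.1.0"}   # what Git says should run
observed = {"web": "1.3.2", "worker": "0.9.0"}  # what the cluster runs
actions = reconcile(desired, observed)
print(actions)
```

Promotion across clusters then becomes a Git operation: merging the same desired state into each cluster's branch or directory, with each cluster's agent running this loop independently. That independence is also what keeps rollback cheap: revert the commit and the loop converges back.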
Webinar: End-to-End CI/CD with GitLab and DC/OS
Mesosphere Inc.
Seven years ago, Apache Mesos was born as a platform to bring the distributed computing capabilities that powered the largest digital companies to the masses. Today, Mesosphere DC/OS technologies power more containers in production than any other software stack in the world, and has emerged as the premier platform for building and elastically scaling data-rich, modern applications and the associated CI/CD infrastructure across any infrastructure, public or private.
GitLab is an end-to-end software development and delivery platform with built-in CI/CD, monitoring, and performance metrics. With a unified experience for every step of the development lifecycle and seamless integration with container schedulers, GitLab provides the most efficient approach to reduce cycle time, increase velocity, and improve software quality.
In this webinar, you will learn how to combine DC/OS and GitLab to easily build a CI/CD infrastructure and build a complete CI/CD pipeline in minutes.
Slides cover:
1. An introduction to Apache Mesos and Mesosphere DC/OS and overview of DC/OS features and capabilities for developing, deploying, and operating containerized applications, microservices and CI/CD
2. An introduction to GitLab
3. How to use DC/OS and GitLab to build a CI/CD solution and go from idea to production
This document summarizes a webinar about spinning up Kubernetes infrastructure in a GitOps way. It introduces Kubermatic and their start.kubermatic project, which provides a wizard to easily bootstrap infrastructure on cloud providers and install Kubermatic Kubernetes Platform (KKP) using GitOps. The webinar demonstrates how tools like Terraform, KubeOne, Helm, Flux, and SOPS are used to automate the provisioning and management of the Kubernetes cluster and KKP configuration. It also discusses security aspects and provides a live demo.
PKS: The What and How of Enterprise-Grade KubernetesVMware Tanzu
SpringOne Platform 2017
Cornelia Davis, Pivotal; Fred Melo, Pivotal
Because of its well thought out and powerful abstractions, robust and cloud-native architecture, and the vibrant community around it, the use of Kubernetes for containerized workloads has surged. And while Kubernetes is theoretically ready to run applications in production, the actual viability is highly dependent on how Kubernetes itself is managed. In this session Cornelia and Fred will cover role of the container orchestration system in your IT landscape, and they’ll dive under the covers to show how it provides the enterprise-class Kubernetes services you need to trust your most critical workloads to it. Yes, technical details revealed!
The document describes the twelve-factor app methodology for building software-as-a-service applications. The twelve factors are: codebase, dependencies, configuration, backing services, build-release-run, processes, port binding, concurrency, disposability, logs, admin processes, and dev/prod parity. The methodology advocates designing apps that are optimal to deploy on modern cloud platforms by separating an app from its infrastructure, using declarative formats for setup automation, and enabling continuous deployment for maximum agility.
Application Modernization with PKS / KubernetesPaul Czarkowski
This document discusses strategies for modernizing applications and replatforming them using Project Kubernetes Service (PKS). It outlines how companies have different options for packaging and running workloads, such as using containers, microservices, serverless functions, and monolithic applications. PKS aims to provide the right runtime for each workload type. The document compares container orchestrators, application platforms, and serverless functions, noting that PKS aims to push workloads higher in the platform hierarchy for more flexibility and less enforcement of standards while lowering development complexity and improving operational efficiency. It provides recommendations for getting started with migrating workloads to PKS, such as lifting and shifting applications with minimal modernization, leveraging platform capabilities, and fully modernizing
Run Stateful Apps on Kubernetes with VMware PKS - Highlight WebLogic Server Simone Morellato
The document discusses running Oracle WebLogic Server applications on Kubernetes and VMware PKS. It provides an overview of Kubernetes, PKS, and WebLogic Server challenges in containerization due to state management needs. It then describes how Kubernetes StatefulSets address these challenges by providing stable network identities and preserving state across container restarts. The document concludes with a demo of deploying WebLogic Server on PKS and lists five reasons why this approach is better than traditional deployment methods in terms of developer productivity, application monitoring, elasticity, multi-cloud support, and patching/upgrades.
Zero-downtime deployment of Micro-services with KubernetesWojciech Barczyński
Talk on deployment strategies with Kubernetes covering kubernetes configuration files and the actual implementation of your service in Golang.
You will find demos for recreate, rolling updates, blue-green, and canary deployments.
Source and demos, you will find on github: https://github.com/wojciech12/talk_zero_downtime_deployment_with_kubernetes
Devops lifecycle with Kabanero Appsody, Codewind, TektonWinton Winton
This document discusses how IBM's Cloud Pak for Applications and associated DevOps Add-On can help organizations with application modernization, development, and deployment. It provides an integrated platform for both traditional and cloud-native applications using containers and Kubernetes. The DevOps Add-On includes UrbanCode DevOps tools to automate deployments across platforms and orchestrate releases through the development pipeline. This allows consistent processes for both modernized and existing applications.
Tectonic Summit 2016: Brandon Philips, CTO of CoreOS, KeynoteCoreOS
The document discusses CoreOS's expertise across the technology stack for container-based applications. This includes Linux, container engines, container image specifications, clustered databases like etcd, cloud independence, identity federation, and more. CoreOS is focused on open standards through initiatives like the Open Container Initiative and ensuring technologies like Kubernetes, rkt, and etcd can scale to power large production deployments.
OSDC 2018 | Highly Available Cloud Foundry on Kubernetes by Cornelius SchumacherNETWAYS
This document discusses running Cloud Foundry on Kubernetes to provide highly available cloud platforms. It begins with an overview of cloud computing models and introduces Cloud Foundry. It then discusses deploying Cloud Foundry using Kubernetes primitives like pods, services, and stateful sets for high availability. The document demonstrates how to install Cloud Foundry on Kubernetes using Helm charts and configure for high availability. It shows the components have been made highly available to prevent downtime during failures or upgrades. Finally, it provides a demo of deploying a sample application on Cloud Foundry on Kubernetes under chaotic conditions to showcase the high availability.
Installing and Using Kubernetes is hard, but Operating Kubernetes is even harder! This BOF is for Kubernetes Operators to get together and discuss our day to day Operations, and for people new to Kubernetes to learn more about how to operate it.
Kubernetes 1.21 included 51 enhancements, including 13 features graduating to stable and 15 graduating to beta. Major themes included CronJobs graduating to stable, immutable secrets and configmaps, dual-stack IPv4/IPv6 support, graceful node shutdown, and the persistent volume health monitor. The 1.22 release timeline was also outlined, with enhancements freeze on May 13th and code freeze on July 8th, targeting August 4th for release. Various SIG updates provided information on enhancements for API machinery, apps, auth, CLI, cloud providers, instrumentation, network, node, scheduling and storage.
Kubecon US 2019: Kubernetes Multitenancy WG Deep DiveSanjeev Rampal
This document provides an overview and agenda for a presentation on secure multitenancy in Kubernetes. It discusses what Kubernetes multitenancy is, available solutions, architectural models for multitenancy including namespace grouping and virtual Kubernetes clusters. It also covers community initiatives for multitenancy control plane including tenant controllers and hierarchical namespaces. The document outlines benchmarking categories and a proposed baseline reference implementation for multitenancy including control plane, data plane, and network isolation techniques.
Modern DevOps practices involve deploying applications to platforms. From basic IaaS to PaaS to serverless functions. But who runs those platforms and how? At Pivotal we build and operate platforms, and we run those platforms on a platform designed to run complex distributed systems called Bosh which was inspired by google borg. Paul will talk through a couple of successful patterns for deploying and operating platforms as well as how to help your business determine which platform[s] are right for them and how to successfully get the business to adopt those platforms.
Openstack days sv building highly available services using kubernetes (preso)Allan Naim
This document discusses Google Cloud Platform's Kubernetes and how it can be used to build highly available services. It provides an overview of Kubernetes concepts like pods, labels, replica sets, volumes, and services. It then describes how Kubernetes Cluster Federation allows deploying applications across multiple Kubernetes clusters for high availability, geographic scaling, and other benefits. It outlines how to create clusters, configure the federated control plane, add clusters to the federation, deploy federated services and backends, and perform cross-cluster service discovery.
The document discusses how Kubernetes and the 12 factors of cloud applications relate. It provides an overview of each of the 12 factors and examples of how they can be implemented using Kubernetes. Key takeaways include designing stateless applications, keeping environments similar between development and production, and preferring managed services for persistence. The document encourages decoupling infrastructure complexity from application code and ensuring applications can scale and are monitored properly.
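As an illustration of one of those factors, storing config in the environment (factor III) maps onto Kubernetes primitives roughly as follows (a sketch; all names and the image are illustrative):

```yaml
# Factor III (store config in the environment): inject settings
# from a ConfigMap rather than baking them into the image.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: twelve-factor-app        # illustrative name
spec:
  replicas: 3                    # stateless, so it scales horizontally
  selector:
    matchLabels:
      app: twelve-factor-app
  template:
    metadata:
      labels:
        app: twelve-factor-app
    spec:
      containers:
      - name: app
        image: example/app:1.0   # placeholder image
        envFrom:
        - configMapRef:
            name: app-config     # same manifest shape in dev and prod
```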
DCEU 18: Designing a Global Centralized Container Platform for a Multi-Cluste... (Docker, Inc.)
The document summarizes Robert Bosch GmbH's efforts to design and implement a centralized container platform and global image repository for its large, multicluster enterprise environment. It established standardized Docker clusters across its 280+ sites worldwide and deployed multiple Docker Trusted Registries for image distribution. This centralized approach replaced decentralized environments, reduced costs and efforts for users, and ensured security and compliance. Lessons learned included the need for central management of the multi-component system and additional security tools for container environments.
When we think about establishing a Kubernetes capability for our organization, our instinct, or perhaps just habit, might lead us to stand up a single cluster that is then shared across numerous tenants. Kubernetes offers namespaces, which are intended to carve up a cluster's capacity across different users or groups of users. While this may work well in some scenarios, it imposes certain constraints and limitations. For example, it is well understood that multitenancy in Kubernetes is "soft," meaning it does not guard against deliberately malicious attacks from one tenant on another.
If instead we align tenant boundaries to Kubernetes clusters, effectively creating many single-tenant clusters, we not only avoid certain limitations but also gain some significant advantages. Add a control plane for managing these sets of clusters, and we have a powerful solution built on decades of maturity in machine virtualization.
In this session we will present both models, multi-tenant clusters and multi-clusters, and study the tradeoffs of each.
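To make the namespace-per-tenant model concrete, the usual building blocks are a namespace plus a ResourceQuota (a sketch; tenant names are illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-a                 # one namespace per tenant
---
# Caps what tenant-a can consume; note this does not protect against
# a deliberately malicious tenant (multitenancy remains "soft")
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-a-quota
  namespace: tenant-a
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    pods: "20"
```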
The document provides an introduction to Red Hat OpenShift, including:
- An overview of the differences between virtual machines and container technologies like Docker.
- The evolution of container technologies and standards like Kubernetes, CRI, and CNI.
- Why Kubernetes is used for container orchestration and why Red Hat OpenShift is a popular Kubernetes distribution.
- Key features of Red Hat OpenShift like source-to-image builds, integrated monitoring, security, and log aggregation with EFK.
Migrating from Self-Managed Kubernetes on EC2 to a GitOps Enabled EKS (Weaveworks)
Did your company start down the path of building a cloud native platform using Kubernetes with the goal of enabling developers to innovate faster and increase productivity, but then run into challenges keeping it operating in an optimal way?
In this session, Weaveworks will discuss how to migrate from self-managed Kubernetes on EC2 to a GitOps-managed Shared Services Platform (SSP) on EKS. An SSP built on EKS and managed with Weave GitOps provides developers and operators with common workflows to update both applications and infrastructure. With every change in version control, full audit trails are available and security is enforced, while at the same time rollbacks become easier and mean time to recovery (MTTR) improves. In short, a Weave GitOps-managed SSP increases developer velocity while boosting stability. You will learn:
How to operate a hybrid Kubernetes architecture, using managed EKS in the AWS Cloud and EKS-Distro on premises.
How to structure your infrastructure repository to efficiently manage multiple teams.
How to use Kubernetes RBAC to provide secure cluster multi-tenancy.
How to use GitOps to promote releases across a hybrid set of independent clusters.
How to accomplish data and operational sovereignty.
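The RBAC point above typically boils down to namespace-scoped Roles and RoleBindings, roughly like this (a sketch; team and group names are illustrative):

```yaml
# Grant the "team-a" group edit rights only inside its own namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: team-a-editor
  namespace: team-a
rules:
- apiGroups: ["", "apps"]
  resources: ["pods", "services", "deployments", "configmaps"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-editor-binding
  namespace: team-a
subjects:
- kind: Group
  name: team-a                  # mapped from the identity provider
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: team-a-editor
  apiGroup: rbac.authorization.k8s.io
```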
Webinar: End-to-End CI/CD with GitLab and DC/OS (Mesosphere Inc.)
Seven years ago, Apache Mesos was born as a platform to bring the distributed computing capabilities that powered the largest digital companies to the masses. Today, Mesosphere DC/OS technologies power more containers in production than any other software stack in the world, and DC/OS has emerged as the premier platform for building and elastically scaling data-rich, modern applications and the associated CI/CD infrastructure across any infrastructure, public or private.
GitLab is an end-to-end software development and delivery platform with built-in CI/CD, monitoring, and performance metrics. With a unified experience for every step of the development lifecycle and seamless integration with container schedulers, GitLab provides the most efficient approach to reduce cycle time, increase velocity, and improve software quality.
In this webinar, you will learn how to combine DC/OS and GitLab to easily build a CI/CD infrastructure and build a complete CI/CD pipeline in minutes.
Slides cover:
1. An introduction to Apache Mesos and Mesosphere DC/OS and overview of DC/OS features and capabilities for developing, deploying, and operating containerized applications, microservices and CI/CD
2. An introduction to GitLab
3. How to use DC/OS and GitLab to build a CI/CD solution and go from idea to production
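A minimal `.gitlab-ci.yml` along the lines the webinar describes might look like this (a sketch; stage names, images, and the deploy script are illustrative placeholders):

```yaml
# Illustrative GitLab CI pipeline: build, test, and deploy stages
stages:
  - build
  - test
  - deploy

build-image:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  script:
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA

unit-tests:
  stage: test
  image: maven:3.9-eclipse-temurin-17
  script:
    - mvn test

deploy:
  stage: deploy
  script:
    - ./deploy.sh $CI_COMMIT_SHORT_SHA   # placeholder deploy step
  environment: production
  only:
    - main
```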
This document summarizes a webinar about spinning up Kubernetes infrastructure in a GitOps way with start.kubermatic. It introduces Kubermatic and the start.kubermatic project, which provides a wizard to easily bootstrap infrastructure on cloud providers and install Kubermatic Kubernetes Platform (KKP) using GitOps. The webinar demonstrates how tools like Terraform, KubeOne, Helm, Flux, and SOPS are used to automate the provisioning of the Kubernetes cluster and the management of the KKP configuration, discusses the security and automation benefits of this approach for managing Kubernetes at scale across multiple clusters, and concludes with a live demo.
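In Flux terms, "managing resources with GitOps" usually means pointing a GitRepository source at the config repo and reconciling it with a Kustomization. A sketch using the current Flux v2 API (the repo URL and paths are placeholders):

```yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: cluster-config
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/example/cluster-config   # placeholder repo
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: cluster-config
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: cluster-config
  path: ./clusters/production
  prune: true        # delete resources that are removed from Git
```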
Demystifying Application Connectivity with Kubernetes in the Docker Platform (Nicola Kabar)
The addition of Kubernetes support to Docker Enterprise Platform presents deployments with interesting new abstractions for application connectivity. Users and operators are often challenged with rationalizing how pod networking (with CNI plugins like Calico or Flannel), Services (via kube-proxy), and Ingress work in concert to enable application connectivity within and outside a cluster. Similarly, given the dynamic and transient nature of containerized microservice workloads, they must work out how to leverage scalable and declarative approaches like network policies to express segmentation and security primitives. This session provides an illustrative walkthrough of these core concepts by going through common deployment architectures, with design, operations, and scale considerations based on experience from numerous production deployments. The session will also showcase how to complement application and operations workflows with the policy-driven business, compliance, and security controls typically required in enterprise production deployments.
Demystifying container connectivity with kubernetes in docker (Docker, Inc.)
The addition of Kubernetes support to Docker Enterprise Platform presents deployments with interesting new abstractions for application connectivity. Users and operators are often challenged with rationalizing how pod networking (with CNI plugins like Calico or Flannel), Services (via kube-proxy), and Ingress work in concert to enable application connectivity within and outside a cluster. Similarly, given the dynamic and transient nature of containerized microservice workloads, they must work out how to leverage scalable and declarative approaches like network policies to express segmentation and security primitives.
This session provides an illustrative walkthrough of these core concepts by going through common deployment architectures providing design, operations, and scale considerations based on experience from numerous production deployments. The session will also showcase how to complement application and operations workflows with policy-driven business, compliance and security controls typically required in enterprise production deployments.
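A typical segmentation primitive of the kind described here is a NetworkPolicy that admits traffic only from a specific tier (a sketch; namespace, labels, and port are illustrative):

```yaml
# Allow ingress to backend pods only from frontend pods,
# and only on the application port
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
  namespace: shop
spec:
  podSelector:
    matchLabels:
      tier: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          tier: frontend
    ports:
    - protocol: TCP
      port: 8080
```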
Nebulaworks invited Bitnami software engineer Adnan Abdulhussein to present "The App Developer's Kubernetes Toolbox."
Details:
If you're developing applications on top of Kubernetes, you may be feeling overwhelmed with the vast number of development tools in the ecosystem at your disposal. Kubernetes is growing at a rapid pace, and it's becoming impossible to keep up with the latest and greatest development environments, debuggers, and build test and deployment tools.
Learn:
• The current state of development in Kubernetes
• Comparison of shared and local Kubernetes development environments
• Overview of different development tools in the ecosystem
• Which tools make sense in common scenarios
• How Bitnami uses Kubernetes as a development environment
OSDC 2018 | Three years running containers with Kubernetes in Production by T... (NETWAYS)
The talk gives a state-of-the-art update on experiences with deploying applications on Kubernetes at scale. Whether in the cloud or on premises, Kubernetes has taken over the leading role as a container operating system. The central paradigm of stateless containers connected to storage and services is the core of Kubernetes. However, it can be extended to distributed databases, machine learning, and even Windows VMs running in Kubernetes. All of these applications were considered edge cases a few years ago, yet they are becoming more and more mainstream today.
Kubernetes is designed to be an extensible system. But what is the vision for Kubernetes Extensibility? Do you know the difference between webhooks and cloud providers, or between CRI, CSI, and CNI? In this talk we will explore what extension points exist, how they have evolved, and how to use them to make the system do new and interesting things. We’ll give our vision for how they will probably evolve in the future, and talk about the sorts of things we expect the broader Kubernetes ecosystem to build with them.
Cloud Native Night, April 2018, Mainz: Workshop led by Jörg Schad (@joerg_schad, Technical Community Lead / Developer at Mesosphere)
Join our Meetup: https://www.meetup.com/de-DE/Cloud-Native-Night/
PLEASE NOTE:
During this workshop, Jörg showed many demos and the audience could participate on their laptops. Unfortunately, we can't provide these demos. Nevertheless, Jörg's slides give a deep dive into the topic.
DETAILS ABOUT THE WORKSHOP:
Kubernetes was one of the hot topics of 2017 and will probably remain so in 2018. In this hands-on technical workshop you will learn how best to deploy, operate, and scale Kubernetes clusters from one to hundreds of nodes using DC/OS. You will learn how to integrate and run Kubernetes alongside traditional applications and fast data services of your choice (e.g. Apache Cassandra, Apache Kafka, Apache Spark, TensorFlow, and more) on any infrastructure.
This workshop best suits operators focussed on keeping their apps and services up and running in production and developers focussed on quickly delivering internal and customer facing apps into production.
You will learn:
- The basics of Kubernetes and DC/OS (including the differences between the two)
- How to deploy Kubernetes on DC/OS in a secure, highly available, and fault-tolerant manner
- How to solve the operational challenges of running one or more large Kubernetes clusters
- How to one-click deploy big data stateful and stateless services alongside a Kubernetes cluster
Moby is an open source project providing a "LEGO set" of dozens of components, the framework to assemble them into specialized container-based systems, and a place for all container enthusiasts to experiment and exchange ideas.
One of these assemblies is Docker CE, an open source product that lets you build, ship, and run containers.
This talk will explain how you can leverage the Moby project to assemble your own specialized container-based system, whether for IoT, cloud or bare metal scenarios.
We will cover Moby itself, the framework, and tooling around the project, as well as many of its components: LinuxKit, InfraKit, containerd, SwarmKit, and Notary.
Then we will present a few use cases and demos of how different companies have leveraged Moby and some of the Moby components to create their own container-based systems.
Video at https://www.youtube.com/watch?v=kDp22YkD6WY
Centralizing Kubernetes and Container Operations (Kublr)
While developers see and realize the benefits of Kubernetes (how it improves efficiencies, saves time, and enables focus on the unique business requirements of each project), InfoSec, infrastructure, and software operations teams still face challenges when managing a new set of tools and technologies and integrating them into an existing enterprise infrastructure.
These meetup slides go over what's needed for a general architecture of a centralized Kubernetes operations layer based on open source components such as Prometheus, Grafana, the ELK Stack, Keycloak, etc., and how to set up reliable clusters and multi-master configurations without a load balancer. They also outline how these components should be combined into an operations-friendly enterprise Kubernetes management platform with centralized monitoring and log collection, identity and access management, backup and disaster recovery, and infrastructure management capabilities. The presentation shows real-world open-source project use cases for implementing an ops-friendly environment.
Check out this and more webinars in our BrightTalk channel: https://goo.gl/QPE5rZ
Cloud-Native .NET Microservices with Kubernetes (QAware GmbH)
Mario-Leander Reimer presented on building cloud-native .NET microservices with Kubernetes. He discussed key principles of cloud native applications, including designing for distribution, performance, automation, resiliency, and elasticity. He also covered containerization with Docker, composing services with Kubernetes, and common concepts like deployments, services, and probes. Reimer provided examples of Dockerfiles and Kubernetes definitions, and showed how tools like Steeltoe and docker-compose can be used to develop cloud native applications.
A short introduction into how the GitOps toolkit can be used to deploy Confluent for Kubernetes.
The demo covers:
1. Building a clear Kafka vision
2. Declarative cluster management (including Connectors)
3. Automating Confluent Cloud
4. Demoing GitOps with Terraform provisioning of Confluent Cloud
All code for this Demo can be found here: https://github.com/osodevops/confluent-gitops-demo
This document discusses SPN's journey to implement CI/CD on AWS. It begins with describing SPN's original process for delivering services which involved many manual steps. It then discusses DevOps goals of faster delivery, lower failure rates, and faster recovery compared to the original process. The document outlines using AWS services like CloudFormation, OpsWorks, and Auto Scaling to implement CI/CD and automate deploying a sample analytic engine service. Lessons learned include automating as much as possible, splitting CloudFormation templates, focusing on updates without impacting SLAs, and emphasizing monitoring and testing.
Spring Cloud Services with Pivotal Cloud Foundry - Gokhan Goksu (VMware Tanzu)
- Pivotal Cloud Foundry (PCF) is a cloud application platform that supports Spring applications. It provides automated deployment of Spring and Spring Boot apps along with a services ecosystem.
- Spring Cloud Services (SCS) provides services for PCF like service registry, configuration management, and circuit breakers that integrate with Spring apps. It includes tools to manage credentials and integrate apps with services.
- The document discusses how PCF supports developers through services, buildpacks, and automation to deploy Spring apps and discusses integrating apps with services through SCS. It also provides an agenda for a demo of deploying Spring apps on PCF.
Kubernetes for Java developers - Tutorial at Oracle Code One 2018 (Anthony Dahanne)
You’re a Java developer? Already familiar with Docker? Want to know more about Kubernetes and its ecosystem for developers? During this session, you’ll get familiar with core Kubernetes concepts (pods, deployments, services, volumes, and so on) before seeing the most-popular and most-productive Kubernetes tools in action, with a special focus on Java development. By the end of the session, you’ll have a better understanding of how you can leverage Kubernetes to speed up your Java deployments on-premises or to any cloud.
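The core concepts named above (deployments, services, probes) come together in a manifest like this for a containerized Java app (a sketch; the image, names, and actuator health path are illustrative assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: java-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: java-api
  template:
    metadata:
      labels:
        app: java-api
    spec:
      containers:
      - name: api
        image: example/java-api:1.0    # placeholder image
        ports:
        - containerPort: 8080
        readinessProbe:                # don't route traffic until the JVM is up
          httpGet:
            path: /actuator/health     # assumes Spring Boot Actuator
            port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: java-api
spec:
  selector:
    app: java-api
  ports:
  - port: 80
    targetPort: 8080
```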
This document provides an agenda and overview for a NetApp competitive webcast on Kubernetes storage competitors. The webcast covers cloud competitive resources available from NetApp, why organizations are adopting containers and Kubernetes, a fireside chat on VMware Tanzu and Pure Storage Portworx, and key takeaways. Presenters include NetApp technical marketing engineers and a senior director of product management.
07 vmugit aprile_2018_massimiliano_moschini (VMUG IT)
VMware Hyper-Converged Software provides Virtual SAN, which allows for storage to be pooled and shared across servers. Virtual SAN enables the creation of a shared datastore that can be accessed by any VM running on the servers in the Virtual SAN cluster. It provides a simple, efficient and resilient way to store and protect VM data without the need for external shared storage.
07 - VMUGIT - Lecce 2018 - Antonio Gentile, Fortinet (VMUG IT)
VMUGIT Meeting - Lecce, April 5, 2018
Antonio Gentile, System Engineer, Fortinet Italy - Fortinet Security Fabric: the new cyber security challenges on software-defined infrastructures
Rodolfo Rotondo, VMware Sr. Business Solution Strategist, SEMEA - Defend everything... defend nothing! How to develop a strategic approach to cyber security in the era of the mobile-cloud and interconnected objects
Rubrik offers a software-defined data management platform that can help organizations accelerate their GDPR compliance efforts. The platform provides centralized management of data across on-premises, edge, and cloud environments. It employs security measures like encryption and immutable storage that are designed with privacy and compliance in mind. Rubrik also simplifies compliance through policy-driven automation that enforces data protection, retention, and deletion policies. Reporting tools give insights into policy effectiveness. The unified platform streamlines compliance processes around identifying, managing, and securing personal data.
This document discusses blockchain and enterprise IT, dispelling myths around distributed ledgers. It provides an overview of blockchain concepts like data integrity, actors, and public vs private blockchains. It also includes decision diagrams to help determine if a blockchain is needed and compares databases to blockchains. Example use cases for blockchains are listed such as supply chain management. Considerations for blockchain projects like requirements and limitations are also covered.
Enrico Signoretti, Head of Product Strategy at OpenIO, blogger at Juku - IIoT: the future lies in Cloud-Edge integration
This document describes various "rebels" or non-virtualized applications in a datacenter that need to be managed. It discusses "Filerix", an old file server that has grown significantly in size and files. It also mentions "Maniscalchix", an application installed long ago whose purpose is unknown, and "Nonmifotografarix", which produces a lot of I/O and could crash during snapshots. The document provides information on how to back up these different applications using Veeam solutions like NAS shares, agents, I/O filtering, and archive tier despite their non-virtualized nature or other challenges.
The document provides an agenda for a PowerCLI session that will cover topics like getting started with PowerCLI, common errors and pitfalls, advanced functionality, and the PowerCLI community. It includes code snippets and examples for working with PowerCLI to retrieve and report on VMware vSphere infrastructure information using PowerShell. The session aims to help attendees become more proficient PowerCLI users.
Storage Policy Based Management (SPBM) allows data services like replication, encryption, and performance policies to be applied on a per-VM or per-VMDK level through configurable storage policies. The presenter discusses how SPBM is central to VMware's software-defined storage vision and allows administrators to take an application-centric approach to assigning storage services and service level agreements. Administrators can define storage policies, apply them dynamically to VMs, and change policies without disrupting services.
VMware Cloud on AWS allows customers to run VMware workloads on AWS infrastructure providing operational consistency, existing skillsets and tools, and control and security. It introduces VMware's software-defined data center (SDDC) technologies like vSphere, vSAN, and NSX running on AWS. This provides enterprises hybrid cloud capabilities with elasticity, portability of applications between on-premises and cloud, and access to AWS native services. Customers can easily deploy and manage their VMware environments on AWS.
Security groups and security policies were created to microsegment the network and restrict traffic flows based on the new segmentation. This was done using vRNI to visualize traffic before and after the changes. Security groups were defined using dynamic membership based on VM name, security tag, or other attributes. A shared services security policy template was also created to securely allow access to common management and services resources from different security groups.
4. Containers 101

[Diagram: a developer on a dev host (VM) builds a portable container image; on the container host (VM), containers share the kernel while each image stacks UBUNTU, JAVA, TC SERVER, and {APP}; the container is launched with `docker run -d myimage`.]

Containers provide:
• Reliable packaging
• Server/VM density
• Fast time to launch
• Built for CI/CD

CONFIDENTIAL
6. Pivotal Cloud Foundry 101 (PaaS)
6
war
Availability Zone 1 Availability Zone 2 Availability Zone 3
Staging
Root
FS
Build
Pack
war
`cf push`
Drop
let
A
I
A
I
myapp.foo.com
*.foo.com = NSX Edge Vip
NSX Edge
PCF Routing PCF Routing PCF Routing
LB Pool Members
“Here is my source code
Run it on the cloud for me
I do not care how”
URL Request:
myapp.foo.com
Developer
CONFIDENTIAL
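The "here is my source code, run it on the cloud for me" flow on this slide reduces to a couple of CLI calls. A hedged sketch; the app name, artifact path, and instance count are illustrative, and `foo.com` is the slide's example domain fronted by the NSX Edge VIP.

```shell
# Stage the war -> droplet (root FS + buildpack + artifact), then run
# two Application Instances spread across CF Availability Zones.
cf push myapp -p target/myapp.war -i 2

# Map the route so requests to myapp.foo.com reach the AIs via PCF routing.
cf map-route myapp foo.com --hostname myapp
```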
9. 9
Code Analysis Testing
Commit Code
Changes
Staging Production Zero Downtime Upgrades
AUTOMATED
PIPELINE
SPEED
Releasing smaller things more
often will reduce complexity and
improve time-to-market
QUALITY
We embed testing early in the lifecycle
to surface problems sooner, avoiding
last minute issues and helping us be
more responsive to change
AGILITY
Let’s push updates on a
regular basis without ANY
downtime to improve
customer experience and
shorten time-to-market
AUTOMATION
Let’s integrate tools and
automate processes from
testing, to builds & deployment
CI/CD CI/CD CI/CD CI/CD CI/CD
SOFTWARE DEVELOPMENT LIFECYCLE
Agile methods help drive Digital Transformation
Problem to Solve, Faster Time To Value …
Drive Business Value into Production Faster and Safer
CONFIDENTIAL
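The commit-to-production pipeline on this slide (commit, code analysis/testing, staging, production) could be expressed in Concourse, the CI tool referenced elsewhere in this deck. A sketch only: repository URL, task files, and job names are all assumptions.

```shell
# Minimal Concourse pipeline mirroring the slide's stages.
cat > pipeline.yml <<'EOF'
resources:
- name: app-src
  type: git
  source: {uri: https://github.com/example/myapp.git}

jobs:
- name: test
  plan:
  - get: app-src
    trigger: true              # commit code changes -> pipeline kicks off
  - task: unit-tests
    file: app-src/ci/test.yml  # testing embedded early in the lifecycle
- name: deploy-staging
  plan:
  - get: app-src
    passed: [test]             # quality gate before staging
    trigger: true
  - task: deploy
    file: app-src/ci/deploy-staging.yml
EOF

fly -t ci set-pipeline -p myapp -c pipeline.yml
```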
10. Multiple Use Cases Dictate Multiple Workloads and Approaches
10
Container Instance (CI) Container Service (CaaS) Application Platform (PaaS)
IaaS
CONFIDENTIAL
CONTAINERS BATCHES
DATA SERVICES MICROSERVICES MONOLITHIC
APPLICATIONS
The Goal:
Pick the Right
Approach for
the Workload
CONFIDENTIAL
11. IaaS
Choosing the Right Tool for the Job
11
Developer
Provides
Tool
Provides
Container
Service
Container Orchestration
Container Scheduling
Primitives for Routing,
Logs & Metrics
CONTAINER IMAGES,
TEMPLATES, DEPLOYMENTS
Application
Platform
APPLICATION CODE
Container Service
Container Image & build
L7 Network & Routing
Logs, Metrics, Monitoring
Services Marketplace
Team, Quotas & Usage
Container
Instance
CONTAINER IMAGE
Container Runtime
Primitives for Network and
Storage
Container Instance
CONFIDENTIAL
CONFIDENTIAL
12. IaaS
Choosing the Right Tool for the Job
12
Developer
Provides
Tool
Provides
Container
Service
Container Orchestration
Container Scheduling
Primitives for Routing,
Logs & Metrics
CONTAINER IMAGES,
TEMPLATES, DEPLOYMENTS
Application
Platform
APPLICATION CODE
Container Service
Container Image & build
L7 Network & Routing
Logs, Metrics, Monitoring
Services Marketplace
Team, Quotas & Usage
Container
Instance
CONTAINER IMAGE
Container Runtime
Primitives for Network and
Storage
Container Instance
CONFIDENTIAL
Application Specificity
Higher flexibility, lower automation, more DIY
CONFIDENTIAL
13. IaaS
Choosing the Right Tool for the Job
13
Abstraction
Container
Service
CONTAINER IMAGES,
TEMPLATES, DEPLOYMENTS
Application
Platform
APPLICATION CODE
Container
Instance
CONTAINER IMAGE
CONFIDENTIAL
Pivotal Container Service
Pivotal Cloud Foundry
Elastic Runtime
BOSH
vSphere Integrated
Containers
CONFIDENTIAL
15. Purpose-built container service to operationalize Kubernetes
for multi-cloud enterprises and service providers
Fully Supported Kubernetes
Runs on vSphere and VMC
Unified VM + Containers on SDDC
Deep Integration with NSX
Hardened, Production-grade
HA, Security, Multi-tenancy, Tools
VMware and Pivotal Collaborate to Deliver
VMware Pivotal Container Service (VMware PKS)
16. Fault-tolerance for
masters, workers,
and etcd nodes
Auto-scaling of
masters, workers,
and etcd nodes
Routine health
checks and self-
healing of cluster
LCM includes rolling
upgrades to ensure
workload uptime &
application of CVEs
High Availability
Scaling
Health Checks & Healing
Lifecycle Management
VMware PKS – Solving Day-2 Operational Challenges
17. 17
BOSH
VMware GCP Azure OpenStack AWS
Container Infrastructure for Cloud-Native Apps
Rapidly deliver and operationalize next generation apps
Container
Registry
Kubernetes on BOSH (Kubo)
NSX-T
GCP
Service
Broker
master etcd worker | master etcd worker
PKS Controller
18. Who is PKS built for?
18
IT
Operator
– PRE (Platform Reliability
Engineering)
– Deploy, Scale, Operate
Platform
– Innovation of Business
Capability as Cloud
native Apps
– Develop, Deploy, Scale,
Monitor Apps
– Physical Infrastructure is
Operated
– Network & Security
Control Policy is defined
• Platform Reliability Engineers
– Platform is Reliable
– Capacity Is planned for
– Platform is Secured & Controlled
– Platform is Auditable
– Application Dev/Ops owners are Agile
• Application Dev/Ops owner
– Automate Everything
– Agile
* Role Shift
– It is common to see the VI Admins (IT Ops) becoming the Platform Reliability Engineers
Cloud Native Applications at scale can & should
be kept running by a 2 Pizza Team mentality
(DevOps in Action) Application
Dev/Ops Owner
Platform
Reliability Engineer
CONFIDENTIAL
19. 19
BOSH
VMware GCP Azure OpenStack AWS
Container
Registry
Kubernetes on BOSH (Kubo)
NSX-T
GCP
Service
Broker
master etcd worker | master etcd worker
PKS Controller
PKS Technical Overview
21. PKS
BOSH
K8S-1
Worker
Worker
K8S-2
BOSH
Agent
BOSH
Agent
K8s-api
Team A
K8s-api
KUBO
BOSH
Release
(tgz)
DAY 2 Ops
- Auto/Manual Rebuild
- Auto/Manual Repair
- Manual Scale
- Patch & Upgrade
- Control & Audit OPS Events
NAMESPACE_1: TEAM A
NAMESPACE_2: TEAM B
Team C
Team B
NAMESPACE_1: DEFAULT
DAY 1 Ops
DEPLOY
Operate K8s + Run Apps/Containers
UI
&
API
Worker
Application
Dev/Ops Owner
Application
Dev/Ops Owner
Application
Dev/Ops Owner
Worker
[Diagram: three K8s masters with etcd, plus worker nodes]
Platform
Reliability Engineer
Self Service K8s
BOSH Day 2
1.7 -> 1.8
1.7 -> 1.8
PKS Controller
CONFIDENTIAL 21
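The namespace-per-team layout on this slide (NAMESPACE_1: TEAM A, NAMESPACE_2: TEAM B) maps to a few kubectl commands. Names here are illustrative, assuming a PKS-provisioned cluster whose credentials are already loaded.

```shell
# One namespace per team for soft multi-tenancy inside a shared cluster.
kubectl create namespace team-a
kubectl create namespace team-b

# Scope a developer's kubectl context to their team's namespace,
# so self-service work stays inside that tenancy boundary.
kubectl config set-context team-a \
  --namespace=team-a --cluster=k8s-1 --user=dev-a
kubectl config use-context team-a
```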
22. 22
BOSH
VMware GCP Azure OpenStack AWS
Container
Registry
Kubernetes on BOSH (Kubo)
NSX-T
GCP
Service
Broker
master etcd worker | master etcd worker
PKS Controller
PKS Technical Overview
23. 23
• User management & access control
• Role-based access control
• AD/LDAP integration
• Security vulnerability scanning (Clair)
• Content trust (image signing)
• Policy-based image replication
• Audit and logs
• RESTful API
• Open source under the Apache 2 license
Harbor – Enterprise Grade Private Registry
CONFIDENTIAL
24. 24
Harbor – Content Trust:
When Enabled, Unsigned Images Can’t Be Pulled
CONFIDENTIAL
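The content-trust behavior on this slide can be exercised from the Docker client. A sketch under assumptions: the registry hostname, Notary endpoint, and repository names are illustrative, not from the deck.

```shell
# With Docker Content Trust enabled, the client refuses to pull
# images that lack signed trust data in the registry's Notary service.
export DOCKER_CONTENT_TRUST=1
export DOCKER_CONTENT_TRUST_SERVER=https://harbor.example.com:4443  # assumed Harbor Notary endpoint

docker pull harbor.example.com/library/signed-app:1.0    # succeeds: image is signed
docker pull harbor.example.com/library/unsigned-app:1.0  # fails: no trust data for the tag
```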
26. Harbor – Use Cases
PKS Stemcell
CVE in Root File
System of Container
CVE Exec Layer: TC
Server
CVE on the Container
Host OS
Vulnerability in
Code{}
Restage Applications
CVE FOUND
!!!
BOSH
CVE & Update Patching
• Patch OS Level via Stemcells
• Harbor Scans Images for
Vulnerability (Clair)
• Address CVE in minutes/hours
versus days/weeks
Application
Dev/Ops Owner
Platform
Reliability Engineer
OS CVE
FOUND !!!
Patched
Stemcell
Patched
Stemcell
Patched
Worker(s)
CONFIDENTIAL 26
27. 27
BOSH
VMware GCP Azure OpenStack AWS
Container
Registry
Kubernetes on BOSH (Kubo)
NSX-T
GCP
Service
Broker
master etcd worker | master etcd worker
PKS Controller
PKS Technical Overview
28. Worker Worker Worker
K8s
Master
K8s
Master
Kubernetes Components
• K8s Cluster Consists of Master(s)
and Nodes
• K8s Master Components
– API Server
– Scheduler
– Controller Manager
– Dashboard
• K8s Node Components
– Kubelet
– Kube-Proxy
– Container runtime (Docker for PKS 1.0)
28
Controller
Manager
K8s API
Server
Key-Value
Store
dashboard
Scheduler
K8s Nodes
kubelet c runtime
Kube-proxy
> _
Kubectl
CLI
K8s Master(s)
POD POD
Application
Dev/Ops Owner
CONFIDENTIAL
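A few kubectl commands exercise the components listed above (API server, scheduler, controller manager, etcd, kubelets) once cluster credentials are loaded. Illustrative only; output depends on the cluster.

```shell
# Health of the control-plane components behind the API server.
kubectl get componentstatuses

# Worker nodes, each registered by its kubelet.
kubectl get nodes

# Where the scheduler has placed Pods across those nodes.
kubectl get pods --all-namespaces -o wide
```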
29. K8s POD
Kubernetes Pod – Networking Basics
Special
‘Pause’ container
(‘owns’ the IP stack)
10.24.0.0/16
10.24.0.2
nginx
tcp/80
mgmt
tcp/22
logging
udp/514
IPC
External IP Traffic
• A Pod is a group of one or more co-located containers that share an IP address, PID namespace, and/or data volumes
CONFIDENTIAL 29
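The shared-IP Pod in the diagram (nginx on tcp/80 alongside management and logging containers, all behind one IP owned by the pause container) can be sketched as a manifest. Image names and the second container's role are illustrative assumptions.

```shell
# Two containers in one Pod: they share the Pod IP and reach each
# other over localhost, as the slide describes.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: web-with-logger
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80   # tcp/80 from the diagram
  - name: logger
    image: busybox
    # Illustrative sidecar: polls the nginx container via localhost,
    # demonstrating the shared network namespace.
    command: ["sh", "-c",
      "while true; do wget -qO- http://localhost:80 >/dev/null; sleep 60; done"]
EOF
```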
31. NSX-T & PKS Components
NSX Container Plugin (NCP)
• NCP is a software component provided by VMware in the form of a container image, e.g. to be run as a K8s Pod
• NCP is built in a modular way, so that individual adapters can be added for different CaaS and PaaS systems
CONFIDENTIAL 31
32. PKS & NSX-V
• PKS supported with NSX-V or without NSX
• Flannel overlay
• 1 flat SDN overlay per cluster
• 1 large CIDR “10.200.0.0/16”
• Each worker node routes a subnet for its Pods (example: 10.200.1.0/24)
• No integrated north/south load balancing
• No integrated security policy
32
K8s Cluster
K8s Cluster
Namespace 1 Namespace 2 Namespace 3
VXLAN Network
Namespace 1 Namespace 2 Namespace 3
• NSX-T
• Multiple logical switches (L2 domains) per namespace
• Routable as NAT or no-NAT
• Integrated load balancing (NSX-T 2.1)
• Integrated security policy
CONFIDENTIAL
33. PKS w/ NSX-T & NSX-V
• NSX-V and NSX-T can coexist
• Dedicated clusters for NSX-T managed hosts
• Can share a common vCenter backplane
33
NSX-T
Managed
Common vCenter
w/ NSX-v
managed Hosts
CONFIDENTIAL
40. PKS Telemetry – On vSphere
Who needs what?
40
Infra K8s Containers Apps Application
Dev/Ops Owner
Platform
Reliability Engineer
vRLI
vRops Wavefront
CONFIDENTIAL
41. Monitoring & Logging
41
METRICS
LOGS
Metrics & Logs emit from
many Sources:
• IaaS (vSphere)
• PKS K8s Platform
• Applications
• NSX
• Physical & Logical
Platform Reliability
Engineer MUST leverage
ALL of them
PKS Control
IaaS
CONFIDENTIAL
42. DaemonSet
DaemonSet
vRLI Logging w/ PKS
POD vRLI
POD
vRLI
• App Logging
• System Logging
– OS & Processes not
run in Containers
App Logging
• Per App Only
Sidecar
• App Logging @ Pod level
POD
Daemon
Set
(PODs)
vRLI
POD
LOGGER
DOCKERDDOCKERD
vRLI
DaemonSet
• App Logging @ Cluster level
• Cluster Logging
Dockerd
• App Logging @ Cluster level
• Cluster Logging
• Not handled in K8s API
SyslogD
Platform
Reliability Engineer
Application
Dev/Ops Owner
&
CONFIDENTIAL 42
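The DaemonSet pattern on this slide (app logging at cluster level: one forwarder Pod per worker node) can be sketched as a manifest. The forwarder image, the vRLI endpoint variable, and all names are assumptions for illustration; a real vRLI integration would use VMware's published agent and configuration.

```shell
# One log-forwarder Pod per worker node, tailing container logs
# from the host for the whole cluster.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-forwarder
  namespace: kube-system
spec:
  selector:
    matchLabels: {app: log-forwarder}
  template:
    metadata:
      labels: {app: log-forwarder}
    spec:
      containers:
      - name: forwarder
        image: fluent/fluentd:latest
        env:
        - name: SYSLOG_HOST          # assumed vRLI ingestion endpoint
          value: vrli.example.com
        volumeMounts:
        - name: varlog
          mountPath: /var/log        # host logs visible to the forwarder
      volumes:
      - name: varlog
        hostPath: {path: /var/log}
EOF
```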
43. Wavefront & PKS
K8s Monitoring Integration w/
Wavefront by VMware
Wavefront Integration can be
deployed as containers within the
K8s Cluster
– Proxy
– Heapster
• Comprehensive Dashboards
– SaaS
• APM for the Developer
• Cluster KPIs for the Operator
• Integrated with PKS
Image source: https://www.wavefront.com/surf-container-wave-join-wavefront-container-world-santa-clara/
Platform
Reliability Engineer
Application
Dev/Ops Owner
CONFIDENTIAL 43
Walk-through of Containers 101
Describe the benefits of containers and establish a common understanding for the K8s discussion.
With today's announcements about PKS, let's look a little at how K8s differs from PCF.
From the Developer point of view:
I check my code in just like if I were pushing to PCF
But in addition to application artifacts, the pipeline is going to build an image for me …
In this visual we have a K8S cluster already running docker as the backend container engine, so our CI/CD pipeline will build a docker image for us and post it to a registry, in this case VMware Harbor
After which, the pipeline will instantiate a K8s deployment to run our Docker-image-based application as a set of pods in a replica set, in case a worker node goes offline.
The developer can then create a ‘service’ that gives worker nodes (or any external node) running the kube-proxy service the ability to route to where those pods are and access the apps/microservices running in them.
Ingress routing from external is similar to that of CF with an external DNS map being required to forward requests to 1 or more worker nodes running kube-proxy
One of the key differences is that Kubernetes isn’t opinionated on how the container image should be built, this give more flex to the developers but in some cases can make things more difficult for operators as we’ll see later on in the presentation
Agility is why developers want it
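The workflow in these notes (pipeline builds an image, posts it to Harbor, then instantiates a Deployment and exposes it via a Service) condenses to one manifest. A sketch only: the registry path, app name, and port are illustrative.

```shell
# Run the pipeline-built image from Harbor as a replicated Deployment,
# then expose it so kube-proxy can route traffic to the Pods.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2                  # replica set keeps the app up if a worker goes offline
  selector:
    matchLabels: {app: myapp}
  template:
    metadata:
      labels: {app: myapp}
    spec:
      containers:
      - name: myapp
        image: harbor.example.com/library/myapp:1.0   # assumed registry path
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: NodePort               # kube-proxy on each node routes to the Pods
  selector: {app: myapp}
  ports:
  - port: 80
EOF
```

External DNS then maps a hostname to one or more worker nodes running kube-proxy, mirroring the ingress path described above.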
Let's walk through what makes PCF so powerful…
From the Developer point of view:
I write my code {}
I check it into a repository
A CI/CD pipeline then builds & tests my code, then outputs an ‘artifact’. In this visual, we will use a java app, so it’s a war.
The pipeline then ‘pushes’ the artifact to PCF to stage
From here its all up to the platform ….
Staging occurs, where an image called a ‘droplet’ is built by combining (1) a read-only root filesystem, (2) a buildpack, a tarball that contains the exec components (tc Server, for example, to run a Java app), and (3) the app artifact.
After staging, the app can now be run. For example if we say that we want 2 instances of the application, PCF will launch 2 containers using the same droplet image we just compiled and schedule them across CF Availability Zones. This gives us the ability to keep our app up if an AZ were to go offline.
PCF also creates a route map for our application so when a request is forwarded to it, the request can be routed to the correct containers. PCF calls these containers Application Instances or AIs
Developers also benefit from a rich set of buildpacks in the platform supporting many application dev frameworks. Even .NET apps with Windows container hosts are supported by PCF.
Agility is why developers want it
Application Purchases Will Increasingly Be "Build," Not "Buy"
Gartner predicts that by 2020, 75 percent of application purchases supporting digital business will be "build," not "buy." Gartner's research shows that many organizations already favor a new kind of "build" that does not include out-of-the-box solutions, but instead is a combination of application components that are differentiated, innovative and not standard software or software with professional services (for customization and integration requirements), or solutions that are increasingly sourced from startups, disrupters or specialized local providers.
http://www.gartner.com/newsroom/id/3119717
Adopting Agile processes is a key driver to help a business digitally transform. Software truly is eating the world.
The key for these businesses is changing not only the way apps are coded (for example, cloud native/12-factor) but also the processes by which they are built and operationalized.
Speed: Compose apps as microservices to allow more scalable and rapid development. Work in smaller releases to shorten sprints.
Automation: Automate everything. It reduces risk and increases speed
Quality: Test Driven coding, tests should be part of the pipeline, if a fault is found, tests go back into the pipeline.
Agility: Release often, design apps and pipelines to allow for frequent pushes.
By making the first task on any software effort “delivery” - deploy the code somewhere, even if it doesn’t do anything.
And then keep doing that every time you change anything…
In the ‘new stack’ required for an agile world, the Developer and the Operator need to act as one, or at least as a one-pizza team (or two-pizza if they are hungry), sort of like the acronym DevOps.
This means that just like the Developer needs everything API-driven & self-service from the platform, the Platform Operator also needs everything API-driven & self-service from the infrastructure. The DevOps team can't lob stuff over the fence; they own it!
API server: Target for all operations to the data model. External API clients like the K8s CLI client, the dashboard Web-Service, as well as all external and internal components interact with the API server by ’watching’ and ‘setting’ resources
Scheduler: Monitors Container (Pod) resources on the API Server, and assigns Worker Nodes to run the Pods based on filters
Controller Manager: Embeds the core control loops shipped with Kubernetes. In Kubernetes, a controller is a control loop that watches the shared state of the cluster through the apiserver and makes changes attempting to move the current state towards the desired state
Etcd: Is used as the distributed key-value store of Kubernetes
Watching: In etcd and Kubernetes everything is centered around ‘watching’ resources. Every resource can be watched in K8s on etcd through the API Server
Kubelet: The Kubelet agent on the Nodes is watching for ‘PodSpecs’ to determine what it is supposed to run
Kubelet: Instructs Container runtimes to run containers through the container runtime API interface
Docker: Is the most used container runtime in K8s. However K8s is ‘runtime agnostic’, and the goal is to support any runtime through a standard interface (CRI-O)
Rkt: Besides Docker, Rkt by CoreOS is the most visible alternative, and CoreOS drives a lot of standards like CNI and CRI-O
Kube-Proxy: Is a daemon watching the K8s ‘services’ on the API Server and implements east/west load-balancing on the nodes using NAT in IPTables
POD: A pod (as in a pod of whales or pea pod) is a group of one or more containers
Networking: Containers within a pod share an IP address and port space, and can find each other via localhost. They can also communicate with each other using standard inter-process communications like SystemV semaphores or POSIX shared memory
Pause Container: A service container named ‘pause’ is created by Kubelet. Its sole purpose is to own the network stack (linux network namespace) and build the ‘low level network plumbing’
External Connectivity: Only the pause container is started with an IP interface
Storage: Containers in a Pod also share the same data volumes
Motivation: Pods are a model of the pattern of multiple cooperating processes which form a cohesive unit of service
(click) Configure a vSphere Cloud Provider manifest. Provide key info like vCenter credentials & default datastores
(click) Restart all core K8s components & add new flags to enable the vSphere Cloud Provider (API server, K8s controller manager, & kubelets)
(click) Create a K8s Persistent Volume
The kubectl command applies the YAML via the K8s API…
The kubelet picks up the work and uses the configured storage provider
The Persistent Volume is created on the datastore (can even optionally pass vSAN storage classes for SPBM)
(click) The vmdk is represented as a K8s PersistentVolume
A running POD can now make a PersistentVolumeClaim and mount the volume
https://vmware.github.io/vsphere-storage-for-kubernetes
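The click-through above can be sketched as manifests for the in-tree vSphere volume provisioner: a StorageClass, then a claim that causes a VMDK to be created on the datastore. Class, claim, datastore names, and sizes are illustrative.

```shell
# StorageClass backed by the vSphere Cloud Provider, plus a claim
# against it; binding the claim provisions a VMDK as a PersistentVolume.
kubectl apply -f - <<'EOF'
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: vsphere-thin
provisioner: kubernetes.io/vsphere-volume   # in-tree vSphere provisioner
parameters:
  diskformat: thin
  datastore: datastore1                     # assumed default datastore
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myapp-data
spec:
  storageClassName: vsphere-thin
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 2Gi
EOF
```

A Pod then references `myapp-data` in its `volumes` section to mount the volume, completing the flow described in the notes.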
A PCF deployment will emit various logs & metrics from various sources.
How do we modernize IT and Applications across multiple clouds and multiple platforms:
1. Make the cloud easy: create/deploy OOTB content & integrations for a private cloud
A.) Easy deploy (LCM)
B.) Quick TTV (OOTB dashboards, sizing, workflows, integrations): Infoblox, ServiceNow, Puppet, Terraform, OOTB content
C.) SaaS services
2. Simplify dev consumption: Unified consumption model across all clouds
A.) Globally consistent IaaS (API)
B.) Blueprints and Iterative dev
C.) Integrated catalog of services and pipeline
3. Consistent, unified ops: Unified Ops for all apps across platforms
A.) Closed loop workload scheduling (Automatically place and re-balance VMs)
B.) Real-time full-stack troubleshooting and monitoring (Wavefront) (extra slide)
C.) App intelligence (bringing together infra and apps, NI, apps, infra metrics) (possible extra slide)