Building infrastructure as code using Terraform - DevOps Krakow (Anton Babenko)
This document provides an overview of a DevOps meetup on building infrastructure as code using Terraform. The agenda includes Terraform basics, frequent questions, and problems. The presenter then discusses Terraform modules, tools, and solutions. He addresses common questions like secrets handling and integration with other tools. Finally, he solicits questions from the audience on Terraform use cases and challenges.
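Two of those recurring answers (reusable modules and secrets handling) fit in a short HCL sketch; this is illustrative only, not taken from the talk. The registry module shown is a public community module, and every name and value is a placeholder:

```hcl
# Secrets handling: mark inputs as sensitive so Terraform redacts them
# from plan/apply output (pair this with a secret store, not tfvars in git).
variable "db_password" {
  type      = string
  sensitive = true
}

# Modules: reuse a versioned registry module instead of copy-pasting resources.
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  name = "demo"
  cidr = "10.0.0.0/16"
}
```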
Docker Kubernetes Istio
Understanding Docker and creating containers.
Container Orchestration based on Kubernetes
Blue Green Deployment, AB Testing, Canary Deployment, Traffic Rules based on Istio
The document provides an overview of Terraform and discusses why it was chosen over other infrastructure as code tools. It outlines an agenda covering Terraform installation, configuration, and use of data sources and resources to build example infrastructure including a VCN, internet gateway, subnets, and how to taint and destroy resources. The live demo then walks through setting up Terraform and using it to provision example OCI resources.
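A rough HCL illustration of the infrastructure such a demo builds (not the presenter's actual code; the compartment variable, names, and CIDRs are placeholders):

```hcl
variable "compartment_ocid" {
  type = string
}

resource "oci_core_vcn" "demo" {
  compartment_id = var.compartment_ocid
  cidr_block     = "10.0.0.0/16"
  display_name   = "demo-vcn"
}

resource "oci_core_internet_gateway" "demo" {
  compartment_id = var.compartment_ocid
  vcn_id         = oci_core_vcn.demo.id
  display_name   = "demo-igw"
}

resource "oci_core_subnet" "demo" {
  compartment_id = var.compartment_ocid
  vcn_id         = oci_core_vcn.demo.id
  cidr_block     = "10.0.1.0/24"
}

# Force one resource to be recreated on the next apply, then tear it all down:
#   terraform taint oci_core_subnet.demo && terraform apply
#   terraform destroy
```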
Deploying Elasticsearch and Kibana on Kubernetes with the Elastic Operator / ECK (Imma Valls Bernaus)
Managing an Elasticsearch deployment on Kubernetes can be challenging: orchestrating a deployment or an upgrade is not a simple task. Our operator will help you easily manage simple or complex deployments like hot/warm/cold.
In this talk, Janko Strassburg and Imma Valls, Sr. Support Engineers at Elastic will demonstrate how to use the new operator, Elastic Cloud on Kubernetes (ECK) to automate deployments and manage an Elasticsearch cluster.
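For a flavor of ECK's declarative API, here is a minimal sketch expressed through Terraform's kubernetes_manifest resource, assuming the ECK operator is already installed and the kubernetes provider is configured; names, namespace, version, and sizing are placeholders:

```hcl
resource "kubernetes_manifest" "elasticsearch" {
  manifest = {
    apiVersion = "elasticsearch.k8s.elastic.co/v1"
    kind       = "Elasticsearch"
    metadata   = { name = "demo", namespace = "elastic" }
    spec = {
      version = "8.13.0"
      nodeSets = [{
        name  = "default"
        count = 3
        config = { "node.store.allow_mmap" = false }
      }]
    }
  }
}

resource "kubernetes_manifest" "kibana" {
  manifest = {
    apiVersion = "kibana.k8s.elastic.co/v1"
    kind       = "Kibana"
    metadata   = { name = "demo", namespace = "elastic" }
    spec = {
      version          = "8.13.0"
      count            = 1
      elasticsearchRef = { name = "demo" } # wires Kibana to the cluster above
    }
  }
}
```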
"Yahoo! JAPAN の Kubernetes-as-a-Service" で加速するアプリケーション開発Yahoo!デベロッパーネットワーク
This document discusses automating Kubernetes deployments using Kubernetes-as-a-Service. It defines a CustomResourceDefinition for Kubernetes clusters that includes specifications for the Kubernetes version, number of master and worker nodes, and hardware flavors. It also includes an example KubernetesCluster resource definition.
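The deck's actual CRD is not reproduced here, but a custom resource along the lines it describes could look like the following hypothetical sketch; the API group and every field name are assumptions made for illustration:

```hcl
resource "kubernetes_manifest" "team_cluster" {
  manifest = {
    apiVersion = "kaas.example.com/v1" # hypothetical API group/version
    kind       = "KubernetesCluster"
    metadata   = { name = "team-a", namespace = "kaas" }
    spec = {
      version = "1.29.0"                           # Kubernetes version to provision
      master  = { count = 3, flavor = "m1.large" } # hypothetical node-pool fields
      worker  = { count = 10, flavor = "m1.xlarge" }
    }
  }
}
```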
Red Hat OpenShift on Bare Metal and Containerized Storage (Greg Hoelzer)
OpenShift Hyper-Converged Infrastructure allows building a container application platform from bare metal using containerized Gluster storage without virtualization. The document discusses building a "Kontainer Garden" test environment using OpenShift on RHEL Atomic hosts with containerized GlusterFS storage. It describes configuring and testing the environment, including deploying PHP/MySQL and .NET applications using persistent storage. The observations are that RHEL Atomic is mature enough to evaluate for containers, and Docker/Kubernetes with containerized storage provide an alternative to virtualization for density and scale.
Build a containerized PHP app with Web App for Containers + MySQL! (Yoichi Kawasaki)
Because Web App for Containers uses Docker containers to host the application stack, the OSS-based apps you run on Linux today can be containerized together with their stacks and run on Web App for Containers as-is. Using a simple MySQL + PHP app (WordPress) as the example, this webinar walks through the full flow of containerizing the app and deploying it to Web App for Containers, then introduces continuous deployment with CI tools. It uses Azure DB for MySQL, Azure's fully managed MySQL service, to run the app in a completely managed environment.
Confluent Operator as Cloud-Native Kafka Operator for Kubernetes (Kai Wähner)
Agenda:
- Cloud Native vs. SaaS / Serverless Kafka
- The Emergence of Kubernetes
- Kafka on K8s Deployment Challenges
- Confluent Operator as Kafka Operator
- Q&A
Confluent Operator enables you to:
- Provision, manage, and operate Confluent Platform (including ZooKeeper, Apache Kafka, Kafka Connect, KSQL, Schema Registry, REST Proxy, and Control Center)
- Deploy on any Kubernetes platform (vanilla K8s, OpenShift, Rancher, Mesosphere, Cloud Foundry, Amazon EKS, Azure AKS, Google GKE, etc.)
- Automate provisioning of Kafka pods in minutes
- Monitor SLAs through Confluent Control Center or Prometheus
- Scale Kafka elastically, handle failover, and automate rolling updates
- Automate security configuration
It is built on our first-hand knowledge of running Confluent at scale and is fully supported for production usage; a minimal install sketch follows this list.
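A minimal install sketch using Terraform's helm provider, assuming Confluent's public chart repository and the confluent-for-kubernetes chart (the current packaging of the operator); the release and namespace names are placeholders:

```hcl
resource "helm_release" "confluent_operator" {
  name             = "confluent-operator"
  repository       = "https://packages.confluent.io/helm" # assumed public repo
  chart            = "confluent-for-kubernetes"           # assumed chart name
  namespace        = "confluent"
  create_namespace = true
}
```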
Terraform in production - experiences, best practices and deep dive - Piotr Ki... (PROIDEA)
In my presentation I would like to share my experiences of working with Terraform in various infra projects (ECS/Kops/core-infra types). I am going to share what is "common sense" in deploying projects with Terraform, along with several different approaches: Should I use a module? Should I write my own? How should I structure a repo with code? Terraform in Terraform (kops example)?
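One common answer to those structuring questions, sketched with hypothetical paths: a thin root module per environment that pins shared modules and keeps state remote.

```hcl
# environments/prod/main.tf -- a thin per-environment root module (hypothetical layout)
terraform {
  backend "s3" {
    bucket = "example-tf-state" # placeholder bucket
    key    = "prod/core-infra.tfstate"
    region = "eu-west-1"
  }
}

module "ecs_cluster" {
  source = "../../modules/ecs-cluster" # in-house module...
  # source = "terraform-aws-modules/ecs/aws" # ...or a version-pinned community one

  environment = "prod"
}
```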
KubeVirt (Kubernetes and Cloud Native Toronto) (Stephen Gordon)
KubeVirt enables running virtual machines alongside containers on Kubernetes clusters. It allows virtual machines to be scheduled and managed just like containers. KubeVirt focuses on enabling existing virtualized workloads to run on Kubernetes and integrates features like storage, networking, metrics, and monitoring. Example use cases include starting with a virtual machine, building new services on VMs and containers together, and decomposing existing virtualized workloads.
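A minimal sketch of the "start with a virtual machine" use case, assuming the KubeVirt operator is installed, again via Terraform's kubernetes_manifest resource; the container disk image and sizing are placeholders:

```hcl
resource "kubernetes_manifest" "vm" {
  manifest = {
    apiVersion = "kubevirt.io/v1"
    kind       = "VirtualMachine"
    metadata   = { name = "demo-vm", namespace = "default" }
    spec = {
      running = true # start the VM as soon as it is created
      template = {
        spec = {
          domain = {
            devices   = { disks = [{ name = "root", disk = { bus = "virtio" } }] }
            resources = { requests = { memory = "1Gi" } }
          }
          volumes = [{
            name          = "root"
            containerDisk = { image = "quay.io/containerdisks/fedora:latest" }
          }]
        }
      }
    }
  }
}
```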
SlideTeam presents Kubernetes Docker Container Implementation Ppt PowerPoint Presentation Slide Templates. This PPT slideshow is an ideal virtual expression of the fundamentals of Kubernetes. The smart data-visualizations make this PowerPoint presentation easy-to-understand and perfect to introduce your audience to the container orchestration system. Use our PPT theme to communicate the definition and need for containers or virtual private servers. Communicate the container, and microservices architecture using cutting-edge graphics. Explain the need for and benefits of Kubernetes for an organization. Elucidate the features, architecture, use cases, installation roadmap, and the 30-60-90 day plan in Kubernetes. Use the neat tabular format to compare Kubernetes with docker swarm based on various parameters. Familiarize your viewers with the various components of Kubernetes. Elaborate on what is Kubelet, Kubectl, and Kubeadm with the help of labeled diagrams. This presentation acquaints your audience with the significance of Kubernetes in management, scaling, automating, and deploying computer applications. Hit the download icon and start personalization. https://bit.ly/2L0Ojdu
The EFK Stack is a combination of the open-source projects Elasticsearch, Fluentd, and Kibana: a solution for collecting, storing, analyzing, and visualizing massive amounts of data quickly and in real time. It is the technology stack most commonly used for log collection in container environments.
This deck introduces the Elastic Stack and explains how to install the EFK Stack.
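A minimal sketch of such an install driven from Terraform's helm provider, assuming Helm access to the public Elastic and Fluent chart repositories; the namespace and release names are placeholders:

```hcl
resource "helm_release" "elasticsearch" {
  name             = "elasticsearch"
  repository       = "https://helm.elastic.co"
  chart            = "elasticsearch"
  namespace        = "logging"
  create_namespace = true
}

resource "helm_release" "kibana" {
  name       = "kibana"
  repository = "https://helm.elastic.co"
  chart      = "kibana"
  namespace  = "logging"
}

# Fluentd runs as a DaemonSet so every node ships its container logs.
resource "helm_release" "fluentd" {
  name       = "fluentd"
  repository = "https://fluent.github.io/helm-charts"
  chart      = "fluentd"
  namespace  = "logging"
}
```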
Infrastructure-as-Code (IaC) Using Terraform (Intermediate Edition) (Adin Ermie)
In this presentation, we will cover intermediate Terraform topics including alternative providers, collection types, loops and conditionals, and resource lifecycles. We will also focus on reusability with a discussion on modules, data sources, and remote state (including live demo examples).
Finally, we take an initial look at a full DevOps process with a quick review of Workspaces and Terraform Cloud, and wrap everything up with some key takeaway learning resources for your Terraform learning adventure.
NOTE: A recording of this presentation can be found here: https://youtu.be/0CEF4eZ6HiQ
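Three of those intermediate features in one compact, hypothetical HCL sketch (resource names and values are placeholders): for_each over a collection, a count-based conditional, and a data source.

```hcl
# Loops: for_each creates one resource instance per element of a collection.
variable "buckets" {
  type    = set(string)
  default = ["logs", "artifacts"]
}

resource "aws_s3_bucket" "this" {
  for_each = var.buckets
  bucket   = "example-${each.key}" # placeholder naming scheme
}

# Conditionals: count as a resource-level on/off switch.
variable "create_alerts" {
  type    = bool
  default = false
}

resource "aws_sns_topic" "alerts" {
  count = var.create_alerts ? 1 : 0
  name  = "alerts"
}

# Data sources: read existing state instead of creating resources.
data "aws_caller_identity" "current" {}

output "account_id" {
  value = data.aws_caller_identity.current.account_id
}
```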
Devops, the future is here, it's just not evenly distributed yet. (Kris Buytaert)
This document discusses the DevOps movement and how operations and development teams can work more collaboratively. Some key points:
- DevOps aims to break down barriers between development and operations teams through better communication and automation.
- In the past, developers would deploy code without considering operational requirements, leading to problems once code was in production. DevOps promotes developing and deploying code as a team effort between devs and ops.
- Automating processes like configuration management, continuous integration, deployment and monitoring helps align dev and ops goals and allows more frequent, lower-risk deployments. Tools like Puppet, Chef, Jenkins and Nagios are mentioned.
- The document advocates for practices like test-driven development.
Integrating Docker EE into Société Générale's Existing Enterprise IT Systems (Docker, Inc.)
Société Générale knows that containers and the cloud are the future of the IT industry and have been using Docker EE for over a year and a half. In this talk, we will share how Docker EE fits into our global strategy and our architecture for integrating the platform to our existing IT systems. We will go over tradeoffs of how we operationalized the platform to provide a highly available CAAS to our global enterprise. Finally, we will share how we are onboarding development teams and deploying their applications to production.
[Open TechNet Summit 2022] Korean PaaS (Kubernetes) Best Practices and DevOps Environment Case Studies (Open Source Consulting)
Recently, financial institutions and public agencies have been investing heavily in building PaaS-based systems for next-generation projects and implementing microservices architecture (MSA) on top of them. When companies consider open-source-based infrastructure, they often run into difficulties with technical support, version upgrades, and the like. One solution to these problems is to leverage community-driven open-source foundations.
This material covers the advantages of building infrastructure on community open source, along with real-world case studies.
OpenShift 4, the smarter Kubernetes platform (Kangaroot)
OpenShift 4 introduces automated installation, patching, and upgrades for every layer of the container stack from the operating system through application services.
Terraform can be used to automate the deployment and management of infrastructure as code. It allows defining infrastructure components like VMs, networks, DNS records etc. as code in configuration files. Key benefits include versioning infrastructure changes, consistency across environments, and automation of deployments. The document then provides details on installing Terraform, using common commands like plan, apply and import, defining resources, variables, modules and managing remote state. It also demonstrates creating an EC2 instance using a generated AMI.
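A minimal sketch of that closing example with placeholder names and filters: a data source looks up the image (the same pattern works whether the AMI was generated by a tool like Packer or published by a vendor), a resource consumes it, and the commands from the summary drive the workflow.

```hcl
# Look up the newest matching AMI instead of hard-coding an image ID.
data "aws_ami" "al2023" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["al2023-ami-*-x86_64"]
  }
}

resource "aws_instance" "web" {
  ami           = data.aws_ami.al2023.id
  instance_type = "t3.micro"
  tags          = { Name = "demo" }
}

# Preview, create, or adopt existing infrastructure:
#   terraform plan
#   terraform apply
#   terraform import aws_instance.web i-0123456789abcdef0  # placeholder instance ID
```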
Distributed storage performance for OpenStack clouds using small-file IO work... (Principled Technologies)
OpenStack cloud environments demand strong storage performance to handle the requests of end users. Software-based distributed storage can provide this performance while also providing much needed flexibility for storage resources.
In our tests, we found that Red Hat Storage Server better handled small-file IO workloads than did Ceph Storage, handling up to two times the number of files per second in some instances. The smallfile tool we used simulated users performing actions on their files to show the kind of end-user performance you could expect using both solutions at various node, VM, and thread counts.
These results show that Red Hat Storage Server can provide equivalent or better performance than Ceph Storage for similar workloads in OpenStack cloud environments, which can help users better access the files they keep in the cloud.
Combine containerization and GPU acceleration on VMware: Dell PowerEdge R750 ... (Principled Technologies)
Our results running a vGPU-accelerated deep learning image-classification workload in this environment
We ran a deep learning image-classification workload on a Dell PowerEdge R750 server with an NVIDIA A100 Tensor Core GPU that was in a VMware vSphere with Tanzu Kubernetes container environment along with two other servers. Using GPU virtualization to allow 10 containers to share the single GPU, our test solution achieved a maximum of 29,896 samples per second. With a single container using all of the GPU resources and a larger batch size, the solution achieved a maximum of 34,352 samples per second. These results show that the PowerEdge R750 with an NVIDIA A100 Tensor Core GPU in a VMware Kubernetes environment with GPU virtualization can support flexibly apportioning GPU compute capability across multiple machine learning workloads in Kubernetes clusters.
VMware vSphere 7 Update 2 offered greater VM density and increased availabili... (Principled Technologies)
vSphere not only supported more VMs than the container-native virtualization approach in OpenShift, but it also required less downtime and less hands-on admin time
How to Increase Performance of Your Hadoop Cluster (Altoros)
Three identical Apache Hadoop clusters were provisioned on Joyent infrastructure using different operating systems: SmartOS, Ubuntu, and KVM virtual machines. Monitoring showed the Ubuntu and KVM clusters spent more time in the OS kernel during I/O operations compared to the SmartOS cluster. The SmartOS cluster was able to utilize CPU resources more efficiently and scale to more mappers and reducers. Basic cluster configuration and tuning the number of map and reduce tasks are important to optimize Hadoop performance.
Intel(R) Xeon(R) E7 v3-based X6 platforms + Lenovo Flex System Interconnect Fabric solutions deliver a highly reliable, cost-efficient, and scalable system for your data center.
The Best Infrastructure for OpenStack: VMware vSphere and Virtual SAN (EMC)
Side-by-side OpenStack benchmarking conducted by a third-party technical validation firm shows that infrastructure based on vSphere with VSAN performs better than a similar Red Hat stack built upon KVM and Red Hat Storage (GlusterFS). OpenStack workloads run faster on vSphere than on KVM and cost less - benefits of the hyperconverged Virtual SAN architecture.
Cost and performance comparison for OpenStack compute and storage infrastructure (Principled Technologies)
To be competitive in the cloud, businesses need more efficient hardware and software for their OpenStack environment and solutions that can pool physical resources efficiently for consumption and management through OpenStack. Software-defined storage has made it easier to create storage resource pools that spread across your datacenter, but external or separate distributed storage systems are still a challenge for many. VMware vSphere with Virtual SAN converges storage resources with the hypervisor, which can allow for management of an entire infrastructure through a single vCenter Server, increase performance, save space, and reduce costs.
Both solutions used the same twelve drive bays per storage node for virtual disk storage; however, the tiered design of VMware Virtual SAN allowed for greater performance in our tests. We used a mix of two SSDs and 10 rotational drives for the VMware Virtual SAN solution, while we used 12 rotational drives behind a battery-backed RAID controller for the Red Hat Storage Server solution, the Red Hat-recommended approach.
In our testing, the VMware vSphere with Virtual SAN solution performed better than the Red Hat Storage solution in both real world and raw performance testing by providing 53 percent more database OPS and 159 percent more IOPS. In addition, the vSphere with Virtual SAN solution can occupy less datacenter space, which can result in lower costs associated with density. A three-year cost projection for the two solutions showed that VMware vSphere with Virtual SAN could save your business up to 26 percent in hardware and software costs when compared to the Red Hat Storage solution we tested.
Improve performance and gain room to grow by easily migrating to a modern Ope... (Principled Technologies)
We deployed this modern environment, then migrated database VMs from legacy servers and saw performance improvements that support consolidation
Conclusion
If your organization’s transactional databases are running on gear that is several years old, you have much to gain by upgrading to modern servers with new processors and networking components and an OpenShift environment. In our testing, a modern OpenShift environment with a cluster of three Dell PowerEdge R7615 servers with 4th Generation AMD EPYC processors and high-speed 100Gb Broadcom NICs outperformed a legacy environment with MySQL VMs running on a cluster of three Dell PowerEdge R7515 servers with 3rd Generation AMD EPYC processors and 25Gb Broadcom NICs. We also easily migrated a VM from the legacy environment to the modern environment, with only a few steps required to set up and less than ten minutes of hands-on time. The performance advantage of the modern servers would allow a company to reduce the number of servers necessary to perform a given amount of database work, thus lowering operational expenditures such as power and cooling and IT staff time for maintenance. The high-speed 100Gb Broadcom NICs in this solution also give companies better network performance and networking capacity to grow as they embrace emerging technologies such as AI that put great demands on networks.
This is a level 200-300 presentation.
It assumes:
- Good understanding of vCenter 4, ESX 4, and ESXi 4, preferably hands-on; we will only cover the delta between 4.1 and 4.0
- Overview understanding of related products like VUM, Data Recovery, SRM, View, Nexus, Chargeback, CapacityIQ, vShield Zones, etc.
- Good understanding of related storage, server, and network technology
Target audience: VMware Specialists (SEs and delivery engineers from partners)
Performance of persistent apps on Container-Native Storage for Red Hat OpenSh... (Principled Technologies)
The document summarizes benchmark testing of Container-Native Storage (CNS) on Red Hat OpenShift Container Platform using two different storage media: solid-state drives (SSDs) and hard disk drives (HDDs). It tested CNS under IO-intensive and CPU-intensive workloads. For the IO-intensive workload, the SSD configuration significantly outperformed the HDD configuration, achieving over 5 times the maximum performance. However, for the CPU-intensive workload, the two configurations achieved similar maximum performance levels, with SSDs having less than a 5% improvement. The testing demonstrated that CNS can provide scalable storage and that storage performance depends on understanding how it matches the workload characteristics.
Ceph Day Shanghai - Hyper Converged PLCloud with Ceph (Ceph Community)
This document discusses PowerLeader Cloud (PLCloud), a cloud computing platform that uses a hyper-converged infrastructure with OpenStack, Docker, and Ceph. It provides an overview of PLCloud and how it has adopted OpenStack, Ceph, and other open source technologies. It then describes PLCloud's hyper-converged architecture and how it leverages OpenStack, Docker, and Ceph. Finally, it discusses a specific use case where Ceph RADOS Gateway is used for media storage and access in PLCloud.
Most medium and large-sized IT organizations have deployed several generations of virtualized servers, and they have become more comfortable with the performance and reliability with each deployment. As IT organizations increased virtual machine (VM) density, they reached the limits of vSphere software, server memory, CPU, and I/O.
A new VM engine is now available and this document describes how it can help IT organizations maximize use of their servers running VMware® vSphere® 5.1 (henceforth referred to as vSphere 5.1).
Open Source Software on OpenPOWER systems.
With 100% open source system software (including the firmware), OpenPOWER is the most open server architecture in the market. Based on the IBM POWER8 chip, this new family of servers featuring the latest Nvidia NVLink technology runs all the software solutions presented at OPEN'16 with significant cost advantages. This session explains how Docker, EnterpriseDB and many others benefit from this advanced design, and how 200+ technology companies including Google and RackSpace are collaborating in an open development alliance to build the datacenter of the future.
The document discusses PROSE (Partitioned Reliable Operating System Environment), an approach that runs applications in specialized kernel partitions for finer control over system resources and improved reliability. It aims to simplify development of specialized kernels and enable resource sharing across partitions. The approach is evaluated using IBM's research hypervisor rHype, which shows PROSE can reduce noise and provide more deterministic performance than Linux. Future work focuses on running larger commercial workloads and further performance/noise experiments.
This document summarizes a presentation about Ceph, an open-source distributed storage system. It discusses Ceph's introduction and components, benchmarks Ceph's block and object storage performance on Intel architecture, and describes optimizations like cache tiering and erasure coding. It also outlines Intel's product portfolio in supporting Ceph through optimized CPUs, flash storage, networking, server boards, software libraries, and contributions to the open source Ceph community.
Ceph: Open Source Storage Software Optimizations on Intel® Architecture for C... (Odinot Stanislas)
After a short introduction to distributed storage and a description of Ceph, Jian Zhang presents some interesting benchmarks in this deck: sequential tests, random tests, and above all a comparison of results before and after optimizations. The configuration parameters involved and the optimizations (large page numbers, omap data on a separate disk, ...) yield at least a 2x performance gain.
This document discusses optimizing MySQL performance when deployed in cloud environments. Some key points:
1) Deploying MySQL in the cloud presents new challenges like increased IO latency from network storage, but also opportunities to scale horizontally across multiple servers.
2) InnoDB performance is particularly important to optimize, through techniques like reducing mutex contention, increasing IO throughput, and using patches that improve handling of higher latency storage.
3) Benchmarks show that direct attached storage outperforms network storage, and optimizations like IO threads and capacity tuning can help mitigate higher latencies of network storage. Virtualization also incurs some performance overhead.
Consolidate SAS 9.4 workloads with Intel Xeon processor E7 v2 and Intel SSD t... (Principled Technologies)
A key to modernizing your data center is to consolidate your legacy workloads through virtualization, which can help reduce complexity for your business. Fewer servers require fewer physical resources, such as power, cabling, and switches, and reduce the burden on IT for ongoing management tasks such as updates. In addition, integrating newer hardware technology into your data center can provide new features that strengthen your infrastructure, such as RAS features on the processor and disk performance improvements. Finally, using SAS 9.4 ensures that you have the latest features and toolsets that SAS can offer.
Compared to a legacy server, we found that a modern four-socket server powered by Intel Xeon processors E7-4890 v2 with Intel SSD DC P3700 Series provided eight times the amount of SAS work, over 11 times the relative performance, and a shorter average time to complete the SAS workload. Running eight virtual SAS instances also left capacity on the server for additional work. Consolidating your SAS workloads from legacy servers onto servers powered by Intel Xeon processors E7 v2 and SAS 9.4 can provide your business with the latest hardware and software features, reduce complexity in your data center, and potentially reduce costs for your business.
This document summarizes experience with Red Hat Enterprise Linux, including resolving leap second issues, hardware sizing, kernel parameter tuning, patching management, storage configuration, RAID and LVM management, cloning VMs, upgrading versions, on-call support, iSCSI storage, automount, and testing Red Hat 7. It also covers VMware administration including ESXi/ESX upgrades, VMware Tools upgrades, vMotion, P2V, networking, backups with Symantec, and performance monitoring and troubleshooting.
Similar to Pod density comparison: VMware vSphere with Tanzu vs. a bare-metal approach using Red Hat OpenShift 4.7
Help skilled workers succeed with Dell Latitude 7030 and 7230 Rugged Extreme ... (Principled Technologies)
Instead of equipping consumer-grade tablets with rugged cases
Conclusion
In our hands-on testing, the Dell Latitude 7030 and 7230 Rugged Extreme Tablets showed that they are better equipped to help skilled workers than consumer-grade Apple iPad Pro and Samsung Galaxy Tab S9 tablets in multiple ways. They provide more built-in capabilities and features than the consumer-grade tablets we tested. And, while they were more expensive than the rugged-case fortified consumer-grade options we tested, their rugged claims were more than skin deep.
In our performance and durability tests, the Dell Latitude 7030 and 7230 Rugged Extreme Tablets performed better in demanding manufacturing, logistics, and field service environments than consumer-grade tablets with rugged cases. Both Rugged Extreme Tablets, with their greater thermal range, suffered less performance degradation in extreme temperatures, never failed and were merely scuffed after 26 hard drops, survived a 10-minute drenching with no ill effects, and were easier to view in direct sunlight than Apple iPad Pro and Samsung Galaxy Tab S9 tablets.
Bring ideas to life with the HP Z2 G9 Tower Workstation - Infographic (Principled Technologies)
We compared CPU performance and noise output of an HP Z2 G9 Tower Workstation in High Performance Mode to a similarly configured Dell Precision 3660 Tower Workstation in its out-of-box performance mode
Investing in GenAI: Cost-benefit analysis of Dell on-premises deployments vs.... (Principled Technologies)
Conclusion
Diving into the world of GenAI has the potential to yield a great many benefits for your organization, but it first requires consideration for how best to implement those GenAI workloads. Whether your AI goals are to create a chatbot for online visitors, generate marketing materials, aid troubleshooting, or something else, implementing an AI solution requires careful planning and decision-making. A major decision is whether to host GenAI in the cloud or keep your data on premises. Traditional on-premises solutions can provide superior security and control, a substantial concern when dealing with large amounts of potentially sensitive data. But will supporting a GenAI solution on site be a drain on an organization’s IT budget?
In our research, we found that the value proposition is just the opposite: Hosting GenAI workloads on premises, either in a traditional Dell solution or using a managed Dell APEX pay-per-use solution, could significantly lower your GenAI costs over 3 years compared to hosting these workloads in the cloud. In fact, we found that a comparable AWS SageMaker solution would cost up to 3.8 times as much and an Azure ML solution would cost up to 3.6 times as much as GenAI on a Dell APEX pay-per-use solution. These results show that organizations looking to implement GenAI and reap the business benefits to come can find many advantages in an on-premises Dell solution, whether they opt to purchase and manage it themselves or choose a subscription-based Dell APEX pay-per-use solution. Choosing an on-premises Dell solution could save your organization significantly over hosting GenAI in the cloud, while giving you control over the security and privacy of your data as well as any updates and changes to the environment, and while ensuring your environment is managed consistently.
Workstations powered by Intel can play a vital role in CPU-intensive AI devel... (Principled Technologies)
In three AI development workflows, Intel processor-powered workstations delivered strong performance, without using their GPUs, making them a good choice for this part of the AI process
Conclusion
We executed three AI development workflows on tower workstations and mobile workstations from three vendors, with each workflow utilizing only the Intel CPU cores, and found that these platforms were suitable for carrying out various AI tasks. For two of the workflows, we learned that completing the tasks on the tower workstations took roughly half as much time as on the mobile workstations. This supports the idea that the tower workstations would be appropriate for a development environment for more complex models with a greater volume of data and that the mobile workstations would be well-suited for data scientists fine-tuning simpler models. In the third workflow, we explored tower workstation performance with different precision levels and learned that using 16-bit floating point precision allowed the workstations to execute the workflow in less time and also reduced memory usage dramatically. For all three AI workflows we executed, we consider the time the workstations needed to complete the tasks to be acceptable, and believe that these workstations can be appropriate, cost-effective choices for these kinds of activities.
Enable security features with no impact to OLTP performance with Dell PowerEd... (Principled Technologies)
Get comparable online transaction processing (OLTP) performance with or without enabling AMD Secure Memory Encryption and AMD Secure Encrypted Virtualization - Encrypted State
Conclusion
You’ve likely already implemented many security measures for your servers, which may include physical security for the data center, hardware-level security, and software-level security. With the cost of data breaches high and still growing, however, wise IT teams will consider what additional security measures they may be able to implement.
AMD SME and SEV-ES are technologies that are already available within your AMD processor-powered 16th Generation Dell PowerEdge servers—and in our testing, we saw that they can offer extra layers of security without affecting performance. We compared the online transaction processing performance of a Dell PowerEdge R7625 server, powered by AMD EPYC 9274F processors, with and without these two security features enabled. We found that enabling AMD Secure Memory Encryption and Secure Encrypted Virtualization-Encrypted State did not impact performance at all.
If your team is assessing areas where you might be able to enhance security—without paying a large performance cost—consider enabling AMD SME and AMD SEV-ES in your Dell PowerEdge servers.
Improving energy efficiency in the data center: Endure higher temperatures wi... (Principled Technologies)
In high-temperature test scenarios, a Dell PowerEdge HS5620 server continued running an intensive workload without component warnings or failures, while a Supermicro SYS-621C-TN12R server failed
Conclusion: Remain resilient in high temperatures with the Dell PowerEdge HS5620 to help increase efficiency
Increasing your data center’s temperature can help your organization make strides in energy efficiency and cooling cost savings. With servers that can hold up to these higher everyday temperatures—as well as high temperatures due to unforeseen circumstances—your business can continue to deliver the performance your apps and clients require.
When we ran an intensive floating-point workload on a Dell PowerEdge HS5620 and a Supermicro SYS-621C-TN12R in three scenario types simulating typical operations at 25°C, a fan failure, and an HVAC malfunction, the Dell server experienced no component warnings or failures. In contrast, the Supermicro server experienced warnings in all three scenario types and experienced component failures in the latter two tests, rendering the system unusable. When we inspected and analyzed each system, we found that the Dell PowerEdge HS5620 server's motherboard layout, fans, and chassis offered cooling design advantages.
For businesses aiming to meet sustainability goals by running hotter data centers, as well as those concerned with server cooling design, the Dell PowerEdge HS5620 is a strong contender to take on higher temperatures during day-to-day operations and unexpected malfunctions.
Dell APEX Cloud Platform for Red Hat OpenShift: An easily deployable and powe... (Principled Technologies)
The 4th Generation Intel Xeon Scalable processor-powered solution deployed in less than two hours and ran a Kubernetes container-based generative AI workload effectively
Dell APEX Cloud Platform for Red Hat OpenShift: An easily deployable and powe... (Principled Technologies)
The 4th Generation Intel Xeon Scalable processor-powered solution deployed in less than two hours and ran a generative AI workload effectively
Conclusion
The appeal of incorporating GenAI into your organization’s operations is likely great. Getting started with an efficient solution for your next LLM workload or application can seem daunting because of the changing hardware and software landscape, but Dell APEX Cloud Platform for Red Hat OpenShift powered by 4th Gen Intel Xeon Scalable processors could provide the solution you need. We started with a Dell Validated Design as a reference, and then went on to modify the deployment as necessary for our Llama 2 workload. The Dell APEX Cloud Platform for Red Hat OpenShift solution worked well for our LLM, and by using this deployment guide in conjunction with numerous Dell documents and some flexibility, you could be well on your way to innovating your next GenAI breakthrough.
Upgrade your cloud infrastructure with Dell PowerEdge R760 servers and VMware... (Principled Technologies)
Compared to a cluster of PowerEdge R750 servers running VMware Cloud Foundation (VCF)
For organizations running clusters of moderately configured, older Dell PowerEdge servers with a previous version of VCF, upgrading to better-configured modern servers can provide a significant performance boost and more.
Upgrade your cloud infrastructure with Dell PowerEdge R760 servers and VMware... (Principled Technologies)
Compared to a cluster of PowerEdge R750 servers running VMware Cloud Foundation 4.5
If your company is struggling with underperforming infrastructure, upgrading to 16th Generation Dell PowerEdge servers running VCF 5.1 could be just what you need to handle more database throughput and reduce vSAN latencies. As an additional benefit to IT admins, we also found that the embedded VMware Aria Operation adapter provided useful infrastructure insights.
Realize 2.1X the performance with 20% less power with AMD EPYC processor-back... (Principled Technologies)
Three AMD EPYC processor-based two-processor solutions outshined comparable Intel Xeon Scalable processor-based solutions by handling more Redis workload transactions and requests while consuming less power
Conclusion
Performance and energy efficiency are significant factors in processor selection for servers running data-intensive workloads, such as Redis. We compared the Redis performance and energy consumption of a server cluster in three AMD EPYC two-processor configurations against that of a server cluster in two Intel Xeon Scalable two-processor configurations. In each of our three test scenarios, the server cluster backed by AMD EPYC processors outperformed the server cluster backed by Intel Xeon Scalable processors. In addition, one of the AMD EPYC processor-based clusters consumed 20 percent less power than its Intel Xeon Scalable processor-based counterpart. Combining these measurements gave us power efficiency metrics that demonstrate how valuable AMD EPYC processor-based servers could be—you could see better performance per watt with these AMD EPYC processor-based server clusters and potentially get more from your Redis or other data-intensive applications and workloads while reducing data center power costs.
Boost PC performance: How more available memory can improve productivity (Principled Technologies)
With more memory available, system performance of three Dell devices increased, which can translate to a better user experience
Conclusion
When your system has plenty of RAM to meet your needs, you can efficiently access the applications and data you need to finish projects and to-do lists without sacrificing time and focus. Our test results show that with more memory available, three Dell PCs delivered better performance and took less time to complete the Procyon Office Productivity benchmark. These advantages translate to users being able to complete workflows more quickly and multitask more easily. Whether you need the mobility of the Latitude 5440, the creative capabilities of the Precision 3470, or the high performance of the OptiPlex Tower Plus 7010, configuring your system with more RAM can help keep processes running smoothly, enabling you to do more without compromising performance.
Deploy with confidence: VMware Cloud Foundation 5.1 on next gen Dell PowerEdg... (Principled Technologies)
A Principled Technologies deployment guide
Conclusion
Deploying VMware Cloud Foundation 5.1 on next gen Dell PowerEdge servers brings together critical virtualization capabilities and high-performing hardware infrastructure. Relying on our hands-on experience, this deployment guide offers a comprehensive roadmap that can guide your organization through the seamless integration of advanced VMware cloud solutions with the performance and reliability of Dell PowerEdge servers. In addition to the deployment efficiency, the Cloud Foundation 5.1 and PowerEdge solution delivered strong performance while running a MySQL database workload. By leveraging VMware Cloud Foundation 5.1 and PowerEdge servers, you could help your organization embrace cloud computing with confidence, potentially unlocking a new level of agility, scalability, and efficiency in your data center operations.
Upgrade your cloud infrastructure with Dell PowerEdge R760 servers and VMware... (Principled Technologies)
Compared to a cluster of PowerEdge R750 servers running VMware Cloud Foundation 4.5
Conclusion
If your company is struggling with underperforming infrastructure, upgrading to 16th Generation Dell PowerEdge servers running VCF 5.1 could be just what you need to handle more database throughput and reduce vSAN latencies. We found that a Dell PowerEdge R760 server cluster running VCF 5.1 processed over 78 percent more TPM and 79 percent more NOPM than a Dell PowerEdge R750 server cluster running VCF 4.5. It’s also worth noting that the PowerEdge R750 cluster bottlenecked on vSAN storage, with max write latency at 8.9ms. For reference, the PowerEdge R760 cluster clocked in at 3.8ms max write latency. This higher latency is due in part to the single disk group per host on the moderately configured PowerEdge R750 cluster, while the better-configured PowerEdge R760 cluster supported four disk groups per host. As an additional benefit to IT admins, we also found that the embedded VMware Aria Operation adapter provided useful infrastructure insights.
Based on our research using publicly available materials, it appears that Dell supports nine of the ten PC security features we investigated, HP supports six of them, and Lenovo supports three features.
Increase security, sustainability, and efficiency with robust Dell server man... (Principled Technologies)
Compared to the Supermicro management portfolio
Conclusion
Choosing a vendor for server purchases is about more than just the hardware platform. Decision-makers must also consider more long-term concerns, including system/data security, energy efficiency, and ease of management. These concerns make the systems management tools a vendor offers as important as the hardware.
We investigated the features and capabilities of server management tools from Dell and Supermicro, comparing Dell iDRAC9 against Supermicro IPMI for embedded server management and Dell OpenManage Enterprise and CloudIQ against Supermicro Server Manager for one-to-many device and console management and monitoring. We found that the Dell management tools provided more comprehensive security, sustainability, and management/monitoring features and capabilities than the Supermicro tools did. In addition, Dell tools automated more tasks to ease server management, resulting in significant time savings for administrators versus having to do the same tasks manually with Supermicro tools.
When making a server purchase, a vendor’s associated management products are critical to protect data, support a more sustainable environment, and to ease the maintenance of systems. Our tests and research showed that the Dell management portfolio for PowerEdge servers offered more features to help organizations meet these goals than the comparable Supermicro management products.
Scale up your storage with higher-performing Dell APEX Block Storage for AWS ... (Principled Technologies)
In our tests, Dell APEX Block Storage for AWS outperformed similarly configured solutions from Vendor A, achieving more IOPS, better throughput, and more consistent performance on both NVMe-supported configurations and configurations backed by Elastic Block Store (EBS) alone.
Dell APEX Block Storage for AWS supports a full NVMe-backed configuration, but Vendor A doesn't—its solution uses EBS for storage capacity and NVMe as an extended read cache—which means APEX Block Storage for AWS can deliver faster storage performance.
Scale up your storage with higher-performing Dell APEX Block Storage for AWS (Principled Technologies)
Dell APEX Block Storage for AWS offered stronger and more consistent storage performance for better business agility than a Vendor A solution
Conclusion
Enterprises desiring the flexibility and convenience of the cloud for their block storage workloads can find fast-performing solutions with the enterprise storage features they’re used to in on-premises infrastructure by selecting Dell APEX Block Storage for AWS.
Our hands-on tests showed that compared to the Vendor A solution, Dell APEX Block Storage for AWS offered stronger, more consistent storage performance in both NVMe-supported and EBS-backed configurations. Using NVMe-supported configurations, Dell APEX Block Storage for AWS achieved 4.7x the random read IOPS and 5.1x the throughput on sequential read operations per node vs. Vendor A. In our EBS-backed comparison, Dell APEX Block Storage for AWS offered 2.2x the throughput per node on sequential read operations vs. Vendor A.
Plus, the ability to scale beyond three nodes—up to 512 storage nodes with a capacity of up to 8 PB—enables Dell APEX Block Storage for AWS to help ensure performance and capacity as your team plans for the future.
Get in and stay in the productivity zone with the HP Z2 G9 Tower Workstation (Principled Technologies)
We compared CPU performance and noise output of an HP Z2 G9 Tower Workstation in High Performance Mode to Dell Precision 3660 and 5860 tower workstations in optimized performance modes
Conclusion
HP Z2 G9 Tower Workstation users can change the BIOS settings to dial in the performance mode that best suits their needs: High Performance Mode, Performance Mode, or Quiet Mode. In good news for both creative and technical professionals, we found that an Intel Core i9-13900 processor-powered HP Z2 G9 Tower Workstation set to High Performance mode received higher CPU-based benchmark scores than both a similarly configured Dell Precision 3660 and a Dell Precision 5860 equipped with an Intel Xeon w5-2455x processor. Plus, the HP Z2 G9 Tower Workstation was quieter while running CPU-intensive Cinebench 2024 and SPECapc for Solidworks 2022 workloads than both Dell Precision tower workstations. This means HP Z2 G9 Tower Workstation users who prize performance over everything else can do so without sacrificing a quiet workspace.
Unlock the Future of Search with MongoDB Atlas: Vector Search Unleashed (Malak Abu Hammad)
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
Taking AI to the Next Level in Manufacturing.pdfssuserfac0301
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
5. Ideas and approaches to help build your organization's AI strategy.
Fueling AI with Great Data with Airbyte WebinarZilliz
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
Dive into the realm of operating systems (OS) with Pravash Chandra Das, a seasoned Digital Forensic Analyst, as your guide. 🚀 This comprehensive presentation illuminates the core concepts, types, and evolution of OS, essential for understanding modern computing landscapes.
Beginning with the foundational definition, Das clarifies the pivotal role of OS as system software orchestrating hardware resources, software applications, and user interactions. Through succinct descriptions, he delineates the diverse types of OS, from single-user, single-task environments like early MS-DOS iterations, to multi-user, multi-tasking systems exemplified by modern Linux distributions.
Crucial components like the kernel and shell are dissected, highlighting their indispensable functions in resource management and user interface interaction. Das elucidates how the kernel acts as the central nervous system, orchestrating process scheduling, memory allocation, and device management. Meanwhile, the shell serves as the gateway for user commands, bridging the gap between human input and machine execution. 💻
The narrative then shifts to a captivating exploration of prominent desktop OSs, Windows, macOS, and Linux. Windows, with its globally ubiquitous presence and user-friendly interface, emerges as a cornerstone in personal computing history. macOS, lauded for its sleek design and seamless integration with Apple's ecosystem, stands as a beacon of stability and creativity. Linux, an open-source marvel, offers unparalleled flexibility and security, revolutionizing the computing landscape. 🖥️
Moving to the realm of mobile devices, Das unravels the dominance of Android and iOS. Android's open-source ethos fosters a vibrant ecosystem of customization and innovation, while iOS boasts a seamless user experience and robust security infrastructure. Meanwhile, discontinued platforms like Symbian and Palm OS evoke nostalgia for their pioneering roles in the smartphone revolution.
The journey concludes with a reflection on the ever-evolving landscape of OS, underscored by the emergence of real-time operating systems (RTOS) and the persistent quest for innovation and efficiency. As technology continues to shape our world, understanding the foundations and evolution of operating systems remains paramount. Join Pravash Chandra Das on this illuminating journey through the heart of computing. 🌟
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
Ocean lotus Threat actors project by John Sitima 2024 (1).pptxSitimaJohn
Ocean Lotus cyber threat actors represent a sophisticated, persistent, and politically motivated group that poses a significant risk to organizations and individuals in the Southeast Asian region. Their continuous evolution and adaptability underscore the need for robust cybersecurity measures and international cooperation to identify and mitigate the threats posed by such advanced persistent threat groups.
A Comprehensive Guide to DeFi Development Services in 2024Intelisync
DeFi represents a paradigm shift in the financial industry. Instead of relying on traditional, centralized institutions like banks, DeFi leverages blockchain technology to create a decentralized network of financial services. This means that financial transactions can occur directly between parties, without intermediaries, using smart contracts on platforms like Ethereum.
In 2024, we are witnessing an explosion of new DeFi projects and protocols, each pushing the boundaries of what’s possible in finance.
In summary, DeFi in 2024 is not just a trend; it’s a revolution that democratizes finance, enhances security and transparency, and fosters continuous innovation. As we proceed through this presentation, we'll explore the various components and services of DeFi in detail, shedding light on how they are transforming the financial landscape.
At Intelisync, we specialize in providing comprehensive DeFi development services tailored to meet the unique needs of our clients. From smart contract development to dApp creation and security audits, we ensure that your DeFi project is built with innovation, security, and scalability in mind. Trust Intelisync to guide you through the intricate landscape of decentralized finance and unlock the full potential of blockchain technology.
Ready to take your DeFi project to the next level? Partner with Intelisync for expert DeFi development services today!
Nunit vs XUnit vs MSTest Differences Between These Unit Testing Frameworks.pdfflufftailshop
When it comes to unit testing in the .NET ecosystem, developers have a wide range of options available. Among the most popular choices are NUnit, XUnit, and MSTest. These unit testing frameworks provide essential tools and features to help ensure the quality and reliability of code. However, understanding the differences between these frameworks is crucial for selecting the most suitable one for your projects.
This presentation provides valuable insights into effective cost-saving techniques on AWS. Learn how to optimize your AWS resources by rightsizing, increasing elasticity, picking the right storage class, and choosing the best pricing model. Additionally, discover essential governance mechanisms to ensure continuous cost efficiency. Whether you are new to AWS or an experienced user, this presentation provides clear and practical tips to help you reduce your cloud costs and get the most out of your budget.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Pod density comparison: VMware vSphere with Tanzu vs. a bare-metal approach using Red Hat OpenShift 4.7
The solutions we tested
We deployed two on-premises cloud platforms with Kubernetes integrations: VMware® vSphere® 7 Update 2 (7U2) with VMware Tanzu™ and bare-metal Red Hat® OpenShift® 4.7. Our goal was to quantify the maximum number of pods that each solution supported. Due to limitations of the kubelet, Kubernetes API, and other platform components, each Kubernetes node can support only a maximum number of pods regardless of the underlying physical resources. In a bare-metal environment, each node runs on a dedicated physical server. Virtualized environments provide a workaround to those limitations, however, by supporting more nodes and pods on the same physical hardware. Virtual environments can thus offer a pod density advantage for Kubernetes compared to bare-metal solutions.
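You can read the per-node pod cap a cluster advertises directly from its node objects; this generic check is not part of the test procedure, but it shows where the limit surfaces:

kubectl get nodes -o custom-columns=NAME:.metadata.name,PODS:.status.capacity.pods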
We configured the worker hosts on each cloud platform on five HPE ProLiant DL380 Gen10 servers with identical
processors, memory, and storage configurations. We kept the number of hosts performing workloads in each
deployment consistent across both platforms to represent a company deploying either cloud platform onto a
defined number of servers. Each solution had different requirements for management, storage, and worker hosts.
For the vSphere 7U2 with Tanzu deployment, we configured all five hosts as a complete vSphere cluster with
Tanzu Kubernetes capabilities. For the bare-metal Red Hat OpenShift deployment, we configured three hosts
(i.e., servers) as management hosts and five hosts as worker hosts, for a total of eight hosts.
Our test approach
Given the identical hardware in our two five-host clusters, one might expect that they would support similar
numbers of pods. To explore this, we conducted pod density testing using a minimal web-service workload that
supported a basic website. Each pod in the workload ran a replica of the website, and a standard Kubernetes
load balancer split the traffic evenly across all pods. We started with a small number of pods and gradually
increased the number until we reached the point where increasing pod count further would cause performance
deterioration or cluster instability.
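A ramp-up of this shape can be scripted. The sketch below is our own illustration of the batching approach, assuming the hello-kubernetes deployment name from Appendix B and the 100-pod batch size described there:

#!/bin/sh
# Scale the workload up in batches, waiting for every replica in a batch
# to become ready before adding the next batch.
TARGET=2100
STEP=100
current=0
while [ "$current" -lt "$TARGET" ]; do
  current=$((current + STEP))
  kubectl scale deployment hello-kubernetes --replicas="$current"
  kubectl rollout status deployment hello-kubernetes --timeout=30m
done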
What we found
The vSphere with Tanzu environment supported 13,700 pods, 6.3 times the 2,150 pods that the bare-metal Red
Hat OpenShift 4.7 cluster supported.
Red Hat OpenShift solution
According to Red Hat OpenShift documentation, the platform supports up to 250 pods per node (a physical host in our case). However, Red Hat provides additional documentation on scaling to 500 pods per node. We set out to test this higher limit and see whether our five-host bare-metal cluster could support 2,500 pods. To do so, we modified the following default parameters to allow the cluster to achieve the theoretical maximum pod count:
• kubeletConfig
  - maxPods: 520 # default is 250
  - kubeAPIBurst: 200 # default is 100
  - kubeAPIQPS: 100 # default is 50
We also modified the install-config.yaml configuration by changing the hostPrefix networking setting to 22 (from a default of 23).
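For illustration, Red Hat's scaling guidance applies kubelet overrides like these through a KubeletConfig custom resource. The sketch below is our own minimal example, not a file from the report; the custom-kubelet label is an assumption and must match a label you add to your worker machine config pool. (The hostPrefix value, by contrast, is set under the networking section of install-config.yaml before installation.)

apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-max-pods
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: set-max-pods   # assumed label; match your worker machine config pool
  kubeletConfig:
    maxPods: 520       # default is 250
    kubeAPIBurst: 200  # default is 100
    kubeAPIQPS: 100    # default is 50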
For the Red Hat OpenShift solution, we stopped our pod count increase when we began observing instability
in the physical hosts operating as worker nodes. We found that when we exceeded 2,150 of our workload pods
(430 workload pods per host), individual worker node hosts would unpredictably go offline, and pod counts
would drop accordingly. In Figure 1, the status for the second-to-last node is “Not Ready,” illustrating the
behavior we observed.
Figure 1: Red Hat OpenShift solution node dashboard. Source: Principled Technologies.
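Generic commands such as the following (not part of the report's procedure) surface the same state from the CLI; NotReady nodes appear in the STATUS column, and the Conditions section of the describe output explains why:

oc get nodes
oc describe node <node-name>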
Figure 2 shows 2,150 stable pods.
Figure 2: Red Hat OpenShift environment showing 2,150 stable pods. Source: Principled Technologies.
We did not observe any specific software or hardware bottleneck or error message to identify and resolve the
source of the host instability we experienced. All our further attempts to deploy a pod count beyond 2,150
ultimately resulted in unpredictable host instability for one or more hosts operating as worker hosts. We found
that the most reliable way to bring the offline worker hosts back was to delete all pods, reboot the offline host,
and redeploy the pods at a maximum of 2,150 for stable cluster performance.
As Figure 3 shows, the cluster utilization metrics did not reveal bottlenecks in any areas. (Note: The dramatic dip
in the Pod count chart reflects a previous test that failed.)
Figure 3: Red Hat OpenShift environment utilization screen. Source: Principled Technologies.
VMware vSphere with Tanzu solution
In contrast to the Red Hat OpenShift solution, we observed no node instability with VMware vSphere with Tanzu.
On this platform, we were able to steadily increase the worker node VMs and the number of pods, which allowed
us to scale the number of pods to 13,700, the point at which we reached 95 percent memory utilization. Figure 4 shows the 13,700 stable pods, and Figure 5 shows the memory utilization. We determined this to be the maximum because increasing the pod count further would have overcommitted memory resources and caused memory paging, degrading performance across the cluster.
Figure 4: VMware vSphere with a Tanzu cluster showing 13,700 stable pods. Source: Principled Technologies.
Figure 5: VMware vSphere with Tanzu cluster memory utilization screen. Source: Principled Technologies.
Figure 6 illustrates the difference we observed. The 13,700 pods that the vSphere with Tanzu cluster supported
was 6.3 times the 2,150 pods the bare-metal Red Hat OpenShift cluster could handle in our testing. The vSphere
with Tanzu cluster’s maximum pod count of 13,700 is 5.4 times the theoretical maximum 2,500 pods that Red
Hat OpenShift should support per Red Hat documentation. We ran more pods in our virtualized vSphere
environment because it supported more worker nodes than the Red Hat bare-metal solution.
Figure 6: The number of pods each cluster supported in our testing. Source: Principled Technologies.
Clearly, in situations where it is necessary to run a large number of simple web app workloads, the vSphere with
Tanzu cluster made much more efficient use of CPU and memory resources across the worker nodes, giving it a
substantial advantage over the bare-metal Red Hat OpenShift cluster. In addition, the vSphere with Tanzu cluster
did not require additional management hosts, while the Red Hat OpenShift cluster required three additional
control plane hosts.
The greater efficiency of the vSphere with Tanzu environment on this workload could translate to significant
CapEx and OpEx savings because you would theoretically need an environment five to six times the size to run
the same number of pods in a bare-metal Red Hat environment. The greater pod density that the vSphere with
Tanzu environment supports could allow platform administration teams to work with their infrastructure peers to
reduce hardware spend.
(Chart: Number of pods each five-host cluster supported; higher is better. VMware vSphere with Tanzu cluster: 13,700. Bare-metal Red Hat OpenShift 4.7 cluster: 2,150, against a theoretical total of 2,500.)
Appendix A – System configuration information
Table 1: System configuration information for the test clusters. Note that each cluster comprised five servers.

System configuration information | vSphere with Tanzu cluster systems | Bare-metal Red Hat OpenShift cluster systems
Server | HPE ProLiant DL380 Gen10 | HPE ProLiant DL380 Gen10
BIOS name and version | U30 v2.34 | U30 v2.34
Operating system name and version/build number | VMware ESXi™ 7.0.2 | Red Hat Enterprise Linux® CoreOS 4.18 (47.83.202106252242-0)
Date of last OS updates/patches applied | 4/1/2021 | 4/1/2021
Power management policy | Dynamic Power Savings Mode | Dynamic Power Savings Mode
Processor
Number of processors | 2 | 2
Vendor and model | Intel® Xeon® Gold 5120 CPU | Intel Xeon Gold 5120 CPU
Core count (per processor) | 14 | 14
Core frequency (GHz) | 2.2 | 2.2
Stepping | - | M0
Memory module(s)
Total memory in system (GB) | 256 | 256
Number of memory modules | 4 | 4
Vendor and model | Hynix® 809085-091 | Hynix 809085-091
Size (GB) | 64 | 64
Type | PC4-19200T | PC4-19200T
Speed (MHz) | 2,400 | 2,400
Speed running in the server (MHz) | 2,400 | 2,400
Storage controller 1
Vendor and model | HPE Smart Array P408i-a SR Gen10 | HPE Smart Array P408i-a SR Gen10
Firmware version | 1.04 | 1.04
Local storage (capacity)
Number of drives | 2 | 2
Drive vendor and model | Intel S4500 SSDSC2KG96 | Intel S4500 SSDSC2KG96
Drive size (GB) | 960 | 960
Drive information (speed, interface, type) | 6Gbps, SATA, SSD | 6Gbps, SATA, SSD
Local storage (cache)
Number of drives | 2 | 2
Drive vendor and model | Intel S3700 SSDSC2BA40 | Intel S3700 SSDSC2BA40
Drive size (GB) | 400 | 400
Drive information (speed, interface, type) | 6Gbps, SATA, SSD | 6Gbps, SATA, SSD
Network adapter
Vendor and model | HPE 331i | HPE 331i
Number and type of ports | 4x 1Gb Ethernet | 4x 1Gb Ethernet
Firmware version | 20.6.41 | 20.6.41
Network adapter 2
Vendor and model | Mellanox® ConnectX-3 Ethernet Adapter | Mellanox ConnectX-3 Ethernet Adapter
Number and type of ports | 2x 10GbE SFP | 2x 10GbE SFP
Firmware version | 02.40.70.04 | 02.40.70.04
Cooling fans
Vendor and model | HPE PFR0612XHE | HPE PFR0612XHE
Number of cooling fans | 6 | 6
Power supplies
Vendor and model | HPE 865408-B21 | HPE 865408-B21
Number of power supplies | 2 | 2
Wattage of each (W) | 500 | 500
Appendix B – How we tested
Deploying VMware vSphere 7U2 with Tanzu
Installing ESXi
1. Boot into the ESXi install ISO.
2. In the installation welcome screen, press Enter to continue.
3. In the EULA, press F11 to accept and continue.
4. Select the correct drive, and press Enter to continue.
5. Select the keyboard layout, and press Enter to continue.
6. Enter a root password, and press Enter to continue.
7. In the confirm install screen, press F11 to install.
8. Repeat steps 1 through 7 for the remaining four hosts.
Configuring ESXi
1. Log into the ESXi server console.
2. Select Configure Management Network.
3. Select IPv4 Configuration.
4. In IPv4 Configuration, select a static IPv4 address, enter the appropriate IP address, and press Enter.
5. Select DNS Configuration.
6. In DNS Configuration, enter your Primary DNS server and fully qualified domain name, and press Enter.
7. To exit the management network configuration, press Esc.
8. When prompted, press Y to apply changes and restart the management network.
9. Select Troubleshooting Options.
10. In Troubleshooting Mode Options, enable ESXi Shell and SSH, and press Esc to confirm.
11. Repeat steps 1 through 10 for the remaining four hosts.
Enabling NTP on ESXi
1. Log into the ESXi server GUI.
2. Click Manage.
3. Click Time & date.
4. Click Edit NTP Settings.
5. In Edit time configuration, select Use Network Time Protocol (enable NTP client), select Start and stop with host, add your NTP server,
and click Save.
6. Repeat steps 1 through 5 for the remaining four hosts.
Installing the VMware vCenter Server Appliance
1. Download the VMware vCenter® installation ISO from the VMware support portal at https://my.vmware.com.
2. Mount the image on your local system, and browse to the vcsa-ui-installer folder. Expand the folder for your OS, and launch the installer
if it doesn’t automatically begin.
3. Once the vCenter Server Installer wizard opens, click install.
4. To begin installation of the new vCenter server appliance, click Next.
5. Check the box to accept the license agreement, and click Next.
6. Enter the IP address of a temporary ESXi host. Provide the root password, and click Next.
7. To accept the SHA1 thumbprint of the server’s certificate, click Yes.
8. Accept the VM name, and provide and confirm the root password for the VCSA. Click Next.
9. Set the size for the environment you're planning to deploy, check Thin Disk, and click Next.
10. Select the datastore to install on, accept the datastore defaults, and click Next.
11. Enter the FQDN, IP address information, and DNS servers you want to use for the vCenter server appliance. Click Next.
12. To begin deployment, click Finish.
13. When Stage 1 has completed, click Close. To confirm, click Yes.
14. Open a browser window, and connect to https://[vcenter.fqdn]:5480/.
15. On the Getting Started - vCenter Server page, click Set up.
16. Enter the root password, and click Log in.
17. Click Next.
18. Enable SSH access, and click Next.
19. To confirm the changes, click OK.
20. Enter vsphere.local for the Single Sign-On domain name, enter a password for the administrator account, confirm it, and click Next.
21. Click Next.
22. Click Finish.
Creating a cluster in VMware vSphere
1. Open a browser and enter the address of the vCenter server you deployed (https://[vcenter.FQDN]/ui).
2. Select the vCenter server in the left panel, right-click, and select New Datacenter.
3. Provide a name for the new data center, and click OK.
4. Select the data center you just created, right-click, and select New Cluster.
5. Give a name to the cluster, and enable VMware vSphere® DRS, HA, and VMware vSAN™. Click OK.
6. In the cluster configuration panel, under Add hosts, click Add.
7. Check the box for Use the same credentials for all hosts. Enter the IP Address and root credentials for the first host, and the IP addresses
of all remaining hosts. Click Next.
8. Check the box beside Hostname/IP Address to select all hosts, and click OK.
9. Click Next.
10. Click Finish.
Configuring Tanzu
The steps below contain the methodology for setting up the prerequisites and deploying Tanzu in vSphere. These instructions assume
you have at least a five-host vSphere cluster running ESXi and vCenter, backed by vSAN storage. These instructions also assume you have
configured upstream networking switches to accommodate the VLANs and routing for your environment.
Tanzu prerequisites: Defining networks and IP addressing
In this section, we define the networks and addresses we used in this deployment.
Management network
This network can interact with all vSphere components and ESXi hosts. We used our VM network on a standard (not Distributed) vSwitch.
We used a 10.209.0.0/16 network scope with a gateway of 10.209.0.1. We did not assign VLAN tagging for this network. (All ports were
“untagged” or access.)
vMotion network
We defined the VMware vSphere vMotion® Network port group on a distributed vSwitch, which Tanzu Kubernetes deployment requires. We used a 192.168.12.0/24 network scope for the vMotion network. We assigned this network as VLAN 1612.
Storage network
We defined the Storage Network port group on a Distributed vSwitch, which Tanzu Kubernetes deployment requires. We used a
192.168.13.0/24 network scope for the storage network. We assigned this network as VLAN 1613.
Workload network
We isolated this network behind a NAT gateway and used private addresses for all connectivity. We defined the Workload Network port
group on a Distributed vSwitch, which Tanzu Kubernetes deployment requires. We used a 192.168.0.0/24 network scope for the workload
network with a NAT gateway of 192.168.1.1. We assigned the network as VLAN 1614.
Required network addresses
We used the following network addresses for our configuration. Your environment may use a different address scheme as long as it meets the
requirements listed under each address section.
• HAProxy management address: 10.209.201.200/16, gateway 10.209.0.1
• HAProxy workload address: 192.168.1.2/24, gateway 192.168.1.1
• Cluster management addresses: 10.209.201.201 - 10.209.201.205
  - The Supervisor Control Plane VMs use these addresses to manage the namespace.
  - These must be five contiguous IP addresses.
    - You provide only the starting IP of 10.209.201.201, which the shared cluster IP uses.
    - The next three addresses will be individual VM IP addresses.
    - One will be for upgrade purposes.
• Worker VM addresses: 192.168.1.64/26 (192.168.1.65 - 192.168.1.126)
  - The workload network uses these addresses to provision deployed services, and the Supervisor Control Plane VMs use them to provide connectivity to those services.
  - In this case, the CIDR notation defines a scope of addresses. When inputting the address range, exclude the first (.64) and last (.127) addresses.
• Load Balancer addresses: 192.168.1.240/29 (192.168.1.240 - 192.168.1.247)
  - These must not overlap the HAProxy management address or any other network or VM (especially gateways), because HAProxy binds to and responds on all of these addresses.
  - In this case, the CIDR notation includes the first and last addresses. You will need to provide all eight addresses to the wizard.
Configuring a cluster through Quickstart
1. Select Menu > Hosts and Clusters.
2. Click the TKG Cluster.
3. In the right pane, the Quickstart view should automatically appear. Click Configure.
4. In Distributed switches, accept the defaults, add your host adapters, and click Next.
5. In vMotion traffic, complete the fields as follows, and click Next:
• Use VLAN: 1612
• Protocol: IPv4
• IP Type: Static
• Host #1 IP: 192.168.12.11
• Host #1 subnet: 255.255.255.0
• Check the autofill checkbox.
6. In Storage traffic, complete the fields as follows:
• Use VLAN: 1613
• Protocol: IPv4
• IP Type: Static
• Host #1 IP: 192.168.13.11
• Host #1 subnet: 255.255.255.0
• Check the autofill checkbox
7. In Advanced options, leave defaults, and click Next.
8. In Claim disks, ensure the correct disks are selected, and click Next.
9. In Create fault domains, accept the defaults, and click Next.
10. In Ready to complete, click Finish to create the cluster.
Adding the Tanzu Kubernetes Grid Workload Network port group to the distributed switch
1. Right-click the DvSwitch you created during the cluster Quickstart wizard, and select Distributed Port Group > New Distributed Port Group.
2. Give the name Workload Network, and click Next.
3. Change the VLAN type to VLAN, set the VLAN ID to VLAN 1614, and click Next.
4. Click Finish.
Creating a DevOps user
Create a dedicated vSphere user account to access Kubernetes namespaces.
1. From the vSphere client, click Home > Administration.
2. In the left panel, click Users and Groups.
3. In the right panel, click Users, select the vsphere.local domain, and click Add.
4. Provide a username and password, and click Add.
5. For simplicity, we added the DevOps user to a group with Administrator privileges. Click Groups, and select the Administrators
group. Click Edit.
6. Under Add Members, search for the DevOps user you just created, and add them to the administrators group. Click Save.
Creating the HAProxy content library
Create a content library for housing the HAProxy appliance source image.
1. Click the following link to download the vSphere-compatible HAProxy OVF (v0.1.8): https://github.com/haproxytech/vmware-haproxy.
2. From the vSphere client, in the left menu pane, click Content Libraries.
3. In the Content Libraries panel on the right, click Create.
4. Name the content library HAproxy-cl and click Next.
5. Accept the default, and click Next.
6. Choose the storage location for the content library, and click Next.
7. Review, and click Finish.
8. Click the newly created HAproxy-cl content library.
9. In the upper portion of the right-side panel for HAproxy-cl, click the actions pull-down menu, and select Import Item.
10. Change the selection to local file, and click the upload files button.
11. Browse to the location of the OVF file you downloaded in step 1, and click Open.
12. Click Import.
Creating the TKG content library
Create a Tanzu Kubernetes Grid (TKG) content library for Kubernetes-specific images.
1. From the vSphere client, in the left menu pane, click Content Libraries.
2. In the Content Libraries panel on the right, click Create.
3. Name the content library TKG-cl, and click Next.
4. Select Subscribed content library, and use https://wp-content.vmware.com/v2/latest/lib.json for the subscription URL. Click Next.
5. To verify, click Yes.
6. Choose the storage location for the content library, and click Next.
7. Review, and click Finish.
Installing a pfSense gateway
1. Download the most recent release of pfSense from: https://www.pfsense.org/download/.
2. Extract the pfSense ISO.
3. Select Menu > Hosts and Clusters.
4. Right-click your TKG cluster, and select New Virtual Machine.
5. In Select a creation type, select Create a new virtual machine, and click Next.
6. In Select a name and folder, name your pfSense VM, and click Next.
7. In Select a compute resource, select your TKG cluster, and click Next.
8. In Select storage, select your vSAN datastore, and click Next.
9. In Select compatibility, accept defaults, and click Next.
10. In Select a guest OS, change the OS family to Linux and the OS version to Debian GNU/Linux 10 (64-bit), and click Next.
11. In Customize hardware, change the following:
• CPU: 2
• Memory: 8 GB
• Hard Disk: 32 GB
• SCSI controller: LSI Logic Parallel
• Network 1: Management Network
• Network 2: Workload Network
12. Click Next.
13. Click Complete.
14. Right-click your newly created VM, and select Power On.
15. Right-click the now-on VM, and select Open Remote Console.
16. Select Removable Devices > CD/DVD drive 1 > Connect to Disk Image File (iso).
17. Select the pfSense ISO you downloaded.
18. After the VM boots off of the pfSense ISO, accept the license agreement.
19. Select Install.
20. Select Continue with the default keymap.
21. Select Auto (UFS) BIOS.
22. Select No.
23. Select Reboot.
24. When prompted to set up VLANs, select No.
25. For the WAN interface, select vmx0.
26. For the LAN interface, select vmx1.
27. To proceed, press Y.
28. In a web browser, navigate to the vmx1 IP Address of the pfSense VM (it should be 192.168.1.1).
29. Select to ignore the security warning.
30. Sign into the console (default credentials: admin / pfsense).
31. In the welcome screen, click Next.
32. Click Next.
33. Enter your DNS server, and click Next.
34. Enter your time zone, and click Next.
35. In Configure WAN interface, leave defaults, and click Next.
36. In Configure LAN interface, leave defaults, and click Next.
37. In Set Admin WebGUI Password, enter the password you want for the admin user, and click Next.
38. To apply the pfSense changes, click Reload.
39. To finish the pfSense installation, click Finish.
Creating storage tag
Create a storage tag for use in storage policies to control virtual machine placement.
1. From the vSphere client, select Menu > Storage.
2. From the left pane, select the storage you want to tag for Tanzu.
3. Under the Summary tab, locate the Tags panel, and click Assign.
4. Click Add Tag.
5. Name the tag Tanzu. Click Create New Category.
6. Give the category name Tanzu Storage. Clear all object types except Datastore, and click Create.
7. Use the Category pull-down menu to select Tanzu Storage, and click Create.
8. Check the box beside the newly created tag, and click Assign.
Creating VM storage policies
1. From the vSphere client, click Menu > Policies and Profiles.
2. On the left panel, click VM Storage Policies.
3. Click Create.
4. Create a new VM Storage policy named tkg-cl, and click Next.
5. Check the box for Enable tag based placement rules, and click Next.
6. Use the Tag Category pulldown menu to select the Tanzu Storage policy you created. Click Browse Tags.
7. Click the Tanzu checkbox, and click OK.
8. Click Next.
9. Review the compatible storage to make sure your storage target is marked as compatible, and click Next.
10. Click Finish.
Deploying the HAProxy load balancer
Deploy the load balancer that Tanzu Kubernetes with vSphere supports.
1. From the vSphere client, click Menu > Content Libraries.
2. Click the HAproxy-cl library.
3. In the left panel, click OVF & OVA Templates, right-click the HAproxy template that appears in the panel below, and select New VM from This Template…
4. Provide a simple name (we used HAproxy), and select the Datacenter and/or folder you want to deploy to. Click Next.
5. Select the cluster or compute resource where you want to deploy the HAProxy VM, and click Next.
6. Review details, and click Next.
7. Check the box to accept all license agreements, and click Next.
8. Accept the default configuration, and click Next.
9. Select the target storage for the VM, and click Next.
10. Select VM Network for the Management network, and choose a network for the workload network. Choose the same network for the
Frontend network, and click Next.
11. Customize the template using the following:
• Appliance configuration section
y For the root password, we used Password1!
y Check the box for Permit Root Login.
y Leave the TLS CA blank.
• Network configuration section
y We left the default, haproxy.local.
y Configure a local DNS server.
y For management IP, we used 10.209.201.200/16.
y For management gateway, we used 10.209.0.1.
y NOTE: The description asks for the workload network gateway address. You should enter the management gateway
address instead.
y For Workload IP, we used 192.168.1.2/24.
y For Workload gateway, we used 192.168.1.1.
• Load Balancing section
y For load balancer IP ranges, we used 192.168.1.240/29.
y Accept the default management port.
y For HAProxy User ID, we used admin.
y For the HAProxy password, we used Password1!
12. Click Next.
13. Review the summary, and click Finish.
The deployment will take a few minutes to completely deploy and configure.
14. Power on the HAProxy VM.
Adding licenses
1. Select Menu > Administration.
2. In the left pane, select Licenses.
3. In Licenses, click Add New Licenses.
4. In Enter license keys, enter the license keys for your deployment, and click Next.
5. In Edit license names, rename the licenses if desired, and click Next.
6. In Ready to complete, click Finish.
Configuring Workload Management
Configure the vSphere environment for Kubernetes.
1. From the vSphere client, click Menu > Workload Management.
2. Click Get Started.
3. Review the messages and warnings regarding supported configurations, and click Next.
4. Select the Cluster on which you want to enable workload management, and click Next.
5. Choose the capacity for the control plane VMs (we chose Small). Click Next.
6. Choose the storage policy you wish to use for the control plane nodes (we chose tkg-clusters). Click Next.
7. Configure the Load Balancer section with the following:
• Name: haproxy
• Type: HA proxy
• Data plane API Addresses: 10.209.201.200:5556
• User name: admin
• Password: Password1!
• IP Address Ranges for Virtual Servers: 192.168.1.240-192.168.1.247
• Server Certificate Authority: [copied and pasted per the instructions in step 10 below]
8. Open an SSH session to the HAProxy management address and connect using root and Password1!
9. Type the following: cat /etc/haproxy/ca.crt
10. Copy the entire output (including the first and last lines), and paste the contents into the Server Certificate Authority box in the last line under step 7 above.
11. Close the SSH session.
12. Click Next.
13. Configure Workload Management with the following:
• Network: VM Network
• Starting IP Address: 10.209.201.201
• Subnet Mask: 255.255.0.0
• Gateway: 10.209.0.1
• DNS Server: 10.41.0.10
• NTP Server: 10.40.0.1
14. Click Next.
15. Configure Workload Network with the following:
• Leave the default for Services addresses.
• DNS Servers: 10.41.0.10
• Under Workload Network, click Add.
• Accept default for network-1.
• Port Group: Workload Network
• Gateway: 192.168.1.1
• Subnet: 255.255.255.0
  - IP Address Ranges: 192.168.1.65-192.168.1.126
16. Click Save.
17. For TKG Configuration, beside Add Content Library, click Add.
18. Select the TKG-cl library, and click OK.
19. Click Next.
20. Click Finish.
The workload management cluster will deploy and configure. You may see apparent errors during configuration, but vCenter will resolve
them upon successful completion.
Configuring namespaces
Configure the Kubernetes namespace for services deployment.
1. In Workload Management, click Namespaces.
2. Click Create Namespace.
3. Select the target cluster, and provide a name. We used tanzu-ns. Click Create.
4. The new namespace is created. Click the Permissions tab, and click Add.
5. Choose vSphere.local for the identity source. Search for the DevOps user you created. Select the “can edit” role. Click OK.
6. Click the Storage tab.
7. In the Storage Policies section, click Edit.
8. Select the tkg-clusters policy, and click OK. The environment is ready for connection and deployment of containers.
Deploying Red Hat OpenShift 4.7
Configuring DNS
1. Log into your domain controller.
2. Press the Start key, type DNS and press Return.
3. In DNS manager, navigate to your domain’s forward lookup zone.
4. Right-click your forward lookup zone, and select New Host (A or AAAA).
5. In the New Host screen, make sure that Create associated pointer (PTR) record is selected, then input the host name and IP address,
and click Add Host.
6. Click OK to acknowledge the creation of the host.
7. Repeat steps 5 and 6 for the following hosts:
api.openshift 10.209.60.201
api-int.openshift 10.209.60.201
*.apps.openshift 10.209.60.202
bootstrap.openshift 10.209.60.10
master-0.openshift 10.209.60.11
master-1.openshift 10.209.60.12
master-2.openshift 10.209.60.13
worker-0.openshift 10.209.60.21
worker-1.openshift 10.209.60.22
worker-2.openshift 10.209.60.23
worker-3.openshift 10.209.60.24
worker-4.openshift 10.209.60.25
Installing the Red Hat OpenShift provisioner
We installed our OpenShift installer on a physical host with a network connection to the infrastructure network.
Installing RHEL
1. Connect a RHEL 8.4 installation disk image to the host.
2. Boot the host off the RHEL installation disk.
3. When choosing what to boot, select Install Red Hat Enterprise Linux 8.4.0
4. Once the installer loads, click Next to move past the start screen.
5. In the Installation Summary screen, click Installation Destination.
6. In the Installation Destination screen, accept the default, and click Done.
7. In the Installation Summary screen, click KDUMP.
8. In the KDUMP screen, uncheck Enable kdump, and click Done.
9. In the Installation Summary screen, click Software Selection.
10. Select Minimal Install, and click Done.
11. In the Installation Summary screen, click Network & Host Name.
12. In Network & Host Name, choose your hostname, turn your Ethernet connection on, and click Configure.
13. In Editing [your Ethernet connection], check Connect automatically with priority: 0, and click Save.
14. Click Done.
15. In the Installation Summary screen, click Begin Installation.
16. In the Configuration screen, click Root Password.
17. In Root Password, type your root password, and click Done.
18. When the installation is complete, click Reboot to reboot into the OS.
Creating an SSH private key
1. Log into the OpenShift installer host.
2. Create a non-root user account with root access for OpenShift:
useradd kni
passwd kni
echo "kni ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/kni
chmod 0440 /etc/sudoers.d/kni
3. To create an SSH key for your new user that you will use for communication between your cluster hosts, type the following command:
su - kni -c "ssh-keygen -t ed25519 -f /home/kni/.ssh/id_rsa -N ''"
When configuring your OpenShift install-config.yaml file, you will need to use the information in /home/kni/.ssh/id_rsa.pub to fill in the sshKey value.
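For example, a convenience command (not from the original procedure) to print the public key when you reach that step:

su - kni -c "cat /home/kni/.ssh/id_rsa.pub"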
Installing the OpenShift installer
1. Log into the kni user on the OpenShift provisioner.
2. Disable the firewall service:
sudo systemctl stop firewalld
sudo systemctl disable firewalld
3. Register your RHEL installation to Red Hat to allow your system to install updates.
4. Install the prerequisites for the OpenShift installation:
sudo dnf install -y libvirt qemu-kvm mkisofs python3-devel jq ipmitool
5. Modify and start the libvirtd service:
sudo usermod --append --groups libvirt kni
sudo systemctl enable libvirtd --now
6. Create the default storage pool:
sudo virsh pool-define-as --name default --type dir --target /var/lib/libvirt/images
sudo virsh pool-start default
sudo virsh pool-autostart default
7. Copy the Pull Secret file to the OpenShift install machine.
8. Set the environment variables that will fetch the OpenShift install files:
export VERSION=latest-4.7
export RELEASE_IMAGE=$(curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/$VERSION/release.txt | grep 'Pull From: quay.io' | awk -F ' ' '{print $3}')
export cmd=openshift-baremetal-install
export pullsecret_file=~/pull-secret.txt
export extract_dir=$(pwd)
9. Download and unpack the OpenShift file:
curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/$VERSION/openshift-client-linux.tar.gz | tar zxvf - oc
sudo cp oc /usr/local/bin
10. Using the OpenShift executable, download and extract the OpenShift installer, then move it to the bin directory:
oc adm release extract --registry-config "${pullsecret_file}" --command=$cmd --to "${extract_dir}" ${RELEASE_IMAGE}
sudo cp openshift-baremetal-install /usr/local/bin
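As a quick sanity check that the client and installer extracted correctly (output varies by release):

oc version --client
openshift-baremetal-install version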
Running an OpenShift install using installer-provided infrastructure
1. Create an install-config.yaml for your testbed. We have provided an example below:
apiVersion: v1
baseDomain: [the domain you're using]
metadata:
  name: openshift
networking:
  machineCIDR: [a routable CIDR pool that can reach the Internet]
  networkType: OVNKubernetes
compute:
- name: worker
  replicas: 5
controlPlane:
  name: master
  replicas: 3
  platform:
    baremetal: {}
platform:
  baremetal:
    apiVIP: 10.209.60.201
    ingressVIP: 10.209.60.202
    provisioningNetwork: "Disabled"
    bootstrapProvisioningIP: 10.209.60.10
    hosts:
    - name: master-0
      role: master
      bmc:
        address: redfish-virtualmedia://10.209.101.11/redfish/v1/Systems/1
        username: Administrator
        password: Password1
        disableCertificateVerification: True
      bootMACAddress: [the MAC address of the NIC you're planning to use]
      hardwareProfile: default
      rootDeviceHints:
        model: "LOGICAL"
        minSizeGigabytes: 300
    - name: master-1
      role: master
      bmc:
        address: redfish-virtualmedia://10.209.101.12/redfish/v1/Systems/1
        username: Administrator
        password: Password1
        disableCertificateVerification: True
      bootMACAddress: [the MAC address of the NIC you're planning to use]
      hardwareProfile: default
      rootDeviceHints:
        model: "LOGICAL"
        minSizeGigabytes: 300
    - name: master-2
      role: master
      bmc:
        address: redfish-virtualmedia://10.209.101.13/redfish/v1/Systems/1
        username: Administrator
        password: Password1
        disableCertificateVerification: True
      bootMACAddress: [the MAC address of the NIC you're planning to use]
      hardwareProfile: default
      rootDeviceHints:
        model: "LOGICAL"
        minSizeGigabytes: 300
    - name: worker-0
      role: worker
      bmc:
        address: redfish-virtualmedia://10.209.101.21/redfish/v1/Systems/1
        username: Administrator
        password: Password1
        disableCertificateVerification: True
      bootMACAddress: [the MAC address of the NIC you're planning to use]
      rootDeviceHints:
        model: "LOGICAL"
        minSizeGigabytes: 300
    - name: worker-1
      role: worker
      bmc:
        address: redfish-virtualmedia://10.209.101.22/redfish/v1/Systems/1
        username: Administrator
        password: Password1
        disableCertificateVerification: True
      bootMACAddress: [the MAC address of the NIC you're planning to use]
      rootDeviceHints:
        model: "LOGICAL"
        minSizeGigabytes: 300
    - name: worker-2
      role: worker
      bmc:
        address: redfish-virtualmedia://10.209.101.23/redfish/v1/Systems/1
        username: Administrator
        password: Password1
        disableCertificateVerification: True
      bootMACAddress: [the MAC address of the NIC you're planning to use]
      rootDeviceHints:
        model: "LOGICAL"
        minSizeGigabytes: 300
    - name: worker-3
      role: worker
      bmc:
        address: redfish-virtualmedia://10.209.101.24/redfish/v1/Systems/1
        username: Administrator
        password: Password1
        disableCertificateVerification: True
      bootMACAddress: [the MAC address of the NIC you're planning to use]
      rootDeviceHints:
        model: "LOGICAL"
        minSizeGigabytes: 300
    - name: worker-4
      role: worker
      bmc:
        address: redfish-virtualmedia://10.209.101.25/redfish/v1/Systems/1
        username: Administrator
        password: Password1
        disableCertificateVerification: True
      bootMACAddress: [the MAC address of the NIC you're planning to use]
      rootDeviceHints:
        model: "LOGICAL"
        minSizeGigabytes: 300
pullSecret: '[your pull secret text]'
sshKey: '[your public SSH key]'
2. Create an install directory:
mkdir clusterconfig
3. Copy install-config.yaml into your install directory:
cp install-config.yaml clusterconfig/
4. Run the following command to generate the Kubernetes manifests for your OpenShift cluster:
openshift-baremetal-install --dir=clusterconfig create manifests
5. Create the Ignition files:
openshift-baremetal-install --dir=clusterconfig create cluster
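When the installer finishes, it writes cluster credentials into the install directory. A typical way to confirm the cluster is up, with paths following from the clusterconfig directory used above:

export KUBECONFIG=$(pwd)/clusterconfig/auth/kubeconfig
oc get nodes
oc get clusterversion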
Creating the Local Storage operator
1. Log into your OpenShift Container Platform cluster.
2. Click Operators > OperatorHub.
3. In the search box, type Local Storage
4. When the Local Storage operator appears, click it.
5. In the right side of the screen, click Install.
6. In the Create Operator Subscription screen, accept all defaults, and click Subscribe.
The installation of the operator is complete when the Installed Operators status page lists Local Storage as Succeeded.
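The same status is visible from the CLI; assuming the default target namespace for this operator in OpenShift 4.7, a check like the following works:

oc get csv -n openshift-local-storage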
Creating the OpenShift Container Storage operator
1. Click Operators > OperatorHub.
2. In the search box, type OpenShift Container Storage
3. When the OpenShift Container Storage operator appears, click it.
4. In the right side of the screen, click Install.
5. In the Create Operator Subscription screen, accept all defaults, and click Subscribe.
Creating an OpenShift Container Storage cluster
1. Click Operators > Installed Operators.
2. Click on OpenShift Container Storage.
3. In OpenShift Container Storage, under the Storage Cluster option, select Create Instance.
4. In the Create Instance screen, make sure Internal-Attached devices is selected, select All Nodes, and click Next.
5. In the Create Storage Class screen, enter the Volume Set Name, and click Next.
6. When a pop-up warns you that this cannot be undone, click Yes to continue.
7. In the Set Storage and nodes screen, wait for the Storage Class to finish creating, and click Next.
8. In the Security configuration screen, leave Enable encryption unchecked, and click Next.
9. In the Review screen, click Create to kick off the OCS cluster creation.
Installing and running the pod-density test with VMware vSphere with Tanzu
1. Deploy the Tanzu Kubernetes Grid cluster from your Workload Management namespace. The specification file we used, new-tkg-cluster.yaml, is in the Files we used in our testing section of this document.
kubectl apply -f new-tkg-cluster.yaml
2. After the cluster is created (it may take some time), log into the cluster by first logging out of the Workload Management namespace
context, and logging into the context of the new cluster you created.
kubectl vsphere logout
kubectl vsphere login --insecure-skip-tls-verify --vsphere-username devops@vsphere.local --server=https://10.209.201.201 --tanzu-kubernetes-cluster-namespace tkg-workload --tanzu-kubernetes-cluster-name tkg-cluster-01
3. Deploy the application, which is based on this example: https://github.com/paulbouwer/hello-kubernetes. The specification file, hello-kubernetes.yaml, is in the Files we used in our testing section of this document.
kubectl apply -f hello-kubernetes.yaml
4. To set the number of pods, scale the replicas. To limit the stress on cluster resources, we added replicas in batches of 100 until we
reached the final number. For example, to scale to 300 pods, we added pods in three steps:
kubectl scale deployment hello-kubernetes --replicas=100
# wait until all pods have started
kubectl scale deployment hello-kubernetes --replicas=200
# wait until all pods have started
kubectl scale deployment hello-kubernetes --replicas=300
# wait until all pods have started
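To confirm that a batch has fully started before scaling further, generic checks like these (not from the report) are enough:

kubectl get deployment hello-kubernetes    # READY shows started/desired replicas
kubectl get pods --field-selector=status.phase=Running --no-headers | wc -l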
5. Get the external IP address for the service:
kubectl get svc hello-kubernetes
6. To perform the test, execute this script that sends an unending set of queries to the load balancer's IP address, which you found in step 5. For example, if the IP address was 10.233.59.17, the script would be:
while : ; do
  curl -s --connect-timeout 10 http://10.233.59.17 -w '\n%{time_total} hello-kubernetes-0000\n'
done | grep hello-kubernetes- | tee pod-density-results.txt
7. To end the test, type Control+C. The results file contains the name of the pod that received and responded to the query and the total
time needed to complete the request.
Installing and running the pod-density test with Red Hat OpenShift
1. Deploy the application, which is based on this example: https://github.com/paulbouwer/hello-kubernetes. The specification file, hello-kubernetes.yaml, is in the Files we used in our testing section of this document.
oc apply -f hello-kubernetes.yaml
2. To set the number of pods, scale the replicas. To limit the stress on cluster resources, we added replicas in batches of 100 until we
reached the final number. For example, to scale to 300 pods, we added pods in three steps:
oc scale deployment hello-kubernetes --replicas=100 -n hello
# wait until all pods have started
oc scale deployment hello-kubernetes --replicas=200 -n hello
# wait until all pods have started
oc scale deployment hello-kubernetes --replicas=300 -n hello
# wait until all pods have started
3. Get the external IP address for the service:
kubectl get svc hello-kubernetes -n hello
4. To perform the test, execute this script that sends an unending set of queries to the load balancer's IP address, which you found in step 3. For example, if the IP address was 10.233.59.17, the script would be:
while : ; do
  curl -s --connect-timeout 10 http://10.233.59.17 -w '\n%{time_total} hello-kubernetes-0000\n'
done | grep hello-kubernetes- | tee pod-density-results.txt
5. To end the test, type Control+C. The results file contains the name of the pod that received and responded to the query and the total
time needed to complete the request.
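Because every line the -w marker writes into pod-density-results.txt begins with the total request time, a one-liner of this shape (our own illustration, and dependent on the hello-kubernetes-0000 marker used above) summarizes the latency data:

grep ' hello-kubernetes-0000' pod-density-results.txt | awk '{ sum += $1; n++ } END { if (n) printf "requests: %d  avg time: %.3fs\n", n, sum/n }'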
Files we used in our testing
new-tkg-cluster.yaml
We used the new-tkg-cluster.yaml file only for the VMware vSphere with Tanzu environment.
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkg-cluster-01
  namespace: tkg-workload
spec:
  distribution:
    version: 1.19.7+vmware.1-tkg.1.fc82c41
  topology:
    controlPlane:
      count: 3
      class: guaranteed-medium
      storageClass: tkg-storage-policy
    workers:
      count: 130
      class: best-effort-medium
      storageClass: tkg-storage-policy
  settings:
    storage:
      defaultClass: tkg-storage-policy
    network:
      cni:
        name: antrea
      services:
        cidrBlocks: ["172.17.0.0/16"]
      pods:
        cidrBlocks: ["172.18.0.0/16"]
hello-kubernetes.yaml
We used this same hello-kubernetes.yaml file for both environments for identical workloads.
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: hello-kubernetes
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-kubernetes
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      containers:
      - name: hello-kubernetes
        image: paulbouwer/hello-kubernetes:1.9
        ports:
        - containerPort: 8080
Principled Technologies is a registered trademark of Principled Technologies, Inc.
All other product names are the trademarks of their respective owners.

DISCLAIMER OF WARRANTIES; LIMITATION OF LIABILITY:
Principled Technologies, Inc. has made reasonable efforts to ensure the accuracy and validity of its testing; however, Principled Technologies, Inc. specifically disclaims any warranty, expressed or implied, relating to the test results and analysis, their accuracy, completeness or quality, including any implied warranty of fitness for any particular purpose. All persons or entities relying on the results of any testing do so at their own risk, and agree that Principled Technologies, Inc., its employees and its subcontractors shall have no liability whatsoever from any claim of loss or damage on account of any alleged error or defect in any testing procedure or result.

In no event shall Principled Technologies, Inc. be liable for indirect, special, incidental, or consequential damages in connection with its testing, even if advised of the possibility of such damages. In no event shall Principled Technologies, Inc.'s liability, including for direct damages, exceed the amounts paid in connection with Principled Technologies, Inc.'s testing. Customer's sole and exclusive remedies are as set forth herein.

This project was commissioned by VMware.