The document discusses designing domains in VMware Cloud Foundation. It covers designing the management domain, including sizing considerations for the ESXi hosts, vCenter Server, vSphere networking, and vSAN storage. It also discusses designing workload domains, such as ESXi and vCenter Server design considerations, vSphere networking design, and shared storage design. The objectives are to understand sizing of the management domain, and design considerations for the workload domains.
The Planning and Preparation Workbook in VMware Cloud Foundation documentation provides the requirements for the management domain.
These resources can be scaled as the deployment increases in size. The number of vCenter Server and NSX instances increases as the number of domains increases.
The design considerations for the management domain include these elements:
ESXi design for the management domain: The compute layer of the virtual infrastructure (VI) in the SDDC is implemented by ESXi, a bare-metal hypervisor that you install directly onto your physical server. With direct access to and control of underlying resources, ESXi logically partitions hardware to consolidate applications and cut costs.
vCenter Server design for the management domain: For this design, you determine the number of vCenter Server instances in the management domain, their size, networking configuration, cluster layout, redundancy, and security configuration.
vSphere networking design for the management domain: The network design prevents unauthorized access and provides timely access to business data. This design uses vSphere Distributed Switch and NSX-T Data Center for virtual networking.
Software-defined networking design for the management domain: In this design, you use NSX-T Data Center to connect the management workloads by way of virtual network segments and routing. You create constructs for region-specific and cross-region solutions. These constructs isolate the solutions from the rest of the network, providing routing to the data center and load balancing.
Shared storage design for the management domain: The shared storage design includes vSAN and NFS storage for the SDDC management components.
For the logical design for ESXi, determine the high-level integration of ESXi hosts with other components.
To provide the resources required to run the management components according to the design objectives, each ESXi host consists of the following elements:
Out-of-band management interface
Network interfaces
Storage devices
The configuration and assembly process for each system should be standardized, with all components installed in the same way on each ESXi host. Because standardization of the physical configuration of the ESXi hosts removes variability, the infrastructure is easily managed and supported. ESXi hosts are deployed with identical configurations across all cluster members, including storage and networking configurations. For example, consistent PCIe card slot placement, especially for network controllers, is essential for accurate alignment of physical to virtual I/O resources.
By using identical configurations, an even balance of virtual machine storage components is established across storage and compute resources.
In this design, the primary storage system for the management domain is vSAN. Consequently, the sizing of physical servers running ESXi requires special considerations:
The number of workload domains to be deployed in the future
Whether each of those domains has dedicated NSX Manager instances
The deployment of other VMware solutions such as vRealize Automation, vRealize Operations Manager, and vRealize Network Insight
The solution requires proper sizing of the management domain in VMware Cloud Foundation.
When implementing a consolidated design, consider other workloads and virtual machines that might be running alongside the management infrastructure virtual machines. An average-size virtual machine has two virtual CPUs and 4 GB of RAM. The typical spec 2U ESXi host can run 60 average-size virtual machines.
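The averages above (two vCPUs and 4 GB of RAM per VM, roughly 60 such VMs on a typical 2U host) can be turned into back-of-the-envelope sizing arithmetic. A minimal sketch follows; the HA spare-host parameter is an assumption for illustration, not a stated requirement.

```python
# Rough consolidated-design sizing sketch using the averages stated above:
# an average VM has 2 vCPUs and 4 GB RAM, and a typical 2U ESXi host runs
# about 60 average-size VMs. The HA spare-host count is an assumption.

AVG_VM_VCPUS = 2
AVG_VM_RAM_GB = 4
VMS_PER_HOST = 60  # typical-spec 2U ESXi host, per the text above

def hosts_needed(total_vms: int, ha_spare_hosts: int = 1) -> int:
    """Hosts required for a VM count, plus assumed spare capacity for HA."""
    full_hosts = -(-total_vms // VMS_PER_HOST)  # ceiling division
    return full_hosts + ha_spare_hosts

def cluster_demand(total_vms: int) -> dict:
    """Aggregate vCPU and RAM demand for the workload."""
    return {
        "vcpus": total_vms * AVG_VM_VCPUS,
        "ram_gb": total_vms * AVG_VM_RAM_GB,
    }

# Example: 150 average-size VMs need ceil(150/60) = 3 hosts + 1 HA spare.
print(hosts_needed(150))    # 4
print(cluster_demand(150))  # {'vcpus': 300, 'ram_gb': 600}
```

For real deployments, the vSAN sizing tool mentioned below should be used instead of raw averages, since storage policies and overhead change the result.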
VMware Cloud Foundation 4.0 uses vSAN ReadyNode for the physical servers running ESXi in the management domain. Use the vSAN sizing tool to size the management nodes accordingly.
NOTE: Consider additional management domain resource needs for optional solutions such as vRealize Automation, vRealize Operations Manager, and vRealize Network Insight.
The solution requires proper sizing of the management domain in VMware Cloud Foundation. Also consider management resources for additional workload domains (WLDs).
Depending on the boot media used, the minimum capacity for each partition varies. The only constant is the system boot partition. If the boot media is larger than 128 GB, a VMFS datastore is created automatically and is used for storing virtual machine data.
For storage media such as USB or SD devices, the ESX-OSData partition is created on a high-endurance storage device such as an HDD or SSD. When a secondary high-endurance storage device is not available, ESX-OSData is created on USB or SD devices, but this partition is used only to store ROM-data. RAM-data is stored on a RAM disk.
For more information about ESXi hardware configurations, see ESXi Hardware Requirements at https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.esxi.install.doc/GUID-DEB8086A-306B-4239-BF76-E354679202FC.html.
One vCenter Server instance is allocated to the management domain to support management components.
Determine the number of vCenter Server instances for the management domain and the amount of compute and storage resources required based on the scale of the environment, the plans for deploying virtual infrastructure workload domains, and the requirements for isolating management workloads from tenant workloads. vCenter Server is required for advanced vSphere features such as vSphere Distributed Resource Scheduler (vSphere DRS), vSphere vMotion, and vSphere Storage vMotion.
By using the Enhanced Linked Mode of vCenter Server, you can log in to every vCenter Server instance that is joined to the same vCenter Single Sign-On domain and access their inventories. You can connect as many as 15 vCenter Server instances to a single vCenter Single Sign-On domain.
In the management domain, the default deployment size of vCenter is small. This size can be modified in the Deployment Parameter Workbook.
Consider changing the size of vCenter when scaling up a consolidated design with multiple clusters running workloads in the management domain.
When more capacity is needed, scaling the consolidated design is an option, but a much better design option for a larger scale is to use workload domains. After you reach 8 or 10 nodes in the consolidated design, consider migrating to a standard design. If you do migrate to a standard design, the default small vCenter size should be enough for most designs.
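The scaling guidance above can be restated as a small decision helper. This is a sketch that simply encodes the 8-to-10-node threshold from the text; the thresholds are guidance from these notes, not official product limits.

```python
# Architecture recommendation sketch restating the guidance above: small
# consolidated clusters are fine, but at around 8-10 nodes you should
# consider migrating to the standard design with workload domains.
# Thresholds come from these notes, not from official VMware limits.

def recommend_architecture(node_count: int) -> str:
    if node_count < 8:
        return "consolidated"
    if node_count <= 10:
        return "consolidated (consider migrating to standard)"
    return "standard (workload domains recommended)"

print(recommend_architecture(4))   # consolidated
print(recommend_architecture(9))   # consolidated (consider migrating to standard)
print(recommend_architecture(12))  # standard (workload domains recommended)
```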
The separation of different traffic types is required to reduce contention and latency and to configure access security. This separation is created by default by VMware Cloud Foundation.
When a migration occurs using vSphere vMotion, the contents of the memory of the guest operating system are transmitted over the network. vSphere vMotion is on a separate network, using a dedicated vSphere vMotion VLAN.
High latency on any network can negatively affect performance. Some components are more sensitive to high latency than others. For example, reducing latency is important on the IP storage network and on the vSphere Fault Tolerance logging network. According to the application or service, high latency on specific virtual machine networks can also negatively affect performance.
You might want to consider other types of storage, such as NFS, as the supplemental storage for the management domain. The supplemental storage can be used for backups. Depending on your design and implementation, you can keep daily backups of the management components by leveraging the supplemental storage.
The management domain supports many forms of supplemental storage (NFS, FC, iSCSI, vSphere Virtual Volumes). You must manually handle lifecycle management (LCM) of the supplemental storage hardware and manage any VMware Compatibility Guide requirements, for example, firmware or VIBs for an FC adapter.
You can start deploying the SDDC in a single availability zone configuration and then extend the environment with the second availability zone. One advantage in having multiple availability zones is that the management components of the SDDC can run in availability zone 1, and, if an outage occurs in zone 1, the components can be recovered by vSphere HA in availability zone 2.
Extending the management cluster to a vSAN stretched cluster provides the following advantages:
Increased availability with minimal downtime and data loss. (RPO is zero, and the RTO is the time it takes vSphere HA to start the virtual machines in availability zone 2.)
All features and policies with vSAN Storage Policy-Based Management (SPBM) can be used.
Intersite latency must be 5 ms or less, so the design works with data centers located within an approximately 50-mile radius, such as a metro area, providing business continuity.
Using a vSAN stretched cluster for the management components has the following disadvantages:
Increased footprint
Symmetrical host configuration in the two availability zones
Additional setup for the stretched cluster: configuration is manual, following manual guidance, on VMware Cloud Foundation 4.0.1
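The 5 ms intersite latency budget above can be sanity-checked with back-of-the-envelope propagation math. In the sketch below, the per-kilometer fiber delay figure is a common rule-of-thumb approximation, not a VMware specification.

```python
# Back-of-the-envelope check of the 5 ms intersite latency budget above.
# Light in fiber travels at roughly 200,000 km/s, i.e. about 5 us of
# one-way propagation delay per km (10 us per km round trip). This
# per-km figure is a common approximation, not a vSAN specification.

US_PER_KM_RTT = 10.0   # assumed round-trip fiber propagation delay per km
MILES_TO_KM = 1.609

def propagation_rtt_ms(distance_miles: float) -> float:
    """Raw round-trip fiber propagation delay for a given site separation."""
    return distance_miles * MILES_TO_KM * US_PER_KM_RTT / 1000.0

# Sites 50 miles apart incur only ~0.8 ms of raw propagation delay,
# leaving most of the 5 ms budget for switching and equipment latency.
print(round(propagation_rtt_ms(50), 2))  # 0.8
```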
The software-defined data center (SDDC) detailed design includes numbered design decisions (Decision IDs), and the justification and implications of each decision.
Example of how to size the management domain with regard to RAM.
When deciding on the vCenter size, consider whether you want to deploy a consolidated architecture and plan to scale the design, or whether, for either business reasons or technical requirements, you plan to maintain the consolidated design with more than 100 hosts.
In the consolidated design in VMware Cloud Foundation, keep your clusters small, and when you get to 8 or 10 nodes, consider migrating to the standard design with a more robust implementation using workload domains.
Jerry note: 8 to 10 nodes total, management plus 2-3 small clusters, is the max that should be considered for the consolidated architecture.
Q: Stephen Costello
Asked about the use of the management domain and whether it should be kept small and tight.
Jerry: VCF is used as a service-based product. Management should be performed on SDDC Manager, not vCenter (the use case covers VM creation and deletion). SDDC Manager does not reach out or get information about VMs deployed from other management tools such as vCenter. For example, beyond 10 hosts in the management domain, the use of WLDs is recommended (required).
This can lead into a Federation discussion if providing services to various (different) groups or customers.
Stephen was concerned about the number of vCenter Server instances in the management domain, one for each WLD.
Q: Jonathan Ebenezer: Cloud Builder: if deployment fails midway, do we troubleshoot the process or restart?
Jerry: You can go in and determine the failure. It is most likely an issue caused in the JSON spreadsheet.
If this is the case, correct the input file, then start from the beginning.
For timeout issues, use the reset task.
Ashley Huynh: The most common issue seen is a network validation error, caused by incorrect IP ranges entered in the spreadsheet (JSON) before import. As a tip, take a snapshot of the Cloud Builder appliance so that if the appliance fails, you don't have to start from scratch.
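Since incorrect IP ranges are the most common validation failure, a quick pre-import sanity check can catch them before Cloud Builder does. The sketch below uses a hypothetical JSON layout, not the real Cloud Builder parameter schema; the idea is simply that every pool address must fall inside its declared subnet.

```python
# Pre-import sanity check for IP ranges in a deployment parameter file.
# The JSON structure here is a hypothetical example; the actual Cloud
# Builder workbook/JSON schema differs, but the containment check is
# the same idea: every pool address must fall inside its subnet.
import ipaddress
import json

def check_ip_ranges(params: dict) -> list:
    """Return error strings for pool addresses outside their subnet."""
    errors = []
    for net in params.get("networks", []):
        subnet = ipaddress.ip_network(net["subnet"])
        for ip in (net["pool_start"], net["pool_end"]):
            if ipaddress.ip_address(ip) not in subnet:
                errors.append(f"{ip} is outside {net['subnet']} ({net['name']})")
    return errors

sample = json.loads("""
{"networks": [
  {"name": "vmotion", "subnet": "172.16.11.0/24",
   "pool_start": "172.16.11.10", "pool_end": "172.16.11.50"},
  {"name": "vsan", "subnet": "172.16.12.0/24",
   "pool_start": "172.16.12.10", "pool_end": "172.16.13.50"}
]}
""")
for err in check_ip_ranges(sample):
    print(err)  # flags 172.16.13.50 as outside 172.16.12.0/24
```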
The workload domain consists of components from the physical infrastructure, virtual infrastructure, and the security and compliance layers.
When deploying a VI workload domain, you must address several design considerations:
ESXi detailed design for a VI workload domain: The compute layer of the VI is implemented by ESXi, a bare-metal hypervisor that installs directly onto your physical server. With direct access and control of underlying resources, ESXi logically partitions hardware to consolidate applications and cut costs.
vCenter Server design for a VI workload domain: For this design, you determine the number of vCenter Server instances in the workload domain, their size, networking configuration, cluster layout, redundancy, and security configuration.
vSphere networking design for a VI workload domain: The network design prevents unauthorized access and provides timely access to business data. This design uses vSphere Distributed Switch and NSX-T Data Center for virtual networking.
Software-defined networking design for a VI workload domain: You use NSX-T Data Center to provide network connectivity for workloads, implementing virtual network segments and routing.
Shared storage design for a VI workload domain: The shared storage design includes the design for VMware vSAN storage and other options of principal storage and supplemental storage.
vSAN Sizing – (Jerry)
Create Management Domain – use small VC
Then scale up your vCenter Server.
Then increase the number of hosts.
When the design uses vSAN ReadyNode as the fundamental building block for the primary storage system in the workload domain:
Select all ESXi host hardware, including CPUs, according to the VMware Compatibility Guide and aligned to the ESXi version specified by this design.
The sizing of physical servers running ESXi requires special considerations when you use vSAN storage.
For information about the models of physical servers that are vSAN ready, see the VMware Compatibility Guide at https://www.vmware.com/resources/compatibility/search.php.
If you are not using vSAN ReadyNode, your CPU must be listed on the VMware Compatibility Guide under CPU Series and aligned to the ESXi version specified by this design.
NOTE: No mixing and matching; use either all-flash or hybrid configurations throughout.
ESXi configurations should be identical, or a best effort should be made to match hardware type, CPU, RAM, and storage type and size (sizing and technology matching throughout).
For more information about disk groups, including design and sizing guidance, see Administering VMware vSAN at https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vsan.doc/GUID-AEF15062-1ED9-4E2B-BA12-A5CE0932B976.html.
This ties to workload types and requirements, and to scalability and growth estimates. Design for overhead (this includes Tanzu).
Some networks, such as vMotion and vSAN, are created by VMware Cloud Foundation, whereas others, such as NFS, are optional. The NFS network is used when NFS is the principal storage in the workload domain. You must create a network pool, specifying an NFS VLAN ID and subnet/IPs.
Separating different types of traffic is required to reduce contention and latency, and for access security.
High latency on any network can negatively affect performance. Some components are more sensitive to high latency than others. For example, reducing latency is important on the IP storage and the vSphere Fault Tolerance logging network because latency on these networks can negatively affect the performance of multiple virtual machines.
According to the application or service, high latency on specific virtual machine networks can also negatively affect performance. Use information gathered from the current-state analysis and from interviews with key stakeholders and SMEs to determine which workloads and networks are especially sensitive to high latency.
IMPORTANT: You must separate your network traffic types; otherwise, latency can become problematic.
All workload domains can use the NSX Edge cluster in the management domain. For scalability, you can use SDDC Manager to deploy an NSX Edge cluster in workload domain clusters.
NSX Manager provides the user interface and the RESTful API for creating, configuring, and monitoring NSX components, such as virtual network segments, and Tier-0 and Tier-1 gateways.
NSX Manager implements the management and control plane for the NSX-T Data Center infrastructure. NSX Manager is the centralized network management component of NSX-T Data Center, providing an aggregated view on all components in the NSX-T Data Center system.
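As an illustration of the RESTful API mentioned above, the following sketch builds the request for listing Tier-0 gateways through the NSX-T Policy API (GET /policy/api/v1/infra/tier-0s). The FQDN and credentials are placeholders, and basic authentication is assumed for simplicity:

```python
import base64

NSX_POLICY_BASE = "/policy/api/v1"  # NSX-T Policy API base path

def tier0_list_request(nsx_manager_fqdn: str, username: str, password: str):
    """Build the URL and headers to list Tier-0 gateways.

    GET /policy/api/v1/infra/tier-0s returns the configured Tier-0
    gateways; basic auth is one supported authentication scheme.
    """
    url = f"https://{nsx_manager_fqdn}{NSX_POLICY_BASE}/infra/tier-0s"
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    headers = {"Authorization": f"Basic {token}", "Accept": "application/json"}
    return url, headers

# Placeholder FQDN and credentials for illustration only
url, headers = tier0_list_request("nsx-mgr.example.com", "admin", "secret")
print(url)
```

An equivalent request against `/infra/tier-1s` would list Tier-1 gateways, and `/infra/segments` would list virtual network segments.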
NOTE: You cannot share the management domain NSX Manager instance with workload domains; each domain has its own NSX Edge instances.
The management domain has a dedicated NSX instance and an NSX Edge cluster.
The NSX Edge cluster can be deployed later, after the initial bring-up (a day-x operation).
For different technical requirements or business reasons, you might need dedicated NSX Manager instances that are deployed for an individual workload domain. For example, a university might keep faculty and students completely separated using dedicated domains and dedicated NSX Manager instances. Another example is a hospital with different business units that require separate management and billing for their consumption, like a multitenant design with a cloud provider.
VMware Cloud Foundation provides the data center design with numerous planning and deployment options to meet various business requirements.
You must consider the segments and different types of traffic for the workload domain.
In addition to the regular workload traffic, such as from VMs, other traffic might require their own separate networks, VLANs, and subnets. If you use multiple workload domain clusters, determine whether they share the same VLANs and subnets.
Dedicated networks are needed for the different network types. Adhere to the requirements of each traffic type.
For the workload domain deployment in VMware Cloud Foundation, you can select principal and supplemental storage. For principal storage, the choices are vSAN, FC, or NFS. For supplemental storage, you can select FC, NFS, or iSCSI.
You might have traditional FC arrays that were purchased 2 or 3 years ago, and the full ROI is not realized yet. VMware Cloud Foundation gives you the flexibility to use those arrays as principal storage for the workload domain.
If FC storage is phased out in favor of an all-vSAN configuration, the workloads must be migrated to a new workload domain with vSAN as the principal storage type, as mentioned earlier.
In the workload domain, you can choose between different storage solutions. You can use vSAN, NFS, and FC as principal storage, and you can use NFS, FC, and iSCSI as supplemental storage. You can design multiple workload domains according to business needs and use different storage solutions for each domain. For example, a workload domain that is dedicated for Horizon can work with vSAN.
When you create a workload domain, only one principal storage option is used for the defined cluster. After the workload domain is deployed, one or more supplemental storage options can be (manually) added. If a second cluster is created in a VI workload domain, a different principal storage option can be selected, if necessary.
NOTE: Only one principal storage solution can be used per cluster, but multiple supplemental storage solutions can be added.
The best way to size your vSAN nodes for the workload domain is to first perform an application profile assessment. You must know which workloads are running in the workload domain, or at least have an idea of what they are projected to be.
After completing the application profile assessment, you can use the new vSAN sizer at https://vsansizer.vmware.com.
Within the tool, you can perform the following tasks:
Select Hybrid or All Flash
Define one or more workload profiles using templates (VDI, Databases, or General Purpose)
Define server configuration
Generate an export report
Consider these guiding questions when formulating your design:
Will the largest VM/workload affect the number of sockets that are required in your hosts?
How much memory and CPU do your workloads require?
How many host failures can be tolerated?
How much vSAN disk space is required?
Consider all storage policy settings, VM availability requirements, and so on.
Identify the features that you want to use.
How much disk space do you need per server?
Consider the cache and capacity requirements.
Consider the performance requirements.
Does the number of hosts affect the available Failures to Tolerate (FTT) and Failure Tolerance Method (FTM) policies?
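The capacity questions above can be sketched as a back-of-the-envelope calculation. The multipliers below reflect standard vSAN storage policy overheads (RAID-1 mirroring stores FTT + 1 copies; RAID-5 erasure coding is about 1.33x; RAID-6 is 1.5x), and the 30% slack default is a common rule of thumb, not an official figure; the vSAN sizer tool remains the authoritative source:

```python
def vsan_raw_capacity_gb(usable_gb: float, ftt: int, ftm: str,
                         slack_fraction: float = 0.30) -> float:
    """Estimate the raw vSAN capacity needed for a usable capacity target.

    Multipliers follow the standard vSAN storage policy overheads:
      RAID-1 (mirroring): FTT + 1 copies
      RAID-5 (erasure coding, FTT=1): ~1.33x
      RAID-6 (erasure coding, FTT=2): 1.5x
    slack_fraction reserves free capacity for rebuilds and maintenance
    (25-30% is a common rule of thumb; check current vSAN guidance).
    """
    if ftm == "RAID-1":
        multiplier = ftt + 1
    elif ftm == "RAID-5" and ftt == 1:
        multiplier = 4 / 3
    elif ftm == "RAID-6" and ftt == 2:
        multiplier = 1.5
    else:
        raise ValueError(f"Unsupported combination: {ftm} with FTT={ftt}")
    return usable_gb * multiplier / (1 - slack_fraction)

# 10 TB usable with RAID-1 and FTT=1: 20 TB of replicas plus 30% slack
print(round(vsan_raw_capacity_gb(10_000, ftt=1, ftm="RAID-1")))  # ~28571 GB
```

Comparing the same usable target under RAID-5 shows why the FTM choice matters: the raw capacity requirement drops by roughly a third, at the cost of requiring more hosts.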
Consider a sizing example that includes vSphere with Tanzu, which requires large NSX Edge nodes.
Consider examples of static versus dynamic port group configuration.
Slack space availability is a key sizing consideration.
Verify that vSAN can support the needed or desired number of failures and maintenance periods.