Designing Domains in VMware Cloud Foundation
Module 3
© 2020 VMware, Inc.
Importance
When designing and planning for a VMware Cloud Foundation deployment, you must consider the sizing of
your solution. Creating a scalable, resilient solution requires a management domain that is properly sized.
After the management domain is functional, you can deploy the workload domains. During the deployment,
you must balance several design considerations.
Module Lessons
1. Designing and Sizing the Management Domain
2. Designing and Sizing Workload Domains
Designing and Sizing the Management Domain
Learner Objectives
• Recognize management domain sizing considerations
• Describe design considerations for ESXi in the management domain
• Describe design considerations for vCenter in the management domain
• Describe design considerations for vSphere networking in the management domain
• Describe design considerations for vSAN in the management domain
• Recognize design choices for a consolidated design or standard design
Management Domain Minimum Hardware Requirements (1)
• Servers: Four vSAN ReadyNodes. For information about compatible vSAN ReadyNodes, see the VMware Compatibility Guide at https://www.vmware.com/resources/compatibility/search.php.
• CPU per server: Dual-socket, eight cores per socket minimum requirement for all-flash systems. Single-socket, eight cores per socket minimum requirement for hybrid (flash and magnetic) systems. NOTE: VMware Cloud Foundation also supports quad-socket servers for use with all-flash or hybrid systems.
Before deploying the management domain, verify that the minimum requirements are met.
Management Domain Minimum Hardware Requirements (2)
• Memory per server: 256 GB.
• Storage per server: 16 GB boot device, local media. One NVMe or SSD for the caching tier (Class D endurance, Class E performance). Two SSDs or HDDs for the capacity tier. For information about installing ESXi on a supported USB flash drive or SD flash card, see VMware knowledge base article 2004784 at https://kb.vmware.com/s/article/2004784. For guidelines about cache sizing, see Designing and Sizing a Virtual SAN Cluster at https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.virtualsan.doc/GUID-1EB40E66-1FBD-48A6-9426-B33F9255B282.html.
• NICs per server: Minimum of two 10 GbE (or higher) NICs (IOVP certified). (Optional) One 1 GbE BMC NIC. The new design in VMware Cloud Foundation 4.0 supports four or six NICs per server.
Resource Use in the Management Domain
This resource summary includes the core VM components deployed with VMware Cloud Foundation and the
minimum resources that are required:
• VMware Cloud Foundation deployment components:
– SDDC Manager
– vCenter
– NSX Manager instances
– NSX Edge instances
– Optional: Three-node vRealize Log Insight cluster
– Optional: Any of the vRealize Suite components
• Resource minimums:
– 52 vCPUs
– 256 GB RAM
– 1,990 GB storage
vSAN sizing tools: https://vsansizer.vmware.com
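As a quick sanity check, the sketch below (illustrative arithmetic only; use the vSAN sizer tool for real capacity planning) compares these published minimums with a four-node cluster built from the minimum hardware specification:

```python
# Sanity check of the management domain resource minimums (illustrative
# arithmetic only; use the vSAN sizer for real capacity planning).
REQUIRED = {"vcpus": 52, "ram_gb": 256, "storage_gb": 1990}

hosts = 4                    # four vSAN ReadyNodes (minimum)
cores_per_host = 2 * 8       # dual-socket, eight cores per socket (minimum)
ram_per_host_gb = 256        # minimum memory per server

available = {
    "vcpus": hosts * cores_per_host,   # ignores any vCPU-to-core overcommit
    "ram_gb": hosts * ram_per_host_gb,
}

for resource, needed in REQUIRED.items():
    have = available.get(resource)
    if have is None:
        print(f"{resource}: need {needed}; size the vSAN datastore separately")
    else:
        status = "OK" if have >= needed else "SHORT"
        print(f"{resource}: need {needed}, have {have} -> {status}")
```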
Virtual Infrastructure Design: Management Domain Considerations
When deploying the management domain, consider the following design elements:
• ESXi design
• vCenter Server design
• vSphere networking design
• Software-defined networking design
• Shared storage design
ESXi Design for the Management Domain (1)
In the virtual infrastructure (VI) design, you size
the compute resources of the ESXi hosts in the
management domain according to the following
requirements:
• System requirements of the management
components
• Requirements for managing customer
workloads based on the design objectives
[Figure: an ESXi host with CPU, memory, local storage (vSAN), non-local storage (SAN/NAS), NIC 1 and NIC 2 uplinks, and an out-of-band management uplink]
ESXi Design for the Management Domain (2)
Installing ESXi 7.0 requires a boot device with at least 8 GB for USB or SD devices, and 32 GB for other
device types.
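As a small illustration, the following sketch (a hypothetical helper; the thresholds are the figures stated above) checks a candidate boot device against those minimums:

```python
# Minimum ESXi 7.0 boot device sizes by device type, from the text above
# (hypothetical helper for illustration).
MIN_BOOT_GB = {"usb": 8, "sd": 8, "other": 32}

def boot_device_ok(device_type: str, size_gb: int) -> bool:
    """True if the device meets the ESXi 7.0 minimum for its type."""
    return size_gb >= MIN_BOOT_GB.get(device_type, MIN_BOOT_GB["other"])

print(boot_device_ok("usb", 16))    # True: USB/SD devices need 8 GB
print(boot_device_ok("other", 16))  # False: other device types need 32 GB
```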
When considering a virtual machine swap file design for the management domain, you can use the default
configuration. The swap file is stored in the same location as the configuration file of the virtual machine.
When VMware Cloud Foundation is used, SDDC Manager performs lifecycle management and can include additional components as part of the lifecycle management process.
vCenter Server Design for the Management Domain (1)
A vCenter Server deployment consists of one or more vCenter Server instances with an embedded Platform Services Controller (PSC), depending on the scale of the environment.
vCenter Server is deployed as a preconfigured virtual appliance running the Photon operating system.
vCenter Server is deployed by VMware Cloud Builder during the bring-up process, and its size is small by
default. Unless you plan to implement a consolidated design, no other size considerations are necessary.
VMware Cloud Foundation does not support vCenter Server High Availability, so the design must include proper backups of all the management components, including vCenter Server.
vCenter Server Design for the Management Domain (2)
Appliance sizes, management capacity, and storage sizes (default / large / x-large):
• X-Large environment: up to 2,000 hosts or 35,000 VMs; storage 1,805 GB / 1,905 GB / 3,665 GB
• Large environment: up to 1,000 hosts or 10,000 VMs; storage 1,065 GB / 1,765 GB / 3,525 GB
• Medium environment: up to 400 hosts or 4,000 VMs; storage 700 GB / 1,700 GB / 3,460 GB
• Small environment (deployed by default): up to 100 hosts or 1,000 VMs; storage 480 GB / 1,535 GB / 3,295 GB
• Tiny environment: up to 10 hosts or 100 VMs; storage 415 GB / 1,490 GB / 3,245 GB
The default deployment size of vCenter in the management domain is small.
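The capacity table lends itself to a simple smallest-sufficient-size lookup. The helper below is an illustrative sketch (the function and table encoding are hypothetical, not a VMware API or tool):

```python
# Illustrative lookup of the smallest sufficient vCenter appliance size,
# based on the capacity table above. Not a VMware API; a sizing sketch only.
SIZES = [  # (name, max_hosts, max_vms), ordered smallest to largest
    ("tiny", 10, 100),
    ("small", 100, 1_000),
    ("medium", 400, 4_000),
    ("large", 1_000, 10_000),
    ("x-large", 2_000, 35_000),
]

def appliance_size(hosts: int, vms: int) -> str:
    for name, max_hosts, max_vms in SIZES:
        if hosts <= max_hosts and vms <= max_vms:
            return name
    raise ValueError("inventory exceeds a single vCenter Server instance")

print(appliance_size(hosts=80, vms=900))    # small (the VCF default)
print(appliance_size(hosts=500, vms=3000))  # large (hosts drive the size up)
```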
vSphere Networking Design for the Management Domain
To achieve greater security and better performance, network services are segmented from one another by
VMware Cloud Foundation.
Network I/O Control and traffic shaping are deployed by default to guarantee bandwidth for critical infrastructure VMs and for critical types of traffic, such as vSAN traffic.
Shared Storage Design for the Management Domain (1)
The clusters in the management domain use vSAN for principal storage. Other types of storage are available
for supplemental storage.
[Figure: a management cluster of virtual appliances on ESXi hosts, with management VMs, backups, and templates and logs on one or more datastores; a sample datastore of 1,500 GB holding a 200 GB VMDK plus swap files and logs; and a software-defined storage stack (policy-based storage management, virtualized data services, hypervisor storage abstraction) over SAN, NAS, or DAS physical disks (SSD, FC15K, FC10K, SATA; third party or VMware vSAN)]
Shared Storage Design for the Management Domain (2)
In the management domain, you can design your vSAN cluster in a single availability zone or use multiple
availability zones.
Design Decisions: ESXi Nodes in the Management Domain
HYD-MGMT-VI-ESXi-001:
• Design decision: Use vSAN ReadyNodes with vSAN storage for each ESXi host in the management domain.
• Design justification: VMware Cloud Foundation is fully compatible with vSAN at deployment.
• Design implication: Hardware choices follow the vSAN compatibility guide.

HYD-MGMT-VI-ESXi-002:
• Design decision: Allocate hosts with uniform configuration across the first cluster of the management domain.
• Design justification: A balanced cluster has these advantages: predictable performance even during hardware failures, and minimal impact of resync or rebuild operations on performance.
• Design implication: You must apply vendor sourcing, budgeting, and procurement considerations for uniform server nodes, on a per cluster basis.
When selecting hosts for the management domain, use vSAN ReadyNodes, which should be as homogeneous as possible for the most predictable performance.
Design Decisions: ESXi Memory in the Management Domain
HYD-MGMT-VI-ESXi-003:
• Design decision: Install each ESXi host in the first, four-node cluster of the management domain with a minimum of 256 GB of RAM.
• Design justification: The management domain and NSX Edge appliances in this cluster require a total of 453 GB of RAM. You allocate the remaining memory to additional management components that are required for new capabilities, for example, for new VI workload domains.
• Design implication: In a four-node cluster, only 768 GB is available for use because the host redundancy that is configured in vSphere HA is N+1.
To right-size the amount of RAM in each host, consider the types of workloads that run in the management domain.
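The justification's numbers are easy to reproduce. A minimal sketch of the N+1 memory arithmetic, using the figures from the decision above:

```python
# N+1 memory arithmetic for the four-node management cluster (figures from
# the decision above; illustrative sketch).
hosts = 4
ram_per_host_gb = 256
reserved_hosts = 1                       # vSphere HA N+1 host redundancy

usable_gb = (hosts - reserved_hosts) * ram_per_host_gb    # 3 * 256 = 768 GB
committed_gb = 453                       # management domain + NSX Edge appliances
headroom_gb = usable_gb - committed_gb   # left for future management components

print(f"usable {usable_gb} GB, committed {committed_gb} GB, headroom {headroom_gb} GB")
# usable 768 GB, committed 453 GB, headroom 315 GB
```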
Design Decisions: vCenter Considerations in the Management Domain
HYD-MGMT-VI-VC-002:
• Design decision: Deploy an appliance for the management domain vCenter Server of a small deployment size or larger.
• Design justification: A vCenter Server appliance of a small deployment size is sufficient to manage the management components that are required for achieving the design objectives.
• Design implication: If the size of the management environment increases, you might have to increase the vCenter Server appliance size.

HYD-MGMT-VI-VC-003:
• Design decision: Deploy the management domain vCenter Server appliance with the default storage size.
• Design justification: The default storage capacity assigned to a small appliance is sufficient to manage the management appliances that are required for achieving the design objectives.
• Design implication: None at this point.
When determining the vCenter Server size, consider whether a consolidated architecture will be used and the number of workload domains that you expect to deploy.
Review of Learner Objectives
• Recognize management domain sizing considerations
• Describe design considerations for ESXi in the management domain
• Describe design considerations for vCenter in the management domain
• Describe design considerations for vSphere networking in the management domain
• Describe design considerations for vSAN in the management domain
• Recognize design choices for a consolidated design or standard design
Designing and Sizing Workload Domains
Learner Objectives
• Recognize workload domain sizing considerations
• Describe ESXi design considerations for a VI workload domain
• Describe vCenter Server design considerations for a VI workload domain
• Describe vSphere networking design considerations for a VI workload domain
• Describe software-defined networking design considerations for a VI workload domain
• Describe shared storage design considerations for a VI workload domain
• Describe design considerations for workload domains with shared NSX Manager instances
• Describe design considerations for workload domains with dedicated NSX Manager instances
About Workload Domains
A workload domain is a policy-based resource container with specific availability and performance attributes
that combines compute, storage, and networking into a single consumable entity.
The workload domain forms an additional building block of VMware Cloud Foundation and exists in addition to the management domain in a standard design.
The virtual infrastructure layer controls the access to the underlying physical infrastructure layer. It controls
and allocates resources to workloads running in the workload domain.
The security and compliance layer provides role-based access controls and integration with the corporate
identity provider.
Workload Domains: Basic Components
The basic components of a workload domain are as follows:
• ESXi nodes are dedicated to the domain.
• The vCenter Server instance for a workload domain is deployed in the management domain.
• NSX Manager instances are deployed in the management domain to support NSX networking in a
workload domain.
• Principal storage choices include vSAN, NFS, or FC.
• Supplemental storage choices include vSAN, vSphere Virtual Volumes, NFS, FC, and iSCSI.
Workload Domain Design Considerations
When deploying a VI workload domain, consider several design elements:
• ESXi design
• vCenter Server design
• vSphere networking design
• Software-defined networking design
• Shared storage design
ESXi Design for Workload Domains (1)
To provide the foundational component of the VI, each ESXi host consists of the following components:
• Out-of-band management interface
• Network interfaces
• Storage devices
ESXi hosts should be deployed with identical configurations across all cluster members, including storage
and networking configurations:
• An average-size virtual machine has two virtual CPUs with 4 GB of RAM.
• A typically specified 2U ESXi host can run 60 average-size virtual machines (see the sizing sketch below).
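Those two rules of thumb give a first-pass host count. The sketch below treats them as assumptions (they are averages, not guarantees) and adds one host for N+1 failover capacity:

```python
import math

# First-pass host-count estimate for a workload domain (illustrative sketch).
# Assumptions from this slide: an average VM uses 2 vCPUs and 4 GB of RAM,
# and a typical 2U host runs about 60 such VMs.
VMS_PER_HOST = 60

def estimate_hosts(total_vms: int, ha_spare_hosts: int = 1) -> int:
    """Hosts needed to run total_vms average-size VMs, plus N+1 spare capacity."""
    return math.ceil(total_vms / VMS_PER_HOST) + ha_spare_hosts

print(estimate_hosts(200))   # ceil(200/60) + 1 = 5 hosts
print(estimate_hosts(600))   # ceil(600/60) + 1 = 11 hosts
```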
ESXi Design for Workload Domains (2)
When sizing memory for the ESXi hosts in the workload domain, consider the following requirements:
• Requirements for the workloads running in the cluster:
When sizing memory for hosts in a cluster, set the admission control setting to n+1, which reserves the resources of one host for failover or maintenance (see the sketch after this list).
• Number of vSAN disk groups and disks on an ESXi host:
To support the maximum number of disk groups, you must provide at least 32 GB of RAM.
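For percentage-based admission control, the n+1 reservation maps to one host's share of the cluster. A minimal sketch of that conversion (illustrative arithmetic, not a vSphere API call):

```python
# Convert n+1 host redundancy into a percentage-based admission control
# reservation (illustrative arithmetic, not a vSphere API call).
def ha_reserved_percent(hosts: int, failures_to_tolerate: int = 1) -> float:
    """Percentage of cluster CPU/memory to reserve for host failover."""
    return 100.0 * failures_to_tolerate / hosts

print(ha_reserved_percent(4))   # 25.0 -> reserve 25% in a four-node cluster
print(ha_reserved_percent(8))   # 12.5
```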
vCenter Server Design for Workload Domains
The amount of compute and storage resources for the vCenter Server instances that support the workload
domain depends on the scale of the infrastructure required and the number of traditional workloads that will
run in the domain.
A vCenter Server instance is deployed for each workload domain, using Enhanced Linked Mode to connect,
view, and search across all linked vCenter Server systems.
Both redundancy methods protect the vCenter Server appliance:
• Automated protection using vSphere HA
• Manual configuration and manual failover, for example, using a cold standby
vSphere Networking: Distributed Port Group Design Example
Distributed port groups define how a connection is made to a network.
[Figure: a sample ESXi host for the VI workload domain with uplinks nic0 and nic1 connected to the sfo01-w01-cl01-vds distributed switch, carrying these port groups: ESXi Management (VLAN), vMotion (VLAN), Host Overlay (VLAN), vSAN (VLAN), NFS (VLAN), Edge Overlay (VLAN trunk), Uplink 01 (VLAN trunk), and Uplink 02 (VLAN trunk)]
Software-Defined Networking Design for Workload Domains (1)
Workload domains can share existing NSX Manager instances.
[Figure: a shared NSX-T Manager instance supporting multiple workload domains. The management domain runs SDDC Manager, the management domain vCenter Server, the vCenter Server instances for workload domains Blue, Green, and Yellow, and the NSX Manager clusters. Workload domain Blue (clusters 1 and 2, running application and database VMs), workload domain Green (multicluster, running application VMs), and workload domain Yellow (running VDI) all share the same NSX Manager instance, with NSX Edge nodes serving each domain]
Software-Defined Networking Design for Workload Domains (2)
Workload domains can optionally use a set of dedicated NSX Manager instances.
[Figure: dedicated NSX Manager instances per workload domain. The management domain runs SDDC Manager, the management domain vCenter Server, the vCenter Server instances for workload domains Blue and Green, and separate NSX Manager clusters: one dedicated to workload domain Blue (cluster 1, running application VMs) and one dedicated to workload domain Green (multicluster, running application VMs and VDI), each served by its own NSX Edge nodes]
Software-Defined Networking Design for Workload Domains (3)
You must consider VLANs and subnets for software-defined networking in workload domain clusters.
Important considerations include:
• The type of traffic that is required:
– Fault tolerance
– vSAN
– iSCSI
– NFS
– vMotion
– Replication
– Backup
• Whether network availability zones are implemented, for example, when vSAN stretched clusters are used in the design
• Whether different workload domain clusters share the same subnets
Shared Storage Design for Workload Domains (1)
When deploying a workload domain in VMware Cloud Foundation, you can select from several types of
storage, both principal and supplemental.
Before deciding, consider the following guidelines:
• Optimize the storage design to meet the diverse needs of applications, services, administrators, and
users.
• Strategically align business applications and the storage infrastructure to reduce costs, boost
performance, improve availability, provide security, and enhance functionality.
• Provide multiple tiers of storage to match application data access to application requirements.
• Design each tier of storage with different performance, capacity, and availability characteristics.
Not every application requires expensive, high-performance, highly available storage. Designing
different storage tiers reduces cost.
Shared Storage Design for Workload Domains (2)
In the workload domain, you can use different solutions for principal and supplemental storage.
[Figure: workloads in a workload domain running on ESXi hosts, with payloads at different SLAs (SLA 1, SLA 2, SLA 3) on one or more datastores; a sample datastore of 1,500 GB holding a 200 GB VMDK and a 2,048 GB volume; and a software-defined storage stack (policy-based storage management, virtualized data services, hypervisor storage abstraction) over SAN, NAS, or DAS physical disks (SSD, FC15K, FC10K, SATA; third party or VMware vSAN)]
Design Decisions: ESXi Nodes in Workload Domains
HYD-WLD-VI-ESXi-001:
• Design decision: Use vSAN ReadyNodes with vSAN storage for each ESXi host in the shared edge and workload cluster.
• Design justification: VMware Cloud Foundation is fully compatible with vSAN at deployment.
• Design implication: Several vendors and hardware choices are available.

HYD-WLD-VI-ESXi-002:
• Design decision: Ensure that all nodes have a uniform configuration for the shared edge and workload cluster.
• Design justification: A balanced cluster has these advantages: predictable performance even during hardware failures, and minimal impact of resync or rebuild operations on performance.
• Design implication: You must apply vendor sourcing, budgeting, and procurement considerations for uniform server nodes, on a per cluster basis.
When selecting the hosts for workload domains, use vSAN ReadyNodes if vSAN is the principal storage.
Design Decisions: ESXi Node Memory in Workload Domains
HYD-WLD-VI-ESXi-003:
• Design decision: Install each ESXi host in the shared edge and workload cluster with a minimum of 256 GB of RAM.
• Design justification: The medium-sized NSX Edge appliances in this vSphere cluster require a total of 64 GB of RAM. The remaining RAM is available for traditional workloads.
• Design implication: In a four-node cluster, only 768 GB is available for use because of the n+1 vSphere HA setting.
The type of workload domain influences the amount of memory needed. For vSphere with Tanzu, large NSX
Edge nodes are required.
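The same n+1 arithmetic as in the management domain applies here; a minimal sketch using the figures from this decision:

```python
# RAM left for traditional workloads in the shared edge and workload cluster
# (figures from the decision above; illustrative sketch).
usable_gb = (4 - 1) * 256   # four nodes with n+1 vSphere HA -> 768 GB usable
edge_gb = 64                # medium-sized NSX Edge appliances in this cluster
print(usable_gb - edge_gb, "GB available for traditional workloads")  # 704 GB
```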
Design Decisions: vCenter Considerations in Workload Domains
HYD-WLD-VI-VC-002:
• Design decision: Deploy an appliance for the workload domain vCenter Server of a medium deployment size or larger.
• Design justification: A vCenter Server appliance of a medium deployment size is typically sufficient to manage traditional workloads that run in a workload domain.
• Design implication: If the size of the workload domain grows, you might need to increase the vCenter Server appliance size.

HYD-WLD-VI-VC-003:
• Design decision: Deploy the workload domain vCenter Server with the default storage size.
• Design justification: The default storage capacity assigned to a medium-sized appliance is sufficient.
• Design implication: None at this point.
Consider the number of hosts and clusters to be included in the workload domain. Right-sizing vCenter
reduces the likelihood that you need to increase the vCenter appliance size.
Design Decisions: vSphere Networking in Workload Domains (1)
HYD-WLD-VI-NET-003:
• Design decision: Network I/O Control on all distributed switches is enabled by default by VMware Cloud Foundation.
• Design justification: Increases resiliency and performance of the network.
• Design implication: If modified incorrectly, Network I/O Control might affect network performance for critical traffic types.

HYD-WLD-VI-NET-004:
• Design decision: Configure the MTU size of vSphere Distributed Switch to 9,000 for jumbo frames.
• Design justification: Supports the MTU size required by system traffic types. Improves traffic throughput.
• Design implication: When adjusting the MTU packet size, you must also configure the entire network path (VMkernel ports, virtual switches, physical switches, and routers) to support the same MTU packet size.
Consider whether to modify the default Network I/O Control shares and the default MTU in the network.
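A practical corollary of the end-to-end MTU requirement is knowing the largest test payload that should traverse a jumbo-frame path unfragmented. The sketch below is plain arithmetic using standard IPv4 and ICMP header sizes (not a VMware tool):

```python
# Largest ICMP echo payload that fits an MTU without fragmentation
# (illustrative arithmetic using standard IPv4/ICMP header sizes).
IPV4_HEADER = 20   # bytes, no IP options
ICMP_HEADER = 8    # bytes, echo request header

def max_ping_payload(mtu: int) -> int:
    return mtu - IPV4_HEADER - ICMP_HEADER

print(max_ping_payload(9000))  # 8972 -> the size to use with a do-not-fragment ping
print(max_ping_payload(1500))  # 1472 -> the standard-MTU equivalent
```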
Design Decisions: vSphere Networking in Workload Domains (2)
HYD-WLD-VI-NET-006:
• Design decision: Use static port binding for all port groups in the shared edge and workload cluster.
• Design justification: With static binding, a VM connects to the same port on the vSphere Distributed Switch. This decision provides historical data and port-level monitoring.
• Design implication: None at this point.

HYD-WLD-VI-NET-007:
• Design decision: Use the Route Based on Physical NIC Load teaming algorithm for the management port group.
• Design justification: Reduces the complexity of the network design and increases resiliency and performance.
• Design implication: None at this point.

HYD-WLD-VI-NET-008:
• Design decision: Use the Route Based on Physical NIC Load teaming algorithm for the vMotion port group.
• Design justification: Reduces the complexity of the network design and increases resiliency and performance.
• Design implication: None at this point.
Determine the type of load sharing that you must configure for each type of traffic.
Design Decisions: vSAN Datastore in Workload Domains
HYD-WLD-VI-SDS-1303:
• Design decision: Provide the shared edge and workload cluster with a minimum of 24 TB of raw capacity for vSAN.
• Design justification: NSX Edge nodes and sample tenant workloads require at least 8 TB of raw storage (prior to FTT=1) and 16 TB when using the default vSAN storage policy.
• Design implication: If you scale the environment out with more workloads, additional storage is required in the workload domain.

HYD-WLD-VI-SDS-1304:
• Design decision: On all vSAN datastores, ensure that at least 30% of free space is always available.
• Design justification: When vSAN reaches 80% usage, a rebalance task is started that can be resource-intensive.
• Design implication: Increases the amount of available storage needed.
If using vSAN as the principal storage in the workload domain, identify the vSAN policies that each
workload requires.
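The raw-to-usable relationship behind these two decisions can be modeled with simple arithmetic. This sketch (illustrative only; use the vSAN sizer tool for real designs) applies FTT=1 mirroring and the 30% slack-space rule:

```python
# Effective vSAN capacity after FTT=1 mirroring and 30% slack space
# (illustrative arithmetic; use https://vsansizer.vmware.com for real sizing).
def effective_capacity_tb(raw_tb: float, ftt: int = 1, slack: float = 0.30) -> float:
    mirrored = raw_tb / (ftt + 1)      # RAID-1 mirroring: FTT=1 doubles consumption
    return mirrored * (1 - slack)      # keep 30% free to avoid the 80% rebalance

raw = 24.0                             # minimum raw capacity from HYD-WLD-VI-SDS-1303
print(f"{effective_capacity_tb(raw):.1f} TB usable from {raw} TB raw")  # 8.4 TB
```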
Design Decisions: vSAN Cluster in the Workload Domain
HYD-WLD-VI-SDS-1307:
• Design decision: When using a single availability zone, the shared edge and workload cluster requires a minimum of four ESXi hosts to support vSAN.
• Design justification: Having four ESXi hosts addresses the availability and sizing requirements. An ESXi host can be taken offline for maintenance or upgrades without affecting the overall vSAN cluster health.
• Design implication: The availability requirements for the shared edge and workload cluster might cause underutilization of the cluster's ESXi hosts.

HYD-WLD-VI-SDS-1308:
• Design decision: When using two availability zones, the shared edge and workload cluster requires a minimum of eight ESXi hosts (four in each availability zone) to support a stretched vSAN configuration.
• Design justification: Having eight ESXi hosts addresses the availability and sizing requirements. You can take an availability zone offline for maintenance or upgrades without affecting the overall vSAN cluster health.
• Design implication: The capacity of the additional four hosts is not added to the capacity of the cluster. The hosts are used only to provide additional availability.
If using vSAN as the principal storage in the workload domain, determine the level of high availability to
be provided for the workloads in each workload domain cluster.
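To make the stretched-cluster implication concrete, the sketch below (the per-host capacity is a hypothetical figure) shows why doubling the hosts across two availability zones does not double usable capacity:

```python
# Stretched vSAN cluster: the second availability zone mirrors data across
# sites, so it adds availability rather than capacity (illustrative sketch;
# the per-host capacity below is a hypothetical figure).
hosts_per_az = 4
raw_per_host_tb = 6.0

raw_both_azs_tb = 2 * hosts_per_az * raw_per_host_tb   # 48 TB of installed disk
effective_raw_tb = hosts_per_az * raw_per_host_tb      # 24 TB: one AZ's worth

print(f"installed: {raw_both_azs_tb} TB, effective raw: {effective_raw_tb} TB")
```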
Lab 2: Design and Project Overview
Review the customer case study and validate the conceptual, logical, and physical designs:
1. Read the Case Study
2. Review the Conceptual Design
3. Review the Business Requirements, Constraints, Assumptions, and Risks
4. Review the Proposed Solution
5. Justify the Design Decisions
Review of Learner Objectives
• Recognize workload domain sizing considerations
• Describe ESXi design considerations for a VI workload domain
• Describe vCenter Server design considerations for a VI workload domain
• Describe vSphere networking design considerations for a VI workload domain
• Describe software-defined networking design considerations for a VI workload domain
• Describe shared storage design considerations for a VI workload domain
• Describe design considerations for workload domains with shared NSX Manager instances
• Describe design considerations for workload domains with dedicated NSX Manager instances
Key Points
• When deploying the management domain in VMware Cloud Foundation, you make design decisions
related to ESXi, vCenter, vSAN, and networking in vSphere.
• When deploying workload domains, you consider the vCenter size, vSAN availability zones, and the
implication of multiple zones in the design.
• Workload domains default to a set of shared NSX Manager instances, but a separate set of dedicated
NSX Manager instances can be deployed per workload domain as an alternative design option.
Questions?
VMware Cloud Foundation: Plan and Deploy [v4.0 ]

More Related Content

Similar to VCFPD4_M03_Domain Design-AR.pptx

Hitachi whitepaper-protect-ucp-hc-v240-with-vmware-vsphere-hdid
Hitachi whitepaper-protect-ucp-hc-v240-with-vmware-vsphere-hdidHitachi whitepaper-protect-ucp-hc-v240-with-vmware-vsphere-hdid
Hitachi whitepaper-protect-ucp-hc-v240-with-vmware-vsphere-hdid
Chetan Gabhane
 
White paper: IBM FlashSystems in VMware Environments
White paper: IBM FlashSystems in VMware EnvironmentsWhite paper: IBM FlashSystems in VMware Environments
White paper: IBM FlashSystems in VMware Environments
thinkASG
 
ENT208 Transform your Business with VMware Cloud on AWS
ENT208 Transform your Business with VMware Cloud on AWSENT208 Transform your Business with VMware Cloud on AWS
ENT208 Transform your Business with VMware Cloud on AWS
Amazon Web Services
 
VMworld 2013: IBM Solutions for VMware Virtual SAN
VMworld 2013: IBM Solutions for VMware Virtual SAN VMworld 2013: IBM Solutions for VMware Virtual SAN
VMworld 2013: IBM Solutions for VMware Virtual SAN
VMworld
 
VMworld 2014: Virtual SAN Architecture Deep Dive
VMworld 2014: Virtual SAN Architecture Deep DiveVMworld 2014: Virtual SAN Architecture Deep Dive
VMworld 2014: Virtual SAN Architecture Deep Dive
VMworld
 
Varrow VMworld Update and vCHS Lunch and Learn Presentation
Varrow VMworld Update and vCHS Lunch and Learn PresentationVarrow VMworld Update and vCHS Lunch and Learn Presentation
Varrow VMworld Update and vCHS Lunch and Learn Presentation
Varrow Inc.
 
Running DataStax Enterprise in VMware Cloud and Hybrid Environments
Running DataStax Enterprise in VMware Cloud and Hybrid EnvironmentsRunning DataStax Enterprise in VMware Cloud and Hybrid Environments
Running DataStax Enterprise in VMware Cloud and Hybrid Environments
DataStax
 
VMware vCloud Director Technisch Overzicht
VMware vCloud Director Technisch OverzichtVMware vCloud Director Technisch Overzicht
VMware vCloud Director Technisch OverzichtArjan Hendriks
 
VMware HCI solutions - 2020-01-16
VMware HCI solutions - 2020-01-16VMware HCI solutions - 2020-01-16
VMware HCI solutions - 2020-01-16
David Pasek
 
Provisioning server high_availability_considerations2
Provisioning server high_availability_considerations2Provisioning server high_availability_considerations2
Provisioning server high_availability_considerations2
Nuno Alves
 
VMworld 2013: Maximize Database Performance in Your Software-Defined Data Center
VMworld 2013: Maximize Database Performance in Your Software-Defined Data CenterVMworld 2013: Maximize Database Performance in Your Software-Defined Data Center
VMworld 2013: Maximize Database Performance in Your Software-Defined Data Center
VMworld
 
VMworld Europe 204: Technical Deep Dive on EVO: RAIL, the new VMware Hyper-Co...
VMworld Europe 204: Technical Deep Dive on EVO: RAIL, the new VMware Hyper-Co...VMworld Europe 204: Technical Deep Dive on EVO: RAIL, the new VMware Hyper-Co...
VMworld Europe 204: Technical Deep Dive on EVO: RAIL, the new VMware Hyper-Co...
VMworld
 
VSICM8_M02.pptx
VSICM8_M02.pptxVSICM8_M02.pptx
VSICM8_M02.pptx
MazharUddin34
 
Dell EMC VxRAIL Appliance based on VMware SDS
Dell EMC VxRAIL Appliance based on VMware SDSDell EMC VxRAIL Appliance based on VMware SDS
Dell EMC VxRAIL Appliance based on VMware SDS
MarketingArrowECS_CZ
 
VMware Virtualization Basics - Part-1.pptx
VMware Virtualization Basics - Part-1.pptxVMware Virtualization Basics - Part-1.pptx
VMware Virtualization Basics - Part-1.pptx
ssuser4d1c08
 
VMware Virtual SAN Presentation
VMware Virtual SAN PresentationVMware Virtual SAN Presentation
VMware Virtual SAN Presentation
virtualsouthwest
 
Transform Your Business with VMware Cloud on AWS, an Integrated Hybrid Approa...
Transform Your Business with VMware Cloud on AWS, an Integrated Hybrid Approa...Transform Your Business with VMware Cloud on AWS, an Integrated Hybrid Approa...
Transform Your Business with VMware Cloud on AWS, an Integrated Hybrid Approa...
Amazon Web Services
 
VMware Hyper-Converged: EVO:RAIL Overview
VMware Hyper-Converged: EVO:RAIL OverviewVMware Hyper-Converged: EVO:RAIL Overview
VMware Hyper-Converged: EVO:RAIL Overview
Rolta AdvizeX
 
Big App Workloads on Microsoft Azure - TechEd Europe 2014
Big App Workloads on Microsoft Azure - TechEd Europe 2014Big App Workloads on Microsoft Azure - TechEd Europe 2014
Big App Workloads on Microsoft Azure - TechEd Europe 2014
Brian Benz
 

Similar to VCFPD4_M03_Domain Design-AR.pptx (20)

Hitachi whitepaper-protect-ucp-hc-v240-with-vmware-vsphere-hdid
Hitachi whitepaper-protect-ucp-hc-v240-with-vmware-vsphere-hdidHitachi whitepaper-protect-ucp-hc-v240-with-vmware-vsphere-hdid
Hitachi whitepaper-protect-ucp-hc-v240-with-vmware-vsphere-hdid
 
White paper: IBM FlashSystems in VMware Environments
White paper: IBM FlashSystems in VMware EnvironmentsWhite paper: IBM FlashSystems in VMware Environments
White paper: IBM FlashSystems in VMware Environments
 
ENT208 Transform your Business with VMware Cloud on AWS
ENT208 Transform your Business with VMware Cloud on AWSENT208 Transform your Business with VMware Cloud on AWS
ENT208 Transform your Business with VMware Cloud on AWS
 
VMworld 2013: IBM Solutions for VMware Virtual SAN
VMworld 2013: IBM Solutions for VMware Virtual SAN VMworld 2013: IBM Solutions for VMware Virtual SAN
VMworld 2013: IBM Solutions for VMware Virtual SAN
 
ebk EVO-RAIL v104
ebk EVO-RAIL v104ebk EVO-RAIL v104
ebk EVO-RAIL v104
 
VMworld 2014: Virtual SAN Architecture Deep Dive
VMworld 2014: Virtual SAN Architecture Deep DiveVMworld 2014: Virtual SAN Architecture Deep Dive
VMworld 2014: Virtual SAN Architecture Deep Dive
 
Varrow VMworld Update and vCHS Lunch and Learn Presentation
Varrow VMworld Update and vCHS Lunch and Learn PresentationVarrow VMworld Update and vCHS Lunch and Learn Presentation
Varrow VMworld Update and vCHS Lunch and Learn Presentation
 
Running DataStax Enterprise in VMware Cloud and Hybrid Environments
Running DataStax Enterprise in VMware Cloud and Hybrid EnvironmentsRunning DataStax Enterprise in VMware Cloud and Hybrid Environments
Running DataStax Enterprise in VMware Cloud and Hybrid Environments
 
VMware vCloud Director Technisch Overzicht
VMware vCloud Director Technisch OverzichtVMware vCloud Director Technisch Overzicht
VMware vCloud Director Technisch Overzicht
 
VMware HCI solutions - 2020-01-16
VMware HCI solutions - 2020-01-16VMware HCI solutions - 2020-01-16
VMware HCI solutions - 2020-01-16
 
Provisioning server high_availability_considerations2
Provisioning server high_availability_considerations2Provisioning server high_availability_considerations2
Provisioning server high_availability_considerations2
 
VMworld 2013: Maximize Database Performance in Your Software-Defined Data Center
VMworld 2013: Maximize Database Performance in Your Software-Defined Data CenterVMworld 2013: Maximize Database Performance in Your Software-Defined Data Center
VMworld 2013: Maximize Database Performance in Your Software-Defined Data Center
 
VMworld Europe 204: Technical Deep Dive on EVO: RAIL, the new VMware Hyper-Co...
VMworld Europe 204: Technical Deep Dive on EVO: RAIL, the new VMware Hyper-Co...VMworld Europe 204: Technical Deep Dive on EVO: RAIL, the new VMware Hyper-Co...
VMworld Europe 204: Technical Deep Dive on EVO: RAIL, the new VMware Hyper-Co...
 
VSICM8_M02.pptx
VSICM8_M02.pptxVSICM8_M02.pptx
VSICM8_M02.pptx
 
Dell EMC VxRAIL Appliance based on VMware SDS
Dell EMC VxRAIL Appliance based on VMware SDSDell EMC VxRAIL Appliance based on VMware SDS
Dell EMC VxRAIL Appliance based on VMware SDS
 
VMware Virtualization Basics - Part-1.pptx
VMware Virtualization Basics - Part-1.pptxVMware Virtualization Basics - Part-1.pptx
VMware Virtualization Basics - Part-1.pptx
 
VMware Virtual SAN Presentation
VMware Virtual SAN PresentationVMware Virtual SAN Presentation
VMware Virtual SAN Presentation
 
Transform Your Business with VMware Cloud on AWS, an Integrated Hybrid Approa...
Transform Your Business with VMware Cloud on AWS, an Integrated Hybrid Approa...Transform Your Business with VMware Cloud on AWS, an Integrated Hybrid Approa...
Transform Your Business with VMware Cloud on AWS, an Integrated Hybrid Approa...
 
VMware Hyper-Converged: EVO:RAIL Overview
VMware Hyper-Converged: EVO:RAIL OverviewVMware Hyper-Converged: EVO:RAIL Overview
VMware Hyper-Converged: EVO:RAIL Overview
 
Big App Workloads on Microsoft Azure - TechEd Europe 2014
Big App Workloads on Microsoft Azure - TechEd Europe 2014Big App Workloads on Microsoft Azure - TechEd Europe 2014
Big App Workloads on Microsoft Azure - TechEd Europe 2014
 

Recently uploaded

CW RADAR, FMCW RADAR, FMCW ALTIMETER, AND THEIR PARAMETERS
CW RADAR, FMCW RADAR, FMCW ALTIMETER, AND THEIR PARAMETERSCW RADAR, FMCW RADAR, FMCW ALTIMETER, AND THEIR PARAMETERS
CW RADAR, FMCW RADAR, FMCW ALTIMETER, AND THEIR PARAMETERS
veerababupersonal22
 
AP LAB PPT.pdf ap lab ppt no title specific
AP LAB PPT.pdf ap lab ppt no title specificAP LAB PPT.pdf ap lab ppt no title specific
AP LAB PPT.pdf ap lab ppt no title specific
BrazilAccount1
 
NUMERICAL SIMULATIONS OF HEAT AND MASS TRANSFER IN CONDENSING HEAT EXCHANGERS...
NUMERICAL SIMULATIONS OF HEAT AND MASS TRANSFER IN CONDENSING HEAT EXCHANGERS...NUMERICAL SIMULATIONS OF HEAT AND MASS TRANSFER IN CONDENSING HEAT EXCHANGERS...
NUMERICAL SIMULATIONS OF HEAT AND MASS TRANSFER IN CONDENSING HEAT EXCHANGERS...
ssuser7dcef0
 
6th International Conference on Machine Learning & Applications (CMLA 2024)
6th International Conference on Machine Learning & Applications (CMLA 2024)6th International Conference on Machine Learning & Applications (CMLA 2024)
6th International Conference on Machine Learning & Applications (CMLA 2024)
ClaraZara1
 
14 Template Contractual Notice - EOT Application
14 Template Contractual Notice - EOT Application14 Template Contractual Notice - EOT Application
14 Template Contractual Notice - EOT Application
SyedAbiiAzazi1
 
Forklift Classes Overview by Intella Parts
Forklift Classes Overview by Intella PartsForklift Classes Overview by Intella Parts
Forklift Classes Overview by Intella Parts
Intella Parts
 
DfMAy 2024 - key insights and contributions
DfMAy 2024 - key insights and contributionsDfMAy 2024 - key insights and contributions
DfMAy 2024 - key insights and contributions
gestioneergodomus
 
Investor-Presentation-Q1FY2024 investor presentation document.pptx
Investor-Presentation-Q1FY2024 investor presentation document.pptxInvestor-Presentation-Q1FY2024 investor presentation document.pptx
Investor-Presentation-Q1FY2024 investor presentation document.pptx
AmarGB2
 
Unbalanced Three Phase Systems and circuits.pptx
Unbalanced Three Phase Systems and circuits.pptxUnbalanced Three Phase Systems and circuits.pptx
Unbalanced Three Phase Systems and circuits.pptx
ChristineTorrepenida1
 
block diagram and signal flow graph representation
block diagram and signal flow graph representationblock diagram and signal flow graph representation
block diagram and signal flow graph representation
Divya Somashekar
 
MCQ Soil mechanics questions (Soil shear strength).pdf
MCQ Soil mechanics questions (Soil shear strength).pdfMCQ Soil mechanics questions (Soil shear strength).pdf
MCQ Soil mechanics questions (Soil shear strength).pdf
Osamah Alsalih
 
space technology lecture notes on satellite
space technology lecture notes on satellitespace technology lecture notes on satellite
space technology lecture notes on satellite
ongomchris
 
Nuclear Power Economics and Structuring 2024
Nuclear Power Economics and Structuring 2024Nuclear Power Economics and Structuring 2024
Nuclear Power Economics and Structuring 2024
Massimo Talia
 
RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Hori...
RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Hori...RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Hori...
RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Hori...
thanhdowork
 
Final project report on grocery store management system..pdf
Final project report on grocery store management system..pdfFinal project report on grocery store management system..pdf
Final project report on grocery store management system..pdf
Kamal Acharya
 
Basic Industrial Engineering terms for apparel
Basic Industrial Engineering terms for apparelBasic Industrial Engineering terms for apparel
Basic Industrial Engineering terms for apparel
top1002
 
Hierarchical Digital Twin of a Naval Power System
Hierarchical Digital Twin of a Naval Power SystemHierarchical Digital Twin of a Naval Power System
Hierarchical Digital Twin of a Naval Power System
Kerry Sado
 
Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)
Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)
Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)
MdTanvirMahtab2
 
Pile Foundation by Venkatesh Taduvai (Sub Geotechnical Engineering II)-conver...
Pile Foundation by Venkatesh Taduvai (Sub Geotechnical Engineering II)-conver...Pile Foundation by Venkatesh Taduvai (Sub Geotechnical Engineering II)-conver...
Pile Foundation by Venkatesh Taduvai (Sub Geotechnical Engineering II)-conver...
AJAYKUMARPUND1
 
Tutorial for 16S rRNA Gene Analysis with QIIME2.pdf
Tutorial for 16S rRNA Gene Analysis with QIIME2.pdfTutorial for 16S rRNA Gene Analysis with QIIME2.pdf
Tutorial for 16S rRNA Gene Analysis with QIIME2.pdf
aqil azizi
 

Recently uploaded (20)

CW RADAR, FMCW RADAR, FMCW ALTIMETER, AND THEIR PARAMETERS
CW RADAR, FMCW RADAR, FMCW ALTIMETER, AND THEIR PARAMETERSCW RADAR, FMCW RADAR, FMCW ALTIMETER, AND THEIR PARAMETERS
CW RADAR, FMCW RADAR, FMCW ALTIMETER, AND THEIR PARAMETERS
 
AP LAB PPT.pdf ap lab ppt no title specific
AP LAB PPT.pdf ap lab ppt no title specificAP LAB PPT.pdf ap lab ppt no title specific
AP LAB PPT.pdf ap lab ppt no title specific
 
NUMERICAL SIMULATIONS OF HEAT AND MASS TRANSFER IN CONDENSING HEAT EXCHANGERS...
NUMERICAL SIMULATIONS OF HEAT AND MASS TRANSFER IN CONDENSING HEAT EXCHANGERS...NUMERICAL SIMULATIONS OF HEAT AND MASS TRANSFER IN CONDENSING HEAT EXCHANGERS...
NUMERICAL SIMULATIONS OF HEAT AND MASS TRANSFER IN CONDENSING HEAT EXCHANGERS...
 
6th International Conference on Machine Learning & Applications (CMLA 2024)
6th International Conference on Machine Learning & Applications (CMLA 2024)6th International Conference on Machine Learning & Applications (CMLA 2024)
6th International Conference on Machine Learning & Applications (CMLA 2024)
 
14 Template Contractual Notice - EOT Application
14 Template Contractual Notice - EOT Application14 Template Contractual Notice - EOT Application
14 Template Contractual Notice - EOT Application
 
Forklift Classes Overview by Intella Parts
Forklift Classes Overview by Intella PartsForklift Classes Overview by Intella Parts
Forklift Classes Overview by Intella Parts
 
DfMAy 2024 - key insights and contributions
DfMAy 2024 - key insights and contributionsDfMAy 2024 - key insights and contributions
DfMAy 2024 - key insights and contributions
 
Investor-Presentation-Q1FY2024 investor presentation document.pptx
Investor-Presentation-Q1FY2024 investor presentation document.pptxInvestor-Presentation-Q1FY2024 investor presentation document.pptx
Investor-Presentation-Q1FY2024 investor presentation document.pptx
 
Unbalanced Three Phase Systems and circuits.pptx
Unbalanced Three Phase Systems and circuits.pptxUnbalanced Three Phase Systems and circuits.pptx
Unbalanced Three Phase Systems and circuits.pptx
 
block diagram and signal flow graph representation
block diagram and signal flow graph representationblock diagram and signal flow graph representation
block diagram and signal flow graph representation
 
MCQ Soil mechanics questions (Soil shear strength).pdf
MCQ Soil mechanics questions (Soil shear strength).pdfMCQ Soil mechanics questions (Soil shear strength).pdf
MCQ Soil mechanics questions (Soil shear strength).pdf
 
space technology lecture notes on satellite
space technology lecture notes on satellitespace technology lecture notes on satellite
space technology lecture notes on satellite
 
Nuclear Power Economics and Structuring 2024
Nuclear Power Economics and Structuring 2024Nuclear Power Economics and Structuring 2024
Nuclear Power Economics and Structuring 2024
 
RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Hori...
RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Hori...RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Hori...
RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Hori...
 
Final project report on grocery store management system..pdf
Final project report on grocery store management system..pdfFinal project report on grocery store management system..pdf
Final project report on grocery store management system..pdf
 
Basic Industrial Engineering terms for apparel
Basic Industrial Engineering terms for apparelBasic Industrial Engineering terms for apparel
Basic Industrial Engineering terms for apparel
 
Hierarchical Digital Twin of a Naval Power System
Hierarchical Digital Twin of a Naval Power SystemHierarchical Digital Twin of a Naval Power System
Hierarchical Digital Twin of a Naval Power System
 
Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)
Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)
Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)
 
Pile Foundation by Venkatesh Taduvai (Sub Geotechnical Engineering II)-conver...
Pile Foundation by Venkatesh Taduvai (Sub Geotechnical Engineering II)-conver...Pile Foundation by Venkatesh Taduvai (Sub Geotechnical Engineering II)-conver...
Pile Foundation by Venkatesh Taduvai (Sub Geotechnical Engineering II)-conver...
 
Tutorial for 16S rRNA Gene Analysis with QIIME2.pdf
Tutorial for 16S rRNA Gene Analysis with QIIME2.pdfTutorial for 16S rRNA Gene Analysis with QIIME2.pdf
Tutorial for 16S rRNA Gene Analysis with QIIME2.pdf
 

VCFPD4_M03_Domain Design-AR.pptx

  • 1. © 2020 VMware, Inc. Designing Domains in VMware Cloud Foundation Module 3
  • 2. © 2020 VMware, Inc. Importance When designing and planning for a VMware Cloud Foundation deployment, you must consider the sizing of your solution. Creating a scalable, resilient solution requires a management domain that is properly sized. After the management domain is functional, you can deploy the workload domains. During the deployment, you must balance several design considerations. VMware Cloud Foundation: Plan and Deploy [v4.0 ]
  • 3. © 2020 VMware, Inc. Module Lessons 1. Designing and Sizing the Management Domain 2. Designing and Sizing Workload Domains VMware Cloud Foundation: Plan and Deploy [v4.0 ]
  • 4. © 2019 VMware Inc. All rights reserved. Designing and Sizing the Management Domain
  • 5. © 2020 VMware, Inc. Learner Objectives • Recognize management domain sizing considerations • Describe design considerations for ESXi in the management domain • Describe design considerations for vCenter in the management domain • Describe design considerations for vSphere networking in the management domain • Describe design considerations for vSAN in the management domain • Recognize design choices for a consolidated design or standard design VMware Cloud Foundation: Plan and Deploy [v4.0 ]
  • 6. © 2020 VMware, Inc. Management Domain Minimum Hardware Requirements (1) Component Requirements Servers Four vSAN ReadyNodes. For information about compatible vSAN ReadyNodes, see the VMware Compatibility Guide at https://www.vmware.com/resources/compatibility/search.php. CPU per server Dual-socket, eight cores per socket minimum requirement for all-flash systems. Single-socket, eight cores per socket minimum requirement for hybrid (flash and magnetic) systems. NOTE: VMware Cloud Foundation also supports quad-socket servers for use with all- flash or hybrid systems. VMware Cloud Foundation: Plan and Deploy [v4.0 ] Before deploying the management domain, verify that the minimum requirements are met.
  • 7. © 2020 VMware, Inc. Management Domain Minimum Hardware Requirements (2) Component Requirements Memory per server 256 GB Storage per server 16 GB boot device, local media For information about installing ESXi on a supported USB flash drive or SD flash card, see VMware knowledge base article 2004784 at https://kb.vmware.com/s/article/2004784. One NVMe or SSD for the caching tier Class D endurance Class E performance Two SSDs or HDDs for the capacity tier For guidelines about cache sizing, see Designing and Sizing a Virtual SAN Cluster at https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.virtualsan.doc/GUID- 1EB40E66-1FBD-48A6-9426-B33F9255B282.html NICs per server Minimum of two 10 GbE (or higher) NICs (IOVP certified) (Optional) One 1 GbE BMC NIC The new design in VMware Cloud Foundation 4.0 supports four or six NICs per server. VMware Cloud Foundation: Plan and Deploy [v4.0 ]
  • 8. © 2020 VMware, Inc. Resource Use in the Management Domain This resource summary includes the core VM components deployed with VMware Cloud Foundation and the minimum resources that are required: • VMware Cloud Foundation deployment components: – SDDC Manager – vCenter – NSX Manager instances – NSX Edge instances – Optional: Three-node vRealize Log Insight cluster – Optional: Any of the vRealize Suite components • Resource minimums: – 52 vCPUs – 256 GB RAM – 1990GB Storage vSAN sizing tools: https://vsansizer.vmware.com VMware Cloud Foundation: Plan and Deploy [v4.0 ]
  • 9. © 2020 VMware, Inc. Virtual Infrastructure Design: Management Domain Considerations When deploying the management domain, consider the following design elements: • ESXi design • vCenter Server design • vSphere networking design • Software-defined networking design • Shared storage design VMware Cloud Foundation: Plan and Deploy [v4.0 ]
  • 10. © 2020 VMware, Inc. ESXi Design for the Management Domain (1) In the virtual infrastructure (VI) design, you size the compute resources of the ESXi hosts in the management domain according to the following requirements: • System requirements of the management components • Requirements for managing customer workloads based on the design objectives ESXi Host CPU Memory Non-Local Storage (SAN/NAS) Local Storage (vSAN) NIC 1 and NIC 2 Uplinks Out of Band Mgmt Uplink VMware Cloud Foundation: Plan and Deploy [v4.0 ]
  • 11. © 2020 VMware, Inc. ESXi Design for the Management Domain (2) Installing ESXi 7.0 requires a boot device with at least 8 GB for USB or SD devices, and 32 GB for other device types. When considering a virtual machine swap file design for the management domain, you can use the default configuration. The swap file is stored in the same location as the configuration file of the virtual machine. When VMware Cloud Foundation is used, the SDDC Manager performs lifecycle management, allowing additional components to be included as part of the lifecycle management process. VMware Cloud Foundation: Plan and Deploy [v4.0 ]
  • 12. © 2020 VMware, Inc. vCenter Server Design for the Management Domain (1) A vCenter Server deployment consists of one or more vCenter Server instances with embedded Platform Services Controller (PSC) according to the scale of the environment. vCenter Server is deployed as a preconfigured virtual appliance running the Photon operating system. vCenter Server is deployed by VMware Cloud Builder during the bring-up process, and its size is small by default. Unless you plan to implement a consolidated design, no other size considerations are necessary. VMware Cloud Foundation does not support vCenter Server High Availability, so the design must include proper back up of all the management components, including vCenter. VMware Cloud Foundation: Plan and Deploy [v4.0 ]
  • 13. © 2020 VMware, Inc. vCenter Server Design for the Management Domain (2) Appliance Size Management Capacity Default Storage Size Large Storage Size X-Large Storage Size X-Large environment Up to 2,000 hosts or 35,000 VMs 1,805 GB 1,905 GB 3,665 GB Large environment Up to 1,000 hosts or 10,000 VMs 1,065 GB 1,765 GB 3,525 GB Medium environment Up to 400 hosts or 4,000 VMs 700 GB 1,700 GB 3,460 GB Small environment (deployed by default) Up to 100 hosts or 1,000 VMs 480 GB 1,535 GB 3,295 GB Tiny environment Up to 10 hosts or 100 VMs 415 GB 1,490 GB 3,245 GB VMware Cloud Foundation: Plan and Deploy [v4.0 ] The default deployment size of vCenter in the management domain is small.
  • 14. © 2020 VMware, Inc. vSphere Networking Design for the Management Domain To achieve greater security and better performance, network services are segmented from one another by VMware Cloud Foundation. Network I/O Control and traffic shaping are deployed by default to guarantee bandwidth for critical infrastructure VMs and for critical types of traffic, such as vSAN VLAN. VMware Cloud Foundation: Plan and Deploy [v4.0 ]
  • 15. © 2020 VMware, Inc. Shared Storage Design for the Management Domain (1) The clusters in the management domain use vSAN for principal storage. Other types of storage are available for supplemental storage. Management Cluster Virtual Appliance Virtual Appliance Virtual Appliance Virtual Appliance Virtual Appliance Virtual Appliance ESXi Host Datastore(s) Mgmt VMs Backups Templates and Logs Sample Datastore Software-Defined Storage Policy-Based Storage Management Virtualized Data Services Hypervisor Storage Abstraction SAN or NAS or DAS (Third party or VMware vSAN) 1500GB 200GB VMDK Swap Files + Logs SSD FC15K FC10K SATA Physical Disks SSD FC15K FC10K SATA VMware Cloud Foundation: Plan and Deploy [v4.0 ]
  • 16. © 2020 VMware, Inc. Shared Storage Design for the Management Domain (2) In the management domain, you can design your vSAN cluster in a single availability zone or use multiple availability zones. VMware Cloud Foundation: Plan and Deploy [v4.0 ]
  • 17. © 2020 VMware, Inc. Design Decisions: ESXi Nodes in the Management Domain Decision ID Design Decision Design Justification Design Implication HYD-MGMT-VI-ESXi-001 Use vSAN ReadyNodes with vSAN storage for each ESXi host in the management domain. VMware Cloud Foundation is fully compatible with vSAN at deployment. Hardware choices follow the vSAN compatibility guide HYD-MGMT-VI-ESXi-002 Allocate hosts with uniform configuration across the first cluster of the management domain. A balanced cluster has these advantages: • Predictable performance even during hardware failures • Minimal impact of resync or rebuild operations on performance You must apply vendor sourcing, budgeting, and procurement considerations for uniform server nodes, on a per cluster basis. VMware Cloud Foundation: Plan and Deploy [v4.0 ] When selecting hosts for the management domain, use vSAN ReadyNodes, which should be as homogenous as possible for the best predictable performance.
  • 18. © 2020 VMware, Inc. Design Decisions: ESXi Memory in the Management Domain Decision ID Design Decision Design Justification Design Implication HYD-MGMT-VI-ESXi- 003 Install each ESXi host in the first, four-node, cluster of the management domain with a minimum of 256 GB of RAM. The management domain and NSX Edge appliances in this cluster require a total of 453 GB of RAM. You allocate the remaining memory to additional management components that are required for new capabilities, for example, for new VI workload domains. In a four-node cluster, only 768 GB is available for use because the host redundancy that is configured in vSphere HA is N+1. VMware Cloud Foundation: Plan and Deploy [v4.0 ] To resize the amount of RAM in each host, consider the types of workloads that run the management domain.
  • 19. © 2020 VMware, Inc. Design Decisions: vCenter Considerations in the Management Domain Decision ID Design Decision Design Justification Design Implication HYD-MGMT-VI-VC-002 Deploy an appliance for the management domain vCenter Server of a small deployment size or larger. A vCenter Server appliance of a small-deployment size is sufficient to manage the management components that are required for achieving the design objectives. If the size of the management environment increases, you might have to increase the vCenter Server appliance size. HYD-MGMT-VI-VC-003 Deploy an appliance of the management domain vCenter Server with the default storage size. The default storage capacity assigned to a small appliance is sufficient to manage the management appliances that are required for achieving the design objectives. None at this point. VMware Cloud Foundation: Plan and Deploy [v4.0 ] For determining the vCenter size, consider whether a consolidated architecture will be used, or the number of workload domains that you expect to deploy.
  • 20. © 2020 VMware, Inc. Review of Learner Objectives • Recognize management domain sizing considerations • Describe design considerations for ESXi in the management domain • Describe design considerations for vCenter in the management domain • Describe design considerations for vSphere networking in the management domain • Describe design considerations for vSAN in the management domain • Recognize design choices for a consolidated design or standard design VMware Cloud Foundation: Plan and Deploy [v4.0 ]
  • 21. © 2019 VMware Inc. All rights reserved. Designing and Sizing Workload Domains
  • 22. © 2020 VMware, Inc. Learner Objectives • Recognize workload domain sizing considerations • Describe ESXi design considerations for a VI workload domain • Describe vCenter Server design considerations for a VI workload domain • Describe vSphere networking design considerations for a VI workload domain • Describe software-defined networking design considerations for a VI workload domain • Describe shared storage design considerations for a VI workload domain • Describe design considerations for workload domains with shared NSX Manager instances • Describe design considerations for workload domains with dedicated NSX Manager instances VMware Cloud Foundation: Plan and Deploy [v4.0 ]
  • 23. © 2020 VMware, Inc. About Workload Domains A workload domain is a policy-based resource container with specific availability and performance attributes that combines compute, storage, and networking into a single consumable entity. The workload domain forms an additional building block of VMware Cloud Foundation and exists in addition to the management domain in a standard design. The virtual infrastructure layer controls the access to the underlying physical infrastructure layer. It controls and allocates resources to workloads running in the workload domain. The security and compliance layer provides role-based access controls and integration with the corporate identity provider. VMware Cloud Foundation: Plan and Deploy [v4.0 ]
  • 24. © 2020 VMware, Inc. Workload Domains: Basic Components The basic components of a workload domain are as follows: • ESXi nodes are dedicated to the domain. • The vCenter Server instance for a workload domain is deployed in the management domain. • NSX Manager instances are deployed in the management domain to support NSX networking in a workload domain. • Principal storage choices include vSAN, NFS, or FC. • Supplemental storage choices include vSAN, vSphere Virtual Volumes, NFS, FC, and iSCSI. VMware Cloud Foundation: Plan and Deploy [v4.0 ]
  • 25. © 2020 VMware, Inc. Workload Domain Design Considerations When deploying a VI workload domain, consider several design elements: • ESXi design • vCenter Server design • vSphere networking design • Software-defined networking design • Shared storage design VMware Cloud Foundation: Plan and Deploy [v4.0 ]
  • 26. © 2020 VMware, Inc. ESXi Design for Workload Domains (1) To provide the foundational component of the VI, each ESXi host consists of the following components: • Out-of-band management interface • Network interfaces • Storage devices ESXi hosts should be deployed with identical configurations across all cluster members, including storage and networking configurations: • An average-size virtual machine has two virtual CPUs with 4 GB of RAM. • A typical 2U ESXi host can run 60 average-size virtual machines. VMware Cloud Foundation: Plan and Deploy [v4.0 ]
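The averages above lend themselves to a rough host-count estimate. A minimal sketch, where the 500-VM workload count is an illustrative assumption rather than a figure from the course:

```python
import math

# Rough host-count estimate using the slide's averages:
# 2 vCPUs / 4 GB RAM per VM, and ~60 average-size VMs per typical 2U host.
vm_count = 500                   # illustrative workload, not a course figure
vms_per_host = 60                # typical 2U host capacity per the slide
ha_spare_hosts = 1               # keep one host spare for N+1 admission control

hosts_for_load = math.ceil(vm_count / vms_per_host)
total_hosts = hosts_for_load + ha_spare_hosts
print(f"Hosts for {vm_count} VMs: {hosts_for_load} "
      f"(+{ha_spare_hosts} for HA) = {total_hosts}")
```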
  • 27. © 2020 VMware, Inc. ESXi Design for Workload Domains (2) When sizing memory for the ESXi hosts in the workload domain, consider the following requirements: • Requirements for the workloads running in the cluster: When sizing memory for hosts in a cluster, set the admission control setting to N+1, which reserves the resources of one host for failover or maintenance. • Number of vSAN disk groups and disks on an ESXi host: To support the maximum number of disk groups, you must provide at least 32 GB of RAM. VMware Cloud Foundation: Plan and Deploy [v4.0 ]
  • 28. © 2020 VMware, Inc. vCenter Server Design for Workload Domains The amount of compute and storage resources for the vCenter Server instances that support the workload domain depends on the scale of the infrastructure required and the number of traditional workloads that will run in the domain. A vCenter Server instance is deployed for each workload domain, using Enhanced Linked Mode to connect, view, and search across all linked vCenter Server systems.
Redundancy methods that protect the vCenter Server appliance:
- Automated protection using vSphere HA
- Manual configuration and manual failover, for example, using a cold standby
VMware Cloud Foundation: Plan and Deploy [v4.0 ]
  • 29. © 2020 VMware, Inc. vSphere Networking: Distributed Port Group Design Example Distributed port groups define how a connection is made to a network. [Diagram: a sample ESXi host for the VI workload domain with two physical NICs (nic0, nic1) attached to the sfo01-w01-cl01-vds distributed switch, which carries VLAN-backed port groups for ESXi Management, vMotion, Host Overlay, vSAN, and NFS, plus trunked port groups for Edge Overlay, Uplink 01, and Uplink 02.] VMware Cloud Foundation: Plan and Deploy [v4.0 ]
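For illustration only, the port group layout in the diagram can be captured as a simple plan. The VLAN IDs and subnets below are hypothetical placeholders, not values from the design:

```python
# Hypothetical port group plan for the sfo01-w01-cl01-vds switch shown
# above; VLAN IDs and subnets are placeholders, not course values.
port_groups = [
    # (port group,      VLAN,    subnet or trunked range)
    ("ESXi Management", 1611,    "172.16.11.0/24"),
    ("vMotion",         1612,    "172.16.12.0/24"),
    ("vSAN",            1613,    "172.16.13.0/24"),
    ("Host Overlay",    1614,    "172.16.14.0/24"),
    ("NFS",             1615,    "172.16.15.0/24"),
    ("Edge Overlay",    "trunk", "0-4094"),
    ("Uplink 01",       "trunk", "0-4094"),
    ("Uplink 02",       "trunk", "0-4094"),
]
for name, vlan, net in port_groups:
    print(f"{name:>16}: VLAN {vlan!s:>5}  {net}")
```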
  • 30. © 2020 VMware, Inc. Software-Defined Networking Design for Workload Domains (1) Workload domains can share existing NSX Manager instances. [Diagram: the management domain hosts vCenter Server, SDDC Manager, NSX Manager instances, and NSX Edge nodes, together with the vCenter Server instances for workload domains Blue, Green, and Yellow. Workload domain Blue (two ESXi clusters running application and database VMs), the multicluster workload domain Green (application VMs), and workload domain Yellow (VDI) all use a single shared NSX-T Manager instance, with NSX Edge nodes serving each domain.] VMware Cloud Foundation: Plan and Deploy [v4.0 ]
  • 31. © 2020 VMware, Inc. Software-Defined Networking Design for Workload Domains (2) Workload domains can optionally use a set of dedicated NSX Manager instances. [Diagram: workload domain Blue (one ESXi cluster) and the multicluster workload domain Green (application and VDI VMs) each use their own dedicated NSX Manager instance. The management domain hosts vCenter Server, SDDC Manager, its own NSX Manager and NSX Edge nodes, the workload domain vCenter Server instances, and the NSX Manager instances dedicated to Blue and to Green.] VMware Cloud Foundation: Plan and Deploy [v4.0 ]
  • 32. © 2020 VMware, Inc. Software-Defined Networking Design for Workload Domains (3) VMware Cloud Foundation: Plan and Deploy [v4.0 ] You must consider VLANs and subnets for software-defined networking in workload domain clusters. Important considerations include:
• The type of traffic that is required: fault tolerance, vSAN, iSCSI, NFS, vMotion, replication, and backup
• Whether multiple availability zones are implemented (vSAN stretched clusters are used in the design)
• Whether different workload domain clusters share the same subnets
  • 33. © 2020 VMware, Inc. Shared Storage Design for Workload Domains (1) When deploying a workload domain in VMware Cloud Foundation, you can select from several types of storage, both principal and supplemental. Before deciding, consider the following guidelines: • Optimize the storage design to meet the diverse needs of applications, services, administrators, and users. • Strategically align business applications and the storage infrastructure to reduce costs, boost performance, improve availability, provide security, and enhance functionality. • Provide multiple tiers of storage to match application data access to application requirements. • Design each tier of storage with different performance, capacity, and availability characteristics. Not every application requires expensive, high-performance, highly available storage. Designing different storage tiers reduces cost. VMware Cloud Foundation: Plan and Deploy [v4.0 ]
  • 34. © 2020 VMware, Inc. Shared Storage Design for Workload Domains (2) In the workload domain, you can use different solutions for principal and supplemental storage. [Diagram: a software-defined storage stack with policy-based storage management, virtualized data services, and hypervisor storage abstraction layered over SAN, NAS, or DAS devices (third party or VMware vSAN; SSD, FC 15K/10K, and SATA tiers). Workload VMs with different SLAs (SLA 1, SLA 2, SLA 3) draw from a sample 2,048 GB datastore, for example a 1,500 GB payload and a 200 GB VMDK.] VMware Cloud Foundation: Plan and Deploy [v4.0 ]
  • 35. © 2020 VMware, Inc. Design Decisions: ESXi Nodes in Workload Domains
HYD-WLD-VI-ESXi-001 — Design decision: Use vSAN ReadyNodes with vSAN storage for each ESXi host in the shared edge and workload cluster. Justification: VMware Cloud Foundation is fully compatible with vSAN at deployment. Implication: Several vendors and hardware choices are available.
HYD-WLD-VI-ESXi-002 — Design decision: Ensure that all nodes have a uniform configuration for the shared edge and workload cluster. Justification: A balanced cluster offers predictable performance even during hardware failures and minimal impact of resync or rebuild operations on performance. Implication: You must apply vendor sourcing, budgeting, and procurement considerations for uniform server nodes on a per-cluster basis.
VMware Cloud Foundation: Plan and Deploy [v4.0 ]
When selecting the hosts for workload domains, use vSAN ReadyNodes if vSAN is the principal storage.
  • 36. © 2020 VMware, Inc. Design Decisions: ESXi Node Memory in Workload Domains
HYD-WLD-VI-ESXi-003 — Design decision: Install each ESXi host in the shared edge and workload cluster with a minimum of 256 GB of RAM. Justification: The medium-sized NSX Edge appliances in this vSphere cluster require a total of 64 GB of RAM; the remaining RAM is available for traditional workloads. Implication: In a four-node cluster, only 768 GB is available for use because of the N+1 vSphere HA setting.
VMware Cloud Foundation: Plan and Deploy [v4.0 ]
The type of workload domain influences the amount of memory needed. For vSphere with Tanzu, large NSX Edge nodes are required.
  • 37. © 2020 VMware, Inc. Design Decisions: vCenter Considerations in Workload Domains
HYD-WLD-VI-VC-002 — Design decision: Deploy an appliance for the workload domain vCenter Server of a medium deployment size or larger. Justification: A vCenter Server appliance of a medium deployment size is typically sufficient to manage the traditional workloads that run in a workload domain. Implication: If the size of the workload domain grows, you might need to increase the vCenter Server appliance size.
HYD-WLD-VI-VC-003 — Design decision: Deploy the workload domain vCenter Server with the default storage size. Justification: The default storage capacity assigned to a medium-sized appliance is sufficient. Implication: None at this point.
VMware Cloud Foundation: Plan and Deploy [v4.0 ]
Consider the number of hosts and clusters to be included in the workload domain. Right-sizing vCenter Server reduces the likelihood that you need to increase the appliance size later.
  • 38. © 2020 VMware, Inc. Design Decisions: vSphere Networking in Workload Domains (1)
HYD-WLD-VI-NET-003 — Design decision: Network I/O Control on all distributed switches is enabled by default by VMware Cloud Foundation. Justification: Increases resiliency and performance of the network. Implication: If modified incorrectly, Network I/O Control might affect network performance for critical traffic types.
HYD-WLD-VI-NET-004 — Design decision: Configure the MTU size of the vSphere Distributed Switch to 9,000 for jumbo frames. Justification: Supports the MTU size required by system traffic types. Improves traffic throughput. Implication: When adjusting the MTU packet size, you must also configure the entire network path (VMkernel ports, virtual switches, physical switches, and routers) to support the same MTU packet size.
VMware Cloud Foundation: Plan and Deploy [v4.0 ]
Consider whether to modify the default Network I/O Control shares and the default MTU in the network.
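To confirm that every hop honors the larger MTU, a common check is a do-not-fragment ping sized at 8,972 bytes (9,000 minus 28 bytes of IP and ICMP headers). A minimal sketch, assuming a Linux jump host and a hypothetical VMkernel peer address:

```python
import subprocess

# Validate a jumbo-frame path with a do-not-fragment ping (Linux iputils).
# 9000-byte MTU minus 20 bytes IP header and 8 bytes ICMP header = 8972.
PEER = "192.168.10.12"   # hypothetical vMotion VMkernel address

result = subprocess.run(
    ["ping", "-M", "do", "-s", "8972", "-c", "3", PEER],
    capture_output=True, text=True,
)
if result.returncode == 0:
    print(f"Jumbo frames pass end to end to {PEER}")
else:
    print(f"MTU mismatch somewhere on the path to {PEER}:\n{result.stdout}")
```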
  • 39. © 2020 VMware, Inc. Design Decisions: vSphere Networking in Workload Domains (2)
HYD-WLD-VI-NET-006 — Design decision: Use static port binding for all port groups in the shared edge and workload cluster. Justification: With static binding, a VM connects to the same port on the vSphere Distributed Switch, which provides historical data and port-level monitoring. Implication: None at this point.
HYD-WLD-VI-NET-007 — Design decision: Use the Route Based on Physical NIC Load teaming algorithm for the management port group. Justification: Reduces the complexity of the network design and increases resiliency and performance. Implication: None at this point.
HYD-WLD-VI-NET-008 — Design decision: Use the Route Based on Physical NIC Load teaming algorithm for the vMotion port group. Justification: Reduces the complexity of the network design and increases resiliency and performance. Implication: None at this point.
VMware Cloud Foundation: Plan and Deploy [v4.0 ]
Determine the type of load sharing that you must configure for each type of traffic.
  • 40. © 2020 VMware, Inc. Design Decisions: vSAN Datastore in Workload Domains
HYD-WLD-VI-SDS-1303 — Design decision: Provide the shared edge and workload cluster with a minimum of 24 TB of raw capacity for vSAN. Justification: NSX Edge nodes and sample tenant workloads require at least 8 TB of raw storage (prior to FTT=1) and 16 TB when using the default vSAN storage policy. Implication: If you scale the environment out with more workloads, additional storage is required in the workload domain.
HYD-WLD-VI-SDS-1304 — Design decision: On all vSAN datastores, ensure that at least 30% of free space is always available. Justification: When vSAN reaches 80% usage, a rebalance task is started that can be resource-intensive. Implication: Increases the amount of raw storage capacity that you must provision.
VMware Cloud Foundation: Plan and Deploy [v4.0 ]
If using vSAN as the principal storage in the workload domain, identify the vSAN policies that each workload requires.
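The 24 TB minimum can be reproduced from the inputs stated above: 8 TB of data before protection doubles under the default FTT=1 mirroring policy, and the result is then grown so that 30% of the datastore stays free. A short worked calculation:

```python
# Reproduce the raw-capacity decision from the stated inputs.
usable_tb = 8            # NSX Edge nodes plus sample tenant workloads
ftt1_mirror_factor = 2   # default vSAN policy: FTT=1 with RAID-1 mirroring
slack_fraction = 0.30    # keep 30% free to stay under the 80% rebalance threshold

raw_needed_tb = usable_tb * ftt1_mirror_factor / (1 - slack_fraction)
print(f"Raw capacity needed: {raw_needed_tb:.1f} TB")   # ~22.9 TB, rounded up to 24 TB
```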
  • 41. © 2020 VMware, Inc. Design Decisions: vSAN Cluster in the Workload Domain
HYD-WLD-VI-SDS-1307 — Design decision: When using a single availability zone, the shared edge and workload cluster requires a minimum of four ESXi hosts to support vSAN. Justification: Having four ESXi hosts addresses the availability and sizing requirements; an ESXi host can be taken offline for maintenance or upgrades without affecting the overall vSAN cluster health. Implication: The availability requirements for the shared edge and workload cluster might cause underutilization of the cluster's ESXi hosts.
HYD-WLD-VI-SDS-1308 — Design decision: When using two availability zones, the shared edge and workload cluster requires a minimum of eight ESXi hosts (four in each availability zone) to support a stretched vSAN configuration. Justification: Having eight ESXi hosts addresses the availability and sizing requirements; you can take an availability zone offline for maintenance or upgrades without affecting the overall vSAN cluster health. Implication: The capacity of the additional four hosts is not added to the capacity of the cluster; the hosts are used only to provide additional availability.
VMware Cloud Foundation: Plan and Deploy [v4.0 ]
If using vSAN as the principal storage in the workload domain, determine the level of high availability to be provided for the workloads in each workload domain cluster.
  • 42. © 2020 VMware, Inc. Lab 2: Design and Project Overview Review the customer case study and validate the conceptual, logical, and physical designs: 1. Read the Case Study 2. Review the Conceptual Design 3. Review the Business Requirements, Constraints, Assumptions, and Risks 4. Review the Proposed Solution 5. Justify the Design Decisions VMware Cloud Foundation: Plan and Deploy [v4.0 ]
  • 43. © 2020 VMware, Inc. Review of Learner Objectives • Recognize workload domain sizing considerations • Describe ESXi design considerations for a VI workload domain • Describe vCenter Server design considerations for a VI workload domain • Describe vSphere networking design considerations for a VI workload domain • Describe software-defined networking design considerations for a VI workload domain • Describe shared storage design considerations for a VI workload domain • Describe design considerations for workload domains with shared NSX Manager instances • Describe design considerations for workload domains with dedicated NSX Manager instances VMware Cloud Foundation: Plan and Deploy [v4.0 ]
  • 44. © 2020 VMware, Inc. Key Points • When deploying the management domain in VMware Cloud Foundation, you make design decisions related to ESXi, vCenter, vSAN, and networking in vSphere. • When deploying workload domains, you consider the vCenter size, vSAN availability zones, and the implication of multiple zones in the design. • Workload domains default to a set of shared NSX Manager instances, but a separate set of dedicated NSX Manager instances can be deployed per workload domain as an alternative design option. Questions? VMware Cloud Foundation: Plan and Deploy [v4.0 ]

Editor's Notes

  1. The Planning and Preparation Workbook in VMware Cloud Foundation documentation provides the requirements for the management domain.
  2. These resources can be scaled as the deployment increases in size. The number of vCenter Server and NSX instances increases as the number of domains increases.
  3. The design considerations for the management domain include these elements:
- ESXi design for the management domain: The compute layer of the virtual infrastructure (VI) layer in the SDDC is implemented by ESXi, a bare-metal hypervisor that you install directly onto your physical server. With direct access and control of underlying resources, ESXi logically partitions hardware to consolidate applications and cut costs.
- vCenter Server design for the management domain: For this design, you determine the number of vCenter Server instances in the management domain, their size, networking configuration, cluster layout, redundancy, and security configuration.
- vSphere networking design for the management domain: The network design prevents unauthorized access and provides timely access to business data. This design uses vSphere Distributed Switch and NSX-T Data Center for virtual networking.
- Software-defined networking design for the management domain: In this design, you use NSX-T Data Center to connect the management workloads by way of virtual network segments and routing. You create constructs for region-specific and cross-region solutions. These constructs isolate the solutions from the rest of the network, providing routing to the data center and load balancing.
- Shared storage design for the management domain: The shared storage design includes vSAN and NFS storage for the SDDC management components.
  4. For the logical design for ESXi, determine the high-level integration of ESXi hosts with other components. To provide the resources required to run the management components according to the design objectives, each ESXi host consists of the following elements: an out-of-band management interface, network interfaces, and storage devices.
The configuration and assembly process for each system should be standardized, with all components installed in the same way on each ESXi host. Because standardization of the physical configuration of the ESXi hosts removes variability, the infrastructure is easily managed and supported. ESXi hosts are deployed with identical configurations across all cluster members, including storage and networking configurations. For example, consistent PCIe card slot placement, especially for network controllers, is essential for accurate alignment of physical to virtual I/O resources. By using identical configurations, an even balance of virtual machine storage components is established across storage and compute resources.
In this design, the primary storage system for the management domain is vSAN. Consequently, the sizing of physical servers running ESXi requires special considerations:
- The number of workload domains to be deployed in the future
- Whether each of those domains has dedicated NSX Manager instances
- The deployment of other VMware solutions such as vRealize Automation, vRealize Operations Manager, and vRealize Network Insight
The solution requires proper sizing of the management domain in VMware Cloud Foundation. When implementing a consolidated design, consider other workloads and virtual machines that might be running alongside the management infrastructure virtual machines. An average-size virtual machine has two virtual CPUs and 4 GB of RAM. A typical 2U ESXi host can run 60 average-size virtual machines. VMware Cloud Foundation 4.0 uses vSAN ReadyNode for the physical servers running ESXi in the management domain. Use the vSAN sizing tool to size the management nodes accordingly.
NOTE: Consider management domain resource needs for additional options such as vRealize Automation, vRealize Operations Manager, and vRealize Network Insight, and consider management resources for additional workload domains.
  5. Depending on the boot media used, the minimum capacity for each partition varies. The only constant is the system boot partition. If the boot media is larger than 128 GB, a VMFS datastore is created automatically and is used for storing virtual machine data. For storage media such as USB or SD devices, the ESX-OSData partition is created on a high-endurance storage device such as an HDD or SSD. When a secondary high-endurance storage device is not available, ESX-OSData is created on USB or SD devices, but this partition is used only to store ROM-data. RAM-data is stored on a RAM disk. For more information about ESXi hardware configurations, see ESXi Hardware Requirements at https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.esxi.install.doc/GUID-DEB8086A-306B-4239-BF76-E354679202FC.html.
  6. One vCenter Server instance is allocated to the management domain to support management components. Determine the number of vCenter Server instances for the management domain and the amount of compute and storage resources required based on the scale of the environment, the plans for deploying virtual infrastructure workload domains, and the requirements for isolating management workloads from tenant workloads. vCenter Server is leveraged for some advanced vSphere features such as vSphere Distributed Resource Scheduler (vSphere DRS), vSphere vMotion, and vSphere Storage vMotion. By using the Enhanced Linked Mode of vCenter Server, you can log in to every vCenter Server instance that is joined to the same vCenter Single Sign-On domain and access their inventories. You can connect as many as 15 vCenter Server instances to a single vCenter Single Sign-On domain.
  7. In the management domain, the default deployment size of vCenter is small. This size can be modified in the Deployment Parameter Workbook. Consider changing the size of vCenter when scaling up a consolidated design with multiple clusters running workloads in the management domain. When more capacity is needed, scaling the consolidated design is an option, but a much better design option for a larger scale is to use workload domains. After you reach 8 or 10 nodes in the consolidated design, consider migrating to a standard design. If you do migrate to a standard design, the default small vCenter size should be enough for most designs.
  8. The separation of different traffic types is required to reduce contention and latency and to configure access security. This separation is created by default by VMware Cloud Foundation. When a migration occurs using vSphere vMotion, the contents of the memory of the guest operating system are transmitted over the network. vSphere vMotion is on a separate network, using a dedicated vSphere vMotion VLAN. High latency on any network can negatively affect performance. Some components are more sensitive to high latency than others. For example, reducing latency is important on the IP storage network and on the vSphere Fault Tolerance logging network. According to the application or service, high latency on specific virtual machine networks can also negatively affect performance.
  9. You might want to consider other types of storage, such as NFS, as the supplemental storage for the management domain. The supplemental storage can be used for backups. Depending on your design and implementation, you can keep daily backups of the management components by leveraging the supplemental storage. The management domain supports many forms of supplemental storage (NFS, FC, iSCSI, vSphere Virtual Volumes). You must manually handle lifecycle management (LCM) of the supplemental storage hardware and manage any VMware Compatibility Guide requirements, for example, firmware or VIBs for an FC adapter.
  10. You can start deploying the SDDC in a single availability zone configuration and then extend the environment with a second availability zone. One advantage in having multiple availability zones is that the management components of the SDDC can run in availability zone 1, and, if an outage occurs in zone 1, the components can be recovered by vSphere HA in availability zone 2.
Extending the management cluster to a vSAN stretched cluster provides the following advantages:
- Increased availability with minimal downtime and data loss. (RPO is zero, and the RTO is the time it takes vSphere HA to start the virtual machines in availability zone 2.)
- All features and policies with vSAN Storage Policy-Based Management (SPBM) can be used.
- Because the supported intersite latency is 5 ms, the design works with data centers located within a 50-mile radius, such as a metro area, providing business continuity.
Using a vSAN stretched cluster for the management components has the following disadvantages:
- Increased footprint
- Symmetrical host configuration is required in the two availability zones
- Additional setup for the stretched cluster: manual configuration, with manual guidance, on VMware Cloud Foundation 4.0.1
  11. The software-defined data center (SDDC) detailed design includes numbered design decisions (Decision IDs), and the justification and implications of each decision.
  12. Example of how to design the domain with regard to RAM.
  13. When deciding on the vCenter Server size, consider whether you want to deploy a consolidated architecture and plan to scale the design, or whether, for either business reasons or technical requirements, you plan to maintain the consolidated design with more than 100 hosts. In the consolidated design in VMware Cloud Foundation, keep your clusters small, and when you get to 8 or 10 nodes, consider a migration to the standard design and a more robust implementation with workload domains.
Jerry's note: 8 to 10 nodes total (management plus two to three small clusters) is the maximum that should be considered for the consolidated architecture.
Q: Stephen Costello asked about the use of the management domain and whether it should be kept small and tidy. Jerry: VCF is used as a service-based product. Management should be performed in SDDC Manager, not vCenter (the use case covers VM creation and deletion). SDDC Manager does not reach out or get information about VMs deployed from other management tools such as vCenter. For example, beyond 10 hosts in the management domain, the use of workload domains is recommended (required). This can lead into a federation discussion if providing services to various (different) groups or customers. Stephen was concerned about the number of vCenter Server instances in the management domain, one for each workload domain.
Q: Jonathan Ebenezer: If a Cloud Builder deployment fails midway, do we troubleshoot the process or restart? Jerry: You can go in and determine the failure. This is most likely an issue caused in the JSON spreadsheet. If that is the case, correct the input file, then start from the beginning. For timeout issues, use the reset task. Ashley Huynh: The most common issue seen is a network validation error; incorrect IP ranges entered in the spreadsheet (JSON) before import are a cause. As a tip, take a snapshot of the Cloud Builder appliance so that if the appliance fails, you don't have to start from scratch.
  14. The workload domain consists of components from the physical infrastructure, virtual infrastructure, and the security and compliance layers.
  15. When deploying a VI workload domain, you must address several design considerations:
- ESXi detailed design for a VI workload domain: The compute layer of the VI is implemented by ESXi, a bare-metal hypervisor that installs directly onto your physical server. With direct access and control of underlying resources, ESXi logically partitions hardware to consolidate applications and cut costs.
- vCenter Server design for a VI workload domain: For this design, you determine the number of vCenter Server instances in the workload domain, their size, networking configuration, cluster layout, redundancy, and security configuration.
- vSphere networking design for a VI workload domain: The network design prevents unauthorized access and provides timely access to business data. This design uses vSphere Distributed Switch and NSX-T Data Center for virtual networking.
- Software-defined networking design for a VI workload domain: You use NSX-T Data Center to provide network connectivity for workloads, implementing virtual network segments and routing.
- Shared storage design for a VI workload domain: The shared storage design includes the design for VMware vSAN storage and other options for principal storage and supplemental storage.
vSAN sizing (Jerry): Create the management domain with a small vCenter Server, then scale up your vCenter Server, then increase the number of hosts.
  16. When the design uses vSAN ReadyNode as the fundamental building block for the primary storage system in the workload domain:
- Select all ESXi host hardware, including CPUs, according to the VMware Compatibility Guide and aligned to the ESXi version specified by this design.
- The sizing of physical servers running ESXi requires special considerations when you use vSAN storage. For information about the models of physical servers that are vSAN ready, see the VMware Compatibility Guide at https://www.vmware.com/resources/compatibility/search.php.
- If you are not using vSAN ReadyNodes, your CPU must be listed in the VMware Compatibility Guide under CPU Series and aligned to the ESXi version specified by this design.
NOTE: Do not mix and match; use either all-flash or all-hybrid throughout. ESXi configurations should be identical, or a best effort should be made to match the hardware type, CPU, RAM, and storage type and size (sizing and technology matching throughout).
  17. For more information about disk groups, including design and sizing guidance, see Administering VMware vSAN at https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vsan.doc/GUID-AEF15062-1ED9-4E2B-BA12-A5CE0932B976.html.
  18. This ties to workload types and requirements, and to scalability and growth estimates. Design for overhead (this includes Tanzu).
  19. Some networks, such as vMotion and vSAN, are created by VMware Cloud Foundation, whereas others, such as NFS, are optional. The NFS network is used when NFS is the principal storage in the workload domain. You must create a network pool, specifying an NFS VLAN ID and subnet/IPs. Separating different types of traffic is required to reduce contention and latency, and for access security. High latency on any network can negatively affect performance. Some components are more sensitive to high latency than others. For example, reducing latency is important on the IP storage and vSphere Fault Tolerance logging networks because latency on these networks can negatively affect the performance of multiple virtual machines. According to the application or service, high latency on specific virtual machine networks can also negatively affect performance. Use information gathered from the current-state analysis and from interviews with key stakeholders and SMEs to determine which workloads and networks are especially sensitive to high latency. IMPORTANT: You are required to separate your network types; latency can become problematic.
  20. All workload domains can use NSX Edge in the management domain. For scalability, SDDC Manager can be used to deploy an NSX Edge cluster in workload domain clusters. NSX Manager provides the user interface and the RESTful API for creating, configuring, and monitoring NSX components, such as virtual network segments, and Tier-0 and Tier-1 gateways. NSX Manager implements the management and control plane for the NSX-T Data Center infrastructure. NSX Manager is the centralized network management component of NSX-T Data Center, providing an aggregated view of all components in the NSX-T Data Center system. NOTE: You cannot share the management domain NSX Manager with workload domains; workload domains have their own NSX Edge instances (see my notes).
  21. The management domain has a dedicated NSX instance and an NSX Edge cluster. The NSX Edge cluster can be deployed at day X. For different technical requirements or business reasons, you might need dedicated NSX Manager instances that are deployed for an individual workload domain. For example, a university might keep faculty and students completely separated using dedicated domains and dedicated NSX Manager instances. Another example is a hospital with different business units that require separate management and billing for their consumption, like a multitenant design with a cloud provider. VMware Cloud Foundation provides the data center design with numerous planning and deployment options to meet various business requirements.
  22. You must consider the segments and different types of traffic for the workload domain. In addition to the regular workload traffic, such as from VMs, other traffic might require their own separate networks, VLANs, and subnets. If you use multiple workload domain clusters, determine whether they share the same VLANs and subnets. Dedicated networks are needed for the different network types. Adhere to the requirements of each traffic type.
  23. For the workload domain deployment in VMware Cloud Foundation, you can select principal and supplemental storage. For principal storage, the choices are vSAN, FC, or NFS. For supplemental storage, you can select FC, NFS, or iSCSI. You might have traditional FC arrays that were purchased 2 or 3 years ago, and the full ROI is not realized yet. VMware Cloud Foundation gives you the flexibility to use those arrays as principal storage for the workload domain. If FC is phased out in favor of all vSAN, the workloads must be migrated to a new workload domain with vSAN set as the principal storage type (as mentioned earlier).
  24. In the workload domain, you can choose between different storage solutions. You can use vSAN, NFS, and FC as principal storage, and you can use NFS, FC, and iSCSI as supplemental storage. You can design multiple workload domains according to business needs and use different storage solutions for each domain. For example, a workload domain that is dedicated to Horizon can work with vSAN. When you create a workload domain, only one principal storage option is used for the defined cluster. After the workload domain is deployed, one or more supplemental storage options can be (manually) added. If a second cluster is created in a VI workload domain, a different principal storage option can be selected, if necessary. NOTE: Only one principal storage solution is used per cluster; multiple supplemental storage solutions can be applied.
  25. The best way to size your vSAN nodes for the workload domain is to first perform an application profile assessment. You must know which workloads are running in the workload domain, or at least have an idea of what they are projected to be. After completing the application profile assessment, you can use the new vSAN sizer at https://vsansizer.vmware.com. Within the tool, you can perform the following tasks:
- Select hybrid or all-flash
- Define one or more workload profiles using templates (VDI, Databases, or General Purpose)
- Define the server configuration
- Generate an export report
Consider these guiding questions when formulating your design (a small estimator follows this note):
- Will the largest VM/workload affect the number of sockets that are required in your hosts?
- How much memory and CPU do your workloads require?
- How many host failures can be tolerated?
- How much vSAN disk space is required? Consider all storage policy settings, VM availability requirements, and so on. Identify the features that you want to use.
- How much disk space do you need per server? Consider the cache and capacity requirements, and the performance requirements.
- Does the number of hosts affect the available Failures to Tolerate (FTT) and Failure Tolerance Method (FTM) policies?
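To make these guiding questions concrete, here is a small, hypothetical capacity estimator. It is not the vSAN sizer itself; it assumes a RAID-1 mirroring failure tolerance method, where raw capacity = usable × (FTT + 1), grown for 30% slack space. All profile values are illustrative:

```python
# Hypothetical per-profile capacity estimator; the real tool is
# https://vsansizer.vmware.com. Assumes RAID-1 mirroring as the FTM.
profiles = {
    # name: (usable capacity in TB, failures to tolerate) -- illustrative
    "VDI": (6, 1),
    "Databases": (10, 2),
    "General Purpose": (4, 1),
}
SLACK = 0.30  # keep 30% of the datastore free

total = 0.0
for name, (usable_tb, ftt) in profiles.items():
    raw = usable_tb * (ftt + 1) / (1 - SLACK)
    total += raw
    print(f"{name:>16}: FTT={ftt} -> {raw:.1f} TB raw")
print(f"{'Total':>16}: {total:.1f} TB raw")
```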
  26. Example of sizing with Tanzu; note the requirement for large NSX Edge nodes.
  27. Examples of static versus dynamic port group configuration.
  28. Slack space availability is a key point for this slide: vSAN must be able to support the needed/desired number of failures or maintenance periods.