Platform Administration Guide
NOS 3.5
24-Sep-2013
Notice
Copyright
Copyright 2013 Nutanix, Inc.
Nutanix, Inc.
1740 Technology Drive, Suite 400
San Jose, CA 95110
All rights reserved. This product is protected by U.S. and international copyright and intellectual property
laws. Nutanix is a trademark of Nutanix, Inc. in the United States and/or other jurisdictions. All other marks
and names mentioned herein may be trademarks of their respective companies.
Conventions
Convention            Description
variable_value        The action depends on a value that is unique to your environment.
ncli> command         The commands are executed in the Nutanix nCLI.
user@host$ command    The commands are executed as a non-privileged user (such as nutanix) in the system shell.
root@host# command    The commands are executed as the root user in the hypervisor host (vSphere or KVM) shell.
output                The information is displayed as output from a command or in a log file.
Default Cluster Credentials
Interface                         Target                    Username    Password
Nutanix web console               Nutanix Controller VM     admin       admin
vSphere client                    ESXi host                 root        nutanix/4u
SSH client or console             ESXi host                 root        nutanix/4u
SSH client or console             KVM host                  root        nutanix/4u
SSH client                        Nutanix Controller VM     nutanix     nutanix/4u
IPMI web interface or ipmitool    Nutanix node              ADMIN       ADMIN
IPMI web interface or ipmitool    Nutanix node (NX-3000)    admin       admin
Version
Last modified: September 24, 2013 (2013-09-24-13:28 GMT-7)
Contents
Part I: NOS................................................................................................... 6
1: Cluster Management....................................................................... 7
To Start a Nutanix Cluster....................................................................................................... 7
To Stop a Cluster.....................................................................................................................7
To Destroy a Cluster................................................................................................................ 8
To Create Clusters from a Multiblock Cluster..........................................................................9
Disaster Protection................................................................................................................. 12
2: Password Management.................................................................15
To Change the Controller VM Password............................................................................... 15
To Change the ESXi Host Password.....................................................................................16
To Change the KVM Host Password.....................................................................................17
To Change the IPMI Password..............................................................................................18
3: Alerts...............................................................................................19
Cluster.....................................................................................................................................19
Controller VM..........................................................................................................................22
Guest VM................................................................................................................................24
Hardware.................................................................................................................................26
Storage....................................................................................................................................30
4: IP Address Configuration............................................................. 33
To Reconfigure the Cluster.................................................................................................... 33
To Prepare to Reconfigure the Cluster..................................................................................34
Remote Console IP Address Configuration........................................................................... 35
To Configure Host Networking............................................................................................... 38
To Configure Host Networking (KVM)....................................................................................39
To Update the ESXi Host Password in vCenter.................................................................... 40
To Change the Controller VM IP Addresses..........................................................................40
To Change a Controller VM IP Address (manual)................................................................. 41
To Complete Cluster Reconfiguration.................................................................................... 42
5: Field Installation............................................................................ 44
NOS Installer Reference.........................................................................................................44
To Image a Node................................................................................................................... 44
Part II: vSphere..........................................................................................47
6: vCenter Configuration...................................................................48
To Use an Existing vCenter Server....................................................................................... 48
7: VM Management............................................................................ 55
Migrating a VM to Another Cluster........................................................................................ 55
vStorage APIs for Array Integration....................................................................................... 57
Migrating vDisks to NFS.........................................................................................................58
8: Node Management.........................................................................62
To Shut Down a Node in a Cluster....................................................................................... 62
To Start a Node in a Cluster..................................................................................................63
To Restart a Node..................................................................................................................64
To Patch ESXi Hosts in a Cluster..........................................................................................65
Removing a Node...................................................................................................................65
9: Storage Replication Adapter for Site Recovery Manager.......... 68
To Configure the Nutanix Cluster for SRA Replication.......................................................... 69
To Configure SRA Replication on the SRM Servers............................................................. 70
Part III: KVM............................................................................................... 72
10: Kernel-based Virtual Machine (KVM) Architecture...................73
Storage Overview................................................................................................................... 73
VM Commands....................................................................................................................... 74
11: VM Management Commands......................................................75
virt_attach_disk.py.................................................................................................................. 76
virt_check_disks.py................................................................................................................. 77
virt_clone.py............................................................................................................................ 79
virt_detach_disk.py................................................................................................................. 80
virt_eject_cdrom.py................................................................................................................. 81
virt_insert_cdrom.py................................................................................................................82
virt_install.py........................................................................................................................... 83
virt_kill.py................................................................................................................................ 85
virt_kill_snapshot.py................................................................................................................86
virt_list_disks.py...................................................................................................................... 86
virt_migrate.py.........................................................................................................................87
virt_multiclone.py.................................................................................................................... 88
virt_snapshot.py...................................................................................................................... 89
nfs_ls.py.................................................................................................................................. 90
Part IV: Hardware...................................................................................... 93
12: Node Order...................................................................................94
13: System Specifications................................................................ 98
NX-1000 Series System Specifications..................................................................................98
NX-2000 System Specifications........................................................................................... 100
NX-3000 System Specifications........................................................................................... 103
NX-3050 System Specifications........................................................................................... 105
NX-6000 Series System Specifications................................................................................108
Part I: NOS
1: Cluster Management
Although each host in a Nutanix cluster runs a hypervisor independent of other hosts in the cluster, some
operations affect the entire cluster.
To Start a Nutanix Cluster
1. Log on to any Controller VM in the cluster with SSH.
2. Start the Nutanix cluster.
nutanix@cvm$ cluster start
If the cluster starts properly, output similar to the following is displayed for each node in the cluster:
CVM: 172.16.8.167 Up, ZeusLeader
Zeus UP [3148, 3161, 3162, 3163, 3170, 3180]
Scavenger UP [3333, 3345, 3346, 11997]
ConnectionSplicer UP [3379, 3392]
Hyperint UP [3394, 3407, 3408, 3429, 3440, 3447]
Medusa UP [3488, 3501, 3502, 3523, 3569]
DynamicRingChanger UP [4592, 4609, 4610, 4640]
Pithos UP [4613, 4625, 4626, 4678]
Stargate UP [4628, 4647, 4648, 4709]
Cerebro UP [4890, 4903, 4904, 4979]
Chronos UP [4906, 4918, 4919, 4968]
Curator UP [4922, 4934, 4935, 5064]
Prism UP [4939, 4951, 4952, 4978]
AlertManager UP [4954, 4966, 4967, 5022]
StatsAggregator UP [5017, 5039, 5040, 5091]
SysStatCollector UP [5046, 5061, 5062, 5098]
What to do next. After you have verified that the cluster is running, you can start guest VMs.
To Stop a Cluster
Before you begin. Shut down all guest virtual machines, including vCenter if it is running on the cluster.
Do not shut down Nutanix Controller VMs.
Note: This procedure stops all services provided by guest virtual machines, the Nutanix cluster,
and the hypervisor host.
1. Log on to a running Controller VM in the cluster with SSH.
2. Stop the Nutanix cluster.
nutanix@cvm$ cluster stop
Wait to proceed until output similar to the following is displayed for every Controller VM in the cluster.
CVM: 172.16.8.191 Up, ZeusLeader
Zeus UP [3167, 3180, 3181, 3182, 3191, 3201]
Scavenger UP [3334, 3351, 3352, 3353]
ConnectionSplicer DOWN []
Hyperint DOWN []
Medusa DOWN []
DynamicRingChanger DOWN []
Pithos DOWN []
Stargate DOWN []
Cerebro DOWN []
Chronos DOWN []
Curator DOWN []
Prism DOWN []
AlertManager DOWN []
StatsAggregator DOWN []
SysStatCollector DOWN []
To Destroy a Cluster
Destroying a cluster resets all nodes in the cluster to the factory configuration. All cluster configuration and
guest VM data is unrecoverable after destroying the cluster.
1. Log on to any Controller VM in the cluster with SSH.
2. Stop the Nutanix cluster.
nutanix@cvm$ cluster stop
Wait to proceed until output similar to the following is displayed for every Controller VM in the cluster.
CVM: 172.16.8.191 Up, ZeusLeader
Zeus UP [3167, 3180, 3181, 3182, 3191, 3201]
Scavenger UP [3334, 3351, 3352, 3353]
ConnectionSplicer DOWN []
Hyperint DOWN []
Medusa DOWN []
DynamicRingChanger DOWN []
Pithos DOWN []
Stargate DOWN []
Cerebro DOWN []
Chronos DOWN []
Curator DOWN []
Prism DOWN []
AlertManager DOWN []
StatsAggregator DOWN []
SysStatCollector DOWN []
3. If the nodes in the cluster have Intel PCIe-SSD drives, ensure they are mapped properly.
Check if the node has an Intel PCIe-SSD drive.
nutanix@cvm$ lsscsi | grep 'SSD 910'
→ If no items are listed, the node does not have an Intel PCIe-SSD drive and you can proceed to the
next step.
→ If two items are listed, the node does have an Intel PCIe-SSD drive.
If the node has an Intel PCIe-SSD drive, check if it is mapped correctly.
nutanix@cvm$ cat /proc/partitions | grep dm
→ If two items are listed, the drive is mapped correctly and you can proceed.
→ If no items are listed, the drive is not mapped correctly. Start then stop the cluster before proceeding.
Perform this check on every Controller VM in the cluster.
4. Destroy the cluster.
Caution: Performing this operation deletes all cluster and guest VM data in the cluster.
nutanix@cvm$ cluster -s cvm_ip_addr destroy
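For example, if you are logged on to the Controller VM with IP address 172.16.8.191 (a hypothetical address; substitute the address of a Controller VM in your cluster), the command would be:
nutanix@cvm$ cluster -s 172.16.8.191 destroy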
To Create Clusters from a Multiblock Cluster
The minimum size for a cluster is three nodes.
1. Remove nodes from the existing cluster.
→ If you want to preserve data on the existing cluster, remove nodes by following To Remove a Node
from a Cluster on page 65.
→ If you want multiple new clusters, destroy the existing cluster by following To Destroy a Cluster on
page 8.
2. Create one or more new clusters by following To Configure the Cluster on page 10.
Product Mixing Restrictions
While a Nutanix cluster can include different products, there are some restrictions.
Caution: Do not configure a cluster that violates any of the following rules.
Compatibility Matrix
            NX-1000   NX-2000   NX-2050   NX-3000   NX-3050   NX-6000
NX-1000¹       •         •         •         •         •         •
NX-2000        •         •         •         •         •
NX-2050        •         •         •         •         •         •
NX-3000        •         •         •         •         •         •
NX-3050        •         •         •         •         •         •
NX-6000²       •                   •         •         •³        •
1. NX-1000 nodes can be mixed with other products in the same cluster only when they are running 10
GbE networking; they cannot be mixed when running 1 GbE networking. If NX-1000 nodes are using
the 1 GbE interface, the maximum cluster size is 8 nodes. If the nodes are using the 10 GbE interface,
the cluster has no limits other than the maximum supported cluster size that applies to all products.
2. NX-6000 nodes cannot be mixed with NX-2000 nodes in the same cluster.
3. Because the NX-3050 has a larger flash tier, it is recommended over other products for mixing with
NX-6000 nodes.
• Any combination of NX-2000, NX-2050, NX-3000, and NX-3050 nodes can be mixed in the same
cluster.
• All nodes in a cluster must be the same hypervisor type (ESXi or KVM).
• All Controller VMs in a cluster must have the same NOS version.
• Mixed Nutanix clusters comprising NX-2000 nodes and other products are supported as specified
above. However, because the NX-2000 processor architecture differs from other models, vSphere does
not support enhanced/live vMotion of VMs from one type of node to another unless Enhanced vMotion
Compatibility (EVC) is enabled. For more information about EVC, see the vSphere 5 documentation and
the following VMware knowledge base articles:
• Enhanced vMotion Compatibility (EVC) processor support [1003212]
• EVC and CPU Compatibility FAQ [1005764]
To Configure the Cluster
Before you begin.
• Confirm that the system you are using to configure the cluster meets the following requirements:
• IPv6 link-local enabled.
• Windows 7, Vista, or Mac OS.
• (Windows only) Bonjour installed (included with iTunes or downloadable from http://
support.apple.com/kb/DL999).
• Determine the IPv6 service name of any Controller VM in the cluster.
IPv6 service names are uniquely generated at the factory and have the following form (note the final
period):
NTNX-block_serial_number-node_location-CVM.local.
On the right side of the block toward the front is a label that has the block_serial_number (for example,
12AM3K520060). The node_location is a number 1-4 for NX-3000, a letter A-D for NX-1000/NX-2000/
NX-3050, or a letter A-B for NX-6000.
If you need to confirm whether IPv6 link-local is enabled on the network, or if you do not have access to the
node serial number, see the Nutanix support knowledge base for alternative methods.
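As a quick, non-definitive check, you can ping a Controller VM IPv6 service name from the configuration workstation. The serial number below is the example used elsewhere in this section; your block serial number and node location will differ:
user@host$ ping NTNX-12AM3K520060-1-CVM.local.
If the name resolves and replies are received, IPv6 link-local connectivity and name resolution are working.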
1. Open a web browser.
Nutanix recommends using Internet Explorer 9 for Windows and Safari for Mac OS.
Note: Internet Explorer requires protected mode to be disabled. Go to Tools > Internet
Options > Security, clear the Enable Protected Mode check box, and restart the browser.
2. Navigate to http://cvm_host_name:2100/cluster_init.html.
Replace cvm_host_name with the IPv6 service name of any Controller VM that will be added to the
cluster.
Following is an example URL to access the cluster creation page on a Controller VM:
http://NTNX-12AM3K520060-1-CVM.local.:2100/cluster_init.html
If the cluster_init.html page is blank, then the Controller VM is already part of a cluster. Connect to a
Controller VM that is not part of a cluster.
3. Type a meaningful value in the Cluster Name field.
This value is appended to all automated communication between the cluster and Nutanix support. It
should include the customer's name and if necessary a modifier that differentiates this cluster from any
other clusters that the customer might have.
Note: This entity has the following naming restrictions:
• The maximum length is 75 characters.
• Allowed characters are uppercase and lowercase standard Latin letters (A-Z and a-z),
decimal digits (0-9), dots (.), hyphens (-), and underscores (_).
4. Type the appropriate DNS and NTP addresses in the respective fields.
5. Type the appropriate subnet masks in the Subnet Mask row.
6. Type the appropriate default gateway IP addresses in the Default Gateway row.
7. Select the check box next to each node that you want to add to the cluster.
All unconfigured nodes on the current network are presented on this web page. If you will be configuring
multiple clusters, be sure that you only select the nodes that should be part of the current cluster.
8. Provide an IP address for all components in the cluster.
Note: The unconfigured nodes are not listed according to their position in the block. Ensure
that you assign the intended IP address to each node.
9. Click Create.
Wait until the Log Messages section of the page reports that the cluster has been successfully
configured.
Output similar to the following indicates successful cluster configuration.
Configuring IP addresses on node 12AM2K420010/A...
Configuring IP addresses on node 12AM2K420010/B...
Configuring IP addresses on node 12AM2K420010/C...
Configuring IP addresses on node 12AM2K420010/D...
Configuring Zeus on node 12AM2K420010/A...
Configuring Zeus on node 12AM2K420010/B...
Configuring Zeus on node 12AM2K420010/C...
Configuring Zeus on node 12AM2K420010/D...
Initializing cluster...
Cluster successfully initialized!
Initializing the cluster DNS and NTP servers...
Successfully updated the cluster NTP and DNS server list
10. Log on to any Controller VM in the cluster with SSH.
11. Start the Nutanix cluster.
nutanix@cvm$ cluster start
If the cluster starts properly, output similar to the following is displayed for each node in the cluster:
CVM: 172.16.8.167 Up, ZeusLeader
Zeus UP [3148, 3161, 3162, 3163, 3170, 3180]
Scavenger UP [3333, 3345, 3346, 11997]
ConnectionSplicer UP [3379, 3392]
Hyperint UP [3394, 3407, 3408, 3429, 3440, 3447]
Medusa UP [3488, 3501, 3502, 3523, 3569]
DynamicRingChanger UP [4592, 4609, 4610, 4640]
Pithos UP [4613, 4625, 4626, 4678]
Stargate UP [4628, 4647, 4648, 4709]
Cerebro UP [4890, 4903, 4904, 4979]
Chronos UP [4906, 4918, 4919, 4968]
Curator UP [4922, 4934, 4935, 5064]
Prism UP [4939, 4951, 4952, 4978]
AlertManager UP [4954, 4966, 4967, 5022]
StatsAggregator UP [5017, 5039, 5040, 5091]
SysStatCollector UP [5046, 5061, 5062, 5098]
Disaster Protection
After VM protection is configured in the web console, managing snapshots and failing from one site to
another are accomplished with the nCLI.
To Manage VM Snapshots
You can manage VM snapshots, including restoration, with these nCLI commands.
• Check status of replication.
ncli> pd list-replication-status
• List snapshots.
ncli> pd list-snapshots name="pd_name"
• Restore VMs from backup.
ncli> pd rollback-vms name="pd_name" vm-names="vm_ids" snap-id="snapshot_id" path-prefix="folder_name"
• Replace vm_ids with a comma-separated list of VM IDs as given in vm list.
• Replace snapshot_id with a snapshot ID as given by pd list-snapshots.
• Replace folder_name with the name you want to give the VM folder on the datastore, which will be
created if it does not exist.
The VM is restored to the container where the snapshot resides. If you used a DAS-SATA-only
container for replication, after restoring the VM move it to a container suitable for active workloads with
storage vMotion.
• Restore NFS files from backup.
ncli> pd rollback-nfs-files name="pd_name" files="nfs_files" snap-id="snapshot_id"
• Replace nfs_files with a comma-separated list of NFS files to restore.
• Replace snapshot_id with a snapshot ID as given by pd list-snapshots.
If you want to replace the existing file, include replace-nfs-files=true.
• Remove snapshots.
ncli> pd rm-snapshot name="pd_name" snap-ids="snapshot_ids"
Replace snapshot_ids with a comma-separated list of snapshot IDs as given by pd list-snapshots.
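For example, to restore a single VM from a protection domain snapshot using hypothetical names and IDs (substitute the values reported by your own cluster):
ncli> pd list-snapshots name="pd_exchange"
ncli> pd rollback-vms name="pd_exchange" vm-names="exch-mbx-01" snap-id="4517" path-prefix="restored_vms"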
To Fail from one Site to Another
Disaster failover
Connect to the backup site and activate it.
ncli> pd activate name="pd_name"
This operation does the following:
1. Restores all VM files from the last fully-replicated snapshot.
2. Registers the VMs on the recovery site.
3. Marks the failover site protection domain as active.
Planned failover
Connect to the primary site and specify the failover site to migrate to.
ncli> pd migrate name="pd_name" remote-site="remote_site_name2"
This operation does the following:
1. Creates and replicates a snapshot of the protection domain.
2. Shuts down VMs on the local site.
3. Creates and replicates another snapshot of the protection domain.
4. Unregisters all VMs and removes their associated files.
5. Marks the local site protection domain as inactive.
6. Restores all VM files from the last snapshot and registers them on the remote site.
7. Marks the remote site protection domain as active.
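For example, with a hypothetical protection domain named pd_exchange and a remote site named dr-site, a planned failover is started from the primary site:
ncli> pd migrate name="pd_exchange" remote-site="dr-site"
If the primary site is lost, the same protection domain is instead activated from the backup site:
ncli> pd activate name="pd_exchange"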
2: Password Management
You can change the passwords of the following cluster components:
• Nutanix management interfaces
• Nutanix Controller VMs
• Hypervisor software
• Node hardware (management port)
Requirements
• You know the IP address of the component that you want to modify.
• You know the current password of the component you want to modify.
The default passwords of all components are provided in Default Cluster Credentials on page 2.
• You have selected a password that has 8 or more characters and at least one of each of the following:
• Upper-case letters
• Lower-case letters
• Numerals
• Symbols
To Change the Controller VM Password
Perform these steps on every Controller VM in the cluster.
Warning: The nutanix user must have the same password on all Controller VMs.
1. Log on to the Controller VM with SSH.
2. Change the nutanix user password.
nutanix@cvm$ passwd
3. Respond to the prompts, providing the current and new nutanix user password.
Changing password for nutanix.
Old Password:
New password:
Retype new password:
Password changed.
Note: The password must meet the following complexity requirements:
• At least 9 characters long
• At least 2 lowercase characters
• At least 2 uppercase characters
• At least 2 numbers
• At least 2 special characters
To Change the ESXi Host Password
The cluster software needs to be able to log into each host as root to perform standard cluster operations,
such as mounting a new NFS datastore or querying the status of VMs in the cluster. Therefore, after
changing the ESXi root password it is critical to update the cluster configuration with the new password.
Tip: Although it is not required for the root user to have the same password on all hosts, doing so
will make cluster management and support much easier. If you do select a different password for
one or more hosts, make sure to note the password for each host.
1. Change the root password of all hosts.
Perform these steps on every ESXi host in the cluster.
a. Log on to the ESXi host with SSH.
b. Change the root password.
root@esx# passwd root
c. Respond to the prompts, providing the current and new root password.
Changing password for root.
Old Password:
New password:
Retype new password:
Password changed.
2. Update the root user password for all hosts in the Zeus configuration.
Warning: If you do not perform this step, the web console will no longer show correct statistics
and alerts, and other cluster operations will fail.
a. Log on to any Controller VM in the cluster with SSH.
b. Find the host IDs.
nutanix@cvm$ ncli -p 'admin_password' host list | grep -E 'ID|Hypervisor Key'
Note the host ID for each hypervisor host.
c. Update the hypervisor host password.
nutanix@cvm$ ncli -p 'admin_password' managementserver edit name=host_addr password='host_password'
nutanix@cvm$ ncli -p 'admin_password' host edit id=host_id hypervisor-password='host_password'
• Replace host_addr with the IP address of the hypervisor host.
• Replace host_id with a host ID you determined in the preceding step.
• Replace host_password with the root password on the corresponding hypervisor host.
Perform this step for every hypervisor host in the cluster.
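For example, assuming a host with ID 7 and IP address 172.16.8.31 (hypothetical values; use the IDs and addresses reported by the host list command):
nutanix@cvm$ ncli -p 'admin_password' managementserver edit name=172.16.8.31 password='New-Passw0rd1'
nutanix@cvm$ ncli -p 'admin_password' host edit id=7 hypervisor-password='New-Passw0rd1'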
3. Update the ESXi host password.
a. Log on to vCenter with the vSphere client.
b. Right-click the host with the changed password and select Disconnect.
c. Right-click the host and select Connect.
d. Enter the new password and complete the Add Host Wizard.
If reconnecting the host fails, remove it from the cluster and add it again.
To Change the KVM Host Password
The cluster software needs to be able to log into each host as root to perform standard cluster operations,
such as mounting a new NFS datastore or querying the status of VMs in the cluster. Therefore, after
changing the KVM root password it is critical to update the cluster configuration with the new password.
Tip: Although it is not required for the root user to have the same password on all hosts, doing so
will make cluster management and support much easier. If you do select a different password for
one or more hosts, make sure to note the password for each host.
1. Change the root password of all hosts.
Perform these steps on every KVM host in the cluster.
a. Log on to the KVM host with SSH.
b. Change the root password.
root@kvm# passwd root
c. Respond to the prompts, providing the current and new root password.
Changing password for root.
Old Password:
New password:
Retype new password:
Password changed.
2. Update the root user password for all hosts in the Zeus configuration.
Warning: If you do not perform this step, the web console will no longer show correct statistics
and alerts, and other cluster operations will fail.
a. Log on to any Controller VM in the cluster with SSH.
b. Find the host IDs.
nutanix@cvm$ ncli -p 'admin_password' host list | grep -E 'ID|Hypervisor Key'
Note the host ID for each hypervisor host.
c. Update the hypervisor host password.
nutanix@cvm$ ncli -p 'admin_password' managementserver edit name=host_addr password='host_password'
nutanix@cvm$ ncli -p 'admin_password' host edit id=host_id hypervisor-password='host_password'
• Replace host_addr with the IP address of the hypervisor host.
• Replace host_id with a host ID you determined in the preceding step.
• Replace host_password with the root password on the corresponding hypervisor host.
Perform this step for every hypervisor host in the cluster.
To Change the IPMI Password
The cluster software needs to be able to log into the management interface on each host to perform certain
operations, such as reading hardware alerts. Therefore, after changing the IPMI password it is critical to
update the cluster configuration with the new password.
Tip: Although it is not required for the administrative user to have the same password on all hosts,
doing so will make cluster management much easier. If you do select a different password for one
or more hosts, make sure to note the password for each host.
1. Change the administrative user password of all IPMI hosts.
Product                      Administrative user
NX-1000, NX-3050, NX-6000    ADMIN
NX-3000                      admin
NX-2000                      ADMIN
Perform these steps on every IPMI host in the cluster.
a. Sign in to the IPMI web interface as the administrative user.
b. Click Configuration.
c. Click Users.
d. Select the administrative user and then click Modify User.
e. Type the new password in both text fields and then click Modify.
f. Click OK to close the confirmation window.
2. Update the administrative user password for all hosts in the Zeus configuration.
a. Log on to any Controller VM in the cluster with SSH.
b. Generate a list of all hosts in the cluster.
nutanix@cvm$ ncli -p 'admin_password' host list | grep -E 'ID|IPMI Address'
Note the host ID of each entry in the list.
c. Update the IPMI password.
nutanix@cvm$ ncli -p 'admin_password' host edit id=host_id ipmi-password='ipmi_password'
• Replace host_id with a host ID you determined in the preceding step.
• Replace ipmi_password with the administrative user password on the corresponding IPMI host.
Perform this step for every IPMI host in the cluster.
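For example, assuming a host with ID 7 (a hypothetical value; use the IDs reported in the previous step):
nutanix@cvm$ ncli -p 'admin_password' host edit id=7 ipmi-password='New-Ipmi-Passw0rd1'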
3: Alerts
This section lists all the NOS alerts with cause and resolution, sorted by category.
• Cluster
• Controller VM
• Guest VM
• Hardware
• Storage
Cluster
CassandraDetachedFromRing [A1055]
Message Cassandra on CVM ip_address is now detached from ring due to reason.
Cause Either a metadata drive has failed, the node was down for an extended period of time,
or an unexpected subsystem fault was encountered, so the node was removed from the
metadata store.
Resolution If the metadata drive has failed, replace the metadata drive as soon as possible. Refer
to the Nutanix documentation for instructions. If the node was down for an extended
period of time and is now running, add it back to the metadata store with the "host
enable-metadata-store" nCLI command. Otherwise, contact Nutanix support.
Severity kCritical
CassandraMarkedToBeDetached [A1054]
Message Cassandra on CVM ip_address is marked to be detached from ring due to reason.
Cause Either a metadata drive has failed, the node was down for an extended period of time,
or an unexpected subsystem fault was encountered, so the node is marked to be
removed from the metadata store.
Resolution If the metadata drive has failed, replace the metadata drive as soon as possible. Refer
to the Nutanix documentation for instructions. If the node was down for an extended
period of time and is now running, add it back to the metadata store with the "host
enable-metadata-store" nCLI command. Otherwise, contact Nutanix support.
Severity kCritical
DuplicateRemoteClusterId [A1038]
Message Remote cluster 'remote_name' is disabled because the name conflicts with
remote cluster 'conflicting_remote_name'.
Cause Two remote sites with different names or different IP addresses have the same cluster ID.
This can happen in two cases: (a) a remote cluster is added twice under two different
names (through different IP addresses), or (b) two clusters have the same cluster ID.
Resolution In case (a), remove the duplicate remote site. In case (b), verify that both clusters
have the same cluster ID and contact Nutanix support.
Severity kWarning
JumboFramesDisabled [A1062]
Message Jumbo frames could not be enabled on the iface interface in the last three
attempts.
Cause Jumbo frames could not be enabled on the Controller VMs.
Resolution Ensure that the 10-Gig network switch has jumbo-frames enabled.
Severity kCritical
NetworkDisconnect [A1041]
Message IPMI interface target_ip is not reachable from Controller VM source_ip in the
last six attempts.
Cause The IPMI interface is down or there is a network connectivity issue.
Resolution Ensure that the IPMI interface is functioning and that physical networking, VLANs, and
virtual switches are configured correctly.
Severity kWarning
NetworkDisconnect [A1006]
Message Hypervisor target_ip is not reachable from Controller VM source_ip in the last
six attempts.
Cause The hypervisor host is down or there is a network connectivity issue.
Resolution Ensure that the hypervisor host is running and that physical networking, VLANs, and
virtual switches are configured correctly.
Severity kCritical
NetworkDisconnect [A1048]
Message Controller VM svm_ip with network address svm_subnet is in a different network
than the Hypervisor hypervisor_ip, which is in the network hypervisor_subnet.
Cause The Controller VM and the hypervisor are not on the same subnet.
Resolution Reconfigure the cluster. Either move the Controller VMs to the same subnet as the
hypervisor hosts or move the hypervisor hosts to the same subnet as the Controller
VMs.
Severity kCritical
NetworkDisconnect [A1040]
Message Hypervisor target_ip is not reachable from Controller VM source_ip in the last
three attempts.
Cause The hypervisor host is down or there is a network connectivity issue.
Resolution Ensure that the hypervisor host is running and that physical networking, VLANs, and
virtual switches are configured correctly.
Severity kCritical
RemoteSupportEnabled [A1051]
Message Daily reminder that remote support tunnel to Nutanix HQ is enabled on this
cluster.
Cause Nutanix support staff are able to access the cluster to assist with any issue.
Resolution No action is necessary.
Severity kInfo
TimeDifferenceHigh [A1017]
Message Wall clock time has drifted by more than time_difference_limit_secs seconds
between the Controller VMs lower_time_ip and higher_time_ip.
Cause The cluster does not have NTP servers configured or they are not reachable.
Resolution Ensure that the cluster has NTP servers configured and that the NTP servers are
reachable from all Controller VMs.
Severity kWarning
ZeusConfigMismatch [A1008]
Message IPMI IP address on Controller VM svm_ip_address was updated from
zeus_ip_address to invalid_ip_address without following the Nutanix IP
Reconfiguration procedure.
Cause The IP address configured in the cluster does not match the actual setting of the IPMI
interface.
Resolution Follow the IP address change procedure in the Nutanix documentation.
Severity kCritical
ZeusConfigMismatch [A1009]
Message IP address of Controller VM zeus_ip_address has been updated to
invalid_ip_address. The Controller VM will not be part of the cluster once the
change comes into effect, unless zeus configuration is updated.
Cause The IP address configured in the cluster does not match the actual setting of the
Controller VM.
Resolution Follow the IP address change procedure in the Nutanix documentation.
Severity kCritical
ZeusConfigMismatch [A1029]
Message Hypervisor IP address on Controller VM svm_ip_address was updated from
zeus_ip_address to invalid_ip_address without following the Nutanix IP
Reconfiguration procedure.
Cause The IP address configured in the cluster does not match the actual setting of the
hypervisor.
Resolution Follow the IP address change procedure in the Nutanix documentation.
Severity kCritical
Controller VM
CVMNICSpeedLow [A1058]
Message Controller VM service_vm_external_ip is not running on 10 Gbps network
interface. This will degrade the system performance.
Cause The Controller VM is not configured to use the 10 Gbps NIC or is configured to share
load with a slower NIC.
Resolution Connect the Controller VM to 10 Gbps NICs only.
Severity kWarning
CVMRAMUsageHigh [A1056]
Message Main memory usage in Controller VM ip_address is high in the last 20 minutes.
free_memory_kb KB of memory is free.
Cause The RAM usage on the Controller VM has been high.
Resolution Contact Nutanix Support for diagnosis. RAM on the Controller VM may need to be
increased.
Severity kCritical
CVMRebooted [A1024]
Message Controller VM ip_address has been rebooted.
Cause Various
Resolution If the Controller VM was restarted intentionally, no action is necessary. If it restarted by
itself, contact Nutanix support.
Severity kCritical
IPMIError [A1050]
Message Controller VM ip_address is unable to fetch IPMI SDR repository.
Cause The IPMI interface is down or there is a network connectivity issue.
Resolution Ensure that the IPMI interface is functioning and that physical networking, VLANs, and
virtual switches are configured correctly.
Severity kCritical
KernelMemoryUsageHigh [A1034]
Message Controller VM ip_address's kernel memory usage is higher than expected.
Cause Various
Resolution Contact Nutanix support.
Severity kCritical
NetworkDisconnect [A1001]
Message Controller VM target_ip is not reachable from Controller VM source_ip in the
last six attempts.
Cause The Controller VM is down or there is a network connectivity issue.
Resolution If the Controller VM does not respond to ping, turn it on. Ensure that physical
networking, VLANs, and virtual switches are configured correctly.
Severity kCritical
NetworkDisconnect [A1011]
Message Controller VM target_ip is not reachable from Controller VM source_ip in the
last three attempts.
Cause The Controller VM is down or there is a network connectivity issue.
Resolution Ensure that the Controller VM is running and that physical networking, VLANs, and
virtual switches are configured correctly.
Severity kCritical
NodeInMaintenanceMode [A1013]
Message Controller VM ip_address is put in maintenance mode due to reason.
Cause Node removal has been initiated.
Resolution No action is necessary.
Severity kInfo
ServicesRestartingFrequently [A1032]
Message There have been 10 or more cluster services restarts within 15 minutes.
Cause This alert usually indicates that the Controller VM was restarted, but there could be
other causes.
Resolution If this alert occurs once or infrequently, no action is necessary. If it is frequent, contact
Nutanix support.
Severity kCritical
StargateTemporarilyDown [A1030]
Message Stargate on Controller VM ip_address is down for downtime seconds.
Cause Various
Resolution Contact Nutanix support.
Severity kCritical
Guest VM
ProtectedVmNotFound [A1010]
Message Unable to locate VM with name 'vm_name' and internal ID 'vm_id' in protection
domain 'protection_domain_name'.
Cause The VM was deleted.
Resolution Remove the VM from the protection domain.
Severity kWarning
ProtectionDomainActivation [A1043]
Message Unable to make protection domain 'protection_domain_name' active on remote
site 'remote_name' due to 'reason'.
Cause Various
Resolution Resolve the stated reason for the failure. If you cannot resolve the error, contact
Nutanix support.
Severity kCritical
ProtectionDomainChangeModeFailure [A1060]
Message Protection domain protection_domain_name activate/deactivate failed. reason
Cause Protection domain cannot be activated or migrated.
Resolution Resolve the stated reason for the failure. If you cannot resolve the error, contact
Nutanix support.
Severity kCritical
ProtectionDomainReplicationExpired [A1003]
Message Protection domain protection_domain_name replication to the remote site
remote_name has expired before it is started.
Cause Replication is taking too long to complete before the snapshots expire.
Resolution Review replication schedules taking into account bandwidth and overall load on
systems. Confirm retention time on replicated snapshots.
Severity kWarning
ProtectionDomainReplicationFailure [A1015]
Message Protection domain protection_domain_name replication to remote site
remote_name failed. reason
Cause Various
Resolution Resolve the stated reason for the failure. If you cannot resolve the error, contact
Nutanix support.
Severity kCritical
ProtectionDomainSnapshotFailure [A1064]
Message Protection domain protection_domain_name snapshot snapshot_id failed. reason
Cause Protection domain cannot be snapshotted.
Resolution Make sure all VMs and files are available.
Severity kCritical
VMAutoStartDisabled [A1057]
Message Virtual Machine auto start is disabled on the hypervisor of Controller VM
service_vm_external_ip
Cause Auto start of the Controller VM is disabled.
Resolution Enable auto start of the Controller VM as recommended by Nutanix. If auto start is
intentionally disabled, no action is necessary.
Severity kInfo
VMLimitExceeded [A1053]
Message The number of virtual machines on node node_serial is vm_count, which is above
the limit vm_limit.
Cause The node is running more virtual machines than the hardware can support.
Resolution Shut down VMs or move them to other nodes in the cluster.
Severity kCritical
VmActionError [A1033]
Message Failed to action VM with name 'vm_name' and internal ID 'vm_id' due to reason
Cause A VM could not be restored because of a hypervisor error, or could not be deleted
because it is still in use.
Resolution Resolve the stated reason for the failure. If you cannot resolve the error, contact
Nutanix support.
Severity kCritical
VmRegistrationError [A1002]
Message Failed to register VM using name 'vm_name' with the hypervisor due to reason
Cause An error on the hypervisor.
Resolution Resolve the stated reason for the failure. If you cannot resolve the error, contact
Nutanix support.
Severity kCritical
Hardware
CPUTemperatureHigh [A1049]
Message Temperature of CPU cpu_id exceeded temperatureC on Controller VM ip_address
Cause The device is overheating to the point of imminent failure.
Resolution Ensure that the fans in the block are functioning properly and that the environment is
cool enough.
Severity kCritical
DiskBad [A1044]
Message Disk disk_position on node node_position of block block_position is marked
offline due to IO errors. Serial number of the disk is disk_serial in node
node_serial of block block_serial.
Cause The drive has failed.
Resolution Replace the failed drive. Refer to the Nutanix documentation for instructions.
Severity kCritical
FanSpeedLow [A1020]
Message Speed of fan fan_id exceeded fan_rpm RPM on Controller VM ip_address.
Cause The device is overheating to the point of imminent failure.
Resolution Ensure that the fans in the block are functioning properly and that the environment is
cool enough.
Severity kCritical
FanSpeedLow [A1045]
Message Fan fan_id has stopped on Controller VM ip_address.
Cause A fan has failed.
Resolution Replace the fan as soon as possible. Refer to the Nutanix documentation for
instructions.
Severity kCritical
FusionIOTemperatureHigh [A1016]
Message Fusion-io drive device temperature exceeded temperatureC on Controller VM
ip_address
Cause The device is overheating.
Resolution Ensure that the fans in the block are functioning properly and that the environment is
cool enough.
Severity kWarning
FusionIOTemperatureHigh [A1047]
Message Fusion-io drive device temperature exceeded temperatureC on Controller VM
ip_address
Cause The device is overheating to the point of imminent failure.
Resolution Ensure that the fans in the block are functioning properly and that the environment is
cool enough.
Severity kCritical
FusionIOWearHigh [A1014]
Message Fusion-io drive die failure has occurred in Controller VM svm_ip and most of
the Fusion-io drives have worn out beyond 1.2PB of writes.
Cause The drives are approaching the maximum write endurance and are beginning to fail.
Resolution Replace the drives as soon as possible. Refer to the Nutanix documentation for
instructions.
Severity kCritical
FusionIOWearHigh [A1026]
Message Fusion-io drive die failures have occurred in Controller VMs svm_ip_list.
Cause The drive is failing.
Resolution Replace the drive as soon as possible. Refer to the Nutanix documentation for
instructions.
Severity kCritical
HardwareClockFailure [A1059]
Message Hardware clock in node node_serial has failed.
Cause The RTC clock on the host has failed or the RTC battery has died.
Resolution Replace the node. Refer to the Nutanix documentation for instructions.
Severity kCritical
IntelSSDTemperatureHigh [A1028]
Message Intel 910 SSD device device temperature exceeded temperatureC on the
Controller VM ip_address.
Cause The device is overheating.
Resolution Ensure that the fans in the block are functioning properly and that the environment is
cool enough.
Severity kWarning
IntelSSDTemperatureHigh [A1007]
Message Intel 910 SSD device device temperature exceeded temperatureC on the
Controller VM ip_address.
Cause The device is overheating to the point of imminent failure.
Resolution Ensure that the fans in the block are functioning properly and that the environment is
cool enough.
Severity kCritical
IntelSSDWearHigh [A1035]
Message Intel 910 SSD device device on the Controller VM ip_address has worn out
beyond 6.5PB of writes.
Cause The drive is approaching the maximum write endurance.
Resolution Consider replacing the drive.
Severity kWarning
IntelSSDWearHigh [A1042]
Message Intel 910 SSD device device on the Controller VM ip_address has worn out
beyond 7PB of writes.
Cause The drive is close to the maximum write endurance and failure is imminent.
Resolution Replace the drive as soon as possible. Refer to the Nutanix documentation for
instructions.
Severity kCritical
PowerSupplyDown [A1046]
Message power_source power source is down on block block_position.
Cause The power supply has failed.
Resolution Replace the power supply as soon as possible. Refer to the Nutanix documentation for
instructions.
Severity kCritical
RAMFault [A1052]
Message DIMM fault detected on Controller VM ip_address. The node is running with
current_memory_gb GB whereas installed_memory_gb GB was installed.
Cause A DIMM has failed.
Resolution Replace the failed DIMM as soon as possible. Refer to the Nutanix documentation for
instructions.
Severity kCritical
RAMTemperatureHigh [A1022]
Message Temperature of DIMM dimm_id for CPU cpu_id exceeded temperatureC on Controller
VM ip_address
Cause The device is overheating to the point of imminent failure.
Resolution Ensure that the fans in the block are functioning properly and that the environment is
cool enough.
Severity kCritical
SystemTemperatureHigh [A1012]
Message System temperature exceeded temperatureC on Controller VM ip_address
Cause The node is overheating to the point of imminent failure.
Resolution Ensure that the fans in the block are functioning properly and that the environment is
cool enough.
Severity kCritical
Storage
DiskInodeUsageHigh [A1018]
Message Inode usage for one or more disks on Controller VM ip_address has exceeded
75%.
Cause The filesystem contains too many files.
Resolution Delete unneeded data or add nodes to the cluster.
Severity kWarning
DiskInodeUsageHigh [A1027]
Message Inode usage for one or more disks on Controller VM ip_address has exceeded
90%.
Cause The filesystem contains too many files.
Resolution Delete unneeded data or add nodes to the cluster.
Severity kCritical
DiskSpaceUsageHigh [A1031]
Message Disk space usage for one or more disks on Controller VM ip_address has
exceeded warn_limit%.
Cause Too much data is stored on the node.
Resolution Delete unneeded data or add nodes to the cluster.
Severity kWarning
DiskSpaceUsageHigh [A1005]
Message Disk space usage for one or more disks on Controller VM ip_address has
exceeded critical_limit%.
Cause Too much data is stored on the node.
Resolution Delete unneeded data or add nodes to the cluster.
Severity kCritical
FusionIOReserveLow [A1023]
Message Fusion-io drive device reserves are down to reserve% on Controller VM
ip_address.
Cause The drive is beginning to fail.
Resolution Consider replacing the drive.
Severity kWarning
FusionIOReserveLow [A1039]
Message Fusion-io drive device reserves are down to reserve% on Controller VM
ip_address.
Cause The drive is failing.
Resolution Replace the drive as soon as possible. Refer to the Nutanix documentation for
instructions.
Severity kCritical
SpaceReservationViolated [A1021]
Message Space reservation configured on vdisk vdisk_name belonging to container id
container_id could not be honored due to insufficient disk space resulting
from a possible disk or node failure.
Cause A drive or a node has failed, and the space reservations on the cluster can no longer be
met.
Resolution Change space reservations to total less than 90% of the available storage, and
replace the drive or node as soon as possible. Refer to the Nutanix documentation for
instructions.
Severity kWarning
VDiskBlockMapUsageHigh [A1061]
Message Too many snapshots have been allocated in the system. This may cause
perceivable performance degradation.
Cause Too many vdisks or snapshots are present in the system.
Resolution Remove unneeded snapshots and vdisks. If using remote replication, try to lower the
frequency of taking snapshots. If you cannot resolve the error, contact Nutanix support.
Severity kInfo
4: IP Address Configuration
NOS includes a web-based configuration tool that automates changing the Controller VM IP addresses and
configures the cluster to use the new IP addresses. Other cluster components must be modified
manually.
Requirements
The web-based configuration tool requires that IPv6 link-local be enabled on the subnet. If IPv6 link-local is
not available, you must configure the Controller VM IP addresses and the cluster manually. The web-based
configuration tool also requires that the Controller VMs be able to communicate with each other.
All Controller VMs and hypervisor hosts must be on the same subnet. If the IPMI interfaces are connected,
Nutanix recommends that they be on the same subnet as the Controller VMs and hypervisor hosts.
Guest VMs can be on a different subnet.
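As a quick check that IPv6 link-local addressing is available, confirm that a Controller VM has an fe80:: address on its external interface. The interface name eth0 is an assumption and may differ in your environment:
nutanix@cvm$ ifconfig eth0 | grep inet6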
To Reconfigure the Cluster
Warning: If you are reassigning a Controller VM IP address to another Controller VM, you must
perform this complete procedure twice: once to assign intermediate IP addresses and again to
assign the desired IP addresses.
For example, if Controller VM A has IP address 172.16.0.11 and Controller VM B has IP address
172.16.0.10 and you want to swap them, you would need to reconfigure them with different IP
addresses (such as 172.16.0.100 and 172.16.0.101) before changing them to the IP addresses in
use initially.
1. Place the cluster in reconfiguration mode by following To Prepare to Reconfigure the Cluster on
page 34.
2. Configure the IPMI IP addresses by following the procedure for your hardware model.
→ To Configure the Remote Console IP Address (NX-1000, NX-3050, NX-6000) on page 35
→ To Configure the Remote Console IP Address (NX-3000) on page 35
→ To Configure the Remote Console IP Address (NX-2000) on page 36
Alternatively, you can set the IPMI IP address using a command-line utility by following To Configure
the Remote Console IP Address (command line) on page 37.
3. Configure networking on the node by following the hypervisor-specific procedure.
→ vSphere: To Configure Host Networking on page 38
→ KVM: To Configure Host Networking (KVM) on page 39
4. (vSphere only) Update the ESXi host IP addresses in vCenter by following To Update the ESXi Host
Password in vCenter on page 40.
5. Configure the Controller VM IP addresses.
→ If IPv6 is enabled on the subnet, follow To Change the Controller VM IP Addresses on page 40.
→ If IPv6 is not enabled on the subnet, follow To Change a Controller VM IP Address (manual) on
page 41 for each Controller VM in the cluster.
6. Complete cluster reconfiguration by following To Complete Cluster Reconfiguration on page 42.
To Prepare to Reconfigure the Cluster
1. Log on to any Controller VM in the cluster with SSH.
2. Stop the Nutanix cluster.
nutanix@cvm$ cluster stop
Wait to proceed until output similar to the following is displayed for every Controller VM in the cluster.
CVM: 172.16.8.191 Up, ZeusLeader
Zeus UP [3167, 3180, 3181, 3182, 3191, 3201]
Scavenger UP [3334, 3351, 3352, 3353]
ConnectionSplicer DOWN []
Hyperint DOWN []
Medusa DOWN []
DynamicRingChanger DOWN []
Pithos DOWN []
Stargate DOWN []
Cerebro DOWN []
Chronos DOWN []
Curator DOWN []
Prism DOWN []
AlertManager DOWN []
StatsAggregator DOWN []
SysStatCollector DOWN []
3. Put the cluster in reconfiguration mode.
nutanix@cvm$ cluster reconfig
Type y to confirm the reconfiguration.
Wait until the cluster successfully enters reconfiguration mode, as shown in the following example.
INFO cluster:185 Restarted Genesis on 172.16.8.189.
INFO cluster:185 Restarted Genesis on 172.16.8.188.
INFO cluster:185 Restarted Genesis on 172.16.8.191.
INFO cluster:185 Restarted Genesis on 172.16.8.190.
INFO cluster:864 Success!
Remote Console IP Address Configuration
The Intelligent Platform Management Interface (IPMI) is a standardized interface used to manage a host
and monitor its operation. To enable remote access to the console of each host, you must configure the
IPMI settings within BIOS.
The Nutanix cluster provides a Java application to remotely view the console of each node, or host server.
You can use this console to configure additional IP addresses in the cluster.
The procedure for configuring the remote console IP address is slightly different for each hardware
platform.
To Configure the Remote Console IP Address (NX-1000, NX-3050, NX-6000)
1. Connect a keyboard and monitor to a node in the Nutanix block.
2. Restart the node and press Delete to enter the BIOS setup utility.
You will have a limited amount of time to enter BIOS before the host completes the restart process.
3. Press the right arrow key to select the IPMI tab.
4. Press the down arrow key until BMC network configuration is highlighted and then press Enter.
5. Select Configuration Address source and press Enter.
6. Select Static and press Enter.
7. Assign the Station IP address, Subnet mask, and Router IP address.
8. Review the BIOS settings and press F4 to save the configuration changes and exit the BIOS setup
utility.
The node restarts.
To Configure the Remote Console IP Address (NX-3000)
1. Connect a keyboard and monitor to a node in the Nutanix block.
2. Restart the node and press Delete to enter the BIOS setup utility.
You will have a limited amount of time to enter BIOS before the host completes the restart process.
3. Press the right arrow key to select the Server Mgmt tab.
4. Press the down arrow key until BMC network configuration is highlighted and then press Enter.
5. Select Configuration source and press Enter.
6. Select Static on next reset and press Enter.
7. Assign the Station IP address, Subnet mask, and Router IP address.
8. Press F10 to save the configuration changes.
9. Review the settings and then press Enter.
The node restarts.
To Configure the Remote Console IP Address (NX-2000)
1. Connect a keyboard and monitor to a node in the Nutanix block.
2. Restart the node and press Delete to enter the BIOS setup utility.
You will have a limited amount of time to enter BIOS before the host completes the restart process.
3. Press the right arrow key to select the Advanced tab.
4. Press the down arrow key until IPMI Configuration is highlighted and then press Enter.
5. Select Set LAN Configuration and press Enter.
6. Select Static to assign an IP address, subnet mask, and gateway address.
7. Press F10 to save the configuration changes.
8. Review the settings and then press Enter.
9. Restart the node.
To Configure the Remote Console IP Address (command line)
You can configure the management interface from the hypervisor host on the same node.
Perform these steps once from each hypervisor host in the cluster where the management network configuration needs to be changed.
1. Log on to the hypervisor host with SSH or the IPMI remote console.
2. Set the networking parameters.
root@esx# /ipmitool -U ADMIN -P ADMIN lan set 1 ipsrc static
root@esx# /ipmitool -U ADMIN -P ADMIN lan set 1 ipaddr mgmt_interface_ip_addr
root@esx# /ipmitool -U ADMIN -P ADMIN lan set 1 netmask mgmt_interface_subnet_addr
root@esx# /ipmitool -U ADMIN -P ADMIN lan set 1 defgw ipaddr mgmt_interface_gateway
root@kvm# ipmitool -U ADMIN -P ADMIN lan set 1 ipsrc static
root@kvm# ipmitool -U ADMIN -P ADMIN lan set 1 ipaddr mgmt_interface_ip_addr
root@kvm# ipmitool -U ADMIN -P ADMIN lan set 1 netmask mgmt_interface_subnet_addr
root@kvm# ipmitool -U ADMIN -P ADMIN lan set 1 defgw ipaddr mgmt_interface_gateway
3. Show current settings.
root@esx# /ipmitool -v -U ADMIN -P ADMIN lan print 1
root@kvm# ipmitool -v -U ADMIN -P ADMIN lan print 1
Confirm that the parameters are set to the correct values.
To Configure Host Networking
You can access the ESXi console either through IPMI or by attaching a keyboard and monitor to the node.
1. On the ESXi host console, press F2 and then provide the ESXi host logon credentials.
2. Press the down arrow key until Configure Management Network is highlighted and then press Enter.
3. Select Network Adapters and press Enter.
4. Ensure that the connected network adapters are selected.
If they are not selected, press Space to select them and press Enter to return to the previous screen.
5. If a VLAN ID needs to be configured on the Management Network, select VLAN (optional) and press
Enter. In the dialog box, provide the VLAN ID and press Enter.
6. Select IP Configuration and press Enter.
7. If necessary, highlight the Set static IP address and network configuration option and press Space
to update the setting.
8. Provide values for the following: IP Address, Subnet Mask, and Default Gateway fields based on your
environment and then press Enter .
9. Select DNS Configuration and press Enter.
10. If necessary, highlight the Use the following DNS server addresses and hostname option and press
Space to update the setting.
11. Provide values for the Primary DNS Server and Alternate DNS Server fields based on your
environment and then press Enter.
12. Press Esc and then Y to apply all changes and restart the management network.
13. Select Test Management Network and press Enter.
14. Press Enter to start the network ping test.
15. Verify that the default gateway and DNS servers reported by the ping test match those that you
specified earlier in the procedure and then press Enter.
Ensure that the tested addresses pass the ping test. If they do not, confirm that the correct IP
addresses are configured.
Press Enter to close the test window.
16. Press Esc to log out.
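If you prefer to verify the result from the ESXi shell instead of the direct console, the following read-only commands show the current management IP and DNS settings. This check is not part of the procedure above and assumes ESXi 5.x command syntax.
root@esx# esxcli network ip interface ipv4 get
root@esx# esxcli network ip dns server list
Confirm that the management VMkernel interface (typically vmk0) and DNS servers match the values you entered above.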
To Configure Host Networking (KVM)
You can access the hypervisor host console either through IPMI or by attaching a keyboard and monitor to
the node.
1. Log on to the host as root.
2. Open the network interface configuration file.
root@kvm# vi /etc/sysconfig/network-scripts/ifcfg-br0
3. Press A to edit values in the file.
4. Update entries for netmask, gateway, and address.
The block should look like this:
ONBOOT="yes"
NM_CONTROLLED="no"
NETMASK="subnet_mask"
IPADDR="host_ip_addr"
DEVICE="eth0"
TYPE="ethernet"
GATEWAY="gateway_ip_addr"
BOOTPROTO="none"
• Replace host_ip_addr with the IP address for the hypervisor host.
• Replace subnet_mask with the subnet mask for host_ip_addr.
• Replace gateway_ip_addr with the gateway address for host_ip_addr.
5. Press Esc.
6. Type :wq and press Enter to save your changes.
7. Open the name services configuration file.
root@kvm# vi /etc/resolv.conf
8. Update the values for the nameserver parameter then save and close the file.
9. Restart networking.
root@kvm# /etc/init.d/network restart
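To confirm that the new settings took effect, you can check the bridge interface and test connectivity to the gateway. These are optional read-only checks, using the same br0 interface and gateway_ip_addr values as above.
root@kvm# ip addr show br0
root@kvm# ping -c 3 gateway_ip_addr
The ip addr output should show the new host IP address on br0, and the ping should succeed.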
To Update the ESXi Host Password in vCenter
1. Log on to vCenter with the vSphere client.
2. Right-click the host with the changed password and select Disconnect.
3. Right-click the host and select Connect.
4. Enter the new password and complete the Add Host Wizard.
If reconnecting the host fails, remove it from the cluster and add it again.
To Change the Controller VM IP Addresses
Before you begin.
• Confirm that the system you are using to configure the cluster meets the following requirements:
• IPv6 link-local enabled.
• Windows 7, Vista, or MacOS.
• (Windows only) Bonjour installed (included with iTunes or downloadable from http://support.apple.com/kb/DL999).
• Determine the IPv6 service of any Controller VM in the cluster.
IPv6 service names are uniquely generated at the factory and have the following form (note the final
period):
NTNX-block_serial_number-node_location-CVM.local.
On the right side of the block toward the front is a label that has the block_serial_number (for example,
12AM3K520060). The node_location is a number 1-4 for NX-3000, a letter A-D for NX-1000/NX-2000/
NX-3050, or a letter A-B for NX-6000.
If IPv6 link-local is not enabled on the subnet, reconfigure the cluster manually.
If you need to confirm if IPv6 link-local is enabled on the network or if you do not have access to get the
node serial number, see the Nutanix support knowledge base for alternative methods.
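As an optional check, you can ping the IPv6 service name from your workstation to confirm that it resolves and that IPv6 link-local connectivity works. The service name below is built from the example block serial number above and is only an illustration; on some systems you may also need to specify the network interface.
user@host$ ping6 NTNX-12AM3K520060-A-CVM.local.
On Windows, use ping -6 with the same service name.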
Warning: If you are reassigning a Controller VM IP address to another Controller VM, you must
perform this complete procedure twice: once to assign intermediate IP addresses and again to
assign the desired IP addresses.
For example, if Controller VM A has IP address 172.16.0.11 and Controller VM B has IP address
172.16.0.10 and you want to swap them, you would need to reconfigure them with different IP
addresses (such as 172.16.0.100 and 172.16.0.101) before changing them to the IP addresses in
use initially.
The cluster must be stopped and in reconfiguration mode before changing the Controller VM IP addresses.
1. Open a web browser.
Nutanix recommends using Internet Explorer 9 for Windows and Safari for Mac OS.
Note: Internet Explorer requires protected mode to be disabled. Go to Tools > Internet
Options > Security, clear the Enable Protected Mode check box, and restart the browser.
2. Go to http://cvm_ip_addr:2100/ip_reconfig.html
Replace cvm_ip_addr with the IPv6 service name of any Controller VM in the cluster (for example, NTNX-12AM3K520060-A-CVM.local.).
3. Update one or more cells on the IP Reconfiguration page.
Ensure that all components satisfy the cluster subnet requirements. See Subnet Requirements.
4. Click Reconfigure.
5. Wait until the Log Messages section of the page reports that the cluster has been successfully
reconfigured, as shown in the following example.
Configuring IP addresses on node S10264822116570/A...
Success!
Configuring IP addresses on node S10264822116570/C...
Success!
Configuring IP addresses on node S10264822116570/B...
Success!
Configuring IP addresses on node S10264822116570/D...
Success!
Configuring Zeus on node S10264822116570/A...
Configuring Zeus on node S10264822116570/C...
Configuring Zeus on node S10264822116570/B...
Configuring Zeus on node S10264822116570/D...
Reconfiguration successful!
The IP address reconfiguration will disconnect any SSH sessions to cluster components. The cluster is
taken out of reconfiguration mode.
To Change a Controller VM IP Address (manual)
1. Log on to the hypervisor host with SSH or the IPMI remote console.
2. Log on to the Controller VM with SSH.
root@host# ssh nutanix@192.168.5.254
Enter the Controller VM nutanix password.
3. Restart genesis.
nutanix@cvm$ genesis restart
If the restart is successful, output similar to the following is displayed:
Stopping Genesis pids [1933, 30217, 30218, 30219, 30241]
Genesis started on pids [30378, 30379, 30380, 30381, 30403]
4. Change the network interface configuration.
a. Open the network interface configuration file.
nutanix@cvm$ sudo vi /etc/sysconfig/network-scripts/ifcfg-eth0
Enter the nutanix password.
b. Press A to edit values in the file.
c. Update entries for netmask, gateway, and address.
The block should look like this:
ONBOOT="yes"
NM_CONTROLLED="no"
NETMASK="subnet_mask"
IPADDR="cvm_ip_addr"
DEVICE="eth0"
TYPE="ethernet"
GATEWAY="gateway_ip_addr"
BOOTPROTO="none"
• Replace cvm_ip_addr with the IP address for the Controller VM.
• Replace subnet_mask with the subnet mask for cvm_ip_addr.
• Replace gateway_ip_addr with the gateway address for cvm_ip_addr.
d. Press Esc.
e. Type :wq and press Enter to save your changes.
5. Update the Zeus configuration.
a. Open the host configuration file.
nutanix@cvm$ sudo vi /etc/hosts
b. Press A to edit values in the file.
c. Update hosts zk1, zk2, and zk3 to match the changed Controller VM IP addresses (example entries are shown after this procedure).
d. Press Esc.
e. Type :wq and press Enter to save your changes.
6. Restart the virtual machine.
nutanix@cvm$ sudo reboot
Enter the nutanix password if prompted.
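The zk entries updated in step 5 look similar to the following. The IP addresses shown here are placeholders taken from the earlier example output; use the actual Controller VM IP addresses in your cluster.
172.16.8.188    zk1
172.16.8.189    zk2
172.16.8.190    zk3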
To Complete Cluster Reconfiguration
1. If you changed the IP addresses manually, take the cluster out of reconfiguration mode.
Perform these steps for every Controller VM in the cluster.
a. Log on to the Controller VM with SSH.
b. Take the Controller VM out of reconfiguration mode.
nutanix@cvm$ rm ~/.node_reconfigure
c. Restart genesis.
nutanix@cvm$ genesis restart
If the restart is successful, output similar to the following is displayed:
Stopping Genesis pids [1933, 30217, 30218, 30219, 30241]
Genesis started on pids [30378, 30379, 30380, 30381, 30403]
2. Log on to any Controller VM in the cluster with SSH.
3. Start the Nutanix cluster.
nutanix@cvm$ cluster start
If the cluster starts properly, output similar to the following is displayed for each node in the cluster:
CVM: 172.16.8.167 Up, ZeusLeader
Zeus UP [3148, 3161, 3162, 3163, 3170, 3180]
Scavenger UP [3333, 3345, 3346, 11997]
ConnectionSplicer UP [3379, 3392]
Hyperint UP [3394, 3407, 3408, 3429, 3440, 3447]
Medusa UP [3488, 3501, 3502, 3523, 3569]
DynamicRingChanger UP [4592, 4609, 4610, 4640]
Pithos UP [4613, 4625, 4626, 4678]
Stargate UP [4628, 4647, 4648, 4709]
Cerebro UP [4890, 4903, 4904, 4979]
Chronos UP [4906, 4918, 4919, 4968]
Curator UP [4922, 4934, 4935, 5064]
Prism UP [4939, 4951, 4952, 4978]
AlertManager UP [4954, 4966, 4967, 5022]
StatsAggregator UP [5017, 5039, 5040, 5091]
SysStatCollector UP [5046, 5061, 5062, 5098]
5: Field Installation
You can reimage a Nutanix node with the Phoenix ISO. This process installs the hypervisor and the
Nutanix Controller VM.
Note: Phoenix usage is restricted to Nutanix sales engineers, support engineers, and authorized
partners.
Phoenix can be used to cleanly install systems for POCs or to switch hypervisors.
NOS Installer Reference
Installation Options
Component       Option
Hypervisor      Clean Install Hypervisor: To install the selected hypervisor as part of complete reimaging.
Controller VM   Clean Install SVM: To install the Controller VM as part of complete reimaging or Controller VM boot drive replacement.
                Repair SVM: To retain Controller VM configuration.
                Note: Do not use this option except under guidance from Nutanix support.
Supported Products and Hypervisors
Product ESX 5.0U2 & 5.1U1 KVM Hyper-V
NX-1000 •
NX-2000 •
NX-2050 •
NX-3000 • •
NX-3050 • • •
NX-6050/NX-6070 •
To Image a Node
Before you begin.
• Download the Phoenix ISO to a workstation with access to the IPMI interface on the node that you want
to reimage.
• Gather the following required pieces of information: Block ID, Cluster ID, and Node Serial Number.
These items are assigned by Nutanix, and you must use the correct values.
This procedure describes how to image a node from an ISO on a workstation.
Repeat this procedure once for every node that you want to reimage.
1. Sign in to the IPMI web console.
2. Attach the ISO to the node.
a. Go to Remote Control and click Launch Console.
Accept any security warnings to start the console.
b. In the console, click Media > Virtual Media Wizard.
c. Click Browse next to ISO Image and select the ISO file.
d. Click Connect CD/DVD.
e. Go to Remote Control > Power Control.
f. Select Reset Server and click Perform Action.
The host restarts from the ISO.
3. In the boot menu, select Installer and press Enter.
If previous values for these parameters are detected on the node, they will be displayed.
4. Enter the required information.
→ If all previous values are displayed and you want to use them, press Y.
→ If some or all of the previous values are not displayed, enter the required values.
a. Block ID: Enter the unique block identifier assigned by Nutanix.
b. Model: Enter the product number.
c. Node Serial: Enter the unique node identifier assigned by Nutanix.
d. Cluster ID: Enter the unique cluster identifier assigned by Nutanix.
e. Node Position: Enter 1, 2, 3, or 4 for NX-3000; A, B, C, or D for all other 4-node blocks.
Warning: If you are imaging all nodes in a block, ensure that the Block ID is the same for all
nodes and that the Node Serial Number and Node Position are different.
5. Select both Clean Install Hypervisor and Clean Install SVM then select Start.
Installation begins and takes about 20 minutes.
6. In the Virtual Media window, click Disconnect next to CD Media.
7. In the IPMI console, go to Remote Control > Power Control.
8. Select Reset Server and click Perform Action.
The node restarts with the new image. After the node starts, additional configuration tasks run and
then the host restarts again. During this time, the host name is installing-please-be-patient. Wait
approximately 20 minutes until this stage completes before accessing the node.
Warning: Do not restart the host until the configuration is complete.
What to do next. Add the node to a cluster.
Part II: vSphere
6: vCenter Configuration
VMware vCenter enables the centralized management of multiple ESXi hosts. The Nutanix cluster in
vCenter must be configured according to Nutanix best practices.
While most customers prefer to use an existing vCenter, Nutanix provides a vCenter OVF, which is on
the Controller VMs in /home/nutanix/data/images/vcenter. You can deploy the OVF using the standard
procedures for vSphere.
To Use an Existing vCenter Server
1. Shut down the Nutanix vCenter VM.
2. Create a new cluster entity within the existing vCenter inventory and configure its settings based on
Nutanix best practices by following To Create a Nutanix Cluster in vCenter on page 48.
3. Add the Nutanix hosts to this new cluster by following To Add a Nutanix Node to vCenter on
page 51.
To Create a Nutanix Cluster in vCenter
1. Log on to vCenter with the vSphere client.
2. If you want the Nutanix cluster to be in its own datacenter or if there is no datacenter, click File > New >
Datacenter and type a meaningful name for the datacenter, such as NTNX-DC. Otherwise, proceed to the
next step.
You can also create the Nutanix cluster within an existing datacenter.
3. Right-click the datacenter node and select New Cluster.
4. Type a meaningful name for the cluster in the Name field, such as NTNX-Cluster.
5. Select the Turn on vSphere HA check box and click Next.
6. Select Admission Control > Enable.
7. Select Admission Control Policy > Percentage of cluster resources reserved as failover spare capacity, enter the percentage appropriate for the number of Nutanix nodes in the cluster, and then click Next.
Hosts (N+1) Percentage Hosts (N+2) Percentage Hosts (N+3) Percentage Hosts (N+4) Percentage
1 N/A 9 23% 17 18% 25 16%
2 N/A 10 20% 18 17% 26 15%
3 33% 11 18% 19 16% 27 15%
4 25% 12 17% 20 15% 28 14%
5 20% 13 15% 21 14% 29 14%
6 18% 14 14% 22 14% 30 13%
7 15% 15 13% 23 13% 31 13%
8 13% 16 13% 24 13% 32 13%
8. Click Next on the following three pages to accept the default values.
• Virtual Machine Options
• VM monitoring
• VMware EVC
9. Verify that Store the swapfile in the same directory as the virtual machine (recommended) is
selected and click Next.
10. Review the settings and then click Finish.
11. Add all Nutanix nodes to the vCenter cluster inventory.
See To Add a Nutanix Node to vCenter on page 51.
12. Right-click the Nutanix cluster node and select Edit Settings.
13. If vSphere HA and DRS are not enabled, select them on the Cluster Features page. Otherwise,
proceed to the next step.
Note: vSphere HA and DRS must be configured even if the customer does not plan to use
the features. The settings will be preserved within the vSphere cluster configuration, so if the
customer later decides to enable the feature, it will be pre-configured based on Nutanix best
practices.
14. Configure vSphere HA.
a. Select vSphere HA > Virtual Machine Options.
b. Change the VM restart priority of all Controller VMs to Disabled.
Tip: Controller VMs include the phrase CVM in their names. It may be necessary to expand
the Virtual Machine column to view the entire VM name.
c. Change the Host Isolation Response setting of all Controller VMs to Leave Powered On.
d. Select vSphere HA > VM Monitoring.
e. Change the VM Monitoring setting for all Controller VMs to Disabled.
f. Select vSphere HA > Datastore Heartbeating.
g. Click Select only from my preferred datastores and select the Nutanix datastore (NTNX-NFS).
h. If the cluster does not use vSphere HA, disable it on the Cluster Features page. Otherwise,
proceed to the next step.
15. Configure vSphere DRS.
a. Select vSphere DRS > Virtual Machine Options.
b. Change the Automation Level setting of all Controller VMs to Disabled.
c. Select vSphere DRS > Power Management.
d. Confirm that Off is selected as the default power management for the cluster.
e. If the cluster does not use vSphere DRS, disable it on the Cluster Features page. Otherwise,
proceed to the next step.
16. Click OK to close the cluster settings window.
To Add a Nutanix Node to vCenter
The cluster must be configured according to Nutanix specifications given in vSphere Cluster Settings on
page 53.
Tip: Refer to Default Cluster Credentials on page 2 for the default credentials of all cluster
components.
1. Log on to vCenter with the vSphere client.
2. Right-click the cluster and select Add Host.
3. Type the IP address of the ESXi host in the Host field.
4. Enter the ESXi host logon credentials in the Username and Password fields.
5. Click Next.
If a security or duplicate management alert appears, click Yes.
6. Review the Host Summary page and click Next.
7. Select a license to assign to the ESXi host and click Next.
8. Ensure that the Enable Lockdown Mode check box is left unselected and click Next.
Lockdown mode is not supported.
9. Click Finish.
10. Select the ESXi host and click the Configuration tab.
11. Configure DNS servers.
a. Click DNS and Routing > Properties.
b. Select Use the following DNS server address.
c. Type DNS server addresses in the Preferred DNS Server and Alternate DNS Server fields and
click OK.
12. Configure NTP servers.
a. Click Time Configuration > Properties > Options > NTP Settings > Add.
b. Type the NTP server address.
Add multiple NTP servers if required.
c. Click OK in the NTP Daemon (ntpd) Options and Time Configuration windows.
d. Click Time Configuration > Properties > Options > General.
e. Select Start automatically under Startup Policy.
f. Click Start.
g. Click OK in the NTP Daemon (ntpd) Options and Time Configuration windows.
13. Click Storage and confirm that NFS datastores are mounted.
14. Set the Controller VM to start automatically when the ESXi host is powered on.
a. Click the Configuration tab.
b. Click Virtual Machine Startup/Shutdown in the Software frame.
c. Select the Controller VM and click Properties.
d. Ensure that the Allow virtual machines to start and stop automatically with the system check
box is selected.
e. If the Controller VM is listed in Manual Startup, click Move Up to move the Controller VM into the
Automatic Startup section.
f. Click OK.
15. (NX-2000 only) Click Host Cache Configuration and confirm that the host cache is stored on the local
datastore.
If it is not correct, click Properties to update the location.
vSphere Cluster Settings
Certain vSphere cluster settings are required for Nutanix clusters.
vSphere HA and DRS must be configured even if the customer does not plan to use the feature. The
settings will be preserved within the vSphere cluster configuration, so if the customer later decides to
enable the feature, it will be pre-configured based on Nutanix best practices.
vSphere HA Settings
Enable host monitoring
Enable admission control and use the percentage-based policy with a value based on the
number of nodes in the cluster.
Set the VM Restart Priority of all Controller VMs to Disabled.
Set the Host Isolation Response of all Controller VMs to Leave Powered On.
Disable VM Monitoring for all Controller VMs.
Enable Datastore Heartbeating by clicking Select only from my preferred datastores and
choosing the Nutanix NFS datastore.
vSphere DRS Settings
Disable automation on all Controller VMs.
Leave power management disabled (set to Off).
Other Cluster Settings
Store VM swapfiles in the same directory as the virtual machine.
(NX-2000 only) Store host cache on the local datastore.
Failover Reservation Percentages
Hosts (N+1) Percentage Hosts (N+2) Percentage Hosts (N+3) Percentage Hosts (N+4) Percentage
1 N/A 9 23% 17 18% 25 16%
2 N/A 10 20% 18 17% 26 15%
3 33% 11 18% 19 16% 27 15%
4 25% 12 17% 20 15% 28 14%
5 20% 13 15% 21 14% 29 14%
6 18% 14 14% 22 14% 30 13%
7 15% 15 13% 23 13% 31 13%
8 13% 16 13% 24 13% 32 13%
7: VM Management
Migrating a VM to Another Cluster
You can live migrate a VM to an ESXi host in a Nutanix cluster. Usually this is done in the following cases:
• Migrate VMs from existing storage platform to Nutanix.
• Keep VMs running during disruptive upgrade or other downtime of Nutanix cluster.
When migrating VMs between vSphere clusters, the source host and NFS datastore are the ones currently running the VM. The target host and NFS datastore are the ones where the VM will run after migration. The target ESXi host and datastore must be part of a Nutanix cluster.
To accomplish this migration, you have to mount the NFS datastores from the target on the source. After
the migration is complete, you should unmount the datastores and block access.
To Migrate a VM to Another Cluster
Before you begin. Both the source host and the target host must be in the same vSphere cluster. Allow
NFS access to NDFS by adding the source host and target host to a whitelist, as described in To Configure
a Filesystem Whitelist.
To migrate a VM back to the source from the target, perform this same procedure with the target as the
new source and the source as the new target.
1. Sign in to the Nutanix web console.
2. Log on to vCenter with the vSphere client.
3. Mount the target NFS datastore on the source host and on the target host.
You can mount NFS datastores in the vSphere client by clicking Add Storage on the Configuration >
Storage screen for a host.
Note: Due to a limitation with VMware vSphere, a temporary name and the IP address of a
controller VM must be used to mount the target NFS datastore on both the source host and the
target host for this procedure.
Parameter Value
Server IP address of the Controller VM on the target ESXi host
Folder Name of the container that has the target NFS datastore (typically /nfs-ctr)
Datastore Name A temporary name for the NFS datastore (e.g., Temp-NTNX-NFS)
a. Select the source host and go to Configuration > Storage.
b. Click Add Storage and mount the target NFS datastore.
c. Select the target host and go to Configuration > Storage.
d. Click Add Storage and mount the target NFS datastore.
4. Change the VM datastore and host.
Do this for each VM that you want to live migrate to the target.
a. Right-click the VM and select Migrate.
b. Select Change datastore and click Next.
c. Select the temporary datastore and click Next then Finish.
The VM storage is moved to the temporary datastore on the target host.
d. Right-click the VM and select Migrate.
e. Select Change host and click Next.
f. Select the target host and click Next.
g. Ensure that High priority is selected and click Next then Finish.
The VM keeps running as it moves to the target host.
h. Right-click the VM and select Migrate.
i. Select Change datastore and click Next.
j. Select the target datastore and click Next then Finish.
The VM storage is moved to the target datastore on the target host.
5. Unmount the datastores in the vSphere client.
Warning: Do not unmount the NFS datastore with the IP address 192.168.5.2.
a. Select the source host and go to Configuration > Storage.
b. Right-click the temporary datastore and select Unmount.
c. Select the target host and go to Configuration > Storage.
d. Right-click the temporary datastore and select Unmount.
What to do next. NDFS is not intended to be used as a general-purpose NFS server. Once the migration is
complete, disable NFS access by removing the source host and target host from the whitelist, as described
in To Configure a Filesystem Whitelist.
vStorage APIs for Array Integration
To improve the vSphere cloning process, Nutanix provides a vStorage APIs for Array Integration (VAAI)
plugin. This plugin is installed by default during the Nutanix factory process.
Without the Nutanix VAAI plugin, the process of creating a full clone takes a significant amount of time
because all the data that comprises a VM is duplicated. This duplication also results in an increase in
storage consumption.
The Nutanix VAAI plugin efficiently makes full clones without reserving space for the clone. Read requests
for blocks that are shared between parent and clone are sent to the original vDisk that was created for the
parent VM. As the clone VM writes new blocks, the Nutanix file system allocates storage for those blocks.
This data management occurs completely at the storage layer, so the ESXi host sees a single file with the
full capacity that was allocated when the clone was created.
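To check whether the plugin is currently installed on a host, you can list the installed VIBs from the ESXi shell. This is an optional check; the VIB name matches the one used in the uninstall procedure below.
root@esx# esxcli software vib list | grep nfs-vaai-plugin
If the command returns a line for nfs-vaai-plugin, the plugin is installed on that host.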
To Clone a VM
1. Log on to vCenter with the vSphere client.
2. Right-click the VM and select Clone.
3. Follow the wizard to enter a name for the clone, choose a cluster, and choose a host.
4. Select the datastore that contains the source VM and click Next.
Note: If you choose a datastore other than the one that contains the source VM, the clone
operation will use the VMware implementation and not the Nutanix VAAI plugin.
5. If desired, set the guest customization parameters. Otherwise, proceed to the next step.
6. Click Finish.
To Uninstall the VAAI Plugin
Because the VAAI plugin is in the process of certification, the security level is set to allow community-
supported plugins. Organizations with strict security policies may need to uninstall the plugin if it was
installed during setup.
Perform this procedure on each ESXi host in the Nutanix cluster.
1. Log on to the ESXi host with SSH.
2. Uninstall the plugin.
root@esx# esxcli software vib remove --vibname nfs-vaai-plugin
This command should return the following message:
Message: The update completed successfully, but the system needs to be rebooted for the
changes to be effective.
3. Disallow community-supported plugins.
root@esx# esxcli software acceptance set --level=PartnerSupported
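As an optional check, confirm the new acceptance level before restarting the node.
root@esx# esxcli software acceptance get
The command should report PartnerSupported.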
4. Restart the node by following To Restart a Node on page 64.
Migrating vDisks to NFS
The Nutanix Virtual Computing Platform supports three types of storage for vDisks: VMFS, RDM, and NFS.
Nutanix recommends NFS for most situations. You can migrate VMFS and RDM vDisks to NFS.
Before migration, you must have an NFS datastore. You can determine if a datastore is NFS in
the vSphere client. NFS datastores have Server and Folder properties (for example, Server:
192.168.5.2, Folder: /ctr-ha). Datastore properties are shown in Datastores and Datastore Clusters >
Configuration > Datastore Details in the vSphere client.
To create a datastore, use the Nutanix web console or the datastore create nCLI command.
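For example, a datastore can be created from nCLI with a command similar to the following. The parameter names shown here (name and ctr-name) are an illustration only and may differ in your NOS release; confirm the exact syntax with the nCLI built-in help. The datastore and container names are the examples used elsewhere in this guide.
ncli> datastore create name="NTNX-NFS" ctr-name="ctr-ha"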
The type of vDisk determines the mechanism that you use to migrate it to NFS.
• To migrate VMFS vDisks to NFS, use storage vMotion by following To Migrate VMFS vDisks to NFS on
page 59.
This operation takes significant time for each vDisk because the data is physically copied.
• To migrate RDM vDisks to NFS, use the Nutanix migrate2nfs.py utility by following To Migrate RDM
vDisks to NFS on page 60.
This operation takes only a small amount of time for each vDisk because data is not physically copied.
To Migrate VMFS vDisks to NFS
Before you begin. Log on to vCenter with the vSphere client.
Perform this procedure for each VM that is supported by a VMFS vDisk. The migration takes a significant
amount of time.
1. Right-click the VM and select Migrate.
2. Click Change datastore and click Next.
3. Select the NFS datastore and click Next.
4. Click Finish.
The vDisk begins migration. When the migration is complete, the vSphere client Tasks & Events tab
shows that the Relocate virtual machine task is completed.
To Migrate RDM vDisks to NFS
The migrate2nfs.py utility is available on Controller VMs to rapidly migrate RDM vDisks to an NFS
datastore. This utility has the following restrictions:
• Guest VMs can be migrated only to an NFS datastore that is on the same container where the RDM
vDisk resides. For example, if the vDisk is in the ctr-ha container, the NFS datastore must be on the
ctr-ha container.
• ESXi has a maximum NFS vDisk size of 2 TB minus 512 bytes. To migrate vDisks to NFS, the partitions must be smaller than this maximum. If any vDisk exceeds this maximum, you must reduce its size in the guest VM before using this mechanism to migrate it. How to reduce the size is different for every operating system.
The following parameters are optional or are not always required.
--truncate_large_rdm_vmdks
Specify this switch to migrate vDisks larger than the maximum after reducing the size of the partition
in the guest operating system.
--filter=pattern
Specify a pattern with the --batch switch to restrict the vDisks based on the name, for example
Win7*. If you do not specify the --filter parameter in batch mode, all RDM vDisks are included.
--server=esxi_ip_addr and --svm_ip=cvm_ip_addr
Specify the ESXi host and Controller VM IP addresses if you are running the migrate2nfs.py script
on a Controller VM different from the node where the vDisk to migrate resides.
1. Log on to any Controller VM in the cluster with SSH.
2. Specify the logon credentials as environment variables.
nutanix@cvm$ export VI_USERNAME=root
nutanix@cvm$ export VI_PASSWORD=esxi_root_password
3. If you want to migrate one vDisk at a time, specify the VMX file.
nutanix@cvm$ migrate2nfs.py /vmfs/volumes/datastore_name/vm_dir/vm_name.vmx nfs_datastore
• Replace datastore_name with the name of the datastore, for example NTNX_datastore.
• Replace vm_dir/vm_name with the directory and the name of the VMX file.
4. If you want to migrate multiple vDisks at the same time, run migrate2nfs.py in batch mode.
Perform these steps for each ESXi host in the cluster.
a. List the VMs that will be migrated.
nutanix@cvm$ migrate2nfs.py --list_only --batch --server=esxi_ip_addr --svm_ip=cvm_ip_addr source_datastore nfs_datastore
• Replace source_datastore with the name of the datastore that contains the VM .vmx file, for
example NTNX_datastore.
• Replace nfs_datastore with the name of the NFS datastore, for example NTNX-NFS.
b. Migrate the VMs.
nutanix@cvm$ migrate2nfs.py --batch --server=esxi_ip_addr --svm_ip=cvm_ip_addr source_datastore nfs_datastore
Each VM takes approximately five minutes to migrate.
What to do next. Migrating the vDisks changes the device signature, which causes certain operating
systems to mark the disk as offline. How to mark the disk online is different for every operating system.
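As one illustration only (the exact steps depend on the guest operating system), a Windows guest can bring an offline disk back online with diskpart. The disk number below is a placeholder; select the disk that corresponds to the migrated vDisk.
C:\> diskpart
DISKPART> list disk
DISKPART> select disk 1
DISKPART> online disk
DISKPART> attributes disk clear readonly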
8: Node Management
A Nutanix cluster is composed of individual nodes, or host servers that run a hypervisor. Each node hosts
a Nutanix Controller VM, which coordinates management tasks with the Controller VMs on other nodes.
To Shut Down a Node in a Cluster
Before you begin. Shut down guest VMs, including vCenter and the vMA, that are running on the node, or
move them to other nodes in the cluster.
Caution: Shut down only one node at a time in each cluster. If more than one node needs to be shut down, shut down the entire cluster instead.
1. Log on to vCenter (or to the ESXi host if vCenter is not available) with the vSphere client.
2. Right-click the Controller VM and select Power > Shut Down Guest.
Note: Do not Power Off or Reset the Controller VM. Shutting down the Controller VM as a guest ensures that the cluster is aware that the Controller VM is unavailable.
3. Right-click the host and select Enter Maintenance Mode.
4. In the Confirm Maintenance Mode dialog box, uncheck Move powered off and suspended virtual
machines to other hosts in the cluster and click Yes.
The host is placed in maintenance mode, which prevents VMs from running on the host.
5. Right-click the node and select Shut Down.
Wait until vCenter shows that the host is not responding, which may take several minutes.
If you are logged on to the ESXi host rather than to vCenter, the vSphere client will disconnect when the
host shuts down.
To Start a Node in a Cluster
1. If the node is turned off, turn it on by pressing the power button on the front. Otherwise, proceed to the
next step.
2. Log on to vCenter (or to the node if vCenter is not running) with the vSphere client.
3. Right-click the ESXi host and select Exit Maintenance Mode.
4. Right-click the Controller VM and select Power > Power on.
Wait approximately 5 minutes for all services to start on the Controller VM.
5. Confirm that cluster services are running on the Controller VM.
nutanix@cvm$ ncli cluster status | grep -A 15 cvm_ip_addr
Output similar to the following is displayed.
Name : 10.1.56.197
Status : Up
Zeus : up
Scavenger : up
ConnectionSplicer : up
Hyperint : up
Medusa : up
Pithos : up
Stargate : up
Cerebro : up
Chronos : up
Curator : up
Prism : up
AlertManager : up
StatsAggregator : up
SysStatCollector : up
Every service listed should be up.
6. Right-click the ESXi host in the vSphere client and select Rescan for Datastores. Confirm that all
Nutanix datastores are available.
7. Verify that all services are up on all Controller VMs.
nutanix@cvm$ cluster status
If the cluster is running properly, output similar to the following is displayed for each node in the cluster:
CVM: 172.16.8.167 Up, ZeusLeader
Zeus UP [3148, 3161, 3162, 3163, 3170, 3180]
Scavenger UP [3333, 3345, 3346, 11997]
ConnectionSplicer UP [3379, 3392]
Hyperint UP [3394, 3407, 3408, 3429, 3440, 3447]
Medusa UP [3488, 3501, 3502, 3523, 3569]
DynamicRingChanger UP [4592, 4609, 4610, 4640]
Pithos UP [4613, 4625, 4626, 4678]
Stargate UP [4628, 4647, 4648, 4709]
Cerebro UP [4890, 4903, 4904, 4979]
Chronos UP [4906, 4918, 4919, 4968]
Curator UP [4922, 4934, 4935, 5064]
Prism UP [4939, 4951, 4952, 4978]
AlertManager UP [4954, 4966, 4967, 5022]
StatsAggregator UP [5017, 5039, 5040, 5091]
SysStatCollector UP [5046, 5061, 5062, 5098]
To Restart a Node
Before you begin. Shut down guest VMs, including vCenter and the vMA, that are running on the node, or
move them to other nodes in the cluster.
Use the following procedure when you need to restart a node in a Nutanix cluster.
1. Log on to vCenter (or to the ESXi host if the node is running the vCenter VM) with the vSphere client.
2. Right-click the Controller VM and select Power > Shut Down Guest.
Note: Do not Power Off or Reset the Controller VM. Shutting down the Controller VM as a guest ensures that the cluster is aware that the Controller VM is unavailable.
3. Right-click the host and select Enter Maintenance Mode.
In the Confirm Maintenance Mode dialog box, uncheck Move powered off and suspended virtual
machines to other hosts in the cluster and click Yes.
The host is placed in maintenance mode, which prevents VMs from running on the host.
4. Right-click the node and select Reboot.
Wait until vCenter shows that the host is not responding and then is responding again, which may take
several minutes.
If you are logged on to the ESXi host rather than to vCenter, the vSphere client will disconnect when the
host shuts down.
5. Right-click the ESXi host and select Exit Maintenance Mode.
6. Right-click the Controller VM and select Power > Power on.
Wait approximately 5 minutes for all services to start on the Controller VM.
7. Log on to the Controller VM with SSH.
8. Confirm that cluster services are running on the Controller VM.
nutanix@cvm$ ncli cluster status | grep -A 15 cvm_ip_addr
Output similar to the following is displayed.
Name : 10.1.56.197
Status : Up
Zeus : up
Scavenger : up
ConnectionSplicer : up
Hyperint : up
Medusa : up
Pithos : up
Stargate : up
Cerebro : up
Chronos : up
Curator : up
Prism : up
AlertManager : up
StatsAggregator : up
SysStatCollector : up
Every service listed should be up.
9. Right-click the ESXi host in the vSphere client and select Rescan for Datastores. Confirm that all
Nutanix datastores are available.
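As an optional cross-check from the ESXi shell, you can list the mounted filesystems and confirm that the Nutanix NFS datastores appear. This sketch uses ESXi 5.x commands and is not part of the original procedure.
root@esx# esxcli storage filesystem list | grep NFS
Each Nutanix NFS datastore should be listed as mounted.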
To Patch ESXi Hosts in a Cluster
Use the following procedure when you need to patch the ESXi hosts in a cluster without service
interruption.
Perform the following steps for each ESXi host in the cluster.
1. Shut down the node by following To Shut Down a Node in a Cluster on page 62, including moving
guest VMs to a running node in the cluster.
2. Patch the ESXi host using your normal procedures with VMware Update Manager or otherwise.
3. Start the node by following To Start a Node in a Cluster on page 63.
4. Log on to the Controller VM with SSH.
5. Confirm that cluster services are running on the Controller VM.
nutanix@cvm$ ncli cluster status | grep -A 15 cvm_ip_addr
Output similar to the following is displayed.
Name : 10.1.56.197
Status : Up
Zeus : up
Scavenger : up
ConnectionSplicer : up
Hyperint : up
Medusa : up
Pithos : up
Stargate : up
Cerebro : up
Chronos : up
Curator : up
Prism : up
AlertManager : up
StatsAggregator : up
SysStatCollector : up
Every service listed should be up.
Removing a Node
Before removing a node from a Nutanix cluster, ensure the following statements are true:
• The cluster has at least four nodes at the beginning of the process.
• The cluster will have at least three functional nodes at the conclusion of the process.
When you start planned removal of a node, the node is marked for removal and data is migrated to other
nodes in the cluster. After the node is prepared for removal, you can physically remove it from the block.
To Remove a Node from a Cluster
Before you begin.
• Ensure that all nodes that will be part of the cluster after node removal are running.
• Complete any add node operations on the cluster before removing nodes.
danishmna97
 
Climate Impact of Software Testing at Nordic Testing Days
Climate Impact of Software Testing at Nordic Testing DaysClimate Impact of Software Testing at Nordic Testing Days
Climate Impact of Software Testing at Nordic Testing Days
Kari Kakkonen
 
みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...
みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...
みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...
名前 です男
 
20240607 QFM018 Elixir Reading List May 2024
20240607 QFM018 Elixir Reading List May 202420240607 QFM018 Elixir Reading List May 2024
20240607 QFM018 Elixir Reading List May 2024
Matthew Sinclair
 

Recently uploaded (20)

Communications Mining Series - Zero to Hero - Session 1
Communications Mining Series - Zero to Hero - Session 1Communications Mining Series - Zero to Hero - Session 1
Communications Mining Series - Zero to Hero - Session 1
 
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!
 
Enchancing adoption of Open Source Libraries. A case study on Albumentations.AI
Enchancing adoption of Open Source Libraries. A case study on Albumentations.AIEnchancing adoption of Open Source Libraries. A case study on Albumentations.AI
Enchancing adoption of Open Source Libraries. A case study on Albumentations.AI
 
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
 
Introduction to CHERI technology - Cybersecurity
Introduction to CHERI technology - CybersecurityIntroduction to CHERI technology - Cybersecurity
Introduction to CHERI technology - Cybersecurity
 
Epistemic Interaction - tuning interfaces to provide information for AI support
Epistemic Interaction - tuning interfaces to provide information for AI supportEpistemic Interaction - tuning interfaces to provide information for AI support
Epistemic Interaction - tuning interfaces to provide information for AI support
 
Full-RAG: A modern architecture for hyper-personalization
Full-RAG: A modern architecture for hyper-personalizationFull-RAG: A modern architecture for hyper-personalization
Full-RAG: A modern architecture for hyper-personalization
 
20240605 QFM017 Machine Intelligence Reading List May 2024
20240605 QFM017 Machine Intelligence Reading List May 202420240605 QFM017 Machine Intelligence Reading List May 2024
20240605 QFM017 Machine Intelligence Reading List May 2024
 
Securing your Kubernetes cluster_ a step-by-step guide to success !
Securing your Kubernetes cluster_ a step-by-step guide to success !Securing your Kubernetes cluster_ a step-by-step guide to success !
Securing your Kubernetes cluster_ a step-by-step guide to success !
 
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdfFIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
 
GridMate - End to end testing is a critical piece to ensure quality and avoid...
GridMate - End to end testing is a critical piece to ensure quality and avoid...GridMate - End to end testing is a critical piece to ensure quality and avoid...
GridMate - End to end testing is a critical piece to ensure quality and avoid...
 
20240609 QFM020 Irresponsible AI Reading List May 2024
20240609 QFM020 Irresponsible AI Reading List May 202420240609 QFM020 Irresponsible AI Reading List May 2024
20240609 QFM020 Irresponsible AI Reading List May 2024
 
GraphSummit Singapore | Neo4j Product Vision & Roadmap - Q2 2024
GraphSummit Singapore | Neo4j Product Vision & Roadmap - Q2 2024GraphSummit Singapore | Neo4j Product Vision & Roadmap - Q2 2024
GraphSummit Singapore | Neo4j Product Vision & Roadmap - Q2 2024
 
By Design, not by Accident - Agile Venture Bolzano 2024
By Design, not by Accident - Agile Venture Bolzano 2024By Design, not by Accident - Agile Venture Bolzano 2024
By Design, not by Accident - Agile Venture Bolzano 2024
 
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfObservability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf
 
Essentials of Automations: The Art of Triggers and Actions in FME
Essentials of Automations: The Art of Triggers and Actions in FMEEssentials of Automations: The Art of Triggers and Actions in FME
Essentials of Automations: The Art of Triggers and Actions in FME
 
How to Get CNIC Information System with Paksim Ga.pptx
How to Get CNIC Information System with Paksim Ga.pptxHow to Get CNIC Information System with Paksim Ga.pptx
How to Get CNIC Information System with Paksim Ga.pptx
 
Climate Impact of Software Testing at Nordic Testing Days
Climate Impact of Software Testing at Nordic Testing DaysClimate Impact of Software Testing at Nordic Testing Days
Climate Impact of Software Testing at Nordic Testing Days
 
みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...
みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...
みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...
 
20240607 QFM018 Elixir Reading List May 2024
20240607 QFM018 Elixir Reading List May 202420240607 QFM018 Elixir Reading List May 2024
20240607 QFM018 Elixir Reading List May 2024
 

Platform administration guide-nos_v3_5

Contents

Part I: NOS .......................................... 6
1: Cluster Management ................................ 7
  To Start a Nutanix Cluster ......................... 7
  To Stop a Cluster .................................. 7
  To Destroy a Cluster ............................... 8
  To Create Clusters from a Multiblock Cluster ....... 9
  Disaster Protection ............................... 12
2: Password Management .............................. 15
  To Change the Controller VM Password .............. 15
  To Change the ESXi Host Password .................. 16
  To Change the KVM Host Password ................... 17
  To Change the IPMI Password ....................... 18
3: Alerts ........................................... 19
  Cluster ........................................... 19
  Controller VM ..................................... 22
  Guest VM .......................................... 24
  Hardware .......................................... 26
  Storage ........................................... 30
4: IP Address Configuration ......................... 33
  To Reconfigure the Cluster ........................ 33
  To Prepare to Reconfigure the Cluster ............. 34
  Remote Console IP Address Configuration ........... 35
  To Configure Host Networking ...................... 38
  To Configure Host Networking (KVM) ................ 39
  To Update the ESXi Host Password in vCenter ....... 40
  To Change the Controller VM IP Addresses .......... 40
  To Change a Controller VM IP Address (manual) ..... 41
  To Complete Cluster Reconfiguration ............... 42
5: Field Installation ............................... 44
  NOS Installer Reference ........................... 44
  To Image a Node ................................... 44
Part II: vSphere .................................... 47
6: vCenter Configuration ............................ 48
  To Use an Existing vCenter Server ................. 48
7: VM Management .................................... 55
  Migrating a VM to Another Cluster ................. 55
  vStorage APIs for Array Integration ............... 57
  Migrating vDisks to NFS ........................... 58
8: Node Management .................................. 62
  To Shut Down a Node in a Cluster .................. 62
  To Start a Node in a Cluster ...................... 63
  To Restart a Node ................................. 64
  To Patch ESXi Hosts in a Cluster .................. 65
  Removing a Node ................................... 65
9: Storage Replication Adapter for Site Recovery Manager ... 68
  To Configure the Nutanix Cluster for SRA Replication ... 69
  To Configure SRA Replication on the SRM Servers ... 70
Part III: KVM ....................................... 72
10: Kernel-based Virtual Machine (KVM) Architecture . 73
  Storage Overview .................................. 73
  VM Commands ....................................... 74
11: VM Management Commands .......................... 75
  virt_attach_disk.py ............................... 76
  virt_check_disks.py ............................... 77
  virt_clone.py ..................................... 79
  virt_detach_disk.py ............................... 80
  virt_eject_cdrom.py ............................... 81
  virt_insert_cdrom.py .............................. 82
  virt_install.py ................................... 83
  virt_kill.py ...................................... 85
  virt_kill_snapshot.py ............................. 86
  virt_list_disks.py ................................ 86
  virt_migrate.py ................................... 87
  virt_multiclone.py ................................ 88
  virt_snapshot.py .................................. 89
  nfs_ls.py ......................................... 90
Part IV: Hardware ................................... 93
12: Node Order ...................................... 94
13: System Specifications ........................... 98
  NX-1000 Series System Specifications .............. 98
  NX-2000 System Specifications .................... 100
  NX-3000 System Specifications .................... 103
  NX-3050 System Specifications .................... 105
  NX-6000 Series System Specifications ............. 108
Part I: NOS
1: Cluster Management

Although each host in a Nutanix cluster runs a hypervisor independent of other hosts in the cluster, some operations affect the entire cluster.

To Start a Nutanix Cluster

1. Log on to any Controller VM in the cluster with SSH.
2. Start the Nutanix cluster.
   nutanix@cvm$ cluster start
   If the cluster starts properly, output similar to the following is displayed for each node in the cluster:
   CVM: 172.16.8.167 Up, ZeusLeader
     Zeus                UP  [3148, 3161, 3162, 3163, 3170, 3180]
     Scavenger           UP  [3333, 3345, 3346, 11997]
     ConnectionSplicer   UP  [3379, 3392]
     Hyperint            UP  [3394, 3407, 3408, 3429, 3440, 3447]
     Medusa              UP  [3488, 3501, 3502, 3523, 3569]
     DynamicRingChanger  UP  [4592, 4609, 4610, 4640]
     Pithos              UP  [4613, 4625, 4626, 4678]
     Stargate            UP  [4628, 4647, 4648, 4709]
     Cerebro             UP  [4890, 4903, 4904, 4979]
     Chronos             UP  [4906, 4918, 4919, 4968]
     Curator             UP  [4922, 4934, 4935, 5064]
     Prism               UP  [4939, 4951, 4952, 4978]
     AlertManager        UP  [4954, 4966, 4967, 5022]
     StatsAggregator     UP  [5017, 5039, 5040, 5091]
     SysStatCollector    UP  [5046, 5061, 5062, 5098]

What to do next: After you have verified that the cluster is running, you can start guest VMs.
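Between planned starts and stops, you can check the current state of the services from any Controller VM with the status argument to the same cluster script used above. The exact list of services in the output varies by NOS release, so treat this as an illustrative check rather than a definitive reference:

nutanix@cvm$ cluster status

Every Controller VM should report Up with all services listed as UP before you begin starting guest VMs.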
To Stop a Cluster

Before you begin: Shut down all guest virtual machines, including vCenter if it is running on the cluster. Do not shut down Nutanix Controller VMs.

Note: This procedure stops all services provided by guest virtual machines, the Nutanix cluster, and the hypervisor host.

1. Log on to a running Controller VM in the cluster with SSH.
2. Stop the Nutanix cluster.
   nutanix@cvm$ cluster stop
   Wait to proceed until output similar to the following is displayed for every Controller VM in the cluster.
   CVM: 172.16.8.191 Up, ZeusLeader
     Zeus                UP    [3167, 3180, 3181, 3182, 3191, 3201]
     Scavenger           UP    [3334, 3351, 3352, 3353]
     ConnectionSplicer   DOWN  []
     Hyperint            DOWN  []
     Medusa              DOWN  []
     DynamicRingChanger  DOWN  []
     Pithos              DOWN  []
     Stargate            DOWN  []
     Cerebro             DOWN  []
     Chronos             DOWN  []
     Curator             DOWN  []
     Prism               DOWN  []
     AlertManager        DOWN  []
     StatsAggregator     DOWN  []
     SysStatCollector    DOWN  []

To Destroy a Cluster

Destroying a cluster resets all nodes in the cluster to the factory configuration. All cluster configuration and guest VM data is unrecoverable after destroying the cluster.

1. Log on to any Controller VM in the cluster with SSH.
2. Stop the Nutanix cluster.
   nutanix@cvm$ cluster stop
   Wait to proceed until output similar to the following is displayed for every Controller VM in the cluster.
   CVM: 172.16.8.191 Up, ZeusLeader
     Zeus                UP    [3167, 3180, 3181, 3182, 3191, 3201]
     Scavenger           UP    [3334, 3351, 3352, 3353]
     ConnectionSplicer   DOWN  []
     Hyperint            DOWN  []
     Medusa              DOWN  []
     DynamicRingChanger  DOWN  []
     Pithos              DOWN  []
     Stargate            DOWN  []
     Cerebro             DOWN  []
     Chronos             DOWN  []
     Curator             DOWN  []
     Prism               DOWN  []
     AlertManager        DOWN  []
     StatsAggregator     DOWN  []
     SysStatCollector    DOWN  []
3. If the nodes in the cluster have Intel PCIe-SSD drives, ensure they are mapped properly.
   Check if the node has an Intel PCIe-SSD drive.
   nutanix@cvm$ lsscsi | grep 'SSD 910'
   → If no items are listed, the node does not have an Intel PCIe-SSD drive and you can proceed to the next step.
   → If two items are listed, the node does have an Intel PCIe-SSD drive.
   If the node has an Intel PCIe-SSD drive, check if it is mapped correctly.
   nutanix@cvm$ cat /proc/partitions | grep dm
   → If two items are listed, the drive is mapped correctly and you can proceed.
   → If no items are listed, the drive is not mapped correctly. Start then stop the cluster before proceeding.
   Perform this check on every Controller VM in the cluster.
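To avoid repeating the two checks in step 3 by hand on each node, you could run them from one Controller VM in a loop. This is a sketch only: it assumes the svmips helper (which prints the Controller VM IP addresses) is available and that key-based SSH between Controller VMs is configured, neither of which is described in this guide.

nutanix@cvm$ for ip in $(svmips); do echo "=== $ip ==="; ssh $ip "lsscsi | grep 'SSD 910'; cat /proc/partitions | grep dm"; done

Interpret the results exactly as in step 3: any node that lists 'SSD 910' entries should also list dm entries before you destroy the cluster.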
4. Destroy the cluster.
   Caution: Performing this operation deletes all cluster and guest VM data in the cluster.
   nutanix@cvm$ cluster -s cvm_ip_addr destroy

To Create Clusters from a Multiblock Cluster

The minimum size for a cluster is three nodes.

1. Remove nodes from the existing cluster.
   → If you want to preserve data on the existing cluster, remove nodes by following To Remove a Node from a Cluster on page 65.
   → If you want multiple new clusters, destroy the existing cluster by following To Destroy a Cluster on page 8.
2. Create one or more new clusters by following To Configure the Cluster on page 10.

Product Mixing Restrictions

While a Nutanix cluster can include different products, there are some restrictions.

Caution: Do not configure a cluster that violates any of the following rules.

Compatibility Matrix

               NX-1000  NX-2000  NX-2050  NX-3000  NX-3050  NX-6000
  NX-1000 (1)     •        •        •        •        •        •
  NX-2000         •        •        •        •        •
  NX-2050         •        •        •        •        •        •
  NX-3000         •        •        •        •        •        •
  NX-3050         •        •        •        •        •        •
  NX-6000 (2)     •                 •        •        • (3)    •

1. NX-1000 nodes can be mixed with other products in the same cluster only when they are running 10 GbE networking; they cannot be mixed when running 1 GbE networking. If NX-1000 nodes are using the 1 GbE interface, the maximum cluster size is 8 nodes. If the nodes are using the 10 GbE interface, the cluster has no limits other than the maximum supported cluster size that applies to all products.
2. NX-6000 nodes cannot be mixed with NX-2000 nodes in the same cluster.
3. Because it has a larger flash tier, the NX-3050 is the recommended product to mix with NX-6000.

• Any combination of NX-2000, NX-2050, NX-3000, and NX-3050 nodes can be mixed in the same cluster.
• All nodes in a cluster must be the same hypervisor type (ESXi or KVM).
• All Controller VMs in a cluster must have the same NOS version.
• Mixed Nutanix clusters comprising NX-2000 nodes and other products are supported as specified above. However, because the NX-2000 processor architecture differs from other models, vSphere does not support enhanced/live vMotion of VMs from one type of node to another unless Enhanced vMotion Capability (EVC) is enabled. For more information about EVC, see the vSphere 5 documentation and the following VMware knowledge base articles:
  • Enhanced vMotion Compatibility (EVC) processor support [1003212]
  • EVC and CPU Compatibility FAQ [1005764]

To Configure the Cluster

Before you begin:

• Confirm that the system you are using to configure the cluster meets the following requirements:
  • IPv6 link-local enabled.
  • Windows 7, Vista, or Mac OS.
  • (Windows only) Bonjour installed (included with iTunes or downloadable from http://support.apple.com/kb/DL999).
• Determine the IPv6 service name of any Controller VM in the cluster.
  IPv6 service names are uniquely generated at the factory and have the following form (note the final period):
  NTNX-block_serial_number-node_location-CVM.local.
  On the right side of the block toward the front is a label that has the block_serial_number (for example, 12AM3K520060). The node_location is a number 1-4 for NX-3000, a letter A-D for NX-1000/NX-2000/NX-3050, or a letter A-B for NX-6000.
  If you need to confirm whether IPv6 link-local is enabled on the network, or if you do not have access to get the node serial number, see the Nutanix support knowledge base for alternative methods.
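Before opening the cluster creation page, it can help to confirm that your workstation can actually resolve and reach a Controller VM by its IPv6 service name. The following check is a sketch only: the serial number and node position reuse the example values above, and it assumes your operating system resolves .local names through Bonjour/mDNS (the command may be named ping -6 on some systems):

user@host$ ping6 NTNX-12AM3K520060-A-CVM.local.

If the name does not resolve, revisit the IPv6 link-local and Bonjour requirements before proceeding.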
1. Open a web browser.
   Nutanix recommends using Internet Explorer 9 for Windows and Safari for Mac OS.
   Note: Internet Explorer requires protected mode to be disabled. Go to Tools > Internet Options > Security, clear the Enable Protected Mode check box, and restart the browser.
2. Navigate to http://cvm_host_name:2100/cluster_init.html.
   Replace cvm_host_name with the IPv6 service name of any Controller VM that will be added to the cluster.
   Following is an example URL to access the cluster creation page on a Controller VM:
   http://NTNX-12AM3K520060-1-CVM.local.:2100/cluster_init.html
   If the cluster_init.html page is blank, then the Controller VM is already part of a cluster. Connect to a Controller VM that is not part of a cluster.
3. Type a meaningful value in the Cluster Name field.
   This value is appended to all automated communication between the cluster and Nutanix support. It should include the customer's name and, if necessary, a modifier that differentiates this cluster from any other clusters that the customer might have.
   Note: This entity has the following naming restrictions:
   • The maximum length is 75 characters.
   • Allowed characters are uppercase and lowercase standard Latin letters (A-Z and a-z), decimal digits (0-9), dots (.), hyphens (-), and underscores (_).
4. Type the appropriate DNS and NTP addresses in the respective fields.
5. Type the appropriate subnet masks in the Subnet Mask row.
6. Type the appropriate default gateway IP addresses in the Default Gateway row.
7. Select the check box next to each node that you want to add to the cluster.
   All unconfigured nodes on the current network are presented on this web page. If you will be configuring multiple clusters, be sure that you only select the nodes that should be part of the current cluster.
8. Provide an IP address for all components in the cluster.
   Note: The unconfigured nodes are not listed according to their position in the block. Ensure that you assign the intended IP address to each node.
9. Click Create.
   Wait until the Log Messages section of the page reports that the cluster has been successfully configured. Output similar to the following indicates successful cluster configuration.
   Configuring IP addresses on node 12AM2K420010/A...
   Configuring IP addresses on node 12AM2K420010/B...
   Configuring IP addresses on node 12AM2K420010/C...
   Configuring IP addresses on node 12AM2K420010/D...
   Configuring Zeus on node 12AM2K420010/A...
   Configuring Zeus on node 12AM2K420010/B...
   Configuring Zeus on node 12AM2K420010/C...
   Configuring Zeus on node 12AM2K420010/D...
   Initializing cluster...
   Cluster successfully initialized!
   Initializing the cluster DNS and NTP servers...
   Successfully updated the cluster NTP and DNS server list
10. Log on to any Controller VM in the cluster with SSH.
11. Start the Nutanix cluster.
    nutanix@cvm$ cluster start
    If the cluster starts properly, output similar to the following is displayed for each node in the cluster:
    CVM: 172.16.8.167 Up, ZeusLeader
      Zeus                UP  [3148, 3161, 3162, 3163, 3170, 3180]
      Scavenger           UP  [3333, 3345, 3346, 11997]
      ConnectionSplicer   UP  [3379, 3392]
      Hyperint            UP  [3394, 3407, 3408, 3429, 3440, 3447]
      Medusa              UP  [3488, 3501, 3502, 3523, 3569]
      DynamicRingChanger  UP  [4592, 4609, 4610, 4640]
      Pithos              UP  [4613, 4625, 4626, 4678]
      Stargate            UP  [4628, 4647, 4648, 4709]
      Cerebro             UP  [4890, 4903, 4904, 4979]
      Chronos             UP  [4906, 4918, 4919, 4968]
      Curator             UP  [4922, 4934, 4935, 5064]
      Prism               UP  [4939, 4951, 4952, 4978]
      AlertManager        UP  [4954, 4966, 4967, 5022]
      StatsAggregator     UP  [5017, 5039, 5040, 5091]
      SysStatCollector    UP  [5046, 5061, 5062, 5098]
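After the cluster is up, you may want to confirm that the DNS and NTP servers entered in step 4 were applied. The nCLI sub-commands below reflect typical nCLI naming and should be treated as an assumption; the exact names can differ between NOS releases, so confirm them in the nCLI help for the cluster entity:

ncli> cluster get-name-servers
ncli> cluster get-ntp-servers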
Disaster Protection

After VM protection is configured in the web console, managing snapshots and failing over from one site to another are accomplished with the nCLI.

To Manage VM Snapshots

You can manage VM snapshots, including restoration, with these nCLI commands.

• Check status of replication.
  ncli> pd list-replication-status
• List snapshots.
  ncli> pd list-snapshots name="pd_name"
• Restore VMs from backup.
  ncli> pd rollback-vms name="pd_name" vm-names="vm_ids" snap-id="snapshot_id" path-prefix="folder_name"
  • Replace vm_ids with a comma-separated list of VM IDs as given in vm list.
  • Replace snapshot_id with a snapshot ID as given by pd list-snapshots.
  • Replace folder_name with the name you want to give the VM folder on the datastore, which will be created if it does not exist.
  The VM is restored to the container where the snapshot resides. If you used a DAS-SATA-only container for replication, after restoring the VM move it to a container suitable for active workloads with Storage vMotion.
• Restore NFS files from backup.
  ncli> pd rollback-nfs-files name="pd_name" files="nfs_files" snap-id="snapshot_id"
  • Replace nfs_files with a comma-separated list of NFS files to restore.
  • Replace snapshot_id with a snapshot ID as given by pd list-snapshots.
  If you want to replace the existing file, include replace-nfs-files=true.
• Remove snapshots.
  ncli> pd rm-snapshot name="pd_name" snap-ids="snapshot_ids"
  Replace snapshot_ids with a comma-separated list of snapshot IDs as given in pd list-snapshots.
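As a concrete illustration of the restore workflow above, the following sequence lists the snapshots for a protection domain and then rolls two VMs back to one of them. The protection domain name, VM IDs, snapshot ID, and folder name are all hypothetical placeholders; substitute the values reported by your own cluster:

ncli> pd list-snapshots name="pd-finance"
ncli> pd rollback-vms name="pd-finance" vm-names="503,517" snap-id="24861" path-prefix="restored_vms"

The restored VMs land in the container that holds the snapshot, so move them with Storage vMotion afterward if that container is not intended for active workloads.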
To Fail Over from One Site to Another

Disaster failover

Connect to the backup site and activate it.
ncli> pd activate name="pd_name"
This operation does the following:
1. Restores all VM files from the last fully-replicated snapshot.
2. Registers the VMs on the recovery site.
3. Marks the failover site protection domain as active.

Planned failover

Connect to the primary site and specify the failover site to migrate to.
ncli> pd migrate name="pd_name" remote-site="remote_site_name2"
This operation does the following:
1. Creates and replicates a snapshot of the protection domain.
2. Shuts down the VMs on the local site.
3. Creates and replicates another snapshot of the protection domain.
4. Unregisters all VMs and removes their associated files.
5. Marks the local site protection domain as inactive.
6. Restores all VM files from the last snapshot and registers them on the remote site.
7. Marks the remote site protection domain as active.
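For example, a planned failover of the same hypothetical protection domain to a remote site named dr-site1, followed by a check that replication has completed, might look like this (both commands are taken from the sections above; the names are placeholders):

ncli> pd migrate name="pd-finance" remote-site="dr-site1"
ncli> pd list-replication-status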
2: Password Management

You can change the passwords of the following cluster components:
• Nutanix management interfaces
• Nutanix Controller VMs
• Hypervisor software
• Node hardware (management port)

Requirements

• You know the IP address of the component that you want to modify.
• You know the current password of the component you want to modify. The default passwords of all components are provided in Default Cluster Credentials on page 2.
• You have selected a password that has 8 or more characters and at least one of each of the following:
  • Uppercase letters
  • Lowercase letters
  • Numerals
  • Symbols

To Change the Controller VM Password

Perform these steps on every Controller VM in the cluster.

Warning: The nutanix user must have the same password on all Controller VMs.

1. Log on to the Controller VM with SSH.
2. Change the nutanix user password.
   nutanix@cvm$ passwd
3. Respond to the prompts, providing the current and new nutanix user password.
   Changing password for nutanix.
   Old Password:
   New password:
   Retype new password:
   Password changed.
   Note: The password must meet the following complexity requirements:
   • At least 9 characters long
   • At least 2 lowercase characters
   • At least 2 uppercase characters
   • At least 2 numbers
   • At least 2 special characters
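Because the nutanix user must end up with the same password everywhere, it is worth spot-checking each Controller VM after you finish. Using the same placeholder style as the rest of this guide, open an SSH session to each peer and enter the new password when prompted:

nutanix@cvm$ ssh nutanix@cvm_ip_addr hostname

If any Controller VM still accepts only the old password, repeat the procedure on that node.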
To Change the ESXi Host Password

The cluster software needs to be able to log into each host as root to perform standard cluster operations, such as mounting a new NFS datastore or querying the status of VMs in the cluster. Therefore, after changing the ESXi root password it is critical to update the cluster configuration with the new password.

Tip: Although it is not required for the root user to have the same password on all hosts, doing so will make cluster management and support much easier. If you do select a different password for one or more hosts, make sure to note the password for each host.

1. Change the root password of all hosts.
   Perform these steps on every ESXi host in the cluster.
   a. Log on to the ESXi host with SSH.
   b. Change the root password.
      root@esx# passwd root
   c. Respond to the prompts, providing the current and new root password.
      Changing password for root.
      Old Password:
      New password:
      Retype new password:
      Password changed.
2. Update the root user password for all hosts in the Zeus configuration.
   Warning: If you do not perform this step, the web console will no longer show correct statistics and alerts, and other cluster operations will fail.
   a. Log on to any Controller VM in the cluster with SSH.
   b. Find the host IDs.
      nutanix@cvm$ ncli -p 'admin_password' host list | grep -E 'ID|Hypervisor Key'
      Note the host ID for each hypervisor host.
   c. Update the hypervisor host password.
      nutanix@cvm$ ncli -p 'admin_password' managementserver edit name=host_addr password='host_password'
      nutanix@cvm$ ncli -p 'admin_password' host edit id=host_id hypervisor-password='host_password'
      • Replace host_addr with the IP address of the hypervisor host.
      • Replace host_id with a host ID you determined in the preceding step.
      • Replace host_password with the root password on the corresponding hypervisor host.
      Perform this step for every hypervisor host in the cluster.
3. Update the ESXi host password in vCenter.
   a. Log on to vCenter with the vSphere client.
   b. Right-click the host with the changed password and select Disconnect.
   c. Right-click the host and select Connect.
   d. Enter the new password and complete the Add Host Wizard.
      If reconnecting the host fails, remove it from the cluster and add it again.
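As a worked example of step 2, suppose host list reports a host with ID 8 at 172.16.8.30 and you set its root password to New.Passw0rd. The two update commands would then look like the following (all three values are hypothetical):

nutanix@cvm$ ncli -p 'admin_password' managementserver edit name=172.16.8.30 password='New.Passw0rd'
nutanix@cvm$ ncli -p 'admin_password' host edit id=8 hypervisor-password='New.Passw0rd'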
To Change the KVM Host Password

The cluster software needs to be able to log into each host as root to perform standard cluster operations, such as mounting a new NFS datastore or querying the status of VMs in the cluster. Therefore, after changing the KVM root password it is critical to update the cluster configuration with the new password.

Tip: Although it is not required for the root user to have the same password on all hosts, doing so will make cluster management and support much easier. If you do select a different password for one or more hosts, make sure to note the password for each host.

1. Change the root password of all hosts.
   Perform these steps on every KVM host in the cluster.
   a. Log on to the KVM host with SSH.
   b. Change the root password.
      root@kvm# passwd root
   c. Respond to the prompts, providing the current and new root password.
      Changing password for root.
      Old Password:
      New password:
      Retype new password:
      Password changed.
2. Update the root user password for all hosts in the Zeus configuration.
   Warning: If you do not perform this step, the web console will no longer show correct statistics and alerts, and other cluster operations will fail.
   a. Log on to any Controller VM in the cluster with SSH.
   b. Find the host IDs.
      nutanix@cvm$ ncli -p 'admin_password' host list | grep -E 'ID|Hypervisor Key'
      Note the host ID for each hypervisor host.
   c. Update the hypervisor host password.
      nutanix@cvm$ ncli -p 'admin_password' managementserver edit name=host_addr password='host_password'
      nutanix@cvm$ ncli -p 'admin_password' host edit id=host_id hypervisor-password='host_password'
      • Replace host_addr with the IP address of the hypervisor host.
      • Replace host_id with a host ID you determined in the preceding step.
      • Replace host_password with the root password on the corresponding hypervisor host.
      Perform this step for every hypervisor host in the cluster.
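After updating the Zeus configuration you can quickly confirm that each KVM host is still reachable as root from a Controller VM. The loop below is a sketch only: it assumes the hostips helper (which prints the hypervisor host IP addresses) exists on the Controller VM, and it will prompt for the new root password on each host unless key-based SSH is configured:

nutanix@cvm$ for ip in $(hostips); do ssh root@$ip hostname; done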
To Change the IPMI Password

The cluster software needs to be able to log into the management interface on each host to perform certain operations, such as reading hardware alerts. Therefore, after changing the IPMI password it is critical to update the cluster configuration with the new password.

Tip: Although it is not required for the administrative user to have the same password on all hosts, doing so will make cluster management much easier. If you do select a different password for one or more hosts, make sure to note the password for each host.

1. Change the administrative user password of all IPMI hosts.

   Product                        Administrative user
   NX-1000, NX-3050, NX-6000      ADMIN
   NX-3000                        admin
   NX-2000                        ADMIN

   Perform these steps on every IPMI host in the cluster.
   a. Sign in to the IPMI web interface as the administrative user.
   b. Click Configuration.
   c. Click Users.
   d. Select the administrative user and then click Modify User.
   e. Type the new password in both text fields and then click Modify.
   f. Click OK to close the confirmation window.
2. Update the administrative user password for all hosts in the Zeus configuration.
   a. Log on to any Controller VM in the cluster with SSH.
   b. Generate a list of all hosts in the cluster.
      nutanix@cvm$ ncli -p 'admin_password' host list | grep -E 'ID|IPMI Address'
      Note the host ID of each entry in the list.
   c. Update the IPMI password.
      nutanix@cvm$ ncli -p 'admin_password' host edit id=host_id ipmi-password='ipmi_password'
      • Replace host_id with a host ID you determined in the preceding step.
      • Replace ipmi_password with the administrative user password on the corresponding IPMI host.
      Perform this step for every IPMI host in the cluster.
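Continuing the placeholder style used above, a worked example of step 2c for a host whose ID turned out to be 8 and whose IPMI administrative password was changed to New.IPMI.Pw would be (both values are hypothetical):

nutanix@cvm$ ncli -p 'admin_password' host edit id=8 ipmi-password='New.IPMI.Pw'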
3: Alerts

This section lists all the NOS alerts with cause and resolution, sorted by category.
• Cluster
• Controller VM
• Guest VM
• Hardware
• Storage

Cluster

CassandraDetachedFromRing [A1055]
Message: Cassandra on CVM ip_address is now detached from ring due to reason.
Cause: Either a metadata drive has failed, the node was down for an extended period of time, or an unexpected subsystem fault was encountered, so the node was removed from the metadata store.
Resolution: If the metadata drive has failed, replace the metadata drive as soon as possible. Refer to the Nutanix documentation for instructions. If the node was down for an extended period of time and is now running, add it back to the metadata store with the "host enable-metadata-store" nCLI command. Otherwise, contact Nutanix support.
Severity: kCritical

CassandraMarkedToBeDetached [A1054]
Message: Cassandra on CVM ip_address is marked to be detached from ring due to reason.
Cause: Either a metadata drive has failed, the node was down for an extended period of time, or an unexpected subsystem fault was encountered, so the node is marked to be removed from the metadata store.
Resolution: If the metadata drive has failed, replace the metadata drive as soon as possible. Refer to the Nutanix documentation for instructions. If the node was down for an extended period of time and is now running, add it back to the metadata store with the "host enable-metadata-store" nCLI command. Otherwise, contact Nutanix support.
Severity: kCritical

DuplicateRemoteClusterId [A1038]
Message: Remote cluster 'remote_name' is disabled because the name conflicts with remote cluster 'conflicting_remote_name'.
Cause: Two remote sites with different names or different IP addresses have the same cluster ID. This can happen in two cases: (a) a remote cluster is added twice under two different names (through different IP addresses), or (b) two clusters have the same cluster ID.
Resolution: In case (a), remove the duplicate remote site. In case (b), verify that both clusters have the same cluster ID and contact Nutanix support.
Severity: kWarning

JumboFramesDisabled [A1062]
Message: Jumbo frames could not be enabled on the iface interface in the last three attempts.
Cause: Jumbo frames could not be enabled in the Controller VMs.
Resolution: Ensure that the 10-Gig network switch has jumbo frames enabled.
Severity: kCritical

NetworkDisconnect [A1041]
Message: IPMI interface target_ip is not reachable from Controller VM source_ip in the last six attempts.
Cause: The IPMI interface is down or there is a network connectivity issue.
Resolution: Ensure that the IPMI interface is functioning and that physical networking, VLANs, and virtual switches are configured correctly.
Severity: kWarning

NetworkDisconnect [A1006]
Message: Hypervisor target_ip is not reachable from Controller VM source_ip in the last six attempts.
Cause: The hypervisor host is down or there is a network connectivity issue.
Resolution: Ensure that the hypervisor host is running and that physical networking, VLANs, and virtual switches are configured correctly.
Severity: kCritical

NetworkDisconnect [A1048]
Message: Controller VM svm_ip with network address svm_subnet is in a different network than the Hypervisor hypervisor_ip, which is in the network hypervisor_subnet.
Cause: The Controller VM and the hypervisor are not on the same subnet.
Resolution: Reconfigure the cluster. Either move the Controller VMs to the same subnet as the hypervisor hosts or move the hypervisor hosts to the same subnet as the Controller VMs.
Severity: kCritical

NetworkDisconnect [A1040]
Message: Hypervisor target_ip is not reachable from Controller VM source_ip in the last three attempts.
Cause: The hypervisor host is down or there is a network connectivity issue.
Resolution: Ensure that the hypervisor host is running and that physical networking, VLANs, and virtual switches are configured correctly.
Severity: kCritical

RemoteSupportEnabled [A1051]
Message: Daily reminder that remote support tunnel to Nutanix HQ is enabled on this cluster.
Cause: Nutanix support staff are able to access the cluster to assist with any issue.
Resolution: No action is necessary.
Severity: kInfo

TimeDifferenceHigh [A1017]
Message: Wall clock time has drifted by more than time_difference_limit_secs seconds between the Controller VMs lower_time_ip and higher_time_ip.
Cause: The cluster does not have NTP servers configured or they are not reachable.
Resolution: Ensure that the cluster has NTP servers configured and that the NTP servers are reachable from all Controller VMs.
Severity: kWarning

ZeusConfigMismatch [A1008]
Message: IPMI IP address on Controller VM svm_ip_address was updated from zeus_ip_address to invalid_ip_address without following the Nutanix IP Reconfiguration procedure.
Cause: The IP address configured in the cluster does not match the actual setting of the IPMI interface.
Resolution: Follow the IP address change procedure in the Nutanix documentation.
Severity: kCritical
ZeusConfigMismatch [A1009]
Message: IP address of Controller VM zeus_ip_address has been updated to invalid_ip_address. The Controller VM will not be part of the cluster once the change comes into effect, unless zeus configuration is updated.
Cause: The IP address configured in the cluster does not match the actual setting of the Controller VM.
Resolution: Follow the IP address change procedure in the Nutanix documentation.
Severity: kCritical

ZeusConfigMismatch [A1029]
Message: Hypervisor IP address on Controller VM svm_ip_address was updated from zeus_ip_address to invalid_ip_address without following the Nutanix IP Reconfiguration procedure.
Cause: The IP address configured in the cluster does not match the actual setting of the hypervisor.
Resolution: Follow the IP address change procedure in the Nutanix documentation.
Severity: kCritical

Controller VM

CVMNICSpeedLow [A1058]
Message: Controller VM service_vm_external_ip is not running on 10 Gbps network interface. This will degrade the system performance.
Cause: The Controller VM is not configured to use the 10 Gbps NIC or is configured to share load with a slower NIC.
Resolution: Connect the Controller VM to 10 Gbps NICs only.
Severity: kWarning

CVMRAMUsageHigh [A1056]
Message: Main memory usage in Controller VM ip_address is high in the last 20 minutes. free_memory_kb KB of memory is free.
Cause: The RAM usage on the Controller VM has been high.
Resolution: Contact Nutanix Support for diagnosis. RAM on the Controller VM may need to be increased.
Severity: kCritical
CVMRebooted [A1024]
Message: Controller VM ip_address has been rebooted.
Cause: Various
Resolution: If the Controller VM was restarted intentionally, no action is necessary. If it restarted by itself, contact Nutanix support.
Severity: kCritical

IPMIError [A1050]
Message: Controller VM ip_address is unable to fetch IPMI SDR repository.
Cause: The IPMI interface is down or there is a network connectivity issue.
Resolution: Ensure that the IPMI interface is functioning and that physical networking, VLANs, and virtual switches are configured correctly.
Severity: kCritical

KernelMemoryUsageHigh [A1034]
Message: Controller VM ip_address's kernel memory usage is higher than expected.
Cause: Various
Resolution: Contact Nutanix support.
Severity: kCritical

NetworkDisconnect [A1001]
Message: Controller VM target_ip is not reachable from Controller VM source_ip in the last six attempts.
Cause: The Controller VM is down or there is a network connectivity issue.
Resolution: If the Controller VM does not respond to ping, turn it on. Ensure that physical networking, VLANs, and virtual switches are configured correctly.
Severity: kCritical

NetworkDisconnect [A1011]
Message: Controller VM target_ip is not reachable from Controller VM source_ip in the last three attempts.
Cause: The Controller VM is down or there is a network connectivity issue.
Resolution: Ensure that the Controller VM is running and that physical networking, VLANs, and virtual switches are configured correctly.
Severity: kCritical
NodeInMaintenanceMode [A1013]
Message: Controller VM ip_address is put in maintenance mode due to reason.
Cause: Node removal has been initiated.
Resolution: No action is necessary.
Severity: kInfo

ServicesRestartingFrequently [A1032]
Message: There have been 10 or more cluster services restarts within 15 minutes.
Cause: This alert usually indicates that the Controller VM was restarted, but there could be other causes.
Resolution: If this alert occurs once or infrequently, no action is necessary. If it is frequent, contact Nutanix support.
Severity: kCritical

StargateTemporarilyDown [A1030]
Message: Stargate on Controller VM ip_address is down for downtime seconds.
Cause: Various
Resolution: Contact Nutanix support.
Severity: kCritical

Guest VM

ProtectedVmNotFound [A1010]
Message: Unable to locate VM with name 'vm_name' and internal ID 'vm_id' in protection domain 'protection_domain_name'.
Cause: The VM was deleted.
Resolution: Remove the VM from the protection domain.
Severity: kWarning

ProtectionDomainActivation [A1043]
Message: Unable to make protection domain 'protection_domain_name' active on remote site 'remote_name' due to 'reason'.
Cause: Various
Resolution: Resolve the stated reason for the failure. If you cannot resolve the error, contact Nutanix support.
Severity: kCritical

ProtectionDomainChangeModeFailure [A1060]
Message: Protection domain protection_domain_name activate/deactivate failed. reason
Cause: Protection domain cannot be activated or migrated.
Resolution: Resolve the stated reason for the failure. If you cannot resolve the error, contact Nutanix support.
Severity: kCritical

ProtectionDomainReplicationExpired [A1003]
Message: Protection domain protection_domain_name replication to the remote site remote_name has expired before it is started.
Cause: Replication is taking too long to complete before the snapshots expire.
Resolution: Review replication schedules taking into account bandwidth and overall load on systems. Confirm retention time on replicated snapshots.
Severity: kWarning

ProtectionDomainReplicationFailure [A1015]
Message: Protection domain protection_domain_name replication to remote site remote_name failed. reason
Cause: Various
Resolution: Resolve the stated reason for the failure. If you cannot resolve the error, contact Nutanix support.
Severity: kCritical

ProtectionDomainSnapshotFailure [A1064]
Message: Protection domain protection_domain_name snapshot snapshot_id failed. reason
Cause: Protection domain cannot be snapshotted.
Resolution: Make sure all VMs and files are available.
Severity: kCritical

VMAutoStartDisabled [A1057]
Message: Virtual Machine auto start is disabled on the hypervisor of Controller VM service_vm_external_ip
Cause: Auto start of the Controller VM is disabled.
Resolution: Enable auto start of the Controller VM as recommended by Nutanix. If auto start is intentionally disabled, no action is necessary.
Severity: kInfo

VMLimitExceeded [A1053]
Message: The number of virtual machines on node node_serial is vm_count, which is above the limit vm_limit.
Cause: The node is running more virtual machines than the hardware can support.
Resolution: Shut down VMs or move them to other nodes in the cluster.
Severity: kCritical

VmActionError [A1033]
Message: Failed to action VM with name 'vm_name' and internal ID 'vm_id' due to reason
Cause: A VM could not be restored because of a hypervisor error, or could not be deleted because it is still in use.
Resolution: Resolve the stated reason for the failure. If you cannot resolve the error, contact Nutanix support.
Severity: kCritical

VmRegistrationError [A1002]
Message: Failed to register VM using name 'vm_name' with the hypervisor due to reason
Cause: An error on the hypervisor.
Resolution: Resolve the stated reason for the failure. If you cannot resolve the error, contact Nutanix support.
Severity: kCritical

Hardware

CPUTemperatureHigh [A1049]
Message: Temperature of CPU cpu_id exceeded temperatureC on Controller VM ip_address
Cause: The device is overheating to the point of imminent failure.
Resolution: Ensure that the fans in the block are functioning properly and that the environment is cool enough.
Severity: kCritical
DiskBad [A1044]
Message: Disk disk_position on node node_position of block block_position is marked offline due to IO errors. Serial number of the disk is disk_serial in node node_serial of block block_serial.
Cause: The drive has failed.
Resolution: Replace the failed drive. Refer to the Nutanix documentation for instructions.
Severity: kCritical

FanSpeedLow [A1020]
Message: Speed of fan fan_id exceeded fan_rpm RPM on Controller VM ip_address.
Cause: The device is overheating to the point of imminent failure.
Resolution: Ensure that the fans in the block are functioning properly and that the environment is cool enough.
Severity: kCritical

FanSpeedLow [A1045]
Message: Fan fan_id has stopped on Controller VM ip_address.
Cause: A fan has failed.
Resolution: Replace the fan as soon as possible. Refer to the Nutanix documentation for instructions.
Severity: kCritical

FusionIOTemperatureHigh [A1016]
Message: Fusion-io drive device temperature exceeded temperatureC on Controller VM ip_address
Cause: The device is overheating.
Resolution: Ensure that the fans in the block are functioning properly and that the environment is cool enough.
Severity: kWarning

FusionIOTemperatureHigh [A1047]
Message: Fusion-io drive device temperature exceeded temperatureC on Controller VM ip_address
Cause: The device is overheating to the point of imminent failure.
Resolution: Ensure that the fans in the block are functioning properly and that the environment is cool enough.
Severity: kCritical

FusionIOWearHigh [A1014]
Message: Fusion-io drive die failure has occurred in Controller VM svm_ip and most of the Fusion-io drives have worn out beyond 1.2PB of writes.
Cause: The drives are approaching the maximum write endurance and are beginning to fail.
Resolution: Replace the drives as soon as possible. Refer to the Nutanix documentation for instructions.
Severity: kCritical

FusionIOWearHigh [A1026]
Message: Fusion-io drive die failures have occurred in Controller VMs svm_ip_list.
Cause: The drive is failing.
Resolution: Replace the drive as soon as possible. Refer to the Nutanix documentation for instructions.
Severity: kCritical

HardwareClockFailure [A1059]
Message: Hardware clock in node node_serial has failed.
Cause: The RTC clock on the host has failed or the RTC battery has died.
Resolution: Replace the node. Refer to the Nutanix documentation for instructions.
Severity: kCritical

IntelSSDTemperatureHigh [A1028]
Message: Intel 910 SSD device device temperature exceeded temperatureC on the Controller VM ip_address.
Cause: The device is overheating.
Resolution: Ensure that the fans in the block are functioning properly and that the environment is cool enough.
Severity: kWarning
IntelSSDTemperatureHigh [A1007]
Message: Intel 910 SSD device device temperature exceeded temperatureC on the Controller VM ip_address.
Cause: The device is overheating to the point of imminent failure.
Resolution: Ensure that the fans in the block are functioning properly and that the environment is cool enough.
Severity: kCritical

IntelSSDWearHigh [A1035]
Message: Intel 910 SSD device device on the Controller VM ip_address has worn out beyond 6.5PB of writes.
Cause: The drive is approaching the maximum write endurance.
Resolution: Consider replacing the drive.
Severity: kWarning

IntelSSDWearHigh [A1042]
Message: Intel 910 SSD device device on the Controller VM ip_address has worn out beyond 7PB of writes.
Cause: The drive is close to the maximum write endurance and failure is imminent.
Resolution: Replace the drive as soon as possible. Refer to the Nutanix documentation for instructions.
Severity: kCritical

PowerSupplyDown [A1046]
Message: power_source power source is down on block block_position.
Cause: The power supply has failed.
Resolution: Replace the power supply as soon as possible. Refer to the Nutanix documentation for instructions.
Severity: kCritical

RAMFault [A1052]
Message: DIMM fault detected on Controller VM ip_address. The node is running with current_memory_gb GB whereas installed_memory_gb GB was installed.
Cause: A DIMM has failed.
  • 30. | Platform Administration Guide | NOS 3.5 | 30 Resolution Replace the failed DIMM as soon as possible. Refer to the Nutanix documentation for instructions. Severity kCritical RAMTemperatureHigh [A1022] Message Temperature of DIMM dimm_id for CPU cpu_id exceeded temperatureC on Controller VM ip_address Cause The device is overheating to the point of imminent failure. Resolution Ensure that the fans in the block are functioning properly and that the environment is cool enough. Severity kCritical SystemTemperatureHigh [A1012] Message System temperature exceeded temperatureC on Controller VM ip_address Cause The node is overheating to the point of imminent failure. Resolution Ensure that the fans in the block are functioning properly and that the environment is cool enough. Severity kCritical Storage DiskInodeUsageHigh [A1018] Message Inode usage for one or more disks on Controller VM ip_address has exceeded 75%. Cause The filesystem contains too many files. Resolution Delete unneeded data or add nodes to the cluster. Severity kWarning DiskInodeUsageHigh [A1027] Message Inode usage for one or more disks on Controller VM ip_address has exceeded 90%. Cause The filesystem contains too many files. Resolution Delete unneeded data or add nodes to the cluster. Severity kCritical
  • 31. | Platform Administration Guide | NOS 3.5 | 31 DiskSpaceUsageHigh [A1031] Message Disk space usage for one or more disks on Controller VM ip_address has exceeded warn_limit%. Cause Too much data is stored on the node. Resolution Delete unneeded data or add nodes to the cluster. Severity kWarning DiskSpaceUsageHigh [A1005] Message Disk space usage for one or more disks on Controller VM ip_address has exceeded critical_limit%. Cause Too much data is stored on the node. Resolution Delete unneeded data or add nodes to the cluster. Severity kCritical FusionIOReserveLow [A1023] Message Fusion-io drive device reserves are down to reserve% on Controller VM ip_address. Cause The drive is beginning to fail. Resolution Consider replacing the drive. Severity kWarning FusionIOReserveLow [A1039] Message Fusion-io drive device reserves are down to reserve% on Controller VM ip_address. Cause The drive is failing. Resolution Replace the drive as soon as possible. Refer to the Nutanix documentation for instructions. Severity kCritical SpaceReservationViolated [A1021] Message Space reservation configured on vdisk vdisk_name belonging to container id container_id could not be honored due to insufficient disk space resulting from a possible disk or node failure. Cause A drive or a node has failed, and the space reservations on the cluster can no longer be met.
  • 32. | Platform Administration Guide | NOS 3.5 | 32 Resolution Change space reservations to total less than 90% of the available storage, and replace the drive or node as soon as possible. Refer to the Nutanix documentation for instructions. Severity kWarning VDiskBlockMapUsageHigh [A1061] Message Too many snapshots have been allocated in the system. This may cause perceivable performance degradation. Cause Too many vdisks or snapshots are present in the system. Resolution Remove unneeded snapshots and vdisks. If using remote replication, try to lower the frequency of taking snapshots. If you cannot resolve the error, contact Nutanix support. Severity kInfo
  • 33. | Platform Administration Guide | NOS 3.5 | 33 4 IP Address Configuration NOS includes a web-based configuration tool that automates the modification of Controller VM IP addresses and configures the cluster to use the new addresses. Other cluster components must be modified manually. Requirements The web-based configuration tool requires that IPv6 link-local be enabled on the subnet. If IPv6 link-local is not available, you must configure the Controller VM IP addresses and the cluster manually. The web-based configuration tool also requires that the Controller VMs be able to communicate with each other. All Controller VMs and hypervisor hosts must be on the same subnet. If the IPMI interfaces are connected, Nutanix recommends that they be on the same subnet as the Controller VMs and hypervisor hosts. Guest VMs can be on a different subnet. To Reconfigure the Cluster Warning: If you are reassigning a Controller VM IP address to another Controller VM, you must perform this complete procedure twice: once to assign intermediate IP addresses and again to assign the desired IP addresses. For example, if Controller VM A has IP address 172.16.0.11 and Controller VM B has IP address 172.16.0.10 and you want to swap them, you would need to reconfigure them with different IP addresses (such as 172.16.0.100 and 172.16.0.101) before changing them to the IP addresses in use initially. 1. Place the cluster in reconfiguration mode by following To Prepare to Reconfigure the Cluster on page 34. 2. Configure the IPMI IP addresses by following the procedure for your hardware model. → To Configure the Remote Console IP Address (NX-1000, NX-3050, NX-6000) on page 35 → To Configure the Remote Console IP Address (NX-3000) on page 35 → To Configure the Remote Console IP Address (NX-2000) on page 36
  • 34. | Platform Administration Guide | NOS 3.5 | 34 Alternatively, you can set the IPMI IP address using a command-line utility by following To Configure the Remote Console IP Address (command line) on page 37. 3. Configure networking on the node by following the hypervisor-specific procedure. → vSphere: To Configure Host Networking on page 38 → KVM: To Configure Host Networking (KVM) on page 39 4. (vSphere only) Update the ESXi host IP addresses in vCenter by following To Update the ESXi Host Password in vCenter on page 40. 5. Configure the Controller VM IP addresses. → If IPv6 is enabled on the subnet, follow To Change the Controller VM IP Addresses on page 40. → If IPv6 is not enabled on the subnet, follow To Change a Controller VM IP Address (manual) on page 41 for each Controller VM in the cluster. 6. Complete cluster reconfiguration by following To Complete Cluster Reconfiguration on page 42. To Prepare to Reconfigure the Cluster 1. Log on to any Controller VM in the cluster with SSH. 2. Stop the Nutanix cluster. nutanix@cvm$ cluster stop Wait to proceed until output similar to the following is displayed for every Controller VM in the cluster. CVM: 172.16.8.191 Up, ZeusLeader Zeus UP [3167, 3180, 3181, 3182, 3191, 3201] Scavenger UP [3334, 3351, 3352, 3353] ConnectionSplicer DOWN [] Hyperint DOWN [] Medusa DOWN [] DynamicRingChanger DOWN [] Pithos DOWN [] Stargate DOWN [] Cerebro DOWN [] Chronos DOWN [] Curator DOWN [] Prism DOWN [] AlertManager DOWN [] StatsAggregator DOWN [] SysStatCollector DOWN [] 3. Put the cluster in reconfiguration mode. nutanix@cvm$ cluster reconfig Type y to confirm the reconfiguration. Wait until the cluster successfully enters reconfiguration mode, as shown in the following example. INFO cluster:185 Restarted Genesis on 172.16.8.189. INFO cluster:185 Restarted Genesis on 172.16.8.188. INFO cluster:185 Restarted Genesis on 172.16.8.191. INFO cluster:185 Restarted Genesis on 172.16.8.190. INFO cluster:864 Success!
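(Optional) Before continuing, you can confirm that each Controller VM is in reconfiguration mode. This is a quick sanity check, not part of the official procedure, and it assumes that the ~/.node_reconfigure marker file (the same file that is removed in To Complete Cluster Reconfiguration) is created when the cluster enters reconfiguration mode:
nutanix@cvm$ ls -l ~/.node_reconfigure
If the file is not present on a Controller VM, repeat the cluster reconfig step before proceeding.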
  • 35. | Platform Administration Guide | NOS 3.5 | 35 Remote Console IP Address Configuration The Intelligent Platform Management Interface (IPMI) is a standardized interface used to manage a host and monitor its operation. To enable remote access to the console of each host, you must configure the IPMI settings within BIOS. The Nutanix cluster provides a Java application to remotely view the console of each node, or host server. You can use this console to configure additional IP addresses in the cluster. The procedure for configuring the remote console IP address is slightly different for each hardware platform. To Configure the Remote Console IP Address (NX-1000, NX-3050, NX-6000) 1. Connect a keyboard and monitor to a node in the Nutanix block. 2. Restart the node and press Delete to enter the BIOS setup utility. You will have a limited amount of time to enter BIOS before the host completes the restart process. 3. Press the right arrow key to select the IPMI tab. 4. Press the down arrow key until BMC network configuration is highlighted and then press Enter. 5. Select Configuration Address source and press Enter. 6. Select Static and press Enter. 7. Assign the Station IP address, Subnet mask, and Router IP address. 8. Review the BIOS settings and press F4 to save the configuration changes and exit the BIOS setup utility. The node restarts. To Configure the Remote Console IP Address (NX-3000) 1. Connect a keyboard and monitor to a node in the Nutanix block.
  • 36. | Platform Administration Guide | NOS 3.5 | 36 2. Restart the node and press Delete to enter the BIOS setup utility. You will have a limited amount of time to enter BIOS before the host completes the restart process. 3. Press the right arrow key to select the Server Mgmt tab. 4. Press the down arrow key until BMC network configuration is highlighted and then press Enter. 5. Select Configuration source and press Enter. 6. Select Static on next reset and press Enter. 7. Assign the Station IP address, Subnet mask, and Router IP address. 8. Press F10 to save the configuration changes. 9. Review the settings and then press Enter. The node restarts. To Configure the Remote Console IP Address (NX-2000) 1. Connect a keyboard and monitor to a node in the Nutanix block. 2. Restart the node and press Delete to enter the BIOS setup utility. You will have a limited amount of time to enter BIOS before the host completes the restart process. 3. Press the right arrow key to select the Advanced tab. 4. Press the down arrow key until IPMI Configuration is highlighted and then press Enter. 5. Select Set LAN Configuration and press Enter. 6. Select Static to assign an IP address, subnet mask, and gateway address.
  • 37. | Platform Administration Guide | NOS 3.5 | 37 7. Press F10 to save the configuration changes. 8. Review the settings and then press Enter. 9. Restart the node. To Configure the Remote Console IP Address (command line) You can configure the management interface from the hypervisor host on the same node. Perform these steps once from each hypervisor host in the cluster where the management network configuration needs to be changed. 1. Log on to the hypervisor host with SSH or the IPMI remote console. 2. Set the networking parameters. root@esx# /ipmitool -U ADMIN -P ADMIN lan set 1 ipsrc static root@esx# /ipmitool -U ADMIN -P ADMIN lan set 1 ipaddr mgmt_interface_ip_addr root@esx# /ipmitool -U ADMIN -P ADMIN lan set 1 netmask mgmt_interface_subnet_addr root@esx# /ipmitool -U ADMIN -P ADMIN lan set 1 defgw ipaddr mgmt_interface_gateway root@kvm# ipmitool -U ADMIN -P ADMIN lan set 1 ipsrc static root@kvm# ipmitool -U ADMIN -P ADMIN lan set 1 ipaddr mgmt_interface_ip_addr root@kvm# ipmitool -U ADMIN -P ADMIN lan set 1 netmask mgmt_interface_subnet_addr root@kvm# ipmitool -U ADMIN -P ADMIN lan set 1 defgw ipaddr mgmt_interface_gateway 3. Show current settings. root@esx# /ipmitool -v -U ADMIN -P ADMIN lan print 1 root@kvm# ipmitool -v -U ADMIN -P ADMIN lan print 1 Confirm that the parameters are set to the correct values.
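For example, on a KVM host, assigning a hypothetical management address of 10.1.1.21 with a 255.255.255.0 subnet mask and a gateway of 10.1.1.1 would look like the following (the addresses are placeholders; substitute the values for your environment):
root@kvm# ipmitool -U ADMIN -P ADMIN lan set 1 ipsrc static
root@kvm# ipmitool -U ADMIN -P ADMIN lan set 1 ipaddr 10.1.1.21
root@kvm# ipmitool -U ADMIN -P ADMIN lan set 1 netmask 255.255.255.0
root@kvm# ipmitool -U ADMIN -P ADMIN lan set 1 defgw ipaddr 10.1.1.1
root@kvm# ipmitool -v -U ADMIN -P ADMIN lan print 1
On an ESXi host, the same commands are run with the /ipmitool path shown above.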
  • 38. | Platform Administration Guide | NOS 3.5 | 38 To Configure Host Networking You can access the ESXi console either through IPMI or by attaching a keyboard and monitor to the node. 1. On the ESXi host console, press F2 and then provide the ESXi host logon credentials. 2. Press the down arrow key until Configure Management Network is highlighted and then press Enter. 3. Select Network Adapters and press Enter. 4. Ensure that the connected network adapters are selected. If they are not selected, press Space to select them and press Enter to return to the previous screen. 5. If a VLAN ID needs to be configured on the Management Network, select VLAN (optional) and press Enter. In the dialog box, provide the VLAN ID and press Enter. 6. Select IP Configuration and press Enter. 7. If necessary, highlight the Set static IP address and network configuration option and press Space to update the setting. 8. Provide values for the IP Address, Subnet Mask, and Default Gateway fields based on your environment and then press Enter. 9. Select DNS Configuration and press Enter. 10. If necessary, highlight the Use the following DNS server addresses and hostname option and press Space to update the setting. 11. Provide values for the Primary DNS Server and Alternate DNS Server fields based on your environment and then press Enter. 12. Press Esc and then Y to apply all changes and restart the management network. 13. Select Test Management Network and press Enter. 14. Press Enter to start the network ping test. 15. Verify that the default gateway and DNS servers reported by the ping test match those that you specified earlier in the procedure and then press Enter. Ensure that the tested addresses pass the ping test. If they do not, confirm that the correct IP addresses are configured.
  • 39. | Platform Administration Guide | NOS 3.5 | 39 Press Enter to close the test window. 16. Press Esc to log out. To Configure Host Networking (KVM) You can access the hypervisor host console either through IPMI or by attaching a keyboard and monitor to the node. 1. Log on to the host as root. 2. Open the network interface configuration file. root@kvm# vi /etc/sysconfig/network-scripts/ifcfg-br0 3. Press A to edit values in the file. 4. Update entries for netmask, gateway, and address. The block should look like this: ONBOOT="yes" NM_CONTROLLED="no" NETMASK="subnet_mask" IPADDR="host_ip_addr" DEVICE="eth0" TYPE="ethernet" GATEWAY="gateway_ip_addr" BOOTPROTO="none" • Replace host_ip_addr with the IP address for the hypervisor host. • Replace subnet_mask with the subnet mask for host_ip_addr. • Replace gateway_ip_addr with the gateway address for host_ip_addr. 5. Press Esc. 6. Type :wq and press Enter to save your changes. 7. Open the name services configuration file. root@kvm# vi /etc/resolv.conf 8. Update the values for the nameserver parameter then save and close the file. 9. Restart networking. root@kvm# /etc/init.d/network restart
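To verify the change from the host, you can check the bridge interface and gateway reachability. This is a minimal sketch, assuming the standard ip and ping utilities are available on the KVM host:
root@kvm# ip addr show br0
root@kvm# ping -c 3 gateway_ip_addr
Replace gateway_ip_addr with the gateway address you configured. If br0 does not show the new address, review /etc/sysconfig/network-scripts/ifcfg-br0 and restart networking again.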
  • 40. | Platform Administration Guide | NOS 3.5 | 40 To Update the ESXi Host Password in vCenter 1. Log on to vCenter with the vSphere client. 2. Right-click the host with the changed password and select Disconnect. 3. Right-click the host and select Connect. 4. Enter the new password and complete the Add Host Wizard. If reconnecting the host fails, remove it from the cluster and add it again. To Change the Controller VM IP Addresses Before you begin. • Confirm that the system you are using to configure the cluster meets the following requirements: • IPv6 link-local enabled. • Windows 7, Vista, or Mac OS. • (Windows only) Bonjour installed (included with iTunes or downloadable from http://support.apple.com/kb/DL999). • Determine the IPv6 service name of any Controller VM in the cluster. IPv6 service names are uniquely generated at the factory and have the following form (note the final period): NTNX-block_serial_number-node_location-CVM.local. On the right side of the block toward the front is a label that has the block_serial_number (for example, 12AM3K520060). The node_location is a number 1-4 for NX-3000, a letter A-D for NX-1000/NX-2000/NX-3050, or a letter A-B for NX-6000. If IPv6 link-local is not enabled on the subnet, reconfigure the cluster manually. If you need to confirm whether IPv6 link-local is enabled on the network or if you do not have access to get the node serial number, see the Nutanix support knowledge base for alternative methods.
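For example, for a hypothetical node in position A of a block with serial number 12AM3K520060, the IPv6 service name would be NTNX-12AM3K520060-A-CVM.local. (including the final period), and the reconfiguration page used in the following procedure would be reached at:
http://NTNX-12AM3K520060-A-CVM.local.:2100/ip_reconfig.html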
  • 41. | Platform Administration Guide | NOS 3.5 | 41 Warning: If you are reassigning a Controller VM IP address to another Controller VM, you must perform this complete procedure twice: once to assign intermediate IP addresses and again to assign the desired IP addresses. For example, if Controller VM A has IP address 172.16.0.11 and Controller VM B has IP address 172.16.0.10 and you want to swap them, you would need to reconfigure them with different IP addresses (such as 172.16.0.100 and 172.16.0.101) before changing them to the IP addresses in use initially. The cluster must be stopped and in reconfiguration mode before changing the Controller VM IP addresses. 1. Open a web browser. Nutanix recommends using Internet Explorer 9 for Windows and Safari for Mac OS. Note: Internet Explorer requires protected mode to be disabled. Go to Tools > Internet Options > Security, clear the Enable Protected Mode check box, and restart the browser. 2. Go to http://cvm_ip_addr:2100/ip_reconfig.html Replace cvm_ip_addr with the IPv6 service name of any Controller VM in the cluster. 3. Update one or more cells on the IP Reconfiguration page. Ensure that all components satisfy the cluster subnet requirements. See Subnet Requirements. 4. Click Reconfigure. 5. Wait until the Log Messages section of the page reports that the cluster has been successfully reconfigured, as shown in the following example. Configuring IP addresses on node S10264822116570/A... Success! Configuring IP addresses on node S10264822116570/C... Success! Configuring IP addresses on node S10264822116570/B... Success! Configuring IP addresses on node S10264822116570/D... Success! Configuring Zeus on node S10264822116570/A... Configuring Zeus on node S10264822116570/C... Configuring Zeus on node S10264822116570/B... Configuring Zeus on node S10264822116570/D... Reconfiguration successful! The IP address reconfiguration will disconnect any SSH sessions to cluster components. The cluster is taken out of reconfiguration mode. To Change a Controller VM IP Address (manual) 1. Log on to the hypervisor host with SSH or the IPMI remote console. 2. Log on to the Controller VM with SSH. root@host# ssh nutanix@192.168.5.254 Enter the Controller VM nutanix password. 3. Restart genesis. nutanix@cvm$ genesis restart If the restart is successful, output similar to the following is displayed:
  • 42. | Platform Administration Guide | NOS 3.5 | 42 Stopping Genesis pids [1933, 30217, 30218, 30219, 30241] Genesis started on pids [30378, 30379, 30380, 30381, 30403] 4. Change the network interface configuration. a. Open the network interface configuration file. nutanix@cvm$ sudo vi /etc/sysconfig/network-scripts/ifcfg-eth0 Enter the nutanix password. b. Press A to edit values in the file. c. Update entries for netmask, gateway, and address. The block should look like this: ONBOOT="yes" NM_CONTROLLED="no" NETMASK="subnet_mask" IPADDR="cvm_ip_addr" DEVICE="eth0" TYPE="ethernet" GATEWAY="gateway_ip_addr" BOOTPROTO="none" • Replace cvm_ip_addr with the IP address for the Controller VM. • Replace subnet_mask with the subnet mask for cvm_ip_addr. • Replace gateway_ip_addr with the gateway address for cvm_ip_addr. d. Press Esc. e. Type :wq and press Enter to save your changes. 5. Update the Zeus configuration. a. Open the host configuration file. nutanix@cvm$ sudo vi /etc/hosts b. Press A to edit values in the file. c. Update hosts zk1, zk2, and zk3 to match changed Controller VM IP addresses. d. Press Esc. e. Type :wq and press Enter to save your changes. 6. Restart the virtual machine. nutanix@cvm$ sudo reboot Enter the nutanix password if prompted. To Complete Cluster Reconfiguration 1. If you changed the IP addresses manually, take the cluster out of reconfiguration mode. Perform these steps for every Controller VM in the cluster. a. Log on to the Controller VM with SSH.
  • 43. | Platform Administration Guide | NOS 3.5 | 43 b. Take the Controller VM out of reconfiguration mode. nutanix@cvm$ rm ~/.node_reconfigure c. Restart genesis. nutanix@cvm$ genesis restart If the restart is successful, output similar to the following is displayed: Stopping Genesis pids [1933, 30217, 30218, 30219, 30241] Genesis started on pids [30378, 30379, 30380, 30381, 30403] 2. Log on to any Controller VM in the cluster with SSH. 3. Start the Nutanix cluster. nutanix@cvm$ cluster start If the cluster starts properly, output similar to the following is displayed for each node in the cluster: CVM: 172.16.8.167 Up, ZeusLeader Zeus UP [3148, 3161, 3162, 3163, 3170, 3180] Scavenger UP [3333, 3345, 3346, 11997] ConnectionSplicer UP [3379, 3392] Hyperint UP [3394, 3407, 3408, 3429, 3440, 3447] Medusa UP [3488, 3501, 3502, 3523, 3569] DynamicRingChanger UP [4592, 4609, 4610, 4640] Pithos UP [4613, 4625, 4626, 4678] Stargate UP [4628, 4647, 4648, 4709] Cerebro UP [4890, 4903, 4904, 4979] Chronos UP [4906, 4918, 4919, 4968] Curator UP [4922, 4934, 4935, 5064] Prism UP [4939, 4951, 4952, 4978] AlertManager UP [4954, 4966, 4967, 5022] StatsAggregator UP [5017, 5039, 5040, 5091] SysStatCollector UP [5046, 5061, 5062, 5098]
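After the cluster is running again, you may want to confirm that the new Controller VM IP addresses are in effect. One way to do this, assuming the svmips helper script is available on your Controller VMs, is:
nutanix@cvm$ svmips
The output should list the reconfigured Controller VM IP addresses. The cluster status output above also shows the address of each Controller VM.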
  • 44. | Platform Administration Guide | NOS 3.5 | 44 5 Field Installation You can reimage a Nutanix node with the Phoenix ISO. This process installs the hypervisor and the Nutanix Controller VM. Note: Phoenix usage is restricted to Nutanix sales engineers, support engineers, and authorized partners. Phoenix can be used to cleanly install systems for POCs or to switch hypervisors. NOS Installer Reference Installation Options Component Option Hypervisor Clean Install Hypervisor: To install the selected hypervisor as part of complete reimaging. Clean Install SVM: To install the Controller VM as part of complete reimaging or Controller VM boot drive replacement. Controller VM Repair SVM: To retain Controller VM configuration. Note: Do not use this option except under guidance from Nutanix support. Supported Products and Hypervisors Product ESX 5.0U2 & 5.1U1 KVM Hyper-V NX-1000 • NX-2000 • NX-2050 • NX-3000 • • NX-3050 • • • NX-6050/NX-6070 • To Image a Node Before you begin. • Download the Phoenix ISO to a workstation with access to the IPMI interface on the node that you want to reimage.
  • 45. | Platform Administration Guide | NOS 3.5 | 45 • Gather the following required pieces of information: Block ID, Cluster ID, and Node Serial Number. These items are assigned by Nutanix, and you must use the correct values. This procedure describes how to image a node from an ISO on a workstation. Repeat this procedure once for every node that you want to reimage. 1. Sign in to the IPMI web console. 2. Attach the ISO to the node. a. Go to Remote Control and click Launch Console. Accept any security warnings to start the console. b. In the console, click Media > Virtual Media Wizard. c. Click Browse next to ISO Image and select the ISO file. d. Click Connect CD/DVD. e. Go to Remote Control > Power Control. f. Select Reset Server and click Perform Action. The host restarts from the ISO. 3. In the boot menu, select Installer and press Enter. If previous values for these parameters are detected on the node, they will be displayed. 4. Enter the required information. → If all previous values are displayed and you want to use them, press Y. → If some or all of the previous values are not displayed, enter the required values. a. Block ID: Enter the unique block identifier assigned by Nutanix. b. Model: Enter the product number. c. Node Serial: Enter the unique node identifier assigned by Nutanix. d. Cluster ID: Enter the unique cluster identifier assigned by Nutanix. e. Node Position: Enter 1, 2, 3, or 4 for NX-3000; A, B, C, or D for all other 4-node blocks. Warning: If you are imaging all nodes in a block, ensure that the Block ID is the same for all nodes and that the Node Serial Number and Node Position are different.
  • 46. | Platform Administration Guide | NOS 3.5 | 46 5. Select both Clean Install Hypervisor and Clean Install SVM and then select Start. Installation begins and takes about 20 minutes. 6. In the Virtual Media window, click Disconnect next to CD Media. 7. In the IPMI console, go to Remote Control > Power Control. 8. Select Reset Server and click Perform Action. The node restarts with the new image. After the node starts, additional configuration tasks run and then the host restarts again. During this time, the host name is installing-please-be-patient. Wait approximately 20 minutes until this stage completes before accessing the node. Warning: Do not restart the host until the configuration is complete. What to do next. Add the node to a cluster.
  • 47. vSphere | Platform Administration Guide | NOS 3.5 | 47 Part II vSphere
  • 48. | Platform Administration Guide | NOS 3.5 | 48 6 vCenter Configuration VMware vCenter enables the centralized management of multiple ESXi hosts. The Nutanix cluster in vCenter must be configured according to Nutanix best practices. While most customers prefer to use an existing vCenter, Nutanix provides a vCenter OVF, which is on the Controller VMs in /home/nutanix/data/images/vcenter. You can deploy the OVF using the standard procedures for vSphere. To Use an Existing vCenter Server 1. Shut down the Nutanix vCenter VM. 2. Create a new cluster entity within the existing vCenter inventory and configure its settings based on Nutanix best practices by following To Create a Nutanix Cluster in vCenter on page 48. 3. Add the Nutanix hosts to this new cluster by following To Add a Nutanix Node to vCenter on page 51. To Create a Nutanix Cluster in vCenter 1. Log on to vCenter with the vSphere client. 2. If you want the Nutanix cluster to be in its own datacenter or if there is no datacenter, click File > New > Datacenter and type a meaningful name for the datacenter, such as NTNX-DC. Otherwise, proceed to the next step. You can also create the Nutanix cluster within an existing datacenter. 3. Right-click the datacenter node and select New Cluster. 4. Type a meaningful name for the cluster in the Name field, such as NTNX-Cluster. 5. Select the Turn on vSphere HA check box and click Next. 6. Select Admission Control > Enable. 7. Select Admission Control Policy > Percentage of cluster resources reserved as failover spare capacity and enter the percentage appropriate for the number of Nutanix nodes in the cluster and then click Next. Hosts (N+1) Percentage Hosts (N+2) Percentage Hosts (N+3) Percentage Hosts (N+4) Percentage 1 N/A 9 23% 17 18% 25 16% 2 N/A 10 20% 18 17% 26 15% 3 33% 11 18% 19 16% 27 15% 4 25% 12 17% 20 15% 28 14% 5 20% 13 15% 21 14% 29 14%
  • 49. | Platform Administration Guide | NOS 3.5 | 49 Hosts (N+1) Percentage Hosts (N+2) Percentage Hosts (N+3) Percentage Hosts (N+4) Percentage 6 18% 14 14% 22 14% 30 13% 7 15% 15 13% 23 13% 31 13% 8 13% 16 13% 24 13% 32 13% 8. Click Next on the following three pages to accept the default values. • Virtual Machine Options • VM monitoring • VMware EVC 9. Verify that Store the swapfile in the same directory as the virtual machine (recommended) is selected and click Next. 10. Review the settings and then click Finish. 11. Add all Nutanix nodes to the vCenter cluster inventory. See To Add a Nutanix Node to vCenter on page 51. 12. Right-click the Nutanix cluster node and select Edit Settings. 13. If vSphere HA and DRS are not enabled, select them on the Cluster Features page. Otherwise, proceed to the next step. Note: vSphere HA and DRS must be configured even if the customer does not plan to use the features. The settings will be preserved within the vSphere cluster configuration, so if the customer later decides to enable the feature, it will be pre-configured based on Nutanix best practices. 14. Configure vSphere HA. a. Select vSphere HA > Virtual Machine Options. b. Change the VM restart priority of all Controller VMs to Disabled.
  • 50. | Platform Administration Guide | NOS 3.5 | 50 Tip: Controller VMs include the phrase CVM in their names. It may be necessary to expand the Virtual Machine column to view the entire VM name. c. Change the Host Isolation Response setting of all Controller VMs to Leave Powered On. d. Select vSphere HA > VM Monitoring e. Change the VM Monitoring setting for all Controller VMs to Disabled. f. Select vSphere HA > Datastore Heartbeating. g. Click Select only from my preferred datastores and select the Nutanix datastore (NTNX-NFS). h. If the cluster does not use vSphere HA, disable it on the Cluster Features page. Otherwise, proceed to the next step. 15. Configure vSphere DRS. a. Select vSphere DRS > Virtual Machine Options. b. Change the Automation Level setting of all Controller VMs to Disabled.
  • 51. | Platform Administration Guide | NOS 3.5 | 51 c. Select vSphere DRS > Power Management. d. Confirm that Off is selected as the default power management for the cluster. e. If the cluster does not use vSphere DRS, disable it on the Cluster Features page. Otherwise, proceed to the next step. 16. Click OK to close the cluster settings window. To Add a Nutanix Node to vCenter The cluster must be configured according to Nutanix specifications given in vSphere Cluster Settings on page 53. Tip: Refer to Default Cluster Credentials on page 2 for the default credentials of all cluster components. 1. Log on to vCenter with the vSphere client. 2. Right-click the cluster and select Add Host. 3. Type the IP address of the ESXi host in the Host field. 4. Enter the ESXi host logon credentials in the Username and Password fields. 5. Click Next. If a security or duplicate management alert appears, click Yes. 6. Review the Host Summary page and click Next. 7. Select a license to assign to the ESXi host and click Next. 8. Ensure that the Enable Lockdown Mode check box is left unselected and click Next. Lockdown mode is not supported. 9. Click Finish. 10. Select the ESXi host and click the Configuration tab. 11. Configure DNS servers. a. Click DNS and Routing > Properties. b. Select Use the following DNS server address.
  • 52. | Platform Administration Guide | NOS 3.5 | 52 c. Type DNS server addresses in the Preferred DNS Server and Alternate DNS Server fields and click OK. 12. Configure NTP servers. a. Click Time Configuration > Properties > Options > NTP Settings > Add. b. Type the NTP server address. Add multiple NTP servers if required. c. Click OK in the NTP Daemon (ntpd) Options and Time Configuration windows. d. Click Time Configuration > Properties > Options > General. e. Select Start automatically under Startup Policy. f. Click Start g. Click OK in the NTP Daemon (ntpd) Options and Time Configuration windows. 13. Click Storage and confirm that NFS datastores are mounted. 14. Set the Controller VM to start automatically when the ESXi host is powered on. a. Click the Configuration tab. b. Click Virtual Machine Startup/Shutdown in the Software frame. c. Select the Controller VM and click Properties. d. Ensure that the Allow virtual machines to start and stop automatically with the system check box is selected. e. If the Controller VM is listed in Manual Startup, click Move Up to move the Controller VM into the Automatic Startup section.
  • 53. | Platform Administration Guide | NOS 3.5 | 53 f. Click OK. 15. (NX-2000 only) Click Host Cache Configuration and confirm that the host cache is stored on the local datastore. If it is not correct, click Properties to update the location. vSphere Cluster Settings Certain vSphere cluster settings are required for Nutanix clusters. vSphere HA and DRS must be configured even if the customer does not plan to use the feature. The settings will be preserved within the vSphere cluster configuration, so if the customer later decides to enable the feature, it will be pre-configured based on Nutanix best practices. vSphere HA Settings Enable host monitoring Enable admission control and use the percentage-based policy with a value based on the number of nodes in the cluster. Set the VM Restart Priority of all Controller VMs to Disabled. Set the Host Isolation Response of all Controller VMs to Leave Powered On. Disable VM Monitoring for all Controller VMs. Enable Datastore Heartbeating by clicking Select only from my preferred datastores and choosing the Nutanix NFS datastore. vSphere DRS Settings Disable automation on all Controller VMs.
  • 54. | Platform Administration Guide | NOS 3.5 | 54 Leave power management disabled (set to Off). Other Cluster Settings Store VM swapfiles in the same directory as the virtual machine. (NX-2000 only) Store host cache on the local datastore. Failover Reservation Percentages Hosts (N+1) Percentage Hosts (N+2) Percentage Hosts (N+3) Percentage Hosts (N+4) Percentage 1 N/A 9 23% 17 18% 25 16% 2 N/A 10 20% 18 17% 26 15% 3 33% 11 18% 19 16% 27 15% 4 25% 12 17% 20 15% 28 14% 5 20% 13 15% 21 14% 29 14% 6 18% 14 14% 22 14% 30 13% 7 15% 15 13% 23 13% 31 13% 8 13% 16 13% 24 13% 32 13%
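As a rough cross-check of the table (an observation, not an official Nutanix formula), the reserved percentage corresponds approximately to the number of host failures to tolerate, indicated by the column, divided by the number of hosts:
12 hosts (N+2 column): 2 / 12 = 16.7%, rounded to 17%
4 hosts (N+1 column): 1 / 4 = 25%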
  • 55. | Platform Administration Guide | NOS 3.5 | 55 7 VM Management Migrating a VM to Another Cluster You can live migrate a VM to an ESXi host in a Nutanix cluster. Usually this is done in the following cases: • Migrate VMs from existing storage platform to Nutanix. • Keep VMs running during disruptive upgrade or other downtime of Nutanix cluster. In migrating VMs between vSphere clusters, the source host and NFS datastore are the ones presently running the VM. The target host and NFS datastore are the ones where the VM will run after migration. The target ESXi host and datastore must be part of a Nutanix cluster. To accomplish this migration, you have to mount the NFS datastores from the target on the source. After the migration is complete, you should unmount the datastores and block access. To Migrate a VM to Another Cluster Before you begin. Both the source host and the target host must be in the same vSphere cluster. Allow NFS access to NDFS by adding the source host and target host to a whitelist, as described in To Configure a Filesystem Whitelist. To migrate a VM back to the source from the target, perform this same procedure with the target as the new source and the source as the new target. 1. Sign in to the Nutanix web console. 2. Log on to vCenter with the vSphere client. 3. Mount the target NFS datastore on the source host and on the target host. You can mount NFS datastores in the vSphere client by clicking Add Storage on the Configuration > Storage screen for a host.
  • 56. | Platform Administration Guide | NOS 3.5 | 56 Note: Due to a limitation with VMware vSphere, a temporary name and the IP address of a Controller VM must be used to mount the target NFS datastore on both the source host and the target host for this procedure. Parameter Value Server IP address of the Controller VM on the target ESXi host Folder Name of the container that has the target NFS datastore (typically /nfs-ctr) Datastore Name A temporary name for the NFS datastore (e.g., Temp-NTNX-NFS) a. Select the source host and go to Configuration > Storage. b. Click Add Storage and mount the target NFS datastore. c. Select the target host and go to Configuration > Storage. d. Click Add Storage and mount the target NFS datastore. 4. Change the VM datastore and host. Do this for each VM that you want to live migrate to the target. a. Right-click the VM and select Migrate.
  • 57. | Platform Administration Guide | NOS 3.5 | 57 b. Select Change datastore and click Next. c. Select the temporary datastore and click Next then Finish. The VM storage is moved to the temporary datastore on the target host. d. Right-click the VM and select Migrate. e. Select Change host and click Next. f. Select the target host and click Next. g. Ensure that High priority is selected and click Next then Finish. The VM keeps running as it moves to the target host. h. Right-click the VM and select Migrate. i. Select Change datastore and click Next. j. Select the target datastore and click Next then Finish. The VM storage is moved to the target datastore on the target host. 5. Unmount the datastores in the vSphere client. Warning: Do not unmount the NFS datastore with the IP address 192.168.5.2. a. Select the source host and go to Configuration > Storage b. Right click the temporary datastore and select Unmount. c. Select the target host and go to Configuration > Storage d. Right click the temporary datastore and select Unmount. What to do next. NDFS is not intended to be used as a general use NFS server. Once the migration is complete, disable NFS access by removing the source host and target host from the whitelist, as described in To Configure a Filesystem Whitelist. vStorage APIs for Array Integration To improve the vSphere cloning process, Nutanix provides a vStorage APIs for Array Integration (VAAI) plugin. This plugin is installed by default during the Nutanix factory process. Without the Nutanix VAAI plugin, the process of creating a full clone takes a significant amount of time because all the data that comprises a VM is duplicated. This duplication also results in an increase in storage consumption. The Nutanix VAAI plugin efficiently makes full clones without reserving space for the clone. Read requests for blocks that are shared between parent and clone are sent to the original vDisk that was created for the parent VM. As the clone VM writes new blocks, the Nutanix file system allocates storage for those blocks. This data management occurs completely at the storage layer, so the ESXi host sees a single file with the full capacity that was allocated when the clone was created. To Clone a VM 1. Log on to vCenter with the vSphere client.
  • 58. | Platform Administration Guide | NOS 3.5 | 58 2. Right-click the VM and select Clone. 3. Follow the wizard to enter a name for the clone, choose a cluster, and choose a host. 4. Select the datastore that contains the source VM and click Next. Note: If you choose a datastore other than the one that contains the source VM, the clone operation will use the VMware implementation and not the Nutanix VAAI plugin. 5. If desired, set the guest customization parameters. Otherwise, proceed to the next step. 6. Click Finish. To Uninstall the VAAI Plugin Because the VAAI plugin is in the process of certification, the security level is set to allow community-supported plugins. Organizations with strict security policies may need to uninstall the plugin if it was installed during setup. Perform this procedure on each ESXi host in the Nutanix cluster. 1. Log on to the ESXi host with SSH. 2. Uninstall the plugin. root@esx# esxcli software vib remove --vibname nfs-vaai-plugin This command should return the following message: Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective. 3. Disallow community-supported plugins. root@esx# esxcli software acceptance set --level=PartnerSupported 4. Restart the node by following To Restart a Node on page 64. Migrating vDisks to NFS The Nutanix Virtual Computing Platform supports three types of storage for vDisks: VMFS, RDM, and NFS. Nutanix recommends NFS for most situations. You can migrate VMFS and RDM vDisks to NFS. Before migration, you must have an NFS datastore. You can determine whether a datastore is NFS in the vSphere client. NFS datastores have Server and Folder properties (for example, Server: 192.168.5.2, Folder: /ctr-ha). Datastore properties are shown in Datastores and Datastore Clusters > Configuration > Datastore Details in the vSphere client.
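If you have shell or SSH access to an ESXi host, you can also list its NFS datastores from the command line as a quick check (output columns can vary slightly by ESXi release):
root@esx# esxcli storage nfs list
NFS datastores appear in this list with their server and share; VMFS datastores do not.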
  • 59. | Platform Administration Guide | NOS 3.5 | 59 To create a datastore, use the Nutanix web console or the datastore create nCLI command. The type of vDisk determines the mechanism that you use to migrate it to NFS. • To migrate VMFS vDisks to NFS, use storage vMotion by following To Migrate VMFS vDisks to NFS on page 59. This operation takes significant time for each vDisk because the data is physically copied. • To migrate RDM vDisks to NFS, use the Nutanix migrate2nfs.py utility by following To Migrate RDM vDisks to NFS on page 60. This operation takes only a small amount of time for each vDisk because data is not physically copied. To Migrate VMFS vDisks to NFS Before you begin. Log on to vCenter with the vSphere client. Perform this procedure for each VM that is supported by a VMFS vDisk. The migration takes a significant amount of time. 1. Right-click the VM and select Migrate. 2. Click Change datastore and click Next. 3. Select the NFS datastore and click Next. 4. Click Finish. The vDisk begins migration. When the migration is complete, the vSphere client Tasks & Events tab shows that the Relocate virtual machine task is completed.
  • 60. | Platform Administration Guide | NOS 3.5 | 60 To Migrate RDM vDisks to NFS The migrate2nfs.py utility is available on Controller VMs to rapidly migrate RDM vDisks to an NFS datastore. This utility has the following restrictions: • Guest VMs can be migrated only to an NFS datastore that is on the same container where the RDM vDisk resides. For example, if the vDisk is in the ctr-ha container, the NFS datastore must be on the ctr-ha container. • ESXi has a maximum NFS vDisk size of 2 TB - 512 B. To migrate vDisks to NFS, the partitions must be smaller than this maximum. If you have any vDisks that exceed this maximum, you have to reduce the size in the guest VM before using this mechanism to migrate them. How to reduce the size is different for every operating system. The following parameters are optional or are not always required. --truncate_large_rdm_vmdks Specify this switch to migrate vDisks larger than the maximum after reducing the size of the partition in the guest operating system. --filter=pattern Specify a pattern with the --batch switch to restrict the vDisks based on the name, for example Win7*. If you do not specify the --filter parameter in batch mode, all RDM vDisks are included. --server=esxi_ip_addr and --svm_ip=cvm_ip_addr Specify the ESXi host and Controller VM IP addresses if you are running the migrate2nfs.py script on a Controller VM different from the node where the vDisk to migrate resides. 1. Log on to any Controller VM in the cluster with SSH. 2. Specify the logon credentials as environment variables. nutanix@cvm$ export VI_USERNAME=root nutanix@cvm$ export VI_PASSWORD=esxi_root_password 3. If you want to migrate one vDisk at a time, specify the VMX file. nutanix@cvm$ migrate2nfs.py /vmfs/volumes/datastore_name/vm_dir/vm_name.vmx nfs_datastore • Replace datastore_name with the name of the datastore, for example NTNX_datastore. • Replace vm_dir/vm_name with the directory and the name of the VMX file. 4. If you want to migrate multiple vDisks at the same time, run migrate2nfs.py in batch mode. Perform these steps for each ESXi host in the cluster. a. List the VMs that will be migrated. nutanix@cvm$ migrate2nfs.py --list_only --batch --server=esxi_ip_addr --svm_ip=cvm_ip_addr source_datastore nfs_datastore • Replace source_datastore with the name of the datastore that contains the VM .vmx file, for example NTNX_datastore. • Replace nfs_datastore with the name of the NFS datastore, for example NTNX-NFS. b. Migrate the VMs. nutanix@cvm$ migrate2nfs.py --batch --server=esxi_ip_addr --svm_ip=cvm_ip_addr source_datastore nfs_datastore Each VM takes approximately five minutes to migrate. What to do next. Migrating the vDisks changes the device signature, which causes certain operating systems to mark the disk as offline. How to mark the disk online is different for every operating system.
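For example, to list and then migrate only the guest VMs whose names begin with Win7, using the placeholder datastore names from the steps above (the filter pattern and datastore names are examples; substitute your own values, and quote the pattern so the shell does not expand it):
nutanix@cvm$ migrate2nfs.py --list_only --batch --filter='Win7*' --server=esxi_ip_addr --svm_ip=cvm_ip_addr NTNX_datastore NTNX-NFS
nutanix@cvm$ migrate2nfs.py --batch --filter='Win7*' --server=esxi_ip_addr --svm_ip=cvm_ip_addr NTNX_datastore NTNX-NFS
Review the output of the first (list-only) command before running the second command, which performs the migration.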
  • 62. | Platform Administration Guide | NOS 3.5 | 62 8 Node Management A Nutanix cluster is composed of individual nodes, or host servers that run a hypervisor. Each node hosts a Nutanix Controller VM, which coordinates management tasks with the Controller VMs on other nodes. To Shut Down a Node in a Cluster Before you begin. Shut down guest VMs, including vCenter and the vMA, that are running on the node, or move them to other nodes in the cluster. Caution: You can shut down only one node at a time in each cluster. If more than one node needs to be shut down, shut down the entire cluster. 1. Log on to vCenter (or to the ESXi host if vCenter is not available) with the vSphere client. 2. Right-click the Controller VM and select Power > Shut Down Guest. Note: Do not Power Off or Reset the Controller VM. Shutting down the Controller VM as a guest ensures that the cluster is aware that the Controller VM is unavailable. 3. Right-click the host and select Enter Maintenance Mode. 4. In the Confirm Maintenance Mode dialog box, uncheck Move powered off and suspended virtual machines to other hosts in the cluster and click Yes. The host is placed in maintenance mode, which prevents VMs from running on the host. 5. Right-click the node and select Shut Down. Wait until vCenter shows that the host is not responding, which may take several minutes. If you are logged on to the ESXi host rather than to vCenter, the vSphere client will disconnect when the host shuts down.
  • 63. | Platform Administration Guide | NOS 3.5 | 63 To Start a Node in a Cluster 1. If the node is turned off, turn it on by pressing the power button on the front. Otherwise, proceed to the next step. 2. Log on to vCenter (or to the node if vCenter is not running) with the vSphere client. 3. Right-click the ESXi host and select Exit Maintenance Mode. 4. Right-click the Controller VM and select Power > Power on. Wait approximately 5 minutes for all services to start on the Controller VM. 5. Confirm that cluster services are running on the Controller VM. nutanix@cvm$ ncli cluster status | grep -A 15 cvm_ip_addr Output similar to the following is displayed. Name : 10.1.56.197 Status : Up Zeus : up Scavenger : up ConnectionSplicer : up Hyperint : up Medusa : up Pithos : up Stargate : up Cerebro : up Chronos : up Curator : up Prism : up AlertManager : up StatsAggregator : up SysStatCollector : up Every service listed should be up. 6. Right-click the ESXi host in the vSphere client and select Rescan for Datastores. Confirm that all Nutanix datastores are available. 7. Verify that all services are up on all Controller VMs. nutanix@cvm$ cluster status If the cluster is running properly, output similar to the following is displayed for each node in the cluster: CVM: 172.16.8.167 Up, ZeusLeader Zeus UP [3148, 3161, 3162, 3163, 3170, 3180] Scavenger UP [3333, 3345, 3346, 11997] ConnectionSplicer UP [3379, 3392] Hyperint UP [3394, 3407, 3408, 3429, 3440, 3447] Medusa UP [3488, 3501, 3502, 3523, 3569] DynamicRingChanger UP [4592, 4609, 4610, 4640] Pithos UP [4613, 4625, 4626, 4678] Stargate UP [4628, 4647, 4648, 4709] Cerebro UP [4890, 4903, 4904, 4979] Chronos UP [4906, 4918, 4919, 4968] Curator UP [4922, 4934, 4935, 5064] Prism UP [4939, 4951, 4952, 4978] AlertManager UP [4954, 4966, 4967, 5022] StatsAggregator UP [5017, 5039, 5040, 5091]
  • 64. | Platform Administration Guide | NOS 3.5 | 64 SysStatCollector UP [5046, 5061, 5062, 5098] To Restart a Node Before you begin. Shut down guest VMs, including vCenter and the vMA, that are running on the node, or move them to other nodes in the cluster. Use the following procedure when you need to restart all Nutanix Complete Blocks in a cluster. 1. Log on to vCenter (or to the ESXi host if the node is running the vCenter VM) with the vSphere client. 2. Right-click the Controller VM and select Power > Shut Down Guest. Note: Do not Power Off or Reset the Controller VM. Shutting down the Controller VM as a guest ensures that the cluster is aware that Controller VM is unavailable. 3. Right-click the host and select Enter Maintenance Mode. In the Confirm Maintenance Mode dialog box, uncheck Move powered off and suspended virtual machines to other hosts in the cluster and click Yes. The host is placed in maintenance mode, which prevents VMs from running on the host. 4. Right-click the node and select Reboot. Wait until vCenter shows that the host is not responding and then is responding again, which may take several minutes. If you are logged on to the ESXi host rather than to vCenter, the vSphere client will disconnect when the host shuts down. 5. Right-click the ESXi host and select Exit Maintenance Mode. 6. Right-click the Controller VM and select Power > Power on. Wait approximately 5 minutes for all services to start on the Controller VM. 7. Log on to the Controller VM with SSH. 8. Confirm that cluster services are running on the Controller VM. nutanix@cvm$ ncli cluster status | grep -A 15 cvm_ip_addr Output similar to the following is displayed. Name : 10.1.56.197 Status : Up Zeus : up Scavenger : up ConnectionSplicer : up Hyperint : up Medusa : up Pithos : up Stargate : up Cerebro : up Chronos : up Curator : up Prism : up AlertManager : up StatsAggregator : up SysStatCollector : up Every service listed should be up.
  • 65. | Platform Administration Guide | NOS 3.5 | 65 9. Right-click the ESXi host in the vSphere client and select Rescan for Datastores. Confirm that all Nutanix datastores are available. To Patch ESXi Hosts in a Cluster Use the following procedure when you need to patch the ESXi hosts in a cluster without service interruption. Perform the following steps for each ESXi host in the cluster. 1. Shut down the node by following To Shut Down a Node in a Cluster on page 62, including moving guest VMs to a running node in the cluster. 2. Patch the ESXi host using your normal procedures with VMware Update Manager or otherwise. 3. Start the node by following To Start a Node in a Cluster on page 63. 4. Log on to the Controller VM with SSH. 5. Confirm that cluster services are running on the Controller VM. nutanix@cvm$ ncli cluster status | grep -A 15 cvm_ip_addr Output similar to the following is displayed. Name : 10.1.56.197 Status : Up Zeus : up Scavenger : up ConnectionSplicer : up Hyperint : up Medusa : up Pithos : up Stargate : up Cerebro : up Chronos : up Curator : up Prism : up AlertManager : up StatsAggregator : up SysStatCollector : up Every service listed should be up. Removing a Node Before removing a node from a Nutanix cluster, ensure the following statements are true: • The cluster has at least four nodes at the beginning of the process. • The cluster will have at least three functional nodes at the conclusion of the process. When you start planned removal of a node, the node is marked for removal and data is migrated to other nodes in the cluster. After the node is prepared for removal, you can physically remove it from the block. To Remove a Node from a Cluster Before you begin. • Ensure that all nodes that will be part of the cluster after node removal are running. • Complete any add node operations on the cluster before removing nodes.