The document provides instructions for cluster management and configuration for the Nutanix NOS 3.5 platform. Key points include:
- Starting and stopping a Nutanix cluster via commands on the Controller VM such as "cluster start" and "cluster stop".
- Destroying a cluster resets all nodes to factory configuration, deleting all cluster and VM data.
- Creating new clusters from nodes of an existing "multiblock" cluster by removing nodes and configuring new clusters.
- Product mixing restrictions specify compatible node combinations in a cluster.
- Configuring a new cluster via a web browser using the IPv6 address of a Controller VM, selecting nodes, and providing networking details.
This document provides a summary of the Nutanix command-line interface (nCLI) and commands that can be run on the Controller VM to manage the Nutanix cluster. The nCLI allows administrators to run commands against the Nutanix cluster from a local machine or any Controller VM. The cluster command is used to manage the cluster and performs actions like starting, stopping, upgrading and configuring the cluster. The document also outlines various configuration parameters and file paths used by cluster management components.
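To give a flavor of these commands, here is a minimal sketch of a management session; subcommand names and output vary by NOS release, so verify against your version's documentation.

```shell
# Run from any Controller VM (or point a local nCLI at the cluster).

# Show per-node service status for the whole cluster
cluster status

# Start or stop all cluster services
cluster start
cluster stop

# Query cluster details and list member nodes via the nCLI
ncli cluster info
ncli host list
```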
The document provides instructions for configuring IP addresses on a Nutanix cluster in 3 steps:
1. Use the web-based configuration tool to automatically configure IP addresses on Controller VMs and the cluster by inputting network settings.
2. Log into any Controller VM with SSH and start the Nutanix cluster.
3. Verify the cluster services start properly on each node to confirm the cluster is running.
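Steps 2 and 3 above can be sketched as follows; the Controller VM address and user are placeholders for your environment.

```shell
# Step 2: log into any Controller VM over SSH and start the cluster
ssh nutanix@<controller_vm_address>   # placeholder address
cluster start

# Step 3: verify that services on every node report as up
cluster status
```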
The Foundation guide provides step-by-step instructions for using the Foundation tool to perform a field installation of Nutanix, which includes installing a hypervisor and a Nutanix Controller VM on each node and creating a cluster. The guide covers imaging both factory-prepared nodes and bare metal nodes, and either creating a cluster or just imaging nodes. Hardware and hypervisor compatibility and requirements are also documented.
The document provides instructions for installing NetXMS server software on UNIX systems. It describes downloading and unpacking the NetXMS source code, running the configure script to specify installation options like the database driver and installation location, compiling the code using make, installing it on the system using make install, copying configuration files to the default or custom location, and creating a database and user for NetXMS to use. The instructions also cover upgrading an existing NetXMS server installation on UNIX.
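The build sequence described above follows the familiar configure/make pattern. A hedged sketch, assuming a PostgreSQL driver and an /opt/netxms prefix (check the NetXMS documentation for the flags your version supports):

```shell
# Unpack the source and enter the tree
tar xzf netxms-*.tar.gz
cd netxms-*/

# Select the install location and database driver at configure time
./configure --prefix=/opt/netxms --with-server --with-pgsql

# Compile and install
make
sudo make install

# Then copy the sample server configuration into place (path per the
# install guide) and create the NetXMS database and user in PostgreSQL.
```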
This document provides instructions for integrating FreeRadius with Novell eDirectory to enable wireless authentication. It describes installing and configuring Novell OES Linux, applying necessary patches, installing FreeRadius and the RADIUS plugin for iManager, extending the eDirectory schema, generating certificates, and configuring FreeRadius, eDirectory, and clients. The goal is to set up wireless authentication against an eDirectory user directory using FreeRadius as the RADIUS server.
This document provides step-by-step instructions for creating a high availability SQL Server 2008 R2 cluster with two nodes using Microsoft Cluster Service on Windows Server 2008 R2. It discusses SQL clustering requirements, changes in Windows 2008/R2 related to clustering, and considerations for installing and configuring a two-node SQL cluster. The document also references additional technologies like Network Load Balancing that can provide further redundancy and scalability beyond normal clustering constraints.
Comparison of Citrix XenServer 6.2 and VMware vSphere 5.1, by Lorscheider Santiago
The document compares Citrix XenServer 6.2 and VMware vSphere 5.1. It discusses that both pioneered server virtualization and how their architectures have evolved. It then analyzes key areas like memory management, storage management, infrastructure management, and disaster recovery planning. It notes XenServer uses open standards like VHD and avoids proprietary formats. For desktop virtualization, it highlights XenServer integrates with Citrix XenDesktop and its Provisioning Services for template management to efficiently deploy golden images at scale.
Installing and Configuring Domino 10 on CentOS 7, by Devin Olson
Instructions on how to do a base-level installation of IBM/HCL's Domino 10 (10.0.1) server on a Red Hat-based (RHEL, CentOS, etc.) Linux server.
Includes partitioning, network configuration, ssh installation & configuration, group and user creation, minimal packages, firewall configuration, sticky bits, and more.
This document provides step-by-step instructions for installing and configuring IBM Domino 9 Social Edition on CentOS 6. It includes installing CentOS, configuring the OS, enabling required services, configuring the firewall to open ports for Domino, creating a user account, and performing Domino-specific configuration steps. The document contains detailed explanations and commands for completing a full ground-up installation of both CentOS and Domino.
cynapspro endpoint data protection - installation guide, by cynapspro GmbH
The document provides installation instructions for cynapspro Endpoint Data Protection 2010. It discusses installing the cynapspro server component, which manages client agents through a centralized database. The server requires a supported SQL server and reads an organization's directory service. Client components use a kernel driver to enforce policies controlled through the management console. Steps are outlined for installing SQL Server if needed, running the server setup, configuring database and directory settings, deploying the client agent, and installing additional modules like CryptionPro HDD.
Linux Server Hardening - Step by Step, by Sunil Paudel
This document walks through hardening a Linux server step by step. It uses Metasploitable, a deliberately vulnerable Ubuntu server designed to be hacked, as the target. All unnecessary services and ports are stopped; the server is assumed to be a web server only, so just ports 80 and 443 remain open. Firewall rules are then set, followed by Apache web server hardening, encryption of folders and files, disabling unused user accounts, and enforcing password policies.
This document provides steps to install and configure MySQL 5.1 on CentOS 6.4. It describes downloading required libraries, editing the configuration file to set the character set to UTF-8, starting the MySQL service, securing the root user and removing test databases. It also demonstrates creating a database and table, loading, querying, updating and backing up data.
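A condensed sketch of those steps on CentOS, assuming the distribution's mysql-server package provides the 5.1 series:

```shell
# Install server and client packages
sudo yum install -y mysql-server mysql

# Set UTF-8 as the server character set in /etc/my.cnf
sudo tee -a /etc/my.cnf <<'EOF'
[mysqld]
character-set-server=utf8
EOF

# Start the service, then secure root and remove test databases
sudo service mysqld start
sudo mysql_secure_installation
```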
This document provides instructions for enabling the EPEL (Extra Packages for Enterprise Linux) repository on RHEL (Red Hat Enterprise Linux) systems to gain access to additional packages. It describes downloading the EPEL rpm package, installing it to enable the repository, and checking that it is functioning properly by viewing available packages and installing a library from EPEL.
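In practice that boils down to a few commands; the URL below points at the EPEL 7 release package (pick the one matching your RHEL major version), and the sample package (htop) is just an illustration.

```shell
# Install the epel-release package to enable the repository
sudo rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

# Check that the repository shows up and is enabled
yum repolist | grep -i epel

# Install a package that lives in EPEL to confirm it works
sudo yum install -y htop
```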
This document provides an overview of FreeNAS 8.3.0, including its core features, ZFS and RAID concepts, hardware considerations, and what's new in version 8.3.0. It covers topics such as installing FreeNAS, using the graphical interface, upgrading, configuring volumes and shares, managing ZFS features like snapshots and replication, installing plugins using the Plugins Jail, and installing additional software. The document contains information to help users perform essential FreeNAS system administration tasks.
The document describes setting up a clustered ONTAP environment. It includes creating a cluster called "netappu" with nodes at locations "NetAppU" and adding various licenses. It also outlines configuring cluster network and management interfaces, setting up a Vserver for administration, enabling storage failover, and completing cluster node setup.
Medooze MCU Video Multiconference Server Installation and configuration guide, by sreeharsha43
This document provides instructions for installing and configuring a Medooze MCU videoconferencing system on Ubuntu 12.04 LTS. It describes how to install various software tools like Wireshark, Java JDK, and NetBeans IDE. It then explains how to install the Medooze Media Mixer Server and mcuWeb application. Finally, it outlines the steps to deploy mcuWeb in application servers like GlassFish, JBoss and Tomcat, and configure media mixers, video profiles, conferences and other features of the videoconferencing system.
In this document we provide a step-by-step guide to installing SQL Server Denali and enabling the SQL Server AlwaysOn feature.
Regards,
Eduardo Castro Martinez
http://ecastrom.blogspot.com
http://comunidadwindows.org
High Availability with Windows Server Clustering and Geo-Clustering, by StarWind Software
Find out why having an effective disaster recovery plan is so important and how this differs from simple application or server resiliency. This presentation explains the technical issues involved in implementing DR in a virtual server environment and shows how the StarWind Virtual SAN solution can help increase the availability of business-critical workloads.
You can also watch the webcast based on this slide presentation: http://www.starwindsoftware.com/reduce-disaster-recovery-and-business-continuity-expenses-video
This document provides a detailed description of the Gluster Storage Platform installation process. For demonstration purposes this guide will detail how to install and configure a two-node storage cluster. It also outlines how to create a storage volume and mount on clients.
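Once the two nodes are installed, volume creation and the client mount look roughly like this with the gluster CLI; host names and brick paths are illustrative.

```shell
# On node1: trust the second node and build a replicated volume
gluster peer probe node2
gluster volume create vol01 replica 2 node1:/export/brick1 node2:/export/brick1
gluster volume start vol01

# On a client: mount the volume with the GlusterFS native client
mount -t glusterfs node1:/vol01 /mnt/gluster
```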
A computer cluster consists of connected computers that work together as a single system. High availability (HA) systems are designed to avoid loss of service by managing failures and minimizing downtime. Key components of an HA cluster with openSUSE Leap include the Corosync messaging system, the Pacemaker resource manager, fencing devices to resolve split-brain situations, and shared storage solutions like DRBD. Setting up such a cluster involves installing openSUSE Leap on multiple VMs, configuring Corosync and Pacemaker for resource management and failover, implementing storage replication with DRBD, and testing the HA functionality of services like Nginx.
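As a sketch of the Pacemaker side of that setup, the crm shell can define a monitored Nginx resource; the resource name and monitor interval here are illustrative.

```shell
# Check that both nodes joined the cluster
crm status

# Define an Nginx resource managed via systemd, monitored every 30s
crm configure primitive web-nginx systemd:nginx \
    op monitor interval=30s

# Verify the configuration and watch the resource start on one node
crm configure show
crm_mon -1
```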
The document describes configuring an iSCSI target on a server to provide 5GB of shared block storage to clients. It involves creating an LVM volume from an unpartitioned disk, configuring the iSCSI target to use the LVM volume as a backing store, creating an ACL and LUN, then configuring an initiator on a client to discover and login to the target to access the LUN as a block device.
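That workflow reads as follows with LVM plus the LIO targetcli and open-iscsi tools; device names, IQNs, and the server address are illustrative assumptions.

```shell
# Carve a 5 GB logical volume out of the unpartitioned disk
sudo pvcreate /dev/sdb
sudo vgcreate vg_iscsi /dev/sdb
sudo lvcreate -L 5G -n lv_iscsi vg_iscsi

# Build the target: block backstore, IQN, LUN, and an ACL for the client
sudo targetcli /backstores/block create name=store1 dev=/dev/vg_iscsi/lv_iscsi
sudo targetcli /iscsi create iqn.2024-01.com.example:target1
sudo targetcli /iscsi/iqn.2024-01.com.example:target1/tpg1/luns create /backstores/block/store1
sudo targetcli /iscsi/iqn.2024-01.com.example:target1/tpg1/acls create iqn.2024-01.com.example:client1

# On the client: discover the target, log in, and use the block device
sudo iscsiadm -m discovery -t st -p <server_ip>
sudo iscsiadm -m node -T iqn.2024-01.com.example:target1 -p <server_ip> --login
```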
The document provides instructions for installing AIX5.3, HACMP, Oracle9i, and Weblogic 8.1 on IBM P510 servers with attached storage. It outlines the required hardware, including servers, storage arrays, and networking equipment. It then details the steps for hardware installation, disk array configuration, operating system installation, software package installation, system configuration, and volume group creation for database storage.
This "how-to" slideshare presentation outlines the Gluster Storage Platform installation process. For demonstration purposes we'll show you how to install and configure a two-node storage cluster.
This document provides instructions for setting up different types of Microsoft Cluster Service (MSCS) clusters in a VMware vSphere environment, including:
1) Clustering virtual machines on a single physical host to protect against OS and application failures.
2) Clustering virtual machines across physical hosts to protect against both software and hardware failures, which requires shared storage on a Fibre Channel SAN.
3) Clustering physical machines with virtual machines by having standby virtual machines on a single host that can take over for physical machines in the case of hardware failure.
This document provides installation instructions for Component Pack 6.0.0.6 across three servers. It details preparing the system by opening required firewall ports, installing prerequisites like Docker and Kubernetes, initializing the master node, joining worker nodes, and installing Helm. It also covers tasks like creating persistent volumes, labeling worker nodes for Elasticsearch, pushing images to the Docker registry, bootstrapping the Kubernetes cluster, and installing the Component Pack connections-env.
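A compressed sketch of the Kubernetes-related steps, with placeholder tokens, node names, label values, and chart paths (the real values come from the Component Pack documentation and your kubeadm output):

```shell
# On the master: initialize the control plane
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# On each worker: join with the token printed by kubeadm init
sudo kubeadm join <master_ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>

# Label the workers that should host Elasticsearch (key/value illustrative)
kubectl label node worker1 type=infrastructure

# Install a Component Pack chart with Helm (chart path illustrative)
helm install connections-env <extracted_chart_dir>/connections-env
```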
This document provides instructions for installing the RAIDar utility to connect the ReadyNAS to the network. It recommends following the FrontView Setup Wizard for initial configuration. The Setup Wizard guides the user through setting the clock, configuring alert contacts, network settings like IP address, security settings, and creating shared folders.
Want to create a system state backup quickly with WBAdmin? Read this article for detailed syntax and parameters. It also offers alternatives to wbadmin.
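For reference, the basic WBAdmin invocation looks like this from an elevated command prompt; the backup target drive letter is an assumption.

```shell
REM Create a system state backup to drive E: without prompting
wbadmin start systemstatebackup -backupTarget:E: -quiet

REM List stored backup versions to confirm it completed
wbadmin get versions
```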
If your business is considering a hyperconverged compute/storage solution rather than disparate dedicated appliances, a Nutanix storage cluster powered by Dell XC630 appliances could bring many benefits. Thanks to its powerful Dell servers with Intel processors, this space-efficient solution was able to handle nine SQL Server 2014 OLTP workloads at over 420,000 OPM, 160 mailboxes in Microsoft Exchange 2013, and file/print and web server disk workloads; that’s enough to meet your present demands and still have room for future growth. With software-defined tiered storage, high availability, and a redundant network architecture, the hyperconverged solution based on Dell XC630 appliances can help your business get the job done.
This document provides an overview of deploying and optimizing Splunk implementations on the Nutanix virtual computing platform. It describes the Nutanix architecture, Splunk software capabilities, and benefits of running Splunk on Nutanix. Testing showed a single Splunk VM could index 80,000-89,000 events per second, while multiple VMs scaled to 340,000-500,000 EPS. The Nutanix platform automatically tiers Splunk data for optimal performance as the data ages.
VMware vROps Management Pack for Nutanix Overview, by Blue Medora
The document discusses the VMware vRealize Operations Management Pack for Nutanix from Blue Medora. Blue Medora has partnered with VMware to expand the capabilities of vRealize Cloud Management by developing management packs. Their management pack for Nutanix extends vRealize Operations monitoring to include the entire Nutanix stack, including compute, storage, and virtualization resources. It connects directly to Nutanix Prism to collect over 1,000 metrics and provides visibility into relationships between Nutanix and virtual resources in a single view.
This document summarizes an interactive workshop on virtual desktop infrastructure (VDI) design. It discusses technical components of VDI like where desktops are delivered from and run, as well as storage considerations. Key topics covered include IO dispersion across different types of storage, squeezing virtual machines onto hardware, and balancing persistent versus non-persistent desktops. The document also outlines several important considerations for a successful VDI implementation like whether to use a traditional or converged infrastructure, how to pilot VDI, scaling out cost effectively, and where to focus efforts.
Prism is the control plane that simplifies datacenter operations by providing a single pane of glass to manage compute, storage and virtualization and offering rich automation and operational insights.
Simple & easy to use interface
Visitor Pre-Registration through a simple web page
Access control system integration
Visitor history & Dashboard Statistics (check in count, check out count, visitor count, etc.)
Visitors check-in & check-out tracking
Visitor Reports (daily / monthly / customize fields for filtering reports)
Auto fill for visitor details on revisit
Frequent visitor records maintained
Photo capture & ID & badge printing
Details of items carried by the visitor
Get the inside scoop on what Citrix and Nutanix are up to. In this session, you will discover the collaborative projects and visions that we are most excited about.
Copyright | Platform Administration Guide | NOS 3.5 | 2
Notice
Copyright
Copyright 2013 Nutanix, Inc.
Nutanix, Inc.
1740 Technology Drive, Suite 400
San Jose, CA 95110
All rights reserved. This product is protected by U.S. and international copyright and intellectual property
laws. Nutanix is a trademark of Nutanix, Inc. in the United States and/or other jurisdictions. All other marks
and names mentioned herein may be trademarks of their respective companies.
Conventions
Convention            Description
variable_value        The action depends on a value that is unique to your environment.
ncli> command         The commands are executed in the Nutanix nCLI.
user@host$ command    The commands are executed as a non-privileged user (such as nutanix) in the system shell.
root@host# command    The commands are executed as the root user in the hypervisor host (vSphere or KVM) shell.
output                The information is displayed as output from a command or in a log file.
Default Cluster Credentials
Interface                        Target                   Username   Password
Nutanix web console              Nutanix Controller VM    admin      admin
vSphere client                   ESXi host                root       nutanix/4u
SSH client or console            ESXi host                root       nutanix/4u
SSH client or console            KVM host                 root       nutanix/4u
SSH client                       Nutanix Controller VM    nutanix    nutanix/4u
IPMI web interface or ipmitool   Nutanix node             ADMIN      ADMIN
IPMI web interface or ipmitool   Nutanix node (NX-3000)   admin      admin
Version
Last modified: September 24, 2013 (2013-09-24-13:28 GMT-7)
Contents
Part I: NOS................................................................................................... 6
1: Cluster Management....................................................................... 7
To Start a Nutanix Cluster....................................................................................................... 7
To Stop a Cluster.....................................................................................................................7
To Destroy a Cluster................................................................................................................ 8
To Create Clusters from a Multiblock Cluster..........................................................................9
Disaster Protection................................................................................................................. 12
2: Password Management.................................................................15
To Change the Controller VM Password............................................................................... 15
To Change the ESXi Host Password.....................................................................................16
To Change the KVM Host Password.....................................................................................17
To Change the IPMI Password..............................................................................................18
3: Alerts...............................................................................................19
Cluster.....................................................................................................................................19
Controller VM..........................................................................................................................22
Guest VM................................................................................................................................24
Hardware.................................................................................................................................26
Storage....................................................................................................................................30
4: IP Address Configuration............................................................. 33
To Reconfigure the Cluster.................................................................................................... 33
To Prepare to Reconfigure the Cluster..................................................................................34
Remote Console IP Address Configuration........................................................................... 35
To Configure Host Networking............................................................................................... 38
To Configure Host Networking (KVM)....................................................................................39
To Update the ESXi Host Password in vCenter.................................................................... 40
To Change the Controller VM IP Addresses..........................................................................40
To Change a Controller VM IP Address (manual)................................................................. 41
To Complete Cluster Reconfiguration.................................................................................... 42
5: Field Installation............................................................................ 44
NOS Installer Reference.........................................................................................................44
To Image a Node................................................................................................................... 44
Part II: vSphere..........................................................................................47
6: vCenter Configuration...................................................................48
To Use an Existing vCenter Server....................................................................................... 48
7: VM Management............................................................................ 55
Migrating a VM to Another Cluster........................................................................................ 55
vStorage APIs for Array Integration....................................................................................... 57
Migrating vDisks to NFS.........................................................................................................58
8: Node Management.........................................................................62
To Shut Down a Node in a Cluster....................................................................................... 62
To Start a Node in a Cluster..................................................................................................63
To Restart a Node..................................................................................................................64
To Patch ESXi Hosts in a Cluster..........................................................................................65
Removing a Node...................................................................................................................65
9: Storage Replication Adapter for Site Recovery Manager.......... 68
To Configure the Nutanix Cluster for SRA Replication.......................................................... 69
To Configure SRA Replication on the SRM Servers............................................................. 70
Part III: KVM............................................................................................... 72
10: Kernel-based Virtual Machine (KVM) Architecture...................73
Storage Overview................................................................................................................... 73
VM Commands....................................................................................................................... 74
11: VM Management Commands......................................................75
virt_attach_disk.py.................................................................................................................. 76
virt_check_disks.py................................................................................................................. 77
virt_clone.py............................................................................................................................ 79
virt_detach_disk.py................................................................................................................. 80
virt_eject_cdrom.py................................................................................................................. 81
virt_insert_cdrom.py................................................................................................................82
virt_install.py........................................................................................................................... 83
virt_kill.py................................................................................................................................ 85
virt_kill_snapshot.py................................................................................................................86
virt_list_disks.py...................................................................................................................... 86
virt_migrate.py.........................................................................................................................87
virt_multiclone.py.................................................................................................................... 88
virt_snapshot.py...................................................................................................................... 89
nfs_ls.py.................................................................................................................................. 90
Part IV: Hardware...................................................................................... 93
12: Node Order...................................................................................94
13: System Specifications................................................................ 98
NX-1000 Series System Specifications..................................................................................98
NX-2000 System Specifications........................................................................................... 100
Part I: NOS
1: Cluster Management
Although each host in a Nutanix cluster runs its hypervisor independently of the other hosts, some
operations affect the entire cluster.
To Start a Nutanix Cluster
1. Log on to any Controller VM in the cluster with SSH.
2. Start the Nutanix cluster.
nutanix@cvm$ cluster start
If the cluster starts properly, output similar to the following is displayed for each node in the cluster:
CVM: 172.16.8.167 Up, ZeusLeader
Zeus UP [3148, 3161, 3162, 3163, 3170, 3180]
Scavenger UP [3333, 3345, 3346, 11997]
ConnectionSplicer UP [3379, 3392]
Hyperint UP [3394, 3407, 3408, 3429, 3440, 3447]
Medusa UP [3488, 3501, 3502, 3523, 3569]
DynamicRingChanger UP [4592, 4609, 4610, 4640]
Pithos UP [4613, 4625, 4626, 4678]
Stargate UP [4628, 4647, 4648, 4709]
Cerebro UP [4890, 4903, 4904, 4979]
Chronos UP [4906, 4918, 4919, 4968]
Curator UP [4922, 4934, 4935, 5064]
Prism UP [4939, 4951, 4952, 4978]
AlertManager UP [4954, 4966, 4967, 5022]
StatsAggregator UP [5017, 5039, 5040, 5091]
SysStatCollector UP [5046, 5061, 5062, 5098]
What to do next. After you have verified that the cluster is running, you can start guest VMs.
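If you want to script this verification, the status check can be sketched as a small shell helper. This is a hypothetical sketch, not part of NOS: it assumes the "ServiceName STATE [pids]" listing format shown above and simply flags any line reporting DOWN.

```shell
# Hypothetical helper (not a NOS command): scan a saved copy of the
# "cluster start" (or "cluster status") output and flag any service
# that is not UP. Assumes the "ServiceName STATE [pids]" format above.
check_services() {
  local logfile=$1
  if grep -q 'DOWN' "$logfile"; then
    echo "some services are DOWN"
    return 1
  fi
  echo "all services UP"
}
```

For example: nutanix@cvm$ cluster start | tee /tmp/start.log && check_services /tmp/start.log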
To Stop a Cluster
Before you begin. Shut down all guest virtual machines, including vCenter if it is running on the cluster.
Do not shut down Nutanix Controller VMs.
Note: This procedure stops all services provided by guest virtual machines, the Nutanix cluster,
and the hypervisor host.
1. Log on to a running Controller VM in the cluster with SSH.
2. Stop the Nutanix cluster.
nutanix@cvm$ cluster stop
Wait to proceed until output similar to the following is displayed for every Controller VM in the cluster.
CVM: 172.16.8.191 Up, ZeusLeader
Zeus UP [3167, 3180, 3181, 3182, 3191, 3201]
Scavenger UP [3334, 3351, 3352, 3353]
ConnectionSplicer DOWN []
Hyperint DOWN []
Medusa DOWN []
DynamicRingChanger DOWN []
Pithos DOWN []
Stargate DOWN []
Cerebro DOWN []
Chronos DOWN []
Curator DOWN []
Prism DOWN []
AlertManager DOWN []
StatsAggregator DOWN []
SysStatCollector DOWN []
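As the sample output shows, a fully stopped cluster leaves only Zeus and Scavenger running on each Controller VM. That completion test can be sketched as a shell function; this is a hypothetical helper, not a NOS command, and it parses a saved copy of the status output in the format shown above.

```shell
# Hypothetical helper (not a NOS command): succeed only when every
# service other than Zeus and Scavenger reports DOWN in the saved
# status output, i.e. "cluster stop" has completed on this CVM.
stopped_ok() {
  local leftovers
  leftovers=$(awk '$2 == "UP" && $1 != "Zeus" && $1 != "Scavenger" {print $1}' "$1")
  [ -z "$leftovers" ]
}
```

A wrapper could poll this after issuing cluster stop and proceed only once it succeeds on every Controller VM.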
To Destroy a Cluster
Destroying a cluster resets all nodes in the cluster to the factory configuration. All cluster configuration and
guest VM data is unrecoverable after destroying the cluster.
1. Log on to any Controller VM in the cluster with SSH.
2. Stop the Nutanix cluster.
nutanix@cvm$ cluster stop
Wait to proceed until output similar to the following is displayed for every Controller VM in the cluster.
CVM: 172.16.8.191 Up, ZeusLeader
Zeus UP [3167, 3180, 3181, 3182, 3191, 3201]
Scavenger UP [3334, 3351, 3352, 3353]
ConnectionSplicer DOWN []
Hyperint DOWN []
Medusa DOWN []
DynamicRingChanger DOWN []
Pithos DOWN []
Stargate DOWN []
Cerebro DOWN []
Chronos DOWN []
Curator DOWN []
Prism DOWN []
AlertManager DOWN []
StatsAggregator DOWN []
SysStatCollector DOWN []
3. If the nodes in the cluster have Intel PCIe-SSD drives, ensure they are mapped properly.
Check if the node has an Intel PCIe-SSD drive.
nutanix@cvm$ lsscsi | grep 'SSD 910'
→ If no items are listed, the node does not have an Intel PCIe-SSD drive and you can proceed to the
next step.
→ If two items are listed, the node does have an Intel PCIe-SSD drive.
If the node has an Intel PCIe-SSD drive, check if it is mapped correctly.
nutanix@cvm$ cat /proc/partitions | grep dm
→ If two items are listed, the drive is mapped correctly and you can proceed.
→ If no items are listed, the drive is not mapped correctly. Start then stop the cluster before proceeding.
Perform this check on every Controller VM in the cluster.
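The two checks in this step can be combined into one decision. The sketch below is illustrative (the function name and messages are not part of NOS); the two counts come from the lsscsi and /proc/partitions commands shown above:

```shell
#!/bin/sh
# Decide the next action from the two counts gathered in step 3:
#   ssd_entries = lines from "lsscsi | grep 'SSD 910'"
#   dm_entries  = lines from "cat /proc/partitions | grep dm"
pcie_ssd_status() {
  ssd_entries=$1
  dm_entries=$2
  if [ "$ssd_entries" -eq 0 ]; then
    echo "no Intel PCIe-SSD; proceed"
  elif [ "$dm_entries" -eq 2 ]; then
    echo "drive mapped correctly; proceed"
  else
    echo "drive not mapped; start then stop the cluster first"
  fi
}

# On a CVM the counts come from the commands in the procedure:
pcie_ssd_status "$(lsscsi 2>/dev/null | grep -c 'SSD 910')" \
                "$(grep -c dm /proc/partitions 2>/dev/null)"
```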
4. Destroy the cluster.
Caution: Performing this operation deletes all cluster and guest VM data in the cluster.
nutanix@cvm$ cluster -s cvm_ip_addr destroy
To Create Clusters from a Multiblock Cluster
The minimum size for a cluster is three nodes.
1. Remove nodes from the existing cluster.
→ If you want to preserve data on the existing cluster, remove nodes by following To Remove a Node
from a Cluster on page 65.
→ If you want multiple new clusters, destroy the existing cluster by following To Destroy a Cluster on
page 8.
2. Create one or more new clusters by following To Configure the Cluster on page 10.
Product Mixing Restrictions
While a Nutanix cluster can include different products, there are some restrictions.
Caution: Do not configure a cluster that violates any of the following rules.
Compatibility Matrix
             NX-1000  NX-2000  NX-2050  NX-3000  NX-3050  NX-6000
NX-1000 (1)     •        •        •        •        •        •
NX-2000         •        •        •        •        •
NX-2050         •        •        •        •        •        •
NX-3000         •        •        •        •        •        •
NX-3050         •        •        •        •        •        •
NX-6000 (2)     •                 •        •       • (3)     •
1. NX-1000 nodes can be mixed with other products in the same cluster only when they are running 10
GbE networking; they cannot be mixed when running 1 GbE networking. If NX-1000 nodes are using
the 1 GbE interface, the maximum cluster size is 8 nodes. If the nodes are using the 10 GbE interface,
the cluster has no limits other than the maximum supported cluster size that applies to all products.
2. NX-6000 nodes cannot be mixed with NX-2000 nodes in the same cluster.
3. Because it has a larger flash tier, the NX-3050 is recommended over other products for mixing
with the NX-6000.
• Any combination of NX-2000, NX-2050, NX-3000, and NX-3050 nodes can be mixed in the same
cluster.
• All nodes in a cluster must be the same hypervisor type (ESXi or KVM).
• All Controller VMs in a cluster must have the same NOS version.
• Mixed Nutanix clusters comprising NX-2000 nodes and other products are supported as specified
above. However, because the NX-2000 processor architecture differs from other models, vSphere does
not support enhanced/live vMotion of VMs from one type of node to another unless Enhanced vMotion
Capability (EVC) is enabled. For more information about EVC, see the vSphere 5 documentation and
the following VMware knowledge base articles:
• Enhanced vMotion Compatibility (EVC) processor support [1003212]
• EVC and CPU Compatibility FAQ [1005764]
To Configure the Cluster
Before you begin.
• Confirm that the system you are using to configure the cluster meets the following requirements:
• IPv6 link-local enabled.
• Windows 7, Windows Vista, or Mac OS.
• (Windows only) Bonjour installed (included with iTunes or downloadable from http://support.apple.com/kb/DL999).
• Determine the IPv6 service of any Controller VM in the cluster.
IPv6 service names are uniquely generated at the factory and have the following form (note the final
period):
NTNX-block_serial_number-node_location-CVM.local.
On the right side of the block toward the front is a label that has the block_serial_number (for example,
12AM3K520060). The node_location is a number 1-4 for NX-3000, a letter A-D for NX-1000/NX-2000/
NX-3050, or a letter A-B for NX-6000.
If you need to confirm that IPv6 link-local is enabled on the network, or if you cannot access the
node serial number, see the Nutanix support knowledge base for alternative methods.
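Putting the pattern together, the service name can be built from the two values on the label. A sketch using the example serial number from this guide (the node location "A" is an assumed example):

```shell
#!/bin/sh
# Assemble the IPv6 service name of a Controller VM from the block
# serial number and node location (pattern described above; note the
# trailing period).
block_serial="12AM3K520060"
node_location="A"    # 1-4, A-D, or A-B depending on the model
cvm_name="NTNX-${block_serial}-${node_location}-CVM.local."
echo "$cvm_name"
```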
1. Open a web browser.
Nutanix recommends using Internet Explorer 9 for Windows and Safari for Mac OS.
Note: Internet Explorer requires protected mode to be disabled. Go to Tools > Internet
Options > Security, clear the Enable Protected Mode check box, and restart the browser.
2. Navigate to http://cvm_host_name:2100/cluster_init.html.
Replace cvm_host_name with the IPv6 service name of any Controller VM that will be added to the
cluster.
Following is an example URL to access the cluster creation page on a Controller VM:
http://NTNX-12AM3K520060-1-CVM.local.:2100/cluster_init.html
If the cluster_init.html page is blank, then the Controller VM is already part of a cluster. Connect to a
Controller VM that is not part of a cluster.
3. Type a meaningful value in the Cluster Name field.
This value is appended to all automated communication between the cluster and Nutanix support. It
should include the customer's name and if necessary a modifier that differentiates this cluster from any
other clusters that the customer might have.
Note: This entity has the following naming restrictions:
• The maximum length is 75 characters.
• Allowed characters are uppercase and lowercase standard Latin letters (A-Z and a-z),
decimal digits (0-9), dots (.), hyphens (-), and underscores (_).
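The naming restrictions above can be checked before typing the name into the form; a small sketch (the valid_cluster_name helper and sample names are illustrative):

```shell
#!/bin/sh
# Validate a candidate cluster name: 1-75 characters drawn from
# A-Z, a-z, 0-9, dot, hyphen, and underscore.
valid_cluster_name() {
  printf '%s' "$1" | grep -qE '^[A-Za-z0-9._-]{1,75}$'
}

valid_cluster_name "AcmeCorp-Prod-01" && echo "valid"
valid_cluster_name "bad name!" || echo "invalid"
```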
4. Type the appropriate DNS and NTP addresses in the respective fields.
5. Type the appropriate subnet masks in the Subnet Mask row.
6. Type the appropriate default gateway IP addresses in the Default Gateway row.
7. Select the check box next to each node that you want to add to the cluster.
All unconfigured nodes on the current network are presented on this web page. If you will be configuring
multiple clusters, be sure that you only select the nodes that should be part of the current cluster.
8. Provide an IP address for all components in the cluster.
Note: The unconfigured nodes are not listed according to their position in the block. Ensure
that you assign the intended IP address to each node.
9. Click Create.
Wait until the Log Messages section of the page reports that the cluster has been successfully
configured.
Output similar to the following indicates successful cluster configuration.
Configuring IP addresses on node 12AM2K420010/A...
Configuring IP addresses on node 12AM2K420010/B...
Configuring IP addresses on node 12AM2K420010/C...
Configuring IP addresses on node 12AM2K420010/D...
Configuring Zeus on node 12AM2K420010/A...
Configuring Zeus on node 12AM2K420010/B...
Configuring Zeus on node 12AM2K420010/C...
Configuring Zeus on node 12AM2K420010/D...
Initializing cluster...
Cluster successfully initialized!
Initializing the cluster DNS and NTP servers...
Successfully updated the cluster NTP and DNS server list
10. Log on to any Controller VM in the cluster with SSH.
11. Start the Nutanix cluster.
nutanix@cvm$ cluster start
If the cluster starts properly, output similar to the following is displayed for each node in the cluster:
CVM: 172.16.8.167 Up, ZeusLeader
Zeus UP [3148, 3161, 3162, 3163, 3170, 3180]
Scavenger UP [3333, 3345, 3346, 11997]
ConnectionSplicer UP [3379, 3392]
Hyperint UP [3394, 3407, 3408, 3429, 3440, 3447]
Medusa UP [3488, 3501, 3502, 3523, 3569]
DynamicRingChanger UP [4592, 4609, 4610, 4640]
Pithos UP [4613, 4625, 4626, 4678]
Stargate UP [4628, 4647, 4648, 4709]
Cerebro UP [4890, 4903, 4904, 4979]
Chronos UP [4906, 4918, 4919, 4968]
Curator UP [4922, 4934, 4935, 5064]
Prism UP [4939, 4951, 4952, 4978]
AlertManager UP [4954, 4966, 4967, 5022]
StatsAggregator UP [5017, 5039, 5040, 5091]
SysStatCollector UP [5046, 5061, 5062, 5098]
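As with stopping the cluster, a scripted start can wait until no service reports DOWN. A minimal sketch over output in the format shown above (the helper and the captured sample are illustrative):

```shell
#!/bin/sh
# Succeed only when no line of "cluster status"-style output says DOWN.
all_services_up() {
  ! grep -q ' DOWN '
}

sample='Zeus UP [3148, 3161, 3162]
Stargate UP [4628, 4647, 4648]
Prism UP [4939, 4951, 4952]'

if printf '%s\n' "$sample" | all_services_up; then
  echo "cluster running"
else
  echo "not ready yet"
fi
```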
Disaster Protection
After VM protection is configured in the web console, managing snapshots and failing over from one
site to another are accomplished with the nCLI.
To Manage VM Snapshots
You can manage VM snapshots, including restoration, with these nCLI commands.
• Check status of replication.
ncli> pd list-replication-status
• List snapshots.
ncli> pd list-snapshots name="pd_name"
• Restore VMs from backup.
ncli> pd rollback-vms name="pd_name" vm-names="vm_ids" snap-id="snapshot_id" path-prefix="folder_name"
• Replace vm_ids with a comma-separated list of VM IDs as given in vm list.
• Replace snapshot_id with a snapshot ID as given by pd list-snapshots.
• Replace folder_name with the name you want to give the VM folder on the datastore, which will be
created if it does not exist.
The VM is restored to the container where the snapshot resides. If you used a DAS-SATA-only
container for replication, after restoring the VM move it to a container suitable for active workloads
with storage vMotion.
• Restore NFS files from backup.
ncli> pd rollback-nfs-files name="pd_name" files="nfs_files" snap-id="snapshot_id"
• Replace nfs_files with a comma-separated list of NFS files to restore.
• Replace snapshot_id with a snapshot ID as given by pd list-snapshots.
If you want to replace the existing file, include replace-nfs-files=true.
• Remove snapshots.
ncli> pd rm-snapshot name="pd_name" snap-ids="snapshot_ids"
Replace snapshot_ids with a comma-separated list of snapshot IDs as given by pd list-snapshots.
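When scripting restores, the comma-separated values these commands expect can be joined from a list. A sketch that only echoes the resulting command; the protection domain name, VM IDs, snapshot ID, and path prefix are placeholders, with real values coming from vm list and pd list-snapshots:

```shell
#!/bin/sh
# Join placeholder VM IDs into the comma-separated form expected by
# "pd rollback-vms" (all values here are illustrative).
set -- 503 504 509
vm_ids=$(IFS=,; printf '%s' "$*")
echo "ncli pd rollback-vms name=\"pd1\" vm-names=\"$vm_ids\" snap-id=\"2156\" path-prefix=\"restored\""
```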
To Fail from one Site to Another
Disaster failover
Connect to the backup site and activate it.
ncli> pd activate name="pd_name"
This operation does the following:
1. Restores all VM files from the last fully replicated snapshot.
2. Registers VMs on recovery site.
3. Marks the failover site protection domain as active.
Planned failover
Connect to the primary site and specify the failover site to migrate to.
ncli> pd migrate name="pd_name" remote-site="remote_site_name2"
This operation does the following:
1. Creates and replicates a snapshot of the protection domain.
2. Shuts down VMs on the local site.
3. Creates and replicates another snapshot of the protection domain.
4. Unregisters all VMs and removes their associated files.
5. Marks the local site protection domain as inactive.
6. Restores all VM files from the last snapshot and registers them on the remote site.
7. Marks the remote site protection domain as active.
2. Password Management
You can change the passwords of the following cluster components:
• Nutanix management interfaces
• Nutanix Controller VMs
• Hypervisor software
• Node hardware (management port)
Requirements
• You know the IP address of the component that you want to modify.
• You know the current password of the component you want to modify.
The default passwords of all components are provided in Default Cluster Credentials on page 2.
• You have selected a password that has 8 or more characters and at least one of each of the following:
• Upper-case letters
• Lower-case letters
• Numerals
• Symbols
To Change the Controller VM Password
Perform these steps on every Controller VM in the cluster.
Warning: The nutanix user must have the same password on all Controller VMs.
1. Log on to the Controller VM with SSH.
2. Change the nutanix user password.
nutanix@cvm$ passwd
3. Respond to the prompts, providing the current and new nutanix user password.
Changing password for nutanix.
Old Password:
New password:
Retype new password:
Password changed.
Note: The password must meet the following complexity requirements:
• At least 9 characters long
• At least 2 lowercase characters
• At least 2 uppercase characters
• At least 2 numbers
• At least 2 special characters
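The complexity rules in the note can be checked before running passwd. The sketch below is illustrative (the helper names and sample passwords are not part of NOS):

```shell
#!/bin/sh
# Count characters matching a class, then test the CVM password rules:
# at least 9 chars, 2 lowercase, 2 uppercase, 2 digits, 2 specials.
count_class() {
  printf '%s' "$1" | grep -o "$2" | wc -l
}

meets_policy() {
  p=$1
  [ "${#p}" -ge 9 ] &&
    [ "$(count_class "$p" '[a-z]')" -ge 2 ] &&
    [ "$(count_class "$p" '[A-Z]')" -ge 2 ] &&
    [ "$(count_class "$p" '[0-9]')" -ge 2 ] &&
    [ "$(count_class "$p" '[^A-Za-z0-9]')" -ge 2 ]
}

meets_policy 'Nn5#7Qq!x' && echo "meets policy"
meets_policy 'password1' || echo "too weak"
```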
To Change the ESXi Host Password
The cluster software needs to be able to log into each host as root to perform standard cluster operations,
such as mounting a new NFS datastore or querying the status of VMs in the cluster. Therefore, after
changing the ESXi root password it is critical to update the cluster configuration with the new password.
Tip: Although it is not required for the root user to have the same password on all hosts, doing so
will make cluster management and support much easier. If you do select a different password for
one or more hosts, make sure to note the password for each host.
1. Change the root password of all hosts.
Perform these steps on every ESXi host in the cluster.
a. Log on to the ESXi host with SSH.
b. Change the root password.
root@esx# passwd root
c. Respond to the prompts, providing the current and new root password.
Changing password for root.
Old Password:
New password:
Retype new password:
Password changed.
2. Update the root user password for all hosts in the Zeus configuration.
Warning: If you do not perform this step, the web console will no longer show correct statistics
and alerts, and other cluster operations will fail.
a. Log on to any Controller VM in the cluster with SSH.
b. Find the host IDs.
nutanix@cvm$ ncli -p 'admin_password' host list | grep -E 'ID|Hypervisor Key'
Note the host ID for each hypervisor host.
c. Update the hypervisor host password.
nutanix@cvm$ ncli -p 'admin_password' managementserver edit name=host_addr password='host_password'
nutanix@cvm$ ncli -p 'admin_password' host edit id=host_id hypervisor-password='host_password'
• Replace host_addr with the IP address of the hypervisor host.
• Replace host_id with a host ID you determined in the preceding step.
• Replace host_password with the root password on the corresponding hypervisor host.
Perform this step for every hypervisor host in the cluster.
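Because this step repeats for every host, the ncli update can be driven from a loop. The sketch below only echoes the commands so it is safe to inspect; the host IDs and password are placeholders, with real IDs coming from the host list output in step 2b:

```shell
#!/bin/sh
# Echo (rather than run) the per-host ncli update; on a CVM, replace
# echo with the real command once the IDs and password are verified.
new_password='host_password'   # placeholder
for host_id in 5 6 7 8; do     # placeholder IDs from "ncli host list"
  echo ncli host edit id="$host_id" hypervisor-password="$new_password"
done
```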
3. Update the ESXi host password.
a. Log on to vCenter with the vSphere client.
b. Right-click the host with the changed password and select Disconnect.
c. Right-click the host and select Connect.
d. Enter the new password and complete the Add Host Wizard.
If reconnecting the host fails, remove it from the cluster and add it again.
To Change the KVM Host Password
The cluster software needs to be able to log into each host as root to perform standard cluster operations,
such as mounting a new NFS datastore or querying the status of VMs in the cluster. Therefore, after
changing the KVM root password it is critical to update the cluster configuration with the new password.
Tip: Although it is not required for the root user to have the same password on all hosts, doing so
will make cluster management and support much easier. If you do select a different password for
one or more hosts, make sure to note the password for each host.
1. Change the root password of all hosts.
Perform these steps on every KVM host in the cluster.
a. Log on to the KVM host with SSH.
b. Change the root password.
root@kvm# passwd root
c. Respond to the prompts, providing the current and new root password.
Changing password for root.
Old Password:
New password:
Retype new password:
Password changed.
2. Update the root user password for all hosts in the Zeus configuration.
Warning: If you do not perform this step, the web console will no longer show correct statistics
and alerts, and other cluster operations will fail.
a. Log on to any Controller VM in the cluster with SSH.
b. Find the host IDs.
nutanix@cvm$ ncli -p 'admin_password' host list | grep -E 'ID|Hypervisor Key'
Note the host ID for each hypervisor host.
c. Update the hypervisor host password.
nutanix@cvm$ ncli -p 'admin_password' managementserver edit name=host_addr password='host_password'
nutanix@cvm$ ncli -p 'admin_password' host edit id=host_id hypervisor-password='host_password'
• Replace host_addr with the IP address of the hypervisor host.
• Replace host_id with a host ID you determined in the preceding step.
• Replace host_password with the root password on the corresponding hypervisor host.
Perform this step for every hypervisor host in the cluster.
To Change the IPMI Password
The cluster software needs to be able to log into the management interface on each host to perform certain
operations, such as reading hardware alerts. Therefore, after changing the IPMI password it is critical to
update the cluster configuration with the new password.
Tip: Although it is not required for the administrative user to have the same password on all hosts,
doing so will make cluster management much easier. If you do select a different password for one
or more hosts, make sure to note the password for each host.
1. Change the administrative user password of all IPMI hosts.
Product Administrative user
NX-1000, NX-3050, NX-6000 ADMIN
NX-3000 admin
NX-2000 ADMIN
Perform these steps on every IPMI host in the cluster.
a. Sign in to the IPMI web interface as the administrative user.
b. Click Configuration.
c. Click Users.
d. Select the administrative user and then click Modify User.
e. Type the new password in both text fields and then click Modify.
f. Click OK to close the confirmation window.
2. Update the administrative user password for all hosts in the Zeus configuration.
a. Log on to any Controller VM in the cluster with SSH.
b. Generate a list of all hosts in the cluster.
nutanix@cvm$ ncli -p 'admin_password' host list | grep -E 'ID|IPMI Address'
Note the host ID of each entry in the list.
c. Update the IPMI password.
nutanix@cvm$ ncli -p 'admin_password' host edit id=host_id ipmi-password='ipmi_password'
• Replace host_id with a host ID you determined in the preceding step.
• Replace ipmi_password with the administrative user password on the corresponding IPMI host.
Perform this step for every IPMI host in the cluster.
3. Alerts
This section lists all the NOS alerts with cause and resolution, sorted by category.
• Cluster
• Controller VM
• Guest VM
• Hardware
• Storage
Cluster
CassandraDetachedFromRing [A1055]
Message Cassandra on CVM ip_address is now detached from ring due to reason.
Cause Either a metadata drive has failed, the node was down for an extended period of time,
or an unexpected subsystem fault was encountered, so the node was removed from the
metadata store.
Resolution If the metadata drive has failed, replace the metadata drive as soon as possible. Refer
to the Nutanix documentation for instructions. If the node was down for an extended
period of time and is now running, add it back to the metadata store with the "host
enable-metadata-store" nCLI command. Otherwise, contact Nutanix support.
Severity kCritical
CassandraMarkedToBeDetached [A1054]
Message Cassandra on CVM ip_address is marked to be detached from ring due to reason.
Cause Either a metadata drive has failed, the node was down for an extended period of time,
or an unexpected subsystem fault was encountered, so the node is marked to be
removed from the metadata store.
Resolution If the metadata drive has failed, replace the metadata drive as soon as possible. Refer
to the Nutanix documentation for instructions. If the node was down for an extended
period of time and is now running, add it back to the metadata store with the "host
enable-metadata-store" nCLI command. Otherwise, contact Nutanix support.
Severity kCritical
DuplicateRemoteClusterId [A1038]
Message Remote cluster 'remote_name' is disabled because the name conflicts with
remote cluster 'conflicting_remote_name'.
Cause Two remote sites with different names or different IP addresses have the same cluster ID.
This can happen in two cases: (a) A remote cluster is added twice under two different
names (through different IP addresses) or (b) Two clusters have the same cluster ID.
Resolution In case (a), remove the duplicate remote site. In case (b), verify that both clusters
have the same cluster ID and contact Nutanix support.
Severity kWarning
JumboFramesDisabled [A1062]
Message Jumbo frames could not be enabled on the iface interface in the last three
attempts.
Cause Jumbo frames could not be enabled on the Controller VMs.
Resolution Ensure that jumbo frames are enabled on the 10 GbE network switch.
Severity kCritical
NetworkDisconnect [A1041]
Message IPMI interface target_ip is not reachable from Controller VM source_ip in the
last six attempts.
Cause The IPMI interface is down or there is a network connectivity issue.
Resolution Ensure that the IPMI interface is functioning and that physical networking, VLANs, and
virtual switches are configured correctly.
Severity kWarning
NetworkDisconnect [A1006]
Message Hypervisor target_ip is not reachable from Controller VM source_ip in the last
six attempts.
Cause The hypervisor host is down or there is a network connectivity issue.
Resolution Ensure that the hypervisor host is running and that physical networking, VLANs, and
virtual switches are configured correctly.
Severity kCritical
NetworkDisconnect [A1048]
Message Controller VM svm_ip with network address svm_subnet is in a different network
than the Hypervisor hypervisor_ip, which is in the network hypervisor_subnet.
Cause The Controller VM and the hypervisor are not on the same subnet.
Resolution Reconfigure the cluster. Either move the Controller VMs to the same subnet as the
hypervisor hosts or move the hypervisor hosts to the same subnet as the Controller
VMs.
Severity kCritical
NetworkDisconnect [A1040]
Message Hypervisor target_ip is not reachable from Controller VM source_ip in the last
three attempts.
Cause The hypervisor host is down or there is a network connectivity issue.
Resolution Ensure that the hypervisor host is running and that physical networking, VLANs, and
virtual switches are configured correctly.
Severity kCritical
RemoteSupportEnabled [A1051]
Message Daily reminder that remote support tunnel to Nutanix HQ is enabled on this
cluster.
Cause Nutanix support staff are able to access the cluster to assist with any issue.
Resolution No action is necessary.
Severity kInfo
TimeDifferenceHigh [A1017]
Message Wall clock time has drifted by more than time_difference_limit_secs seconds
between the Controller VMs lower_time_ip and higher_time_ip.
Cause The cluster does not have NTP servers configured or they are not reachable.
Resolution Ensure that the cluster has NTP servers configured and that the NTP servers are
reachable from all Controller VMs.
Severity kWarning
ZeusConfigMismatch [A1008]
Message IPMI IP address on Controller VM svm_ip_address was updated from
zeus_ip_address to invalid_ip_address without following the Nutanix IP
Reconfiguration procedure.
Cause The IP address configured in the cluster does not match the actual setting of the IPMI
interface.
Resolution Follow the IP address change procedure in the Nutanix documentation.
Severity kCritical
ZeusConfigMismatch [A1009]
Message IP address of Controller VM zeus_ip_address has been updated to
invalid_ip_address. The Controller VM will not be part of the cluster once the
change comes into effect, unless zeus configuration is updated.
Cause The IP address configured in the cluster does not match the actual setting of the
Controller VM.
Resolution Follow the IP address change procedure in the Nutanix documentation.
Severity kCritical
ZeusConfigMismatch [A1029]
Message Hypervisor IP address on Controller VM svm_ip_address was updated from
zeus_ip_address to invalid_ip_address without following the Nutanix IP
Reconfiguration procedure.
Cause The IP address configured in the cluster does not match the actual setting of the
hypervisor.
Resolution Follow the IP address change procedure in the Nutanix documentation.
Severity kCritical
Controller VM
CVMNICSpeedLow [A1058]
Message Controller VM service_vm_external_ip is not running on 10 Gbps network
interface. This will degrade the system performance.
Cause The Controller VM is not configured to use the 10 Gbps NIC or is configured to share
load with a slower NIC.
Resolution Connect the Controller VM to 10 Gbps NICs only.
Severity kWarning
CVMRAMUsageHigh [A1056]
Message Main memory usage in Controller VM ip_address is high in the last 20 minutes.
free_memory_kb KB of memory is free.
Cause The RAM usage on the Controller VM has been high.
Resolution Contact Nutanix Support for diagnosis. RAM on the Controller VM may need to be
increased.
Severity kCritical
CVMRebooted [A1024]
Message Controller VM ip_address has been rebooted.
Cause Various
Resolution If the Controller VM was restarted intentionally, no action is necessary. If it restarted by
itself, contact Nutanix support.
Severity kCritical
IPMIError [A1050]
Message Controller VM ip_address is unable to fetch IPMI SDR repository.
Cause The IPMI interface is down or there is a network connectivity issue.
Resolution Ensure that the IPMI interface is functioning and that physical networking, VLANs, and
virtual switches are configured correctly.
Severity kCritical
KernelMemoryUsageHigh [A1034]
Message Controller VM ip_address's kernel memory usage is higher than expected.
Cause Various
Resolution Contact Nutanix support.
Severity kCritical
NetworkDisconnect [A1001]
Message Controller VM target_ip is not reachable from Controller VM source_ip in the
last six attempts.
Cause The Controller VM is down or there is a network connectivity issue.
Resolution If the Controller VM does not respond to ping, turn it on. Ensure that physical
networking, VLANs, and virtual switches are configured correctly.
Severity kCritical
NetworkDisconnect [A1011]
Message Controller VM target_ip is not reachable from Controller VM source_ip in the
last three attempts.
Cause The Controller VM is down or there is a network connectivity issue.
Resolution Ensure that the Controller VM is running and that physical networking, VLANs, and
virtual switches are configured correctly.
Severity kCritical
NodeInMaintenanceMode [A1013]
Message Controller VM ip_address is put in maintenance mode due to reason.
Cause Node removal has been initiated.
Resolution No action is necessary.
Severity kInfo
ServicesRestartingFrequently [A1032]
Message There have been 10 or more cluster services restarts within 15 minutes.
Cause This alert usually indicates that the Controller VM was restarted, but there could be
other causes.
Resolution If this alert occurs once or infrequently, no action is necessary. If it is frequent, contact
Nutanix support.
Severity kCritical
StargateTemporarilyDown [A1030]
Message Stargate on Controller VM ip_address is down for downtime seconds.
Cause Various
Resolution Contact Nutanix support.
Severity kCritical
Guest VM
ProtectedVmNotFound [A1010]
Message Unable to locate VM with name 'vm_name' and internal ID 'vm_id' in protection
domain 'protection_domain_name'.
Cause The VM was deleted.
Resolution Remove the VM from the protection domain.
Severity kWarning
ProtectionDomainActivation [A1043]
Message Unable to make protection domain 'protection_domain_name' active on remote
site 'remote_name' due to 'reason'.
Cause Various
Resolution Resolve the stated reason for the failure. If you cannot resolve the error, contact
Nutanix support.
Severity kCritical
ProtectionDomainChangeModeFailure [A1060]
Message Protection domain protection_domain_name activate/deactivate failed. reason
Cause Protection domain cannot be activated or migrated.
Resolution Resolve the stated reason for the failure. If you cannot resolve the error, contact
Nutanix support.
Severity kCritical
ProtectionDomainReplicationExpired [A1003]
Message Protection domain protection_domain_name replication to the remote site
remote_name has expired before it is started.
Cause Replication is taking too long to complete before the snapshots expire.
Resolution Review replication schedules taking into account bandwidth and overall load on
systems. Confirm retention time on replicated snapshots.
Severity kWarning
ProtectionDomainReplicationFailure [A1015]
Message Protection domain protection_domain_name replication to remote site
remote_name failed. reason
Cause Various
Resolution Resolve the stated reason for the failure. If you cannot resolve the error, contact
Nutanix support.
Severity kCritical
ProtectionDomainSnapshotFailure [A1064]
Message Protection domain protection_domain_name snapshot snapshot_id failed. reason
Cause Protection domain cannot be snapshotted.
Resolution Make sure all VMs and files are available.
Severity kCritical
VMAutoStartDisabled [A1057]
Message Virtual Machine auto start is disabled on the hypervisor of Controller VM
service_vm_external_ip
Cause Auto start of the Controller VM is disabled.
Resolution Enable auto start of the Controller VM as recommended by Nutanix. If auto start is
intentionally disabled, no action is necessary.
Severity kInfo
VMLimitExceeded [A1053]
Message The number of virtual machines on node node_serial is vm_count, which is above
the limit vm_limit.
Cause The node is running more virtual machines than the hardware can support.
Resolution Shut down VMs or move them to other nodes in the cluster.
Severity kCritical
VmActionError [A1033]
Message Failed to action VM with name 'vm_name' and internal ID 'vm_id' due to reason
Cause A VM could not be restored because of a hypervisor error, or could not be deleted
because it is still in use.
Resolution Resolve the stated reason for the failure. If you cannot resolve the error, contact
Nutanix support.
Severity kCritical
VmRegistrationError [A1002]
Message Failed to register VM using name 'vm_name' with the hypervisor due to reason
Cause An error on the hypervisor.
Resolution Resolve the stated reason for the failure. If you cannot resolve the error, contact
Nutanix support.
Severity kCritical
Hardware
CPUTemperatureHigh [A1049]
Message Temperature of CPU cpu_id exceeded temperatureC on Controller VM ip_address
Cause The device is overheating to the point of imminent failure.
Resolution Ensure that the fans in the block are functioning properly and that the environment is
cool enough.
Severity kCritical
DiskBad [A1044]
Message Disk disk_position on node node_position of block block_position is marked
offline due to IO errors. Serial number of the disk is disk_serial in node
node_serial of block block_serial.
Cause The drive has failed.
Resolution Replace the failed drive. Refer to the Nutanix documentation for instructions.
Severity kCritical
FanSpeedLow [A1020]
Message Speed of fan fan_id exceeded fan_rpm RPM on Controller VM ip_address.
Cause The device is overheating to the point of imminent failure.
Resolution Ensure that the fans in the block are functioning properly and that the environment is
cool enough.
Severity kCritical
FanSpeedLow [A1045]
Message Fan fan_id has stopped on Controller VM ip_address.
Cause A fan has failed.
Resolution Replace the fan as soon as possible. Refer to the Nutanix documentation for
instructions.
Severity kCritical
FusionIOTemperatureHigh [A1016]
Message Fusion-io drive device temperature exceeded temperatureC on Controller VM
ip_address
Cause The device is overheating.
Resolution Ensure that the fans in the block are functioning properly and that the environment is
cool enough.
Severity kWarning
FusionIOTemperatureHigh [A1047]
Message Fusion-io drive device temperature exceeded temperatureC on Controller VM
ip_address
Cause The device is overheating to the point of imminent failure.
Resolution Ensure that the fans in the block are functioning properly and that the environment is
cool enough.
Severity kCritical
FusionIOWearHigh [A1014]
Message Fusion-io drive die failure has occurred in Controller VM svm_ip and most of
the Fusion-io drives have worn out beyond 1.2PB of writes.
Cause The drives are approaching the maximum write endurance and are beginning to fail.
Resolution Replace the drives as soon as possible. Refer to the Nutanix documentation for
instructions.
Severity kCritical
FusionIOWearHigh [A1026]
Message Fusion-io drive die failures have occurred in Controller VMs svm_ip_list.
Cause The drive is failing.
Resolution Replace the drive as soon as possible. Refer to the Nutanix documentation for
instructions.
Severity kCritical
HardwareClockFailure [A1059]
Message Hardware clock in node node_serial has failed.
Cause The RTC clock on the host has failed or the RTC battery has died.
Resolution Replace the node. Refer to the Nutanix documentation for instructions.
Severity kCritical
IntelSSDTemperatureHigh [A1028]
Message Intel 910 SSD device device temperature exceeded temperatureC on the
Controller VM ip_address.
Cause The device is overheating.
Resolution Ensure that the fans in the block are functioning properly and that the environment is
cool enough.
Severity kWarning
IntelSSDTemperatureHigh [A1007]
Message Intel 910 SSD device device temperature exceeded temperatureC on the
Controller VM ip_address.
Cause The device is overheating to the point of imminent failure.
Resolution Ensure that the fans in the block are functioning properly and that the environment is
cool enough.
Severity kCritical
IntelSSDWearHigh [A1035]
Message Intel 910 SSD device device on the Controller VM ip_address has worn out
beyond 6.5PB of writes.
Cause The drive is approaching the maximum write endurance.
Resolution Consider replacing the drive.
Severity kWarning
IntelSSDWearHigh [A1042]
Message Intel 910 SSD device device on the Controller VM ip_address has worn out
beyond 7PB of writes.
Cause The drive is close to the maximum write endurance and failure is imminent.
Resolution Replace the drive as soon as possible. Refer to the Nutanix documentation for
instructions.
Severity kCritical
PowerSupplyDown [A1046]
Message power_source power source is down on block block_position.
Cause The power supply has failed.
Resolution Replace the power supply as soon as possible. Refer to the Nutanix documentation for
instructions.
Severity kCritical
RAMFault [A1052]
Message DIMM fault detected on Controller VM ip_address. The node is running with
current_memory_gb GB whereas installed_memory_gb GB was installed.
Cause A DIMM has failed.
Resolution Replace the failed DIMM as soon as possible. Refer to the Nutanix documentation for
instructions.
Severity kCritical
RAMTemperatureHigh [A1022]
Message Temperature of DIMM dimm_id for CPU cpu_id exceeded temperatureC on Controller
VM ip_address
Cause The device is overheating to the point of imminent failure.
Resolution Ensure that the fans in the block are functioning properly and that the environment is
cool enough.
Severity kCritical
SystemTemperatureHigh [A1012]
Message System temperature exceeded temperatureC on Controller VM ip_address
Cause The node is overheating to the point of imminent failure.
Resolution Ensure that the fans in the block are functioning properly and that the environment is
cool enough.
Severity kCritical
Storage
DiskInodeUsageHigh [A1018]
Message Inode usage for one or more disks on Controller VM ip_address has exceeded
75%.
Cause The filesystem contains too many files.
Resolution Delete unneeded data or add nodes to the cluster.
Severity kWarning
DiskInodeUsageHigh [A1027]
Message Inode usage for one or more disks on Controller VM ip_address has exceeded
90%.
Cause The filesystem contains too many files.
Resolution Delete unneeded data or add nodes to the cluster.
Severity kCritical
DiskSpaceUsageHigh [A1031]
Message Disk space usage for one or more disks on Controller VM ip_address has
exceeded warn_limit%.
Cause Too much data is stored on the node.
Resolution Delete unneeded data or add nodes to the cluster.
Severity kWarning
DiskSpaceUsageHigh [A1005]
Message Disk space usage for one or more disks on Controller VM ip_address has
exceeded critical_limit%.
Cause Too much data is stored on the node.
Resolution Delete unneeded data or add nodes to the cluster.
Severity kCritical
FusionIOReserveLow [A1023]
Message Fusion-io drive device reserves are down to reserve% on Controller VM
ip_address.
Cause The drive is beginning to fail.
Resolution Consider replacing the drive.
Severity kWarning
FusionIOReserveLow [A1039]
Message Fusion-io drive device reserves are down to reserve% on Controller VM
ip_address.
Cause The drive is failing.
Resolution Replace the drive as soon as possible. Refer to the Nutanix documentation for
instructions.
Severity kCritical
SpaceReservationViolated [A1021]
Message Space reservation configured on vdisk vdisk_name belonging to container id
container_id could not be honored due to insufficient disk space resulting
from a possible disk or node failure.
Cause A drive or a node has failed, and the space reservations on the cluster can no longer be
met.
Resolution Change space reservations to total less than 90% of the available storage, and
replace the drive or node as soon as possible. Refer to the Nutanix documentation for
instructions.
Severity kWarning
VDiskBlockMapUsageHigh [A1061]
Message Too many snapshots have been allocated in the system. This may cause
perceivable performance degradation.
Cause Too many vdisks or snapshots are present in the system.
Resolution Remove unneeded snapshots and vdisks. If using remote replication, try to lower the
frequency of taking snapshots. If you cannot resolve the error, contact Nutanix support.
Severity kInfo
4
IP Address Configuration
NOS includes a web-based configuration tool that automates the assignment of new IP addresses to the
Controller VMs and configures the cluster to use them. Other cluster components must be modified
manually.
Requirements
The web-based configuration tool requires that IPv6 link-local be enabled on the subnet. If IPv6 link-local is
not available, you must configure the Controller VM IP addresses and the cluster manually. The web-based
configuration tool also requires that the Controller VMs be able to communicate with each other.
All Controller VMs and hypervisor hosts must be on the same subnet. If the IPMI interfaces are connected,
Nutanix recommends that they be on the same subnet as the Controller VMs and hypervisor hosts.
Guest VMs can be on a different subnet.
To Reconfigure the Cluster
Warning: If you are reassigning a Controller VM IP address to another Controller VM, you must
perform this complete procedure twice: once to assign intermediate IP addresses and again to
assign the desired IP addresses.
For example, if Controller VM A has IP address 172.16.0.11 and Controller VM B has IP address
172.16.0.10 and you want to swap them, you would need to reconfigure them with different IP
addresses (such as 172.16.0.100 and 172.16.0.101) before changing them to the IP addresses in
use initially.
1. Place the cluster in reconfiguration mode by following To Prepare to Reconfigure the Cluster on
page 34.
2. Configure the IPMI IP addresses by following the procedure for your hardware model.
→ To Configure the Remote Console IP Address (NX-1000, NX-3050, NX-6000) on page 35
→ To Configure the Remote Console IP Address (NX-3000) on page 35
→ To Configure the Remote Console IP Address (NX-2000) on page 36
Alternatively, you can set the IPMI IP address using a command-line utility by following To Configure
the Remote Console IP Address (command line) on page 37.
3. Configure networking on the node by following the hypervisor-specific procedure.
→ vSphere: To Configure Host Networking on page 38
→ KVM: To Configure Host Networking (KVM) on page 39
4. (vSphere only) Update the ESXi host IP addresses in vCenter by following To Update the ESXi Host
Password in vCenter on page 40.
5. Configure the Controller VM IP addresses.
→ If IPv6 is enabled on the subnet, follow To Change the Controller VM IP Addresses on page 40.
→ If IPv6 is not enabled on the subnet, follow To Change a Controller VM IP Address (manual) on
page 41 for each Controller VM in the cluster.
6. Complete cluster reconfiguration by following To Complete Cluster Reconfiguration on page 42.
To Prepare to Reconfigure the Cluster
1. Log on to any Controller VM in the cluster with SSH.
2. Stop the Nutanix cluster.
nutanix@cvm$ cluster stop
Wait to proceed until output similar to the following is displayed for every Controller VM in the cluster.
CVM: 172.16.8.191 Up, ZeusLeader
Zeus UP [3167, 3180, 3181, 3182, 3191, 3201]
Scavenger UP [3334, 3351, 3352, 3353]
ConnectionSplicer DOWN []
Hyperint DOWN []
Medusa DOWN []
DynamicRingChanger DOWN []
Pithos DOWN []
Stargate DOWN []
Cerebro DOWN []
Chronos DOWN []
Curator DOWN []
Prism DOWN []
AlertManager DOWN []
StatsAggregator DOWN []
SysStatCollector DOWN []
3. Put the cluster in reconfiguration mode.
nutanix@cvm$ cluster reconfig
Type y to confirm the reconfiguration.
Wait until the cluster successfully enters reconfiguration mode, as shown in the following example.
INFO cluster:185 Restarted Genesis on 172.16.8.189.
INFO cluster:185 Restarted Genesis on 172.16.8.188.
INFO cluster:185 Restarted Genesis on 172.16.8.191.
INFO cluster:185 Restarted Genesis on 172.16.8.190.
INFO cluster:864 Success!
Remote Console IP Address Configuration
The Intelligent Platform Management Interface (IPMI) is a standardized interface used to manage a host
and monitor its operation. To enable remote access to the console of each host, you must configure the
IPMI settings within BIOS.
The Nutanix cluster provides a Java application to remotely view the console of each node, or host server.
You can use this console to configure additional IP addresses in the cluster.
The procedure for configuring the remote console IP address is slightly different for each hardware
platform.
To Configure the Remote Console IP Address (NX-1000, NX-3050, NX-6000)
1. Connect a keyboard and monitor to a node in the Nutanix block.
2. Restart the node and press Delete to enter the BIOS setup utility.
You will have a limited amount of time to enter BIOS before the host completes the restart process.
3. Press the right arrow key to select the IPMI tab.
4. Press the down arrow key until BMC network configuration is highlighted and then press Enter.
5. Select Configuration Address source and press Enter.
6. Select Static and press Enter.
7. Assign the Station IP address, Subnet mask, and Router IP address.
8. Review the BIOS settings and press F4 to save the configuration changes and exit the BIOS setup
utility.
The node restarts.
To Configure the Remote Console IP Address (NX-3000)
1. Connect a keyboard and monitor to a node in the Nutanix block.
2. Restart the node and press Delete to enter the BIOS setup utility.
You will have a limited amount of time to enter BIOS before the host completes the restart process.
3. Press the right arrow key to select the Server Mgmt tab.
4. Press the down arrow key until BMC network configuration is highlighted and then press Enter.
5. Select Configuration source and press Enter.
6. Select Static on next reset and press Enter.
7. Assign the Station IP address, Subnet mask, and Router IP address.
8. Press F10 to save the configuration changes.
9. Review the settings and then press Enter.
The node restarts.
To Configure the Remote Console IP Address (NX-2000)
1. Connect a keyboard and monitor to a node in the Nutanix block.
2. Restart the node and press Delete to enter the BIOS setup utility.
You will have a limited amount of time to enter BIOS before the host completes the restart process.
3. Press the right arrow key to select the Advanced tab.
4. Press the down arrow key until IPMI Configuration is highlighted and then press Enter.
5. Select Set LAN Configuration and press Enter.
6. Select Static to assign an IP address, subnet mask, and gateway address.
7. Press F10 to save the configuration changes.
8. Review the settings and then press Enter.
9. Restart the node.
To Configure the Remote Console IP Address (command line)
You can configure the management interface from the hypervisor host on the same node.
Perform these steps once from each hypervisor host in the cluster where the management network
configuration needs to be changed.
1. Log on to the hypervisor host with SSH or the IPMI remote console.
2. Set the networking parameters.
root@esx# /ipmitool -U ADMIN -P ADMIN lan set 1 ipsrc static
root@esx# /ipmitool -U ADMIN -P ADMIN lan set 1 ipaddr mgmt_interface_ip_addr
root@esx# /ipmitool -U ADMIN -P ADMIN lan set 1 netmask mgmt_interface_subnet_addr
root@esx# /ipmitool -U ADMIN -P ADMIN lan set 1 defgw ipaddr mgmt_interface_gateway
root@kvm# ipmitool -U ADMIN -P ADMIN lan set 1 ipsrc static
root@kvm# ipmitool -U ADMIN -P ADMIN lan set 1 ipaddr mgmt_interface_ip_addr
root@kvm# ipmitool -U ADMIN -P ADMIN lan set 1 netmask mgmt_interface_subnet_addr
root@kvm# ipmitool -U ADMIN -P ADMIN lan set 1 defgw ipaddr mgmt_interface_gateway
3. Show current settings.
root@esx# /ipmitool -v -U ADMIN -P ADMIN lan print 1
root@kvm# ipmitool -v -U ADMIN -P ADMIN lan print 1
Confirm that the parameters are set to the correct values.
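The four lan set commands above can be wrapped in a small script so the same values are applied consistently on each host. This is a sketch, not part of the NOS tooling: the credentials and addresses are placeholders you must replace (on ESXi the binary is invoked as /ipmitool, on KVM hosts as ipmitool), and the leading echo makes it a dry run that only prints the commands.

```shell
#!/bin/sh
# Dry-run sketch of the IPMI LAN configuration shown above.
# All values are placeholders; remove the leading "echo" to apply them.
IPMITOOL="echo ipmitool"          # use "/ipmitool" on ESXi, "ipmitool" on KVM
IP=172.16.8.40                    # mgmt_interface_ip_addr
MASK=255.255.255.0                # mgmt_interface_subnet_addr
GW=172.16.8.1                     # mgmt_interface_gateway

$IPMITOOL -U ADMIN -P ADMIN lan set 1 ipsrc static
$IPMITOOL -U ADMIN -P ADMIN lan set 1 ipaddr "$IP"
$IPMITOOL -U ADMIN -P ADMIN lan set 1 netmask "$MASK"
$IPMITOOL -U ADMIN -P ADMIN lan set 1 defgw ipaddr "$GW"
```

After applying the settings for real, verify them with the lan print command shown in step 3.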
38. | Platform Administration Guide | NOS 3.5 | 38
To Configure Host Networking
You can access the ESXi console either through IPMI or by attaching a keyboard and monitor to the node.
1. On the ESXi host console, press F2 and then provide the ESXi host logon credentials.
2. Press the down arrow key until Configure Management Network is highlighted and then press Enter.
3. Select Network Adapters and press Enter.
4. Ensure that the connected network adapters are selected.
If they are not selected, press Space to select them and press Enter to return to the previous screen.
5. If a VLAN ID needs to be configured on the Management Network, select VLAN (optional) and press
Enter. In the dialog box, provide the VLAN ID and press Enter.
6. Select IP Configuration and press Enter.
7. If necessary, highlight the Set static IP address and network configuration option and press Space
to update the setting.
8. Provide values for the IP Address, Subnet Mask, and Default Gateway fields based on your
environment and then press Enter.
9. Select DNS Configuration and press Enter.
10. If necessary, highlight the Use the following DNS server addresses and hostname option and press
Space to update the setting.
11. Provide values for the Primary DNS Server and Alternate DNS Server fields based on your
environment and then press Enter.
12. Press Esc and then Y to apply all changes and restart the management network.
13. Select Test Management Network and press Enter.
14. Press Enter to start the network ping test.
15. Verify that the default gateway and DNS servers reported by the ping test match those that you
specified earlier in the procedure and then press Enter.
Ensure that the tested addresses pass the ping test. If they do not, confirm that the correct IP
addresses are configured.
Press Enter to close the test window.
16. Press Esc to log out.
To Configure Host Networking (KVM)
You can access the hypervisor host console either through IPMI or by attaching a keyboard and monitor to
the node.
1. Log on to the host as root.
2. Open the network interface configuration file.
root@kvm# vi /etc/sysconfig/network-scripts/ifcfg-br0
3. Press A to edit values in the file.
4. Update entries for netmask, gateway, and address.
The block should look like this:
ONBOOT="yes"
NM_CONTROLLED="no"
NETMASK="subnet_mask"
IPADDR="host_ip_addr"
DEVICE="eth0"
TYPE="ethernet"
GATEWAY="gateway_ip_addr"
BOOTPROTO="none"
• Replace host_ip_addr with the IP address for the hypervisor host.
• Replace subnet_mask with the subnet mask for host_ip_addr.
• Replace gateway_ip_addr with the gateway address for host_ip_addr.
5. Press Esc.
6. Type :wq and press Enter to save your changes.
7. Open the name services configuration file.
root@kvm# vi /etc/resolv.conf
8. Update the values for the nameserver parameter then save and close the file.
9. Restart networking.
root@kvm# /etc/init.d/network restart
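As an alternative to editing in vi, the interface configuration block from step 4 can be generated from variables with a here-document. This is a sketch only: the addresses are hypothetical, and it writes to a scratch file so you can review the result before copying it over /etc/sysconfig/network-scripts/ifcfg-br0.

```shell
#!/bin/sh
# Hypothetical addresses; substitute your own before using the output.
HOST_IP=172.16.8.50
SUBNET_MASK=255.255.255.0
GATEWAY_IP=172.16.8.1

# Write to a scratch file first; review it, then copy it into place.
cat > /tmp/ifcfg-br0.example <<EOF
ONBOOT="yes"
NM_CONTROLLED="no"
NETMASK="$SUBNET_MASK"
IPADDR="$HOST_IP"
DEVICE="eth0"
TYPE="ethernet"
GATEWAY="$GATEWAY_IP"
BOOTPROTO="none"
EOF
cat /tmp/ifcfg-br0.example
```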
To Update the ESXi Host Password in vCenter
1. Log on to vCenter with the vSphere client.
2. Right-click the host with the changed password and select Disconnect.
3. Right-click the host and select Connect.
4. Enter the new password and complete the Add Host Wizard.
If reconnecting the host fails, remove it from the cluster and add it again.
To Change the Controller VM IP Addresses
Before you begin.
• Confirm that the system you are using to configure the cluster meets the following requirements:
• IPv6 link-local enabled.
• Windows 7, Vista, or MacOS.
• (Windows only) Bonjour installed (included with iTunes or downloadable from http://
support.apple.com/kb/DL999).
• Determine the IPv6 service of any Controller VM in the cluster.
IPv6 service names are uniquely generated at the factory and have the following form (note the final
period):
NTNX-block_serial_number-node_location-CVM.local.
On the right side of the block toward the front is a label that has the block_serial_number (for example,
12AM3K520060). The node_location is a number 1-4 for NX-3000, a letter A-D for NX-1000/NX-2000/
NX-3050, or a letter A-B for NX-6000.
If IPv6 link-local is not enabled on the subnet, reconfigure the cluster manually.
If you need to confirm if IPv6 link-local is enabled on the network or if you do not have access to get the
node serial number, see the Nutanix support knowledge base for alternative methods.
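Putting the pieces above together, the service name for (for example) node position B in a block with the sample serial from the label would be built like this; your serial and position will differ.

```shell
#!/bin/sh
# Example values only: 12AM3K520060 is the sample serial from the label,
# and node position B is arbitrary.
block_serial=12AM3K520060
node_position=B
name="NTNX-${block_serial}-${node_position}-CVM.local."
echo "$name"
```

The resulting name, including the final period, is what you enter in place of cvm_ip_addr in the reconfiguration URL in the following procedure.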
Warning: If you are reassigning a Controller VM IP address to another Controller VM, you must
perform this complete procedure twice: once to assign intermediate IP addresses and again to
assign the desired IP addresses.
For example, if Controller VM A has IP address 172.16.0.11 and Controller VM B has IP address
172.16.0.10 and you want to swap them, you would need to reconfigure them with different IP
addresses (such as 172.16.0.100 and 172.16.0.101) before changing them to the IP addresses in
use initially.
The cluster must be stopped and in reconfiguration mode before changing the Controller VM IP addresses.
1. Open a web browser.
Nutanix recommends using Internet Explorer 9 for Windows and Safari for Mac OS.
Note: Internet Explorer requires protected mode to be disabled. Go to Tools > Internet
Options > Security, clear the Enable Protected Mode check box, and restart the browser.
2. Go to http://cvm_ip_addr:2100/ip_reconfig.html
Replace cvm_ip_addr with the IPv6 service name of any Controller VM in the cluster.
3. Update one or more cells on the IP Reconfiguration page.
Ensure that all components satisfy the cluster subnet requirements. See Subnet Requirements.
4. Click Reconfigure.
5. Wait until the Log Messages section of the page reports that the cluster has been successfully
reconfigured, as shown in the following example.
Configuring IP addresses on node S10264822116570/A...
Success!
Configuring IP addresses on node S10264822116570/C...
Success!
Configuring IP addresses on node S10264822116570/B...
Success!
Configuring IP addresses on node S10264822116570/D...
Success!
Configuring Zeus on node S10264822116570/A...
Configuring Zeus on node S10264822116570/C...
Configuring Zeus on node S10264822116570/B...
Configuring Zeus on node S10264822116570/D...
Reconfiguration successful!
The IP address reconfiguration will disconnect any SSH sessions to cluster components. The cluster is
taken out of reconfiguration mode.
To Change a Controller VM IP Address (manual)
1. Log on to the hypervisor host with SSH or the IPMI remote console.
2. Log on to the Controller VM with SSH.
root@host# ssh nutanix@192.168.5.254
Enter the Controller VM nutanix password.
3. Restart genesis.
nutanix@cvm$ genesis restart
If the restart is successful, output similar to the following is displayed:
Stopping Genesis pids [1933, 30217, 30218, 30219, 30241]
Genesis started on pids [30378, 30379, 30380, 30381, 30403]
4. Change the network interface configuration.
a. Open the network interface configuration file.
nutanix@cvm$ sudo vi /etc/sysconfig/network-scripts/ifcfg-eth0
Enter the nutanix password.
b. Press A to edit values in the file.
c. Update entries for netmask, gateway, and address.
The block should look like this:
ONBOOT="yes"
NM_CONTROLLED="no"
NETMASK="subnet_mask"
IPADDR="cvm_ip_addr"
DEVICE="eth0"
TYPE="ethernet"
GATEWAY="gateway_ip_addr"
BOOTPROTO="none"
• Replace cvm_ip_addr with the IP address for the Controller VM.
• Replace subnet_mask with the subnet mask for cvm_ip_addr.
• Replace gateway_ip_addr with the gateway address for cvm_ip_addr.
d. Press Esc.
e. Type :wq and press Enter to save your changes.
5. Update the Zeus configuration.
a. Open the host configuration file.
nutanix@cvm$ sudo vi /etc/hosts
b. Press A to edit values in the file.
c. Update hosts zk1, zk2, and zk3 to match changed Controller VM IP addresses.
d. Press Esc.
e. Type :wq and press Enter to save your changes.
6. Restart the virtual machine.
nutanix@cvm$ sudo reboot
Enter the nutanix password if prompted.
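The /etc/hosts edit in step 5 can also be done non-interactively with sed. This sketch works on a scratch copy with hypothetical addresses; on the Controller VM you would run the sed command with sudo against /etc/hosts itself, once per zk entry that changed.

```shell
#!/bin/sh
# Scratch copy with hypothetical zk entries; the real file is /etc/hosts.
f=/tmp/hosts.example
printf '%s\n' '172.16.8.191 zk1' '172.16.8.192 zk2' '172.16.8.193 zk3' > "$f"

# Point zk1 at the Controller VM's new (hypothetical) address.
sed -i 's/^[0-9.]* zk1$/172.16.8.201 zk1/' "$f"
grep ' zk1$' "$f"
```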
To Complete Cluster Reconfiguration
1. If you changed the IP addresses manually, take the cluster out of reconfiguration mode.
Perform these steps for every Controller VM in the cluster.
a. Log on to the Controller VM with SSH.
b. Take the Controller VM out of reconfiguration mode.
nutanix@cvm$ rm ~/.node_reconfigure
c. Restart genesis.
nutanix@cvm$ genesis restart
If the restart is successful, output similar to the following is displayed:
Stopping Genesis pids [1933, 30217, 30218, 30219, 30241]
Genesis started on pids [30378, 30379, 30380, 30381, 30403]
2. Log on to any Controller VM in the cluster with SSH.
3. Start the Nutanix cluster.
nutanix@cvm$ cluster start
If the cluster starts properly, output similar to the following is displayed for each node in the cluster:
CVM: 172.16.8.167 Up, ZeusLeader
Zeus UP [3148, 3161, 3162, 3163, 3170, 3180]
Scavenger UP [3333, 3345, 3346, 11997]
ConnectionSplicer UP [3379, 3392]
Hyperint UP [3394, 3407, 3408, 3429, 3440, 3447]
Medusa UP [3488, 3501, 3502, 3523, 3569]
DynamicRingChanger UP [4592, 4609, 4610, 4640]
Pithos UP [4613, 4625, 4626, 4678]
Stargate UP [4628, 4647, 4648, 4709]
Cerebro UP [4890, 4903, 4904, 4979]
Chronos UP [4906, 4918, 4919, 4968]
Curator UP [4922, 4934, 4935, 5064]
Prism UP [4939, 4951, 4952, 4978]
AlertManager UP [4954, 4966, 4967, 5022]
StatsAggregator UP [5017, 5039, 5040, 5091]
SysStatCollector UP [5046, 5061, 5062, 5098]
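When scripting around cluster start, a quick way to confirm the result is to scan the status output for any service still reported DOWN. This is a sketch against a saved capture; the two sample lines stand in for real cluster status output.

```shell
#!/bin/sh
# Sample capture standing in for real "cluster status" output.
f=/tmp/cluster_status.example
printf '%s\n' 'Zeus UP [3148, 3161]' 'Stargate UP [4628, 4647]' > "$f"

# Any line containing " DOWN " means a service has not come up yet.
if grep -q ' DOWN ' "$f"; then
    echo "some services are DOWN"
else
    echo "all services UP"
fi
```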
5
Field Installation
You can reimage a Nutanix node with the Phoenix ISO. This process installs the hypervisor and the
Nutanix Controller VM.
Note: Phoenix usage is restricted to Nutanix sales engineers, support engineers, and authorized
partners.
Phoenix can be used to cleanly install systems for POCs or to switch hypervisors.
NOS Installer Reference
Installation Options
Component Option
Hypervisor Clean Install Hypervisor: To install the selected hypervisor as part of
complete reimaging.
Clean Install SVM: To install the Controller VM as part of complete reimaging
or Controller VM boot drive replacement.
Controller VM
Repair SVM: To retain Controller VM configuration.
Note: Do not use this option except under guidance from Nutanix
support.
Supported Products and Hypervisors
Product ESX 5.0U2 & 5.1U1 KVM Hyper-V
NX-1000 •
NX-2000 •
NX-2050 •
NX-3000 • •
NX-3050 • • •
NX-6050/NX-6070 •
To Image a Node
Before you begin.
• Download the Phoenix ISO to a workstation with access to the IPMI interface on the node that you want
to reimage.
• Gather the following required pieces of information: Block ID, Cluster ID, and Node Serial Number.
These items are assigned by Nutanix, and you must use the correct values.
This procedure describes how to image a node from an ISO on a workstation.
Repeat this procedure once for every node that you want to reimage.
1. Sign in to the IPMI web console.
2. Attach the ISO to the node.
a. Go to Remote Control and click Launch Console.
Accept any security warnings to start the console.
b. In the console, click Media > Virtual Media Wizard.
c. Click Browse next to ISO Image and select the ISO file.
d. Click Connect CD/DVD.
e. Go to Remote Control > Power Control.
f. Select Reset Server and click Perform Action.
The host restarts from the ISO.
3. In the boot menu, select Installer and press Enter.
If previous values for these parameters are detected on the node, they will be displayed.
4. Enter the required information.
→ If all previous values are displayed and you want to use them, press Y.
→ If some or all of the previous values are not displayed, enter the required values.
a. Block ID: Enter the unique block identifier assigned by Nutanix.
b. Model: Enter the product number.
c. Node Serial: Enter the unique node identifier assigned by Nutanix.
d. Cluster ID: Enter the unique cluster identifier assigned by Nutanix.
e. Node Position: Enter 1, 2, 3, or 4 for NX-3000; A, B, C, or D for all other 4-node blocks.
Warning: If you are imaging all nodes in a block, ensure that the Block ID is the same for all
nodes and that the Node Serial Number and Node Position are different.
5. Select both Clean Install Hypervisor and Clean Install SVM then select Start.
Installation begins and takes about 20 minutes.
6. In the Virtual Media window, click Disconnect next to CD Media.
7. In the IPMI console, go to Remote Control > Power Control.
8. Select Reset Server and click Perform Action.
The node restarts with the new image. After the node starts, additional configuration tasks run and
then the host restarts again. During this time, the host name is installing-please-be-patient. Wait
approximately 20 minutes until this stage completes before accessing the node.
Warning: Do not restart the host until the configuration is complete.
What to do next. Add the node to a cluster.
6
vCenter Configuration
VMware vCenter enables the centralized management of multiple ESXi hosts. The Nutanix cluster in
vCenter must be configured according to Nutanix best practices.
While most customers prefer to use an existing vCenter, Nutanix provides a vCenter OVF, which is on
the Controller VMs in /home/nutanix/data/images/vcenter. You can deploy the OVF using the standard
procedures for vSphere.
To Use an Existing vCenter Server
1. Shut down the Nutanix vCenter VM.
2. Create a new cluster entity within the existing vCenter inventory and configure its settings based on
Nutanix best practices by following To Create a Nutanix Cluster in vCenter on page 48.
3. Add the Nutanix hosts to this new cluster by following To Add a Nutanix Node to vCenter on
page 51.
To Create a Nutanix Cluster in vCenter
1. Log on to vCenter with the vSphere client.
2. If you want the Nutanix cluster to be in its own datacenter or if there is no datacenter, click File > New >
Datacenter and type a meaningful name for the datacenter, such as NTNX-DC. Otherwise, proceed to the
next step.
You can also create the Nutanix cluster within an existing datacenter.
3. Right-click the datacenter node and select New Cluster.
4. Type a meaningful name for the cluster in the Name field, such as NTNX-Cluster.
5. Select the Turn on vSphere HA check box and click Next.
6. Select Admission Control > Enable.
7. Select Admission Control Policy > Percentage of cluster resources reserved as failover spare
capacity, enter the percentage appropriate for the number of Nutanix nodes in the cluster, and then
click Next.
Hosts (N+1)  Percentage   Hosts (N+2)  Percentage   Hosts (N+3)  Percentage   Hosts (N+4)  Percentage
1            N/A          9            23%          17           18%          25           16%
2            N/A          10           20%          18           17%          26           15%
3            33%          11           18%          19           16%          27           15%
4            25%          12           17%          20           15%          28           14%
5            20%          13           15%          21           14%          29           14%
6            18%          14           14%          22           14%          30           13%
7            15%          15           13%          23           13%          31           13%
8            13%          16           13%          24           13%          32           13%
8. Click Next on the following three pages to accept the default values.
• Virtual Machine Options
• VM monitoring
• VMware EVC
9. Verify that Store the swapfile in the same directory as the virtual machine (recommended) is
selected and click Next.
10. Review the settings and then click Finish.
11. Add all Nutanix nodes to the vCenter cluster inventory.
See To Add a Nutanix Node to vCenter on page 51.
12. Right-click the Nutanix cluster node and select Edit Settings.
13. If vSphere HA and DRS are not enabled, select them on the Cluster Features page. Otherwise,
proceed to the next step.
Note: vSphere HA and DRS must be configured even if the customer does not plan to use
the features. The settings will be preserved within the vSphere cluster configuration, so if the
customer later decides to enable the feature, it will be pre-configured based on Nutanix best
practices.
14. Configure vSphere HA.
a. Select vSphere HA > Virtual Machine Options.
b. Change the VM restart priority of all Controller VMs to Disabled.
Tip: Controller VMs include the phrase CVM in their names. It may be necessary to expand
the Virtual Machine column to view the entire VM name.
c. Change the Host Isolation Response setting of all Controller VMs to Leave Powered On.
d. Select vSphere HA > VM Monitoring.
e. Change the VM Monitoring setting for all Controller VMs to Disabled.
f. Select vSphere HA > Datastore Heartbeating.
g. Click Select only from my preferred datastores and select the Nutanix datastore (NTNX-NFS).
h. If the cluster does not use vSphere HA, disable it on the Cluster Features page. Otherwise,
proceed to the next step.
15. Configure vSphere DRS.
a. Select vSphere DRS > Virtual Machine Options.
b. Change the Automation Level setting of all Controller VMs to Disabled.
c. Select vSphere DRS > Power Management.
d. Confirm that Off is selected as the default power management for the cluster.
e. If the cluster does not use vSphere DRS, disable it on the Cluster Features page. Otherwise,
proceed to the next step.
16. Click OK to close the cluster settings window.
To Add a Nutanix Node to vCenter
The cluster must be configured according to Nutanix specifications given in vSphere Cluster Settings on
page 53.
Tip: Refer to Default Cluster Credentials on page 2 for the default credentials of all cluster
components.
1. Log on to vCenter with the vSphere client.
2. Right-click the cluster and select Add Host.
3. Type the IP address of the ESXi host in the Host field.
4. Enter the ESXi host logon credentials in the Username and Password fields.
5. Click Next.
If a security or duplicate management alert appears, click Yes.
6. Review the Host Summary page and click Next.
7. Select a license to assign to the ESXi host and click Next.
8. Ensure that the Enable Lockdown Mode check box is left unselected and click Next.
Lockdown mode is not supported.
9. Click Finish.
10. Select the ESXi host and click the Configuration tab.
11. Configure DNS servers.
a. Click DNS and Routing > Properties.
b. Select Use the following DNS server address.
c. Type DNS server addresses in the Preferred DNS Server and Alternate DNS Server fields and
click OK.
12. Configure NTP servers.
a. Click Time Configuration > Properties > Options > NTP Settings > Add.
b. Type the NTP server address.
Add multiple NTP servers if required.
c. Click OK in the NTP Daemon (ntpd) Options and Time Configuration windows.
d. Click Time Configuration > Properties > Options > General.
e. Select Start automatically under Startup Policy.
f. Click Start.
g. Click OK in the NTP Daemon (ntpd) Options and Time Configuration windows.
13. Click Storage and confirm that NFS datastores are mounted.
14. Set the Controller VM to start automatically when the ESXi host is powered on.
a. Click the Configuration tab.
b. Click Virtual Machine Startup/Shutdown in the Software frame.
c. Select the Controller VM and click Properties.
d. Ensure that the Allow virtual machines to start and stop automatically with the system check
box is selected.
e. If the Controller VM is listed in Manual Startup, click Move Up to move the Controller VM into the
Automatic Startup section.
f. Click OK.
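The autostart configuration in step 14 can also be done with vim-cmd on the ESXi shell. This is a sketch: the VM ID (1) is a placeholder found with "vim-cmd vmsvc/getallvms", and the argument order shown follows vim-cmd's built-in help (vmid, start action, start delay, start order, stop action, stop delay, wait-for-heartbeat) — verify it against your host's vim-cmd before use. The commands are built as strings so the sketch is inspectable without a host.

```shell
# Sketch: enable autostart and add the Controller VM to the autostart list.
VMID=1   # placeholder: look up the Controller VM's ID with vim-cmd vmsvc/getallvms

ENABLE_CMD="vim-cmd hostsvc/autostartmanager/enable_autostart true"
ENTRY_CMD="vim-cmd hostsvc/autostartmanager/update_autostartentry ${VMID} powerOn 120 1 systemDefault 120 systemDefault"

echo "$ENABLE_CMD"
echo "$ENTRY_CMD"
```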
15. (NX-2000 only) Click Host Cache Configuration and confirm that the host cache is stored on the local
datastore.
If it is not correct, click Properties to update the location.
vSphere Cluster Settings
Certain vSphere cluster settings are required for Nutanix clusters.
vSphere HA and DRS must be configured even if the customer does not plan to use the feature. The
settings will be preserved within the vSphere cluster configuration, so if the customer later decides to
enable the feature, it will be pre-configured based on Nutanix best practices.
vSphere HA Settings
• Enable host monitoring.
• Enable admission control and use the percentage-based policy with a value based on the number of nodes in the cluster.
• Set the VM Restart Priority of all Controller VMs to Disabled.
• Set the Host Isolation Response of all Controller VMs to Leave Powered On.
• Disable VM Monitoring for all Controller VMs.
• Enable Datastore Heartbeating by clicking Select only from my preferred datastores and choosing the Nutanix NFS datastore.
vSphere DRS Settings
• Disable automation on all Controller VMs.
• Leave power management disabled (set to Off).
Other Cluster Settings
• Store VM swapfiles in the same directory as the virtual machine.
• (NX-2000 only) Store host cache on the local datastore.
Failover Reservation Percentages
Hosts   N+1 Percentage   Hosts   N+2 Percentage   Hosts   N+3 Percentage   Hosts   N+4 Percentage
1       N/A              9       23%              17      18%              25      16%
2       N/A              10      20%              18      17%              26      15%
3       33%              11      18%              19      16%              27      15%
4       25%              12      17%              20      15%              28      14%
5       20%              13      15%              21      14%              29      14%
6       18%              14      14%              22      14%              30      13%
7       15%              15      13%              23      13%              31      13%
8       13%              16      13%              24      13%              32      13%
7
VM Management
Migrating a VM to Another Cluster
You can live migrate a VM to an ESXi host in a Nutanix cluster. This is usually done in the following cases:
• Migrating VMs from an existing storage platform to Nutanix.
• Keeping VMs running during a disruptive upgrade or other downtime of a Nutanix cluster.
When migrating VMs between vSphere clusters, the source host and NFS datastore are the ones presently
running the VM, and the target host and NFS datastore are the ones where the VM will run after migration.
The target ESXi host and datastore must be part of a Nutanix cluster.
To accomplish this migration, you have to mount the NFS datastores from the target on the source. After
the migration is complete, you should unmount the datastores and block access.
To Migrate a VM to Another Cluster
Before you begin. Both the source host and the target host must be in the same vSphere cluster. Allow
NFS access to NDFS by adding the source host and target host to a whitelist, as described in To Configure
a Filesystem Whitelist.
To migrate a VM back to the source from the target, perform this same procedure with the target as the
new source and the source as the new target.
1. Sign in to the Nutanix web console.
2. Log on to vCenter with the vSphere client.
3. Mount the target NFS datastore on the source host and on the target host.
You can mount NFS datastores in the vSphere client by clicking Add Storage on the Configuration >
Storage screen for a host.
Note: Due to a limitation with VMware vSphere, a temporary name and the IP address of a
Controller VM must be used to mount the target NFS datastore on both the source host and the
target host for this procedure.

Parameter        Value
Server           IP address of the Controller VM on the target ESXi host
Folder           Name of the container that has the target NFS datastore (typically /nfs-ctr)
Datastore Name   A temporary name for the NFS datastore (e.g., Temp-NTNX-NFS)
a. Select the source host and go to Configuration > Storage.
b. Click Add Storage and mount the target NFS datastore.
c. Select the target host and go to Configuration > Storage.
d. Click Add Storage and mount the target NFS datastore.
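If you prefer the ESXi shell to the Add Storage wizard, steps a through d can also be done with esxcli. This is a sketch assuming ESXi 5.x; the Controller VM IP is a placeholder, and the container and datastore names are the example values from the table above. The command is built as a string so the sketch can be inspected without an ESXi host; run the same command on both the source host and the target host.

```shell
# Sketch: mount the temporary NFS datastore from the ESXi shell.
# Placeholder values — substitute your own environment's addresses and names.
TARGET_CVM_IP="10.1.56.197"   # Controller VM IP on the target ESXi host
CONTAINER="/nfs-ctr"          # container holding the target NFS datastore
DS_NAME="Temp-NTNX-NFS"       # temporary datastore name

# Built as a string here; on the host, run the command itself.
MOUNT_CMD="esxcli storage nfs add --host=${TARGET_CVM_IP} --share=${CONTAINER} --volume-name=${DS_NAME}"
echo "$MOUNT_CMD"
```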
4. Change the VM datastore and host.
Do this for each VM that you want to live migrate to the target.
a. Right-click the VM and select Migrate.
b. Select Change datastore and click Next.
c. Select the temporary datastore and click Next then Finish.
The VM storage is moved to the temporary datastore on the target host.
d. Right-click the VM and select Migrate.
e. Select Change host and click Next.
f. Select the target host and click Next.
g. Ensure that High priority is selected and click Next then Finish.
The VM keeps running as it moves to the target host.
h. Right-click the VM and select Migrate.
i. Select Change datastore and click Next.
j. Select the target datastore and click Next then Finish.
The VM storage is moved to the target datastore on the target host.
5. Unmount the datastores in the vSphere client.
Warning: Do not unmount the NFS datastore with the IP address 192.168.5.2.
a. Select the source host and go to Configuration > Storage.
b. Right-click the temporary datastore and select Unmount.
c. Select the target host and go to Configuration > Storage.
d. Right-click the temporary datastore and select Unmount.
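The unmount can likewise be done from the ESXi shell with esxcli, again assuming ESXi 5.x and the example datastore name from earlier in this procedure. Run it on both the source host and the target host.

```shell
# Sketch: unmount the temporary NFS datastore from the ESXi shell.
# Temp-NTNX-NFS is the example name used above; substitute your own.
# Do NOT remove the datastore mounted from 192.168.5.2 (see the warning above).
UNMOUNT_CMD="esxcli storage nfs remove --volume-name=Temp-NTNX-NFS"
echo "$UNMOUNT_CMD"
```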
What to do next. NDFS is not intended to be used as a general use NFS server. Once the migration is
complete, disable NFS access by removing the source host and target host from the whitelist, as described
in To Configure a Filesystem Whitelist.
vStorage APIs for Array Integration
To improve the vSphere cloning process, Nutanix provides a vStorage APIs for Array Integration (VAAI)
plugin. This plugin is installed by default during the Nutanix factory process.
Without the Nutanix VAAI plugin, the process of creating a full clone takes a significant amount of time
because all the data that comprises a VM is duplicated. This duplication also results in an increase in
storage consumption.
The Nutanix VAAI plugin efficiently makes full clones without reserving space for the clone. Read requests
for blocks that are shared between parent and clone are sent to the original vDisk that was created for the
parent VM. As the clone VM writes new blocks, the Nutanix file system allocates storage for those blocks.
This data management occurs completely at the storage layer, so the ESXi host sees a single file with the
full capacity that was allocated when the clone was created.
To Clone a VM
1. Log on to vCenter with the vSphere client.
2. Right-click the VM and select Clone.
3. Follow the wizard to enter a name for the clone, choose a cluster, and choose a host.
4. Select the datastore that contains the source VM and click Next.
Note: If you choose a datastore other than the one that contains the source VM, the clone
operation will use the VMware implementation and not the Nutanix VAAI plugin.
5. If desired, set the guest customization parameters. Otherwise, proceed to the next step.
6. Click Finish.
To Uninstall the VAAI Plugin
Because the VAAI plugin is in the process of certification, the security level is set to allow community-
supported plugins. Organizations with strict security policies may need to uninstall the plugin if it was
installed during setup.
Perform this procedure on each ESXi host in the Nutanix cluster.
1. Log on to the ESXi host with SSH.
2. Uninstall the plugin.
root@esx# esxcli software vib remove --vibname nfs-vaai-plugin
This command should return the following message:
Message: The update completed successfully, but the system needs to be rebooted for the
changes to be effective.
3. Disallow community-supported plugins.
root@esx# esxcli software acceptance set --level=PartnerSupported
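Before rebooting, you can verify the result of steps 2 and 3. This sketch shows the verification commands as strings (they only run on an ESXi host); the first should list no VAAI VIB after removal, and the second should report PartnerSupported.

```shell
# Sketch: verify the VAAI plugin removal and the new acceptance level.
# These only execute on an ESXi host, so they are built as strings here.
VERIFY_VIB="esxcli software vib list | grep -i vaai"
VERIFY_LEVEL="esxcli software acceptance get"
echo "$VERIFY_VIB"
echo "$VERIFY_LEVEL"
```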
4. Restart the node by following To Restart a Node on page 64.
Migrating vDisks to NFS
The Nutanix Virtual Computing Platform supports three types of storage for vDisks: VMFS, RDM, and NFS.
Nutanix recommends NFS for most situations. You can migrate VMFS and RDM vDisks to NFS.
Before migration, you must have an NFS datastore. You can determine if a datastore is NFS in
the vSphere client. NFS datastores have Server and Folder properties (for example, Server:
192.168.5.2, Folder: /ctr-ha). Datastore properties are shown in Datastores and Datastore Clusters >
Configuration > Datastore Details in the vSphere client.
To create a datastore, use the Nutanix web console or the datastore create nCLI command.
The type of vDisk determines the mechanism that you use to migrate it to NFS.
• To migrate VMFS vDisks to NFS, use storage vMotion by following To Migrate VMFS vDisks to NFS on
page 59.
This operation takes significant time for each vDisk because the data is physically copied.
• To migrate RDM vDisks to NFS, use the Nutanix migrate2nfs.py utility by following To Migrate RDM
vDisks to NFS on page 60.
This operation takes only a small amount of time for each vDisk because data is not physically copied.
To Migrate VMFS vDisks to NFS
Before you begin. Log on to vCenter with the vSphere client.
Perform this procedure for each VM that is supported by a VMFS vDisk. The migration takes a significant
amount of time.
1. Right-click the VM and select Migrate.
2. Click Change datastore and click Next.
3. Select the NFS datastore and click Next.
4. Click Finish.
The vDisk begins migration. When the migration is complete, the vSphere client Tasks & Events tab
shows that the Relocate virtual machine task is completed.
To Migrate RDM vDisks to NFS
The migrate2nfs.py utility is available on Controller VMs to rapidly migrate RDM vDisks to an NFS
datastore. This utility has the following restrictions:
• Guest VMs can be migrated only to an NFS datastore that is on the same container where the RDM
vDisk resides. For example, if the vDisk is in the ctr-ha container, the NFS datastore must be on the
ctr-ha container.
• ESXi has a maximum NFS vDisk size of 2 TB - 512 B. To migrate vDisks to NFS, the
partitions must be smaller than this maximum. If any vDisk exceeds this maximum, you
must reduce its size in the guest VM before using this mechanism to migrate it. How to reduce the
size differs by operating system.
The following parameters are optional or are not always required.
--truncate_large_rdm_vmdks
Specify this switch to migrate vDisks larger than the maximum after reducing the size of the partition
in the guest operating system.
--filter=pattern
Specify a pattern with the --batch switch to restrict the vDisks based on the name, for example
Win7*. If you do not specify the --filter parameter in batch mode, all RDM vDisks are included.
--server=esxi_ip_addr and --svm_ip=cvm_ip_addr
Specify the ESXi host and Controller VM IP addresses if you are running the migrate2nfs.py script
on a Controller VM different from the node where the vDisk to migrate resides.
1. Log on to any Controller VM in the cluster with SSH.
2. Specify the logon credentials as environment variables.
nutanix@cvm$ export VI_USERNAME=root
nutanix@cvm$ export VI_PASSWORD=esxi_root_password
3. If you want to migrate one vDisk at a time, specify the VMX file.
nutanix@cvm$ migrate2nfs.py /vmfs/volumes/datastore_name/vm_dir/vm_name.vmx nfs_datastore
• Replace datastore_name with the name of the datastore, for example NTNX_datastore.
• Replace vm_dir/vm_name with the directory and the name of the VMX file.
4. If you want to migrate multiple vDisks at the same time, run migrate2nfs.py in batch mode.
Perform these steps for each ESXi host in the cluster.
a. List the VMs that will be migrated.
nutanix@cvm$ migrate2nfs.py --list_only --batch --server=esxi_ip_addr --svm_ip=cvm_ip_addr source_datastore nfs_datastore
• Replace source_datastore with the name of the datastore that contains the VM .vmx file, for
example NTNX_datastore.
• Replace nfs_datastore with the name of the NFS datastore, for example NTNX-NFS.
b. Migrate the VMs.
nutanix@cvm$ migrate2nfs.py --batch --server=esxi_ip_addr --svm_ip=cvm_ip_addr source_datastore nfs_datastore
Each VM takes approximately five minutes to migrate.
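Because steps a and b repeat for every ESXi host in the cluster, they can be wrapped in a small loop. This is a sketch only: the host/Controller VM IP pairs, the password, and the datastore names are placeholders, and a DRY_RUN flag keeps the sketch from invoking anything until you set it to 0.

```shell
# Sketch: batch migration wrapper for migrate2nfs.py.
# All values below are placeholders for your own environment.
export VI_USERNAME=root
export VI_PASSWORD="esxi_root_password"   # placeholder

SOURCE_DS="NTNX_datastore"   # datastore containing the .vmx files
NFS_DS="NTNX-NFS"            # target NFS datastore
DRY_RUN=1                    # set to 0 to actually run the migrations

# Echo the command instead of running it while DRY_RUN=1.
run() {
  if [ "$DRY_RUN" -eq 1 ]; then
    echo "$*"
  else
    "$@"
  fi
}

# One "esxi_ip:cvm_ip" pair per node in the cluster (placeholder IPs).
for pair in "10.1.56.101:10.1.56.197"; do
  esxi_ip="${pair%%:*}"
  cvm_ip="${pair##*:}"
  run migrate2nfs.py --list_only --batch --server="$esxi_ip" --svm_ip="$cvm_ip" "$SOURCE_DS" "$NFS_DS"
  run migrate2nfs.py --batch --server="$esxi_ip" --svm_ip="$cvm_ip" "$SOURCE_DS" "$NFS_DS"
done
```

Review the --list_only output for each host before re-running with DRY_RUN=0 and the --list_only line removed.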
What to do next. Migrating the vDisks changes the device signature, which causes certain operating
systems to mark the disk as offline. How to mark the disk online is different for every operating system.
8
Node Management
A Nutanix cluster is composed of individual nodes, or host servers that run a hypervisor. Each node hosts
a Nutanix Controller VM, which coordinates management tasks with the Controller VMs on other nodes.
To Shut Down a Node in a Cluster
Before you begin. Shut down guest VMs, including vCenter and the vMA, that are running on the node, or
move them to other nodes in the cluster.
Caution: Shut down only one node at a time in a cluster. If more than one node would be
shut down, shut down the entire cluster instead.
1. Log on to vCenter (or to the ESXi host if vCenter is not available) with the vSphere client.
2. Right-click the Controller VM and select Power > Shut Down Guest.
Note: Do not Power Off or Reset the Controller VM. Shutting down the Controller VM as a
guest ensures that the cluster is aware that the Controller VM is unavailable.
3. Right-click the host and select Enter Maintenance Mode.
4. In the Confirm Maintenance Mode dialog box, uncheck Move powered off and suspended virtual
machines to other hosts in the cluster and click Yes.
The host is placed in maintenance mode, which prevents VMs from running on the host.
5. Right-click the node and select Shut Down.
Wait until vCenter shows that the host is not responding, which may take several minutes.
If you are logged on to the ESXi host rather than to vCenter, the vSphere client will disconnect when the
host shuts down.
To Start a Node in a Cluster
1. If the node is turned off, turn it on by pressing the power button on the front. Otherwise, proceed to the
next step.
2. Log on to vCenter (or to the node if vCenter is not running) with the vSphere client.
3. Right-click the ESXi host and select Exit Maintenance Mode.
4. Right-click the Controller VM and select Power > Power on.
Wait approximately 5 minutes for all services to start on the Controller VM.
5. Confirm that cluster services are running on the Controller VM.
nutanix@cvm$ ncli cluster status | grep -A 15 cvm_ip_addr
Output similar to the following is displayed.
Name : 10.1.56.197
Status : Up
Zeus : up
Scavenger : up
ConnectionSplicer : up
Hyperint : up
Medusa : up
Pithos : up
Stargate : up
Cerebro : up
Chronos : up
Curator : up
Prism : up
AlertManager : up
StatsAggregator : up
SysStatCollector : up
Every service listed should be up.
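The "every service up" check in step 5 can be scripted rather than read by eye. The sketch below parses a saved sample of the output shown above; on a live Controller VM you would pipe the real ncli command into the same grep instead of using the sample variable.

```shell
# Sketch: confirm that every service line in the status output reads "up".
# STATUS_OUTPUT stands in for: ncli cluster status | grep -A 15 cvm_ip_addr
STATUS_OUTPUT='Zeus                       : up
Scavenger                  : up
Stargate                   : up'

# Count service lines that do NOT end in ": up".
DOWN=$(printf '%s\n' "$STATUS_OUTPUT" | grep -vc ': up$')

if [ "$DOWN" -eq 0 ]; then
  echo "all services up"
else
  echo "some services down"
fi
# prints "all services up" for the sample above
```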
6. Right-click the ESXi host in the vSphere client and select Rescan for Datastores. Confirm that all
Nutanix datastores are available.
7. Verify that all services are up on all Controller VMs.
nutanix@cvm$ cluster status
If the cluster is running properly, output similar to the following is displayed for each node in the cluster:
CVM: 172.16.8.167 Up, ZeusLeader
Zeus UP [3148, 3161, 3162, 3163, 3170, 3180]
Scavenger UP [3333, 3345, 3346, 11997]
ConnectionSplicer UP [3379, 3392]
Hyperint UP [3394, 3407, 3408, 3429, 3440, 3447]
Medusa UP [3488, 3501, 3502, 3523, 3569]
DynamicRingChanger UP [4592, 4609, 4610, 4640]
Pithos UP [4613, 4625, 4626, 4678]
Stargate UP [4628, 4647, 4648, 4709]
Cerebro UP [4890, 4903, 4904, 4979]
Chronos UP [4906, 4918, 4919, 4968]
Curator UP [4922, 4934, 4935, 5064]
Prism UP [4939, 4951, 4952, 4978]
AlertManager UP [4954, 4966, 4967, 5022]
StatsAggregator UP [5017, 5039, 5040, 5091]
SysStatCollector UP [5046, 5061, 5062, 5098]
To Restart a Node
Before you begin. Shut down guest VMs, including vCenter and the vMA, that are running on the node, or
move them to other nodes in the cluster.
Use the following procedure when you need to restart a node in the cluster.
1. Log on to vCenter (or to the ESXi host if the node is running the vCenter VM) with the vSphere client.
2. Right-click the Controller VM and select Power > Shut Down Guest.
Note: Do not Power Off or Reset the Controller VM. Shutting down the Controller VM as a
guest ensures that the cluster is aware that the Controller VM is unavailable.
3. Right-click the host and select Enter Maintenance Mode.
In the Confirm Maintenance Mode dialog box, uncheck Move powered off and suspended virtual
machines to other hosts in the cluster and click Yes.
The host is placed in maintenance mode, which prevents VMs from running on the host.
4. Right-click the node and select Reboot.
Wait until vCenter shows that the host is not responding and then is responding again, which may take
several minutes.
If you are logged on to the ESXi host rather than to vCenter, the vSphere client will disconnect when the
host shuts down.
5. Right-click the ESXi host and select Exit Maintenance Mode.
6. Right-click the Controller VM and select Power > Power on.
Wait approximately 5 minutes for all services to start on the Controller VM.
7. Log on to the Controller VM with SSH.
8. Confirm that cluster services are running on the Controller VM.
nutanix@cvm$ ncli cluster status | grep -A 15 cvm_ip_addr
Output similar to the following is displayed.
Name : 10.1.56.197
Status : Up
Zeus : up
Scavenger : up
ConnectionSplicer : up
Hyperint : up
Medusa : up
Pithos : up
Stargate : up
Cerebro : up
Chronos : up
Curator : up
Prism : up
AlertManager : up
StatsAggregator : up
SysStatCollector : up
Every service listed should be up.
9. Right-click the ESXi host in the vSphere client and select Rescan for Datastores. Confirm that all
Nutanix datastores are available.
To Patch ESXi Hosts in a Cluster
Use the following procedure when you need to patch the ESXi hosts in a cluster without service
interruption.
Perform the following steps for each ESXi host in the cluster.
1. Shut down the node by following To Shut Down a Node in a Cluster on page 62, including moving
guest VMs to a running node in the cluster.
2. Patch the ESXi host using your normal procedures with VMware Update Manager or otherwise.
3. Start the node by following To Start a Node in a Cluster on page 63.
4. Log on to the Controller VM with SSH.
5. Confirm that cluster services are running on the Controller VM.
nutanix@cvm$ ncli cluster status | grep -A 15 cvm_ip_addr
Output similar to the following is displayed.
Name : 10.1.56.197
Status : Up
Zeus : up
Scavenger : up
ConnectionSplicer : up
Hyperint : up
Medusa : up
Pithos : up
Stargate : up
Cerebro : up
Chronos : up
Curator : up
Prism : up
AlertManager : up
StatsAggregator : up
SysStatCollector : up
Every service listed should be up.
Removing a Node
Before removing a node from a Nutanix cluster, ensure the following statements are true:
• The cluster has at least four nodes at the beginning of the process.
• The cluster will have at least three functional nodes at the conclusion of the process.
When you start planned removal of a node, the node is marked for removal and data is migrated to other
nodes in the cluster. After the node is prepared for removal, you can physically remove it from the block.
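The two node-count requirements above can be checked before starting a removal. On a Controller VM, the svmips command prints the cluster's Controller VM IPs, one per node; in this sketch a placeholder list stands in for its output so the check can run anywhere.

```shell
# Sketch: verify the cluster is large enough to remove one node.
# Placeholder list standing in for: CVM_IPS="$(svmips)"
CVM_IPS="10.1.56.101 10.1.56.102 10.1.56.103 10.1.56.104"

NODE_COUNT=$(echo "$CVM_IPS" | wc -w | tr -d ' ')

# Need at least 4 nodes now, so at least 3 remain after removal.
if [ "$NODE_COUNT" -ge 4 ]; then
  echo "OK to remove one node (${NODE_COUNT} nodes now, $((NODE_COUNT - 1)) after)"
else
  echo "Cluster too small: need at least 4 nodes to remove one"
fi
```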
To Remove a Node from a Cluster
Before you begin.
• Ensure that all nodes that will be part of the cluster after node removal are running.
• Complete any add node operations on the cluster before removing nodes.