This document describes the steps to create a Vertica cluster on AWS. Running a Vertica cluster on AWS requires creating instances from Amazon Machine Images (AMIs). The instructions in this document apply to AMIs built with Vertica Version 7.2.x.
The Vertica Community Edition is installed on the AMI. Community Edition is limited to
three nodes and up to 1 TB of data. Each AMI includes a Community Edition license.
Most of the remainder of this document describes how to prepare your AWS environment, launch AMI instances, and combine those instances into a cluster. To set up your Vertica cluster on AWS, follow the detailed directions below, or use the summarized set of tasks in Quick Start to Setting Up Vertica AWS.
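The launch step can be sketched with boto3's EC2 `run_instances` call. The helper below only builds the keyword arguments; the AMI ID, key pair, instance type, and security group are placeholders, not values from this document:

```python
# Sketch: build the kwargs for boto3's EC2 run_instances call used to
# launch the nodes of a Vertica cluster. AMI ID, key pair, instance type,
# and security group below are placeholders.

def build_run_instances_params(ami_id, node_count, instance_type="c3.4xlarge",
                               key_name="my-keypair",
                               security_group="sg-xxxxxxxx",
                               placement_group=None):
    """Return the kwargs you would pass to ec2_client.run_instances(...)."""
    params = {
        "ImageId": ami_id,
        "MinCount": node_count,   # launch all nodes or none
        "MaxCount": node_count,
        "InstanceType": instance_type,
        "KeyName": key_name,
        "SecurityGroupIds": [security_group],
    }
    if placement_group:
        # A placement group keeps cluster nodes on a low-latency segment.
        params["Placement"] = {"GroupName": placement_group}
    return params

# Community Edition is limited to three nodes.
params = build_run_instances_params("ami-xxxxxxxx", node_count=3,
                                    placement_group="vertica-cluster")
# ec2 = boto3.client("ec2"); ec2.run_instances(**params)  # actual launch
```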
OCI Storage Services provides different types of storage for various use cases:
- Local NVMe SSD storage provides high-performance temporary storage that is not persistent.
- Block Volume storage provides durable block-level storage for applications requiring SAN-like features through iSCSI. Volumes can be resized, backed up, and cloned.
- File Storage Service provides durable shared file systems accessible over NFSv3, suitable for applications like Oracle E-Business Suite (EBS) and HPC workloads.
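As a rough illustration of the choice above, the hypothetical helper below maps two requirements (persistence and shared access) to the storage service described; the function name and logic are ours, not part of any OCI SDK:

```python
# Illustrative only: map basic requirements to the OCI storage service
# described above. A hypothetical helper, not part of the OCI SDK.

def choose_oci_storage(persistent: bool, shared: bool) -> str:
    if not persistent:
        return "Local NVMe SSD"        # fast scratch space, not durable
    if shared:
        return "File Storage Service"  # durable shared NFSv3 file systems
    return "Block Volume"              # durable iSCSI block storage

scratch = choose_oci_storage(persistent=False, shared=False)
shared_fs = choose_oci_storage(persistent=True, shared=True)
san_like = choose_oci_storage(persistent=True, shared=False)
```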
Citrix PVS: Advanced Memory and Storage Considerations for Provisioning Services - Nuno Alves
This document discusses memory considerations for Citrix Provisioning Services deployments. It explains how Windows handles memory, specifically the system cache which caches file data in RAM for improved performance. The size of the system cache affects storage performance. The document recommends calculating how much data is typically read from shared vDisks by target devices in order to determine the appropriate amount of RAM needed in Provisioning Services servers and target devices for caching this data in memory rather than reading it from disk. This improves performance by reducing disk read I/O operations.
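The sizing exercise the document recommends boils down to simple arithmetic; the sketch below uses invented per-vDisk working-set figures purely to show the calculation:

```python
# Illustrative RAM-sizing arithmetic for a PVS server's system cache.
# The working-set sizes are invented example figures; in practice you
# would measure how much of each shared vDisk targets actually read.

def estimate_pvs_server_ram_gb(base_os_gb, vdisk_working_sets_gb):
    """Base OS/services RAM plus enough system cache to hold each vDisk's
    commonly-read blocks in memory instead of re-reading them from disk."""
    return base_os_gb + sum(vdisk_working_sets_gb)

# e.g. 4 GB for Windows itself, plus two vDisks whose targets typically
# read about 2.5 GB and 3.5 GB of unique data respectively:
ram_gb = estimate_pvs_server_ram_gb(4, [2.5, 3.5])
```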
vSphere provides tools like vCenter, ESXTOP, and PowerCLI to monitor the performance of CPU, memory, network, and storage. Key metrics include CPU and memory usage, network packet drops, storage latency, and swap rates. Issues like oversubscription, capacity limitations, and configuration errors can be identified by watching for saturated resources, dropped packets, and high latency or queueing. External monitoring of physical infrastructure can also provide useful visibility.
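A monitoring script built on those metrics is essentially a set of threshold checks. The sketch below uses rule-of-thumb thresholds; both the metric names and the limit values are our assumptions, not official VMware limits:

```python
# Sketch of threshold-based health checks over vSphere-style metrics.
# Metric names and threshold values are illustrative rules of thumb,
# not official VMware limits.

THRESHOLDS = {
    "cpu_usage_pct": 90,       # sustained CPU saturation
    "mem_usage_pct": 90,       # memory pressure / ballooning likely
    "net_drops_per_s": 1,      # any steady packet drops are suspect
    "storage_latency_ms": 25,  # high device latency or queueing
    "swap_rate_mbps": 1,       # host swapping indicates oversubscription
}

def flag_issues(metrics, thresholds=THRESHOLDS):
    """Return the (sorted) names of metrics exceeding their threshold."""
    return sorted(name for name, limit in thresholds.items()
                  if metrics.get(name, 0) > limit)

issues = flag_issues({"cpu_usage_pct": 95, "storage_latency_ms": 40,
                      "mem_usage_pct": 60})
```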
XPDS16: High-Performance Virtualization for HPC Cloud on Xen - Jun Nakajima &... - The Linux Foundation
We have been working to get Xen up and running on self-boot Intel® Xeon Phi processors to build HPC clouds. We see several challenges because of the unique (but not unusual for HPC) hardware technologies and performance requirements. For example, such hardware technologies include 1) >256 CPUs, 2) MCDRAM (high-bandwidth memory), 3) integrated fabric (i.e. Intel® Omni-Path). Unlike the “coprocessor” model, supporting self-boot with >256 CPUs has various implications for Xen, including scheduling and scalability. We need to allow user applications to use MCDRAM directly to perform optimally. Also, we need to enable the integrated HPC fabric for the VM to use by direct I/O assignment.
In addition, we have only a single VM on each node to meet the high-performance requirements of HPC clouds. This (i.e. non-shared) model allowed us to optimize Xen more. In this talk, we share our design and lessons, and discuss the options we considered to achieve high-performance virtualization for HPC.
TECHNICAL WHITE PAPER ▸ NetBackup 7.6 Plug-in for VMware vCenter - Symantec
In NetBackup 7.6, the NetBackup plug-in for vCenter integrates with VMware’s vSphere Client user interface to provide new VMware virtual machine administration capabilities.
The plug-in enables VMware administrators…
▸ To monitor their virtual machine backups directly from the VMware vSphere Client UI.
▸ To export virtual machine backup reports from the vSphere Client UI.
▸ To initiate full virtual machine recovery directly from a Recovery Portal in the vSphere Client UI.
Boosting Performance with the Dell Acceleration Appliance for Databases - Principled Technologies
If your business is expanding and you need to support more users accessing your databases, it’s time to act. Upgrading your database infrastructure with a flash storage-based solution is a smart way to improve performance without adding more servers or taking up very much rack space, which comes at a premium. The Dell Acceleration Appliance for Databases addresses this by providing strong performance when combined with your existing infrastructure or on its own.
We found that adding a highly available DAAD solution to our database application provided up to 3.01 times the Oracle Database 12c performance, which can make a big difference to your bottom line. Additionally, the DAAD delivered 3.14 times the database performance when replacing traditional storage completely, which could enable your infrastructure to keep up with your growing business’ needs.
This document provides troubleshooting information for issues that may occur when using vSphere features and components, including:
- Troubleshooting steps for resolving common virtual machine problems like fault tolerant configuration errors and USB device connectivity issues.
- Troubleshooting hosts, including vSphere HA states and Auto Deploy problems.
- Troubleshooting the vCenter Server and vSphere Web Client, as well as Linked Mode, certificates, and plug-ins.
- Troubleshooting availability features like vSphere HA, DRS, and fault tolerance.
- Troubleshooting storage, networking, licensing, and other resource management problems.
This document discusses Oracle Cloud Infrastructure compute options including bare metal instances, virtual machine instances, and dedicated hosts. It provides details on instance types, images, volumes, instance configurations and pools, autoscaling, metadata, and lifecycle. Key points covered include the differences between bare metal, VM, and dedicated host instances, bringing your own images, customizing boot volumes, using instance configurations and pools for management and autoscaling, and accessing instance metadata.
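Accessing instance metadata, mentioned above, goes through OCI's instance metadata endpoint. The sketch below builds (but does not send) a request for the IMDS v2 endpoint, which expects the literal header `Authorization: Bearer Oracle` and is only reachable from inside a running OCI instance:

```python
# Build (but do not send) a request to the OCI instance metadata service v2.
# The v2 endpoint requires the literal header "Authorization: Bearer Oracle"
# and is only reachable from inside a running OCI instance.
import urllib.request

METADATA_BASE = "http://169.254.169.254/opc/v2"

def build_metadata_request(path="instance/"):
    return urllib.request.Request(
        f"{METADATA_BASE}/{path}",
        headers={"Authorization": "Bearer Oracle"},
    )

req = build_metadata_request()
# body = urllib.request.urlopen(req, timeout=2).read()  # run on-instance only
```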
This document summarizes a presentation about FlashGrid, an alternative to Oracle Exadata that aims to achieve similar performance levels using commodity hardware. It discusses the key components of FlashGrid including the Linux kernel, networking protocols like InfiniBand and NVMe, and hardware. Benchmarks show FlashGrid achieving comparable IOPS and throughput to Exadata on a single server. While Exadata has proprietary advantages, FlashGrid offers excellent raw performance at lower cost and with simpler maintenance through the use of standard technologies.
This document discusses backup and recovery strategies for Oracle Exadata systems. It outlines the fundamental principles of backups including having multiple copies of data stored on different media with one copy offsite. It then describes the various backup options for Exadata, including using additional Exadata storage cells for the fastest backups, using a ZFS storage appliance for flexibility, or backing up to tape for economical long-term storage with removable offline copies. Key metrics like backup and restore speeds are provided for each option.
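The fundamental principles above are the classic 3-2-1 rule (at least three copies, on at least two different media, with at least one offsite); a hypothetical checker makes them concrete:

```python
# Illustrative check of the backup principles above (the classic 3-2-1 rule):
# at least 3 copies, on at least 2 different media, at least 1 offsite.

def satisfies_321(copies):
    """copies: list of dicts like {"media": "tape", "offsite": True}."""
    return (len(copies) >= 3
            and len({c["media"] for c in copies}) >= 2
            and any(c["offsite"] for c in copies))

# The three Exadata options described above, as one combined plan:
exadata_plan = [
    {"media": "exadata_cells", "offsite": False},  # fastest backup/restore
    {"media": "zfs_appliance", "offsite": False},  # flexible secondary copy
    {"media": "tape",          "offsite": True},   # economical offline copy
]
```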
HPE Data Protector 9.07 includes several new features:
- It allows for cached restores from VMware backups stored on StoreOnce Catalyst repositories. It also enables powering on and live migrating VMs from backups on StoreOnce Catalyst.
- It provides support for 3PAR remote copy ZDB backups, allowing VMware snapshots to be moved to a secondary site.
- Enhancements are made for using StoreOnce repositories, including support for multiprotocol access and improved service set resolution.
- Individual VHD/VHDX files can now be restored from Microsoft Hyper-V backups.
- Support is added for NetApp cluster aware backups and 3-way NDMP backups.
- Oracle Database 11g Release 2 provides many advanced features to lower IT costs including in-memory processing, automated storage management, database compression, and real application testing capabilities.
- It allows for online application upgrades using edition-based redefinition which allows new code and data changes to be installed without disrupting the existing system.
- Oracle provides multiple upgrade paths from prior database versions to 11g to allow for predictable performance and a safe upgrade process.
This document discusses database deployment automation. It begins with introductions and an example of a problematic Friday deployment. It then reviews the concept of automation and different visions of it within an organization. Potential tools and frameworks for automation are discussed, along with common pitfalls. Basic deployment workflows using Oracle Cloud Control are demonstrated, including setting credentials, creating a proxy user, adding target properties, and using a job template. The document concludes by emphasizing that database deployment automation is possible but requires effort from multiple teams.
Zero Data Loss Recovery Appliance - Deep Dive - Daniele Massimi
The document discusses the installation and configuration of an Oracle Recovery Appliance (ZDLRA). It begins with an overview of the ZDLRA functionality including delta push backups, real-time redo transport, and the delta store. It then covers the step-by-step installation process including running the installer script and additional Recovery Appliance specific steps. Finally, it mentions deploying agents to compute nodes, discovering the Recovery Appliance, and installing the backup module on clients.
This document discusses MongoDB sharding, which involves horizontally scaling MongoDB across multiple machines or shards. It describes the components of a sharded MongoDB cluster including shards, config servers, and mongos query routers. It provides examples of when and why sharding would be used, such as for large datasets, high throughput, hardware limitations, storage engine limitations, isolating failures, and separating hot and cold data. The document then outlines steps to set up a basic two-node sharded cluster with one shard, three config servers, and mongos query routers on the same two machines.
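The setup steps above correspond to a handful of mongos admin commands. The sketch below only builds the command documents; the host names, database, collection, and shard key are placeholders, and real commands would be issued against mongos, e.g. via pymongo's `client.admin.command`:

```python
# Build the admin command documents for the basic sharded-cluster setup
# described above. Host names, database, collection, and shard key are
# placeholders; real commands would be run against mongos, e.g. via
# pymongo's client.admin.command(...).

def sharding_commands(shard_rs, db, coll, shard_key):
    return [
        {"addShard": shard_rs},              # register the shard replica set
        {"enableSharding": db},              # allow sharding on the database
        {"shardCollection": f"{db}.{coll}",  # shard the collection on a key
         "key": shard_key},
    ]

cmds = sharding_commands("rs0/hostA:27018,hostB:27018",
                         "appdb", "events", {"user_id": 1})
```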
VMware vSphere 4.0 provides infrastructure services including enhanced virtualization capabilities for compute, storage, and networking. It features increased scalability support, availability features like VMware HA and Fault Tolerance, and security improvements such as VMsafe and vShield Zones. The release delivers optimization and automation to reduce costs while improving operational efficiency.
The document discusses Oracle's ZS3 series enterprise storage systems. It provides an overview of Oracle's approach to driving storage system evolution from hardware-defined to software-defined. It then summarizes the key features and benefits of the ZS3 series, including extreme performance, integrated analytics, and optimization for Oracle software.
VMworld 2015: The Future of Software-Defined Storage - What Does it Look Like... - VMworld
The document discusses the future of software-defined storage in 3 years. It predicts that storage media will continue to advance with higher capacities and lower latencies using technologies like 3D NAND and NVDIMMs. Networking and interconnects like NVMe over Fabrics will allow disaggregated storage resources to be pooled and shared across servers. Software-defined storage platforms will evolve to provide common services for distributed data platforms beyond just block storage, with advanced data placement and policy controls to optimize different workloads.
This presentation provides an introduction to the current activities leading to software architectures and methodologies for new NVM technologies, including the activities of the SNIA Non-Volatile Memory (NVM) Technical Working Group. This session includes a review and discussion of the impacts of the SNIA NVM Programming Model (NPM). We will preview the current work on new technologies, including remote access, high availability, clustering, atomic transactions, error management, and current methodologies for dealing with NVM.
This document provides an overview and introduction to virtual storage concepts in VMware vSphere, including NFS, iSCSI, VMFS, and Virtual SAN datastores. It discusses storage protocols, multipathing, and best practices for configuring and managing different types of datastores. The document is divided into several sections covering storage concepts, iSCSI, NFS, VMFS, and Virtual SAN datastores.
IBM Spectrum Scale Fundamentals Workshop for Americas, Part 1: Components Archi... - xKinAnx
The document provides instructions for installing and configuring Spectrum Scale 4.1. Key steps include: installing Spectrum Scale software on nodes; creating a cluster using mmcrcluster and designating primary/secondary servers; verifying the cluster status with mmlscluster; creating Network Shared Disks (NSDs); and creating a file system. The document also covers licensing, system requirements, and IBM and client responsibilities for installation and maintenance.
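The NSD-creation step takes a stanza file as input. The sketch below generates one; device paths, NSD names, and server names are placeholders, and the stanza keywords follow the `%nsd` format as we understand it (verify against your release's manuals):

```python
# Generate an NSD stanza file for mmcrnsd. Device paths, NSD names, and
# server names are placeholders; the %nsd stanza keywords follow the format
# documented for Spectrum Scale (verify against your release's manuals).

def nsd_stanza(device, nsd_name, servers, usage="dataAndMetadata"):
    return (f"%nsd: device={device}\n"
            f"  nsd={nsd_name}\n"
            f"  servers={','.join(servers)}\n"
            f"  usage={usage}\n")

stanzas = "".join(
    nsd_stanza(dev, f"nsd{i}", ["gpfs-node1", "gpfs-node2"])
    for i, dev in enumerate(["/dev/sdb", "/dev/sdc"], start=1)
)
# Write `stanzas` to a file and pass it to: mmcrnsd -F nsd.stanza
```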
Presentation: Oracle on Power - Power Advantages and License Optimization - solarisyougood
This document discusses optimizing Oracle licensing on IBM Power Systems. It describes the advantages of Power Systems for virtualization and workload consolidation which can reduce licensing costs. It provides an overview of Oracle editions and their pricing, noting opportunities to use Standard Edition to save costs versus Enterprise Edition. It also discusses when RAC may not be needed on Power Systems due to its high availability features, and how PowerVM partitioning is recognized by Oracle for "sub-capacity pricing" based on actual cores used.
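The licensing arithmetic behind "sub-capacity pricing" is straightforward: licenses equal licensed cores times the processor core factor, rounded up. The core factors below are illustrative examples (check Oracle's current core factor table), and the partition sizes are invented:

```python
import math

# Illustrative Oracle processor-license arithmetic. Core factors vary by
# processor and change over time; the values below are examples, not
# Oracle's current core factor table.
CORE_FACTOR = {"ibm_power": 1.0, "x86": 0.5}

def processor_licenses(cores, platform):
    """Licenses required = licensed cores x core factor, rounded up."""
    return math.ceil(cores * CORE_FACTOR[platform])

# Sub-capacity: with recognized PowerVM partitioning, only the cores in
# the partition running Oracle are licensed, not the whole machine.
whole_machine = processor_licenses(32, "ibm_power")
partition_only = processor_licenses(8, "ibm_power")
x86_server = processor_licenses(16, "x86")
```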
This document provides an overview of vMotion capabilities in VMware vSphere, including:
- Types of virtual machine migrations like vMotion, Storage vMotion, and shared-nothing vMotion.
- Requirements for vMotion like compatible CPUs and network connectivity.
- Enhanced features in vSphere 6 like separate vMotion networking stacks and long distance vMotion.
- Best practices for vMotion planning, limitations, and troubleshooting migration errors.
This document provides an overview of NetApp's general product direction and upcoming features for clustered Data ONTAP. However, it does not constitute a commitment by NetApp and the details may change without notice. NetApp makes no guarantees about future functionality, timelines or products. The development and release of any mentioned features is at NetApp's sole discretion.
Dell EMC Backup Solution in Azure Cloud - vipinvips
The document discusses backup solutions for Azure virtual machines and identifies limitations with native Azure backup. It recommends using Dell EMC Networker and DataDomain as a third-party enterprise backup solution that can meet tier 1 RPO and RTO requirements of less than an hour. It provides details on the proposed solution architecture with Networker and DataDomain instances in each region along with replication capabilities. The solution aims to address limitations of native Azure backup and provide application-aware backups, encryption, short RPOs, and support for workloads like Oracle databases.
The document discusses using Dell EMC Isilon all-flash storage for SAS GRID workloads. It describes a test of the Isilon F810 node with hardware-accelerated compression using a multi-user SAS analytics workload. The testing focused on performance, scalability, compression benefits, deduplication savings, and cost when running the workload on an Isilon cluster with up to 12 grid nodes and comparing results with and without enabling various compression options.
Oracle's Data Protection Solutions Will Help You Protect Your Business Interests
The document discusses Oracle's data protection solutions, specifically the Oracle Recovery Appliance. The Recovery Appliance provides continuous data protection for Oracle databases with recovery points of less than one second. It offers faster restore performance compared to generic data protection appliances. The Recovery Appliance fully integrates with Oracle databases and offers features like real-time data validation and monitoring of data loss exposure.
This document discusses hybrid cloud storage solutions from Microsoft, focusing on StorSimple. It provides an overview of Carlos Mayol, a Premier Field Engineer at Microsoft, and his expertise in areas like Azure Infrastructure Services. It then summarizes Microsoft's StorSimple product which provides hybrid cloud storage across on-premises and Azure environments, highlighting benefits like cost reduction, simplified management, and support for various workloads. Use cases and customer examples are provided for StorSimple 8000 series appliances and the StorSimple Virtual Array solution.
This cloud computing overview explains the basics of the cloud and its models. It distinguishes between the different types of clouds and their characteristics. The points presented will help you select the cloud solution that best meets your company's requirements.
This document summarizes a presentation about FlashGrid, an alternative to Oracle Exadata that aims to achieve similar performance levels using commodity hardware. It discusses the key components of FlashGrid including the Linux kernel, networking protocols like Infiniband and NVMe, and hardware. Benchmarks show FlashGrid achieving comparable IOPS and throughput to Exadata on a single server. While Exadata has proprietary advantages, FlashGrid offers excellent raw performance at lower cost and with simpler maintenance through the use of standard technologies.
This document discusses backup and recovery strategies for Oracle Exadata systems. It outlines the fundamental principles of backups including having multiple copies of data stored on different media with one copy offsite. It then describes the various backup options for Exadata, including using additional Exadata storage cells for the fastest backups, using a ZFS storage appliance for flexibility, or backing up to tape for economical long-term storage with removable offline copies. Key metrics like backup and restore speeds are provided for each option.
HPE Data Protector 9.07 includes several new features:
- It allows for cached restores from VMware backups stored on StoreOnce Catalyst repositories. It also enables powering on and live migrating VMs from backups on StoreOnce Catalyst.
- It provides support for 3PAR remote copy ZDB backups, allowing VMware snapshots to be moved to a secondary site.
- Enhancements are made for using StoreOnce repositories, including support for multiprotocol access and improved service set resolution.
- Individual VHD/VHDX files can now be restored from Microsoft Hyper-V backups.
- Support is added for NetApp cluster aware backups and 3-way NDMP backups.
- Oracle Database 11g Release 2 provides many advanced features to lower IT costs including in-memory processing, automated storage management, database compression, and real application testing capabilities.
- It allows for online application upgrades using edition-based redefinition which allows new code and data changes to be installed without disrupting the existing system.
- Oracle provides multiple upgrade paths from prior database versions to 11g to allow for predictable performance and a safe upgrade process.
This document discusses database deployment automation. It begins with introductions and an example of a problematic Friday deployment. It then reviews the concept of automation and different visions of it within an organization. Potential tools and frameworks for automation are discussed, along with common pitfalls. Basic deployment workflows using Oracle Cloud Control are demonstrated, including setting credentials, creating a proxy user, adding target properties, and using a job template. The document concludes by emphasizing that database deployment automation is possible but requires effort from multiple teams.
Zero Data Loss Recovery Appliance - Deep DiveDaniele Massimi
The document discusses the installation and configuration of an Oracle Recovery Appliance (ZDLRA). It begins with an overview of the ZDLRA functionality including delta push backups, real-time redo transport, and the delta store. It then covers the step-by-step installation process including running the installer script and additional Recovery Appliance specific steps. Finally, it mentions deploying agents to compute nodes, discovering the Recovery Appliance, and installing the backup module on clients.
This document discusses MongoDB sharding which involves horizontally scaling MongoDB across multiple machines or shards. It describes the components of a sharded MongoDB cluster including shards, config servers, and mongos query routers. It provides examples of when and why sharding would be used such as for large datasets, high throughput, hardware limitations, storage engine limitations, isolating failures, and separating hot and cold data. The document then outlines steps to set up a basic two node sharded cluster with one shard, three config servers, and mongos query routers on the same two machines.
VMware vSphere 4.0 provides infrastructure services including enhanced virtualization capabilities for compute, storage, and networking. It features increased scalability support, availability features like VMware HA and Fault Tolerance, and security improvements such as VMsafe and vShield Zones. The release delivers optimization and automation to reduce costs while improving operational efficiency.
The document discusses Oracle's ZS3 series enterprise storage systems. It provides an overview of Oracle's approach to driving storage system evolution from hardware-defined to software-defined. It then summarizes the key features and benefits of the ZS3 series, including extreme performance, integrated analytics, and optimization for Oracle software.
VMworld 2015: The Future of Software- Defined Storage- What Does it Look Like...VMworld
The document discusses the future of software-defined storage in 3 years. It predicts that storage media will continue to advance with higher capacities and lower latencies using technologies like 3D NAND and NVDIMMs. Networking and interconnects like NVMe over Fabrics will allow disaggregated storage resources to be pooled and shared across servers. Software-defined storage platforms will evolve to provide common services for distributed data platforms beyond just block storage, with advanced data placement and policy controls to optimize different workloads.
This presentation provides an introduction to the current activities leading to software architectures and methodologies for new NVM technologies, including the activities of the SNIA Non-Volatile Memory (NVM) Technical Working Group. This session includes a review and discussion of the impacts of the SNIA NVM Programming Model (NPM). We will preview the current work on new technologies, including remote access, high availability, clustering, atomic transactions, error management, and current methodologies for dealing with NVM.
This document provides an overview and introduction to virtual storage concepts in VMware vSphere, including NFS, iSCSI, VMFS, and Virtual SAN datastores. It discusses storage protocols, multipathing, and best practices for configuring and managing different types of datastores. The document is divided into several sections covering storage concepts, iSCSI, NFS, VMFS, and Virtual SAN datastores.
Ibm spectrum scale fundamentals workshop for americas part 1 components archi...xKinAnx
The document provides instructions for installing and configuring Spectrum Scale 4.1. Key steps include: installing Spectrum Scale software on nodes; creating a cluster using mmcrcluster and designating primary/secondary servers; verifying the cluster status with mmlscluster; creating Network Shared Disks (NSDs); and creating a file system. The document also covers licensing, system requirements, and IBM and client responsibilities for installation and maintenance.
Presentation oracle on power power advantages and license optimizationsolarisyougood
This document discusses optimizing Oracle licensing on IBM Power Systems. It describes the advantages of Power Systems for virtualization and workload consolidation which can reduce licensing costs. It provides an overview of Oracle editions and their pricing, noting opportunities to use Standard Edition to save costs versus Enterprise Edition. It also discusses when RAC may not be needed on Power Systems due to its high availability features, and how PowerVM partitioning is recognized by Oracle for "sub-capacity pricing" based on actual cores used.
This document provides an overview of vMotion capabilities in VMware vSphere, including:
- Types of virtual machine migrations like vMotion, Storage vMotion, and shared-nothing vMotion.
- Requirements for vMotion like compatible CPUs and network connectivity.
- Enhanced features in vSphere 6 like separate vMotion networking stacks and long distance vMotion.
- Best practices for vMotion planning, limitations, and troubleshooting migration errors.
This document provides an overview of NetApp's general product direction and upcoming features for clustered Data ONTAP. However, it does not constitute a commitment by NetApp and the details may change without notice. NetApp makes no guarantees about future functionality, timelines or products. The development and release of any mentioned features is at NetApp's sole discretion.
Dell emc back up solution in azure cloud vipinvips
The document discusses backup solutions for Azure virtual machines and identifies limitations with native Azure backup. It recommends using Dell EMC Networker and DataDomain as a third-party enterprise backup solution that can meet tier 1 RPO and RTO requirements of less than an hour. It provides details on the proposed solution architecture with Networker and DataDomain instances in each region along with replication capabilities. The solution aims to address limitations of native Azure backup and provide application-aware backups, encryption, short RPOs, and support for workloads like Oracle databases.
The document discusses using Dell EMC Isilon all-flash storage for SAS GRID workloads. It describes a test of the Isilon F810 node with hardware-accelerated compression using a multi-user SAS analytics workload. The testing focused on performance, scalability, compression benefits, deduplication savings, and cost when running the workload on an Isilon cluster with up to 12 grid nodes and comparing results with and without enabling various compression options.
Oracle's Data Protection Solutions Will Help You Protect Your Business Interests
The document discusses Oracle's data protection solutions, specifically the Oracle Recovery Appliance. The Recovery Appliance provides continuous data protection for Oracle databases with recovery points of less than one second. It offers faster restore performance compared to generic data protection appliances. The Recovery Appliance fully integrates with Oracle databases and offers features like real-time data validation and monitoring of data loss exposure.
This document discusses hybrid cloud storage solutions from Microsoft, focusing on StorSimple. It provides an overview of Carlos Mayol, a Premier Field Engineer at Microsoft, and his expertise in areas like Azure Infrastructure Services. It then summarizes Microsoft's StorSimple product which provides hybrid cloud storage across on-premises and Azure environments, highlighting benefits like cost reduction, simplified management, and support for various workloads. Use cases and customer examples are provided for StorSimple 8000 series appliances and the StorSimple Virtual Array solution.
In Cloud Computing we explain the basics of the cloud and its models. The content distinguishes between different types of clouds and their characteristics. With the points presented, you will be able to select the cloud solution that meets your company's requirements.
Amazon Web Services (AWS) provides on-demand access to computing resources and services through a global network of data centers. AWS allows organizations of all sizes to use distributed IT infrastructure to deliver a variety of use cases. Customers pay only for the resources they consume on a pay-as-you-go basis. AWS offers advantages like flexibility, cost effectiveness, and scalability. It provides a variety of computing, storage, database, analytics, and other services. AWS ensures security of customer data and applications using features like virtual private clouds, identity and access management, encryption, and monitoring tools.
Sun Salutation is considered a complete body workout. Yoga experts say that doing 12 sets of Surya Namaskar translates into doing 288 powerful yoga poses in a span of 12 to 15 minutes.
http://www.artofliving.org/in-en/yoga/yoga-poses/sun-salutation
This document describes how to use VMware and EMC Isilon to quickly deploy a Hadoop cluster running PivotalHD. It provides step-by-step instructions to automate the deployment of a Hadoop cluster using VMware Big Data Extensions and an existing EMC Isilon storage array for shared storage. The deployment can be done in a couple hours at low cost by leveraging existing VMware and EMC infrastructure.
This document describes the functions performed by an HP Vertica database administrator (DBA). Perform these tasks using only the dedicated database administrator account that was created when you installed HP Vertica. The examples in this documentation set assume that the administrative account name is dbadmin.
- To perform certain cluster configuration and administration tasks, the DBA (users of the administrative account) must be able to supply the root password for those hosts. If this requirement conflicts with your organization's security policies, these functions must be performed by your IT staff.
- If you perform administrative functions using a different account from the account provided during installation, HP Vertica encounters file ownership problems.
- If you share the administrative account password, make sure that only one user runs the Administration Tools at any time. Otherwise, automatic configuration propagation does not work correctly.
- The Administration Tools require that the calling user's shell be /bin/bash. Other shells give unexpected results and are not supported.
IT Transformation with Hewlett Packard Enterprise
IT for the idea economy
Ideas have always been the key to business growth. But good ideas alone are not enough. Success is determined by how quickly a company can turn ideas into profit. Today the path from an idea to its realization has shortened radically. That is why, when describing the current stage of economic development, market experts increasingly use the term "idea economy".
Part 1: IBM Applications
This part of the guide describes ways to back up and restore Informix Server database objects, DB2 databases, and Lotus Notes/Domino Server.
This part includes the following chapters:
Data Protector Informix Server integration
Data Protector DB2 UDB integration
Data Protector Lotus Notes/Domino Server integration
Part 2: Microsoft Applications
This part of the guide describes ways to configure and use the following:
Data Protector Microsoft SQL Server integration
Data Protector Microsoft SharePoint Server 2007/2010/2013 integration
Data Protector Microsoft SharePoint Server VSS based solution
Data Protector Microsoft Exchange Server 2007 integration
Data Protector Microsoft Exchange Server 2010 integration
Data Protector Microsoft Exchange Single Mailbox integration
Part 3: Oracle and SAP
This part of the guide describes ways to configure and use the following:
Data Protector Oracle Server integration
Data Protector MySQL integration
Data Protector SAP R/3 integration
Data Protector SAP MaxDB integration
Data Protector SAP HANA Appliance integration
Part 4: Sybase and Network Data Management Protocol Server
This part of the guide describes ways to configure and use the following:
Sybase Server integration
Network Data Management Protocol Server integration
NetApp SnapManager solution
Part 5: Virtualization
This part of the guide describes ways to back up VMware virtual machines and Microsoft Hyper-V data online.
This part includes the following chapters:
Data Protector Virtual Environment integration for VMware
Data Protector Virtual Environment integration for Microsoft Hyper-V
Part 6: PostgreSQL
This part of the guide describes the Data Protector PostgreSQL integration.
This part includes the following chapter:
Data Protector PostgreSQL integration
HPE Data Protector Disaster Recovery Guide, by Andrey Karpov
This chapter provides a general overview of the disaster recovery process, explains the basic terms used in the Disaster Recovery guide, and provides an overview of disaster recovery methods.
Carefully follow the instructions below to prepare for disaster recovery and ensure a fast and efficient restore. The preparation procedure does not depend on the disaster recovery method, and includes developing a detailed disaster recovery plan, performing consistent and relevant backups, and updating the SRD file on Windows.
Assisted Manual Disaster Recovery (AMDR)
Manual Disaster Recovery (MDR)
This chapter contains descriptions of problems you might encounter while performing a disaster recovery. You can start with problems connected to a particular disaster recovery method and continue with general disaster recovery problems.
Example Preparation Tasks
HPE Zero Downtime Administrator's Guide, by Andrey Karpov
Part 1: HPE P4000 SAN Solutions
This part describes how to configure the Data Protector HPE P4000 SAN Solutions integration. For information on how to perform zero downtime backup and instant recovery using the HPE P4000 SAN Solutions integration, see the HPE Data Protector Integration Guide for Microsoft Volume Shadow Copy Service.
Part 2: HPE P6000 EVA Disk Array Family
This part describes how to configure the Data Protector HPE P6000 EVA Disk Array Family integration, how to perform zero downtime backup and instant recovery using the HPE P6000 EVA Disk Array Family integration, and how to resolve the integration-specific Data Protector problems.
Part 3: HPE P9000 XP Disk Array Family
This part describes how to configure the Data Protector HPE P9000 XP Disk Array Family integration, how to perform zero downtime backup and instant recovery using the HPE P9000 XP Disk Array Family integration, and how to resolve the integration-specific Data Protector problems.
Part 4: HPE 3PAR StoreServ Storage
This part describes how to configure the Data Protector HPE 3PAR StoreServ Storage integration, and how to perform zero downtime backup and instant recovery using the HPE 3PAR StoreServ Storage integration through native storage system support built into the Data Protector HPE P6000 / HPE 3PAR SMI-S Agent. For information on how to perform zero downtime backup and instant recovery using the HPE 3PAR StoreServ Storage integration through the Data Protector Microsoft Volume Shadow Copy Service integration, see the HPE Data Protector Integration Guide for Microsoft Volume Shadow Copy Service.
Part 5: EMC Symmetrix
This part describes how to configure the Data Protector EMC Symmetrix integration, how to perform zero downtime backup and instant recovery using the EMC Symmetrix integration, and how to resolve the integration-specific Data Protector problems.
Part 6: NetApp Storage
This part describes how to configure the Data Protector NetApp Storage integration, how to perform zero downtime backup using the NetApp Storage system, and how to resolve the integration-specific Data Protector problems.
Part 7: EMC VNX Family
This part describes how to configure the Data Protector EMC VNX Family integration, how to perform zero downtime backup using the EMC VNX storage system, and how to resolve the integration-specific Data Protector problems.
Part 8: EMC VMAX Family
This part describes how to configure the Data Protector EMC VMAX Family integration, how to perform zero downtime backup using the EMC VMAX storage system, and how to resolve the integration-specific Data Protector problems.
This document discusses AWS CloudFormation, which allows users to create and manage AWS resources through templates written in JSON. It describes the basic structure of a CloudFormation template, which includes sections for description, parameters, mappings, resources, and outputs. Parameters allow passing values to the template, mappings specify different settings for different AWS regions, resources define the AWS infrastructure to create, and outputs define values that are returned after stack creation. Examples are provided of basic CloudFormation templates and how to launch, update, and troubleshoot templates.
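The section layout described above can be sketched as a minimal JSON template; the parameter, mapping values, and resource names below are invented for illustration, not taken from the source:

```python
import json

# A minimal CloudFormation-style template showing the five sections
# described above. All names and values here are illustrative only.
template = {
    "Description": "Minimal illustrative template",
    "Parameters": {
        "EnvName": {"Type": "String", "Default": "dev"}
    },
    "Mappings": {
        "RegionSettings": {
            "us-east-1": {"Ami": "ami-11111111"},
            "eu-west-1": {"Ami": "ami-22222222"}
        }
    },
    "Resources": {
        "AppBucket": {"Type": "AWS::S3::Bucket"}
    },
    "Outputs": {
        "BucketName": {"Value": {"Ref": "AppBucket"}}
    }
}

# Serialize to the JSON body that would be passed to CloudFormation.
body = json.dumps(template, indent=2)
print(sorted(template.keys()))
# -> ['Description', 'Mappings', 'Outputs', 'Parameters', 'Resources']
```

Parameters feed values in at stack-creation time, Mappings pick per-region settings such as the AMI ID, and Outputs surface values like the bucket name after the stack is created.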
HPE Data Protector 9.07 Granular Extension Guides, by Andrey Karpov
Granular Recovery Extension User Guide for
Microsoft SharePoint Server, Exchange and
VMware
The HP Data Protector Granular Recovery Extension User Guide for Microsoft Exchange Server provides information specific to this extension:
- For detailed information about Data Protector specifics, see the Data Protector Documentation set.
- For detailed information about Microsoft Exchange Server specifics, refer to the official Microsoft Exchange Server documentation.
- Software Version number, which indicates the software version.
- Document Release Date, which changes each time the document is updated.
- Software Release Date, which indicates the release date of this version of the software.
To check for recent updates or to verify that you are using the most recent edition of a document, visit the
Knowledge Base on the HPE Big Data Customer Support Site.
This document provides guidance on different approaches for loading data into HP Vertica, including:
1) Using the COPY statement, which loads data in two phases, to bulk load large amounts of data efficiently.
2) Tuning the data load by adjusting resource pool parameters like query budget and configuration parameters to improve performance.
3) Troubleshooting various loading scenarios like loading large or many small files, wide tables, and ensuring sufficient executor nodes.
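The first approach above, bulk loading with COPY, might look like the following sketch; the table name, file path, and options are invented for illustration (Vertica's DIRECT option loads straight to disk storage, which its documentation recommends for large loads):

```python
# Assemble a hypothetical Vertica bulk-load statement. The table and
# file path are placeholders; in practice this string would be executed
# through a client connection such as vsql or a database driver.
table = "public.sales"
source_file = "/data/sales.csv"

copy_sql = (
    f"COPY {table} FROM '{source_file}' "
    "DELIMITER ',' NULL '' DIRECT"
)
print(copy_sql)
```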
Vertica is a column-oriented database management system. It stores data in columnar projections rather than rows. The document provides an overview of Vertica concepts such as column storage, hybrid storage, projections vs tables, and types of projections. It also describes Vertica objects like projections, views, tables, SQL functions, and sequences. Operations covered include DML statements, bulk data loading using COPY, bulk updating using MERGE, and exporting data. The document compares Vertica to Teradata and provides version information.
HPE Data Protector Administrator's Guide, by Andrey Karpov
HPE Data Protector is data backup and recovery software. This document provides an administrator's guide for version 9.07, covering topics such as:
- Configuring security settings like encryption and access control
- Managing users, user groups, and their permissions
- Maintaining the Internal Database (IDB) that stores backup metadata
- Setting up a Manager-of-Managers (MoM) environment for centralized management across multiple cells
- Integrating Data Protector with clustering platforms like Microsoft Cluster Server, HPE Serviceguard, and HACMP
The guide includes procedures for tasks like backup, restore, IDB maintenance and recovery, user management, and more.
HPE Data Protector Troubleshooting Guide, by Andrey Karpov
How to troubleshoot
To solve problems quickly and efficiently:
1. Make yourself familiar with the general troubleshooting information.
2. Check if your problem is described in the HPE Data Protector Help file or the troubleshooting sections of applicable guides:
- To troubleshoot installation and upgrade, see the HPE Data Protector Installation Guide.
- To troubleshoot application integration sessions, see the HPE Data Protector Integration Guide.
- To troubleshoot zero downtime backup and instant recovery, see the HPE Data Protector Zero Downtime Backup Administrator's Guide and HPE Data Protector Zero Downtime Backup Integration Guide.
- To troubleshoot disaster recovery, see the HPE Data Protector Disaster Recovery Guide.
This document provides an installation guide for HPE Data Protector 9.07. It describes how to install the Data Protector Cell Manager, clients, and various integration options. The guide covers installations on Windows, UNIX, Linux and other platforms. It also provides instructions for cluster-aware installations and maintaining the Data Protector installation.
Introducing Backup to Disk devices and deduplication
This document describes how HPE Data Protector integrates with Backup to Disk devices and deduplication. By supporting deduplication, several new concepts are introduced to Data Protector, including a new device type, the Backup to Disk device, and four interface types: the HPE StoreOnce Software deduplication, the HPE StoreOnce Backup System, Smart Cache, and the EMC Data Domain Boost. Backup to Disk devices and deduplication are both discussed in detail in this document.
Backup to Disk devices are devices that back up data to a physical storage disk and support multi-host configurations. They support different backends such as the HP StoreOnce Software deduplication, the StoreOnce Backup system, Smart Cache, or the EMC Data Domain Boost. This document also describes the basic principles behind deduplication technology.
Data Protector supports the following deduplication backends:
HPE Data Protector Software deduplication provides the ability to deploy target-side deduplication on virtually any industry-standard hardware, offers greater flexibility than existing solutions as it can be deployed in a wider range of hardware set-ups, and provides enterprise-class scalability.
Because of the way Data Protector makes use of the extremely efficient HPE StoreOnce engine, Data Protector software deduplication uses memory very efficiently. As a result, you can deploy deduplication on application or backup servers without lowering application performance. Data Protector software deduplication can even be deployed on a virtual machine. In addition, Data Protector software deduplication delivers very high throughput.
HPE StoreOnce Backup system devices are disk-to-disk (D2D) backup devices which support deduplication. Smart Cache devices are backup to disk devices that enable non-staged recovery from VMware backups. EMC Data Domain Boost devices are D2D backup devices which support deduplication.
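The basic principle behind deduplication, storing each unique chunk of data only once and keeping references for the repeats, can be sketched in a few lines of Python. This is a toy fixed-size-chunk store for illustration, not how StoreOnce or Data Domain actually work:

```python
import hashlib

def dedup_store(data: bytes, chunk_size: int = 4):
    """Toy deduplication: split data into fixed-size chunks and
    store each unique chunk once, keyed by its content hash."""
    store = {}    # hash -> chunk bytes, stored only once
    recipe = []   # ordered list of hashes to rebuild the stream
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)
        recipe.append(digest)
    return store, recipe

def restore(store, recipe):
    """Rebuild the original stream from the chunk store."""
    return b"".join(store[h] for h in recipe)

# 16 bytes of input, but only 2 unique chunks need to be stored.
store, recipe = dedup_store(b"abcdabcdabcdxyz!")
print(len(store), len(recipe))  # -> 2 4
```

Real deduplication engines use variable-size content-defined chunking and far stronger engineering, but the space saving comes from the same idea: repeated chunks cost one stored copy plus a small reference.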
The document provides prerequisites and recommendations for installing and using Liquidware Labs Stratusphere components for virtual desktop infrastructure (VDI) assessments. It outlines requirements for the Stratusphere Hub, Connector ID Keys, and Network Station components, including supported hypervisors, download sizes, CPU/memory/storage needs, and network port requirements. It also provides guidance on preparing the environment, such as configuring networking and importing user/machine groups.
The document provides instructions for setting up virtual infrastructure on FIWARE Lab Cloud. It discusses the basic functionalities including identity services, compute services, storage services and network services. It also covers extended functionalities for deploying applications using blueprint templates which allow defining tiers, software, and network topology and launching blueprint instances.
Intro to the FIWARE Lab: Setting Up Your Virtual Infrastructure Using FIWARE Lab Cloud, by Fernando López.
1st FIWARE Summit, Málaga, Dec. 13-15, 2016.
IBM Cloud Pak for Integration 2020.2.1 installation, by khawkwf
The document provides instructions for installing IBM Cloud Pak for Integration (CP4I) v2020.2.1 in three steps:
1. It outlines the prerequisite requirements including server sizing, file system requirements, and integration component sizing.
2. It describes how to add the online catalog sources to install operators if the cluster is connected to the internet.
3. It explains how to mirror the operators to a private registry if the cluster is in a restricted environment not connected to the internet, which involves preparing the registry, bastion host, downloading packages, and configuring the cluster.
Horst Junker presented on AWS infrastructure services including Amazon VPC, EC2, and S3. He demonstrated creating a VPC with public subnets, launching a web server EC2 instance into it, and copying a web application from an S3 bucket to the instance. Key points covered included VPC networking, EC2 instance types and metadata, and S3 concepts such as buckets and objects.
During the “Architecting for the Cloud” breakfast seminar where we discussed the requirements of modern cloud-based applications and how to overcome the confinement of traditional on-premises infrastructure.
We heard from data management practitioners and cloud strategists from Amazon Web Services and NuoDB about how organizations are meeting the challenges associated with building new or migrating existing applications to the cloud.
Finally, we discussed how the right cloud-based architecture can:
- Handle rapid user growth by adding new servers on demand
- Provide high performance even in the face of heavy application usage
- Offer around-the-clock resiliency and uptime
- Provide easy and fast access across multiple geographies
- Deliver cloud-enabled apps in public, private, or hybrid cloud environments
- TeamSQL AWS Architecture
- VPC Introduction (Public, private subnets) and Demo
- EC2 Introduction and Demo
- RDS Introduction and Demo
- Introduction to CloudFormation
- A simple CloudFormation script made live (creating an EC2 instance with CloudFormation)
- Deleting a CloudFormation stack
- A more advanced CloudFormation script made live
(CloudFormation parameters, VPC, public and private subnets, RDS, Elastic Beanstalk, ElastiCache)
- Updating a CloudFormation stack
- Hands-on: advanced CloudFormation script
A brief description of how to use FIWARE Lab Cloud to deploy your resources, and the steps and recommendations to follow to resolve any problems.
Infrastructure Continuous Delivery Using AWS CloudFormation, by Amazon Web Services
This document discusses using AWS CloudFormation and AWS CodePipeline to implement infrastructure continuous delivery. It begins by explaining the need for infrastructure as code and continuous delivery workflows for infrastructure changes. AWS CloudFormation allows treating infrastructure as code by authoring templates and provisioning AWS resources from them. AWS CodePipeline can then be used to automate building, testing and deploying infrastructure changes as code is updated. The document demonstrates decomposing a sample application into CloudFormation templates and setting up a CodePipeline to continuously deliver changes. It provides examples of how to model pipelines for network resources and application components separately with dependencies.
This document provides instructions for setting up HPE ESM for AWS, including launching an instance of the pre-installed AMI from the AWS Marketplace, configuring ESM, and sending logs to ESM for analysis. It outlines launching the ESM instance, configuring it with a license file and static IP, and deploying additional smart connectors. Additional documentation for using ESM is available on HPE's product documentation site.
This document provides instructions for setting up HPE ArcSight ArcMC for AWS software version 2.2 on Amazon Web Services (AWS). It describes how to launch an instance of the ArcMC for AWS Amazon Machine Image (AMI) on AWS, configure ArcMC for AWS including setting the admin password and license key, and additional next steps for configuration. It also provides contact information for HPE ArcSight support and links to product documentation.
SAP Solutions on VMware Best Practices Guide, by narendar99
This document provides best practices for deploying SAP software solutions on VMware vSphere. It discusses VMware virtualization capabilities and benefits, SAP platform architectures, SAP support for virtual environments, and guidelines for optimizing virtual machine configuration settings like memory, CPU, storage, and networking. The document aims to help organizations efficiently run their SAP workloads on VMware infrastructure while meeting SAP support requirements.
This document provides instructions for setting up HPE ArcSight Management Center (ArcMC) on Microsoft Azure. It describes how to launch an instance of ArcMC from the Azure Marketplace, configure it by setting a new admin password and updating the license key, and provides next steps for configuring SmartConnectors and integrating additional devices and applications. The document also provides contact information for ArcMC support and links to additional product documentation.
Chef and Apache CloudStack (ChefConf 2014), by Jeff Moody
This document discusses using Chef with Apache CloudStack and Citrix CloudPlatform for automation and configuration management. It provides an overview of CloudStack and CloudPlatform, and explains two Chef knife plugins - knife-cloudstack and knife-cloudstack-fog. knife-cloudstack-fog provides comprehensive API coverage for provisioning CloudStack servers using Chef. The document also covers options for getting started with CloudStack and discusses future plans, like testing and merging the plugins.
Auto-scaling is one of the most important elements for building highly scalable services and architectures on the AWS cloud. This session covers in detail the various ways auto-scaling can be used to build an effective cloud infrastructure.
You will learn how to configure auto-scaling groups and set up scaling plans, how to manage the auto-scaling lifecycle using CloudWatch and notifications, and various auto-scaling best practices.
AWS re:Invent 2016: Infrastructure Continuous Delivery Using AWS CloudFormation, by Amazon Web Services
In this session, we will review ways to manage the lifecycle of your dev, test, and production infrastructure using CloudFormation. Learn how to architect your infrastructure through loosely coupled stacks using cross-stack references, tightly coupled nested stacks and other best practices. Learn how to use CloudFormation to provision and manage a continuous deployment pipeline for your infrastructure-as-code. Automate deployment of new development environments as your infrastructure evolves, promote your new architecture for testing, and deploy changes to production.
To secure AWS infrastructure, implement multiple layers of security including VPCs, subnets, security groups, network ACLs, firewalls, and IAM roles and policies. Create a custom VPC with public and private subnets, attach an internet gateway to the VPC and route tables to allow access. Use security groups to control traffic, network ACLs as an additional firewall layer, and a NAT gateway to allow private instances internet access. Implement AWS WAF, Shield, and IAM best practices like MFA and least privilege policies.
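The least-privilege IAM recommendation above can be made concrete as a policy document; the bucket ARN here is a placeholder, and the point is to grant only the specific actions and resources a workload needs rather than broad "*" permissions:

```python
import json

# Illustrative least-privilege IAM policy: read-only access to one
# (placeholder) S3 bucket instead of wildcard permissions.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*"
            ]
        }
    ]
}
print(json.dumps(policy, indent=2))
```

A policy like this would typically be attached to an IAM role assumed by the EC2 instances in the private subnets, so credentials never need to be stored on the instances themselves.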
Needs: a reliable, cost-effective, and easy-to-use backup solution
Environment: VMware and Hyper-V, more than 10 TB of data.
Previous solution:
A very expensive and complex backup solution
Problem: lack of budget for IT needs
Sector: social services.
Needs: reduce the time and complexity of backing up virtual machines.
Environment: 45 VMs, 1.5 TB of data, Windows Domain Controller, Lotus Notes, NAS
Sector: IT services
Needs: a reliable backup system aimed at reducing downtime in the event of a server failure.
Environment: multiple hypervisors, websites, ERP systems.
HPE IDOL Technical Overview - July 2016, by Andrey Karpov
Search and Analytics Platform for Text and Rich Media
Open Innovation is transforming everything
Connected people, apps and things generating massive data in many forms
How do you bridge the gap between data and outcomes?
Augmented Intelligence powers apps for competitive advantage
Machine Learning at the Service of Business Augmented Intelligence
HPE Big Data Advanced Analytics Software Solutions
Strong information and weak information
HPE IDOL: Natural Language Processing (NLP) engine
VM Explorer® is a simple but powerful software to back up, replicate and restore your VMware ESX, ESXi and Microsoft Hyper-V Virtual Machines (VM).
The following documentation explains the main tasks required for configuration and daily use of VM Explorer®. All services hereinafter are brought to you by HPE.
The HPE services and materials presented for VM Explorer® hereinafter are protected by copyright, trademark, trade dress, unfair competition, and other intellectual property rights. The trademarks, logos and marks of HPE and VM Explorer® displayed on the services and products are the property of HPE or third parties. You are not permitted to use the Marks without the prior consent of HPE or the third party that may own the Marks.
Building and managing secure private and hybrid clouds
HP Helion extends beyond just cloud to become the very fabric of your enterprise. It delivers an extensible and open portfolio to build and manage enterprise-grade, end-to-end orchestrated cloud services.
HPE Software Solutions Conference 2016, by Andrey Karpov
HPE Software Solutions Conference, April 14, 2016
Automating the transfer of a transport service between geographically distributed sites
TSER management system
Subsystem for automating the procedure of transferring ES processing between the central TSER servers of the CTU (PAPO)
Automating DevOps management processes: new realities, new speed
8.0 Transforming records management for Information Governance
•Access and understand virtually any source of information, on-premises and in the cloud
•A strategic pillar of HP's HAVEn Big Data platform
•A non-disruptive, manage-in-place approach complements any organization
Understanding human information
March 2016 HPE Data Protector
Comprehensive data protection for the modern enterprise
If you pick up the latest datacenter trends reports from ESG, Gartner, and IDC, you will notice that improving backup and recovery appears among the top IT priorities for organizations. The reason for that is simple: as the velocity, variety and complexity of data continue to accelerate, so do the risks of not being able to speedily restore critical systems and applications in case of disaster or data loss.
HP Distributed R is a high-performance scalable platform for the R language. It enables R to
leverage multiple cores and multiple servers to perform Big Data Advanced Analytics. It consists of
new R language constructs to easily parallelize algorithms across multiple R processes.
HP Distributed R simplifies large-scale analysis by extending R. Because R is a single-threaded
environment, it has limited utility for Big Data analytics. HP Distributed R allows you to specify that
parts of programs be run in multiple single-threaded R-processes. This approach results in
significantly reduced execution times for Big Data analysis.
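The idea of fanning work out to multiple single-threaded worker processes can be illustrated with Python's multiprocessing. This is only an analogy for the execution model described above, not Distributed R itself, which provides its own R-level constructs:

```python
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker process handles one partition of the data,
    # analogous to one single-threaded R process in the pool.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1000))
    # Split the data into 4 partitions, one per worker process.
    chunks = [data[i::4] for i in range(4)]
    with Pool(4) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total)  # -> 332833500
```

The coordinating process only scatters partitions and gathers partial results, which is what lets a single-threaded language engine use all the cores of one or more servers.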
Become a data-driven organization with the Internet of Things
Executive summary
Personal health monitors tracking your fitness, trashcans monitoring their fullness, watches telling you more
than just the time, and agricultural soil monitors saying it’s time to water. It seems a day doesn’t go by that
we don’t hear about the latest “offline” thing, device, or equipment becoming “online,” moving from isolation
to being connected to the Internet of Things (IoT). It’s clear that integrating sensors, electronics, and
network connectivity into devices can enable innovation, enhancing and extending the way we work and
interact with each other and the world around us.
HP Vertica Analytic Database
Creating flex tables is similar to creating other tables, except column definitions are optional. When
you create flex tables, with or without column definitions, HP Vertica implicitly adds a special
column to your table, called __raw__. This is the column that stores loaded data. The __raw__
column type is LONG VARBINARY, and its default maximum width is 130000 bytes (with an
absolute maximum of 32000000 bytes). You can change the width default with the
FlexTablesRawSize configuration parameter.
Loading data into a flex table encodes the record into a map type, and populates the __raw__
column. The map type is a standard dictionary type, pairing keys with string values as virtual
columns.
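The record-to-map encoding can be pictured as follows. This is a Python analogy of the key/value pairing only, not Vertica's actual binary __raw__ format:

```python
import json

def encode_record(record: dict) -> bytes:
    """Toy stand-in for flex-table loading: pair each key with its
    value as a string, then serialize the map into one raw blob."""
    vmap = {key: str(value) for key, value in record.items()}
    return json.dumps(vmap, sort_keys=True).encode()

def virtual_column(raw: bytes, key: str) -> str:
    """Look up a 'virtual column' by key from the raw blob."""
    return json.loads(raw.decode())[key]

raw = encode_record({"id": 42, "name": "widget"})
print(virtual_column(raw, "name"))  # -> widget
```

The useful property this sketch shares with the real map type is that no column definitions are needed up front: whatever keys arrive in a record become queryable virtual columns.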
Software solutions catalog. Service portfolio management. Development and testing management. Managing the quality of business service delivery. Data center automation and cloud platform construction. Security. Big Data. Technical support for HPE software products. Training services for HPE products. HPE authorized partners.
Emerging Technologies For Business Intelligence, Analytics, and Data Warehousing
Report Purpose. This report educates organizations worldwide about the inventory of
currently available emerging technologies and methods (ETMs) as they apply directly
to business intelligence (BI), analytics, and data warehousing (DW). TDWI
assumes that the innovations and excitement of ETMs can make BI, DW, and
analytics more appealing, pervasive, insightful, and actionable.
HP Vertica is a database management system built on massively parallel processing principles and designed specifically for storing and processing large volumes of data.
HP Vertica supports SQL and the standard data access interfaces ODBC, JDBC, and ADO.NET, and provides numerous connectors to various business intelligence and data analysis tools.
An HP Vertica cluster consists of standard x86 nodes connected by a network. All cluster nodes are peers: any node can accept and serve user queries, and any node can perform data loading.
Backup Navigator install and configuration guide, by Andrey Karpov
HP Backup Navigator is one of three products that support HP’s Adaptive Backup and Recovery solution. Adaptive Backup and Recovery is an innovative approach to data protection based on the use of operational analytics targeting the day-to-day use of the backup infrastructure. More importantly this approach adds trending capabilities and predictive algorithms enabling IT teams to make decisions about the backup and recovery process before problems surface. As a core component of the Adaptive Backup and Recovery solution, HP Backup Navigator delivers an interactive web-based reporting and analytics tool that correlates related, but often disparate, pools of information presenting the content graphically in the form of customizable dashboards, graphs, charts, summaries, trending views and detailed information concerning the backup performance, capacity utilization and daily operational details.
06-04-2024 - NYC Tech Week - Discussion on Vector Databases, Unstructured Data and AI
Round table discussion of vector databases, unstructured data, ai, big data, real-time, robots and Milvus.
A lively discussion with NJ Gen AI Meetup Lead, Prasad and Procure.FYI's Co-Found
Learn SQL from basic queries to Advance queriesmanishkhaire30
Dive into the world of data analysis with our comprehensive guide on mastering SQL! This presentation offers a practical approach to learning SQL, focusing on real-world applications and hands-on practice. Whether you're a beginner or looking to sharpen your skills, this guide provides the tools you need to extract, analyze, and interpret data effectively.
Key Highlights:
Foundations of SQL: Understand the basics of SQL, including data retrieval, filtering, and aggregation.
Advanced Queries: Learn to craft complex queries to uncover deep insights from your data.
Data Trends and Patterns: Discover how to identify and interpret trends and patterns in your datasets.
Practical Examples: Follow step-by-step examples to apply SQL techniques in real-world scenarios.
Actionable Insights: Gain the skills to derive actionable insights that drive informed decision-making.
Join us on this journey to enhance your data analysis capabilities and unlock the full potential of SQL. Perfect for data enthusiasts, analysts, and anyone eager to harness the power of data!
#DataAnalysis #SQL #LearningSQL #DataInsights #DataScience #Analytics
The Ipsos - AI - Monitor 2024 Report.pdfSocial Samosa
According to Ipsos AI Monitor's 2024 report, 65% Indians said that products and services using AI have profoundly changed their daily life in the past 3-5 years.
Natural Language Processing (NLP), RAG and its applications .pptxfkyes25
1. In the realm of Natural Language Processing (NLP), knowledge-intensive tasks such as question answering, fact verification, and open-domain dialogue generation require the integration of vast and up-to-date information. Traditional neural models, though powerful, struggle with encoding all necessary knowledge within their parameters, leading to limitations in generalization and scalability. The paper "Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks" introduces RAG (Retrieval-Augmented Generation), a novel framework that synergizes retrieval mechanisms with generative models, enhancing performance by dynamically incorporating external knowledge during inference.
ViewShift: Hassle-free Dynamic Policy Enforcement for Every Data LakeWalaa Eldin Moustafa
Dynamic policy enforcement is becoming an increasingly important topic in today’s world where data privacy and compliance is a top priority for companies, individuals, and regulators alike. In these slides, we discuss how LinkedIn implements a powerful dynamic policy enforcement engine, called ViewShift, and integrates it within its data lake. We show the query engine architecture and how catalog implementations can automatically route table resolutions to compliance-enforcing SQL views. Such views have a set of very interesting properties: (1) They are auto-generated from declarative data annotations. (2) They respect user-level consent and preferences (3) They are context-aware, encoding a different set of transformations for different use cases (4) They are portable; while the SQL logic is only implemented in one SQL dialect, it is accessible in all engines.
#SQL #Views #Privacy #Compliance #DataLake
06-04-2024 - NYC Tech Week - Discussion on Vector Databases, Unstructured Data and AI
Discussion on Vector Databases, Unstructured Data and AI
https://www.meetup.com/unstructured-data-meetup-new-york/
This meetup is for people working in unstructured data. Speakers will come present about related topics such as vector databases, LLMs, and managing data at scale. The intended audience of this group includes roles like machine learning engineers, data scientists, data engineers, software engineers, and PMs.This meetup was formerly Milvus Meetup, and is sponsored by Zilliz maintainers of Milvus.
Analysis insight about a Flyball dog competition team's performanceroli9797
Insight of my analysis about a Flyball dog competition team's last year performance. Find more: https://github.com/rolandnagy-ds/flyball_race_analysis/tree/main
The Building Blocks of QuestDB, a Time Series Databasejavier ramirez
Talk Delivered at Valencia Codes Meetup 2024-06.
Traditionally, databases have treated timestamps just as another data type. However, when performing real-time analytics, timestamps should be first class citizens and we need rich time semantics to get the most out of our data. We also need to deal with ever growing datasets while keeping performant, which is as fun as it sounds.
It is no wonder time-series databases are now more popular than ever before. Join me in this session to learn about the internal architecture and building blocks of QuestDB, an open source time-series database designed for speed. We will also review a history of some of the changes we have gone over the past two years to deal with late and unordered data, non-blocking writes, read-replicas, or faster batch ingestion.
Contents
Overview of Vertica on Amazon Web Services (AWS) 6
Supported Instance Types 7
Understanding the AWS Procedure Components 8
Creating Evaluation or Development Instances 12
Elastic Load Balancing 13
Enhanced Networking 14
Packages 15
Installing and Running Vertica on AWS 17
Configuring and Launching an Instance 18
Creating a Placement Group 20
Creating a Key Pair 21
Creating a Virtual Private Cloud (VPC) 22
Network ACL Settings 23
Creating and Assigning an Internet Gateway 24
Creating a Security Group 25
Security Group Settings 26
Inbound 26
Outbound 27
Adding Rules to a Security Group 28
HPE Vertica Analytic Database (7.2.x) Page 3 of 65
Assigning an Elastic IP 31
Connecting to an Instance 32
Connecting to an Instance from Windows Using Putty 32
Preparing Instances 33
Configuring Storage 34
Determining Volume Names 34
Combining Volumes for Storage 34
Forming a Cluster 36
Combining Instances 36
Considerations When Using the install_vertica or update_vertica Scripts 36
After Your Cluster Is Up and Running 38
Initial Installation and Configuration 39
Using Management Console (MC) on AWS 40
Adding Nodes to a Running AWS Cluster 43
Launching New Instances to Add to an Existing Cluster 44
Including New Instances as Cluster Nodes 45
Adding Nodes and Rebalancing the Database 46
Removing Nodes From a Running AWS Cluster 47
Preparing to Remove a Node 48
Removing Hosts From the Database 49
Removing Nodes From the Cluster 50
Stopping the AWS Instances (Optional) 51
Migrating Data Between AWS Clusters 52
Migrating to Vertica 7.0 or later on AWS 55
Upgrading to the version 7.0 Vertica AMI on AWS 57
Preparing to Upgrade Your AMI 58
Upgrading Vertica Running on AWS 59
Troubleshooting: Checking Open Ports Manually 61
Using the Netcat (nc) Utility 62
Quick Start to Setting Up Vertica AWS 63
Send Documentation Feedback 65
Overview of Vertica on Amazon Web Services (AWS)
This document describes the steps to create a Vertica cluster on AWS. Running a
Vertica cluster on AWS requires launching instances from an Amazon Machine Image
(AMI). The instructions in this document apply to AMIs built with Vertica Version 7.2.x.
The Vertica Community Edition is installed on the AMI. Community Edition is limited to
three nodes and up to 1 TB of data. Each AMI includes a Community Edition license.
Once Vertica is installed, you can find the license at this location:
/opt/vertica/config/licensing/vertica_community_edition.license.key
Most of the remainder of this document describes the details of how to prepare your
AWS environment, launch AMI instances, and combine instances to create a cluster. To
set up your Vertica cluster on AWS, follow the detailed directions that follow, or use the
summarized set of tasks in Quick Start to Setting Up Vertica AWS.
Supported Instance Types
Vertica supports a range of Amazon Web Services (AWS) instance types, each
optimized for different purposes. For more information about Amazon cluster instances
and their limitations, see the Amazon documentation.
Sizing an instance involves estimating the hardware requirements for optimal Vertica
performance in a typical scenario. Hewlett Packard recommends using its AMI with the
instance type optimized for your requirements, as follows:
Optimization    Instance Types           Description
Compute         c3.4xlarge, c3.8xlarge   Compute-optimized instances employ high-performance CPUs for calculation-intensive applications.
Memory          r3.4xlarge, r3.8xlarge   Memory-optimized instances are configured for memory-intensive applications such as in-memory analytics and enterprise-level collaborative applications.
Storage         i2.4xlarge, i2.8xlarge   Storage-optimized instances use solid-state drives (SSD) for rapid read and write performance.
Dense-storage   d2.4xlarge, d2.8xlarge   Dense-storage instances use hard disk drives (HDD) for maximum storage capacity at a low cost.
Note: Data stored on the ephemeral drives of a storage-optimized instance exists
only while that instance is powered on. After powering off a storage-optimized
system, data on ephemeral drives is lost.
Understanding the AWS Procedure Components

The following describes the AWS components that you must configure. These are
preliminary steps in creating a Vertica cluster on AWS. You can skip this section
and go directly to Installing and Running Vertica on AWS.
1. Placement Group (sample: GroupVerticaP)
   You supply the name of a Placement Group when you create instances. You use a
   Placement Group to group instances together. A Placement Group for a cluster
   resides in one availability zone; your Placement Group cannot span zones. A
   Placement Group includes instances of the same type; you cannot mix types of
   instances in a Placement Group. You can choose one of two regions for a
   Placement Group; see Creating a Placement Group for information.

2. Key Pair (sample: MyKey; MyKey.pem would be the resultant file)
   You need a Key Pair to access your instances using SSH. You create the Key
   Pair through the AWS interface, and you store a copy of your key (*.pem) file
   on your local machine. When you access an instance, you need to know the local
   path of your key, and you copy the key to your instance before you can run the
   install_vertica script.

3. VPC (sample: vpc-d6c18dbd; Amazon assigns this name automatically)
   You create a Virtual Private Cloud (VPC) on Amazon so that you can create a
   network of your EC2 instances. All instances within the VPC share the same
   network and security settings. A VPC uses the CIDR format for specifying a
   range of IP addresses.

4. Internet Gateway (sample: igw-d7c18dbc; Amazon assigns and attaches an
   internet gateway automatically)
   An internet gateway allows instances to access the internet. Typically, a
   gateway is automatically assigned when you create a VPC. You can also create
   your own named internet gateway, and then attach that gateway to your VPC.

5. Security Group (sample: "default"; Amazon assigns a security group named
   "default" automatically, but you can create and name your own security group)
   The security group includes firewall settings; its rules specify how traffic
   can get in and out of your instances. When you launch your instances, you
   choose a security group. If you use the default security group, you must add
   the Vertica-recommended rules to the group as you would if you created the
   group yourself.

6. Instances (samples: 10.0.3.157, 10.0.3.158, 10.0.3.159, 10.0.3.160; Amazon
   assigns IP addresses. You need to list the addresses when you run the
   install_vertica script to form a cluster of the instances.)
   An instance is a running version of the AMI. You first choose an AMI and
   specify the number of instances. You then launch those instances. Once the
   instances are running, you can access one or more of them through SSH.
   Notes:
   - Once IPs are assigned to your instances, they must not change in order for
     Vertica to continue working. The IP addresses are crucial once you have run
     the install_vertica script; if you change them or they become reassigned,
     your cluster breaks.
   - When you stop one or more instances, your My Instances page may show blanks
     for the IP fields. However, by default Amazon retains the IPs, and they
     should show up again in short order.

7. Elastic IP (sample: 107.23.104.78)
   You associate an elastic IP with one of your instances so you can communicate
   with your cluster. An elastic IP is a static IP address that stays connected
   to your account until you explicitly release it.

8. Connecting
   Once you have completed all of the procedures for setting up your cluster,
   you can connect to the instance attached to your elastic IP. You can use SSH.
   If connecting from a Windows machine, you can, for example, use PuTTY.

9. install_vertica script
   Your instances are already grouped through Amazon and include the Vertica
   software, but you must then run the install_vertica script to combine
   instances through their IP addresses so that Vertica knows the instances are
   all part of the same cluster.
Creating Evaluation or Development Instances
You can create a very simple Vertica instance that you can use for testing, development,
or evaluation. Although you can create multiple one-click instances, these instances do
not share a placement group, do not have mounted volumes, and cannot form a cluster.
If you want to create an enterprise or cluster deployment of Vertica, launch the AMI from
the EC2 console by selecting the Manual Launch tab.
The 1-Click AMI is available from the Amazon Marketplace at
https://aws.amazon.com/marketplace/library?ref_=gtw_navgno_library
1. Navigate to the AWS Console, and select EC2.
2. From the EC2 Console Dashboard, click Launch Instance.
3. Select the Vertica AMI.
a. Select the AWS Marketplace tab and enter "HP/Vertica" in the search field.
b. Choose your Vertica AMI. Click the Select button next to the AMI.
4. Click Launch with 1-Click. The AMI launches.
Elastic Load Balancing
You can use Elastic Load Balancing (ELB) for queries that run for up to one hour.
When enabling ELB, ensure that the idle timeout is configured to 3600 seconds.
For information about ELB, refer to Amazon documentation.
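The 3600-second timer corresponds to the load balancer's idle timeout attribute. A possible AWS CLI equivalent is sketched below; the load balancer name is a placeholder, and the `run` shim prints each command instead of executing it (replace it with `run() { "$@"; }` to execute for real):

```shell
# Dry-run sketch: print the AWS CLI call that would set a classic ELB idle
# timeout to 3600 seconds. LB_NAME is a hypothetical load balancer name.
run() { echo "$@"; }
LB_NAME="my-vertica-elb"
run aws elb modify-load-balancer-attributes \
    --load-balancer-name "$LB_NAME" \
    --load-balancer-attributes '{"ConnectionSettings":{"IdleTimeout":3600}}'
```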
Enhanced Networking
Vertica AMIs support the AWS enhanced networking feature. See Enabling Enhanced
Networking on Linux Instances in a VPC in the AWS documentation for details.
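As a quick check, the NIC driver reported on an instance indicates whether enhanced networking (SR-IOV via the ixgbevf driver) is active. The helper below parses standard `ethtool -i` output; the sample input is shown only so the sketch runs anywhere:

```shell
# Sketch: report the NIC driver name; "ixgbevf" indicates enhanced networking.
nic_driver() {
    # Parse "driver: <name>" from ethtool-style output fed on stdin.
    awk -F': ' '/^driver/ {print $2}'
}
# On a live instance you would run:  ethtool -i eth0 | nic_driver
printf 'driver: ixgbevf\nversion: 2.11.3\n' | nic_driver
```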
Packages
Vertica AMIs come with the following packages pre-installed:
- Vertica Place
- Vertica Pulse
Installing and Running Vertica on AWS
Use these procedures to install and run Vertica on AWS. Note that this document
mentions basic requirements only, by way of a simple example. Refer to the AWS
documentation for more information on each individual parameter.
For a high-level overview of the installation process, refer to Appendix: Installing
and Running Vertica on AWS - QuickStart.
Configuring and Launching an Instance
Perform the following steps to configure and launch the instances that will become your
cluster in a later procedure. A basic Elastic Compute Cloud (EC2) instance (without a
Vertica AMI) is similar to a traditional host. When you create an EC2 instance using a
Vertica AMI, the instance includes the Vertica software and a standard recommended
configuration to ease creation of your cluster. Vertica recommends that you use the
Vertica AMI as is – without modification. The Vertica AMI acts as a template, requiring
fewer configuration steps.
1. Navigate to the AWS Console, and select EC2.
2. From the EC2 Console Dashboard, click Launch Instance.
3. Select the Vertica AMI.
a. Select the AWS Marketplace tab and enter "HP Vertica" in the search field.
b. Choose your Vertica AMI. Click the Select button next to the AMI.
4. Click Continue. The launch page opens.
5. Click Manual Launch, select a region, and click Launch with EC2 Console. The
Choose Instance Type page opens.
6. Select a supported instance type and click Next: Configure Instance Details.
7. Click Next: Configure Instance Details.
a. Choose the number of instances you want to launch. A Vertica cluster uses
identically configured instances of the same type. You cannot mix instance
types.
b. From the Network drop-down, choose your VPC.
Note: Not all data centers support VPC. If you receive an error message that
states "VPC is not currently supported…", choose a different region and zone
(for example, choose us-east-1e rather than us-east-1c).
c. From the Placement group drop-down, choose a placement group.
Alternatively, you can Create a Placement Group.
8. Click Next: Add Storage. The Add Storage page opens.
9. Add storage to your instances based on your needs.
Note:
   - Vertica recommends that you add a number of drives equal to the number of
     cores in your instance. For example, for a c3.8xlarge, add eight drives. For
     an r3.4xlarge, add four drives.
   - Vertica does not recommend that you store data on the root drive.
   - For optimal performance with EBS volumes, Amazon recommends that you
     configure them in a RAID 0 array on each node in your cluster.
10. Click Next: Tag Instance. The Tag Instance page opens.
11. Create and add a key value pair if needed and click Next: Configure Security
Group. The Configure Security Group page opens.
12. Create or select a security group and click Review and Launch. The Review
Instance Launch page opens.
13. Review your instance details and click Launch. A key pair dialog box opens.
14. Choose your key pair and acknowledge that you have access to the private key
file.
15. Click Launch Instances. You receive a message saying that your instances are
launching.
You can click View Instances from the Launch Status page. Check that your instances
are running (show green as their state).
Note: You can stop an instance by right-clicking and choosing Stop. Use Stop
rather than Terminate; the Terminate command deletes your instance.
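The console steps above can also be expressed with the AWS CLI. This is a dry-run sketch, not the document's procedure: the AMI and subnet IDs are placeholders, the sample names come from this document, and the `run` shim prints each command instead of executing it:

```shell
# Dry-run sketch: launch four identical instances of one supported type into a
# placement group and subnet. Replace the shim with run() { "$@"; } to execute.
run() { echo "$@"; }
AMI_ID="ami-xxxxxxxx"        # placeholder for the Vertica AMI you selected
SUBNET_ID="subnet-xxxxxxxx"  # placeholder subnet in your VPC
run aws ec2 run-instances \
    --image-id "$AMI_ID" \
    --count 4 \
    --instance-type c3.4xlarge \
    --key-name MyKey \
    --placement GroupName=GroupVerticaP \
    --subnet-id "$SUBNET_ID"
```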
Creating a Placement Group
Perform the following steps to create a Placement Group. A Placement Group ensures
that your nodes are properly co-located.
1. Log in to your Amazon EC2 Console Dashboard.
2. Select Placement Groups. The Placement Group screen appears, listing your
existing placement groups.
3. Click Create Placement Group.
4. Name your placement group.
5. Click Create. Your group is created.
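A possible CLI equivalent of these steps, using the sample group name from this document; the `run` shim prints the command rather than executing it:

```shell
# Dry-run sketch: a placement group for a Vertica cluster uses the "cluster"
# strategy so that instances are co-located in one availability zone.
run() { echo "$@"; }
run aws ec2 create-placement-group \
    --group-name GroupVerticaP \
    --strategy cluster
```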
Creating a Key Pair
Perform the following steps to create a Key Pair.
1. Select Key Pairs from the Navigation panel.
Note: Depending upon which browser you are using, you may have to turn off
the pop-up blocker in order to download the Key Pair.
2. Click Create Key Pair.
3. Name your Key Pair.
4. Click Yes.
The system displays a message letting you know that the Key Pair has been
created.
5. Save your key pair. Ensure that you keep the *.pem file; you need it to log on
to your instances.
Note that your Key Pair name now appears on the Key Pair list.
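The same key pair can be created from the CLI. This is a dry-run sketch (the shim prints commands); the private-key output would be redirected to MyKey.pem, which then needs the permissions SSH requires:

```shell
# Dry-run sketch: create a key pair named MyKey and save the private key.
run() { echo "$@"; }
run aws ec2 create-key-pair --key-name MyKey \
    --query 'KeyMaterial' --output text   # redirect this output to MyKey.pem
run chmod 600 MyKey.pem                   # SSH refuses keys readable by others
```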
Creating a Virtual Private Cloud (VPC)
Perform the following steps to create a VPC.
1. From the AWS Management Console Home, navigate to the VPC console by
selecting VPC.
Note: If the VPC Console Dashboard shows that VPCs already exist, you can
select Your VPCs to note the names of the existing VPCs. As the VPC IDs are
very similar, noting the names of the existing VPCs helps you later in identifying
the new one you are about to create.
2. Click Start VPC Wizard.
3. From the wizard that displays, select VPC with a Single Public Subnet Only.
4. Change the Public Subnet as desired. Vertica recommends that you secure your
network with an Access Control List (ACL) that is appropriate to your situation. The
default ACL does not provide a high level of security.
5. Choose an Availability Zone.
Note: A Vertica cluster is operated within a single availability zone.
6. Click Create VPC. Amazon displays a message noting success.
7. Choose Your VPCs from the navigation pane, select your new VPC, and ensure
that both Enable DNS resolution and Enable DNS hostname support for instances
launched in this VPC are checked.
8. Click Close. Your virtual private cloud is created.
9. Add the required network inbound and outbound rules to the VPC.
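What the wizard produces (a VPC, one public subnet, and the DNS attributes from step 7) can be sketched with the AWS CLI. CIDR blocks and the VPC ID are placeholders, and the `run` shim prints commands instead of executing them:

```shell
# Dry-run sketch of the VPC wizard's effect. Each modify-vpc-attribute call
# sets one attribute, so DNS support and DNS hostnames are two calls.
run() { echo "$@"; }
VPC_ID="vpc-xxxxxxxx"   # placeholder; returned by create-vpc in a real run
run aws ec2 create-vpc --cidr-block 10.0.0.0/16
run aws ec2 create-subnet --vpc-id "$VPC_ID" --cidr-block 10.0.0.0/24
run aws ec2 modify-vpc-attribute --vpc-id "$VPC_ID" --enable-dns-support '{"Value":true}'
run aws ec2 modify-vpc-attribute --vpc-id "$VPC_ID" --enable-dns-hostnames '{"Value":true}'
```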
Network ACL Settings
Vertica requires the following network access control list (ACL) settings on an AWS
instance running the Vertica AMI.
For detailed information on network ACLs within AWS, refer to Amazon's
documentation.
Inbound Rules
Type Protocol Port Range Source Allow/Deny
SSH (22) TCP (6) 22 0.0.0.0/0 Allow
Custom TCP Rule TCP (6) 5450 0.0.0.0/0 Allow
Custom TCP Rule TCP (6) 5433 0.0.0.0/0 Allow
Custom TCP Rule TCP (6) 1024-65535 0.0.0.0/0 Allow
ALL Traffic ALL ALL 0.0.0.0/0 Deny
Outbound Rules
Type Protocol Port Range Destination Allow/Deny
Custom TCP Rule TCP (6) 0 - 65535 0.0.0.0/0 Allow
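The inbound TCP allow rules above can be created from the CLI as well. This dry-run sketch loops over the four port ranges from the table; the network ACL ID is a placeholder, and the `run` shim prints each command instead of executing it:

```shell
# Dry-run sketch: create the inbound TCP allow entries from the table above.
# Rule numbers are spaced by 10, a common convention for later insertions.
run() { echo "$@"; }
rule=100
for ports in 22,22 5450,5450 5433,5433 1024,65535; do
    from=${ports%,*}; to=${ports#*,}
    run aws ec2 create-network-acl-entry --network-acl-id acl-xxxxxxxx \
        --ingress --rule-number $rule --protocol tcp \
        --port-range From=$from,To=$to \
        --cidr-block 0.0.0.0/0 --rule-action allow
    rule=$((rule + 10))
done
```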
Creating and Assigning an Internet Gateway
When you create a VPC, an internet gateway is automatically assigned to the VPC. You
can use that gateway, or you can assign your own. To create and assign your own,
perform the following steps. If using the default, continue with the next procedure,
Creating a Security Group.
1. From the navigation pane, choose Internet Gateways.
2. Click Create Internet gateway.
3. Choose Yes, Create.
The Internet Gateway screen appears. Note that the new gateway is not assigned to
a VPC.
4. Click Attach to VPC.
5. Choose your VPC.
6. Click Yes, Attach.
Note that your gateway is now attached to your VPC.
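A possible CLI equivalent, using the sample gateway and VPC IDs from this document; the `run` shim prints the commands rather than executing them:

```shell
# Dry-run sketch: create an internet gateway, then attach it to your VPC.
run() { echo "$@"; }
run aws ec2 create-internet-gateway
run aws ec2 attach-internet-gateway \
    --internet-gateway-id igw-d7c18dbc --vpc-id vpc-d6c18dbd
```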
Creating a Security Group
When you create a Virtual Private Cloud (VPC), AWS automatically creates a default
security group and assigns it to the VPC. You can use that security group, or you can name
and assign your own. To create and assign your own, perform the following steps. If
using the default, continue with the next procedure, Adding Rules to a Security Group.
Note that you must add the Vertica rules as described in the next section. The Vertica
AMI has specific security group requirements.
To create and name your own security group, perform the following steps.
1. From the Navigation pane, select Security Groups.
2. Click Create Security Group.
3. Enter a name for the group and provide a description.
4. Select a VPC to enable communication between nodes.
5. Click Create. The security group is created.
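The same creation step, sketched with the CLI: the group name is hypothetical, the VPC ID is the sample from this document, and the `run` shim prints instead of executing:

```shell
# Dry-run sketch: create a named security group in your VPC.
run() { echo "$@"; }
run aws ec2 create-security-group \
    --group-name vertica-sg \
    --description "Vertica cluster security group" \
    --vpc-id vpc-d6c18dbd
```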
Security Group Settings
Vertica requires the following security group settings on an AWS instance running the
Vertica AMI.
For detailed information on security groups within AWS, refer to Amazon's
documentation.
Inbound
Type Protocol Port Range Source IP
SSH TCP 22 My IP 0.0.0.0/0
HTTP TCP 80 My IP 0.0.0.0/0
HTTPS TCP 443 My IP 0.0.0.0/0
DNS (UDP) UDP 53 My IP 0.0.0.0/0
Custom UDP UDP 4803-4805 My IP 0.0.0.0/0
Custom TCP TCP 4803-4805 My IP 0.0.0.0/0
Custom TCP TCP 5433 My IP 0.0.0.0/0
Custom TCP TCP 5434 My IP 0.0.0.0/0
Custom TCP TCP 5444 My IP 0.0.0.0/0
Custom TCP TCP 5450 My IP 0.0.0.0/0
Custom TCP TCP 8080 My IP 0.0.0.0/0
Custom TCP TCP 48073 My IP 0.0.0.0/0
Custom TCP TCP 50000 My IP 0.0.0.0/0
ICMP Echo Reply N/A My IP 0.0.0.0/0
ICMP Traceroute N/A My IP 0.0.0.0/0
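The inbound rules in the table above can also be authorized from the CLI. This dry-run sketch covers the single-port TCP rules and the 4803-4805 ranges; the security group ID is a placeholder, the source is kept as 0.0.0.0/0 per the table, and the `run` shim prints each command:

```shell
# Dry-run sketch: authorize the inbound rules from the table above.
run() { echo "$@"; }
for port in 22 80 443 5433 5434 5444 5450 8080 48073 50000; do
    run aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx \
        --protocol tcp --port $port --cidr 0.0.0.0/0
done
# The 4803-4805 range is needed over both TCP and UDP:
for proto in tcp udp; do
    run aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx \
        --protocol $proto --port 4803-4805 --cidr 0.0.0.0/0
done
```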
Outbound
Type Protocol Port Range Destination IP
All TCP TCP 0-65535 Anywhere 0.0.0.0/0
All ICMP ICMP 0-65535 Anywhere 0.0.0.0/0
All UDP UDP 0-65535 Anywhere 0.0.0.0/0
Adding Rules to a Security Group
Perform the following steps to add rules to the security group you plan to use (whether
you plan to use the default group or have created your own). This section includes
procedures for adding rules from the Inbound tab and adding rules from the Outbound
tab. You perform both procedures to establish your security group.
Perform the following to add rules from the Inbound tab:
1. Ensure that you have checked the box for the security group you just created, and
that the Inbound tab is selected.
2. Add the HTTP rule.
a. Ensure that the Inbound tab is selected.
b. From the Create a new rule dropdown, choose HTTP.
c. Click Add Rule. The HTTP rule is added to your security group.
3. Add the Echo Reply rule.
a. From the Create a new rule dropdown, select Custom ICMP rule.
b. From the Type dropdown, select Echo Reply.
c. Click Add Rule. The Echo Reply rule is added to your security group.
4. Add the Traceroute rule.
a. Choose Custom ICMP rule once again.
b. Select Traceroute.
c. Click Add Rule. The Traceroute rule is added to your security group.
5. Add the SSH rule.
a. From the Create a new rule dropdown, choose SSH.
b. Click Add Rule. The SSH rule is added to your security group.
6. Add the HTTPS rule.
a. From the Create a new rule dropdown, choose HTTPS.
b. Click Add Rule. The HTTPS rule is added to your security group.
7. Add a port range to your security group.
a. From the Create a new rule dropdown, select Custom TCP Rule.
b. Under Port range, enter 4803-4805.
c. Click Add Rule. The port range is added to your security group.
d. From the Create a new rule dropdown, select Custom UDP Rule.
e. Under Port range, enter 4803-4805.
f. Click Add Rule. The port range is added to your security group.
8. Add individual ports.
a. Also under Custom TCP rule, enter the following ports under Port Range, one
by one: 5433, 5434, 5444, and 5450. You enter the ports one by one to ensure
the port assignments are sequential. Vertica uses these ports for internode
communication.
b. Click Add Rule as you enter each number. Each port is added to your security
group.
9. Click Apply Rule Changes.
Note: You must click Apply Rule Changes or your rules will not be applied to
your security group. With the Inbound tab selected, your security group screen
should look similar to the following.
Perform the following to add rules from the Outbound tab.
Note: You want to ensure that all outbound traffic is allowed.
1. Select the Outbound tab.
a. Choose All TCP rule from the Create a new rule dropdown.
b. Click Add Rule. The All TCP rule is added to your security group.
2. Add the All ICMP rule.
a. Choose All ICMP rule from the Create a new rule dropdown.
b. Click Add Rule. The All ICMP rule is added to your security group.
3. Add the ALL UDP rule.
a. Choose ALL UDP rule from the Create a new rule dropdown.
b. Click Add Rule. The ALL UDP rule is added to your security group.
4. Click Apply Rule Changes.
Note: You must click Apply Rule Changes or your rules will not be applied to
your security group. With the Outbound tab selected, your screen should look
similar to the following.
Assigning an Elastic IP
The elastic IP is an IP address that you attach to an instance; you communicate with
your cluster through the instance that is attached to this IP address. An elastic IP is a
static IP address that stays connected to your account until you explicitly release it.
Note the following on IP addresses:
- You can stop your instances, but you must ensure that the IP addresses assigned
  are not released.
- By default, Amazon keeps your IP addresses assigned until you release them.
1. From the Navigation menu, select Elastic IPs.
2. Click Allocate New Address.
3. On the Allocate New Address screen, choose VPC from the dropdown.
4. Click Yes, Allocate. The elastic IP is created and appears in the list of available
addresses.
5. Select the address that you want to assign and click Associate Address.
6. Choose one of the instances you created.
7. Click Associate. Your instance is associated with your elastic IP.
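The allocate-and-associate steps above have a CLI equivalent, sketched here with placeholder instance and allocation IDs; the `run` shim prints the commands instead of executing them:

```shell
# Dry-run sketch: allocate a VPC elastic IP, then associate it with one of
# your instances. In a real run, allocate-address returns the allocation ID.
run() { echo "$@"; }
run aws ec2 allocate-address --domain vpc
run aws ec2 associate-address \
    --instance-id i-xxxxxxxx --allocation-id eipalloc-xxxxxxxx
```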
Connecting to an Instance
Perform the following procedure to connect to an instance within your VPC.
1. As the dbadmin user, type the following command, substituting your ssh key:
$ ssh -i <ssh key> dbadmin@ipaddress
2. Select Instances from the Navigation panel.
3. Select the instance that is attached to the Elastic IP.
4. Click Connect.
5. On Connect to Your Instance, choose one of the following options:
- A Java SSH Client directly from my browser—Add the path to your private key
  in the field Private key path, and click Launch SSH Client.
- Connect with a standalone SSH client—Follow the steps required by your
  standalone SSH client.
Connecting to an Instance from Windows
Using Putty
If you connect to the instance from the Windows operating system and plan to use
PuTTY:
1. Convert your key file using PuTTYgen.
2. Connect with PuTTY or WinSCP (connect via the elastic IP), using your converted
key (that is, the *.ppk file).
3. Move your key file (the *.pem file) to the root directory using PuTTY or WinSCP.
Preparing Instances
After you create your instances, you need to prepare them for cluster formation. Prepare
your instances by adding your AWS .pem key and your Vertica license.
1. As the root user, copy your *.pem file (from where you saved it locally) onto your
primary instance.
Depending upon the procedure you use to copy the file, the permissions on the file
may change. If permissions change, the install_vertica script fails with a
message similar to the following:
FATAL (19): Failed Login Validation 10.0.3.158, cannot resolve or
connect to host as root.
If you receive a failure message, enter the following command to correct
permissions on your *.pem file:
chmod 600 /<name-of-pem>.pem
2. Copy your Vertica license over to your primary instance, and also place it in /root.
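The permissions fix from step 1 can be checked on a scratch copy before you touch the real key. The file name below is hypothetical; substitute your own *.pem file:

```shell
# Simulate the permissions problem and fix on a scratch key file.
# "my-key.pem" is a hypothetical name, not from this document.
workdir=$(mktemp -d)
touch "$workdir/my-key.pem"
chmod 644 "$workdir/my-key.pem"      # permissions loosened during the copy
chmod 600 "$workdir/my-key.pem"      # restrict to owner read/write, as install_vertica requires
stat -c '%a' "$workdir/my-key.pem"   # prints 600
rm -rf "$workdir"
```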
Configuring Storage
As a best practice, use dedicated volumes for node storage.
Important: Hewlett Packard Enterprise recommends that you do not store your data
on the root drive. When configuring your storage, make sure to use a supported file
system.
For best performance, you can combine multiple EBS volumes into RAID 0. Vertica
provides a shell script that automates the storage configuration process.
Note: To take advantage of bursting, limit EBS volumes to 1 TB or less.
Determining Volume Names
Before you combine volumes for storage, make note of your volume names so that you
can alter the configure_aws_raid.sh shell script. You can find your volumes with the
following commands:
cd /dev
ls
Your volumes start with xvd.
Important: Ignore your root volume. Do not include any of your root volumes in the
RAID creation process.
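The root-volume rule can be illustrated with a self-contained filter. The device names here are sample values rather than output read from /dev, and the root device is assumed to be xvda:

```shell
# Sample device list; on a real node you would inspect /dev (cd /dev; ls).
devices="xvda xvdb xvdc xvdd"
for d in $devices; do
  case "$d" in
    xvda*) ;;                 # skip the root volume (assumed to be xvda here)
    *)     echo "/dev/$d" ;;  # a candidate for the RAID 0 set
  esac
done
# prints /dev/xvdb, /dev/xvdc, and /dev/xvdd, one per line
```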
Combining Volumes for Storage
Follow these sample steps to combine your EBS volumes into RAID 0 using the
configure_aws_raid.sh shell script.
1. Edit the /opt/vertica/sbin/configure_aws_raid.sh shell file as follows:
a. Comment out the safety exit command at the beginning of the file.
b. Change the sample volume names to your own volume names, which you noted
previously. Add more volumes, if necessary.
2. Run the /opt/vertica/sbin/configure_aws_raid.sh shell file. Running this
file creates a RAID 0 volume and mounts it to /vertica/data.
3. Change the owner of the newly created volume to dbadmin with chown.
4. Repeat steps 1-3 for each node on your cluster.
For more information about EBS storage, refer to the Amazon documentation.
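For reference, the storage configuration that configure_aws_raid.sh automates resembles the following sequence. This is a dry run that only echoes the commands; the volume names, RAID device, and dbadmin group name are assumptions for illustration, not values taken from the script:

```shell
# Dry run: echo a RAID 0 setup of the kind the script automates.
volumes="/dev/xvdb /dev/xvdc /dev/xvdd"   # substitute the volume names you noted earlier
echo mdadm --create /dev/md0 --level=0 --raid-devices=3 $volumes
echo mkfs.ext4 /dev/md0
echo mkdir -p /vertica/data
echo mount /dev/md0 /vertica/data
echo chown dbadmin:verticadba /vertica/data   # step 3: make dbadmin the owner
```

In practice, run the provided script rather than these commands; the sketch only shows what kind of work it performs.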
Forming a Cluster
Use the install_vertica script to combine two or more individual instances and
create a cluster.
Check the My Instances page for a list of current instances and their associated IP
addresses. You need these IP addresses when you run the install_vertica script.
Combining Instances
The following example combines instances using the install_vertica script.
Important: Before you run install_vertica, be sure to stop any running
databases on your nodes. The install_vertica script cannot complete
successfully if any databases are running.
While connected to your primary instance, enter the following command to combine your
instances into a cluster. Substitute the IP addresses for your instances and include your
root *.pem file name.

sudo /opt/vertica/sbin/install_vertica --hosts 10.0.11.164,10.0.11.165,10.0.11.166 \
  --dba-user-password-disabled --point-to-point --data-dir /vertica/data \
  --ssh-identity ~/<name-of-pem>.pem --license <license.file>
Note: If you are using Community Edition, which limits you to three instances, you
can simply specify -L CE with no license file.
When you issue install_vertica or update_vertica on an AMI, always use the
--point-to-point parameter. This parameter configures spread to use direct
point-to-point communication between all Vertica nodes, which is a requirement for
clusters on AWS. If you do not use the parameter, you receive an error telling you
that you must use point-to-point communication on AWS.
Considerations When Using the install_vertica or update_vertica Scripts
- By default, the installer assumes that you have mounted your storage to
/vertica/data. To specify another location, use the --data-dir argument.
Hewlett Packard Enterprise does not recommend that you store your data on the root
drive.
- Password logons present a security risk on AWS. Include the parameter
--dba-user-password-disabled so that the installer does not prompt for a password for
the database user.
For complete information on the install_vertica script and its parameters, see the
Installation Guide, specifically the section, About the install_vertica Script.
After Your Cluster Is Up and Running
You can stop or reboot instances using the Amazon AWS console, but you must stop
the database before doing so. Once your cluster is up and running, if you need to stop
or reboot:
1. Stop the database.
2. Stop or reboot one or more instances.
Caution: If you stop or reboot an instance (or the cluster) without shutting the database
down first, disk or database corruption could result. Shutting the database down first
ensures that Vertica is not in the process of writing to disk when you shut down. Refer to
the Vertica Administrator’s Guide for information on stopping a database.
Once your cluster is configured and running:
1. Create a database. When Vertica was installed, a Vertica database administrator,
dbadmin, was created. You can use this pre-created dbadmin user to create and start a
database. Refer to the Vertica Installation Guide for information on the dbadmin
administrator.
2. Configure a database. Refer to the Vertica Administrator’s Guide for information on
configuring a database.
3. Refer to the full documentation set for Vertica for other tasks.
Initial Installation and Configuration
Once you have created your cluster on AWS, you can log on to your nodes and perform
a Vertica installation.
1. Log on to your cluster using the following command:
$ ssh -i <ssh key> dbadmin@ipaddress
2. Run the Vertica installer. Be sure to include all of the nodes in your cluster.
3. Create a database.
Using Management Console (MC) on AWS
MC is a database management tool that provides a way for you to view and manage
aspects of your Vertica cluster. If you are running Vertica Release 6.1.2 or later, you can
install and run MC.
This release of MC on AWS includes restrictions.
- You cannot create a cluster on AWS using the MC. You cannot import a cluster into
AWS using the MC.
- You cannot monitor an AWS cluster using MC on a node that is outside of your AWS
cluster. You must install MC on an instance within the AWS cluster itself.
Note: Each version of Vertica Management Console (MC) is compatible only with
the matching version of the Vertica server. Version numbers must match to three
digits; for example, Vertica 6.1.2 server is supported with Vertica 6.1.2 MC only. This
is a general MC requirement and is not specific to MC on AWS.
What follows is a reading path for learning more about MC:
- For an overview of MC, where you can get it, and what you can do with it, refer to the
Concepts Guide, specifically, Management Console.
- For information on installing and configuring MC, refer to the Installation Guide,
specifically, Installing and Configuring Management Console (MC).
- For information on the differences between what you can do with MC versus what
you can do with the Administration Tools, refer to the Administration Guide,
specifically the section, Administration Tools and Management Console.
- For information on creating a database using MC, refer to the Getting Started Guide,
specifically the section, Create the Example Database Using Management Console.
Keep the following in mind concerning user accounts and the MC.
- When you first configure MC, during the configuration process you create an MC
superuser (a Linux account). Issuing a Factory Reset on the MC does not create a
new MC superuser, nor does it delete the existing MC superuser. When initializing
after a Factory Reset, you must log on using the original MC superuser account.
For information on setting MC to its original state (Factory Reset), and why you might
implement a Factory Reset, refer to the Administration Guide, specifically the section,
Resetting MC to Pre-configured State.
- Note that, once MC is configured, you can add users that are specific to MC. Users
created through the MC interface are MC specific. When you subsequently change a
password through the MC, you only change the password for the specific MC user.
Passwords external to MC (that is, system Linux users and Vertica database passwords)
remain unchanged.
For information on MC users, refer to the Administration Guide, specifically the
sections, Creating an MC User and MC configuration privileges.
Adding Nodes to a Running AWS Cluster
Use these procedures to add instances/nodes to an AWS cluster. The procedures
assume that you have an AWS cluster up and running and that you have most likely
completed each of the following tasks:
- Created a database.
- Defined a database schema.
- Loaded data.
- Run the Database Designer.
- Connected to your database.
Launching New Instances to Add to an Existing Cluster
Perform the procedure in Configuring and Launching an Instance to create new
instances that you then will add to your existing cluster. Be sure to choose the same
details you chose when you created the original instances (e.g., VPC and Placement
group).
Including New Instances as Cluster Nodes
The Instances page lists the instances and their associated IP addresses. You need
the IP addresses when you run the install_vertica script.
If you are configuring EBS volumes, be sure to configure the volumes on the node
before you add the node to your cluster.
To add the new instances as nodes to your existing cluster:
1. Connect to the instance that is assigned to the Elastic IP. See Connecting to an
Instance if you need more information.
2. Enter the following command to add the new instances as nodes to your cluster.
The following is an example. Substitute the IP addresses for your instances and
include your *.pem file name. Your instances are added to your existing cluster.

sudo /opt/vertica/sbin/install_vertica --add-hosts 10.0.11.166 \
  --dba-user-password-disabled --point-to-point --data-dir /vertica/data \
  --ssh-identity ~/<name-of-pem>.pem
Adding Nodes and Rebalancing the Database
Once you have added the new instances to your existing cluster, add them as nodes
to your database, and then rebalance the database.
Follow the procedure given in the Administration Guide, Adding Nodes to a Database.
Removing Nodes From a Running AWS Cluster
Use these procedures to remove instances/nodes from an AWS cluster.
Preparing to Remove a Node
Removing one or more nodes consists of the following general steps. The first two
steps, backing up the database and lowering its K-safety, are prerequisites for the
subsequent steps.
1. Back up the Database. See the section, Creating Full and Incremental Snapshots
(vbr) in the Administrator's Guide.
HPE recommends that you back up the database before performing this significant
operation because it entails creating new projections, deleting old projections, and
reloading data.
2. Lower the K-safety of your database if the cluster will not be large enough to support
its current level of K-safety after you remove nodes. See the section, Lowering the
K-safety Level to Allow for Node Removal in the Administrator's Guide.
Note: You cannot remove nodes if your cluster would not have the minimum
number of nodes required to maintain your database's current K-safety level (3
nodes for a database with a K-safety level of 1, and 5 nodes for a K-safety level
of 2). To remove the node or nodes from the database, you first must reduce the
K-safety level of your database.
3. Remove the hosts from the database.
4. Remove the nodes from the cluster if they are not used by any other databases.
5. Optionally, stop the instances within AWS that are no longer included in the cluster.
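The minimum cluster sizes cited in step 2 follow from the rule that a database with K-safety k requires at least 2k+1 nodes:

```shell
# Minimum node counts for each K-safety level (2k+1).
for k in 0 1 2; do
  echo "K-safety $k requires at least $((2 * k + 1)) nodes"
done
# prints minimums of 1, 3, and 5 nodes for K-safety 0, 1, and 2
```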
Removing Hosts From the Database
Before performing the procedure in this section, you must have completed the tasks
referenced in Preparing to Remove a Node. The following procedure assumes that you
have both backed up your database and lowered its K-safety.
Note: Do not stop the database.
Perform the following to remove a host from the database.
1. While logged on as dbadmin, launch Administration Tools.
$ /opt/vertica/bin/admintools
Note: Do not remove the host that is attached to your EIP.
2. From the Main Menu, select Advanced Tools Menu.
3. From Advanced Menu, select Cluster Management. Select OK.
4. From Cluster Management, select Remove Host(s). Select OK.
5. From Select Database, choose the database from which you plan to remove hosts.
Select OK.
6. Select the host(s) to remove. Select OK.
7. Click Yes to confirm removal of the hosts.
Note: Enter a password if necessary. Leave blank if there is no password.
8. Select OK. The system displays a message letting you know that the hosts have
been removed. Automatic re-balancing also occurs.
9. Select OK to confirm. Administration Tools brings you back to the Cluster
Management menu.
Removing Nodes From the Cluster
To remove nodes from a cluster, run the install_vertica script, specifying the IP
addresses of the nodes you are removing and the location and name of your *.pem file.
(The following example removes only one node from the cluster.)

sudo /opt/vertica/sbin/install_vertica --remove-hosts 10.0.11.165 \
  --point-to-point --ssh-identity ~/<name-of-pem>.pem --dba-user-password-disabled
Stopping the AWS Instances (Optional)
Once you have removed one or more nodes from your cluster, to save costs associated
with running instances, you can choose to stop or terminate the AWS instances that
were previously part of your cluster. This step is optional because, once you have
removed the node from your Vertica cluster, Vertica no longer sees the instance/node as
part of the cluster even though it is still running within AWS.
To stop an instance in AWS:
1. On AWS, navigate to your Instances page.
2. Right-click on the instance, and choose Stop.
Migrating Data Between AWS Clusters
This section provides guidance for copying (importing) data from another AWS cluster,
or exporting data between AWS clusters.
There are three common issues that occur when exporting or copying on AWS clusters.
The issues are listed below. Except for these specific issues as they relate to AWS,
copying and exporting data works as documented in the Administrator's Guide section,
Copying and Exporting Data.
Issue 1. Ensure that all nodes in source and destination clusters have their own
elastic IPs (or public IPs) assigned.
Each node in one cluster must be able to communicate with each node in the other
cluster. Thus, each source and destination node needs an elastic IP (or public IP)
assigned.
Issue 2. Set the parameter DontCheckNetworkAddress to true.
On AWS, when creating a network interface, you receive an error if you attempt to
assign the elastic IP to an AWS node (example uses a sample elastic IP address):
dbadmin=> CREATE NETWORK INTERFACE eipinterface ON v_tpch_node0001
with '107.23.151.10';
ERROR 4125: No valid address found for [107.23.151.10] on this
node
This error occurs because the elastic IP is the public IP and not the private IP of the
target node. To resolve this issue, first set the parameter DontCheckNetworkAddress
to true:
select set_config_parameter('DontCheckNetworkAddress','1');
You can find information on the CREATE NETWORK INTERFACE statement and
SET_CONFIG_PARAMETER in the SQL Reference Manual.
Issue 3. Ensure your security group allows the AWS clusters to communicate.
Check your security groups for both your source and destination AWS clusters. Ensure
that ports 5433 and 5434 are open.
If one of your AWS clusters is on a separate VPC, ensure that your network access
control list (ACL) allows communication on port 5434.
Note: This communication method exports and copies (imports) data through the
internet. You can alternatively use non-public IPs and gateways, or a VPN, to connect
the source and destination clusters.
Migrating to Vertica 7.0 or later on AWS
Note: If you had a Vertica installation running on AWS prior to Release 6.1.x, you
can migrate to Vertica 7.2.x or later using a new preconfigured AMI.
For more information, see the Solutions tab of the myVertica portal.
Upgrading to the version 7.0 Vertica AMI on AWS
Use these procedures to upgrade to the latest Vertica AMI. The procedures assume
that you have a 6.1.x or later cluster successfully configured and running on AWS. If you
are setting up a Vertica cluster on AWS for the first time, follow the detailed procedure
for installing and running Vertica on AWS.
Note: Both install_vertica and update_vertica use the same parameters.
Preparing to Upgrade Your AMI
Perform this procedure to prepare for the upgrade to the latest Vertica AMI.
1. Back up your existing database. See Backing Up and Restoring the Database in the
Administrator's Guide.
2. Download the Vertica install package. See Download and Install the Vertica Install
Package in the Installation Guide.
Upgrading Vertica Running on AWS
Vertica supports upgrades of Vertica Server running on AWS instances created from the
Vertica AMI. To upgrade Vertica, follow the instructions provided in the Vertica upgrade
documentation.
Troubleshooting: Checking Open Ports Manually
You originally configured your security group through the AWS interface. Once your
cluster is up and running, you can check ports manually through the command line
using the netcat (nc) utility. What follows is an example using the utility to check ports.
Before performing the procedure, choose the private IP addresses of two nodes in your
cluster.
The examples given below use nodes with the private IPs:
10.0.11.60 and 10.0.11.61
Using the Netcat (nc) Utility
After installing the nc utility on your nodes, you can issue commands to check the ports
on one node from another node.
1. To check a TCP port:
a. Put one node in listen mode and specify the port. In the following sample, we’re
putting IP 10.0.11.60 into listen mode for port 4804.
[root@ip-10-0-11-60 ~]# nc -l 4804
b. From the other node, run nc specifying the IP address of the node you just put in
listen mode, and the same port number.
[root@ip-10-0-11-61 ~]# nc 10.0.11.60 4804
c. Enter sample text from either node and it should show up on the other. To cancel
once you have checked a port, enter Ctrl+C.
Note: To check a UDP port, use the same nc commands with the -u option.
[root@ip-10-0-11-60 ~]# nc -u -l 4804
[root@ip-10-0-11-61 ~]# nc -u 10.0.11.60 4804
Quick Start to Setting Up Vertica AWS
This topic presents a summary of the detailed procedures included in this document.
The procedures require basic knowledge of the AWS Management Console. For
information about working with AWS Management Console, refer to the AWS
documentation, or use the detailed procedures.
From the AWS Management Console:
1. Choose a region, and create and name a Placement Group.
2. Create and name a Key Pair.
3. Create a VPC.
a. Edit the public subnet; change according to your planned system set-up.
b. Choose an availability zone.
4. Create an Internet Gateway and assign it to your VPC (or use the default gateway).
5. Create a Security Group for your VPC (or use the default security group, but add
rules).
Inbound rules to add:
  - HTTP
  - Custom ICMP Rule: Echo Reply
  - Custom ICMP Rule: Traceroute
  - SSH
  - HTTPS
  - Custom TCP Rule: (Port Range) 4803-4805
  - Custom UDP Rule: (Port Range) 4803-4805
  - Custom TCP Rule: (Port Range) 5433
  - Custom TCP Rule: (Port Range) 5434
  - Custom TCP Rule: (Port Range) 5450
Outbound rules to add:
  - All TCP: (Destination) 0.0.0.0/0
  - All ICMP: (Destination) 0.0.0.0/0
6. Create instances.
a. Select Launch Instance.
b. From Community AMIs, choose a Vertica AMI.
c. Select the Compute optimized tab and select a supported instance type.
d. Choose the number of instances, network, and placement group.
e. Select a security group.
f. Click Launch and choose your key pair.
g. Click Launch Instances.
7. Assign an elastic IP to an instance.
8. Connect to the instance that is attached to the elastic IP.
9. Run the install_vertica script after placing the *.pem file on your Amazon
instance.
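The inbound rules from step 5 can also be expressed with the AWS CLI. The following dry run only echoes the commands; the security group ID is a placeholder, and the open-to-the-world CIDR is for illustration only (restrict the source ranges in practice):

```shell
sg=sg-0123456789abcdef0                  # placeholder security group ID
# Single-port TCP rules: SSH, HTTP, HTTPS, and the Vertica ports 5433, 5434, 5450.
for port in 22 80 443 5433 5434 5450; do
  echo aws ec2 authorize-security-group-ingress --group-id "$sg" \
    --protocol tcp --port "$port" --cidr 0.0.0.0/0
done
# Spread port ranges, TCP and UDP.
echo aws ec2 authorize-security-group-ingress --group-id "$sg" \
  --protocol tcp --port 4803-4805 --cidr 0.0.0.0/0
echo aws ec2 authorize-security-group-ingress --group-id "$sg" \
  --protocol udp --port 4803-4805 --cidr 0.0.0.0/0
# echoes eight authorize-security-group-ingress commands
```

The ICMP rules from the console procedure are omitted here; add them through the console, or with --protocol icmp and the appropriate type codes.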
Send Documentation Feedback
If you have comments about this document, you can contact the documentation team by
email. If an email client is configured on this system, click the link above and an email
window opens with the following information in the subject line:
Feedback on Vertica on Amazon Web Services (Vertica Analytic Database 7.2.x)
Just add your feedback to the email and click send.
If no email client is available, copy the information above to a new message in a web
mail client, and send your feedback to vertica-docfeedback@hpe.com.
We appreciate your feedback!