Using the HPE DL380 Gen9 24-SFF Server as a Vertica Node
The Vertica Analytics Platform software runs on a shared-nothing MPP cluster of
peer nodes. Each peer node is independent, and processing is massively parallel.
A Vertica node is a hardware host configured to run an instance of Vertica.
This document provides recommendations for configuring an individual DL380 Gen9
24-SFF CTO Server as a Vertica node.
The recommendations presented in this document are intended to help you create a
cluster with the highest possible Vertica software performance.
This document includes a Bill of Materials (BOM) as a reference and to provide more
information about the DL380 Gen9 24-SFF CTO Server.
Recommended Software
This document assumes that your services, after you configure them, will be running
the following minimum software versions:
 Vertica 7.2 (or later) Enterprise Edition. This is the most recent release as of
April 2016.
 Red Hat Enterprise Linux 6.x.
If you are running Red Hat Enterprise Linux 7.x, watch for information in this
document that is clearly marked as RHEL 7.1 specific.
Selecting a Server Model
The HPE DL380 Gen9 product family includes several server models. The best model
for maximum Vertica software performance is the DL380 Gen9 24-SFF CTO Server
(part number 767032-B21).
Selecting a Processor
For maximum price/performance advantage on your Vertica database, the DL380
Gen9 24-SFF servers used for Vertica nodes should include two (2) Intel Xeon E5-
2690v3 2.6 GHz/12-core DDR4-2133 135W processors.
This processor recommendation is based on the fastest 12-core processors available
for the DL380 Gen9 24-SFF platform at the time of this writing. These processors
allow Vertica to deliver the fastest possible response time across a wide spectrum of
concurrent database workloads.
The processor’s faster clock speed directly affects the Vertica database response
time. Additional cores enhance the cluster’s ability to simultaneously execute
multiple MPP queries and data loads.
Selecting Memory
For maximum Vertica performance, DL380 Gen9 24-SFF servers used as Vertica
nodes should include 256 GB of RAM. Configure this memory as follows:
 8 x 32 GB DDR4-2133 RDIMMs, 1DPC (32 GB per channel)
 In the field, you can expand this configuration to 512 GB by adding
8 x 32 GB DIMMs.
A two-processor DL380 Gen9 24-SFF server has 8 memory channels with 3 DIMM
slots in each channel, for a total of 24 slots. DL380 Gen9 24-SFF memory
configuration should comply with DIMM population rules and guidelines:
 Do not leave any channel completely blank. Load all channels similarly.
 Limit the number of DIMMs per channel (DPC) to 2. Doing so allows you to use
the highest supported DIMM speed of 2133 MHz; channels populated with 3 DIMMs
run at 1866 MHz or lower.
Note
Follow these guidelines to avoid a reduction in memory speed that could adversely
affect the performance of your Vertica database.
The preceding recommended memory configuration is based on 32 GB DDR4 2133
MHz DIMMs and 256 GB of RAM. That configuration is intended to achieve the best
memory performance while providing the option of future expansion. The following
table provides several alternate memory configurations:
Sample Configuration | Total Memory | Considerations
8 x 16 GB DDR4-2133 RDIMMs, 1 DPC (16 GB per channel) | 128 GB | A low-memory option for systems with less concurrency and slower speed requirements.
16 x 16 GB DDR4-2133 RDIMMs, 2 DPC (16 GB + 16 GB per channel) | 256 GB | A slightly less expensive option for 256 GB of RAM that does not allow expansion in the field.
8 x 32 GB DDR4-2133 RDIMMs, 1 DPC (32 GB per channel) | 256 GB | The standard memory recommendation for Vertica.
16 x 32 GB DDR4-2133 RDIMMs, 2 DPC (32 GB + 32 GB per channel) | 512 GB | A high-memory option that may be beneficial for some database workloads.
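The totals in the table follow directly from DIMM count and capacity. A minimal shell check of the standard recommendation:

```shell
# Verify the standard recommendation: 8 DIMMs x 32 GB at 1 DPC across 8 channels.
dimms=8; dimm_size_gb=32; channels=8
total_gb=$((dimms * dimm_size_gb))
dpc=$((dimms / channels))
echo "${total_gb} GB total, ${dpc} DPC"   # → 256 GB total, 1 DPC
```

The same arithmetic applied to the 16 x 32 GB configuration yields 512 GB at 2 DPC.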
Selecting and Configuring Storage
Configure the storage hardware as follows for maximum performance of the DL380
Gen9 24-SFF server used as a Vertica node:
 1x HPE DL380 Gen9 24-SFF CTO Chassis with 24 Hot Plug SmartDrive SFF (2.5-
inch) Drive Bays
 1x HPE DL380 Gen9 2-SFF Kit with 2 Hot Plug SmartDrive SFF (2.5-inch) Drive
Bays (on the back of the server)
 1x HPE Smart Array P440ar/2GB FBWC 12Gb 2-ports Int FIO SAS Controller
(integrated on systemboard)
 2x 300 GB 12 Gb SAS 10K 2.5-inch SC ENT drives (configured as RAID 1 for the
OS and the Vertica catalog location)
 24x 1.2 TB 12 Gb SAS 10K 2.5-inch SC ENT drives (configured as one RAID 1+0
device for the Vertica data location, for approximately 13 TB of total formatted
storage capacity per Vertica node)
You can configure a Vertica node with less storage capacity:
 Substitute the 24x 1.2 TB 12 Gb SAS 10K 2.5-inch SC ENT drives with 24x HPE
600 GB 12 Gb SAS 10K 2.5-inch SC ENT drives.
 Configure the drives as one RAID 1+0 device for the Vertica data location, for
approximately 6 TB of total data storage capacity per Vertica node.
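The quoted capacities can be cross-checked: RAID 1+0 mirrors every drive, so usable capacity is half the raw total, and formatted capacity comes in slightly lower after filesystem overhead. A quick check:

```shell
# RAID 1+0 usable capacity = (drive count x drive size) / 2, in decimal TB.
awk 'BEGIN { printf "1.2 TB drives: %.1f TB usable; 600 GB drives: %.1f TB usable\n", \
             24*1.2/2, 24*0.6/2 }'
# → 1.2 TB drives: 14.4 TB usable; 600 GB drives: 7.2 TB usable
```

After ext4 formatting, these raw figures land near the approximately 13 TB and 6 TB cited above.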
Alternatively, you can configure the 23rd and 24th 1.2 TB (or 600 GB) data drives
(leaving 22 active drives in total) as hot spares. However, such a configuration is
unnecessary with RAID 1+0.
Vertica can operate on any storage type. For example, Vertica can run on internal
storage, a SAN array, a NAS storage unit, or a DAS enclosure. In each case, the
storage appears to the host as a file system and is capable of providing sufficient I/O
bandwidth. Internal storage in a RAID configuration offers the best
price/performance/availability characteristics at the lowest TCO.
A Vertica installation requires at least two storage locations: one for the operating
system and catalog, and one for data. Place each location on a dedicated,
contiguous storage volume.
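One way to confirm that two locations actually sit on separate devices is to compare their device IDs; a sketch, using placeholder paths rather than the guide's mount points:

```shell
# Compare the block device IDs of two storage locations (placeholder paths).
catalog_path=/
data_path=/tmp
catalog_dev=$(stat -c %d "$catalog_path")
data_dev=$(stat -c %d "$data_path")
if [ "$catalog_dev" = "$data_dev" ]; then
  echo "WARNING: $catalog_path and $data_path share a device"
else
  echo "$catalog_path and $data_path are on separate devices"
fi
```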
Vertica is a multithreaded application. The Vertica data location I/O profile is best
characterized as large block random I/O.
Drive Bay Population
The 26 drive bays on the DL380 Gen9 24-SFF servers are attached to the Smart Array
P440ar Controller over 4 internal SAS port connectors (through a 12 G SAS Expander
Card) as follows:
 Cages 1, 2, 3—Drive bays 1–8, 9–16, 17–24 (8 drives each)
 Cage 4—Drive bays 25 and 26 (2 drives)
For example, an ideal implementation of the recommended 26-drive configuration is:
 300 GB drives placed in bays 25 and 26, in the 2SFF drive cage at the rear of
the server.
 1.2 TB (or 600 GB) drives placed in all bays on the front of the server. This
approach spreads the Vertica data RAID 1+0 I/O evenly across the SAS groups.
Note
For best performance, make sure to fully populate all the drive bays.
Protecting Data on Bulk Storage
The HPE Smart Array P440ar/2GB FBWC 12 Gb 2-ports Int FIO SAS Controller offers
the optional HPE Secure Encryption capability that protects data at rest on any bulk
storage attached to the controller. (Additional software, hardware, and licenses may
be required.) For more information, see HPE Secure Encryption product details.
Data RAID Configuration
The 24 data drives should be configured as one RAID 1+0 device as follows:
 The recommended strip size for the data RAID 1+0 is 512 KB, which is the
default setting for the P440ar controller.
 The recommended Controller Cache (Accelerator) Ratio is 10/90, which is the
default setting for the P440ar controller.
 The logical drive should be partitioned with a single primary partition
spanning the entire drive.
Place the Vertica data location on a dedicated physical storage volume. Do not co-
locate the Vertica data location with the Vertica catalog location. Hewlett Packard
Enterprise recommends placing the Vertica catalog location for a Vertica node on a
DL380 Gen9 24-SFF server on the operating system drive.
For more information, read Before You Install Vertica in the product documentation,
particularly the discussion of Vertica storage locations.
Note
Vertica does not support storage configured with the Linux Logical Volume Manager
in the I/O path. This limitation applies to all Vertica storage locations, including the
catalog, which is typically placed on the OS drive.
Linux I/O Subsystem Tuning
To support the maximum performance DL380 Gen9 24-SFF node configuration,
Hewlett Packard Enterprise recommends the following Linux I/O configuration
settings for the Vertica data location volumes:
 The recommended Linux file system is ext4.
 The recommended Linux I/O Scheduler is deadline.
 The recommended Linux Readahead setting is 8192 512-byte sectors (4 MB).
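The read-ahead value is expressed in 512-byte sectors, so 8192 sectors works out to the 4 MB cited above:

```shell
# 8192 sectors x 512 bytes/sector = 4 MB of read-ahead.
sectors=8192
echo "$((sectors * 512 / 1024 / 1024)) MB"   # → 4 MB
```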
The current configuration recommendations differ from the previously issued
guidance due to the changes in the Vertica I/O profile implemented in Vertica 7.x.
System administrators should durably configure the deadline scheduler and the
read-ahead settings for the Vertica data volume so that these settings persist across
server restarts.
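A udev rule is one durable way to meet this persistence requirement; the rule text and file name below are illustrative assumptions, not values from this guide:

```shell
# Illustrative udev rule (file name and rule text are assumptions): applies the
# deadline scheduler and a 4 MB (4096 KB) read-ahead to sdb on every boot.
RULE='ACTION=="add|change", KERNEL=="sdb", ATTR{queue/scheduler}="deadline", ATTR{queue/read_ahead_kb}="4096"'
# Stage the rule for review before installing it under /etc/udev/rules.d/.
printf '%s\n' "$RULE" > /tmp/60-vertica-data.rules
cat /tmp/60-vertica-data.rules
```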
Caution
Failing to use the recommended Linux I/O subsystem settings will adversely affect
performance of the Vertica software.
Data RAID Configuration Example
The following configuration and tuning instructions pertain to the Vertica data
storage location.
Note
The following steps are provided as an example, and may not be correct for your
machine.
Verify the drive numbers and population for your machine before running these
commands.
1. In Red Hat Enterprise Linux 7.x, first load the modules that HPSSACLI requires
by executing the following commands:
modprobe sg
modprobe hpsa hpsa_allow_any=1
2. View the current storage configuration:
# hpssacli
>controller slot=1 show
3. Assume that the RAID 1 OS drive is configured as logical drive 1 and the 24 data
drives are either “loose” or pre-configured into a temporary logical drive. To
create a new logical drive with the recommended parameters, any non-OS logical
drives must be destroyed; their content will be lost. To destroy a non-OS logical
drive:
>controller slot=1 ld 2 delete forced
4. Create a new RAID 1+0 data drive with a 512 KB strip size:
>controller slot=1 create type=ld raid=1+0 ss=512 drives=allunassigned
5. Partition and format the RAID10 data drive:
# parted -s /dev/sdb mklabel gpt mkpart primary ext4 0% 100%
# mkfs.ext4 /dev/sdb1
6. Create a /data mount point, add a line to the /etc/fstab file, and mount the
Vertica data volume:
# mkdir /data
[add line to /etc/fstab]: /dev/sdb1 /data ext4 defaults,noatime 0 0
# mount /data
7. So that the Linux I/O scheduler, Linux read-ahead, and transparent hugepage
defragmentation settings persist across system restarts, add the following lines
to /etc/rc.local. Apply these settings to every drive in your system.
Note
The following commands assume that sdb is the data drive, and sda is the OS/catalog
drive.
echo deadline > /sys/block/sdb/queue/scheduler
blockdev --setra 8192 /dev/sdb
echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled
echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag
echo no > /sys/kernel/mm/redhat_transparent_hugepage/khugepaged/defrag
echo deadline > /sys/block/sda/queue/scheduler
blockdev --setra 2048 /dev/sda
8. After you have configured the storage, run vioperf to understand the baseline
I/O performance.
The minimum required I/O is 20 MB/s read and write per physical processor core on
each node. This value is based on running in full duplex and reading and writing at
this rate simultaneously, concurrently on all nodes of the cluster.
Hewlett Packard Enterprise recommends an I/O rate of 40 MB/s per physical core on
each node. For example, a server node with two hyperthreaded 6-core CPUs requires
a minimum I/O rate of 240 MB/s; Hewlett Packard Enterprise recommends
480 MB/s. A properly configured DL380 Gen9 significantly outperforms these figures.
Your I/O performance should be close to:
 2,200 MB/s for read and write when using 15K RPM drives
 2,200 MB/s write and 1,500 MB/s read when using 10K RPM drives
 800 + 800 MB/s for rewrite
 7,000+ seeks per second
If you do not see those results, review the preceding steps to make sure you
configured your data storage location correctly.
For more information, see vioperf in the product documentation.
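Scaling the per-core figures to the recommended two 12-core processors gives the per-node targets; a quick shell calculation:

```shell
# Minimum and recommended I/O rates scale with physical core count.
cores=24                    # 2 sockets x 12 cores (recommended configuration)
min_mbps=$((cores * 20))    # required minimum: 20 MB/s per physical core
rec_mbps=$((cores * 40))    # recommended: 40 MB/s per physical core
echo "minimum ${min_mbps} MB/s, recommended ${rec_mbps} MB/s"
# → minimum 480 MB/s, recommended 960 MB/s
```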
Selecting a Network Adapter
To support the maximum performance MPP cluster operations, DL380 Gen9 24-SFF
servers used as Vertica nodes should include at least two 10 Gigabit Ethernet (RJ-45)
ports:
 HPE Ethernet 10 Gb 2-port 561FLR-T Adapter in the FlexibleLOM slot
Alternatively, if you need SFP+ networking, DL380 Gen9 24-SFF servers used as
Vertica nodes should include at least two (2) 10 Gigabit Ethernet (SFP+) ports.
 HPE Ethernet 10 Gb 2-port 546FLR-SFP+ Adapter in the FlexibleLOM slot
A Vertica cluster is formed with DL380 Gen9 24-SFF servers, associated network
switches, and Vertica software.
When used as a Vertica node, each DL380 Gen9 24-SFF server should be connected
to two separate Ethernet networks:
 The private network, such as the cluster interconnect, is used exclusively for internal
cluster communications. This network must be 10 Gb Ethernet, on the same subnet,
and on a dedicated switch or VLAN. Vertica uses TCP point-to-point communications
and UDP broadcasts on this network. IP addresses for the private network interfaces
must be assigned statically. No external traffic should be allowed over the private
cluster network.
 The public network is used for database client (that is, application) connectivity
and should be 10 Gb Ethernet. Vertica has no rigid requirements for public network
configuration. However, Hewlett Packard Enterprise recommends that you assign
static IP addresses for the public network interfaces.
The private network interconnect should have Ethernet redundancy. Otherwise, the
interconnect (specifically the switch) would be a single point of failure for the entire cluster.
Cluster operations are not affected even in the event of a complete failure of a
public network. Thus, public network redundancy is not technically required.
However, if a failure occurs, the application connectivity to the database is affected.
Therefore, consider public network redundancy for continuous availability of the
entire environment.
To achieve redundancy on both the private and public networks:
1. Run one of the two ports from the Ethernet card on the server to each of the
two top-of-rack switches (which are bonded together in an IRF).
2. Bond the links together using LACP.
3. Divide the links into public and private networks, using VLANs.
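On RHEL 6.x, the LACP bond from step 2 can be expressed as an ifcfg file. The device names, bonding options, and addresses below are illustrative assumptions, not values from this guide:

```ini
# /etc/sysconfig/network-scripts/ifcfg-bond0  (illustrative values)
DEVICE=bond0
TYPE=Bond
BOOTPROTO=none
ONBOOT=yes
# mode=802.3ad selects LACP; miimon sets the link-monitoring interval in ms
BONDING_OPTS="mode=802.3ad miimon=100"
IPADDR=192.168.10.11
NETMASK=255.255.255.0
```

Each physical slave interface (for example, ifcfg-eth0) would then set MASTER=bond0 and SLAVE=yes, and the VLAN subinterfaces for the public and private networks would be layered on top of bond0.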
Configuring the Network
The following figure illustrates a typical network setup that achieves high throughput
and high availability. (This figure is for demonstration purposes only.)
This figure shows that bonding the adapters allows one adapter to fail without the
connection failing, providing high availability of the network ports. Bonding the
adapters also doubles the throughput.
Furthermore, this configuration allows for high availability with respect to the switch.
If a switch fails, the cluster does not go down. However, it may reduce the network
throughput by 50%.
Tuning the TCP/IP Stack
Depending on your workload, number of connections, and client connect rates, you
need to tune the Linux TCP/IP stack to provide adequate network performance and
throughput.
The following script represents the recommended network (TCP/IP) tuning
parameters for a Vertica cluster. Other network characteristics may affect how much
these parameters optimize your throughput.
Add the following parameters to the /etc/sysctl.conf file. The changes take effect
after the next reboot.
##### /etc/sysctl.conf
# Increase number of incoming connections
net.core.somaxconn = 1024
# Sets the send socket buffer maximum size in bytes.
net.core.wmem_max = 16777216
# Sets the receive socket buffer maximum size in bytes.
net.core.rmem_max = 16777216
# Sets the send socket buffer default size in bytes.
net.core.wmem_default = 262144
# Sets the receive socket buffer default size in bytes.
net.core.rmem_default = 262144
# Sets the maximum number of packets allowed to queue when a particular
# interface receives packets faster than the kernel can process them.
net.core.netdev_max_backlog = 100000
net.ipv4.tcp_mem = 16777216 16777216 16777216
net.ipv4.tcp_wmem = 8192 262144 8388608
net.ipv4.tcp_rmem = 8192 262144 8388608
net.ipv4.udp_mem = 16777216 16777216 16777216
net.ipv4.udp_rmem_min = 16384
net.ipv4.udp_wmem_min = 16384
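To apply the settings without waiting for a reboot, `sysctl -p` reloads the file. A small pre-flight syntax check can catch malformed lines first; the file path below is a scratch copy for illustration, not /etc/sysctl.conf itself:

```shell
# Validate that every non-comment, non-blank line is "key = value" before
# loading it; real settings belong in /etc/sysctl.conf, loaded via `sysctl -p`.
cat > /tmp/vertica-sysctl.conf <<'EOF'
net.core.somaxconn = 1024
net.core.wmem_max = 16777216
EOF
awk '!/^[[:space:]]*(#|$)/ && ($2 != "=" || NF < 3) { print "bad line: " $0; bad = 1 }
     END { exit bad }' /tmp/vertica-sysctl.conf && echo "fragment OK"
```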
HPE ROM-Based Setup Utility (BIOS) Settings
Hewlett Packard Enterprise recommends that you configure the BIOS for maximum
performance:
System Configuration > BIOS/Platform Configuration (RBSU) > Power Management >
HPE Power Profile > [Maximum Performance]
Additionally, you must modify the max_cstate value.
For Red Hat Enterprise Linux versions prior to 7.0, modify the kernel entry in the
/etc/grub.conf file by appending the following to the kernel command:
intel_idle.max_cstate=0 processor.max_cstate=0
For Red Hat Enterprise Linux 7.x, /etc/grub.conf has become /boot/efi/EFI/[centos or
redhat]/grub.conf. Append the following arguments to the kernel command:
nosoftlockup intel_idle.max_cstate=0 mce=ignore_ce
HPE ProLiant DL380 Gen9 24-SFF Bill of Materials
The following table contains a sample Bill of Materials (BOM). Note the following
about this BOM:
 Part numbers are listed for reference only.
 Networking equipment and racks are not included.
 All hardware requests should come through the CSE team for review.
Quantity | Part Number | Description | Notes
1 | 767032-B21 | HPE DL380 Gen9 24-SFF CTO Server |
1 | 719044-L21 | HPE DL380 Gen9 E5-2690v3 FIO Kit | For 12 cores
1 | 719044-B21 | HPE DL380 Gen9 E5-2690v3 Kit | For 12 cores
8 | 726722-B21 | HPE 32 GB 4Rx4 PC4-2133P-L Kit | For 256 GB of memory (expandable to 512 GB)
1 | 727250-B21 | HPE 12G SAS Expander Card |
1 | 724864-B21 | HPE DL380 Gen9 2SFF Kit |
1 | SG506A | HPE C13-C14 2.5 ft Sgl Data Special | Requirement for deployments using HPE Intelligent PDUs
1 | SG508A | HPE C13-C14 4.5 ft Sgl Data Special | Requirement for deployments using HPE Intelligent PDUs
1 | 768896-B21 | HPE DL380 Gen9 Rear Serial Cable Kit |
1 | 749974-B21 | HPE Smart Array P440ar/2GB FBWC 12Gb 2-ports Int FIO SAS Controller |
2 | 785067-B21 | HPE 300 GB 12G SAS 10K 2.5in SC ENT HDD | For operating system and database catalog
24 | 781518-B21 | HPE 1.2 TB 12G SAS 10K 2.5in SC ENT HDD | For approximately 13 TB formatted storage capacity
1 | 700699-B21 | HPE Ethernet 10Gb 2P 561FLR-T Adapter | For RJ-45 networking
2 | 720479-B21 | HPE 800W FS Plat Ht Plg Pwr Supply Kit |
1 | 733660-B21 | HPE 2U SFF Easy Rail Kit |
Alternate Part Appendix
Processor
Quantity | Part Number | Description | Notes
1 | 719046-L21 | HPE DL380 Gen9 E5-2670v3 FIO Kit | 12 cores at 2.3 GHz
1 | 719046-B21 | HPE DL380 Gen9 E5-2670v3 Kit |
Memory
Quantity | Part Number | Description | Notes
16 | 726722-B21 | HPE 32 GB 4Rx4 PC4-2133P-L Kit | For 512 GB of memory
Disk Drives
Quantity | Part Number | Description | Notes
24 | 759212-B21 | HPE 600 GB 12G SAS 10K 2.5in SC ENT HDD | For approximately 6 TB formatted storage capacity
Networking
Quantity | Part Number | Description | Notes
1 | 779799-B21 | HPE Ethernet 10GbE 546FLR-SFP+ Adapter | For SFP+ networking
Competitive switching comparison cisco vs. hpe aruba vs. huawei vs. dellIT Tech
 
Four reasons to consider the all in-one isr 1000
Four reasons to consider the all in-one isr 1000Four reasons to consider the all in-one isr 1000
Four reasons to consider the all in-one isr 1000IT Tech
 
The difference between yellow and white labeled ports on a nexus 2300 series fex
The difference between yellow and white labeled ports on a nexus 2300 series fexThe difference between yellow and white labeled ports on a nexus 2300 series fex
The difference between yellow and white labeled ports on a nexus 2300 series fexIT Tech
 
Cisco transceiver modules for compatible cisco switches series
Cisco transceiver modules for compatible cisco switches seriesCisco transceiver modules for compatible cisco switches series
Cisco transceiver modules for compatible cisco switches seriesIT Tech
 
Guide to the new cisco firepower 2100 series
Guide to the new cisco firepower 2100 seriesGuide to the new cisco firepower 2100 series
Guide to the new cisco firepower 2100 seriesIT Tech
 
892 f sfp configuration example
892 f sfp configuration example892 f sfp configuration example
892 f sfp configuration exampleIT Tech
 
Cisco nexus 7000 and nexus 7700
Cisco nexus 7000 and nexus 7700Cisco nexus 7000 and nexus 7700
Cisco nexus 7000 and nexus 7700IT Tech
 
Cisco firepower ngips series migration options
Cisco firepower ngips series migration optionsCisco firepower ngips series migration options
Cisco firepower ngips series migration optionsIT Tech
 
Eol transceiver to replacement model
Eol transceiver to replacement modelEol transceiver to replacement model
Eol transceiver to replacement modelIT Tech
 

More from IT Tech (20)

Cisco ip phone key expansion module setup
Cisco ip phone key expansion module setupCisco ip phone key expansion module setup
Cisco ip phone key expansion module setup
 
Cisco catalyst 9200 series platform spec, licenses, transition guide
Cisco catalyst 9200 series platform spec, licenses, transition guideCisco catalyst 9200 series platform spec, licenses, transition guide
Cisco catalyst 9200 series platform spec, licenses, transition guide
 
Cisco isr 900 series highlights, platform specs, licenses, transition guide
Cisco isr 900 series highlights, platform specs, licenses, transition guideCisco isr 900 series highlights, platform specs, licenses, transition guide
Cisco isr 900 series highlights, platform specs, licenses, transition guide
 
Hpe pro liant gen9 to gen10 server transition guide
Hpe pro liant gen9 to gen10 server transition guideHpe pro liant gen9 to gen10 server transition guide
Hpe pro liant gen9 to gen10 server transition guide
 
The new cisco isr 4461 faq
The new cisco isr 4461 faqThe new cisco isr 4461 faq
The new cisco isr 4461 faq
 
New nexus 400 gigabit ethernet (400 g) switches
New nexus 400 gigabit ethernet (400 g) switchesNew nexus 400 gigabit ethernet (400 g) switches
New nexus 400 gigabit ethernet (400 g) switches
 
Tested cisco isr 1100 delivers the richest set of wi-fi features
Tested cisco isr 1100 delivers the richest set of wi-fi featuresTested cisco isr 1100 delivers the richest set of wi-fi features
Tested cisco isr 1100 delivers the richest set of wi-fi features
 
Aruba campus and branch switching solution
Aruba campus and branch switching solutionAruba campus and branch switching solution
Aruba campus and branch switching solution
 
Cisco transceiver module for compatible catalyst switches
Cisco transceiver module for compatible catalyst switchesCisco transceiver module for compatible catalyst switches
Cisco transceiver module for compatible catalyst switches
 
Cisco ios on cisco catalyst switches
Cisco ios on cisco catalyst switchesCisco ios on cisco catalyst switches
Cisco ios on cisco catalyst switches
 
Cisco's wireless solutions deployment modes
Cisco's wireless solutions deployment modesCisco's wireless solutions deployment modes
Cisco's wireless solutions deployment modes
 
Competitive switching comparison cisco vs. hpe aruba vs. huawei vs. dell
Competitive switching comparison cisco vs. hpe aruba vs. huawei vs. dellCompetitive switching comparison cisco vs. hpe aruba vs. huawei vs. dell
Competitive switching comparison cisco vs. hpe aruba vs. huawei vs. dell
 
Four reasons to consider the all in-one isr 1000
Four reasons to consider the all in-one isr 1000Four reasons to consider the all in-one isr 1000
Four reasons to consider the all in-one isr 1000
 
The difference between yellow and white labeled ports on a nexus 2300 series fex
The difference between yellow and white labeled ports on a nexus 2300 series fexThe difference between yellow and white labeled ports on a nexus 2300 series fex
The difference between yellow and white labeled ports on a nexus 2300 series fex
 
Cisco transceiver modules for compatible cisco switches series
Cisco transceiver modules for compatible cisco switches seriesCisco transceiver modules for compatible cisco switches series
Cisco transceiver modules for compatible cisco switches series
 
Guide to the new cisco firepower 2100 series
Guide to the new cisco firepower 2100 seriesGuide to the new cisco firepower 2100 series
Guide to the new cisco firepower 2100 series
 
892 f sfp configuration example
892 f sfp configuration example892 f sfp configuration example
892 f sfp configuration example
 
Cisco nexus 7000 and nexus 7700
Cisco nexus 7000 and nexus 7700Cisco nexus 7000 and nexus 7700
Cisco nexus 7000 and nexus 7700
 
Cisco firepower ngips series migration options
Cisco firepower ngips series migration optionsCisco firepower ngips series migration options
Cisco firepower ngips series migration options
 
Eol transceiver to replacement model
Eol transceiver to replacement modelEol transceiver to replacement model
Eol transceiver to replacement model
 

Recently uploaded

Vertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsVertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsMiki Katsuragi
 
WordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your BrandWordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your Brandgvaughan
 
Unraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfUnraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfAlex Barbosa Coqueiro
 
Connect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationConnect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationSlibray Presentation
 
Understanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitectureUnderstanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitecturePixlogix Infotech
 
Pigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping ElbowsPigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping ElbowsPigging Solutions
 
APIForce Zurich 5 April Automation LPDG
APIForce Zurich 5 April  Automation LPDGAPIForce Zurich 5 April  Automation LPDG
APIForce Zurich 5 April Automation LPDGMarianaLemus7
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024Lorenzo Miniero
 
Key Features Of Token Development (1).pptx
Key  Features Of Token  Development (1).pptxKey  Features Of Token  Development (1).pptx
Key Features Of Token Development (1).pptxLBM Solutions
 
AI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsAI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsMemoori
 
Bluetooth Controlled Car with Arduino.pdf
Bluetooth Controlled Car with Arduino.pdfBluetooth Controlled Car with Arduino.pdf
Bluetooth Controlled Car with Arduino.pdfngoud9212
 
Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Commit University
 
My Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationMy Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationRidwan Fadjar
 
Science&tech:THE INFORMATION AGE STS.pdf
Science&tech:THE INFORMATION AGE STS.pdfScience&tech:THE INFORMATION AGE STS.pdf
Science&tech:THE INFORMATION AGE STS.pdfjimielynbastida
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubKalema Edgar
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Scott Keck-Warren
 
Benefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other FrameworksBenefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other FrameworksSoftradix Technologies
 
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Mark Simos
 

Recently uploaded (20)

Vertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsVertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering Tips
 
WordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your BrandWordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your Brand
 
Unraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfUnraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdf
 
Connect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationConnect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck Presentation
 
Understanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitectureUnderstanding the Laravel MVC Architecture
Understanding the Laravel MVC Architecture
 
Pigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping ElbowsPigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping Elbows
 
APIForce Zurich 5 April Automation LPDG
APIForce Zurich 5 April  Automation LPDGAPIForce Zurich 5 April  Automation LPDG
APIForce Zurich 5 April Automation LPDG
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024
 
DMCC Future of Trade Web3 - Special Edition
DMCC Future of Trade Web3 - Special EditionDMCC Future of Trade Web3 - Special Edition
DMCC Future of Trade Web3 - Special Edition
 
Key Features Of Token Development (1).pptx
Key  Features Of Token  Development (1).pptxKey  Features Of Token  Development (1).pptx
Key Features Of Token Development (1).pptx
 
AI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsAI as an Interface for Commercial Buildings
AI as an Interface for Commercial Buildings
 
Bluetooth Controlled Car with Arduino.pdf
Bluetooth Controlled Car with Arduino.pdfBluetooth Controlled Car with Arduino.pdf
Bluetooth Controlled Car with Arduino.pdf
 
Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!
 
My Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationMy Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 Presentation
 
Science&tech:THE INFORMATION AGE STS.pdf
Science&tech:THE INFORMATION AGE STS.pdfScience&tech:THE INFORMATION AGE STS.pdf
Science&tech:THE INFORMATION AGE STS.pdf
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding Club
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024
 
Benefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other FrameworksBenefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other Frameworks
 
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptxE-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
 
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
 

Using the HPE DL380 Gen9 24-SFF Server as a Vertica Node

The Vertica Analytics Platform software runs on a shared-nothing MPP cluster of peer nodes. Each peer node is independent, and processing is massively parallel. A Vertica node is a hardware host configured to run an instance of Vertica.

This document provides recommendations for configuring an individual DL380 Gen9 24-SFF CTO Server as a Vertica node. The recommendations presented here are intended to help you build a cluster with the highest possible Vertica software performance. A Bill of Materials (BOM) is included for reference and to provide more detail about the DL380 Gen9 24-SFF CTO Server.

Recommended Software

This document assumes that your servers, after you configure them, will be running the following minimum software versions:

• Vertica 7.2 (or later) Enterprise Edition. This is the most recent release as of April 2016.
• Red Hat Enterprise Linux 6.x. If you are running Red Hat Enterprise Linux 7.x, watch for information in this document that is clearly marked as RHEL 7.1 specific.

Selecting a Server Model

The HPE DL380 Gen9 product family includes several server models. The best model for maximum Vertica software performance is the DL380 Gen9 24-SFF CTO Server (part number 767032-B21).

Selecting a Processor

For the best price/performance on your Vertica database, the DL380 Gen9 24-SFF servers used as Vertica nodes should include two (2) Intel Xeon E5-2690 v3 2.6 GHz 12-core DDR4-2133 135 W processors. This recommendation is based on the fastest 12-core processors available for the DL380 Gen9 24-SFF platform at the time of this writing. These processors allow Vertica to deliver the fastest possible response time across a wide spectrum of concurrent database workloads.

The processor's faster clock speed directly affects Vertica database response time. Additional cores enhance the cluster's ability to execute multiple MPP queries and data loads simultaneously.

Selecting Memory
For maximum Vertica performance, DL380 Gen9 24-SFF servers used as Vertica nodes should include 256 GB of RAM, configured as follows:

• 8 x 32 GB DDR4-2133 RDIMMs, 1 DPC (32 GB per channel)
• In the field, you can expand this configuration to 512 GB by adding 8 x 32 GB DIMMs.

A two-processor DL380 Gen9 24-SFF server has 8 memory channels with 3 DIMM slots per channel, for a total of 24 slots. The DL380 Gen9 24-SFF memory configuration should comply with DIMM population rules and guidelines:

• Do not leave any channel completely blank. Load all channels similarly.
• Populate no more than 2 DIMMs per channel (DPC). Doing so allows you to use the highest supported DIMM speed of 2133 MHz; channels populated with 3 DIMMs run at 1866 MHz or lower.

Note: Follow these guidelines to avoid a reduction in memory speed that could adversely affect the performance of your Vertica database.

The preceding recommended memory configuration is based on 32 GB DDR4-2133 DIMMs and 256 GB of RAM. It is intended to achieve the best memory performance while providing the option of future expansion. The following are several alternate memory configurations:

• 8 x 16 GB DDR4-2133 RDIMMs, 1 DPC (16 GB per channel): 128 GB. A low-memory option for systems with less concurrency and slower speed requirements.
• 16 x 16 GB DDR4-2133 RDIMMs, 2 DPC (16 GB + 16 GB per channel): 256 GB. A slightly less expensive option for 256 GB of RAM that does not allow expansion in the field.
• 8 x 32 GB DDR4-2133 RDIMMs, 1 DPC (32 GB per channel): 256 GB. The standard memory recommendation for Vertica.
• 16 x 32 GB DDR4-2133 RDIMMs, 2 DPC (32 GB + 32 GB per channel): 512 GB. A high-memory option that may be beneficial for some database workloads.

Selecting and Configuring Storage

For maximum performance, configure the storage hardware of the DL380 Gen9 24-SFF server used as a Vertica node as follows:

• 1x HPE DL380 Gen9 24-SFF CTO Chassis with 24 Hot Plug SmartDrive SFF (2.5-inch) drive bays
• 1x HPE DL380 Gen9 2-SFF Kit with 2 Hot Plug SmartDrive SFF (2.5-inch) drive bays (on the back of the server)
• 1x HPE Smart Array P440ar/2GB FBWC 12Gb 2-port Int FIO SAS Controller (integrated on the system board)
• 2x 300 GB 12G SAS 10K 2.5-inch SC ENT drives (configured as RAID 1 for the OS and the Vertica catalog location)
• 24x 1.2 TB 12G SAS 10K 2.5-inch SC ENT drives (configured as one RAID 1+0 device for the Vertica data location, for approximately 13 TB of total formatted storage capacity per Vertica node)

You can configure a Vertica node with less storage capacity:

• Substitute the 24x 1.2 TB 12G SAS 10K 2.5-inch SC ENT drives with 24x HPE 600 GB 12G SAS 10K 2.5-inch SC ENT drives.
• Configure the drives as one RAID 1+0 device for the Vertica data location, for approximately 6 TB of total data storage capacity per Vertica node.

Alternatively, you can configure the 23rd and 24th 1.2 TB (or 600 GB) data drives as hot spares, leaving 22 active drives. However, such a configuration is unnecessary with RAID 1+0.

Vertica can operate on any storage type. For example, Vertica can run on internal storage, a SAN array, a NAS storage unit, or a DAS enclosure. In each case, the storage appears to the host as a file system and is capable of providing sufficient I/O bandwidth. Internal storage in a RAID configuration offers the best price/performance/availability characteristics at the lowest TCO.
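The capacity figures above follow from the RAID 1+0 geometry: mirroring halves the raw capacity, and formatting trims the result further. A quick sketch of the arithmetic, using the drive counts and sizes recommended above:

```shell
# Usable capacity of a 24-drive RAID 1+0 set, before filesystem overhead.
# Sizes in GB: 1200 GB (1.2 TB) drives, or 600 GB drives for the smaller option.
drives=24
for size_gb in 1200 600; do
  raw_gb=$(( drives * size_gb ))
  usable_gb=$(( raw_gb / 2 ))   # RAID 1+0 mirrors every drive, halving capacity
  echo "24 x ${size_gb} GB -> ${usable_gb} GB usable before formatting"
done
```

Filesystem overhead and decimal-to-binary unit conversion account for the gap between the 14,400 GB (or 7,200 GB) usable figures and the approximately 13 TB (or 6 TB) formatted capacities quoted above.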
A Vertica installation requires at least two storage locations: one for the operating system and catalog, and one for data. Place these locations on dedicated, contiguous storage volumes. Vertica is a multithreaded application, and the I/O profile of the Vertica data location is best characterized as large-block random I/O.

Drive Bay Population

The 26 drive bays on the DL380 Gen9 24-SFF server are attached to the Smart Array P440ar Controller over 4 internal SAS port connectors (through a 12G SAS Expander Card) as follows:

• Cages 1, 2, 3: drive bays 1-8, 9-16, and 17-24 (8 drives each)
• Cage 4: drive bays 25 and 26 (2 drives)

For example, an ideal implementation of the recommended 26-drive configuration is:

• 300 GB drives placed in bays 25 and 26, in the 2-SFF drive expander at the rear of the server.
• 1.2 TB (or 600 GB) drives placed in all bays on the front of the server.

This approach spreads the Vertica data RAID 1+0 I/O evenly across the SAS groups.

Note: For best performance, make sure to fully populate all the drive bays.

Protecting Data on Bulk Storage

The HPE Smart Array P440ar/2GB FBWC 12Gb 2-port Int FIO SAS Controller offers the optional HPE Secure Encryption capability, which protects data at rest on any bulk storage attached to the controller. (Additional software, hardware, and licenses may be required.) For more information, see the HPE Secure Encryption product details.

Data RAID Configuration

Configure the 24 data drives as one RAID 1+0 device as follows:

• The recommended strip size for the data RAID 1+0 is 512 KB, which is the default setting for the P440ar controller.
• The recommended Controller Cache (Accelerator) Ratio is 10/90, which is the default setting for the P440ar controller.
• Partition the logical drive with a single primary partition spanning the entire drive.

Place the Vertica data location on a dedicated physical storage volume. Do not co-locate the Vertica data location with the Vertica catalog location.
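For intuition about the layout these settings produce: a 24-drive RAID 1+0 set stripes data across 12 mirrored pairs, so with the recommended 512 KB strip a full stripe spans 6 MB. This stripe-geometry arithmetic is our own illustration, not a figure from the guide:

```shell
# Stripe geometry of the recommended 24-drive RAID 1+0 data device.
drives=24
strip_kb=512                 # recommended strip size (P440ar default)
pairs=$(( drives / 2 ))      # RAID 1+0 mirrors drives in pairs, stripes across pairs
full_stripe_kb=$(( pairs * strip_kb ))
echo "${pairs} mirrored pairs, full stripe = $(( full_stripe_kb / 1024 )) MB"
```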
Hewlett Packard Enterprise recommends that the Vertica catalog location on a Vertica node built on a DL380 Gen9 24-SFF server be the operating system drive. For more information, read Before You Install Vertica in the product documentation, particularly the discussion of Vertica storage locations.

Note: Vertica does not support storage configured with the Linux Logical Volume Manager in the I/O path. This limitation applies to all Vertica storage locations, including the catalog, which is typically placed on the OS drive.

Linux I/O Subsystem Tuning

To support the maximum-performance DL380 Gen9 24-SFF node configuration, Hewlett Packard Enterprise recommends the following Linux I/O configuration settings for the Vertica data location volumes:

• The recommended Linux file system is ext4.
• The recommended Linux I/O scheduler is deadline.
• The recommended Linux readahead setting is 8192 512-byte sectors (4 MB).

The current configuration recommendations differ from previously issued guidance because of changes in the Vertica I/O profile implemented in Vertica 7.x. System administrators should configure the deadline scheduler and the readahead settings for the Vertica data volume durably, so that these settings persist across server restarts.

Caution: Failing to use the recommended Linux I/O subsystem settings will adversely affect performance of the Vertica software.

Data RAID Configuration Example

The following configuration and tuning instructions pertain to the Vertica data storage location.

Note: The following steps are provided as an example and may not be correct for your machine. Verify the drive numbers and population for your machine before running these commands.

1. In Red Hat Enterprise Linux 7.x, to load the modules that HPSSACLI requires, execute the following commands:
   modprobe sg
   modprobe hpsa hpsa_allow_any=1

2. View the current storage configuration:

   # hpssacli
   > controller slot=1 show

3. Assume that the RAID 1 OS drive is configured as logical drive 1 and that the 24 data drives are either "loose" or pre-configured into a temporary logical drive. To create a new logical drive with the recommended parameters, any non-OS logical drives must be destroyed; their content will be lost. To destroy a non-OS logical drive:

   > controller slot=1 ld 2 delete forced

4. Create a new RAID 1+0 data drive with a 512 KB strip size:

   > controller slot=1 create type=ld raid=1+0 ss=512 drives=allunassigned

5. Partition and format the RAID 1+0 data drive:

   # parted -s /dev/sdb mklabel gpt mkpart primary ext4 0% 100%
   # mkfs.ext4 /dev/sdb1

6. Create a /data mount point, add a line to the /etc/fstab file, and mount the Vertica data volume:

   # mkdir /data
   [add line to /etc/fstab]: /dev/sdb1 /data ext4 defaults,noatime 0 0
   # mount /data

7. So that the Linux I/O scheduler, Linux readahead, and hugepage defragmentation settings persist across system restarts, add the following lines to /etc/rc.local. Apply these steps to every drive in your system.

Note: The following commands assume that sdb is the data drive and sda is the OS/catalog drive.

   echo deadline > /sys/block/sdb/queue/scheduler
   blockdev --setra 8192 /dev/sdb
   echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled
   echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag
   echo no > /sys/kernel/mm/redhat_transparent_hugepage/khugepaged/defrag
   echo deadline > /sys/block/sda/queue/scheduler
   blockdev --setra 2048 /dev/sda

8. After you have configured the storage, run vioperf to understand the baseline I/O performance. The minimum required I/O is 20 MB/s read and write per physical processor core on each node. This value is based on running in full duplex, reading and writing at this rate simultaneously and concurrently on all nodes of the cluster. Hewlett Packard Enterprise recommends 40 MB/s per physical core on each node. For example, the required minimum I/O rate for a server node with 2 hyperthreaded 6-core CPUs is 240 MB/s; Hewlett Packard Enterprise recommends 480 MB/s.

A properly configured DL380 Gen9 significantly outperforms these minimums. Your I/O performance should be close to:

• 2,200 MB/s for read and write when using 15K RPM drives
• 2,200 MB/s write and 1,500 MB/s read when using 10K RPM drives
• 800 + 800 MB/s for rewrite
• 7,000+ seeks per second

If you do not see results close to these, review the preceding steps to make sure you configured your data storage location correctly. For more information, see vioperf in the product documentation.

Selecting a Network Adapter

To support maximum-performance MPP cluster operations, DL380 Gen9 24-SFF servers used as Vertica nodes should include at least two 10 Gigabit Ethernet (RJ-45) ports:

• HPE Ethernet 10Gb 2-port 561FLR-T Adapter in the FlexibleLOM slot

Alternatively, if you need SFP+ networking, DL380 Gen9 24-SFF servers used as Vertica nodes should include at least two (2) 10 Gigabit Ethernet (SFP+) ports:
• HPE Ethernet 10Gb 2-port 546FLR-SFP+ Adapter in the FlexibleLOM slot

A Vertica cluster is formed from DL380 Gen9 24-SFF servers, associated network switches, and the Vertica software. When used as a Vertica node, each DL380 Gen9 24-SFF server should be connected to two separate Ethernet networks:

• The private network, such as a cluster interconnect, is used exclusively for internal cluster communications. It must be 10 Gb Ethernet on its own subnet, dedicated switch, or VLAN. Vertica performs TCP peer-to-peer communications and UDP broadcasts on this network. IP addresses for the private network interfaces must be assigned statically. No external traffic should be allowed over the private cluster network.
• The public network is used for database client (that is, application) connectivity and should be 10 Gb Ethernet. Vertica has no rigid requirements for public network configuration. However, Hewlett Packard Enterprise recommends that you assign static IP addresses for the public network interfaces.

The private network interconnect should have Ethernet redundancy. Otherwise, the interconnect (specifically the switch) would be a single point of cluster-wide failure.

Cluster operations are not affected even by a complete failure of the public network, so public network redundancy is not technically required. However, a public network failure does affect application connectivity to the database. Therefore, consider public network redundancy for continuous availability of the entire environment.

To achieve redundancy on both the private and public networks:

1. Take the two ports from the Ethernet card on each server and run one to each of the two top-of-rack switches (which are bonded together in an IRF).
2. Bond the links together using LACP.
3. Divide the links into public and private networks, using VLANs.

Configuring the Network

The following figure illustrates a typical network setup that achieves high throughput and high availability.
(This figure is for demonstration purposes only.)
This figure shows that bonding the adapters allows one adapter to fail without the connection failing, providing high availability of the network ports. Bonding the adapters also doubles the throughput. Furthermore, this configuration provides high availability with respect to the switches: if a switch fails, the cluster does not go down, although network throughput may be reduced by 50%.

Tuning the TCP/IP Stack

Depending on your workload, number of connections, and client connect rates, you may need to tune the Linux TCP/IP stack to provide adequate network performance and throughput. The following settings are the recommended network (TCP/IP) tuning parameters for a Vertica cluster. Other network characteristics may affect how much these parameters optimize your throughput.

Add the following parameters to the /etc/sysctl.conf file. The changes take effect after the next reboot.

##### /etc/sysctl.conf
# Increase the number of incoming connections
net.core.somaxconn = 1024
# Maximum send socket buffer size in bytes
net.core.wmem_max = 16777216
# Maximum receive socket buffer size in bytes
net.core.rmem_max = 16777216
# Default send socket buffer size in bytes
net.core.wmem_default = 262144
# Default receive socket buffer size in bytes
net.core.rmem_default = 262144
# Increase the length of the processor input queue: the maximum number of
# packets allowed to queue when an interface receives packets faster than
# the kernel can process them
net.core.netdev_max_backlog = 100000
net.ipv4.tcp_mem = 16777216 16777216 16777216
net.ipv4.tcp_wmem = 8192 262144 8388608
net.ipv4.tcp_rmem = 8192 262144 8388608
net.ipv4.udp_mem = 16777216 16777216 16777216
net.ipv4.udp_rmem_min = 16384
net.ipv4.udp_wmem_min = 16384

HPE ROM-Based Setup Utility (BIOS) Settings

Hewlett Packard Enterprise recommends that you configure the BIOS for maximum performance:

System Configuration > BIOS/Platform Configuration (RBSU) > Power Management > HPE Power Profile > [Maximum Performance]

Additionally, you must modify the max_cstate value. For Red Hat Enterprise Linux versions prior to 7.0, modify the kernel entry in the /etc/grub.conf file by appending the following to the kernel command line:

intel_idle.max_cstate=0 processor.max_cstate=0

For Red Hat Enterprise Linux 7.x, /etc/grub.conf has become /boot/efi/EFI/[centos or redhat]/grub.cfg. Append the following arguments to the kernel command line:

nosoftlockup intel_idle.max_cstate=0 mce=ignore_ce

HPE ProLiant DL380 Gen9 24-SFF Bill of Materials

The following table contains a sample Bill of Materials (BOM). Note the following about this BOM:

• Part numbers are listed for reference only.
• Networking equipment and racks are not included.
• All hardware requests should come through the CSE team for review.

Quantity  Part Number  Description                                                          Notes
1         767032-B21   HPE DL380 Gen9 24-SFF CTO Server
1         719044-L21   HPE DL380 Gen9 E5-2690v3 FIO Kit                                     For 12 cores
1         719044-B21   HPE DL380 Gen9 E5-2690v3 Kit                                         For 12 cores
8         726722-B21   HPE 32 GB 4Rx4 PC4-2133P-L Kit                                       For 256 GB of memory (expandable to 512 GB)
1         727250-B21   HPE 12G SAS Expander Card
1         724864-B21   HPE DL380 Gen9 2-SFF Kit
1         SG506A       HPE C13-C14 2.5 ft Sgl Data                                          Special requirement for deployments using HPE Intelligent PDUs
1         SG508A       HPE C13-C14 4.5 ft Sgl Data                                          Special requirement for deployments using HPE Intelligent PDUs
1         768896-B21   HPE DL380 Gen9 Rear Serial Cable Kit
1         749974-B21   HPE Smart Array P440ar/2GB FBWC 12Gb 2-port Int FIO SAS Controller
2         785067-B21   HPE 300 GB 12G SAS 10K 2.5 in SC ENT HDD                             For operating system and database catalog
24        781518-B21   HPE 1.2 TB 12G SAS 10K 2.5 in SC ENT HDD                             For approximately 13 TB formatted storage capacity
1         700699-B21   HPE Ethernet 10Gb 2-port 561FLR-T Adapter                            For RJ-45 networking
2         720479-B21   HPE 800W FS Plat Ht Plg Pwr Supply Kit
1         733660-B21   HPE 2U SFF Easy Rail Kit

Alternate Part Appendix

Processor

Quantity  Part Number  Description                        Notes
1         719046-L21   HPE DL380 Gen9 E5-2670v3 FIO Kit   12 cores at 2.3 GHz
1         719046-B21   HPE DL380 Gen9 E5-2670v3 Kit

Memory

Quantity  Part Number  Description                        Notes
16        726722-B21   HPE 32 GB 4Rx4 PC4-2133P-L Kit     For 512 GB of memory

Disk Drives

Quantity  Part Number  Description                                Notes
24        759212-B21   HPE 600 GB 12G SAS 10K 2.5 in SC ENT HDD   For approximately 6 TB formatted storage capacity

Networking

Quantity  Part Number  Description                                Notes
1         779799-B21   HPE Ethernet 10GbE 546FLR-SFP+ Adapter     For SFP+ networking
PDF Guide Download: Configuring the HPE DL380 Gen9 24-SFF CTO Server as a Vertica Node