    13c planning: Presentation Transcript

    • Planning a Cluster (7/6/2012) © 2012 MapR Technologies
    • Planning a Cluster: Agenda
       Hardware Requirements
       Hardware Recommendations
       Operating System Requirements
       Node Configuration
       Service Layout
       HA Cluster Design
       LAB: Planning
    • Planning a Cluster: Objectives
      At the end of this module you will be able to:
       List the minimum hardware and software requirements for MapR
       List the recommended hardware configuration
       Describe typical 50 TB and 100 TB rack configurations
       Explain how MapR services are arranged on a cluster
       Identify the important considerations of HA cluster design
    • Hardware Requirements
    • Hardware Requirements
      Minimum hardware requirements:
       64-bit processor(s)
       4 GB DRAM
       One 1GbE network interface
       One free unmounted drive or partition of at least 100 GB
       At least 10 GB of free space on the OS partition
       Swap space equal to 2x RAM, or set vm.overcommit_memory to 1
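The swap alternative in the last bullet can be sketched in the shell. This is a hedged sketch assuming a standard Linux /proc and sysctl layout, run as root:

```shell
# Check current RAM and swap (values reported in kB).
grep MemTotal /proc/meminfo
grep SwapTotal /proc/meminfo

# If swap equal to 2x RAM cannot be provisioned, allow memory
# overcommit instead, as the slide suggests. Takes effect at once;
# append to /etc/sysctl.conf so it survives a reboot.
sysctl -w vm.overcommit_memory=1
echo 'vm.overcommit_memory = 1' >> /etc/sysctl.conf
```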
    • Hardware Recommendations
    • Hardware Recommendations
      Recommended hardware configuration:
       64-bit processor with 8-12 cores
       32 GB DRAM or more
       2 GigE network interfaces
       3-12 disks of 1-3 TB each
       At least 20 GB of free space on the OS partition
       32 GB of swap space or more
    • Hardware Recommendations: Typical Compute/Storage Node
       2U chassis
       Single motherboard, dual socket
       2 x 4-core CPUs + 32 GB RAM, or 2 x 6-core CPUs + 48 GB RAM
       12 x 2 TB 7200-RPM drives
       2-4 1GbE network interfaces (on-board NIC plus additional NICs)
       OS on a single partition on one drive (the remainder of the drive is used for storage)
    • Hardware Recommendations: Typical 50 TB Rack Configuration
       10 typical compute/storage nodes (10 x 12 x 2 TB storage; 3x replication, 25% margin)
       24-port 1 Gb/s rack-top switch with 2 x 10 Gb/s uplinks
       Add a second switch if each node uses 4 network interfaces
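The sizing on this slide can be reproduced with simple arithmetic. The sketch below assumes the 25% figure is headroom subtracted after replication:

```shell
# 10 nodes x 12 drives x 2 TB each = 240 TB raw capacity.
raw=$((10 * 12 * 2))
# Triple replication divides usable space by 3: 80 TB.
replicated=$((raw / 3))
# Keeping a 25% margin leaves about 60 TB, which the slide
# presents conservatively as a nominal "50 TB" rack.
usable=$((replicated * 75 / 100))
echo "raw=${raw}TB replicated=${replicated}TB usable=${usable}TB"
```

The 100 TB rack on the next slide follows the same arithmetic with 20 nodes.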
    • Hardware Recommendations: Typical 100 TB Rack Configuration
       20 typical compute/storage nodes (20 x 12 x 2 TB storage; 3x replication, 25% margin)
       48-port 1 Gb/s rack-top switch with 4 x 10 Gb/s uplinks
       Add a second switch if each node uses 4 network interfaces
    • Notes on Rack Configuration
       Provide plenty of network bandwidth: match it to aggregate disk I/O
       Use multiple NICs per node
       Make sure the switches can handle the resulting throughput
    • Operating System Requirements
    • Operating System Requirements
      OS requirements:
       64-bit CentOS 5.4 or greater
       64-bit Red Hat 5.4 or greater
       64-bit Ubuntu 9.04 or greater
       64-bit SUSE Enterprise 11.0 or greater
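A quick way to check a candidate node against the list above; a sketch only, since release-file names vary slightly between these distributions:

```shell
# Confirm the kernel is 64-bit: expect x86_64 on a supported system.
uname -m

# Identify the distribution and version. CentOS/Red Hat use
# /etc/redhat-release, SUSE uses /etc/SuSE-release, and Ubuntu
# provides /etc/lsb-release; the glob covers all of them.
cat /etc/*release 2>/dev/null
```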
    • Node Configuration
    • Node Configuration
      Each node must have:
       A unique hostname
       Forward/reverse name resolution with all other nodes
       A MapR-specific user
    • Node Configuration
       Hardware
        – If RAID is used for MapR disks, use -w 1 with disksetup
       Software and operating system
        – Java
       Networking
        – Keyless ssh
        – Forward and reverse hostname resolution between all nodes
       Configuration
        – Same users/groups on all nodes, or integration with LDAP (via PAM) for common unified authentication
        – NTP: all nodes sync to one internal NTP server, which in turn syncs to an outside source
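The networking and NTP items above might be prepared along these lines. This is a sketch: the node names and NTP server are placeholders, and the commands assume root on a typical Linux node:

```shell
# Placeholder node list; substitute the cluster's real hostnames.
NODES="node1 node2 node3"

# Keyless ssh: generate one key, then push it to every node.
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for n in $NODES; do ssh-copy-id "$n"; done

# Check forward resolution for every node from this host;
# repeat on each node so resolution works between all pairs.
for n in $NODES; do getent hosts "$n"; done

# NTP: point each node at one internal server (placeholder name),
# and let that server sync to an outside source.
echo 'server ntp.internal.example.com' >> /etc/ntp.conf
```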
    • Service Layout© 2012 MapR Technologies Planning a Cluster 17
    • Example: Small Cluster Admin Services (diagram)
    • Service Instances

      Service              Package                   How Many
      CLDB                 mapr-cldb                 1-3
      FileServer           mapr-fileserver           Most or all nodes
      HBase Master         mapr-hbase-master         1-3
      HBase RegionServer   mapr-hbase-regionserver   Varies
      JobTracker           mapr-jobtracker           1-3
      NFS                  mapr-nfs                  Varies
      TaskTracker          mapr-tasktracker          Most or all nodes
      WebServer            mapr-webserver            One or more
      ZooKeeper            mapr-zookeeper            1, 3, 5, or a higher odd number
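Given the package names in the table, installation on a Red Hat-style node might look like the sketch below. It assumes the MapR yum repository is already configured, and the role split shown is only an example:

```shell
# Worker node: file storage plus MapReduce task execution.
yum install -y mapr-fileserver mapr-tasktracker

# Control node: cluster metadata, coordination, and the web UI.
yum install -y mapr-cldb mapr-zookeeper mapr-jobtracker mapr-webserver
```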
    • HA Cluster Design
    • HA Cluster Design
       M5 license required for HA
       Number of instances of each service
       Placement of services on different racks
       No single point of failure
       Failover support for:
        – JobTracker
        – CLDB
        – NFS
        – ZooKeeper
    • HA Cluster Design
      Note: it is not a requirement to put admin services on the first node, but it is a useful convention.
    • LAB: Planning
    • Questions