13c planning

  1. Planning a Cluster (7/6/2012) © 2012 MapR Technologies
  2. Agenda
     • Hardware Requirements
     • Hardware Recommendations
     • Operating System Requirements
     • Node Configuration
     • Service Layout
     • HA Cluster Design
     • LAB: Planning
  3. Objectives
     At the end of this module you will be able to:
     • List the minimum hardware and software requirements for MapR
     • List the recommended hardware configuration
     • Describe typical 50 TB and 100 TB rack configurations
     • Explain how MapR services are arranged on a cluster
     • Identify the important considerations of HA cluster design
  4. Hardware Requirements
  5. Minimum hardware requirements:
     • 64-bit processor(s)
     • 4 GB DRAM
     • One 1 GbE network interface
     • One free unmounted drive or partition, 100 GB or larger
     • At least 10 GB free space on the OS partition
     • Swap space = 2x RAM, or set overcommit_memory to 1
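A minimal preflight sketch for the requirements above, using only standard Linux interfaces (/proc/meminfo and /proc/sys); this is an illustrative script, not a MapR-provided tool:

```shell
#!/bin/sh
# Check a few of the stated minimums: 4 GB DRAM, and either
# swap = 2x RAM or vm.overcommit_memory set to 1.
MIN_RAM_KB=$((4 * 1024 * 1024))   # 4 GB expressed in kB

ram_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
swap_kb=$(awk '/^SwapTotal:/ {print $2}' /proc/meminfo)
overcommit=$(cat /proc/sys/vm/overcommit_memory)

echo "RAM: ${ram_kb} kB (minimum ${MIN_RAM_KB} kB)"

# Either swap of at least 2x RAM, or overcommit_memory = 1, satisfies the slide.
if [ "$swap_kb" -ge $((2 * ram_kb)) ] || [ "$overcommit" -eq 1 ]; then
  echo "swap/overcommit: OK"
else
  echo "swap/overcommit: needs attention (swap=${swap_kb} kB, overcommit_memory=${overcommit})"
fi
```

Run this on each candidate node before installation; disk and network checks would be site-specific additions.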
  6. Hardware Recommendations
  7. Recommended hardware configuration:
     • 64-bit processor with 8-12 cores
     • 32 GB DRAM or more
     • Two GigE network interfaces
     • 3-12 disks of 1-3 TB each
     • At least 20 GB of free space on the OS partition
     • 32 GB swap space or more
  8. Typical compute/storage node:
     • 2U chassis
     • Single motherboard, dual socket
     • 2 x 4-core CPUs + 32 GB RAM, or 2 x 6-core CPUs + 48 GB RAM
     • 12 x 2 TB 7200-RPM drives
     • 2-4 1 GbE network interfaces (on-board NIC + additional NIC)
     • OS on a single partition on one drive (remainder of the drive used for storage)
  9. Typical 50 TB rack configuration:
     • 10 typical compute/storage nodes (10 x 12 x 2 TB storage; 3x replication, 25% margin)
     • 24-port 1 Gb/s rack-top switch with 2 x 10 Gb/s uplinks
     • Add a second switch if each node uses 4 network interfaces
  10. Typical 100 TB rack configuration:
      • 20 typical nodes (20 x 12 x 2 TB storage; 3x replication, 25% margin)
      • 48-port 1 Gb/s rack-top switch with 4 x 10 Gb/s uplinks
      • Add a second switch if each node uses 4 network interfaces
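The capacity figures in the two rack configurations above follow from simple arithmetic; the sketch below computes raw and post-replication capacity (the rated 50/100 TB figures are lower than the post-replication numbers, leaving the stated margin plus working headroom for temporary data):

```shell
#!/bin/sh
# Rack capacity arithmetic (units: TB).
rack_capacity() {
  nodes=$1; disks_per_node=$2; tb_per_disk=$3; replication=$4
  raw=$((nodes * disks_per_node * tb_per_disk))
  usable=$((raw / replication))
  echo "$raw $usable"
}

# 50 TB rack: 10 nodes x 12 disks x 2 TB, 3x replication
set -- $(rack_capacity 10 12 2 3)
raw_50=$1; usable_50=$2
echo "50 TB rack: ${raw_50} TB raw, ${usable_50} TB after 3x replication"

# 100 TB rack: 20 nodes x 12 disks x 2 TB, 3x replication
set -- $(rack_capacity 20 12 2 3)
raw_100=$1; usable_100=$2
echo "100 TB rack: ${raw_100} TB raw, ${usable_100} TB after 3x replication"
```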
  11. Notes on rack configuration:
      • Provide plenty of bandwidth: match network capacity to disk I/O
      • Use multiple NICs per node
      • Make sure switches can handle the throughput
  12. Operating System Requirements
  13. OS requirements (all 64-bit):
      • CentOS 5.4 or greater
      • Red Hat 5.4 or greater
      • Ubuntu 9.04 or greater
      • SUSE Enterprise 11.0 or greater
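A quick architecture check corresponding to the 64-bit requirement above; a sketch only, since distro and version detection varies across the listed distributions (some of which predate /etc/os-release):

```shell
#!/bin/sh
# Verify the kernel reports a 64-bit architecture.
arch=$(uname -m)
if [ "$arch" = "x86_64" ]; then
  echo "arch: $arch (64-bit OK)"
else
  echo "arch: $arch (not x86_64; confirm this is a supported 64-bit platform)"
fi
```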
  14. Node Configuration
  15. Each node must have:
      • A unique hostname
      • Forward/reverse name resolution with all other nodes
      • A MapR-specific user
  16. Node configuration checklist:
      • Hardware
        – If RAID is used for MapR disks, use -w 1 with disksetup
      • Software and operating system
        – Java
      • Networking
        – Keyless ssh
        – Forward and reverse hostname resolution between all nodes
      • Configuration
        – Same users/groups on all nodes, or integration with LDAP (via PAM) for common unified authentication
        – NTP: all nodes sync to one internal NTP server, which in turn syncs to an outside source
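The forward/reverse resolution item above can be spot-checked with `getent`, which consults /etc/hosts as well as DNS. The NODES list below is a placeholder; substitute your cluster's hostnames:

```shell
#!/bin/sh
# Check forward (name -> IP) and reverse (IP -> name) resolution
# for each node in the list. "localhost" is used here only so the
# sketch runs anywhere; replace with e.g. "node1 node2 node3".
NODES="localhost"

for node in $NODES; do
  ip=$(getent hosts "$node" | awk '{print $1; exit}')
  if [ -z "$ip" ]; then
    echo "FAIL: no forward resolution for $node"
    continue
  fi
  name=$(getent hosts "$ip" | awk '{print $2; exit}')
  echo "OK: $node -> $ip -> $name"
done
```

Run this from every node against the full node list, since the slide requires resolution between all pairs of nodes.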
  17. Service Layout
  18. Example: Small Cluster (diagram: admin services layout)
  19. Service instances:

      Service              Package                    How Many
      CLDB                 mapr-cldb                  1-3
      FileServer           mapr-fileserver            Most or all nodes
      HBase Master         mapr-hbase-master          1-3
      HBase RegionServer   mapr-hbase-regionserver    Varies
      JobTracker           mapr-jobtracker            1-3
      NFS                  mapr-nfs                   Varies
      TaskTracker          mapr-tasktracker           Most or all nodes
      WebServer            mapr-webserver             One or more
      ZooKeeper            mapr-zookeeper             1, 3, 5, or a higher odd number
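The ZooKeeper row above requires an odd ensemble size so a majority quorum always exists. The helper below illustrates that rule; the node-count thresholds are illustrative assumptions, not MapR guidance:

```shell
#!/bin/sh
# Hypothetical sizing helper: always returns an odd ensemble size.
# Thresholds (3/20 nodes) are assumptions for illustration only.
zk_ensemble_size() {
  nodes=$1
  if [ "$nodes" -le 3 ]; then echo 1
  elif [ "$nodes" -le 20 ]; then echo 3
  else echo 5
  fi
}

small=$(zk_ensemble_size 3)
medium=$(zk_ensemble_size 10)
large=$(zk_ensemble_size 50)
echo "3 nodes -> $small, 10 nodes -> $medium, 50 nodes -> $large"
```

An even ensemble adds no fault tolerance over the next-lower odd number (4 nodes tolerate one failure, same as 3), which is why only odd sizes are listed.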
  20. HA Cluster Design
  21. HA cluster design considerations:
      • M5 license required for HA
      • Number of instances of each service
      • Placement of services on different racks
      • No single point of failure
      • Failover for:
        – JobTracker
        – CLDB
        – NFS
        – ZooKeeper
  22. Note: it is not a requirement to put admin services on the first node, but it is a useful convention
  23. LAB: Planning
  24. Questions
