Cluster computer

Computers that work together so that in many respects they can be viewed as a single system.

  1. By: Ashraful Hoda (Ashraful.hoda01@mail.com)
  2. • Introduction
     • History
     • Why cluster computing?
     • Architecture
     • Clustering concept
     • Several applications
     • Operating systems
     • Companies that use it
     • High-performance clusters (HPC)
  3. A computer cluster consists of a set of loosely connected computers that work together so that in many respects they can be viewed as a single system. A cluster consists of:
     • Nodes (master + computing)
     • Network
     • OS
     • Cluster middleware, which permits cluster computing programs to be portable to a wide variety of clusters (a minimal MPI example follows below)
     [Diagram: CPUs connected by a high-speed local network, with cluster middleware layered on top.]
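
     As an illustration of the middleware layer, the following is a minimal MPI program (MPI is one common example of such middleware; the code is an illustrative sketch, not from the original deck). The same source runs on any cluster whose middleware provides an MPI implementation, with each process reporting its rank:

         /* Minimal MPI program: every process reports who it is. */
         #include <mpi.h>
         #include <stdio.h>

         int main(int argc, char **argv)
         {
             int rank, size;
             MPI_Init(&argc, &argv);               /* join the parallel job */
             MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's id     */
             MPI_Comm_size(MPI_COMM_WORLD, &size); /* total process count   */
             printf("Hello from process %d of %d\n", rank, size);
             MPI_Finalize();
             return 0;
         }
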
  4. INTRODUCTION
     • Consists of many machines of the same or similar type.
     • Tightly coupled using dedicated network connections.
     • The components of a cluster are usually connected to each other through fast local area networks, with each node running its own instance of an operating system.
     • All machines share resources.
     • The machines must trust each other, so that access between them does not require a password; otherwise you would need to do a manual start on each machine.
  5. HISTORY
     • The first commercial clustering product was ARCnet, developed by Datapoint in 1977.
     • Digital Equipment Corporation released its VAXcluster product in 1984 for the VAX/VMS operating system.
     • The ARCnet and VAXcluster products not only supported parallel computing but also shared file systems and peripheral devices.
     • The idea was to provide the advantages of parallel processing while maintaining data reliability and uniqueness.
  6. Through CLUSTERS
     • Data sharing
     • Message passing and communication
     • Task scheduling
     • Node failure management
     • Parallel programming
     • Debugging and monitoring
  7. Logical view [diagram only]
  8. ARCHITECTURE
     • A cluster consists of a collection of interconnected stand-alone computers cooperatively working together as a single, integrated computing resource.
     • A node: a single- or multi-processor system with memory, I/O facilities, and an OS.
     • Generally two or more computers (nodes) are connected together, either in a single cabinet or physically separated and connected via a LAN; they appear as a single system to users and applications and provide a cost-effective way to gain features and benefits.
  9. ARCHITECTURE: Database replication clusters
  10. The components required for the development of low-cost clusters are:
      • Processors
      • Memory
      • Networking components
      • Motherboards, buses, and other sub-systems
  11. Beowulf cluster
      • Started in 1994, when Donald Becker of NASA assembled this cluster; such systems are called Beowulf clusters.
      • Used for applications like data mining, simulations, parallel processing, weather modeling, etc.
  12. A Beowulf cluster is a computer design that uses parallel processing across multiple computers to create cheap and powerful supercomputers. A Beowulf cluster in practice is usually a collection of generic computers connected through an internal network. A cluster has two types of computers: a master computer and node computers. When a large problem or set of data is given to a Beowulf cluster, the master computer first runs a program that breaks the problem into small discrete pieces; it then sends a piece to each node to compute. As nodes finish their tasks, the master computer continually sends more pieces to them until the entire problem has been computed (see the sketch after the next slide).
  13. (Ethernet, Myrinet, ...) + (MPI)
      • Master: also called the service node or front node; used to interact with users and manage the cluster.
      • Nodes: a group of computers (computing nodes; no keyboard, mouse, floppy, or video of their own).
      • Communication between nodes runs over an interconnect network platform (Ethernet, Myrinet, ...).
      • For the master and node computers to communicate, some sort of message-passing control structure is required; MPI (Message Passing Interface) is the most commonly used one.
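
      To make the master/node pattern concrete, here is a sketch of the task farm described on the last two slides, written with MPI (an illustrative sketch, not the original author's code; compute() is a hypothetical stand-in for the real per-piece work). Rank 0 plays the master: it seeds every computing node with one piece, then keeps sending pieces as results come back until the whole problem is done:

          /* Task-farm sketch: the master (rank 0) distributes NPIECES work
           * items; computing nodes (ranks > 0) process them. */
          #include <mpi.h>
          #include <stdio.h>

          #define NPIECES  100
          #define TAG_WORK 1
          #define TAG_DONE 2

          /* Hypothetical stand-in for the real per-piece computation. */
          static double compute(int piece) { return (double)piece * piece; }

          int main(int argc, char **argv)
          {
              int rank, size;
              MPI_Init(&argc, &argv);
              MPI_Comm_rank(MPI_COMM_WORLD, &rank);
              MPI_Comm_size(MPI_COMM_WORLD, &size);

              if (rank == 0) {                      /* master / front node */
                  int next = 0, active = 0, stop = -1;
                  double result, total = 0.0;
                  MPI_Status st;

                  /* Seed every computing node with one piece. */
                  for (int w = 1; w < size && next < NPIECES; w++) {
                      MPI_Send(&next, 1, MPI_INT, w, TAG_WORK, MPI_COMM_WORLD);
                      next++; active++;
                  }
                  /* Nodes left over after seeding have nothing to do. */
                  for (int w = active + 1; w < size; w++)
                      MPI_Send(&stop, 1, MPI_INT, w, TAG_DONE, MPI_COMM_WORLD);

                  /* Collect results; hand out remaining pieces as nodes finish. */
                  while (active > 0) {
                      MPI_Recv(&result, 1, MPI_DOUBLE, MPI_ANY_SOURCE,
                               MPI_ANY_TAG, MPI_COMM_WORLD, &st);
                      total += result;
                      active--;
                      if (next < NPIECES) {
                          MPI_Send(&next, 1, MPI_INT, st.MPI_SOURCE,
                                   TAG_WORK, MPI_COMM_WORLD);
                          next++; active++;
                      } else {
                          MPI_Send(&stop, 1, MPI_INT, st.MPI_SOURCE,
                                   TAG_DONE, MPI_COMM_WORLD);
                      }
                  }
                  printf("All %d pieces done, total = %g\n", NPIECES, total);
              } else {                              /* computing node */
                  for (;;) {
                      int piece;
                      MPI_Status st;
                      MPI_Recv(&piece, 1, MPI_INT, 0, MPI_ANY_TAG,
                               MPI_COMM_WORLD, &st);
                      if (st.MPI_TAG == TAG_DONE) break;
                      double r = compute(piece);
                      MPI_Send(&r, 1, MPI_DOUBLE, 0, TAG_WORK, MPI_COMM_WORLD);
                  }
              }

              MPI_Finalize();
              return 0;
          }

      With an MPI implementation installed, a program like this is typically compiled with mpicc and launched across the nodes with mpirun.
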
  14. Brief technical parameters:
      • OS: CentOS 5, managed by Rocks (cluster management software)
      • Service node: 1 (Intel P4, 2.4 GHz)
      • Computing nodes: 32 (Intel P4, 2.4-2.8 GHz)
      • System memory: 1 GB per node
      • Network platforms: Gigabit Ethernet (2 cards per node); Myrinet 2G
      • Languages: C, C++, Fortran, Java
      • Compilers: Intel compiler, Sun Java compiler
  15. • Science computation
      • Digital biology
      • Aerospace
      • Resources exploration
  16. High-performance networks/switches:
      a. Ethernet (10 Mbps)
      b. Fast Ethernet (100 Mbps)
      c. Gigabit Ethernet (1 Gbps)
      d. ATM
      e. Myrinet (1.2 Gbps)
      f. Digital Memory Channel
  17. ISSUES TO BE CONSIDERED
      • Cluster networking
      • Cluster software
      • Programming
      • Timing
      • Network selection
      • Speed selection
  18. OS (operating system): three of the most commonly used are:
      • Windows: mainly used to build a high-availability cluster or an NLB (Network Load Balancing) cluster, providing services such as database, file/print, web, and streaming media; supports 2-4-way SMP or up to 32 processors; hardly ever used to build a science-computing cluster.
      • Red Hat Linux: the most used OS for a Beowulf cluster; provides high performance and scalability, high reliability, and low cost (it can be obtained freely and uses inexpensive commodity hardware).
      • Sun Solaris: uses expensive and less widespread hardware.
  19. State-of-the-art operating systems, with companies:
      a. Linux
      b. Microsoft NT
      c. Sun Solaris
      d. IBM AIX
      e. HP UX
      (Illinois - PANDA)
  20. [Diagram only; no text on this slide.]
  21. Calculation procedure for peak performance:
      • Number of nodes: 64
      • Number of master nodes: 1
      • Memory (RAM): 4 GB
      • Hard disk capacity per node: 250 GB
      • Storage capacity: 4 TB
      • Cluster software: ROCKS version 4.3
      • Number of processors and cores: 2 × 2 = 4 (dual core + dual socket)
      • CPU speed: 2.6 GHz
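
      A worked version of the peak-performance arithmetic this slide implies (reading the 2 × 2 = 4 cores as per node; the 4 FLOPs per cycle per core is an assumption, typical of SSE2-era x86, and is not stated on the slide):

          peak = nodes × cores/node × clock × FLOPs/cycle/core
               = 64 × 4 × 2.6 GHz × 4
               = 2,662.4 GFLOPS ≈ 2.66 TFLOPS

      With 2 FLOPs per cycle per core the same formula gives about 1.33 TFLOPS, so the assumed per-core issue rate matters as much as the node count.
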
  22. ANY QUERIES?
