CLUSTER COMPUTING
CLUSTER COMPUTING Presentation Transcript

  • 1. PRESENTED BY: MOHD UMAR, M.TECH 1st YEAR CSE, 09D11D5812
  • 2.
      • A cluster is a type of parallel or distributed processing system consisting of a collection of interconnected stand-alone computers that cooperatively work together as a single, integrated computing resource.
      • The computers in a cluster share common network characteristics, such as a single namespace, and the cluster is available to other computers on the network as a single resource.
  • 3.
      • High cost of ‘traditional’ High Performance Computing:
    • Clustering using commercial off-the-shelf (COTS) hardware is far cheaper than buying specialized machines for computing.
      • Increased need for High Performance Computing:
    • As processing power becomes available, applications that require enormous amounts of processing, such as weather modeling, are becoming more commonplace and demand the high-performance computing that clusters provide.
  • 4.
      • Myricom :
    • Myricom offers cards and switches that interconnect at speeds of up to 1.28 Gbps in each direction. The cards come in two forms: copper-based and optical.
      • Giganet:
    • Giganet is the first vendor of Virtual Interface (VI) architecture cards for the Linux platform, in its cLAN cards and switches. It uses its own network communications protocol rather than IP to exchange data directly between servers, and it is not intended to be a WAN-routable system.
  • 5.
      • IEEE SCI :
    • SCI is a ring-topology-based networking system, unlike the star topology of Ethernet. This makes communication between nodes faster at larger scales.
    • The IEEE SCI standard has even lower latencies (under 2.5 microseconds) and can run at 400 MB per second.
  • 6. Closed Clusters: They hide most of the cluster behind the gateway node. Consequently, they need fewer IP addresses and provide better security. They are well suited to computing tasks.
  • 7. Open Clusters: All nodes can be seen from outside, so they need more IP addresses and raise more security concerns. But they are more flexible and are used for internet/web/information-server tasks.
  • 8.
      • Beowulf Cluster:
    • The Beowulf architecture is a multi-computer architecture used for parallel computation. Beowulf clusters are therefore meant primarily for processor-intensive, number-crunching applications, and definitely not for storage applications.
  • 9.
      • Homogeneous and Heterogeneous Clusters
      • Diskless versus “Disk full” Configurations
      • Network Selection
      • Security Considerations
  • 10.
      • Parallel programming requires skill and creativity and may be more challenging than assembling the hardware of a Beowulf system. The most common model for programming Beowulf clusters is a master-slave arrangement. In this model, one node acts as the master, directing the computations performed by one or more tiers of slave nodes.
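  • The master-slave arrangement described above can be sketched on a single machine with Python's standard multiprocessing module. This is only an analogy: on a real Beowulf cluster the slaves would be separate nodes reached via MPI or PVM, whereas here they are local worker processes, and all names (slave_task, master) are illustrative.

```python
# Single-machine analogy of the master-slave model: the master splits the
# problem, hands chunks to slave processes, and combines the results.
from multiprocessing import Pool

def slave_task(chunk):
    """Work performed on one slave: sum a chunk of numbers."""
    return sum(chunk)

def master(data, n_slaves=4, chunk_size=25):
    # The master directs the computation: partition the data...
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    # ...farm the pieces out to the slaves...
    with Pool(n_slaves) as pool:
        partials = pool.map(slave_task, chunks)
    # ...and combine the partial results.
    return sum(partials)

if __name__ == "__main__":
    print(master(list(range(100))))  # 0 + 1 + ... + 99 = 4950
```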
  • 11.
      • Programming a Beowulf cluster can be done in three ways:
      • Using parallel message passing library such as PVM and MPI
      • Using parallel language such as High Performance Fortran and OpenMP
      • Using parallel math library
  • 12.
      • Using parallel message passing library such as PVM and MPI:
    • PVM - Parallel Virtual Machines:
    • PVM appeared before MPI. It is flexible for non-dedicated clusters and easy to use, but it has lower performance and is less feature-rich than MPI.
    • MPI - Message Passing Interface:
    • A standard message-passing interface for programming clusters and parallel systems, defined by the MPI Forum. It is easy to use.
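    • The point-to-point message-passing style that PVM and MPI provide can be mimicked on one machine with multiprocessing.Pipe. This is a hedged sketch, not real MPI: send()/recv() here merely stand in for MPI_Send/MPI_Recv, and real MPI adds ranks, communicators, and datatypes and runs across nodes.

```python
# Two processes exchanging messages, in the spirit of MPI_Send/MPI_Recv.
from multiprocessing import Process, Pipe

def worker(conn):
    # "Rank 1": receive a message, transform it, send the result back.
    numbers = conn.recv()
    conn.send([n * n for n in numbers])
    conn.close()

def run_exchange(numbers):
    # "Rank 0": the master side of the exchange.
    parent_conn, child_conn = Pipe()
    p = Process(target=worker, args=(child_conn,))
    p.start()
    parent_conn.send(numbers)    # analogous to MPI_Send
    result = parent_conn.recv()  # analogous to a blocking MPI_Recv
    p.join()
    return result

if __name__ == "__main__":
    print(run_exchange([1, 2, 3]))  # [1, 4, 9]
```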
  • 13.
    • Advantages and Disadvantages of Programming Using Message passing:
      • The main advantages are that these are standards, and hence portable, and that they provide higher performance than the other approaches.
      • The disadvantage is that programming with them is quite difficult.
  • 14.
      • Using parallel language such as High Performance Fortran and OpenMP
      • The High Performance Fortran Forum (HPFF), a coalition of industry, academic, and laboratory representatives, works to define a set of extensions to Fortran 90 known collectively as High Performance Fortran (HPF). HPF extensions provide access to high-performance architecture features while maintaining portability across platforms.
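      • The data-parallel style that HPF and OpenMP offer — one loop body applied to many elements, with the distribution handled for you — can be illustrated with Python's standard concurrent.futures. This is only an analogy: in HPF the compiler distributes array elements across processors, while here ProcessPoolExecutor.map plays that role on one machine, and all names are illustrative.

```python
# A data-parallel loop: apply one element operation across a collection,
# letting the runtime spread the work over worker processes.
from concurrent.futures import ProcessPoolExecutor

def element_op(x):
    # The per-element body of the parallel loop (cf. FORALL in HPF).
    return 2 * x + 1

def parallel_map(values, workers=4):
    with ProcessPoolExecutor(max_workers=workers) as ex:
        return list(ex.map(element_op, values))

if __name__ == "__main__":
    print(parallel_map([0, 1, 2, 3]))  # [1, 3, 5, 7]
```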
  • 15.
      • Advantages and Disadvantages of Programming Using Parallel Languages:
      • The advantage of programming using parallel languages is that it is easy to code and portable.
      • The disadvantage is lower performance and limited scalability.
  • 16.
      • Using parallel math library:
      • By using parallel math libraries, the complexity of writing parallel code is avoided. Some examples are the PETSc, PLAPACK, and ScaLAPACK math libraries.
      • PLAPACK:
    • PLAPACK provides three unique features:
    • 1. Physically based matrix distribution
    • 2. API to query matrices and vectors
    • 3. Programming interface that allows object oriented programming
  • 17.
      • ScaLAPACK:
    • It contains routines for solving systems of linear equations. Most machine dependencies are limited to two standard libraries: the PBLAS (Parallel Basic Linear Algebra Subprograms) and the BLACS (Basic Linear Algebra Communication Subprograms).
      • PETSc:
    • PETSc facilitates the integration of independently developed application modules, which often most naturally employ different coding styles and data structures.
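      • The core job these libraries do — solving a dense linear system A x = b — can be sketched as a tiny sequential Gaussian elimination with partial pivoting. This is only an illustration of what a ScaLAPACK LU-based solve computes; the real library distributes the matrix block-cyclically over a process grid and runs the factorization in parallel.

```python
# Solve A x = b by Gaussian elimination with partial pivoting.
# Sequential stand-in for the distributed solvers in ScaLAPACK/PLAPACK.
def solve(A, b):
    n = len(A)
    # Augmented working copy so the inputs are not modified.
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        # Partial pivoting: bring the largest pivot into this row.
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        # Eliminate the column below the pivot.
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    # Back substitution on the upper-triangular system.
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

if __name__ == "__main__":
    # 2x + y = 5 and x + 3y = 10 give x = 1, y = 3.
    print(solve([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0]))
```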
  • 18.
      • The concept of clusters is an empowering force. It wrests high-level computing away from the privileged few and makes low-cost parallel-processing systems available to those with modest resources. Research groups, high schools, colleges, or small businesses can build or buy their own Beowulf clusters, realizing the promise of a supercomputer in every basement.
  • 19. THANK YOU
  • 20. QUERIES?