Design Alternative for Parallel Systems

Bachelor Thesis on Parallel computing.

Usage Rights

© All Rights Reserved

    Design Alternative for Parallel Systems: Presentation Transcript

    • Design Alternative for Parallel Systems
      [root@aissms ~]# mount /dev/Parallex /mnt/presentation
      Presented by:
      Amit Kumar          B32*****7
      Ankit Singh         B32*****8
      Sushant Bhadkamkar  B32*****2
      Guide: Mr. Anil J. Kadam
      Department of Computer Engineering, AISSMS College of Engineering, Pune - 1
      [root@aissms ~]# cat /mnt/presentation/AUTHORS
    • Overview
      [root@aissms ~]# tree /mnt/presentation
      - Introduction: What is parallel computing? Who uses it, and why?
      - Hardware & software resources
      - Technical design overview
      - Implementation briefing
      - Phase I results
      - Applications
      - Advantages
      - Conclusion
      - References
    • Introduction - What is parallel computing?
      [root@aissms ~]# grep Introduction /mnt/parallex
      Parallel computing is the simultaneous execution of the same task (split
      up and specially adapted) on multiple processors in order to obtain
      results faster. In the simplest sense, it is the simultaneous use of
      multiple compute resources to solve a computational problem:
      - The problem is run using multiple CPUs.
      - It is broken into discrete parts that can be solved concurrently.
    • Introduction - Amdahl's Law
      [root@aissms ~]# grep Amdahl /mnt/parallex
      If the sequential component of an algorithm accounts for 1/s of the
      program's execution time, then the maximum possible speedup that can be
      achieved on a parallel computer is s.
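A quick numeric illustration of the law as stated above, with sequential fraction f = 1/s, so the speedup on n processors is 1 / (f + (1 - f)/n), approaching s as n grows:

```python
# Amdahl's Law: sequential fraction f caps the achievable speedup at 1/f,
# no matter how many processors are added.

def speedup(f, n):
    """Predicted speedup for sequential fraction f on n processors."""
    return 1.0 / (f + (1.0 - f) / n)

f = 0.1  # 1/10 of the program is sequential, so s = 10
print(round(speedup(f, 4), 2))       # -> 3.08, well under 4x on 4 CPUs
print(round(speedup(f, 10_000), 2))  # -> 9.99, even 10,000 CPUs stay below 10
```

This is why the thesis emphasizes breaking the task into parts that can run concurrently: only the parallelizable fraction benefits from more nodes.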
    • Introduction - Who uses parallel computing?
      [root@aissms ~]# awk '/USAGE/' /mnt/parallex
    • Introduction - Why parallel computing?
      [root@aissms ~]# sed -n '/PARALLEL/p' /mnt/parallex
      The primary reasons for using parallel computing:
      - Save time (wall-clock time)
      - Solve larger problems
      - Provide concurrency (do multiple things at the same time)
      - Take advantage of non-local resources
      - Save costs
      Limits to serial computing:
      - Transmission speeds
      - Limits to miniaturization
      - Economic limitations
    • Hardware and Software Resources
      [root@aissms ~]# cat Hardware | more
      Hardware:
      - x686-class PCs (installed with intranet connection)
      - Switch
      - Serial port connectors
      - 100BASE-T LAN cable, RJ45 connectors
      Software:
      - Linux (2.6.x kernel)
      - Intel Compiler Suite (noncommercial)
      - LSB (Linux Standard Base) set of GNU kits with GNU CC/C++/F77/LD/AS
    • Design Overview
      [root@aissms ~]# echo $DESIGN_OVERVIEW
    • Phase I Implementation
      [root@aissms ~]# echo
      - NFS mounted on all nodes (implementing shared memory).
      - Status of nodes: a test application is sent to all hosts to determine
        the current load on each processor.
      - Developed a distribution algorithm that breaks the task according to
        the load capacity of each processor, as reported by the test app.
      - All tasks are received back by the server, which integrates the
        results and prints the output on the server terminal.
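The distribution step described above can be sketched as follows. This is a hypothetical illustration, not the thesis's actual algorithm: it assumes each node's test application reports a load score in [0, 1], and the names `split_by_capacity` and `node_loads` are invented for the example:

```python
# Hypothetical sketch of load-based work distribution: split total_units
# of work across nodes in proportion to each node's free capacity,
# where free capacity = 1.0 - reported load from the test app.

def split_by_capacity(total_units, node_loads):
    """Return a dict of work units per node, proportional to free capacity."""
    capacity = {node: 1.0 - load for node, load in node_loads.items()}
    total_capacity = sum(capacity.values())
    shares = {node: int(total_units * c / total_capacity)
              for node, c in capacity.items()}
    # Hand any rounding remainder to the least-loaded node.
    remainder = total_units - sum(shares.values())
    least_loaded = max(capacity, key=capacity.get)
    shares[least_loaded] += remainder
    return shares

# Example: loads as reported by the test application on three nodes.
loads = {"node1": 0.10, "node2": 0.50, "node3": 0.30}
print(split_by_capacity(100, loads))  # node1, being least loaded, gets most
```

The server would then dispatch each node's share, collect the partial results, and integrate them, as the slide describes.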
    • Phase I Results
      [root@aissms ~]# echo $CONCLUSION
      Execution of the application on a single machine.
    • Applications
      [root@aissms ~]#
      - High-processing-requirement tasks: molecular dynamics, astronomical
        modeling, data mining, image rendering
      - Clustering is now used for mission-critical applications such as web
        and FTP servers
      - Google uses an ever-growing cluster composed of tens of thousands of
        computers
      - Scientific calculations consisting of complex numerical computations
    • Advantages
      [root@aissms ~]# ls -lh Advantages*
      - Parallelism implemented at every level.
      - Parallel system implemented on available hardware.
      - Diskless technology:
        - cost (central storage solution)
        - error recovery
        - initialization
      - Optimum utilization of available resources.
    • Conclusion
      [root@aissms ~]# echo $CONCLUSION
      By implementing parallelism at all levels and making efficient use of
      the available hardware resources, we attempt to provide a cost-effective
      solution for small and medium scale businesses and research institutes.
      We are also in the process of developing a mini supercomputer.
    • References
      [root@aissms ~]# find / -name "*Parallex*"
      [1] Culler, David. Parallel Computer Architecture: A Hardware/Software
          Approach. Morgan Kaufmann Publishers, San Francisco, CA.
      [2] Dowd, Kevin and Charles Severance. High Performance Computing,
          2nd Edition. O'Reilly and Associates, Sebastopol, CA.
      [3] Dongarra, Jack. Sourcebook of Parallel Computing. Morgan Kaufmann
          Publishers, San Francisco, CA.
      [4] Sloan, Joseph. High Performance Linux Clusters. O'Reilly Media Inc.,
          Sebastopol, CA.
      [5] Lastovetsky, Alexey L. Parallel Computing on Heterogeneous Networks.
      [6] Kernel sources from http://www.kernel.org
    • Thank you!
      [root@aissms ~]# killall Parallex
      Any questions?