Design Alternative for Parallel Systems: Presentation Transcript
Design Alternative for Parallel Systems

[root@aissms ~]# mount /dev/Parallex /mnt/presentation

Presented by:
- Amit Kumar          B32*****7
- Ankit Singh         B32*****8
- Sushant Bhadkamkar  B32*****2

Guide: Mr. Anil J. Kadam
Department of Computer Engineering,
AISSMS College of Engineering, Pune - 1

[root@aissms ~]# cat /mnt/presentation/AUTHORS
Overview

[root@aissms ~]# tree /mnt/presentation

- Introduction: What is parallel computing?
  - Introduction to parallel computing
  - Who uses parallel computing?
  - Why parallel computing?
- Hardware & software resources
- Technical design overview
- Implementation briefing
- Phase I results
- Applications
- Advantages
- Conclusion
- References
Introduction: What Is Parallel Computing?

[root@aissms ~]# grep Introduction /mnt/parallex

Parallel computing is the simultaneous execution of the same task (split up and specially adapted) on multiple processors in order to obtain results faster. In the simplest sense, it is the simultaneous use of multiple compute resources to solve a computational problem:
- The problem is run using multiple CPUs.
- The problem is broken into discrete parts that can be solved concurrently.
Introduction: Amdahl's Law

[root@aissms ~]# grep Amdahl /mnt/parallex

If the sequential component of an algorithm accounts for 1/s of the program's execution time, then the maximum possible speedup that can be achieved on a parallel computer is s.
Introduction: Who Uses Parallel Computing?

[root@aissms ~]# awk '/USAGE/' /mnt/parallex
Introduction: Why Parallel Computing?

[root@aissms ~]# sed -n '/PARALLEL/p' /mnt/parallex

The primary reasons for using parallel computing:
- Save time (wall-clock time)
- Solve larger problems
- Provide concurrency (do multiple things at the same time)
- Take advantage of non-local resources
- Cost savings

Limits to serial computing:
- Transmission speeds
- Limits to miniaturization
- Economic limitations
Hardware and Software Resources

[root@aissms ~]# cat Hardware | more

Hardware:
- i686-class PCs (installed with intranet connection)
- Switch
- Serial port connectors
- 100BASE-T LAN cable, RJ45 connectors

Software:
- Linux (2.6.x kernel)
- Intel Compiler Suite (noncommercial)
- LSB (Linux Standard Base)
- GNU toolchain: GNU CC/C++/F77/LD/AS
Phase I Implementation

[root@aissms ~]# echo

- NFS mounted on all nodes (implementing shared memory).
- Node status: a test application is sent to all hosts to determine the current load on each processor.
- A distribution algorithm breaks the task up according to the load capacity of each processor, as reported by the test application.
- The server receives all task results, integrates them, and displays the output on the server terminal.
Phase I Results

[root@aissms ~]# echo $RESULTS

Execution of the application on a single machine.
Applications

[root@aissms ~]#

- High-processing-requirement tasks:
  - Molecular dynamics
  - Astronomical modeling
  - Data mining
  - Image rendering
- Clustering is now used for mission-critical applications such as web and FTP servers.
- Google uses an ever-growing cluster composed of tens of thousands of computers.
- Scientific calculations consisting of complex numerical computations.
Advantages

[root@aissms ~]# ls -lh Advantages*

- Parallelism implemented at every level.
- Parallel system implemented on already-available hardware.
- Diskless technology:
  - Cost (central storage solution)
  - Error recovery
  - Initialization
- Optimum utilization of available resources.
Conclusion

[root@aissms ~]# echo $CONCLUSION

By implementing parallelism at all levels and making efficient use of available hardware resources, we attempt to provide a cost-effective solution for small and medium-scale businesses and research institutes. We are also in the process of developing a mini supercomputer.
References

[root@aissms ~]# find / -name "*Parallex*"

- Parallel Computer Architecture: A Hardware/Software Approach. David Culler. Morgan Kaufmann Publishers, San Francisco, CA.
- High Performance Computing, 2nd Edition. Kevin Dowd and Charles Severance. O'Reilly and Associates, Sebastopol, CA.
- Sourcebook of Parallel Computing. Jack Dongarra. Morgan Kaufmann Publishers, San Francisco, CA.
- High Performance Linux Clusters. Joseph Sloan. O'Reilly Media Inc., Sebastopol, CA.
- Parallel Computing on Heterogeneous Networks. Alexey L. Lastovetsky.
- Kernel sources from http://www.kernel.org