Design Alternative for Parallel Systems

Bachelor Thesis on Parallel computing.

Transcript

  • 1. Design Alternative for Parallel Systems
       [root@aissms ~]# mount /dev/Parallex /mnt/presentation
       Presented by:
       Amit Kumar B32*****7
       Ankit Singh B32*****8
       Sushant Bhadkamkar B32*****2
       Guide: Mr. Anil J. Kadam
       Department of Computer Engineering, AISSMS College of Engineering, Pune - 1
       [root@aissms ~]# cat /mnt/presentation/AUTHORS
  • 2. Overview
       [root@aissms ~]# tree /mnt/presentation
       - Introduction: What is parallel computing?
         - Introduction to parallel computing
         - Who uses parallel computing?
         - Why parallel computing?
       - Hardware & software resources
       - Technical design overview
       - Implementation briefing
       - Phase I results
       - Applications
       - Advantages
       - Conclusion
       - References
  • 3. Introduction
       [root@aissms ~]# grep Introduction /mnt/parallex
       - What is parallel computing?
       Parallel computing is the simultaneous execution of the same task (split up and specially adapted) on multiple processors in order to obtain results faster.
       In the simplest sense, parallel computing is the simultaneous use of multiple compute resources to solve a computational problem:
       - The problem is run using multiple CPUs
       - The problem is broken into discrete parts that can be solved concurrently
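To make the "broken into discrete parts that can be solved concurrently" idea concrete, here is a minimal C sketch (an illustration of the general technique, not code from the Parallex project): a large summation is split into chunks, each chunk is summed by its own POSIX thread, and the partial results are combined at the end.

#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4
#define N 1000000

static double data[N];

struct chunk { int begin, end; double partial; };

/* Each thread sums one discrete part of the array. */
static void *sum_chunk(void *arg)
{
    struct chunk *c = arg;
    c->partial = 0.0;
    for (int i = c->begin; i < c->end; i++)
        c->partial += data[i];
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];
    struct chunk chunks[NTHREADS];
    double total = 0.0;

    for (int i = 0; i < N; i++)
        data[i] = 1.0;

    /* Break the problem into NTHREADS discrete parts and solve them concurrently. */
    for (int t = 0; t < NTHREADS; t++) {
        chunks[t].begin = t * (N / NTHREADS);
        chunks[t].end   = (t == NTHREADS - 1) ? N : (t + 1) * (N / NTHREADS);
        pthread_create(&tid[t], NULL, sum_chunk, &chunks[t]);
    }

    /* Integrate the partial results, as a master/server node would. */
    for (int t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);
        total += chunks[t].partial;
    }
    printf("total = %.0f\n", total);
    return 0;
}

Built with gcc -pthread, this runs the chunks in parallel on a multi-core PC; the same decomposition pattern applies when the parts are farmed out to separate cluster nodes instead of threads.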
  • 4. Introduction
       [root@aissms ~]# grep Introduction /mnt/parallex
       - Amdahl's Law
       If the sequential component of an algorithm accounts for 1/s of the program's execution time, then the maximum possible speedup that can be achieved on a parallel computer is s.
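The statement above can be checked numerically. The short C program below (an illustrative sketch, not part of the presented system) evaluates Amdahl's speedup bound 1 / (f + (1 - f)/N) for a serial fraction f = 1/s and shows that it approaches s no matter how many processors N are added.

#include <stdio.h>

/* Amdahl's Law: with serial fraction f and N processors,
 * speedup(N) = 1 / (f + (1 - f) / N), which tends to 1/f = s as N grows. */
static double amdahl_speedup(double serial_fraction, int processors)
{
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors);
}

int main(void)
{
    double f = 0.10;                      /* sequential part is 1/s = 10%, so s = 10 */
    int counts[] = { 2, 4, 8, 64, 1024 };

    for (int i = 0; i < 5; i++)
        printf("N = %4d  ->  speedup = %5.2f (limit s = %.0f)\n",
               counts[i], amdahl_speedup(f, counts[i]), 1.0 / f);
    return 0;
}

With f = 0.10 the printed speedups level off near the limit s = 10 even at 1024 processors, which is exactly the bound the slide states.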
  • 5. Introduction
       [root@aissms ~]# awk '/USAGE/' /mnt/parallex
       - Who uses parallel computing?
  • 6. Introduction
       [root@aissms ~]# sed -n '/PARALLEL/p' /mnt/parallex
       - Why parallel computing?
       The primary reasons for using parallel computing:
       - Save time (wall-clock time)
       - Solve larger problems
       - Provide concurrency (do multiple things at the same time)
       - Take advantage of non-local resources
       - Cost savings
       Limits to serial computing:
       - Transmission speeds
       - Limits to miniaturization
       - Economic limitations
  • 7. Hardware and Software Resources
       [root@aissms ~]# cat Hardware | more
       Hardware:
       - x686-class PCs (installed with intranet connection)
       - Switch
       - Serial port connectors
       - 100BASE-T LAN cable, RJ45 connectors
       Software:
       - Linux (2.6.x kernel)
       - Intel Compiler Suite (non-commercial)
       - LSB (Linux Standard Base) set of GNU kits with GNU CC/C++/F77/LD/AS
  • 8. Design Overview
       [root@aissms ~]# echo $DESIGN_OVERVIEW
  • 9. Phase I Implementation
       [root@aissms ~]# echo
       - NFS mounted on all nodes (used as the shared data store)
       - Status of nodes: a test application is sent to all hosts to determine the current load on each processor
       - Developed a distribution algorithm that breaks the task up according to the load capacity of each processor, as reported by the test application
       - All sub-task results are received by the server, which integrates them and prints the output on the server terminal
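The load-based split described on this slide can be sketched roughly as follows. The node count, the load figures, and the distribute() helper are hypothetical placeholders, not the actual Parallex implementation: the idea is simply that each node receives a share of the work proportional to its free capacity as measured by the test application.

#include <stdio.h>

#define NODES 4

/* Hypothetical sketch of the load-based distribution step: each node's share
 * of the total work is proportional to its free capacity (1.0 - measured load). */
static void distribute(const double load[], int shares[], int nodes, int total_work)
{
    double free_total = 0.0;
    int assigned = 0;

    for (int i = 0; i < nodes; i++)
        free_total += 1.0 - load[i];

    for (int i = 0; i < nodes; i++) {
        shares[i] = (int)(total_work * (1.0 - load[i]) / free_total);
        assigned += shares[i];
    }
    shares[0] += total_work - assigned;   /* give rounding leftovers to node 0 */
}

int main(void)
{
    double load[NODES] = { 0.10, 0.50, 0.25, 0.75 };  /* loads reported by the test app */
    int    shares[NODES];

    distribute(load, shares, NODES, 1000);
    for (int i = 0; i < NODES; i++)
        printf("node %d: load %.2f -> %d work units\n", i, load[i], shares[i]);
    return 0;
}

In the real system the shares would be handed to the remote hosts through the NFS-shared area and the partial outputs merged on the server terminal, as the slide describes.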
  • 10. Phase I Results
        [root@aissms ~]# echo $CONCLUSION
        Execution of the application on a single machine
  • 11. Applications
        [root@aissms ~]#
        - High-processing-requirement tasks:
          - Molecular dynamics
          - Astronomical modeling
          - Data mining
          - Image rendering
        - Clustering is now used for mission-critical applications such as web and FTP servers
        - Google uses an ever-growing cluster composed of tens of thousands of computers
        - Scientific calculations consisting of complex numerical computations
  • 12. Advantages
        [root@aissms ~]# ls -lh 'Advantages*'
        - Implemented parallelism at every level
        - Parallel system implemented on available hardware
        - Diskless technology:
          - Cost (central storage solution)
          - Error recovery
          - Initialization
        - Optimum utilization of available resources
  • 13. Conclusion
        [root@aissms ~]# echo $CONCLUSION
        By implementing parallelism at all levels and making efficient use of available hardware resources, we attempt to provide a cost-effective solution for small and medium-scale businesses and research institutes.
        We are also in the process of developing a mini supercomputer.
  • 14. References
        [root@aissms ~]# find / -name "*Parallex*"
        [1] David Culler. Parallel Computer Architecture: A Hardware/Software Approach. Morgan Kaufmann, San Francisco, CA.
        [2] Kevin Dowd and Charles Severance. High Performance Computing, 2nd Edition. O'Reilly & Associates, Sebastopol, CA.
        [3] Jack Dongarra et al. Sourcebook of Parallel Computing. Morgan Kaufmann, San Francisco, CA.
        [4] Joseph Sloan. High Performance Linux Clusters. O'Reilly Media, Sebastopol, CA.
        [5] Alexey L. Lastovetsky. Parallel Computing on Heterogeneous Networks.
        [6] Kernel sources from http://www.kernel.org
  • 15. Thank you!
        [root@aissms ~]# killall Parallex
        Any questions?
