HIGH PERFORMANCE
COMPUTING
LECT_01
Introduction to HPC
BATCH: 11BS(IT)
PREPARED BY: MUKHTIAR AHMED
Asst. Prof., I.T. Department
DEFINITION OF HIGH PERFORMANCE COMPUTING
• High performance computing:
– can mean high flop count
• per processor
• totaled over many processors working on the same
problem
• totaled over many processors working on related
problems
– can mean faster turnaround time
• more powerful system
• scheduled to first available system(s)
• using multiple systems simultaneously
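The "flop count" idea above can be made concrete with a minimal, illustrative sketch (not a rigorous benchmark): timing a fixed number of floating-point operations on one process to estimate its flop/s rate. The array size and the 2-flops-per-element count are illustrative assumptions.

```python
# Rough estimate of a single process's floating-point rate (flop/s)
# by timing a vectorized multiply-add over a large array.
import time
import numpy as np

n = 10_000_000
a = np.random.rand(n)
b = np.random.rand(n)

start = time.perf_counter()
c = a * b + a  # roughly 2*n floating-point operations (one multiply, one add per element)
elapsed = time.perf_counter() - start

flops = 2 * n / elapsed
print(f"~{flops / 1e9:.2f} Gflop/s on one process")
```

Under the slide's first definition, an HPC system's aggregate rate would be this per-processor figure totaled over all processors working on the problem.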
DEFINITIONS…
• HPC: any computational technique that
solves a large problem faster than is
possible using single commodity systems
– Custom-designed, high-performance processors
(e.g. Cray, NEC)
– Parallel computing
– Distributed computing
– Grid computing
DEFINITIONS…
• Parallel computing: single systems with many
processors working on the same problem
• Distributed computing: many systems loosely
coupled by a scheduler to work on related
problems
• Grid computing: many systems tightly
coupled by software and networks to work
together on single problems or on related
problems
IMPORTANCE OF HPC IN RELATED FIELD(S)
• HPC has had tremendous impact on all areas
of computational science and engineering in
academia, government, and industry.
• Many problems have been solved with HPC
techniques that were impossible to solve with
individual workstations or personal
computers.
What is a Parallel Computer?
• Parallel computing: the use of multiple
computers or processors working together on
a common task
• Parallel computer: a computer that contains
multiple processors:
– each processor works on its section of the
problem
– processors are allowed to exchange information
with other processors
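The two bullets above can be sketched in a minimal example, assuming a shared-memory machine and Python's standard multiprocessing module: each worker process handles its own section of the data, and the partial results are then combined (a simple form of "exchanging information"). Function and variable names here are illustrative, not from the lecture.

```python
# Each worker sums one section of the data; the main process
# combines the partial results into the final answer.
from multiprocessing import Pool

def partial_sum(section):
    # Each processor works only on its own section of the problem.
    return sum(section)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_workers = 4
    chunk = len(data) // n_workers
    sections = [data[i * chunk:(i + 1) * chunk] for i in range(n_workers)]

    with Pool(n_workers) as pool:
        partials = pool.map(partial_sum, sections)  # sections processed in parallel

    total = sum(partials)  # combine the processors' partial results
    print(total == sum(data))  # True
```

The same decomposition pattern (split the domain, compute locally, combine results) underlies message-passing codes such as MPI programs on distributed-memory parallel computers.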
Parallel vs. Serial Computers
• Two big advantages of parallel computers:
1. total performance
2. total memory
• Parallel computers enable us to solve
problems that:
– benefit from, or require, a fast solution
– require large amounts of memory
– example that requires both: weather forecasting
Parallel vs. Serial Computers
• Some benefits of parallel computing include:
– more data points
• bigger domains
• better spatial resolution
• more particles
– more time steps
• longer runs
• better temporal resolution
– faster execution
• faster time to solution
• more solutions in same time
• larger simulations in real time