OpenMP and NVIDIA
Jeff Larkin
NVIDIA Developer Technologies
OpenMP and NVIDIA
OpenMP is the dominant standard for directive-based parallel programming.
NVIDIA joined OpenMP in 2011 to contribute to discussions around parallel accelerators.
NVIDIA proposed the TEAMS construct for accelerators in 2012.
OpenMP 4.0, with accelerator support, was released in 2013.
Why does NVIDIA care about OpenMP?
3 Ways to Accelerate Applications

Libraries: “Drop-in” acceleration
Compiler Directives: Easily accelerate applications
Programming Languages: Maximum flexibility
Reaching a Broader Set of Developers

[Figure: number of developers over time. In 2004, 100,000’s of early adopters in research at universities, supercomputing centers, and oil & gas. At present, 1,000,000’s of developers across CAE, CFD, finance, rendering, data analytics, life sciences, defense, weather, climate, and plasma physics.]
Introduction to OpenMP Teams/Distribute
OpenMP PARALLEL FOR
Executes the iterations of the next for loop in parallel across a team of threads.

#pragma omp parallel for
for (i=0; i<N; i++)
p[i] = v1[i] * v2[i];
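
For completeness, here is a minimal, self-contained version of the fragment above. The array size, element type, and initialization are my own illustrative choices, not from the slide; compile with any OpenMP compiler, e.g. gcc -fopenmp.

#include <stdio.h>

#define N 1024

int main(void)
{
    float p[N], v1[N], v2[N];

    /* Initialize the input vectors. */
    for (int i = 0; i < N; i++) {
        v1[i] = (float)i;
        v2[i] = 2.0f;
    }

    /* Split the N iterations across a team of host threads. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        p[i] = v1[i] * v2[i];

    printf("p[10] = %f\n", p[10]);
    return 0;
}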
OpenMP TARGET PARALLEL FOR
Offloads data and execution to a target device, then executes the iterations of the next for loop in parallel across a team of threads.

#pragma omp target
#pragma omp parallel for
for (i=0; i<N; i++)
p[i] = v1[i] * v2[i];
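
The same example as a runnable sketch, assuming a compiler with OpenMP 4.0 target offload support. The map clauses are my addition to make the data movement explicit; with statically sized arrays like these the compiler can also derive them implicitly.

#include <stdio.h>

#define N 1024

int main(void)
{
    float p[N], v1[N], v2[N];

    for (int i = 0; i < N; i++) {
        v1[i] = (float)i;
        v2[i] = 2.0f;
    }

    /* Copy v1 and v2 to the device, run the loop there, and
       copy p back to the host when the target region ends. */
    #pragma omp target map(to: v1, v2) map(from: p)
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        p[i] = v1[i] * v2[i];

    printf("p[10] = %f\n", p[10]);
    return 0;
}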
OpenMP 4.0 TEAMS/DISTRIBUTE
Creates a league of teams on the target device, distributes blocks of work among those teams, and executes the remaining work in parallel within each team.

This code is portable whether one team or many teams are used.

#pragma omp target teams
#pragma omp distribute parallel for reduction(+:sum)
for (i=0; i<N; i++)
sum += B[i] * C[i];
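
As a self-contained sketch, using the combined single-line form of the directive. The combined spelling, the map clauses, and the test data are my phrasing and choices; the behavior matches the slide's two-directive form.

#include <stdio.h>

#define N 1024

int main(void)
{
    float B[N], C[N];
    float sum = 0.0f;

    for (int i = 0; i < N; i++) {
        B[i] = 1.0f;
        C[i] = 2.0f;
    }

    /* Create a league of teams on the device, distribute blocks of
       iterations among the teams, run each block with a parallel for,
       and combine the partial sums with the reduction. */
    #pragma omp target teams distribute parallel for \
            reduction(+:sum) map(to: B, C) map(tofrom: sum)
    for (int i = 0; i < N; i++)
        sum += B[i] * C[i];

    printf("sum = %f (expected %f)\n", sum, 2.0f * N);
    return 0;
}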
OpenMP 4.0 Teams Distribute Parallel For

Without teams, target plus parallel for runs the whole loop in a single team on the device:

#pragma omp target
#pragma omp parallel for reduction(+:sum)
for (i=0; i<N; i++) sum += B[i] * C[i];
OpenMP 4.0 Teams Distribute Parallel For

#pragma omp target teams
#pragma omp distribute parallel for reduction(+:sum)
for (i=0; i<N; i++) sum += B[i] * C[i];

The distribute construct splits the iteration space into blocks, and each team runs its block as an ordinary parallel for. The programmer doesn't have to think about how the loop is decomposed! Conceptually, the runtime executes something like:

#pragma omp parallel for reduction(+:sum)
for (i=??; i< ??; i++) sum += B[i] * C[i];   /* team 0's block */

#pragma omp parallel for reduction(+:sum)
for (i=??; i< ??; i++) sum += B[i] * C[i];   /* team 1's block */

#pragma omp parallel for reduction(+:sum)
for (i=??; i< N; i++) sum += B[i] * C[i];    /* last team's block */
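
To make that decomposition concrete, here is one way to picture what distribute does, written out by hand with the runtime queries omp_get_num_teams() and omp_get_team_num(). This is an illustrative sketch of the chunking (the helper name dot and the even-block strategy are mine), not what a compiler literally generates.

#include <omp.h>

float dot(const float *B, const float *C, int n)
{
    float sum = 0.0f;

    #pragma omp target teams map(to: B[0:n], C[0:n]) \
            map(tofrom: sum) reduction(+:sum)
    {
        /* Each team's initial thread computes its own block bounds,
           the job distribute normally does for us. */
        int nteams = omp_get_num_teams();
        int team   = omp_get_team_num();
        int chunk  = (n + nteams - 1) / nteams;
        int lo     = team * chunk;
        int hi     = (lo + chunk < n) ? lo + chunk : n;

        /* Inside each team, the familiar parallel for over the block. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = lo; i < hi; i++)
            sum += B[i] * C[i];
    }
    return sum;
}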
OMP + NV: We're not done yet!
OMP + NV: We’re not done yet!
Hardware parallelism is not going away, and programmers demand a simple, portable way to use it.
OpenMP 4.0 is just the first step toward a portable standard for directive-based acceleration.
We will continue to work with OpenMP to address the challenges of parallel computing:
Improved Interoperability
Improved Portability
Improved Expressibility
Thank You
