OpenMP
Author: Andrey Karpov
Date: 20.11.2009
Abstract
The article briefly describes the OpenMP technology.
OpenMP
OpenMP (Open Multi-Processing) is a set of compiler directives, library routines, and environment
variables intended for programming multi-threaded applications on shared-memory multiprocessor
systems (SMP systems).
The first OpenMP standard was developed in 1997 as an API for writing portable multi-threaded
applications. It was initially based on the Fortran language but later came to include C and C++ as well.
The OpenMP interface has become one of the most popular parallel programming technologies. OpenMP
is successfully used both for programming supercomputer systems with many processors and for
desktop user systems or, for example, the Xbox 360.
The OpenMP specification is developed by several large hardware and software vendors whose activity
is regulated by the non-profit organization OpenMP Architecture Review Board (ARB) [1].
OpenMP uses the parallel execution model known as "fork-join". An OpenMP program begins as a
single execution thread called the initial thread. When this thread encounters a parallel construct, it
creates a new team of threads consisting of the initial thread itself and some additional threads, and
becomes the master of the team. All members of the team (including the master) execute the code
inside the parallel construct. At the end of the parallel construct there is an implicit barrier. After the
parallel construct completes, only the master thread continues executing the user code. A parallel
region may contain other parallel regions, in which each thread of the outer region becomes the master
of its own team. Nested regions may in turn contain regions of a deeper nesting level.
The number of threads in a team executing concurrently can be controlled in several ways. One is the
environment variable OMP_NUM_THREADS. Another is a call to the procedure
omp_set_num_threads(). Yet another is the num_threads clause used together with the parallel
directive.
OpenMP and other parallel programming technologies
At present, the MPI (Message Passing Interface) is considered the most flexible, portable, and popular
interface in parallel programming. However, MPI:
• is not very efficient on SMP systems;
• is relatively difficult to learn, as it demands thinking in "non-computing" terms.
The POSIX threading interface (Pthreads) is widely supported (on nearly all UNIX systems), but for a
number of reasons it does not suit practical parallel programming:
• Fortran is not supported;
• it is too low-level;
• it has no support for data parallelism;
• its threading mechanism was originally designed for general concurrency, not for organizing
parallel computations.
OpenMP can be viewed as a high-level layer on top of Pthreads (or other similar thread libraries). Let us
list the advantages OpenMP gives a developer.
1. Thanks to the idea of "incremental parallelization", OpenMP is ideal for developers wishing to
quickly parallelize applications with large parallel loops. The developer does not write a new
parallel program but simply adds OpenMP directives to the text of a serial one.
2. OpenMP is a very flexible mechanism that gives the developer extensive control over a parallel
application's behavior.
3. An OpenMP program can also be used as a serial one on a single-processor platform, i.e. there is
no need to maintain both a serial and a parallel version. OpenMP directives are simply ignored
by a compiler without OpenMP support, and calls to OpenMP procedures can be replaced with
stubs whose source code is given in the specification.
4. One of OpenMP's advantages, as its developers point out, is support for so-called "orphaned"
directives: synchronization and work-sharing directives do not have to appear directly in the
lexical context of a parallel region.
OpenMP and tools
At present, the OpenMP technology is supported by most C/C++ compilers. The situation with tools for
testing parallel OpenMP programs is not as good, however. Although tools for analyzing, testing, and
optimizing parallel programs have existed for a long time, until recently they were not widely used in
applied software development. That is why they are often less convenient than other development
tools.
The fullest support for parallel OpenMP program development is provided by the Intel Parallel Studio
package. It includes a preliminary code analysis tool for detecting code fragments that could potentially
be parallelized, a compiler with OpenMP support and good optimization, a profiler, and a dynamic
analysis tool for detecting parallel errors.
One more tool worth mentioning is VivaMP, included in PVS-Studio. It is a static code analyzer aimed at
detecting errors in OpenMP programs as the code is being written.
References
1. OpenMP Architecture Review Board. http://www.openmp.org/
2. Joel Yliluoma. Guide into OpenMP: Easy multithreading programming for C++.
http://www.viva64.com/go.php?url=135
3. Kang Su Gatlin and Pete Isensee. OpenMP and C++. http://www.viva64.com/go.php?url=113
4. A collection of links on parallel programming and the OpenMP technology.
http://www.viva64.com/links/parallel-programming/