OPENMP is a directory of FORTRAN77 examples which illustrate the use of the OpenMP application program interface for carrying out parallel computations in a shared memory environment.
The OpenMP directives allow the user to mark areas of the code, such as do loops, that are suitable for parallel processing. The directives appear as a special kind of comment, so the program can still be compiled and run in serial mode. However, the user can tell the compiler to "notice" the special directives, in which case a version of the program will be created that runs in parallel.
Thus the same program can easily be run in serial or parallel mode on a given computer, or run on a computer that does not have OpenMP at all.
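As a sketch of how a directive marks a loop (the program and variable names here are hypothetical, not taken from the examples below):

```fortran
c  SAXPY marks one do loop for parallel execution.  To a compiler
c  without OpenMP, the c$omp lines are ordinary comments.
      program saxpy
      integer i, n
      parameter ( n = 1000 )
      double precision x(n), y(n)

      do i = 1, n
        x(i) = dble ( i )
        y(i) = 1.0D+00
      end do

c$omp parallel do
      do i = 1, n
        y(i) = y(i) + 2.0D+00 * x(i)
      end do
c$omp end parallel do

      write ( *, * ) 'y(n) = ', y(n)
      end
```

Compiled without OpenMP support, the same source runs serially and produces the same result.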
OpenMP is suitable for a shared memory parallel system, that is, a situation in which there is a single memory space, and multiple processors. If memory is shared, then typically the number of processors will be small, and they will all be on the same physical machine.
By contrast, in a distributed memory system, items of data are closely associated with a particular processor. There may be a very large number of processors, and they may be more loosely coupled and even on different machines. Such a system will need to be handled with MPI or some other message passing interface.
OpenMP descended in part from the old Cray microtasking directives, so if you've lived long enough to remember those, you will recognize some features.
OpenMP includes a number of functions whose type must be declared in any program that uses them. Rather than declaring each function individually, you can place the line

      include 'omp_lib.h'

in any routine that invokes OpenMP functions.
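For instance, a routine that calls the integer-valued function omp_get_max_threads might look like this (the routine name is hypothetical):

```fortran
      subroutine report_threads ( )
c  The include file declares the types of the OpenMP functions,
c  so OMP_GET_MAX_THREADS is known to be integer-valued.
      include 'omp_lib.h'
      integer n

      n = omp_get_max_threads ( )
      write ( *, * ) 'Maximum number of threads = ', n

      return
      end
```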
Note that, for the FORTRAN77 compiler, the OpenMP directives are required to follow the unfriendly and intolerant fixed-form rules for line length and continuation that apply to the text of FORTRAN77 code, namely:
a directive begins with the sentinel c$omp (or !$omp or *$omp) in columns 1 through 5, with column 6 blank;
a directive may not extend past column 72;
a directive continued onto the next line marks the continuation by placing a nonblank character in column 6 of the sentinel, as in c$omp&.
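For example, in fixed form a directive too long for one line is continued by repeating the sentinel with a nonblank character in column 6 (the array names here are hypothetical):

```fortran
      program contin
      integer i, n
      parameter ( n = 100 )
      double precision x(n), y(n)

      do i = 1, n
        x(i) = 1.0D+00
        y(i) = 2.0D+00
      end do

c  The "&" in column 6 marks each line as a continuation
c  of the c$omp directive begun above it.
c$omp parallel do
c$omp& private ( i )
c$omp& shared ( n, x, y )
      do i = 1, n
        y(i) = y(i) + x(i)
      end do
c$omp end parallel do

      write ( *, * ) 'y(1) = ', y(1)
      end
```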
OpenMP allows you to "request" any number of threads of execution. This is a request, and it's not always a wise request. If your system has four processors available, and they're not busy doing other things, or serving other users, maybe 4 threads is what you want. But you can't guarantee you'll get the undivided use of those processors. Moreover, if you run the same program using 1 thread and 4 threads, you may find that using 4 threads slows you down, either because you don't actually have 4 processors (so the system has the overhead of pretending to run in parallel), or because the processors you have are also busy doing other things.
For this reason, it's wise to run the program at least once in single thread mode, so you have a benchmark against which to measure the speedup you got (or didn't get!) versus the speedup you hoped for.
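One way to make such a comparison, sketched below, is to request different thread counts with omp_set_num_threads and time the same loop with omp_get_wtime; the summation loop here is just a placeholder workload:

```fortran
      program bench
      include 'omp_lib.h'
      integer i, nt
      double precision s, t1, t2

c  Time the same reduction loop under 1, 2, 3 and 4 threads.
      do nt = 1, 4
        call omp_set_num_threads ( nt )
        t1 = omp_get_wtime ( )
        s = 0.0D+00
c$omp parallel do reduction ( + : s )
        do i = 1, 1000000
          s = s + dble ( i )
        end do
c$omp end parallel do
        t2 = omp_get_wtime ( )
        write ( *, * ) nt, ' threads, ', t2 - t1, ' seconds'
      end do

      end
```

On a system without OpenMP, the runtime functions used here would be missing; the OPENMP_STUBS library listed below supplies dummy versions for exactly that situation.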
The compiler you use must recognize the OpenMP directives in order to produce code that will run in parallel. Here are some of the compilers available that support OpenMP:
the GNU gfortran compiler, with the -fopenmp switch;
the Intel Fortran compiler ifort, with the -openmp switch (-qopenmp in more recent releases);
the IBM xlf compiler, with the -qsmp=omp switch.
The computer code and data files described and made available on this web page are distributed under the GNU LGPL license.
OPENMP examples are available in a C version and a C++ version and a FORTRAN77 version and a FORTRAN90 version.
DIJKSTRA_OPENMP, a FORTRAN77 program which uses OpenMP to parallelize a simple example of Dijkstra's minimum distance algorithm for graphs.
FFT_OPENMP, a FORTRAN77 program which demonstrates the computation of a Fast Fourier Transform in parallel, using OpenMP.
HEATED_PLATE_OPENMP, a FORTRAN77 program which solves the steady (time independent) heat equation in a 2D rectangular region, using OpenMP to run in parallel.
HELLO_OPENMP, a FORTRAN77 program which prints out "Hello, world!" using the OpenMP parallel programming environment.
MANDELBROT_OPENMP, a FORTRAN77 program which generates an ASCII Portable Pixel Map (PPM) image of the Mandelbrot fractal set, using OpenMP for parallel execution.
MD_OPENMP, a FORTRAN77 program which carries out a molecular dynamics simulation using OpenMP.
MULTITASK_OPENMP, a FORTRAN77 program which demonstrates how to "multitask", that is, to execute several unrelated and distinct tasks simultaneously, using OpenMP for parallel execution.
MXM_OPENMP, a FORTRAN77 program which computes a dense matrix product C=A*B, using OpenMP for parallel execution.
MXV_OPENMP, a FORTRAN77 program which compares the performance of plain vanilla Fortran and the FORTRAN90 intrinsic routine MATMUL, for the matrix multiplication problem y=A*x, with and without parallelization by OpenMP.
OPENMP_STUBS, a FORTRAN77 library which implements a "stub" version of OpenMP, so that an OpenMP program can be compiled, linked and executed on a system that does not have OpenMP installed.
POISSON_OPENMP, a FORTRAN77 program which computes an approximate solution to the Poisson equation in a rectangle, using the Jacobi iteration to solve the linear system, and OpenMP to carry out the Jacobi iteration in parallel.
PRIME_OPENMP, a FORTRAN77 program which counts the number of primes between 1 and N, using OpenMP for parallel execution.
PTHREADS, C programs which illustrate the use of the POSIX thread library to carry out parallel program execution.
QUAD_OPENMP, a FORTRAN77 program which approximates an integral using a quadrature rule, and carries out the computation in parallel using OpenMP.
RANDOM_OPENMP, a FORTRAN77 program which illustrates how a parallel program using OpenMP can generate multiple distinct streams of random numbers.
SATISFY_OPENMP, a FORTRAN77 program which demonstrates, for a particular circuit, an exhaustive search for solutions of the circuit satisfiability problem, using OpenMP for parallel execution.
SCHEDULE_OPENMP, a FORTRAN77 program which demonstrates the default, static, and dynamic methods of "scheduling" loop iterations in OpenMP to avoid work imbalance.
SGEFA_OPENMP, a FORTRAN77 program which reimplements the SGEFA/SGESL linear algebra routines from LINPACK for use with OpenMP.
ZIGGURAT_OPENMP, a FORTRAN77 program which demonstrates how the ZIGGURAT library can be used to generate random numbers in an OpenMP parallel program.
COMPUTE_PI shows how information can be shared. Several processors need to compute pieces of a sum that will approximate pi.
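The standard approach, sketched here, approximates pi as the integral of 4/(1+x*x) from 0 to 1 by the midpoint rule, with each thread summing its share of the intervals through a reduction clause:

```fortran
      program pi_sum
      integer i, n
      parameter ( n = 1000000 )
      double precision h, pi, x

      h = 1.0D+00 / dble ( n )
      pi = 0.0D+00
c  Each thread accumulates a private partial sum; the REDUCTION
c  clause adds the partial sums into PI at the end of the loop.
c$omp parallel do private ( x ) reduction ( + : pi )
      do i = 1, n
        x = h * ( dble ( i ) - 0.5D+00 )
        pi = pi + 4.0D+00 / ( 1.0D+00 + x * x )
      end do
c$omp end parallel do
      pi = pi * h

      write ( *, * ) 'Estimate of pi = ', pi
      end
```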
DOT_PRODUCT compares the computation of a vector dot product in sequential mode, and using OpenMP. Typically, the overhead of using parallel processing outweighs the advantage for small vector sizes N. The code demonstrates this fact by using a number of values of N, and by running both sequential and OpenMP versions of the calculation.
HELMHOLTZ is a more extensive program that solves the Helmholtz equation on a regular grid, using a Jacobi iterative linear equation solver with overrelaxation.
MXM is a simple exercise in timing the computation of a matrix-matrix product.