OPENMP
FORTRAN90 Examples of Parallel Programming with OpenMP


OPENMP is a directory of FORTRAN90 examples which illustrate the use of the OpenMP application program interface for carrying out parallel computations in a shared memory environment.

The directives allow the user to mark areas of the code, such as do, while or for loops, which are suitable for parallel processing. The directives appear as a special kind of comment, so the program can be compiled and run in serial mode. However, the user can tell the compiler to "notice" the special directives, in which case a version of the program will be created that runs in parallel.

Thus the same program can easily be run in serial or parallel mode on a given computer, or run on a computer that does not have OpenMP at all.
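
As a minimal sketch of what such a directive looks like (the program, loop, and variable names here are illustrative, not taken from any particular example below), a FORTRAN90 do loop can be marked for parallel execution as follows:

        program axpy_sketch
          implicit none
          integer, parameter :: n = 100000
          integer :: i
          real ( kind = 8 ) :: s, x(n), y(n)

          s = 2.0D+00
          x = 1.0D+00
          y = 3.0D+00
        !
        !  The directive marks the loop for parallel execution.
        !
        !$omp parallel do
          do i = 1, n
            y(i) = y(i) + s * x(i)
          end do
        !$omp end parallel do

          write ( *, * ) 'y(1) = ', y(1)
        end program axpy_sketch

A compiler that has not been told to enable OpenMP treats the lines beginning with !$omp as ordinary comments, so the same source file also compiles and runs as a serial program.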

OpenMP is suitable for a shared memory parallel system, that is, a situation in which there is a single memory space, and multiple processors. If memory is shared, then typically the number of processors will be small, and they will all be on the same physical machine.

By contrast, in a distributed memory system, items of data are closely associated with a particular processor. There may be a very large number of processors, and they may be more loosely coupled and even on different machines. Such a system will need to be handled with MPI or some other message passing interface.

FORTRAN90 Issues

OpenMP includes a number of library functions whose return types must be declared in any program unit that calls them. Instead of declaring these functions yourself, you can include the statement


        use omp_lib
      
in any routine that invokes OpenMP functions.
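
For instance, a routine that asks each thread to identify itself might look like the following sketch (the routine name is illustrative); the use statement supplies the declarations of the standard OpenMP functions omp_get_thread_num() and omp_get_num_threads():

        subroutine report_threads ( )
          use omp_lib
          implicit none
          integer :: id
        !
        !  Each thread in the team prints its own identifier.
        !
        !$omp parallel private ( id )
          id = omp_get_thread_num ( )
          write ( *, * ) 'Thread ', id, ' of ', omp_get_num_threads ( ), ' is active.'
        !$omp end parallel
        end subroutine report_threads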

"Pretend" Parallelism

OpenMP allows you to "request" any number of threads of execution. This is only a request, and it is not always a wise one. If your system has four processors available, and they are not busy with other work or other users, then 4 threads may be exactly what you want. But you cannot guarantee that you will get the undivided use of those processors. Moreover, if you run the same program using 1 thread and then 4 threads, you may find that using 4 threads actually slows you down, either because you don't really have 4 processors (so the system incurs the overhead of pretending to run in parallel), or because the processors you do have are also busy doing other things.

For this reason, it's wise to run the program at least once in single thread mode, so you have a benchmark against which to measure the speedup you got (or didn't get!) versus the speedup you hoped for.
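
The sketch below shows one way to make such a comparison; the workload (summing the squares of random values) and the choice of 1 and 4 threads are purely illustrative. It requests a thread count with omp_set_num_threads() and measures elapsed wall clock time with omp_get_wtime():

        program thread_benchmark
          use omp_lib
          implicit none
          integer, parameter :: n = 10000000
          integer :: i, threads
          real ( kind = 8 ), allocatable :: x(:)
          real ( kind = 8 ) :: total, wtime

          allocate ( x(n) )
          call random_number ( x )
        !
        !  Run the same loop first with 1 thread, then with 4 threads.
        !
          do threads = 1, 4, 3
            call omp_set_num_threads ( threads )
            total = 0.0D+00
            wtime = omp_get_wtime ( )
        !$omp parallel do reduction ( + : total )
            do i = 1, n
              total = total + x(i) * x(i)
            end do
        !$omp end parallel do
            wtime = omp_get_wtime ( ) - wtime
            write ( *, * ) threads, ' thread(s): sum = ', total, ', wall time = ', wtime
          end do

          deallocate ( x )
        end program thread_benchmark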

Compiler Support

The compiler you use must recognize the OpenMP directives in order to produce code that will run in parallel. Compilers with OpenMP support include the GNU gfortran compiler, which enables OpenMP with the -fopenmp switch, and the Intel Fortran compiler ifort, among others.

Licensing:

The computer code and data files described and made available on this web page are distributed under the GNU LGPL license.

Languages:

OPENMP examples are available in a C version and a C++ version and a FORTRAN90 version.

Related Data and Programs:

DIJKSTRA_OPENMP, a FORTRAN90 program which uses OpenMP to parallelize a simple example of Dijkstra's minimum distance algorithm for graphs.

FFT_OPENMP, a FORTRAN90 program which demonstrates the computation of a Fast Fourier Transform in parallel, using OpenMP.

HEATED_PLATE_OPENMP, a FORTRAN90 program which solves the steady (time independent) heat equation in a 2D rectangular region, using OpenMP to run in parallel.

HELLO_OPENMP, a FORTRAN90 program which prints out "Hello, world!" using the OpenMP parallel programming environment.

JACOBI_OPENMP, a FORTRAN90 program which illustrates the use of the OpenMP application program interface to parallelize a Jacobi iteration solving A*x=b.

MANDELBROT_OPENMP, a FORTRAN90 program which generates an ASCII Portable Pixel Map (PPM) image of the Mandelbrot fractal set, using OpenMP for parallel execution.

MD_OPENMP, a FORTRAN90 program which carries out a molecular dynamics simulation using OpenMP.

MPI, FORTRAN90 programs which illustrate the use of parallel programming in a distributed memory environment, using message passing.

MULTITASK_OPENMP, a FORTRAN90 program which demonstrates how to "multitask", that is, to execute several unrelated and distinct tasks simultaneously, using OpenMP for parallel execution.

MXM_OPENMP, a FORTRAN90 program which computes a dense matrix product C=A*B, using OpenMP for parallel execution.

OPENMP_RCC, FORTRAN90 programs which illustrate how a FORTRAN90 program, using OpenMP, can be compiled and run in batch mode on the FSU High Performance Computing (HPC) cluster operated by the Research Computing Center (RCC).

OPENMP_STUBS, a FORTRAN90 library which implements a "stub" version of OpenMP, so that an OpenMP program can be compiled, linked and executed on a system that does not have OpenMP installed.

POISSON_OPENMP, a FORTRAN90 program which computes an approximate solution to the Poisson equation in a rectangle, using the Jacobi iteration to solve the linear system, and OpenMP to carry out the Jacobi iteration in parallel.

PRIME_OPENMP, a FORTRAN90 program which counts the number of primes between 1 and N, using OpenMP for parallel execution.

PTHREADS, C programs which illustrate the use of the POSIX thread library to carry out parallel program execution.

QUAD_OPENMP, a FORTRAN90 program which approximates an integral using a quadrature rule, and carries out the computation in parallel using OpenMP.

RANDOM_OPENMP, a FORTRAN90 program which illustrates how a parallel program using OpenMP can generate multiple distinct streams of random numbers.

SATISFY_OPENMP, a FORTRAN90 program which demonstrates, for a particular circuit, an exhaustive search for solutions of the circuit satisfiability problem, using OpenMP for parallel execution.

SCHEDULE_OPENMP, a FORTRAN90 program which demonstrates the default, static, and dynamic methods of "scheduling" loop iterations in OpenMP to avoid work imbalance.

SGEFA_OPENMP, a FORTRAN90 program which reimplements the SGEFA/SGESL linear algebra routines from LINPACK for use with OpenMP.

ZIGGURAT_OPENMP, a FORTRAN90 program which demonstrates how the ZIGGURAT library can be used to generate random numbers in an OpenMP parallel program.

Examples and Tests:

COMPUTE_PI_OPENMP shows how information can be shared: several threads each compute a piece of a sum that approximates pi.
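
The idea can be outlined as follows (a minimal sketch of the technique, not the actual COMPUTE_PI_OPENMP source): the integral of 4/(1+x*x) over [0,1] equals pi, the midpoint rule turns it into a sum, and a reduction clause lets the threads safely combine their partial sums:

        program pi_sketch
          implicit none
          integer, parameter :: n = 1000000
          integer :: i
          real ( kind = 8 ) :: h, pi_estimate, x

          h = 1.0D+00 / real ( n, kind = 8 )
          pi_estimate = 0.0D+00
        !
        !  Each thread accumulates part of the midpoint-rule sum.
        !
        !$omp parallel do private ( x ) reduction ( + : pi_estimate )
          do i = 1, n
            x = h * ( real ( i, kind = 8 ) - 0.5D+00 )
            pi_estimate = pi_estimate + 4.0D+00 / ( 1.0D+00 + x * x )
          end do
        !$omp end parallel do

          pi_estimate = pi_estimate * h
          write ( *, * ) 'Estimate of pi = ', pi_estimate
        end program pi_sketch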

DOT_PRODUCT_OPENMP compares the computation of a vector dot product in sequential mode, and using OpenMP. Typically, the overhead of using parallel processing outweighs the advantage for small vector sizes N. The code demonstrates this fact by using a number of values of N, and by running both sequential and OpenMP versions of the calculation.
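
The parallel part of such a computation can be sketched like this (the program name and data are illustrative, not the actual DOT_PRODUCT_OPENMP source); a reduction clause accumulates each thread's partial sum into the shared result:

        program dot_sketch
          implicit none
          integer, parameter :: n = 100000
          integer :: i
          real ( kind = 8 ) :: x(n), y(n), xdoty

          call random_number ( x )
          call random_number ( y )

          xdoty = 0.0D+00
        !$omp parallel do reduction ( + : xdoty )
          do i = 1, n
            xdoty = xdoty + x(i) * y(i)
          end do
        !$omp end parallel do

          write ( *, * ) 'x dot y = ', xdoty
        end program dot_sketch

For small N, the cost of creating and synchronizing the threads can exceed the cost of the loop itself, which is exactly the effect the example measures.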

HELMHOLTZ_OPENMP is a more extensive program that solves the Helmholtz equation on a regular grid, using a Jacobi iterative linear equation solver with overrelaxation.

MAXIMUM_OPENMP shows how a FORTRAN program is allowed to compute the maximum entry of a vector by declaring it as a max reduction variable.
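
A minimal sketch of a max reduction (illustrative, not the actual MAXIMUM_OPENMP source) looks like this:

        program maximum_sketch
          implicit none
          integer, parameter :: n = 100000
          integer :: i
          real ( kind = 8 ) :: x(n), x_max

          call random_number ( x )
          x_max = x(1)
        !
        !  Each thread finds the maximum over its share of the loop;
        !  the reduction clause combines the thread results.
        !
        !$omp parallel do reduction ( max : x_max )
          do i = 2, n
            x_max = max ( x_max, x(i) )
          end do
        !$omp end parallel do

          write ( *, * ) 'Maximum entry = ', x_max
        end program maximum_sketch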

MXM_OPENMP is a simple exercise in timing the computation of a matrix-matrix product.

MXM2_OPENMP repeats the MXM example, but tries to measure the CPU time taken by individual threads. Unfortunately, the FORTRAN90 CPU_TIME function is not required by the language standard to store and return separate times for separate threads, so whether this example gives you useful timings depends on the compiler and system you use.
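
A common portable alternative (not necessarily what MXM2_OPENMP itself does) is to measure elapsed wall clock time with the OpenMP function omp_get_wtime(), whose meaning does not depend on how the compiler implements CPU_TIME. In this sketch the loop is only a stand-in workload:

        program timing_sketch
          use omp_lib
          implicit none
          integer, parameter :: n = 10000000
          integer :: i
          real :: cpu1, cpu2
          real ( kind = 8 ) :: total, wtime

          call cpu_time ( cpu1 )
          wtime = omp_get_wtime ( )

          total = 0.0D+00
        !$omp parallel do reduction ( + : total )
          do i = 1, n
            total = total + sqrt ( real ( i, kind = 8 ) )
          end do
        !$omp end parallel do

          call cpu_time ( cpu2 )
          wtime = omp_get_wtime ( ) - wtime

          write ( *, * ) 'Sum = ', total
          write ( *, * ) 'CPU_TIME reports      ', cpu2 - cpu1, ' seconds.'
          write ( *, * ) 'OMP_GET_WTIME reports ', wtime, ' seconds.'
        end program timing_sketch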

RANDOM_CONTENTION_OPENMP is a program which demonstrates the possibility that a program calling random_number can run MORE SLOWLY when parallel threads are added. The performance degradation is presumed to be caused by contention among the threads for access to the internal saved variables that control the state of the random number generator.

RANDOM_SEED_OPENMP is a program which explores the possibility of exactly recreating a stream of random numbers that was computed sequentially, by carefully copying all the input seed values and then regenerating the sequence in parallel.

Last revised on 04 September 2018.