MPI_STUBS
Dummy MPI Library
MPI_STUBS
is a FORTRAN90 library which
provides "stub" versions of some MPI routines.
MPI_STUBS is intended to include stubs for the most commonly
called MPI routines. Most of the stub routines don't do anything.
In a few cases, where it makes sense, they do some simple action
or return a value that is appropriate for the serial processing
case.
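As a rough illustration, consider the following small driver, which is not part of the library itself; the include file name 'mpif.h' is the conventional MPI one, and a stub library is assumed to supply equivalent definitions. Linked against stubs of this kind, the program would be expected to report that it is process 0 of 1:

  program hello_stub

  ! Hypothetical test driver, not part of MPI_STUBS itself.
    implicit none

    include 'mpif.h'

    integer :: ierr
    integer :: my_rank
    integer :: num_procs

    call MPI_Init ( ierr )

  ! With a serial stub library, these calls should report a single
  ! process: num_procs = 1 and my_rank = 0.
    call MPI_Comm_size ( MPI_COMM_WORLD, num_procs, ierr )
    call MPI_Comm_rank ( MPI_COMM_WORLD, my_rank, ierr )

    write ( *, '(a,i2,a,i2)' ) '  This is process ', my_rank, ' of ', num_procs

    call MPI_Finalize ( ierr )

  end program hello_stub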
MPI_STUBS can be used as a convenience when a real MPI
implementation is not available and the user simply wants to
test-compile a code. It may also be useful on those occasions
when a code has been so carefully written that it will still
execute correctly on a single processor.
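For example, the following sketch (a hypothetical program, not part of the library) sums the integers 1 through N by letting each process handle every NUM_PROCS-th term and combining the partial sums on process 0. It gives the correct answer under a real MPI implementation for any number of processes, and it should also give the correct answer when linked against serial stubs, where NUM_PROCS is 1 and the reduction amounts to delivering the single partial sum:

  program sum_example

  ! Hypothetical example, not part of MPI_STUBS: each process sums
  ! its share of the integers 1 through N, and the partial sums are
  ! combined on process 0 with MPI_REDUCE.
    implicit none

    include 'mpif.h'

    integer, parameter :: n = 1000

    integer :: i
    integer :: ierr
    integer :: my_rank
    integer :: num_procs
    integer :: part_sum
    integer :: total

    call MPI_Init ( ierr )
    call MPI_Comm_size ( MPI_COMM_WORLD, num_procs, ierr )
    call MPI_Comm_rank ( MPI_COMM_WORLD, my_rank, ierr )

  ! Each process handles the terms MY_RANK+1, MY_RANK+1+NUM_PROCS, and so on.
  ! With the stubs, the single process handles every term.
    part_sum = 0
    do i = my_rank + 1, n, num_procs
      part_sum = part_sum + i
    end do

    call MPI_Reduce ( part_sum, total, 1, MPI_INTEGER, MPI_SUM, 0, &
      MPI_COMM_WORLD, ierr )

    if ( my_rank == 0 ) then
      write ( *, '(a,i8)' ) '  Sum of 1 through N = ', total
    end if

    call MPI_Finalize ( ierr )

  end program sum_example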
MPI_STUBS is based on a similar package supplied as
part of the LAMMPS program, which allows that program, although
normally intended for parallel execution, to be compiled, linked,
and run on a single-processor machine.
Licensing:
The computer code and data files described and made available on this web page
are distributed under
the GNU LGPL license.
Languages:
MPI_STUBS is available in
a C version,
a C++ version, and
a FORTRAN90 version.
Related Data and Programs:
HEAT_MPI,
a FORTRAN90 program which
solves the 1D Time Dependent Heat Equation using MPI.
HELLO_MPI,
a FORTRAN90 program which
prints out "Hello, world!" using the MPI parallel programming environment.
MOAB,
examples which
illustrate the use of the MOAB job scheduler for a computer cluster.
MPI,
FORTRAN90 programs which
illustrate the use of MPI for parallel processing.
MULTITASK_MPI,
a FORTRAN90 program which
demonstrates how to "multitask", that is, to execute several unrelated
and distinct tasks simultaneously, using MPI for parallel execution.
QUAD_MPI,
a FORTRAN90 program which
approximates an integral using a quadrature rule, and carries out the
computation in parallel using MPI.
RANDOM_MPI,
a FORTRAN90 program which
demonstrates one way to generate the same sequence of random numbers
for both sequential execution and parallel execution under MPI.
SATISFY_MPI,
a FORTRAN90 program which
demonstrates, for a particular circuit, an exhaustive search
for solutions of the circuit satisfiability problem, using MPI to
carry out the calculation in parallel.
Reference:
- William Gropp, Steven Huss-Lederman, Andrew Lumsdaine, Ewing Lusk,
  Bill Nitzberg, William Saphir, Marc Snir,
  MPI: The Complete Reference,
  Volume II: The MPI-2 Extensions,
  Second Edition,
  MIT Press, 1998,
  ISBN13: 978-0-262-57123-4,
  LC: QA76.642.M65.
Source Code:
Examples and Tests:
BUFFON_LAPLACE is an "embarrassingly parallel" Monte Carlo
simulation of the Buffon-Laplace needle dropping process.
HELLO is a simple program that says "Hello, world!".
QUADRATURE is a program that estimates an integral
using random sampling.
List of Routines:
- MPI_ABORT shuts down the processes in a given communicator.
- MPI_ALLGATHER gathers data from all the processes in a communicator.
- MPI_ALLGATHERV gathers data from all the processes in a communicator.
- MPI_ALLREDUCE carries out a reduction operation.
- MPI_BARRIER forces processes within a communicator to wait together.
- MPI_BCAST broadcasts data from one process to all others.
- MPI_BSEND sends data from one process to another, using buffering.
- MPI_CART_CREATE creates a communicator for a Cartesian topology.
- MPI_CART_GET returns the "Cartesian coordinates" of the calling process.
- MPI_CART_SHIFT finds the destination and source for Cartesian shifts.
- MPI_COMM_DUP duplicates a communicator.
- MPI_COMM_FREE "frees" a communicator.
- MPI_COMM_RANK reports the rank of the calling process.
- MPI_COMM_SIZE reports the number of processes in a communicator.
- MPI_COMM_SPLIT splits up a communicator based on a key.
- MPI_COPY_DOUBLE copies a double precision vector.
- MPI_COPY_INTEGER copies an integer vector.
- MPI_COPY_REAL copies a real vector.
- MPI_FINALIZE shuts down the MPI library.
- MPI_GET_COUNT reports the actual number of items transmitted.
- MPI_INIT initializes the MPI library.
- MPI_IRECV receives data from another process.
- MPI_ISEND sends data from one process to another using nonblocking transmission.
- MPI_RECV receives data from another process within a communicator.
- MPI_REDUCE carries out a reduction operation.
- MPI_REDUCE_DOUBLE_PRECISION carries out a reduction operation on double precision values.
- MPI_REDUCE_INTEGER carries out a reduction operation on integers.
- MPI_REDUCE_REAL carries out a reduction operation on reals.
- MPI_REDUCE_SCATTER collects a message of the same length from each process.
- MPI_RSEND "ready sends" data from one process to another.
- MPI_SEND sends data from one process to another.
- MPI_WAIT waits for an I/O request to complete.
- MPI_WAITALL waits until all I/O requests have completed.
- MPI_WAITANY waits until one I/O request has completed.
- MPI_WTICK returns the number of seconds per clock tick.
- MPI_WTIME returns the elapsed wall clock time.
- TIMESTRING writes the current YMDHMS date into a string.
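To suggest how simple such stubs can be, here is a rough sketch, not the actual MPI_STUBS source, of a serial MPI_ALLREDUCE, restricted to double precision data for brevity; with only one process there is nothing to combine, so the "reduction" is assumed to amount to copying the send buffer into the receive buffer:

  subroutine MPI_Allreduce ( sendbuf, recvbuf, count, datatype, op, comm, ierr )

  ! Rough sketch only: the real stub must also examine DATATYPE and
  ! handle integer and real data.  With a single process, the reduced
  ! result is simply a copy of the local send buffer.
    implicit none

    integer :: comm
    integer :: count
    integer :: datatype
    integer :: ierr
    integer :: op
    double precision :: recvbuf(count)
    double precision :: sendbuf(count)

    recvbuf(1:count) = sendbuf(1:count)
    ierr = 0

    return
  end subroutine MPI_Allreduce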
Last revised on 21 May 2008.