petsc_test


petsc_test, C programs which illustrate the use of the PETSc library for scientific programming in a parallel environment.

PETSc is a library of high-level mathematical routines that can be executed in a parallel environment, making it easier for a user to gain the advantages of MPI.

PETSc stands for the Portable, Extensible Toolkit for Scientific Computation.

PETSc is particularly suitable for the numerical solution of systems of partial differential equations on high performance computers. PETSc includes parallel linear and nonlinear equation solvers and time integrators that may be used by application programs written in C, C++, FORTRAN77 or FORTRAN90.

PETSc consists of a number of libraries, each of which works with a particular family of objects, such as vectors, matrices, or solvers.

PETSc includes parts of several other software libraries, and it interacts with, complements, or supports a variety of related programs.

PETSc runs in parallel by relying on MPI.

A C program that uses PETSc must specify an include file that corresponds to the highest level PETSc objects needed within that program; this then guarantees that all lower level objects will automatically be included.

For example, a program that uses the Krylov Subspace Package (KSP) for solving a linear system must have a line like

# include "petscksp.h"
This statement automatically gathers up all the lower-level include files that are needed. The PETSc include files are stored in the include subdirectory of ${PETSC_DIR}. If you set up and use your makefile properly, you don't need to worry about explicitly pointing to this include directory.
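As a sketch of the structure such a program takes, here is a minimal C program that includes petscksp.h, initializes and finalizes PETSc, and prints a message. This is an illustrative skeleton, not one of the examples described below; the KSP solver objects themselves are omitted.

```c
/* A minimal sketch of a PETSc program that includes the KSP header.
   Including petscksp.h pulls in the lower-level headers (vectors,
   matrices, and so on) automatically. */
#include "petscksp.h"

int main ( int argc, char *argv[] )
{
  PetscMPIInt rank;

/* PetscInitialize() also calls MPI_Init() if MPI has not yet been
   started. */
  PetscInitialize ( &argc, &argv, (char *) 0, "Minimal PETSc example" );

  MPI_Comm_rank ( PETSC_COMM_WORLD, &rank );

/* PetscPrintf() on PETSC_COMM_WORLD prints from process 0 only, so the
   message appears once no matter how many processes are running. */
  PetscPrintf ( PETSC_COMM_WORLD, "PETSc initialized; process 0 reporting.\n" );

  PetscFinalize ( );
  return 0;
}
```

A program of this shape would be compiled and linked through the PETSc makefile machinery described below, not by invoking the compiler directly.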

Usage:

Before using PETSc, the user must set the environment variable PETSC_DIR, indicating the full path of the PETSc home directory. On Phoenix, for instance, this might be done with the command

setenv PETSC_DIR /usr/local/petsc
This command might best be placed in the user's .cshrc file.

Before using PETSc, the user must set the environment variable PETSC_ARCH, indicating the architecture of the machine on which PETSc is to be run. On Phoenix, for instance, this might be done with the command

setenv PETSC_ARCH linux-gnu
This command might best be placed in the user's .cshrc file.
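Users of bash-family shells would set the same two variables with export rather than setenv, typically in the .bashrc file. The paths below are the same site-specific examples used above, not universal values:

```shell
# Equivalent settings for bash users; the actual values depend on the
# local PETSc installation and machine architecture.
export PETSC_DIR=/usr/local/petsc
export PETSC_ARCH=linux-gnu
```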

The commands required to compile, link and load a program with PETSc are complicated. It is best to use a makefile, in which case most of the complications can be hidden. A bare-bones makefile for the example ex2.c, which uses PETSc's Krylov Subspace Package (KSP), would look like this:

include ${PETSC_DIR}/bmake/common/base

ex2: ex2.o
	-${CLINKER} -o ex2 ex2.o ${PETSC_KSP_LIB}
      
To use this makefile, one simply types
make ex2
and the executable will be created.

The mpirun command may be used for SMALL jobs that run for a SHORT time on FEW processors. Repeated use of the mpirun command for large long jobs is an abuse of the system and will not be allowed.

To run a program that uses PETSc, the user may invoke the mpirun command, specifying the number of processors.

mpirun -np 4 ex2
The mpirun command will automatically log the user into 3 more nodes of Phoenix (requiring the user to type in a password each time!).

To run a program that uses PETSc, the CONDOR system is preferable. This is a batch system which tries to be fair and efficient in the allocation of computing resources. To use CONDOR, the user should create an executable program, prepare a "CONDOR submit script" that describes how the executable is to be run and on how many processors, and then submit the script.
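As an illustration, a submit script for the ex2 example might look something like the following. The exact keywords accepted (in particular the universe name and the machine count) depend on the local CONDOR installation, so treat this as a hypothetical sketch rather than a script guaranteed to work as written:

```
universe      = MPI
executable    = ex2
machine_count = 4
output        = ex2.out
error         = ex2.err
log           = ex2.log
queue
```

The script would then be handed to the batch system with the condor_submit command.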

Languages:

petsc_test is available in a C version, a C++ version, and a FORTRAN90 version.

Related Data and Programs:

BLAS1, a vector-vector library, is included in PETSc.

BLAS2, a matrix-vector library, is included in PETSc.

BLAS3, a matrix-matrix library, is included in PETSc.

CONDOR is a queueing system used locally to submit jobs to our parallel clusters.

LAPACK, an extensive linear algebra library, is included in PETSc.

LINPACK, an extensive linear algebra library, is included in PETSc.

MINPACK, a library for minimization and least squares, is included in PETSc.

MPI is used as part of the PETSc library. You may be interested in seeing examples of that parallel programming system.

OPEN_MP is a directory of C examples which illustrate the use of the OpenMP application program interface for carrying out parallel computations in a shared memory environment.

PESSL is a parallel mathematics library developed exclusively for IBM systems.

PETSc examples are also available in a C++ version, a FORTRAN77 version, and a FORTRAN90 version.

SPARSEKIT2, Yousef Saad's library for sparse matrices, is included in PETSc.

SPARSEPAK, a library of sparse matrix reordering routines, is included in PETSc.

Reference:

  1. Satish Balay, Kris Buschelman, Victor Eijkhout, William Gropp, Dinesh Kaushik, Matt Knepley, Lois Curfman McInnes, Barry Smith, Hong Zhang,
    The PETSc Users Manual,
    ANL-95/11, Revision 2.3.0,
    Argonne National Laboratory.
  2. Jeff Borggaard,
    A PETSc Tutorial (in C).
  3. The PETSc website,
    http://www.mcs.anl.gov

Examples and Tests:

MAKEFILE contains the information necessary to properly compile, link and load the various user examples. If you use PETSc, you will have to have a similar makefile. The makefile is used by typing commands like

make ex1
which compiles, links and loads the program ex1.c, creating the executable ex1, which may then be run by submitting a CONDOR job.

HELLO is a basic "Hello, world!" program. This example may be run on one or more processors.

EX1 is an example program from the PETSc website. It demonstrates the use of the Krylov Subspace Package. This example may be run on a SINGLE processor.

EX2 is an example program from the PETSc website. It demonstrates the use of the Krylov Subspace Package. This example may be run on MULTIPLE processors.



Last revised on 17 January 2006.