HEAT
MPI Solution of Heat Equation


HEAT solves the heat equation using MPI.

The function H(X,T) is to be computed on the unit interval 0.0 <= X <= 1.0 and over the time interval 0 <= T <= 10.

The thermal conductivity function K multiplies the second derivative in the heat equation. We will use an explicit method to solve our discretized version of the equation. For the results of such a computation to be reliable, the time step, the spatial step, and K must satisfy a stability relationship. To avoid problems, we will set K = 0.002 / P, where P is the number of processors we are using. If K is not suitably small, the program will refuse to carry out the computation.
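For the standard explicit scheme (a forward difference in time and a centered difference in space, applied to dH/dT = K * d2H/dX2), the usual stability requirement is K * DT / DX^2 <= 1/2. Here is a minimal check, assuming DT = 0.1 (100 steps to reach T = 10, as described under "Usage" below) and the node spacing DX = 1/(P*N+1) suggested by the diagram further down; both assumptions should be confirmed against the actual source code.

        #include <stdio.h>

        /* Sketch: evaluate the explicit-scheme stability ratio
           K*DT/DX^2 for P = 1 and P = 8, with N = 12 points per
           processor and K = 0.002 / P as prescribed above. */
        int main ( void )
        {
          int pvals[2] = { 1, 8 };
          int n = 12;
          double dt = 10.0 / 100.0;

          for ( int j = 0; j < 2; j++ )
          {
            int p = pvals[j];
            double k = 0.002 / ( double ) p;
            double dx = 1.0 / ( double ) ( p * n + 1 );
            printf ( "P = %d:  K*DT/DX^2 = %g  (stable if <= 0.5)\n",
              p, k * dt / ( dx * dx ) );
          }
          return 0;
        }

Both ratios come out below 0.5; with K left at 0.002 and P = 8, the ratio would be about 1.9, which is why the unscaled value must be rejected.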

The initial condition is H(X,T=0) = 95.0.

The boundary conditions are:
H(X=0,T) = 100 + 10 * sin ( T );
H(X=1,T) = 75.0.

In order to divide the problem among P processors, the original domain is divided into P consecutive subintervals. Each processor is responsible for computing N values within its subinterval.
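For example, a processor might compute the physical coordinates of its nodes as in the sketch below. The spacing DX = 1/(P*N+1) is an assumption based on the global numbering shown in the diagram further down, and the function name is illustrative.

        /* Sketch: the X coordinate of local node I on processor ID,
           assuming P*N+2 equally spaced global nodes on [0,1], so that
           local node I corresponds to global node ID*N + I. */
        double node_x ( int id, int i, int n, int p )
        {
          double dx = 1.0 / ( double ) ( p * n + 1 );
          return ( double ) ( id * n + i ) * dx;
        }

With P = 3 and N = 4, process 1's local node 0 is global node 4, at X = 4/13, matching the diagram.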

The discrete solution is computed for all spatial positions and a fixed time. We change notation, and now H(I) represents the I-th entry of a vector of values that are an estimate for the solution at one of these fixed times.

To get the next value of H(I), the processor needs the current values of H(I-1), H(I) and H(I+1). This works except for H(1) and H(N), each of which is missing one of its neighbors. In both cases, the necessary information is available on the processor that is "next door".

To handle this situation, each processor actually sets up an array of N+2 values, including the extra entries H(0) and H(N+1). When it comes time to update the values of H, the processor SENDS its value of H(1) to the left and its value of H(N) to the right. It then RECEIVES a value for H(0) from the left and a value for H(N+1) from the right. Now that it has current values of H for entries 0 through N+1, it can compute new values for entries 1 through N.

Boundary conditions are used in the two special cases, that is, when process 0 needs H(0), and when process P-1 needs H(N+1).
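Putting this together, one update step might look like the following sketch. The names are illustrative; H holds the current values, HNEW the new ones, and the ghost entries H(0) and H(N+1) are assumed to have just arrived from the neighbors except where the boundary conditions apply.

        #include <math.h>

        /* Sketch of one explicit update on processor ID, 0 <= ID < P.
           On entry, h[1..n] holds current values, and the ghost cells
           h[0] and h[n+1] were received from the neighbors, except at
           the ends of the domain, where boundary conditions are set. */
        void update ( int id, int p, int n, double t, double k,
          double dt, double dx, double h[], double hnew[] )
        {
          if ( id == 0 )
          {
            h[0] = 100.0 + 10.0 * sin ( t );   /* left boundary condition */
          }
          if ( id == p - 1 )
          {
            h[n+1] = 75.0;                     /* right boundary condition */
          }
          for ( int i = 1; i <= n; i++ )       /* explicit difference step */
          {
            hnew[i] = h[i]
              + k * dt / ( dx * dx ) * ( h[i-1] - 2.0 * h[i] + h[i+1] );
          }
        }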

Here is a diagram suggesting the situation when P = 3 and N = 4:

        0  1  2  3  4  5                          <== Nodes for process 0
                    :  :
                    0  1  2  3  4  5              <== Nodes for process 1
                                :  :
                                0  1  2  3  4  5  <== Nodes for process 2

        0  1  2  3  4  5  6  7  8  9 10 11 12 13  <== Global node numbering 
      

In this diagram, for instance, process 0 is really only responsible for computing new values of H at nodes 1, 2, 3 and 4. It keeps track of values at nodes 0 and 5 as well. It will get H(0) using a left boundary condition. And it will get H(5) when process 1 sends it the value of what process 1 calls H(1). In exchange, process 0 will be sending its value of H(4) to process 1.

Usage:

The program starts with an initial condition at time T = 0, then takes 100 steps to reach time T = 10, so the time step is DT = 0.1. There is data for a sequential program that used N = 12 interior points.
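In outline, then, the iteration might be organized as in this sketch, using the UPDATE routine sketched earlier and an EXCHANGE routine of the kind described next; all names are illustrative.

        /* Sketch of the main loop: 100 steps from T = 0 to T = 10. */
        double t = 0.0;
        double dt = 10.0 / 100.0;
        for ( int step = 1; step <= 100; step++ )
        {
          exchange ( id, p, n, h );                        /* fill ghost cells */
          update ( id, p, n, t + dt, k, dt, dx, h, hnew ); /* advance one step */
          t = t + dt;
          for ( int i = 0; i <= n + 1; i++ )
          {
            h[i] = hnew[i];                                /* accept new values */
          }
        }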

Each step of the iteration requires sending and receiving two messages, which are identified by message tags equal to 1 and 2.

The code used for message tag #1 has been written for you already. The message with tag #1 is used when processors 0 through P-2 send H(N) to their neighbor on the right, processors 1 through P-1. Processors 1 through P-1 receive this number as H(0). Meanwhile, process 0 sets H(0) using a boundary condition.

You must finish the code used for message tag #2. The message with tag #2 is used when processors 1 through P-1 send H(1) to their neighbor on the left, processors 0 through P-2. Processors 0 through P-2 receive this number as H(N+1). Meanwhile, process P-1 sets H(N+1) using a boundary condition.
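Here is a sketch of the whole exchange in C-style MPI, with tag 1 working in the direction just described and tag 2 as its mirror image. The variable names are illustrative; this shows the shape of the pattern, not the distributed code.

        #include <mpi.h>

        /* Sketch of the ghost-cell exchange on processor ID, 0 <= ID < P.
           Tag 1 carries h[n] to the right; tag 2 carries h[1] to the left. */
        void exchange ( int id, int p, int n, double h[] )
        {
          MPI_Status status;

          /* Tag 1: send h[n] right; receive it from the left as h[0]. */
          if ( id < p - 1 )
          {
            MPI_Send ( &h[n], 1, MPI_DOUBLE, id + 1, 1, MPI_COMM_WORLD );
          }
          if ( 0 < id )
          {
            MPI_Recv ( &h[0], 1, MPI_DOUBLE, id - 1, 1, MPI_COMM_WORLD,
              &status );
          }

          /* Tag 2: send h[1] left; receive it from the right as h[n+1]. */
          if ( 0 < id )
          {
            MPI_Send ( &h[1], 1, MPI_DOUBLE, id - 1, 2, MPI_COMM_WORLD );
          }
          if ( id < p - 1 )
          {
            MPI_Recv ( &h[n+1], 1, MPI_DOUBLE, id + 1, 2, MPI_COMM_WORLD,
              &status );
          }
        }

Because each message is a single double, these blocking sends complete without deadlock on typical MPI implementations; a strictly safe variant would use MPI_Sendrecv.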

Your assignment is to take one of the source codes, make an MPI version, and compare the runs using 1 and 8 processes.

This assignment is to be turned in. You should turn in 3 files:

  1. your modified source code
  2. the output from the run on 1 processor on the HPC.
  3. the output from the run on 8 processors on the HPC.

Source Code:

Completed Source Code:

Here are "completed" versions of the programs.
(Access to these files will be restricted until everyone has turned in their work.)

Script for Execution on the FSU HPC Cluster:

Before using MPI on the FSU HPC cluster, you must first set up the appropriate environment. You can do this by typing

        source /usr/local/profile.d/openmpi-gnu.sh
      

Once you have set up the environment, you should compile your program using one of the MPI compiler wrappers, for example (the file names below are illustrative):
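        # For a C source file (the file name heat_mpi.c is illustrative):
        mpicc heat_mpi.c
        mv a.out heat

        # For a FORTRAN90 source file:
        mpif90 heat_mpi.f90
        mv a.out heat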

to create your executable program. Now you can use the msub command to submit a script to run your program.
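As a rough illustration only, such a script might look like the sketch below; the directives, resource requests, and file names are assumptions that should be checked against the FSU HPC documentation.

        #!/bin/bash
        #MSUB -N heat
        #MSUB -l nodes=1:ppn=8
        #MSUB -l walltime=00:10:00

        # Set up the MPI environment, as above.
        source /usr/local/profile.d/openmpi-gnu.sh

        # Move to the directory the job was submitted from, and run the
        # executable "heat" on 8 processes.
        cd $PBS_O_WORKDIR
        mpirun -np 8 ./heat

If this script were saved as heat.sh, it would be submitted with the command "msub heat.sh".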


Last revised on 21 October 2011.