
SAMRAI::tbox::MPI Struct Reference

#include <source/toolbox/parallel/MPI.h>


Public Types

typedef int comm
typedef int group
typedef int request
typedef int status

Static Public Member Functions

static void setCallAbortInSerialInsteadOfExit (bool flag=true)
static void abort ()
static void init (int *argc, char **argv[])
static void finalize ()
static void initialize ()
static void setCommunicator (MPI::comm communicator)
static MPI::comm getCommunicator ()
static int getRank ()
static int getNodes ()
static void updateOutgoingStatistics (const int messages, const int bytes)
static void updateIncomingStatistics (const int messages, const int bytes)
static int getOutgoingMessages ()
static int getOutgoingBytes ()
static int getIncomingMessages ()
static int getIncomingBytes ()
static int getTreeDepth ()
static void barrier ()
static double sumReduction (const double x)
static void sumReduction (double *x, const int n=1)
static float sumReduction (const float x)
static void sumReduction (float *x, const int n=1)
static dcomplex sumReduction (const dcomplex x)
static void sumReduction (dcomplex *x, const int n=1)
static int sumReduction (const int x)
static void sumReduction (int *x, const int n=1)
static double minReduction (const double x, int *rank_of_min=NULL)
static void minReduction (double *x, const int n=1, int *rank_of_min=NULL)
static float minReduction (const float x, int *rank_of_min=NULL)
static void minReduction (float *x, const int n=1, int *rank_of_min=NULL)
static int minReduction (const int x, int *rank_of_min=NULL)
static void minReduction (int *x, const int n=1, int *rank_of_min=NULL)
static double maxReduction (const double x, int *rank_of_max=NULL)
static void maxReduction (double *x, const int n=1, int *rank_of_max=NULL)
static float maxReduction (const float x, int *rank_of_max=NULL)
static void maxReduction (float *x, const int n=1, int *rank_of_max=NULL)
static int maxReduction (const int x, int *rank_of_max=NULL)
static void maxReduction (int *x, const int n=1, int *rank_of_max=NULL)
static void allToOneSumReduction (int *x, const int n, const int root=0)
static int bcast (const int x, const int root)
static void bcast (int *x, int &length, const int root)
static void bcast (char *x, int &length, const int root)
static void send (const int *buf, const int length, const int receiving_proc_number, const bool send_length=true, int tag=-1)
 This function sends an MPI message with an integer array to another processor.
static void sendBytes (const void *buf, const int number_bytes, const int receiving_proc_number)
 This function sends an MPI message with an array of bytes (MPI_BYTES) to receiving_proc_number.
static int recvBytes (void *buf, int number_bytes)
 This function receives an MPI message with an array of max size number_bytes (MPI_BYTES) from any processor.
static void recv (int *buf, int &length, const int sending_proc_number, const bool get_length=true, int tag=-1)
 This function receives an MPI message with an integer array from another processor.
static void allGather (const int *x_in, int size_in, int *x_out, int size_out)
static void allGather (const double *x_in, int size_in, double *x_out, int size_out)
static void allGather (int x_in, int *x_out)
static void allGather (double x_in, double *x_out)

Static Public Attributes

static comm commWorld = 0
static comm commNull = -1


Detailed Description

Class MPI groups common MPI routines into one globally-accessible location. It provides small, simple routines that are common in MPI code. In some cases, the calling syntax has been simplified for convenience. Moreover, there is no need to wrap these calls in preprocessor ifdef/endif guards, since these routines do not call the MPI library when MPI is not being used (e.g., when writing serial code).

Note that this class is a utility class to group function calls in one name space (all calls are to static functions). Thus, you should never attempt to instantiate a class of type MPI; simply call the functions as static functions using the MPI::function(...) syntax.
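
For example, a typical call looks like the following. This is a minimal sketch; the helper function and the local value are hypothetical, and the include path is the one listed at the top of this page (adjust to your build):

    #include <source/toolbox/parallel/MPI.h>

    using namespace SAMRAI;

    // All members are static -- never instantiate tbox::MPI.
    // In a serial build these routines never call the MPI library,
    // so no #ifdef guards are needed around them.
    int synchronizeAndCount(int local_count)         // hypothetical helper
    {
       tbox::MPI::barrier();                         // global barrier across all processors
       return tbox::MPI::sumReduction(local_count);  // global sum of the local counts
    }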


Member Typedef Documentation

typedef int SAMRAI::tbox::MPI::comm
 

MPI Types

typedef int SAMRAI::tbox::MPI::group
 

typedef int SAMRAI::tbox::MPI::request
 

typedef int SAMRAI::tbox::MPI::status
 


Member Function Documentation

void SAMRAI::tbox::MPI::setCallAbortInSerialInsteadOfExit (bool flag=true) [static]
 

Set boolean flag indicating whether exit or abort is called when running with one processor. Calling this function influences the behavior of calls to MPI::abort(). By default, the flag is true meaning that abort() will be called. Passing false means exit(-1) will be called.

void SAMRAI::tbox::MPI::abort () [static]
 

Call MPI_Abort or exit, depending on whether the code is running with one or more processes and on the value set by the function above, if it has been called. The default is to call exit(-1) when running with one processor and to call MPI_Abort() otherwise. This function avoids having to guard abort calls in application code.
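
A short sketch of how these two calls combine (the helper function and the error condition are hypothetical):

    // Hypothetical error handler: request exit(-1) rather than abort() for
    // single-processor runs, then trigger the error path.
    void failIfBroken(bool fatal_error_detected)
    {
       SAMRAI::tbox::MPI::setCallAbortInSerialInsteadOfExit(false);
       if (fatal_error_detected) {
          SAMRAI::tbox::MPI::abort();   // MPI_Abort() with multiple processes,
                                        // exit(-1) in serial because the flag is false
       }
    }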

void SAMRAI::tbox::MPI::init (int *argc, char **argv[]) [inline, static]
 

Call MPI_Init. Use of this function avoids guarding MPI init calls in application code.

void SAMRAI::tbox::MPI::finalize () [inline, static]
 

Call MPI_Finalize. Use of this function avoids guarding MPI finalize calls in application code.

void SAMRAI::tbox::MPI::initialize () [static]
 

Initialize the MPI utility class. The MPI utility class must be initialized after the call to MPI_Init or MPI::init.
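
Taken together, the expected call order for init, initialize, and finalize looks like this minimal sketch (the application body is assumed):

    int main(int argc, char** argv)
    {
       SAMRAI::tbox::MPI::init(&argc, &argv);   // wraps MPI_Init
       SAMRAI::tbox::MPI::initialize();         // initialize the utility class after MPI_Init

       // ... application code ...

       SAMRAI::tbox::MPI::finalize();           // wraps MPI_Finalize
       return 0;
    }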

void SAMRAI::tbox::MPI::setCommunicator (MPI::comm communicator) [inline, static]
 

Set the communicator that is used for the MPI communication routines. The default communicator is MPI_COMM_WORLD.

MPI::comm SAMRAI::tbox::MPI::getCommunicator () [inline, static]
 

Get the current MPI communicator. The default communicator is MPI_COMM_WORLD.

int SAMRAI::tbox::MPI::getRank () [inline, static]
 

Return the processor rank (identifier) from 0 through the number of processors minus one.

int SAMRAI::tbox::MPI::getNodes () [inline, static]
 

Return the number of processors (nodes).
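
A common pattern is to split loop iterations among processors using these two values; a sketch (the item count and per-item work are hypothetical):

    const int n     = 100;                           // hypothetical number of work items
    const int rank  = SAMRAI::tbox::MPI::getRank();  // 0 .. getNodes()-1
    const int nodes = SAMRAI::tbox::MPI::getNodes();
    for (int i = rank; i < n; i += nodes) {
       // process item i on this processor
    }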

void SAMRAI::tbox::MPI::updateOutgoingStatistics (const int messages, const int bytes) [inline, static]
 

Update the statistics for outgoing messages. Statistics are automatically updated for the reduction calls in MPI.

void SAMRAI::tbox::MPI::updateIncomingStatistics (const int messages, const int bytes) [inline, static]
 

Update the statistics for incoming messages. Statistics are automatically updated for the reduction calls in MPI.

int SAMRAI::tbox::MPI::getOutgoingMessages () [inline, static]
 

Return the number of outgoing messages.

int SAMRAI::tbox::MPI::getOutgoingBytes () [inline, static]
 

Return the number of outgoing message bytes.

int SAMRAI::tbox::MPI::getIncomingMessages () [inline, static]
 

Return the number of incoming messages.

int SAMRAI::tbox::MPI::getIncomingBytes () [inline, static]
 

Return the number of incoming message bytes.

int SAMRAI::tbox::MPI::getTreeDepth () [static]
 

Get the depth of the reduction trees given the current number of MPI processors.

void SAMRAI::tbox::MPI::barrier () [static]
 

Perform a global barrier across all processors.

double SAMRAI::tbox::MPI::sumReduction (const double x) [static]
 

Perform a scalar sum reduction on a double across all nodes. Each processor contributes a value x of type double, and the sum is returned from the function.

void SAMRAI::tbox::MPI::sumReduction (double *x, const int n=1) [static]
 

Perform an array sum reduction on doubles across all nodes. Each processor contributes an array of values of type double, and the element-wise sum is returned in the same array.

float SAMRAI::tbox::MPI::sumReduction (const float x) [static]
 

Perform a scalar sum reduction on a float across all nodes. Each processor contributes a value x of type float, and the sum is returned from the function.

void SAMRAI::tbox::MPI::sumReduction (float *x, const int n=1) [static]
 

Perform an array sum reduction on floats across all nodes. Each processor contributes an array of values of type float, and the element-wise sum is returned in the same array.

dcomplex SAMRAI::tbox::MPI::sumReduction (const dcomplex x) [static]
 

Perform a scalar sum reduction on a dcomplex across all nodes. Each processor contributes a value x of type dcomplex, and the sum is returned from the function.

void SAMRAI::tbox::MPI::sumReduction (dcomplex *x, const int n=1) [static]
 

Perform an array sum reduction on dcomplexes across all nodes. Each processor contributes an array of values of type dcomplex, and the element-wise sum is returned in the same array.

int SAMRAI::tbox::MPI::sumReduction (const int x) [static]
 

Perform a scalar sum reduction on an integer across all nodes. Each processor contributes a value x of type int, and the sum is returned from the function.

void SAMRAI::tbox::MPI::sumReduction (int *x, const int n=1) [static]
 

Perform an array sum reduction on integers across all nodes. Each processor contributes an array of values of type int, and the element-wise sum is returned in the same array.
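
A brief sketch of the scalar and array forms of sumReduction (the local values are hypothetical):

    // Scalar form: each process contributes one value, every process gets the total.
    double local_norm = 1.0;                                          // hypothetical local value
    double global_norm = SAMRAI::tbox::MPI::sumReduction(local_norm);

    // Array form: entries are summed element-wise in place.
    int counts[3] = {2, 5, 1};                                        // hypothetical local counts
    SAMRAI::tbox::MPI::sumReduction(counts, 3);                       // counts now holds global sums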

double SAMRAI::tbox::MPI::minReduction (const double x, int *rank_of_min=NULL) [static]

Perform a scalar min reduction on a double across all nodes. Each processor contributes a value x of type double, and the minimum is returned from the function.

If a 'rank_of_min' argument is provided, it will be set to the rank of the process holding the minimum value.
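
For example (a sketch; the local residual value is hypothetical):

    double local_residual = 0.5;   // hypothetical per-process value
    int    owner          = -1;
    double global_min =
       SAMRAI::tbox::MPI::minReduction(local_residual, &owner);
    // owner now holds the rank of the process that contributed global_min.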

void SAMRAI::tbox::MPI::minReduction (double *x, const int n=1, int *rank_of_min=NULL) [static]

Perform an array min reduction on doubles across all nodes. Each processor contributes an array of values of type double, and the element-wise minimum is returned in the same array.

If a 'rank_of_min' argument is provided, each element will be set to the rank of the process holding the corresponding minimum value. Like the data array, the supplied 'rank_of_min' array should have size n.

float SAMRAI::tbox::MPI::minReduction (const float x, int *rank_of_min=NULL) [static]

Perform a scalar min reduction on a float across all nodes. Each processor contributes a value x of type float, and the minimum is returned from the function.

If a 'rank_of_min' argument is provided, it will be set to the rank of the process holding the minimum value.

void SAMRAI::tbox::MPI::minReduction (float *x, const int n=1, int *rank_of_min=NULL) [static]

Perform an array min reduction on floats across all nodes. Each processor contributes an array of values of type float, and the element-wise minimum is returned in the same array.

If a 'rank_of_min' argument is provided, each element will be set to the rank of the process holding the corresponding minimum value. Like the data array, the supplied 'rank_of_min' array should have size n.

int SAMRAI::tbox::MPI::minReduction (const int x, int *rank_of_min=NULL) [static]

Perform a scalar min reduction on an integer across all nodes. Each processor contributes a value x of type int, and the minimum is returned from the function.

If a 'rank_of_min' argument is provided, it will be set to the rank of the process holding the minimum value.

void SAMRAI::tbox::MPI::minReduction (int *x, const int n=1, int *rank_of_min=NULL) [static]

Perform an array min reduction on integers across all nodes. Each processor contributes an array of values of type int, and the element-wise minimum is returned in the same array.

If a 'rank_of_min' argument is provided, each element will be set to the rank of the process holding the corresponding minimum value. Like the data array, the supplied 'rank_of_min' array should have size n.

double SAMRAI::tbox::MPI::maxReduction (const double x, int *rank_of_max=NULL) [static]

Perform a scalar max reduction on a double across all nodes. Each processor contributes a value x of type double, and the maximum is returned from the function.

If a 'rank_of_max' argument is provided, it will be set to the rank of the process holding the maximum value.

void SAMRAI::tbox::MPI::maxReduction (double *x, const int n=1, int *rank_of_max=NULL) [static]

Perform an array max reduction on doubles across all nodes. Each processor contributes an array of values of type double, and the element-wise maximum is returned in the same array.

If a 'rank_of_max' argument is provided, each element will be set to the rank of the process holding the corresponding maximum value. Like the data array, the supplied 'rank_of_max' array should have size n.

float SAMRAI::tbox::MPI::maxReduction (const float x, int *rank_of_max=NULL) [static]

Perform a scalar max reduction on a float across all nodes. Each processor contributes a value x of type float, and the maximum is returned from the function.

If a 'rank_of_max' argument is provided, it will be set to the rank of the process holding the maximum value.

void SAMRAI::tbox::MPI::maxReduction (float *x, const int n=1, int *rank_of_max=NULL) [static]

Perform an array max reduction on floats across all nodes. Each processor contributes an array of values of type float, and the element-wise maximum is returned in the same array.

If a 'rank_of_max' argument is provided, each element will be set to the rank of the process holding the corresponding maximum value. Like the data array, the supplied 'rank_of_max' array should have size n.

int SAMRAI::tbox::MPI::maxReduction (const int x, int *rank_of_max=NULL) [static]

Perform a scalar max reduction on an integer across all nodes. Each processor contributes a value x of type int, and the maximum is returned from the function.

If a 'rank_of_max' argument is provided, it will be set to the rank of the process holding the maximum value.

void SAMRAI::tbox::MPI::maxReduction (int *x, const int n=1, int *rank_of_max=NULL) [static]

Perform an array max reduction on integers across all nodes. Each processor contributes an array of values of type int, and the element-wise maximum is returned in the same array.

If a 'rank_of_max' argument is provided, each element will be set to the rank of the process holding the corresponding maximum value. Like the data array, the supplied 'rank_of_max' array should have size n.

void SAMRAI::tbox::MPI::allToOneSumReduction (int *x, const int n, const int root=0) [static]
 

Perform an all-to-one sum reduction on an integer array. The final result is only available on the root processor.

int SAMRAI::tbox::MPI::bcast (const int x, const int root) [static]

Broadcast an integer from the specified root process to all other processes. All processes other than root receive a copy of the integer value.

void SAMRAI::tbox::MPI::bcast (int *x, int &length, const int root) [static]

Broadcast an integer array from the specified root processor to all other processors. For the root processor, "x" and "length" are treated as const.

void SAMRAI::tbox::MPI::bcast (char *x, int &length, const int root) [static]

Broadcast a char array from the specified root processor to all other processors. For the root processor, "x" and "length" are treated as const.
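
A sketch of both broadcast forms (the values and buffer contents are hypothetical; on non-root processors the buffer must be large enough for the incoming data):

    const int root = 0;
    int local_flag = (SAMRAI::tbox::MPI::getRank() == root) ? 42 : 0;

    // Scalar form: the return value on every process is root's value.
    int flag = SAMRAI::tbox::MPI::bcast(local_flag, root);    // flag == 42 everywhere

    // Char-array form: length is passed by reference.
    char msg[128];                                             // root fills msg before the call
    int  msg_len = 128;
    SAMRAI::tbox::MPI::bcast(msg, msg_len, root);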

void SAMRAI::tbox::MPI::send (const int *buf, const int length, const int receiving_proc_number, const bool send_length=true, int tag=-1) [static]
 

This function sends an MPI message with an integer array to another processor.

If the receiving processor knows in advance the length of the array, use "send_length = false;" otherwise, this processor will first send the length of the array, then send the data. This call must be paired with a matching call to MPI::recv.

Parameters:
buf Pointer to integer array buffer with length integers.
length Number of integers in buf that we want to send.
receiving_proc_number Receiving processor number.
send_length Optional boolean argument specifying if we first need to send a message with the array size. Default value is true.
tag Optional integer argument specifying an integer tag to be sent with this message. Default tag is 0.

void SAMRAI::tbox::MPI::sendBytes (const void *buf, const int number_bytes, const int receiving_proc_number) [static]
 

This function sends an MPI message with an array of bytes (MPI_BYTES) to receiving_proc_number.

This call must be paired with a matching call to MPI::recvBytes.

Parameters:
buf Void pointer to an array of number_bytes bytes to send.
number_bytes Integer number of bytes to send.
receiving_proc_number Receiving processor number.

int SAMRAI::tbox::MPI::recvBytes (void *buf, int number_bytes) [static]
 

This function receives an MPI message with an array of max size number_bytes (MPI_BYTES) from any processor.

This call must be paired with a matching call to MPI::sendBytes.

This function returns the processor number of the sender.

Parameters:
buf Void pointer to a buffer of size number_bytes bytes.
number_bytes Integer number specifying the size of buf in bytes.
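
A sketch of a matched sendBytes/recvBytes pair (buffers, sizes, and ranks are hypothetical; the two fragments run on different processors):

    // On the sending processor:
    char out_buf[64];                                    // hypothetical payload
    SAMRAI::tbox::MPI::sendBytes(out_buf, 64, /* receiving_proc_number */ 1);

    // On the receiving processor: accepts from any sender and returns the sender's rank.
    char in_buf[64];
    int sender = SAMRAI::tbox::MPI::recvBytes(in_buf, 64);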

void SAMRAI::tbox::MPI::recv (int *buf, int &length, const int sending_proc_number, const bool get_length=true, int tag=-1) [static]
 

This function receives an MPI message with an integer array from another processor.

If this processor knows in advance the length of the array, use "get_length = false;" otherwise, the sending processor will first send the length of the array, then send the data. This call must be paired with a matching call to MPI::send.

Parameters:
buf Pointer to integer array buffer with capacity of length integers.
length Maximum number of integers that can be stored in buf.
sending_proc_number Processor number of sender.
get_length Optional boolean argument specifying if we first need to send a message to determine the array size. Default value is true.
tag Optional integer argument specifying a tag which must be matched by the tag of the incoming message. Default tag is 0.
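
A sketch of a matched MPI::send / MPI::recv pair using the default behavior of first exchanging the array length (ranks and data are hypothetical):

    if (SAMRAI::tbox::MPI::getRank() == 0) {
       int data[4] = {10, 20, 30, 40};
       SAMRAI::tbox::MPI::send(data, 4, /* receiving_proc_number */ 1);
    } else if (SAMRAI::tbox::MPI::getRank() == 1) {
       int buf[4];
       int length = 4;                                   // capacity of buf, passed by reference
       SAMRAI::tbox::MPI::recv(buf, length, /* sending_proc_number */ 0);
    }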

void SAMRAI::tbox::MPI::allGather (const int *x_in, int size_in, int *x_out, int size_out) [static]
 

Each processor sends an array of integers or doubles to all other processors; each processor's array may differ in length. The x_out array must be pre-allocated to the correct length (this is a bit cumbersome, but is necessary to prevent the allGather function from allocating memory that is freed elsewhere). To properly preallocate memory, before calling this method, call

size_out = MPI::sumReduction(size_in)

then allocate the x_out array.
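
A sketch of that preallocation pattern (the local data are hypothetical):

    int local_n       = 3;
    int local_vals[3] = {7, 8, 9};                               // hypothetical local data

    int  total_n  = SAMRAI::tbox::MPI::sumReduction(local_n);    // total length across processes
    int* all_vals = new int[total_n];                            // preallocate x_out

    SAMRAI::tbox::MPI::allGather(local_vals, local_n, all_vals, total_n);
    // ... use all_vals ...
    delete[] all_vals;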

void SAMRAI::tbox::MPI::allGather (const double *x_in, int size_in, double *x_out, int size_out) [static]
 

void SAMRAI::tbox::MPI::allGather (int x_in, int *x_out) [static]
 

Each processor sends every other processor an integer or double. The x_out array should be preallocated to a length equal to the number of processors.

void SAMRAI::tbox::MPI::allGather (double x_in, double *x_out) [static]
 


Member Data Documentation

MPI::comm SAMRAI::tbox::MPI::commWorld = 0 [static]
 

MPI constants

MPI::comm SAMRAI::tbox::MPI::commNull = -1 [static]
 

