MPI_Reduce_scatter man page on YellowDog

MPI_Reduce_scatter(3OpenMPI)			  MPI_Reduce_scatter(3OpenMPI)

NAME
       MPI_Reduce_scatter - Combines values and scatters the results.

SYNTAX
C Syntax
       #include <mpi.h>
       int MPI_Reduce_scatter(void *sendbuf, void *recvbuf, int *recvcounts,
	    MPI_Datatype datatype, MPI_Op op, MPI_Comm comm)

Fortran Syntax
       INCLUDE 'mpif.h'
       MPI_REDUCE_SCATTER(SENDBUF, RECVBUF, RECVCOUNTS, DATATYPE, OP,
		 COMM, IERROR)
	    <type>    SENDBUF(*), RECVBUF(*)
	    INTEGER   RECVCOUNTS(*), DATATYPE, OP, COMM, IERROR

C++ Syntax
       #include <mpi.h>
       void MPI::Comm::Reduce_scatter(const void* sendbuf, void* recvbuf,
	    int recvcounts[], const MPI::Datatype& datatype,
	    const MPI::Op& op) const

INPUT PARAMETERS
       sendbuf	 Starting address of send buffer (choice).

       recvcounts
		 Integer  array	 specifying  the  number of elements in result
		 distributed to each process. Array must be identical  on  all
		 calling processes.

       datatype	 Datatype of elements of input buffer (handle).

       op	 Operation (handle).

       comm	 Communicator (handle).

OUTPUT PARAMETERS
       recvbuf	 Starting address of receive buffer (choice).

       IERROR	 Fortran only: Error status (integer).

DESCRIPTION
       MPI_Reduce_scatter first does an element-wise reduction on a vector of
       count = recvcounts[0] + ... + recvcounts[n-1] elements in the send
       buffer defined by sendbuf, count, and datatype.  Next, the resulting
       vector of results is split into n disjoint segments, where n is the
       number of processes in the group.  Segment i contains recvcounts[i]
       elements.  The ith segment is sent to process i and stored in the
       receive buffer defined by recvbuf, recvcounts[i], and datatype.
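
       As an illustrative sketch (not part of the original page), the
       following minimal C program lets each of n processes contribute one
       element per process and receive back a single summed element; the
       variable names and data are arbitrary.

           #include <stdio.h>
           #include <stdlib.h>
           #include <mpi.h>

           int main(int argc, char *argv[])
           {
               int rank, size, i, recvbuf;
               MPI_Init(&argc, &argv);
               MPI_Comm_rank(MPI_COMM_WORLD, &rank);
               MPI_Comm_size(MPI_COMM_WORLD, &size);

               /* Each process provides size elements: element i is
                  destined (after reduction) for process i. */
               int *sendbuf    = malloc(size * sizeof(int));
               int *recvcounts = malloc(size * sizeof(int));
               for (i = 0; i < size; i++) {
                   sendbuf[i]    = rank + i;  /* arbitrary input data */
                   recvcounts[i] = 1;         /* one result element each */
               }

               /* Element i of the element-wise sum over all ranks is
                  delivered to process i. */
               MPI_Reduce_scatter(sendbuf, &recvbuf, recvcounts,
                                  MPI_INT, MPI_SUM, MPI_COMM_WORLD);

               printf("rank %d received %d\n", rank, recvbuf);

               free(sendbuf);
               free(recvcounts);
               MPI_Finalize();
               return 0;
           }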

USE OF IN-PLACE OPTION
       When  the  communicator	is  an	intracommunicator,  you	 can perform a
       reduce-scatter operation in-place (the output buffer  is	 used  as  the
       input buffer).  Use the variable MPI_IN_PLACE as the value of the send‐
       buf.  In this case, the input data is taken from the top of the receive
       buffer.	 The  area  occupied by the input data may be either longer or
       shorter than the data filled by the output data.
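
       A hedged sketch of the in-place form, reusing the setup (rank, size,
       and MPI initialization) from the sketch above; the full input vector
       is placed in the receive buffer and MPI_IN_PLACE is passed as sendbuf.

           int *buf        = malloc(size * sizeof(int));
           int *recvcounts = malloc(size * sizeof(int));
           for (i = 0; i < size; i++) {
               buf[i]        = rank + i;  /* input vector lives in recvbuf */
               recvcounts[i] = 1;
           }

           /* Input is read from buf; afterwards buf[0] holds this
              process's one-element segment of the reduced result. */
           MPI_Reduce_scatter(MPI_IN_PLACE, buf, recvcounts,
                              MPI_INT, MPI_SUM, MPI_COMM_WORLD);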

WHEN COMMUNICATOR IS AN INTER-COMMUNICATOR
       When the communicator  is  an  inter-communicator,  the	reduce-scatter
       operation  occurs  in  two  phases.  First, the result of the reduction
       performed on the data provided by the processes in the first  group  is
       scattered  among	 the  processes in the second group.  Then the reverse
       occurs: the reduction performed on the data provided by	the  processes
       in  the	second	group  is  scattered  among the processes in the first
       group.  For each group, all processes provide the same recvcounts argu‐
       ment,  and the sum of the recvcounts values should be the same for both
       groups.
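
       The fragment below is an illustrative sketch only (it assumes an even
       number of processes and reuses rank, size, and i from the first
       sketch): MPI_COMM_WORLD is split into two halves, an inter-communicator
       is built with MPI_Intercomm_create, and a reduce-scatter is performed
       across the two groups.

           int half   = size / 2;
           int in_low = (rank < half);
           MPI_Comm local, inter;

           MPI_Comm_split(MPI_COMM_WORLD, in_low, rank, &local);
           MPI_Intercomm_create(local, 0, MPI_COMM_WORLD,
                                in_low ? half : 0,  /* other group's leader */
                                0, &inter);

           /* Each group has half processes; recvcounts is all ones, so
              both groups' recvcounts sum to half, as required. */
           int *grp_sendbuf = malloc(half * sizeof(int));
           int *grp_counts  = malloc(half * sizeof(int));
           int grp_result;
           for (i = 0; i < half; i++) {
               grp_sendbuf[i] = rank;
               grp_counts[i]  = 1;
           }

           /* The reduction of the other group's send buffers is scattered
              over this group, one element per process. */
           MPI_Reduce_scatter(grp_sendbuf, &grp_result, grp_counts,
                              MPI_INT, MPI_SUM, inter);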

NOTES ON COLLECTIVE OPERATIONS
       The reduction functions (MPI_Op) do not return an error value.  As a
       result, if the functions detect an error, all they can do is either
       call MPI_Abort or silently skip the problem.  Thus, if you change the
       error handler from MPI_ERRORS_ARE_FATAL to something else, for example
       MPI_ERRORS_RETURN, then no error may be indicated.

       The reason for this is the performance problem in ensuring that all
       collective routines return the same error value.

ERRORS
       Almost all MPI routines return an error value; C routines as the value
       of the function and Fortran routines in the last argument.  C++
       functions do not return errors.  If the default error handler is set
       to MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception
       mechanism will be used to throw an MPI::Exception object.

       Before  the  error  value is returned, the current MPI error handler is
       called. By default, this error handler aborts the MPI job,  except  for
       I/O   function	errors.	  The	error  handler	may  be	 changed  with
       MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN
       may  be	used  to cause error values to be returned. Note that MPI does
       not guarantee that an MPI program can continue past an error.
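
       As a brief illustrative fragment (reusing sendbuf, recvbuf, and
       recvcounts from the first sketch), the communicator can be switched to
       MPI_ERRORS_RETURN so that error codes are returned and can be checked:

           MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

           int rc = MPI_Reduce_scatter(sendbuf, &recvbuf, recvcounts,
                                       MPI_INT, MPI_SUM, MPI_COMM_WORLD);
           if (rc != MPI_SUCCESS) {
               char msg[MPI_MAX_ERROR_STRING];
               int len;
               MPI_Error_string(rc, msg, &len);
               fprintf(stderr, "MPI_Reduce_scatter failed: %s\n", msg);
           }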

Open MPI 1.2			September 2006	  MPI_Reduce_scatter(3OpenMPI)