MPI_Alltoall man page on YellowDog

MPI_Alltoall(3OpenMPI)					MPI_Alltoall(3OpenMPI)

NAME
       MPI_Alltoall - All processes send data to all processes

SYNTAX
C Syntax
       #include <mpi.h>
       int MPI_Alltoall(void *sendbuf, int sendcount,
	    MPI_Datatype sendtype, void *recvbuf, int recvcount,
	    MPI_Datatype recvtype, MPI_Comm comm)

Fortran Syntax
       INCLUDE 'mpif.h'
       MPI_ALLTOALL(SENDBUF, SENDCOUNT, SENDTYPE, RECVBUF, RECVCOUNT,
	    RECVTYPE, COMM, IERROR)

	    <type>    SENDBUF(*), RECVBUF(*)
	    INTEGER   SENDCOUNT, SENDTYPE, RECVCOUNT, RECVTYPE
	    INTEGER   COMM, IERROR

C++ Syntax
       #include <mpi.h>
       void MPI::Comm::Alltoall(const void* sendbuf, int sendcount,
	    const MPI::Datatype& sendtype, void* recvbuf,
	    int recvcount, const MPI::Datatype& recvtype)

INPUT PARAMETERS
       sendbuf	   Starting address of send buffer (choice).

       sendcount   Number of elements to send to each process (integer).

       sendtype	   Datatype of send buffer elements (handle).

       recvcount   Number of elements to receive from each process (integer).

       recvtype	   Datatype of receive buffer elements (handle).

       comm	   Communicator over which data is to be exchanged (handle).

OUTPUT PARAMETERS
       recvbuf	   Starting address of receive buffer (choice).

       IERROR	   Fortran only: Error status (integer).

DESCRIPTION
       MPI_Alltoall  is a collective operation in which all processes send the
       same amount of data to each other, and receive the same amount of  data
       from  each  other.  The operation of this routine can be represented as
       follows, where each process performs 2n (n being	 the  number  of  pro‐
       cesses  in communicator comm) independent point-to-point communications
       (including communication with itself).

	    MPI_Comm_size(comm, &n);
	    for (i = 0; i < n; i++)
		MPI_Send(sendbuf + i * sendcount * extent(sendtype),
		    sendcount, sendtype, i, ..., comm);
	    for (i = 0; i < n; i++)
		MPI_Recv(recvbuf + i * recvcount * extent(recvtype),
		    recvcount, recvtype, i, ..., comm);

       Each process breaks up its local sendbuf into n blocks - each  contain‐
       ing sendcount elements of type sendtype - and divides its recvbuf simi‐
       larly according to recvcount and recvtype. Process  j  sends  the  k-th
       block  of  its local sendbuf to process k, which places the data in the
       j-th block of its local recvbuf. The amount of data sent must be	 equal
       to  the	amount	of data received, pairwise, between every pair of pro‐
       cesses.
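
       As an illustration, the following complete program (a minimal sketch,
       not part of the original page) exchanges one int between every pair of
       processes; after the call, recvbuf[j] on rank k holds the value that
       rank j placed in block k of its send buffer.

	    #include <stdio.h>
	    #include <stdlib.h>
	    #include <mpi.h>

	    int main(int argc, char *argv[])
	    {
		int rank, size, i;
		int *sendbuf, *recvbuf;

		MPI_Init(&argc, &argv);
		MPI_Comm_rank(MPI_COMM_WORLD, &rank);
		MPI_Comm_size(MPI_COMM_WORLD, &size);

		/* One element per destination rank. */
		sendbuf = (int *) malloc(size * sizeof(int));
		recvbuf = (int *) malloc(size * sizeof(int));

		/* Block k of sendbuf is destined for rank k. */
		for (i = 0; i < size; i++)
		    sendbuf[i] = rank * 100 + i;

		MPI_Alltoall(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT,
		    MPI_COMM_WORLD);

		/* recvbuf[j] now holds the block sent by rank j,
		   i.e. j * 100 + rank. */
		for (i = 0; i < size; i++)
		    printf("rank %d got %d from rank %d\n", rank, recvbuf[i], i);

		free(sendbuf);
		free(recvbuf);
		MPI_Finalize();
		return 0;
	    }

       Note that sendcount and recvcount are per-destination counts, not the
       total sizes of the buffers; each buffer must hold n such blocks.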

       WHEN COMMUNICATOR IS AN INTER-COMMUNICATOR

       When the communicator is an inter-communicator, the exchange takes
       place between the two groups of the inter-communicator.  Each process
       in the first group sends one block of data to, and receives one block
       of data from, every process in the second group, and vice versa.  The
       operation exhibits a symmetric, full-duplex behavior.

       MPI_Alltoall is a rootless collective: no root argument is involved,
       and all processes in both groups participate symmetrically.

       When the communicator is an intra-communicator, these two groups are
       the same, and the exchange occurs in a single phase among all
       processes of that communicator, as described above.

NOTES
       The MPI_IN_PLACE option is not available for this function.

       All arguments on all processes are significant. The comm	 argument,  in
       particular, must describe the same communicator on all processes.

       There are two MPI library functions that are more general than MPI_All‐
       toall. MPI_Alltoallv allows all-to-all communication to and  from  buf‐
       fers  that  need	 not  be  contiguous; different processes may send and
       receive different  amounts  of  data.  MPI_Alltoallw  expands  MPI_All‐
       toallv's	 functionality	to  allow  the exchange of data with different
       datatypes.
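
       As a rough sketch (not taken from this page), MPI_Alltoallv might be
       called as follows when every rank sends rank+1 elements to each peer,
       so that the counts and displacements differ from process to process:

	    #include <stdlib.h>
	    #include <mpi.h>

	    int main(int argc, char *argv[])
	    {
		int rank, size, i, sendtotal = 0, recvtotal = 0;
		int *sendcounts, *sdispls, *recvcounts, *rdispls;
		int *sendbuf, *recvbuf;

		MPI_Init(&argc, &argv);
		MPI_Comm_rank(MPI_COMM_WORLD, &rank);
		MPI_Comm_size(MPI_COMM_WORLD, &size);

		sendcounts = (int *) malloc(size * sizeof(int));
		sdispls    = (int *) malloc(size * sizeof(int));
		recvcounts = (int *) malloc(size * sizeof(int));
		rdispls    = (int *) malloc(size * sizeof(int));

		for (i = 0; i < size; i++) {
		    sendcounts[i] = rank + 1; /* send rank+1 ints to each peer */
		    recvcounts[i] = i + 1;    /* receive i+1 ints from rank i  */
		    sdispls[i] = sendtotal;   /* block offsets: running sums   */
		    rdispls[i] = recvtotal;
		    sendtotal += sendcounts[i];
		    recvtotal += recvcounts[i];
		}

		sendbuf = (int *) malloc(sendtotal * sizeof(int));
		recvbuf = (int *) malloc(recvtotal * sizeof(int));
		for (i = 0; i < sendtotal; i++)
		    sendbuf[i] = rank;        /* arbitrary payload             */

		MPI_Alltoallv(sendbuf, sendcounts, sdispls, MPI_INT,
		    recvbuf, recvcounts, rdispls, MPI_INT, MPI_COMM_WORLD);

		free(sendbuf);    free(recvbuf);
		free(sendcounts); free(sdispls);
		free(recvcounts); free(rdispls);
		MPI_Finalize();
		return 0;
	    }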

ERRORS
       Almost all MPI routines return an error value; C routines as the	 value
       of  the	function  and Fortran routines in the last argument. C++ func‐
       tions do not return errors. If the default  error  handler  is  set  to
       MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism
       will be used to throw an MPI::Exception object.

       Before the error value is returned, the current MPI  error  handler  is
       called.	By  default, this error handler aborts the MPI job, except for
       I/O  function  errors.  The  error  handler   may   be	changed	  with
       MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN
       may be used to cause error values to be returned. Note  that  MPI  does
       not guarantee that an MPI program can continue past an error.
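
       For example (a fragment, not part of the original page; it assumes
       sendbuf and recvbuf are set up as in the earlier sketch), error values
       can be obtained and reported instead of aborting the job:

	    int err, msglen;
	    char msg[MPI_MAX_ERROR_STRING];

	    /* Return error codes from MPI calls on this communicator
	       instead of aborting the job (the default behavior). */
	    MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

	    err = MPI_Alltoall(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT,
		MPI_COMM_WORLD);
	    if (err != MPI_SUCCESS) {
		MPI_Error_string(err, msg, &msglen);
		fprintf(stderr, "MPI_Alltoall failed: %s\n", msg);
	    }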

SEE ALSO
       MPI_Alltoallv
       MPI_Alltoallw

Open MPI 1.2			September 2006		MPI_Alltoall(3OpenMPI)