MPI_Scatterv man page on YellowDog

MPI_Scatterv(3OpenMPI)					MPI_Scatterv(3OpenMPI)

NAME
       MPI_Scatterv - Scatters a buffer in parts to all tasks in a group.

SYNTAX
C Syntax
       #include <mpi.h>
       int MPI_Scatterv(void *sendbuf, int *sendcounts, int *displs,
	    MPI_Datatype sendtype, void *recvbuf, int recvcount,
	    MPI_Datatype recvtype, int root, MPI_Comm comm)

Fortran Syntax
       INCLUDE 'mpif.h'
       MPI_SCATTERV(SENDBUF, SENDCOUNTS, DISPLS, SENDTYPE, RECVBUF,
		 RECVCOUNT, RECVTYPE, ROOT, COMM, IERROR)
	    <type>    SENDBUF(*), RECVBUF(*)
	    INTEGER   SENDCOUNTS(*), DISPLS(*), SENDTYPE
	    INTEGER   RECVCOUNT, RECVTYPE, ROOT, COMM, IERROR

C++ Syntax
       #include <mpi.h>
       void MPI::Comm::Scatterv(const void* sendbuf, const int sendcounts[],
	    const int displs[], const MPI::Datatype& sendtype,
	    void* recvbuf, int recvcount, const MPI::Datatype&
	    recvtype, int root) const

INPUT PARAMETERS
       sendbuf	 Address of send buffer (choice, significant only at root).

       sendcounts
		 Integer array (of length group size) specifying the number of
		 elements to send to each processor.

       displs	 Integer array (of length group size). Entry i	specifies  the
		 displacement  (relative  to  sendbuf)	from which to take the
		 outgoing data to process i.

       sendtype	 Datatype of send buffer elements (handle).

       recvcount Number of elements in receive buffer (integer).

       recvtype	 Datatype of receive buffer elements (handle).

       root	 Rank of sending process (integer).

       comm	 Communicator (handle).

OUTPUT PARAMETERS
       recvbuf	 Address of receive buffer (choice).

       IERROR	 Fortran only: Error status (integer).

DESCRIPTION
       MPI_Scatterv is the inverse operation to MPI_Gatherv.

       MPI_Scatterv extends the functionality of  MPI_Scatter  by  allowing  a
       varying	count  of data to be sent to each process, since sendcounts is
       now an array.  It also allows more flexibility as to where the data  is
       taken from on the root, by providing the new argument, displs.

       The outcome is as if the root executed n send operations,

	   MPI_Send(sendbuf + displs[i] * extent(sendtype), \
		    sendcounts[i], sendtype, i, ...)

       and each process executed a receive,

	   MPI_Recv(recvbuf, recvcount, recvtype, root, ...)

       The send buffer is ignored for all nonroot processes.

       The type signature implied by sendcounts[i], sendtype at the root must
       be equal to the type signature implied by recvcount, recvtype at
       process i (however, the type maps may be different). This implies that
       the amount of data sent must be equal to the amount of data received,
       pairwise between each process and the root. Distinct type maps between
       sender and receiver are still allowed.
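
       For instance (a minimal sketch; the count of 4 and the 2-int
       contiguous type are chosen purely for illustration), the root may
       send 4 elements of MPI_INT to each process while every process
       receives the same data as 2 elements of a contiguous 2-int datatype;
       the type signatures match even though the type maps differ:

           /* Illustrative fragment; comm, root, and sendbuf are assumed
            * to be set up elsewhere. */
           MPI_Comm comm;
           MPI_Datatype pair;
           int root, gsize, i, *sendbuf, *scounts, *displs, recvbuf[4];
           ...
           MPI_Comm_size(comm, &gsize);
           scounts = (int *)malloc(gsize*sizeof(int));
           displs  = (int *)malloc(gsize*sizeof(int));
           for (i=0; i<gsize; ++i) {
               scounts[i] = 4;           /* root sends 4 MPI_INT per rank */
               displs[i]  = i * 4;
           }
           /* each rank receives the same 4 ints as 2 elements of "pair" */
           MPI_Type_contiguous(2, MPI_INT, &pair);
           MPI_Type_commit(&pair);
           MPI_Scatterv(sendbuf, scounts, displs, MPI_INT,
                        recvbuf, 2, pair, root, comm);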

       All arguments to the function are significant on process root, while on
       other  processes,  only	arguments  recvbuf, recvcount, recvtype, root,
       comm are significant. The arguments root and comm must  have  identical
       values on all processes.

       The  specification of counts, types, and displacements should not cause
       any location on the root to be read more than once.

       Example 1: The reverse of Example 5 in the MPI_Gatherv manpage. We have
       a varying stride between blocks on the sending (root) side; on the
       receiving side, process i receives 100-i elements into the ith column
       of a 100 x 150 C array.

           MPI_Comm comm;
           int gsize, recvarray[100][150], *rptr;
           int root, *sendbuf, myrank, *stride;
           MPI_Datatype rtype;
           int i, *displs, *scounts, offset;
           ...
           MPI_Comm_size( comm, &gsize);
           MPI_Comm_rank( comm, &myrank );

           stride = (int *)malloc(gsize*sizeof(int));
           ...
           /* stride[i] for i = 0 to gsize-1 is set somehow;
            * sendbuf comes from elsewhere
            */
           ...
           displs = (int *)malloc(gsize*sizeof(int));
           scounts = (int *)malloc(gsize*sizeof(int));
           offset = 0;
           for (i=0; i<gsize; ++i) {
               displs[i] = offset;
               offset += stride[i];
               scounts[i] = 100 - i;
           }
           /* create the datatype for the column we are receiving */
           MPI_Type_vector( 100-myrank, 1, 150, MPI_INT, &rtype);
           MPI_Type_commit( &rtype );
           rptr = &recvarray[0][myrank];
           MPI_Scatterv(sendbuf, scounts, displs, MPI_INT,
                        rptr, 1, rtype, root, comm);

       Example 2: The reverse of Example 1 in the MPI_Gather manpage. The root
       process scatters sets of 100 ints to the other processes, but the sets
       of 100 are stride ints apart in the sending buffer. This requires use
       of MPI_Scatterv, where stride >= 100.

           MPI_Comm comm;
           int gsize, *sendbuf;
           int root, rbuf[100], i, *displs, *scounts;
           int stride;

           ...
           /* stride (>= 100) is set somehow */

           MPI_Comm_size(comm, &gsize);
           sendbuf = (int *)malloc(gsize*stride*sizeof(int));
           ...
           /* sendbuf is filled on the root */
           displs = (int *)malloc(gsize*sizeof(int));
           scounts = (int *)malloc(gsize*sizeof(int));
           for (i=0; i<gsize; ++i) {
               displs[i] = i*stride;
               scounts[i] = 100;
           }
           MPI_Scatterv(sendbuf, scounts, displs, MPI_INT,
                        rbuf, 100, MPI_INT, root, comm);
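
       As a further, self-contained sketch (the root rank of 0 and the
       triangular distribution of counts are purely illustrative), a complete
       program in which rank i receives i+1 integers could look as follows:

           #include <mpi.h>
           #include <stdio.h>
           #include <stdlib.h>

           int main(int argc, char *argv[])
           {
               int gsize, myrank, i, total = 0;
               int *sendbuf = NULL, *scounts, *displs, *rbuf;

               MPI_Init(&argc, &argv);
               MPI_Comm_size(MPI_COMM_WORLD, &gsize);
               MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

               /* rank i receives i+1 integers; counts and displacements
                * are only significant on the root, but computing them
                * everywhere lets each rank know its own receive count */
               scounts = (int *)malloc(gsize*sizeof(int));
               displs  = (int *)malloc(gsize*sizeof(int));
               for (i = 0; i < gsize; ++i) {
                   scounts[i] = i + 1;
                   displs[i]  = total;
                   total     += scounts[i];
               }
               if (myrank == 0) {
                   /* only the root builds the packed send buffer */
                   sendbuf = (int *)malloc(total*sizeof(int));
                   for (i = 0; i < total; ++i)
                       sendbuf[i] = i;
               }

               rbuf = (int *)malloc(scounts[myrank]*sizeof(int));
               MPI_Scatterv(sendbuf, scounts, displs, MPI_INT,
                            rbuf, scounts[myrank], MPI_INT, 0,
                            MPI_COMM_WORLD);

               printf("rank %d received %d ints starting with %d\n",
                      myrank, scounts[myrank], rbuf[0]);

               free(rbuf); free(displs); free(scounts); free(sendbuf);
               MPI_Finalize();
               return 0;
           }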

USE OF IN-PLACE OPTION
       When the communicator is an intracommunicator, you can perform a
       scatter operation in place (the output buffer is used as the input
       buffer).  Use the variable MPI_IN_PLACE as the value of recvbuf on the
       root process.  In this case, recvcount and recvtype are ignored, and
       the root process sends no data to itself.
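
       A minimal sketch of the in-place call (comm, root, and the buffers are
       assumed to be set up elsewhere; the count of 100 on the nonroot side
       is illustrative):

           if (myrank == root) {
               /* the root's own block already sits in sendbuf; recvcount
                * and recvtype are ignored when MPI_IN_PLACE is supplied */
               MPI_Scatterv(sendbuf, scounts, displs, MPI_INT,
                            MPI_IN_PLACE, 0, MPI_INT, root, comm);
           } else {
               /* sendbuf, scounts, displs, and sendtype are not
                * significant on nonroot processes */
               MPI_Scatterv(NULL, NULL, NULL, MPI_INT,
                            rbuf, 100, MPI_INT, root, comm);
           }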

       Note that MPI_IN_PLACE is a special kind of  value;  it	has  the  same
       restrictions on its use as MPI_BOTTOM.

       Because	the  in-place  option converts the receive buffer into a send-
       and-receive buffer, a Fortran binding that includes  INTENT  must  mark
       these as INOUT, not OUT.

WHEN COMMUNICATOR IS AN INTER-COMMUNICATOR
       When the communicator is an inter-communicator, the root process in the
       first group sends data to all processes in the second group.  The first
       group  defines  the  root  process.   That process uses MPI_ROOT as the
       value of its root argument.  The remaining processes use	 MPI_PROC_NULL
       as the value of their root argument.  All processes in the second group
       use the rank of that root process in the first group as	the  value  of
       their  root argument.   The receive buffer argument of the root process
       in the first group must be consistent with the receive buffer  argument
       of the processes in the second group.
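
       A hedged sketch of the three roles described above (intercomm, the
       buffers, and the count of 100 are placeholders, and the root is
       assumed to have rank 0 in the first group):

           /* first group: the sending side */
           if (in_first_group) {
               if (is_root)
                   MPI_Scatterv(sendbuf, scounts, displs, MPI_INT,
                                NULL, 0, MPI_INT, MPI_ROOT, intercomm);
               else
                   MPI_Scatterv(NULL, NULL, NULL, MPI_INT,
                                NULL, 0, MPI_INT, MPI_PROC_NULL, intercomm);
           } else {
               /* second group: every process receives from the remote
                * root, named by its rank (assumed 0) in the first group */
               MPI_Scatterv(NULL, NULL, NULL, MPI_INT,
                            rbuf, 100, MPI_INT, 0, intercomm);
           }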

ERRORS
       Almost all MPI routines return an error value; C routines return it as
       the value of the function and Fortran routines in the last argument.
       C++ functions do not return errors.  If the default error handler is
       set to MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception
       mechanism will be used to throw an MPI::Exception object.

       Before  the  error  value is returned, the current MPI error handler is
       called. By default, this error handler aborts the MPI job,  except  for
       I/O   function	errors.	  The	error  handler	may  be	 changed  with
       MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN
       may  be	used  to cause error values to be returned. Note that MPI does
       not guarantee that an MPI program can continue past an error.
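
       For example (a sketch only; how the application reacts to the failure
       is its own choice), error values can be returned and inspected by
       installing MPI_ERRORS_RETURN on the communicator:

           int rc, len;
           char msg[MPI_MAX_ERROR_STRING];

           MPI_Comm_set_errhandler(comm, MPI_ERRORS_RETURN);
           rc = MPI_Scatterv(sendbuf, scounts, displs, MPI_INT,
                             rbuf, 100, MPI_INT, root, comm);
           if (rc != MPI_SUCCESS) {
               MPI_Error_string(rc, msg, &len);
               fprintf(stderr, "MPI_Scatterv failed: %s\n", msg);
               MPI_Abort(comm, rc);
           }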

SEE ALSO
       MPI_Gather
       MPI_Gatherv
       MPI_Scatter

Open MPI 1.2			September 2006		MPI_Scatterv(3OpenMPI)