MPI_Win_allocate_shared man page on DragonFly

MPI_Win_allocate_shared(3)	      MPI	    MPI_Win_allocate_shared(3)

NAME
       MPI_Win_allocate_shared - Create an MPI Window object for one-sided
       communication and shared memory access, and allocate memory at each
       process.

SYNOPSIS
       int MPI_Win_allocate_shared(MPI_Aint size, int disp_unit, MPI_Info info, MPI_Comm comm,
                                   void *baseptr, MPI_Win *win)
       This is a collective call executed by all processes in the group of
       comm. On each process i, it allocates memory of at least size bytes
       that is shared among all processes in comm, and returns a pointer to
       the locally allocated segment in baseptr that can be used for
       load/store accesses on the calling process. The locally allocated
       memory can be the target of load/store accesses by remote processes;
       the base pointers for other processes can be queried using the
       function MPI_Win_shared_query .

       The call also returns a window object that can be used by all
       processes in comm to perform RMA operations. The size argument may be
       different at each process and size = 0 is valid. It is the user's
       responsibility to ensure that the communicator comm represents a
       group of processes that can create a shared memory segment that can
       be accessed by all processes in the group. The allocated memory is
       contiguous across process ranks unless the info key
       alloc_shared_noncontig is specified. Contiguous across process ranks
       means that the first address in the memory segment of process i is
       consecutive with the last address in the memory segment of process
       i − 1.  This may enable the user to calculate remote address offsets
       with local information only.
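
       A minimal sketch of the typical usage pattern follows, assuming that
       the processes of interest share a node (the communicator is first
       restricted with MPI_Comm_split_type and MPI_COMM_TYPE_SHARED) and
       that the default contiguous layout is in effect; the segment size
       NDOUBLES and the printed message are illustrative only.

       #include <mpi.h>
       #include <stdio.h>

       #define NDOUBLES 1024   /* illustrative per-process segment size */

       int main(int argc, char **argv)
       {
           MPI_Init(&argc, &argv);

           /* Restrict the communicator to processes that can share memory
              (typically the processes on one node). */
           MPI_Comm shmcomm;
           MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                               MPI_INFO_NULL, &shmcomm);

           int rank, nprocs;
           MPI_Comm_rank(shmcomm, &rank);
           MPI_Comm_size(shmcomm, &nprocs);

           /* Collectively allocate the shared segment; each process
              contributes NDOUBLES doubles. */
           double *base;
           MPI_Win win;
           MPI_Win_allocate_shared((MPI_Aint)(NDOUBLES * sizeof(double)),
                                   (int)sizeof(double), MPI_INFO_NULL,
                                   shmcomm, &base, &win);

           /* Query rank 0's base pointer; with the default contiguous
              layout, rank k's segment starts NDOUBLES*k doubles later. */
           MPI_Aint qsize;
           int qdisp;
           double *base0;
           MPI_Win_shared_query(win, 0, &qsize, &qdisp, &base0);

           /* Direct load/store access under a passive-target epoch. */
           MPI_Win_lock_all(0, win);
           base[0] = (double)rank;       /* store into the local segment */
           MPI_Win_sync(win);            /* make the store visible */
           MPI_Barrier(shmcomm);
           MPI_Win_sync(win);            /* observe the other ranks' stores */
           if (rank == 0 && nprocs > 1)
               printf("value written by rank %d: %g\n",
                      nprocs - 1, base0[(nprocs - 1) * NDOUBLES]);
           MPI_Win_unlock_all(win);

           MPI_Win_free(&win);
           MPI_Comm_free(&shmcomm);
           MPI_Finalize();
           return 0;
       }

       If the info key alloc_shared_noncontig were set to true, the offset
       arithmetic on base0 above would no longer be valid, and
       MPI_Win_shared_query would have to be called once per target rank
       instead.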

INPUT PARAMETERS
       size   - size of window in bytes (nonnegative integer)
       disp_unit
	      - local unit size for displacements, in bytes (positive integer)
       info   - info argument (handle)
       comm   - communicator (handle)

OUTPUT PARAMETERS
       baseptr
	      - initial address of window (choice)
       win    - window object returned by the call (handle)

THREAD AND INTERRUPT SAFETY
       This routine is thread-safe.  This means that this routine may be
       safely used by multiple threads without the need for any
       user-provided thread locks.  However, the routine is not interrupt
       safe.  Typically, this is due to the use of memory allocation
       routines such as malloc or other non-MPICH runtime routines that are
       themselves not interrupt-safe.

NOTES FOR FORTRAN
       All MPI routines in Fortran (except for MPI_WTIME and MPI_WTICK )
       have an additional argument ierr at the end of the argument list.
       ierr is an integer and has the same meaning as the return value of
       the routine in C.  In Fortran, MPI routines are subroutines, and are
       invoked with the call statement.

       All MPI objects (e.g., MPI_Datatype , MPI_Comm ) are of type INTEGER in
       Fortran.

ERRORS
       All MPI routines (except MPI_Wtime and MPI_Wtick ) return an error
       value; C routines as the value of the function and Fortran routines
       in the last argument.  Before the value is returned, the current MPI
       error handler is called.  By default, this error handler aborts the
       MPI job.  The error handler may be changed with
       MPI_Comm_set_errhandler (for communicators), MPI_File_set_errhandler
       (for files), and MPI_Win_set_errhandler (for RMA windows).  The MPI-1
       routine MPI_Errhandler_set may be used but its use is deprecated.
       The predefined error handler MPI_ERRORS_RETURN may be used to cause
       error values to be returned.  Note that MPI does not guarantee that
       an MPI program can continue past an error; however, MPI
       implementations will attempt to continue whenever possible.

       MPI_SUCCESS
              - No error; MPI routine completed successfully.
       MPI_ERR_ARG
              - Invalid argument.  Some argument is invalid and is not
              identified by a specific error class (e.g., MPI_ERR_RANK ).
       MPI_ERR_COMM
              - Invalid communicator.  A common error is to use a null
              communicator in a call (not even allowed in MPI_Comm_rank ).
       MPI_ERR_INFO
              - Invalid Info
       MPI_ERR_OTHER
              - Other error; use MPI_Error_string to get more information
              about this error code.
       MPI_ERR_SIZE
              - Invalid size argument.
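
       A brief sketch of returning error values instead of aborting follows;
       it installs MPI_ERRORS_RETURN on the communicator (whose error
       handler applies while the window is being created) and decodes a
       failure with MPI_Error_string.  The helper name
       allocate_shared_or_report is hypothetical.

       #include <mpi.h>
       #include <stdio.h>

       /* Illustrative helper: allocate a shared window, reporting failures
          instead of aborting.  Returns the MPI error code. */
       static int allocate_shared_or_report(MPI_Aint size, MPI_Comm comm,
                                            double **base, MPI_Win *win)
       {
           /* The window does not exist yet, so errors raised by the call
              are handled by the communicator's error handler. */
           MPI_Comm_set_errhandler(comm, MPI_ERRORS_RETURN);

           int err = MPI_Win_allocate_shared(size, (int)sizeof(double),
                                             MPI_INFO_NULL, comm,
                                             base, win);
           if (err != MPI_SUCCESS) {
               char msg[MPI_MAX_ERROR_STRING];
               int len;
               MPI_Error_string(err, msg, &len);
               fprintf(stderr, "MPI_Win_allocate_shared: %s\n", msg);
           }
           return err;
       }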


SEE ALSO
       MPI_Win_allocate  MPI_Win_create  MPI_Win_create_dynamic  MPI_Win_free
       MPI_Win_shared_query

				   11/9/2015	    MPI_Win_allocate_shared(3)