NAME
MPI_Neighbor_alltoallv, MPI_Ineighbor_alltoallv - All processes send
different amounts of data to, and receive different amounts of data
from, all neighbors
SYNTAX
C Syntax
#include <mpi.h>
int MPI_Neighbor_alltoallv(const void *sendbuf, const int sendcounts[],
        const int sdispls[], MPI_Datatype sendtype,
        void *recvbuf, const int recvcounts[],
        const int rdispls[], MPI_Datatype recvtype, MPI_Comm comm)
int MPI_Ineighbor_alltoallv(const void *sendbuf, const int sendcounts[],
        const int sdispls[], MPI_Datatype sendtype,
        void *recvbuf, const int recvcounts[],
        const int rdispls[], MPI_Datatype recvtype, MPI_Comm comm,
        MPI_Request *request)
Fortran Syntax
INCLUDE 'mpif.h'
MPI_NEIGHBOR_ALLTOALLV(SENDBUF, SENDCOUNTS, SDISPLS, SENDTYPE,
RECVBUF, RECVCOUNTS, RDISPLS, RECVTYPE, COMM, IERROR)
<type> SENDBUF(*), RECVBUF(*)
INTEGER SENDCOUNTS(*), SDISPLS(*), SENDTYPE
INTEGER RECVCOUNTS(*), RDISPLS(*), RECVTYPE
INTEGER COMM, IERROR
MPI_INEIGHBOR_ALLTOALLV(SENDBUF, SENDCOUNTS, SDISPLS, SENDTYPE,
RECVBUF, RECVCOUNTS, RDISPLS, RECVTYPE, COMM, REQUEST, IERROR)
<type> SENDBUF(*), RECVBUF(*)
INTEGER SENDCOUNTS(*), SDISPLS(*), SENDTYPE
INTEGER RECVCOUNTS(*), RDISPLS(*), RECVTYPE
INTEGER COMM, REQUEST, IERROR
INPUT PARAMETERS
sendbuf Starting address of send buffer.
sendcounts Integer array, where entry i specifies the number of
elements to send to neighbor i.
sdispls Integer array, where entry i specifies the displacement
(offset from sendbuf, in units of sendtype) from which to
send data to neighbor i.
sendtype Datatype of send buffer elements.
recvcounts Integer array, where entry j specifies the number of
elements to receive from neighbor j.
rdispls Integer array, where entry j specifies the displacement
(offset from recvbuf, in units of recvtype) to which data
from neighbor j should be written.
recvtype Datatype of receive buffer elements.
comm Communicator over which data is to be exchanged.
OUTPUT PARAMETERS
recvbuf Address of receive buffer.
request Request (handle, non-blocking only).
IERROR Fortran only: Error status.
DESCRIPTION
MPI_Neighbor_alltoallv is a generalized collective operation in which
all processes send data to and receive data from all neighbors. It adds
flexibility to MPI_Neighbor_alltoall by allowing the user to specify
the data to send and receive vector-style (via a displacement and
element count). The operation of this routine can be thought of as
follows, where each process performs 2n independent point-to-point
communications (n being the number of neighbors in the topology of
communicator comm). The neighbors and buffer layout are determined by
the topology of comm.
MPI_Cart_get(comm, maxdims, dims, periods, coords);
for (dim = 0, i = 0 ; dim < ndims ; ++dim) {
    MPI_Cart_shift(comm, dim, 1, &r0, &r1);
    MPI_Isend(sendbuf + sdispls[i] * extent(sendtype),
              sendcounts[i], sendtype, r0, ..., comm, ...);
    MPI_Irecv(recvbuf + rdispls[i] * extent(recvtype),
              recvcounts[i], recvtype, r0, ..., comm, ...);
    ++i;
    MPI_Isend(sendbuf + sdispls[i] * extent(sendtype),
              sendcounts[i], sendtype, r1, ..., comm, ...);
    MPI_Irecv(recvbuf + rdispls[i] * extent(recvtype),
              recvcounts[i], recvtype, r1, ..., comm, ...);
    ++i;
}
Process j sends the k-th block of its local sendbuf to neighbor k,
which places the data in the j-th block of its local recvbuf.
When a pair of processes exchanges data, each may pass different ele‐
ment count and datatype arguments so long as the sender specifies the
same amount of data to send (in bytes) as the receiver expects to
receive.
Note that process i may send a different amount of data to process j
than it receives from process j. Also, a process may send entirely dif‐
ferent amounts of data to different processes in the communicator.
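The following complete program is a minimal sketch, not taken from the
MPI standard; the counts, displacements, and buffer contents are
illustrative assumptions. It exchanges a variable number of integers
with the two neighbors on a periodic one-dimensional Cartesian
topology, where process r sends r+1 integers to each neighbor:

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int rank, size, left, right;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Periodic 1-D Cartesian topology: every process has exactly two
       neighbors, first in the negative, then in the positive
       direction. */
    int dims[1], periods[1] = { 1 };
    dims[0] = size;
    MPI_Comm cart;
    MPI_Cart_create(MPI_COMM_WORLD, 1, dims, periods, 0, &cart);
    MPI_Cart_shift(cart, 0, 1, &left, &right);

    /* Each process sends rank+1 ints to each neighbor, so it receives
       left+1 ints from the left and right+1 ints from the right. */
    int sendcounts[2] = { rank + 1, rank + 1 };
    int sdispls[2]    = { 0, rank + 1 };
    int recvcounts[2] = { left + 1, right + 1 };
    int rdispls[2]    = { 0, left + 1 };

    int *sendbuf = malloc(2 * (rank + 1) * sizeof(int));
    int *recvbuf = malloc((left + right + 2) * sizeof(int));
    for (int i = 0; i < 2 * (rank + 1); ++i)
        sendbuf[i] = rank;

    MPI_Neighbor_alltoallv(sendbuf, sendcounts, sdispls, MPI_INT,
                           recvbuf, recvcounts, rdispls, MPI_INT, cart);

    printf("rank %d: first int from left neighbor %d is %d\n",
           rank, left, recvbuf[0]);

    free(sendbuf);
    free(recvbuf);
    MPI_Comm_free(&cart);
    MPI_Finalize();
    return 0;
}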
NEIGHBOR ORDERING
For a distributed graph topology, created with MPI_Dist_graph_create,
the sequence of neighbors in the send and receive buffers at each
process is defined as the sequence returned by MPI_Dist_graph_neighbors
for destinations and sources, respectively. For a general graph topol‐
ogy, created with MPI_Graph_create, the order of neighbors in the send
and receive buffers is defined as the sequence of neighbors as returned
by MPI_Graph_neighbors. Note that general graph topologies should
generally be replaced by distributed graph topologies.
For a Cartesian topology, created with MPI_Cart_create, the sequence of
neighbors in the send and receive buffers at each process is defined by
the order of the dimensions, first the neighbor in the negative
direction and then in the positive direction, with displacement 1. The
numbers of
sources and destinations in the communication routines are 2*ndims with
ndims defined in MPI_Cart_create. If a neighbor does not exist, i.e.,
at the border of a Cartesian topology in the case of a non-periodic
virtual grid dimension (i.e., periods[...]==false), then this neighbor
is defined to be MPI_PROC_NULL.
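As an illustration, this fixed neighbor order can be enumerated with
MPI_Cart_shift. The helper below is a sketch; the function name
cart_neighbor_order is hypothetical:

#include <mpi.h>

/* Fill 'neighbors' (length 2*ndims) with the neighbor ranks of the
   Cartesian communicator 'cart' in exactly the order used by the
   neighborhood collectives: for each dimension, the negative-direction
   neighbor first, then the positive-direction one. */
static void cart_neighbor_order(MPI_Comm cart, int *neighbors)
{
    int ndims;
    MPI_Cartdim_get(cart, &ndims);
    for (int dim = 0, i = 0; dim < ndims; ++dim, i += 2) {
        /* MPI_Cart_shift returns the source (negative direction) and
           destination (positive direction) ranks for displacement 1;
           either may be MPI_PROC_NULL at the border of a non-periodic
           dimension. */
        MPI_Cart_shift(cart, dim, 1, &neighbors[i], &neighbors[i + 1]);
    }
}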
If a neighbor in any of the functions is MPI_PROC_NULL, then the
neighborhood collective communication behaves like a point-to-point
communication with MPI_PROC_NULL in this direction. That is, the buffer
is still part of the sequence of neighbors but it is neither
communicated nor updated.
NOTES
The MPI_IN_PLACE option for sendbuf is not meaningful for this
operation.
The specification of counts and displacements should not cause any
location to be written more than once.
All arguments on all processes are significant. The comm argument, in
particular, must describe the same communicator on all processes.
The offsets of sdispls and rdispls are measured in units of sendtype
and recvtype, respectively. Compare this to MPI_Neighbor_alltoallw,
where these offsets are measured in bytes.
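For instance (a sketch; the variable names are illustrative), the same
buffer location, element index 4, is expressed differently in the units
each routine expects:

MPI_Aint lb, extent;
MPI_Type_get_extent(MPI_INT, &lb, &extent);

int      sdispl_v = 4;          /* MPI_Neighbor_alltoallv: elements */
MPI_Aint sdispl_w = 4 * extent; /* MPI_Neighbor_alltoallw: bytes    */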
ERRORS
Almost all MPI routines return an error value; C routines as the value
of the function and Fortran routines in the last argument.
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for
I/O function errors. The error handler may be changed with
MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN
may be used to cause error values to be returned. Note that MPI does
not guarantee that an MPI program can continue past an error.
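For example, a program that prefers to examine errors itself might
proceed as follows (a sketch; cart and the buffers are assumed to be
set up as in the earlier example, with <stdio.h> included):

MPI_Comm_set_errhandler(cart, MPI_ERRORS_RETURN);

int err = MPI_Neighbor_alltoallv(sendbuf, sendcounts, sdispls, MPI_INT,
                                 recvbuf, recvcounts, rdispls, MPI_INT,
                                 cart);
if (err != MPI_SUCCESS) {
    char msg[MPI_MAX_ERROR_STRING];
    int  len;
    MPI_Error_string(err, msg, &len);
    fprintf(stderr, "MPI_Neighbor_alltoallv: %s\n", msg);
}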
SEE ALSO
MPI_Neighbor_alltoall
MPI_Neighbor_alltoallw
MPI_Cart_create
MPI_Graph_create
MPI_Dist_graph_create