Synopsis:
#include "petscsf.h"
PetscErrorCode VecScatterCreate(Vec x, IS ix, Vec y, IS iy, VecScatter *newsf)

Collective on Vec
Input Parameters:
  x  - a vector that defines the shape (parallel data layout) of the vectors from which we scatter
  y  - a vector that defines the shape (parallel data layout) of the vectors to which we scatter
  ix - the indices of x to scatter (if NULL, scatters all values)
  iy - the indices of y in which to hold the results (if NULL, fills the entire vector y)

Output Parameter:
  newsf - location to store the new scatter (SF) context
Options Database Keys:
  -vecscatter_view              - prints details of the communication
  -vecscatter_view ::ascii_info - prints fewer details about the communication
  -vecscatter_merge             - VecScatterBegin() handles all of the communication and VecScatterEnd() is a no-op; this eliminates the possibility of overlapping computation and communication
  -vecscatter_packongpu         - for GPU vectors, pack the needed entries on the GPU, copy the packed data to the CPU, then do the MPI communication. Otherwise, a contiguous segment encompassing the needed entries might be copied. Default is TRUE.
Notes:
Currently the MPI_Send() calls use the PERSISTENT versions. (This unfortunately requires that the same input and output arrays be used for each use; this is why we always pack the input into a work array before sending and unpack upon receiving, instead of using MPI datatypes to avoid the packing/unpacking.)
ix and iy cannot both be NULL.
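As a sketch of typical usage (the function name GatherHead and the choice of indices are illustrative, not part of this page), a scatter is created once and then applied with VecScatterBegin()/VecScatterEnd():

```c
#include <petscvec.h>

/* Illustrative sketch: gather the first five entries of a parallel
   vector x into a sequential vector y on every rank.  Assumes PETSc
   has already been initialized; error handling via CHKERRQ. */
PetscErrorCode GatherHead(Vec x)
{
  PetscErrorCode ierr;
  IS             ix;   /* indices of x to scatter */
  Vec            y;    /* sequential destination vector */
  VecScatter     sf;

  ierr = VecCreateSeq(PETSC_COMM_SELF, 5, &y);CHKERRQ(ierr);
  ierr = ISCreateStride(PETSC_COMM_SELF, 5, 0, 1, &ix);CHKERRQ(ierr);
  /* iy is NULL, so the scattered values fill all of y */
  ierr = VecScatterCreate(x, ix, y, NULL, &sf);CHKERRQ(ierr);
  ierr = VecScatterBegin(sf, x, y, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
  ierr = VecScatterEnd(sf, x, y, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
  ierr = VecScatterDestroy(&sf);CHKERRQ(ierr);
  ierr = ISDestroy(&ix);CHKERRQ(ierr);
  ierr = VecDestroy(&y);CHKERRQ(ierr);
  return 0;
}
```

The same scatter context can be reused for any vectors with the same layouts as x and y, so the (relatively expensive) communication setup is paid only once.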
Use VecScatterCreateToAll() to create a VecScatter that copies an MPI vector to sequential vectors on all MPI ranks. Use VecScatterCreateToZero() to create a VecScatter that copies an MPI vector to a sequential vector on MPI rank 0. These special VecScatters have better performance than general ones.
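For the rank-0 case, a minimal sketch (the function name CollectOnRankZero is illustrative) might look like the following; VecScatterCreateToZero() creates both the scatter context and the destination vector, which has full length on rank 0 and length 0 on all other ranks:

```c
#include <petscvec.h>

/* Illustrative sketch: collect an entire parallel vector x onto rank 0.
   Assumes PETSc has already been initialized. */
PetscErrorCode CollectOnRankZero(Vec x)
{
  PetscErrorCode ierr;
  Vec            xseq; /* created by VecScatterCreateToZero() */
  VecScatter     ctx;

  ierr = VecScatterCreateToZero(x, &ctx, &xseq);CHKERRQ(ierr);
  ierr = VecScatterBegin(ctx, x, xseq, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
  ierr = VecScatterEnd(ctx, x, xseq, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
  /* ... use xseq on rank 0 ... */
  ierr = VecScatterDestroy(&ctx);CHKERRQ(ierr);
  ierr = VecDestroy(&xseq);CHKERRQ(ierr);
  return 0;
}
```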