KSPPIPECR - Pipelined conjugate residual method. This method has only a single non-blocking reduction per iteration, compared to two blocking reductions for standard CR. The non-blocking reduction is overlapped by the matrix-vector product, but not by the preconditioner application.
See also KSPPIPECG, where the reduction is only overlapped with the matrix-vector product.
MPI configuration may be necessary for reductions to make asynchronous progress, which is important for performance of pipelined methods.
See the FAQ on the PETSc website for details.
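As with other KSP types, the pipelined method can be selected at runtime through the options database. The sketch below is illustrative: `./app` stands in for any PETSc application that calls KSPSetFromOptions(), the process count is arbitrary, and the asynchronous-progress environment variable shown is MPICH-specific (other MPI implementations use different mechanisms).

```shell
# Select the pipelined CR method from the command line.
# ./app is a placeholder for a PETSc application that calls KSPSetFromOptions().
# MPICH_ASYNC_PROGRESS=1 enables asynchronous-progress threads in MPICH so the
# non-blocking reduction can complete during the matrix-vector product; other
# MPI implementations require different settings.
MPICH_ASYNC_PROGRESS=1 mpiexec -n 4 ./app -ksp_type pipecr -ksp_monitor
```

Programmatically, the same choice is made with KSPSetType(ksp, KSPPIPECR).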
Contributed by: Pieter Ghysels, Universiteit Antwerpen, Intel Exascience lab Flanders
Reference: P. Ghysels and W. Vanroose, "Hiding global synchronization latency in the preconditioned Conjugate Gradient algorithm", submitted to Parallel Computing, 2012.
See Also: KSPCreate(), KSPSetType(), KSPPIPECG, KSPGROPPCG, KSPPGMRES, KSPCG, KSPCGUseSingleReduction()