10.1.10. Profiling



This section specifies the requirements of a C++ profiling interface to MPI.


Advice to implementors.

Since the main goal of profiling is to intercept function calls from user code, it is the implementor's decision how to layer the underlying implementation to allow function calls to be intercepted and profiled. If an implementation of the MPI C++ bindings is layered on top of MPI bindings in another language (such as C), or if the C++ bindings are layered on top of a profiling interface in another language, no extra profiling interface is necessary because the underlying MPI implementation already meets the MPI profiling interface requirements.

Native C++ MPI implementations that do not have access to other profiling interfaces must implement an interface that meets the requirements outlined in this section.

High-quality implementations can implement the interface outlined in this section in order to promote portable C++ profiling libraries. Implementors may wish to provide an option for whether or not to build the C++ profiling interface; C++ implementations that are already layered on top of bindings in another language or on top of another profiling interface will have to insert a third layer to implement the C++ profiling interface. (End of advice to implementors.)
To meet the requirements of the C++ MPI profiling interface, an implementation of the MPI functions must:

    1. Provide a mechanism through which all of the MPI-defined functions may be accessed with a name shift. Thus all of the MPI functions (which normally start with the prefix ``MPI::'') should also be accessible with the prefix ``PMPI::''.


    2. Ensure that those MPI functions which are not replaced may still be linked into an executable image without causing name clashes.


    3. Document the implementation of different language bindings of the MPI interface if they are layered on top of each other, so that profiler developers know whether they must implement the profiling interface for each binding, or can economize by implementing it only for the lowest-level routines.


    4. Where the implementation of different language bindings is done through a layered approach (e.g., the C++ binding is a set of ``wrapper'' functions which call the C implementation), ensure that these wrapper functions are separable from the rest of the library.

    This is necessary to allow a separate profiling library to be correctly implemented, since (at least with Unix linker semantics) the profiling library must contain these wrapper functions if it is to perform as expected. This requirement allows the author of the profiling library to extract these functions from the original MPI library and add them into the profiling library without bringing along any other unnecessary code.


    5. Provide a no-op routine MPI::Pcontrol in the MPI library (a minimal sketch of such a routine appears after this list).

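As an illustration of requirement 5, the no-op control routine can be trivial. The sketch below assumes the C++ binding's signature void Pcontrol(const int level, ...); the empty body is one possible no-op implementation, which a profiling library may replace with a version that interprets level as it sees fit:

void MPI::Pcontrol(const int level, ...) 
{ 
  // No-op: ignore the requested profiling level 
  (void) level; 
} 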

Advice to implementors.

There are (at least) two apparent options for implementing the C++ profiling interface: inheritance or caching. An inheritance-based approach may not be attractive because it may require a virtual inheritance implementation of the communicator classes. Thus, it is most likely that implementors will cache PMPI objects on their corresponding MPI objects. The caching scheme is outlined below.

The ``real'' entry points to each routine can be provided within a namespace PMPI. The non-profiling version can then be provided within a namespace MPI.

Caching instances of PMPI objects in the MPI handles provides the ``has a'' relationship that is necessary to implement the profiling scheme.

Each instance of an MPI object simply ``wraps up'' an instance of a PMPI object. MPI objects can then perform profiling actions before invoking the corresponding function in their internal PMPI object.

The key to making profiling work simply by re-linking programs is having a header file that declares all the MPI functions. The functions must be defined elsewhere, and compiled into a library. MPI constants should be declared extern in the MPI namespace. For example, the following is an excerpt from a sample mpi.h file:


Example Sample mpi.h file.

namespace PMPI { 
  class Comm { 
  public: 
    int Get_size() const; 
  }; 
  // etc. 
} 
 
namespace MPI { 
  class Comm { 
  public: 
    int Get_size() const; 
 
  private: 
    PMPI::Comm pmpi_comm; 
  }; 
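 
  // MPI constants are declared extern in the MPI namespace (as noted 
  // above) and defined separately; see constants.cc below.  The class 
  // name Intracomm is assumed to be declared elsewhere in this header. 
  class Intracomm; 
  extern const Intracomm COMM_WORLD; 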
} 

Note that all constructors, the assignment operator, and the destructor in the MPI class will need to initialize/destroy the internal PMPI object as appropriate.
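
For instance, a minimal sketch of how these special member functions might forward to the wrapped PMPI object is shown below. This assumes the corresponding declarations are added to the MPI::Comm class from the sample mpi.h above, and that PMPI::Comm is default constructible, copyable, and assignable; it is an illustration, not a required implementation.

MPI::Comm::Comm() : pmpi_comm() { } 
 
MPI::Comm::Comm(const MPI::Comm& other) : pmpi_comm(other.pmpi_comm) { } 
 
MPI::Comm& MPI::Comm::operator=(const MPI::Comm& other) 
{ 
  pmpi_comm = other.pmpi_comm;  // delegate to the internal PMPI object 
  return *this; 
} 
 
MPI::Comm::~Comm() { }  // pmpi_comm is destroyed automatically 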

The definitions of the functions must be in separate object files; the PMPI class member functions and the non-profiling versions of the MPI class member functions can be compiled into libmpi.a, while the profiling versions can be compiled into libpmpi.a. Note that the PMPI class member functions and the MPI constants must be in different object files than the non-profiling MPI class member functions in the libmpi.a library to prevent multiple definitions of MPI class member function names when linking both libmpi.a and libpmpi.a. For example:


Example pmpi.cc, to be compiled into libmpi.a.

int PMPI::Comm::Get_size() const 
{ 
  // Implementation of MPI_COMM_SIZE 
} 


Example constants.cc, to be compiled into libmpi.a.

const MPI::Intracomm MPI::COMM_WORLD; 


Example mpi_no_profile.cc, to be compiled into libmpi.a.

int MPI::Comm::Get_size() const 
{ 
  return pmpi_comm.Get_size(); 
} 


Example mpi_profile.cc, to be compiled into libpmpi.a.

int MPI::Comm::Get_size() const 
{ 
  // Do profiling stuff 
  int ret = pmpi_comm.Get_size(); 
  // More profiling stuff 
  return ret; 
} 
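
With this layout and typical Unix linker semantics, an application picks up the profiled versions of the MPI class member functions by linking the profiling library ahead of the MPI library (e.g., something like CC myprog.cc -lpmpi -lmpi, where the tool and option names are illustrative only), while linking against libmpi.a alone (e.g., CC myprog.cc -lmpi) yields the non-profiling versions. The link-line details are not mandated by MPI; they follow from the library split described above.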

(End of advice to implementors.)





