Quick Access

  • MPICH Home Page
    ANL/MSU MPI implementation

  • Download MPICH
    Free implementation of MPI

  • Patches for MPICH
    Updates for the current release of MPICH

  • MPI Standard 1.1


  • License for MPICH
    License for ROMIO
    (unofficial summary: freely available)
  • MPICH Documentation

  • MPI Home Page
  • MPICH - A Portable Implementation of MPI

    MPICH is a freely available, portable implementation of MPI, the Standard for message-passing libraries.

    Documentation on installing and using MPICH is available, as well as manual pages for the routines and commands. The current version of MPICH is 1.2.7p1 and was released on November 4th, 2005.
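    As a quick illustration of what an MPICH program looks like, here is a minimal MPI "hello world" in C. This is a sketch, not part of the MPICH documentation; it assumes MPICH is installed and that its mpicc compiler wrapper and mpirun launcher are on your PATH.

    ```c
    /* hello.c -- minimal MPI program.
       Compile:  mpicc -o hello hello.c
       Run:      mpirun -np 4 ./hello   (hypothetical 4-process launch) */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);                  /* start up MPI */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's id */
        MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total process count */

        printf("Hello from process %d of %d\n", rank, size);

        MPI_Finalize();                          /* shut down MPI */
        return 0;
    }
    ```

    Each process prints its rank; with 4 processes you get one line per rank, in no guaranteed order.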

    The MPICH implementation is described in this paper (postscript). A bibtex bibliography of papers relating to MPICH is available. If you use MPICH for work that appears in a paper or report, please use these references. Additional papers and technical reports are also available.

    Getting the MPICH implementation

    MPICH is freely available and is distributed as source.
  • The Unix (all flavors) version of MPICH
  • The Microsoft Windows version of MPICH.
  • MPICH-G2, the Globus version of MPICH.
  • A list of known bugs and patchfiles is available.

    Bugs, problems, and suggestions should be sent to mpibugs.


    MPICH2 is an all-new implementation of MPI, designed to support research into high-performance implementations of MPI-1 and MPI-2 functionality. In addition to the features in MPICH, MPICH2 includes support for one-sided communication, dynamic processes, intercommunicator collective operations, and expanded MPI-IO functionality. Clusters consisting of both single-processor and SMP nodes are supported. With the exception of users requiring the communication of heterogeneous data, we strongly encourage everyone to consider switching to MPICH2. Researchers interested in using MPICH as a base for their research into MPI implementations should definitely use MPICH2.
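    To give a flavor of the one-sided communication added in MPI-2, here is a small sketch in C: each process exposes one integer in a window and puts a value into its right neighbor's window inside a fence epoch. This is an illustration under the assumption of a working MPI-2 library, not MPICH2-specific code.

    ```c
    /* put_ring.c -- one-sided MPI_Put around a ring of processes. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size, value, target;
        int window_buf = -1;             /* local memory exposed to remote Put */
        MPI_Win win;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Expose one int of local memory as an RMA window. */
        MPI_Win_create(&window_buf, sizeof(int), sizeof(int),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        value = rank + 100;
        target = (rank + 1) % size;      /* right neighbor, wrapping around */

        /* Fence-delimited epoch: every process writes into its neighbor's
           window without the neighbor making a matching receive call. */
        MPI_Win_fence(0, win);
        MPI_Put(&value, 1, MPI_INT, target, 0, 1, MPI_INT, win);
        MPI_Win_fence(0, win);

        printf("rank %d received %d\n", rank, window_buf);

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }
    ```

    After the second fence, each rank's window_buf holds 100 plus the rank of its left neighbor; no explicit receive was posted, which is the point of one-sided communication.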

    MPICH2 is a unified source distribution, supporting most flavors of Unix and recent versions of Windows. In addition, binary distributions are available for Windows platforms. Please see the MPICH2 home page for more details and for download instructions.


    The books Using MPI and Using MPI-2, published by MIT Press, describe how to use MPI and make use of some of the special features of this implementation.

    Tools for MPICH

    A list of tools that work with MPICH is available.

    Year 2000 Statement

    MPICH is Y2K compliant. The MPI standard has no calendar or date related operations, and there are no uses of dates or calendar-related operations in any of the MPICH code. For those who are interested, our checks did reveal a use of date information in one of our scripts that we use to develop and manage MPICH; we fixed this in 1998. The script in question was never in any MPICH distribution, so all versions of MPICH are Y2K compliant.

    Projects Using MPICH

    We have begun a page that describes some projects that are using MPICH. If you would like to be mentioned on this page, please send mail to gropp@mcs.anl.gov briefly describing your project. The gallery of MPICH projects is here.


    Example programs and output are available.

    Supported Systems

    MPICH supports a wide range of systems. These include workstation clusters, shared memory systems, and massively parallel supercomputers. A list of supported systems is available.

    Thread Safety

    The MPICH implementation is not thread-safe. In many cases, it may be possible to use MPICH in what MPI-2 calls MPI_THREAD_FUNNELED or MPI_THREAD_SERIALIZED mode when kernel (as opposed to user) threads are used. We plan to support MPI_THREAD_MULTIPLE in a later release.
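    In MPI-2 terms, a program declares the thread level it needs at startup. The sketch below shows the MPI-2 idiom; note that MPI_Init_thread is an MPI-2 routine and may not be available in all MPICH 1.x builds, in which case you would call plain MPI_Init and enforce the funneled discipline by convention.

    ```c
    /* thread_level.c -- requesting a thread support level (MPI-2 idiom). */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int provided;

        /* Request FUNNELED: the process may be multithreaded, but only
           the thread that called MPI_Init_thread will make MPI calls. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

        /* The standard orders the levels SINGLE < FUNNELED < SERIALIZED
           < MULTIPLE, so a simple comparison checks what we got. */
        if (provided < MPI_THREAD_FUNNELED)
            printf("library only provides a lower thread level (%d)\n",
                   provided);

        /* ... application code: keep all MPI calls on the main thread ... */

        MPI_Finalize();
        return 0;
    }
    ```

    The library may grant less than was requested, so checking provided (rather than assuming the request succeeded) is the portable pattern.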

    Porting MPICH to other systems

    MPICH is designed to be relatively easy to port to other systems. Information on the design of MPICH is available here.

    MPICH has been used as the basis of many MPI implementations. A page of links to other MPI implementations based on MPICH is available.