Here you can find a list of the larger past and present projects from the PMRS group.

  • MPICH – MPICH is a high-performance, widely portable implementation of the Message Passing Interface (MPI) standard. The goals of MPICH are (1) to provide an MPI implementation that efficiently supports different computation and communication platforms, including commodity clusters (desktop systems, shared-memory systems, and multicore architectures), high-speed networks, and proprietary high-end computing systems (Blue Gene, Cray); and (2) to enable cutting-edge research in MPI through an easy-to-extend modular framework for derived implementations.
  • ARGO – Argo is a new exascale operating system and runtime system designed to support extreme-scale scientific computation. It is built on a new, agile, modular architecture that supports both global optimization and local control. It aims to efficiently leverage new chip and interconnect technologies while addressing the new modalities, programming environments, and workflows expected at exascale. It is designed from the ground up to run future high-performance computing applications at extreme scales.
  • DMEM – The DMEM project aims to homogenize access to the different memory spaces that may be found in a heterogeneous memory platform. These include, but are not limited to, coprocessor memories (such as those of GPUs and Intel MIC devices), NVRAM, and scratchpad memory. The main goal of this project is to seamlessly expose heterogeneous memory capabilities to developers so that they can easily choose the most appropriate or convenient memory space for their data.
  • VOCL – Virtual OpenCL (VOCL) is an OpenCL implementation that enables OpenCL applications to seamlessly access remote OpenCL devices. This adds flexibility to cluster configurations, as VOCL exposes all of a cluster's OpenCL devices as a shared resource. The main goal of VOCL is to enable clusters equipped with fewer accelerators than nodes, thereby increasing accelerator utilization and reducing idle time.
  • GVR – Global View Resilience (GVR) is a new programming approach that exploits a global view data model (global naming of data, consistency, and distributed layout), adding reliability to globally visible distributed arrays. Global naming of distributed data yields programmability benefits that include simpler expression of algorithms and decoupling of computation and data structure across increasingly complex (irregular, variable, degraded) hardware. In the GVR programming model, applications can indicate reliability priorities, that is, which parts of their data are more important to protect, allowing the applications to manage reliability overheads. Because the distributed array abstraction is portable, GVR enables application programmers to manage reliability (and its overhead) in a flexible, portable fashion, tapping their deep scientific and application code insights.
  • Casper – Casper is a new process-based asynchronous progress model for MPI communication on multicore and many-core architectures. It is designed as a portable external library that can be transparently linked between any user application and MPI implementation through PMPI redirection. Casper allows users to set aside a small, user-specified number of cores on a multicore or many-core system as “ghost processes,” which are dedicated to driving asynchronous progress for user processes through appropriate memory mapping from those user processes.