Euro PVM/MPI 2003
Venice, Italy, September-October 2003
Day 1. Tutorial: High Level Programming with MPI.
A basic introduction to MPI programming followed by details of the more recent developments in MPICH.
Part 1: Basic Introduction to MPI/MPICH.
- Collective communications: the 'V' versions of functions (e.g. MPI_Scatterv, MPI_Gatherv) allow a different size for each chunk to be distributed or collected.
- Collective operations: MPI_MAXLOC and MPI_MINLOC are useful for finding which process holds the maximum or minimum value.
- Collective operations: possible to define your own operations for use in collectives - very useful.
- Collective operations: a built-in collective such as MPI_Bcast is not always fastest; hand-coded 'pipelining' of messages can be quicker.
- MPI 2: One-sided comms: MPI_Put, MPI_Get and locks, managed through the 'MPI_Win' functions.
- MPI 2: New C++ and F90 bindings.
- MPI 2: Parallel I/O. Analogous to sending and receiving.
- MPI 2: Thread safety and the ability to spawn processes dynamically.
- MPI 2: Download and more information from www.mcs.anl.gov/mpi/mpich2.
- Jumpshot: useful profiler and viewer for parallel codes.
Part 2: MPI Libraries.
- I/O libraries: netCDF and HDF5 are the most popular.
- PnetCDF: parallel library for parallel I/O.
- Used in the SciDAC scientific data management effort.
- FLASH astro code uses HDF5.
- Website: Parallel-netcdf at Argonne Labs.
- PETSc Library: for parallel solution of systems of equations arising from discretisation of PDEs.
- Linear, non-linear and time evolution.
- Includes routines for sparse matrices, distributed arrays, scatter/gather on unstructured grid.
- Many numerical components.
- Facility for local and global representations of a parallelised grid.
- Others: ScaLAPACK, FFTW, HDF5 ....
- Website: Parallel Libraries at Argonne Labs.
Day 2. MPI and the Grid.
First Session: Invited Talks.
Talk 1: Messaging Systems: Parallel Computing, the Internet and the Grid.
Talk 2: Progress towards Petascale Virtual Machines.