
Intertask Communication

All PVM intertask communication calls have counterparts in MPI, except pvm_mcast() and pvm_trecv(). You can replace multicasting in the PVM library with multicasting at the application layer, either as a set of point-to-point send calls or by defining a group and broadcasting within that group. Similarly, you can replace a timed receive in the PVM library with an equivalent function at the application layer.
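
Both replacements can be built from standard MPI point-to-point calls. The following is a minimal sketch in C; the function names app_mcast() and app_trecv() and their argument lists are illustrative, not part of MPI.

    #include <mpi.h>

    /* Emulate pvm_mcast(): send the same message to an explicit list of ranks. */
    static void app_mcast(void *buf, int count, MPI_Datatype type,
                          const int *dests, int ndests, int tag, MPI_Comm comm)
    {
        for (int i = 0; i < ndests; i++)
            MPI_Send(buf, count, type, dests[i], tag, comm);
    }

    /* Emulate pvm_trecv(): poll with MPI_Iprobe() until a matching message
       arrives or the timeout (in seconds) expires.  Returns 0 on success,
       -1 on timeout. */
    static int app_trecv(void *buf, int count, MPI_Datatype type,
                         int source, int tag, MPI_Comm comm,
                         double timeout, MPI_Status *status)
    {
        int flag = 0;
        double start = MPI_Wtime();

        while (!flag) {
            MPI_Iprobe(source, tag, comm, &flag, status);
            if (!flag && MPI_Wtime() - start > timeout)
                return -1;                 /* timed out, like pvm_trecv() */
        }
        MPI_Recv(buf, count, type, status->MPI_SOURCE, status->MPI_TAG,
                 comm, status);
        return 0;
    }

Note that this timed receive busy-waits; a production version might sleep briefly between probes.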

Some PVM collective communication calls, namely pvm_gather() and pvm_reduce(), are nonblocking, whereas their MPI counterparts are blocking. This difference requires no change to the application code unless the PVM application places explicit synchronization calls (for example, pvm_barrier()) after such nonblocking calls. In that case, you can remove the now-redundant synchronization calls from the translated MPI program.
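
For example, MPI_Reduce() is blocking, so a barrier that a PVM program placed after pvm_reduce() to force completion is unnecessary in the translated code. A minimal sketch (the function and variable names are illustrative):

    #include <mpi.h>

    void sum_to_root(double *local, double *global, int n, MPI_Comm comm)
    {
        /* Blocking collective: when this call returns, this rank's
           contribution is complete and the root (rank 0) holds the sum.
           No MPI_Barrier(comm) is needed afterwards. */
        MPI_Reduce(local, global, n, MPI_DOUBLE, MPI_SUM, 0, comm);
    }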

To send contiguous data of a given type, MPI does not require packing and unpacking of data into send buffers, as PVM does. For noncontiguous data, MPI provides derived datatypes that likewise avoid explicit packing and unpacking. MPI also retains pack/unpack functions (MPI_Pack() and MPI_Unpack()) for sending noncontiguous data, mainly for compatibility with existing library code.
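
For example, a single column of a row-major matrix can be sent in place with a derived datatype instead of being packed into a contiguous buffer first. This is a minimal sketch; the array dimensions and message tag are illustrative.

    #include <mpi.h>

    #define NROWS 4
    #define NCOLS 5

    void send_column(double a[NROWS][NCOLS], int col, int dest, MPI_Comm comm)
    {
        MPI_Datatype column;

        /* One column: NROWS blocks of one double, with a stride of NCOLS
           doubles between consecutive elements. */
        MPI_Type_vector(NROWS, 1, NCOLS, MPI_DOUBLE, &column);
        MPI_Type_commit(&column);

        MPI_Send(&a[0][col], 1, column, dest, 0, comm);

        MPI_Type_free(&column);
    }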

The functionality of PVM's multiple message buffers can be emulated in MPI with communicators, each of which provides a separate communication context.
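
For instance, where a PVM code switched between message buffers to keep unrelated traffic apart, an MPI code can duplicate the communicator so that each use has its own context. A minimal sketch (the variable names are illustrative):

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Comm app_comm, lib_comm;

        MPI_Init(&argc, &argv);

        /* Two contexts over the same group of processes: a message sent on
           lib_comm can never be matched by a receive posted on app_comm. */
        MPI_Comm_dup(MPI_COMM_WORLD, &app_comm);
        MPI_Comm_dup(MPI_COMM_WORLD, &lib_comm);

        /* ... application traffic on app_comm, library traffic on lib_comm ... */

        MPI_Comm_free(&app_comm);
        MPI_Comm_free(&lib_comm);
        MPI_Finalize();
        return 0;
    }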

