Prokopalo E. T., Segeda N. E., Kaliberda N. I.

Oles Honchar Dnipropetrovsk National University


Many of those who are seriously engaged in programming have certainly run into tasks whose solution with the known algorithms takes an enormous amount of time. Even the most modern and powerful computers need weeks, months, or even years to execute the enormous number of calculations required to solve such problems.

In these cases the so-called parallel programming comes to the programmer's help. The methods of parallel programming allow us to distribute the necessary calculations between the processes of different computers, thereby uniting their computing power to work on one problem. MPI is now the most widespread programming technology for parallel computers with distributed memory. The basic method of interaction between parallel processes in such systems is passing messages to one another, which is reflected in the name of the technology: Message Passing Interface. The MPI standard fixes an interface that must be observed both by the programming system on every computing platform and by the user when creating programs. Most modern implementations conform to version 1.1 of the standard. The MPI-2.0 standard, which considerably extends the functionality of the previous version, appeared in 1997-1998. However, this variant of MPI has not yet gained wide distribution and is not fully implemented by any single system.

MPI supports work with the Fortran and C languages. However, knowledge of both languages is not essential, as the basic ideas of MPI and the rules for writing individual constructions are largely similar in the two languages. The complete version of the interface contains descriptions of more than 125 procedures and functions. Our task is to describe the idea of the technology.

The MPI interface supports the creation of parallel programs in the MIMD (Multiple Instruction Multiple Data) style, which implies combining processes with different source codes. However, writing and debugging such programs is very difficult, so in practice programmers much more often use the SPMD (Single Program Multiple Data) model of parallel programming, in which the same code is used by all parallel processes. Nowadays more and more implementations of MPI support work with threads.
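The SPMD model can be illustrated with a minimal sketch (the file name hello.c and the exact output format are assumptions; running it requires an installed MPI implementation). Every process executes the same code, but learns its own number (rank) and can act accordingly:

```c
/* hello.c: a minimal SPMD sketch. Every process runs the same program,
   but MPI_Comm_rank tells each one its own number. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);                 /* enter the parallel part   */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's number     */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

    printf("Process %d of %d\n", rank, size);

    MPI_Finalize();                         /* leave the parallel part   */
    return 0;
}
```

Started on four processes, each process prints its own line, with ranks from 0 to 3 in an arbitrary order.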

Since MPI is a library, the corresponding library modules must be linked in when the program is compiled. This can be done on the command line, or one of the usually provided wrapper commands or scripts can be used, for example mpiCC for programs in C++. The compiler option "-o name" sets the name of the resulting executable file; by default the executable is named a.out. For example:

mpiCC -o program program.cpp

After the executable file is obtained, it must be started on the required number of processors. For this purpose the MPI application launcher mpirun is usually used, for example:

mpirun -np N <program with arguments>,

where N is the number of processes, which must not exceed the number of processes allowed in the given system for one task. After the start, the same program is executed by all started processes; the output, depending on the system, is printed to a terminal or written to a file with a predefined name.

All additional objects used in MPI — the names of procedures, constants, predefined data types, and so on — carry the prefix MPI_. If the user does not use names with this prefix in the program, conflicts with MPI objects will certainly not arise. In C, moreover, the case of letters in function names is significant. Usually in MPI function names the first letter after the MPI_ prefix is uppercase and the subsequent letters are lowercase, while the names of MPI constants are written entirely in uppercase. All interface specifications are collected in the file mpi.h (mpif.h for Fortran), so the directive #include "mpi.h" must appear at the beginning of an MPI program.

An MPI program is a set of parallel interacting processes. All processes are spawned once, forming the parallel part of the program. During the execution of an MPI program, creating additional processes or eliminating existing ones is not allowed (this possibility appeared in MPI-2.0). Each process works in its own address space; there are no shared variables or data in MPI. The fundamental method of cooperation between processes is explicit message passing.
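Explicit message passing can be sketched as follows (a minimal illustration, not a complete application; it assumes the program is started with at least two processes, e.g. mpirun -np 2):

```c
/* Sketch of explicit message passing: process 0 sends one integer to
   process 1, which receives and prints it. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, value;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        /* send 1 integer to rank 1 with message tag 0 */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* receive 1 integer from rank 0 with tag 0 */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        printf("Process 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}
```

Note that since the processes share no variables, the only way the value reaches process 1 is through the MPI_Send/MPI_Recv pair.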

To localize the interaction of parallel processes, groups of processes can be created and provided with a separate communication environment called a communicator. The composition of the formed groups is arbitrary: groups can coincide completely, be nested one inside another, be disjoint, or intersect partially. Processes can cooperate only inside some communicator; messages sent in different communicators do not intersect and do not interfere with each other. In C, communicators have the predefined type MPI_Comm.

At the start of the program it is always assumed that all spawned processes work within the all-embracing communicator with the predefined name MPI_COMM_WORLD. This communicator always exists and serves all started processes of the MPI program. In addition, at program start there exists the communicator MPI_COMM_SELF, containing only the current process, as well as the communicator MPI_COMM_NULL, containing no processes at all. All interprocess communication takes place within a certain communicator; messages transmitted in different communicators do not interfere with each other.
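Creating such groups can be sketched with MPI_Comm_split, which divides an existing communicator into sub-communicators (the division by rank parity here is just an illustrative assumption):

```c
/* Sketch: splitting MPI_COMM_WORLD into two sub-communicators by the
   parity of the rank. Messages inside each half do not interfere with
   messages in MPI_COMM_WORLD or in the other half. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int world_rank, sub_rank;
    MPI_Comm half;              /* communicators have type MPI_Comm */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* processes with the same "color" end up in the same new group */
    MPI_Comm_split(MPI_COMM_WORLD, world_rank % 2, world_rank, &half);
    MPI_Comm_rank(half, &sub_rank);

    printf("World rank %d has rank %d in its half\n", world_rank, sub_rank);

    MPI_Comm_free(&half);
    MPI_Finalize();
    return 0;
}
```

Each process then holds two ranks: its global number in MPI_COMM_WORLD and its local number within its own half.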

Such are the basic concepts and working principles of the MPI technology, which, given a sufficient number of computers, allow many computational tasks to be solved quickly.