
MPI implementation

Technical details about the implementation

Sending and receiving data

The main difficulty in implementing MPI for a scripting language comes from the language's native characteristics: scripting languages, by nature, use dynamic variables of varying types and sizes.

The Scilab implementation of MPI uses an internal serialization and deserialization process to the MPI datatype MPI_INT. The send functions (MPI_Send, MPI_Isend, MPI_Bcast, etc.) convert all supported datatypes to MPI_INT, while the receive functions (MPI_Recv, MPI_Irecv, etc.) restore the original variables.
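For example, sending a matrix requires no explicit packing: the conversion to MPI_INT happens internally, and the receiver gets the original variable back. A minimal sketch of this, assuming the MPI_Send(var, rank) / MPI_Recv(rank) calling convention used in the Scilab MPI examples:

MPI_Init();
rnk = MPI_Comm_rank();

if rnk == 0 then
    A = [1 2 3; 4 5 6];   // a dynamic Scilab double matrix
    MPI_Send(A, 1);       // serialized to MPI_INT internally
else
    B = MPI_Recv(0);      // deserialized back to the original 2x3 matrix
    assert_checkequal(B, [1 2 3; 4 5 6]);
end

MPI_Finalize();
exit()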

Mirroring their memory representation in the Scilab 5 family, variables are serialized as follows:

Double, Boolean, String (?)

  • Type
  • Number of rows
  • Number of columns
  • Complex (if relevant)
  • Data

Integer

  • Type
  • Number of rows
  • Number of columns
  • Precision
  • Data

Sparse (double or boolean)

  • Type
  • Number of rows
  • Number of columns
  • Complex
  • Number of items
  • Data
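As an illustration only (the real encoding is internal to the Scilab C layer and not exposed at script level; the numeric type code below is a made-up placeholder, not the actual internal value), the header preceding the payload of a real double matrix can be pictured as:

// Illustrative only: hypothetical header of a serialized 2x3 real matrix.
// The type code and the packing of the payload into MPI_INT words are
// assumptions, not the actual internals.
typeCode  = 1;                           // placeholder tag for "double"
rows      = 2;
cols      = 3;
isComplex = 0;                           // real matrix: no imaginary part
header    = [typeCode, rows, cols, isComplex];
disp(header);                            // Type, rows, cols, complex flag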

Other potential solutions were considered but rejected for various reasons:

  • Defining a new MPI datatype: requires knowing the size of the variable a priori.
  • One send for the metadata (size, type) and one send for the data: besides decreasing performance, this would strongly complicate the code.

Asynchronous exchanges

Because the standard behavior of MPI_Irecv and MPI_Isend does not match the usual way of working in the Scilab language, MPI_Wait returns a value in the Scilab MPI binding.

To store the list of requests and the MPI variables expected by MPI_Irecv/MPI_Wait, static C structures holding at most 10,000 elements are used. They store the MPI_Request handles used for asynchronous exchanges and the reference to the variable expected by MPI_Irecv. The received value is then returned by MPI_Wait.

In the following example, the request identified by 42 is stored in this data structure.

MPI_Init();
rnk = MPI_Comm_rank();
sizeNodes = MPI_Comm_size();

SLV = rnk;       // on two nodes, rank 1 is the slave...
Master = ~SLV;   // ...and rank 0 is the master

// This example is written for exactly two nodes
assert_checkequal(MPI_Comm_size(), 2);

if Master
    for slaveId = 1:sizeNodes-1
        value = slaveId * 2;
        MPI_Isend(value, slaveId, 42); // non-blocking send, request id 42
    end
else
    rankSource = 0;
    tag = 0;
    MPI_Irecv(rankSource, tag, 42); // MPI_Irecv does not return any value
    value = MPI_Wait(42);           // the value is returned by MPI_Wait
    assert_checkequal(value, 2);
end

MPI_Finalize();
exit()
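Note that this script expects exactly two nodes (the assert_checkequal on MPI_Comm_size enforces it), so it would typically be launched with something like mpirun -n 2.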

