"Fossies" - the Fresh Open Source Software Archive

Member "openmpi-3.1.6/README.JAVA.txt" (18 Mar 2020, 10694 Bytes) of package /linux/misc/openmpi-3.1.6.tar.bz2:


As a special service "Fossies" has tried to format the requested text file into HTML format (style: standard) with prefixed line numbers. Alternatively you can here view or download the uninterpreted source code file.

***************************************************************************
IMPORTANT NOTE

JAVA BINDINGS ARE PROVIDED ON A "PROVISIONAL" BASIS - I.E., THEY ARE
NOT PART OF THE CURRENT OR PROPOSED MPI STANDARDS. THUS, INCLUSION OF
JAVA SUPPORT IS NOT REQUIRED BY THE STANDARD. CONTINUED INCLUSION OF
THE JAVA BINDINGS IS CONTINGENT UPON ACTIVE USER INTEREST AND
CONTINUED DEVELOPER SUPPORT.

***************************************************************************

This version of Open MPI provides support for Java-based
MPI applications.

The rest of this document provides step-by-step instructions on
building Open MPI with Java bindings, and on compiling and running
Java-based MPI applications. Part of the functionality is also
illustrated with examples. Further details about the design,
implementation, and usage of the Java bindings in Open MPI can be
found in [1]. The bindings follow a JNI approach; that is, we do not
provide a pure Java implementation of MPI primitives, but rather a
thin layer on top of the C implementation. This is the same approach
as in mpiJava [2]; in fact, mpiJava was taken as a starting point
for the Open MPI Java bindings, but they were later completely
rewritten.

 [1] O. Vega-Gisbert, J. E. Roman, and J. M. Squyres. "Design and
     implementation of Java bindings in Open MPI". Parallel Comput.
     59: 1-20 (2016).

 [2] M. Baker et al. "mpiJava: An object-oriented Java interface to
     MPI". In Parallel and Distributed Processing, LNCS vol. 1586,
     pp. 748-762, Springer (1999).

============================================================================

Building Java Bindings

If this software was obtained as a developer-level checkout as
opposed to a tarball, you will need to start your build by running
./autogen.pl. This will also require that you have a fairly recent
version of autotools on your system - see the HACKING file for
details.
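
For example, for a developer checkout the build would start with
something like the following (the configure options are discussed
below):

$ ./autogen.pl
$ ./configure --enable-mpi-java ...
$ make all install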

Java support requires that Open MPI be built at least with shared libraries
(i.e., --enable-shared) - any additional options are fine and will not
conflict. Note that this is the default for Open MPI, so you don't
have to explicitly add the option. The Java bindings will build only
if --enable-mpi-java is specified, and a JDK is found in a typical
system default location.

If the JDK is not in a place where we automatically find it, you can
specify the location. For example, this is required on the Mac
platform as the JDK headers are located in a non-typical location. Two
options are available for this purpose:

--with-jdk-bindir=<foo> - the location of javac and javah
--with-jdk-headers=<bar> - the directory containing jni.h

For simplicity, typical configurations are provided in platform files
under contrib/platform/hadoop. These will meet the needs of most
users, or at least provide a starting point for your own custom
configuration.

In summary, you can configure the system using the following
Java-related options:

$ ./configure --with-platform=contrib/platform/hadoop/<your-platform>
...

or

$ ./configure --enable-mpi-java --with-jdk-bindir=<foo>
              --with-jdk-headers=<bar> ...

or simply

$ ./configure --enable-mpi-java ...

if the JDK is in a "standard" place that we automatically find.

----------------------------------------------------------------------------

Running Java Applications

For convenience, the "mpijavac" wrapper compiler has been provided for
compiling Java-based MPI applications. It ensures that all required MPI
libraries and class paths are defined. You can see the actual command
line using the --showme option, if you are interested.
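
For example, to compile the ComputePi program shown later in this
document (the file name here is just illustrative):

$ mpijavac ComputePi.java

Adding --showme to the command line prints the underlying javac
command line instead of executing it.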

Once your application has been compiled, you can run it with the
standard "mpirun" command line:

$ mpirun <options> java <your-java-options> <my-app>

For convenience, mpirun has been updated to detect the "java" command
and ensure that the required MPI libraries and class paths are defined
to support execution. You therefore do NOT need to specify the Java
library path to the MPI installation, nor the MPI classpath. Any class
path definitions required for your application should be specified
either on the command line or via the CLASSPATH environment
variable. Note that the local directory will be added to the class
path if nothing is specified.
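
For example, assuming the compiled ComputePi class from the example
later in this document is in the current directory (the process count
is arbitrary):

$ mpirun -np 4 java ComputePi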

As always, the "java" executable, all required libraries, and your
application classes must be available on all nodes.

----------------------------------------------------------------------------

Basic usage of Java bindings

There is an MPI package that contains all classes of the MPI Java
bindings: Comm, Datatype, Request, etc. These classes have a direct
correspondence with the classes defined by the MPI standard, and MPI
primitives are simply methods of these classes. Java methods and
classes follow the usual camel-case naming convention; e.g., the
equivalent of MPI_File_set_info(fh,info) is fh.setInfo(info), where
fh is an object of the class File.

Apart from classes, the MPI package contains predefined public
attributes under a convenience class MPI. Examples are the predefined
communicator MPI.COMM_WORLD and predefined datatypes such as
MPI.DOUBLE. Also, MPI initialization and finalization are methods of
the MPI class and must be invoked by all MPI Java applications. The
following example illustrates these concepts:

import mpi.*;

class ComputePi {

    public static void main(String args[]) throws MPIException {

        MPI.Init(args);

        int rank = MPI.COMM_WORLD.getRank(),
            size = MPI.COMM_WORLD.getSize(),
            nint = 100; // Intervals.
        double h = 1.0/(double)nint, sum = 0.0;

        // Each rank sums its share of the midpoint-rule approximation
        // of the integral of 4/(1+x^2) over [0,1].
        for(int i=rank+1; i<=nint; i+=size) {
            double x = h * ((double)i - 0.5);
            sum += (4.0 / (1.0 + x * x));
        }

        double sBuf[] = { h * sum },
               rBuf[] = new double[1];

        // Combine the partial sums on rank 0.
        MPI.COMM_WORLD.reduce(sBuf, rBuf, 1, MPI.DOUBLE, MPI.SUM, 0);

        if(rank == 0) System.out.println("PI: " + rBuf[0]);
        MPI.Finalize();
    }
}

----------------------------------------------------------------------------

Exception handling

The Java bindings in Open MPI support exception handling. By default,
errors are fatal, but this behavior can be changed. The Java API will
throw exceptions if the MPI.ERRORS_RETURN error handler is set:

    MPI.COMM_WORLD.setErrhandler(MPI.ERRORS_RETURN);

If you add this statement to your program, an MPI error will raise an
exception that shows where the failure occurred, instead of simply
aborting the application. Error-handling code can be separated from
the main application code by means of try-catch blocks, for instance:

    try
    {
        File file = new File(MPI.COMM_SELF, "filename", MPI.MODE_RDONLY);
    }
    catch(MPIException ex)
    {
        System.err.println("Error Message: "+ ex.getMessage());
        System.err.println("  Error Class: "+ ex.getErrorClass());
        ex.printStackTrace();
        System.exit(-1);
    }

----------------------------------------------------------------------------

How to specify buffers

In MPI primitives that require a buffer (either send or receive), the
Java API accepts a Java array. Since Java arrays can be relocated by
the Java runtime environment, the MPI Java bindings need to copy the
contents of the array into a temporary buffer and then pass the
pointer to this buffer to the underlying C implementation. In
practical terms, this implies an overhead associated with all buffers
that are represented by Java arrays. The overhead is small for small
buffers but increases for large arrays.

There is a pool of temporary buffers with a default capacity of 64K.
If a temporary buffer of 64K or less is needed, then the buffer will
be obtained from the pool. But if the buffer is larger, then it will
be necessary to allocate the buffer and free it later.

The default capacity of pool buffers can be modified with an MCA
parameter:

    mpirun --mca mpi_java_eager size ...

where 'size' is the number of bytes, or kilobytes if it ends with 'k',
or megabytes if it ends with 'm'.
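
For example, to raise the pool buffer capacity to 128 kilobytes (the
value and process count here are only illustrative):

$ mpirun --mca mpi_java_eager 128k -np 4 java ComputePi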

An alternative is to use "direct buffers" provided by standard
classes available in the JDK, such as java.nio.ByteBuffer. For
convenience, we provide a few static methods "new[Type]Buffer" in the
MPI class to create direct buffers for a number of basic datatypes.
Elements of the direct buffer can be accessed with the methods put()
and get(), and the number of elements in the buffer can be obtained
with the method capacity(). The following example illustrates their
use:

    // MAXLEN is assumed to be a positive integer constant.
    int myself = MPI.COMM_WORLD.getRank();
    int tasks  = MPI.COMM_WORLD.getSize();

    IntBuffer in  = MPI.newIntBuffer(MAXLEN * tasks),
              out = MPI.newIntBuffer(MAXLEN);

    for(int i = 0; i < MAXLEN; i++)
        out.put(i, myself);      // fill the buffer with the rank

    Request request = MPI.COMM_WORLD.iAllGather(
                      out, MAXLEN, MPI.INT, in, MAXLEN, MPI.INT);
    request.waitFor();
    request.free();

    // Verify that each rank contributed its own rank value.
    for(int i = 0; i < tasks; i++)
    {
        for(int k = 0; k < MAXLEN; k++)
        {
            if(in.get(k + i * MAXLEN) != i)
                throw new AssertionError("Unexpected value");
        }
    }

Direct buffers are available for: BYTE, CHAR, SHORT, INT, LONG,
FLOAT, and DOUBLE. There is no direct buffer for booleans.

Direct buffers are not a replacement for arrays, because they have
higher allocation and deallocation costs than arrays. In some
cases arrays will be a better choice. You can easily convert a
buffer into an array and vice versa.
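
As a sketch of such a conversion (the names and length here are just
illustrative, and java.nio.IntBuffer is assumed to be imported), an
array can be copied into a direct buffer with put() and copied back
with get():

    int len = 100;                 // illustrative length
    int data[] = new int[len];
    IntBuffer buf = MPI.newIntBuffer(len);

    buf.put(data);                 // copy the array into the buffer
    buf.rewind();                  // reset the position before reading back
    buf.get(data);                 // copy the buffer back into the array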

All non-blocking methods must use direct buffers; only blocking
methods can choose between arrays and direct buffers.

The iAllGather example above also illustrates that it is necessary to
call the free() method on objects whose class implements the Freeable
interface. Otherwise, a memory leak is produced.

----------------------------------------------------------------------------

Specifying offsets in buffers

In a C program, it is common to specify an offset in an array with
"&array[i]" or "array+i", for instance to send data starting from
a given position in the array. The equivalent form in the Java bindings
is to "slice()" the buffer to start at an offset. Making a "slice()"
on a buffer is only necessary when the offset is not zero. Slices
work for both arrays and direct buffers.

    import static mpi.MPI.slice;
    ...
    int numbers[] = new int[SIZE];
    ...
    MPI.COMM_WORLD.send(slice(numbers, offset), count, MPI.INT, 1, 0);

----------------------------------------------------------------------------

If you have any problems, or find any bugs, please feel free to report
them to the Open MPI users' mailing list (see
http://www.open-mpi.org/community/lists/ompi.php).