mpi_io(3)							     mpi_io(3)

NAME
     mpi_io - Introduction to the ROMIO implementation of MPI I/O routines

DESCRIPTION
     The MPI I/O routines are based on the I/O interface defined in MPI-2, in
     which derived datatypes are used to express data partitioning.  The IRIX
     implementation is derived from the ROMIO 1.2.4 source code and contains
     all interfaces defined in the I/O chapter of the MPI-2 standard except
     shared file pointer functions (Sec. 9.4.4), split collective data access
     functions (Sec. 9.4.5), support for file interoperability (Sec. 9.5),
     I/O error handling (Sec. 9.7), and I/O error classes (Sec. 9.8).
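
     For example, a file can be partitioned among processes by describing
     each process's portion with a derived datatype and installing that
     datatype as the file view.  The following program is a minimal sketch of
     this pattern; the file name, block size, and block count are
     placeholders chosen for illustration only.

          #include <mpi.h>

          #define NBLOCKS  4           /* blocks per process (placeholder) */
          #define BLOCKLEN 256         /* doubles per block  (placeholder) */

          int main(int argc, char *argv[])
          {
              double       buf[NBLOCKS * BLOCKLEN];  /* assumed filled */
              int          rank, nprocs;
              MPI_File     fh;
              MPI_Datatype filetype;
              MPI_Offset   disp;
              MPI_Status   status;

              MPI_Init(&argc, &argv);
              MPI_Comm_rank(MPI_COMM_WORLD, &rank);
              MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

              /* Derived datatype describing this process's blocks, strided
                 so that the blocks of all processes interleave in the file */
              MPI_Type_vector(NBLOCKS, BLOCKLEN, BLOCKLEN * nprocs,
                              MPI_DOUBLE, &filetype);
              MPI_Type_commit(&filetype);
              disp = (MPI_Offset)rank * BLOCKLEN * sizeof(double);

              MPI_File_open(MPI_COMM_WORLD, "datafile",
                            MPI_MODE_CREATE | MPI_MODE_WRONLY,
                            MPI_INFO_NULL, &fh);
              MPI_File_set_view(fh, disp, MPI_DOUBLE, filetype,
                                "native", MPI_INFO_NULL);
              MPI_File_write_all(fh, buf, NBLOCKS * BLOCKLEN,
                                 MPI_DOUBLE, &status);
              MPI_File_close(&fh);
              MPI_Type_free(&filetype);
              MPI_Finalize();
              return 0;
          }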

   Limitations
     The MPI I/O routines have the following limitations:

     *	 Beginning with MPT 1.6, the status argument is set following all
	 read, write, MPIO_Test, and MPIO_Wait functions. Consequently,
	 MPI_Get_count and MPI_Get_elements will now work when passed the
	 status object from these operations.  Previously, they did not.

     *   Beginning with MPT 1.8, all nonblocking I/O functions use the same
         MPI_Request object that the message-passing functions use.
         Accordingly, you may mix requests from the two classes of functions
         in calls to MPI_Wait(), MPI_Test(), and their variants, as shown in
         the example following this list.  MPIO_Test() and MPIO_Wait(),
         which before MPT 1.8 could be used only with requests from
         nonblocking I/O functions, are no longer required, but they
         continue to exist for compatibility with applications built against
         older MPT releases.

     *	 All functions return only two possible error codes:  MPI_SUCCESS on
	 success and MPI_ERR_UNKNOWN on failure.

     *   End-of-file is not detected.  The individual file pointer is
         advanced by the requested amount of data rather than by the amount
         actually read.  Therefore, after end-of-file is reached,
         MPI_File_get_position(3) returns an incorrect offset.
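
     The following fragment is a minimal sketch of the MPT 1.8 behavior
     described above: a nonblocking file read and a nonblocking receive are
     completed by a single MPI_Waitall() call, and the number of elements
     actually read is then obtained with MPI_Get_count().  The buffer sizes,
     message tag, and source rank are placeholders, and fh is assumed to be
     a file handle already opened with MPI_File_open(3).

          MPI_Request reqs[2];
          MPI_Status  stats[2];
          int         iobuf[1000], msgbuf[100], nread;

          /* Nonblocking file I/O and nonblocking message passing use the
             same MPI_Request object and may be completed together.       */
          MPI_File_iread(fh, iobuf, 1000, MPI_INT, &reqs[0]);
          MPI_Irecv(msgbuf, 100, MPI_INT, MPI_ANY_SOURCE, 99,
                    MPI_COMM_WORLD, &reqs[1]);

          MPI_Waitall(2, reqs, stats);

          /* MPI_Get_count() accepts the status from the I/O operation
             (MPT 1.6 and later).                                         */
          MPI_Get_count(&stats[0], MPI_INT, &nread);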

   Direct I/O
     MPI I/O supports direct access to files stored in XFS filesystems.
     Direct access bypasses the system's buffer cache, which can improve
     performance in some specialized cases.  You can enable direct I/O for
     read and/or write operations by setting the corresponding environment
     variable, MPIO_DIRECT_READ or MPIO_DIRECT_WRITE, to the string "TRUE",
     as in the following example:

	  setenv MPIO_DIRECT_READ TRUE
	  setenv MPIO_DIRECT_WRITE TRUE

   List of Routines
     The MPI I/O routines are as follows:

	  MPI_File_c2f(3)
	  MPI_File_close(3)
	  MPI_File_delete(3)
	  MPI_File_f2c(3)
	  MPI_File_get_amode(3)
	  MPI_File_get_atomicity(3)
	  MPI_File_get_byte_offset(3)
	  MPI_File_get_group(3)
	  MPI_File_get_info(3)
	  MPI_File_get_position(3)
	  MPI_File_get_size(3)
	  MPI_File_get_type_extent(3)
	  MPI_File_get_view(3)
	  MPI_File_iread(3)
	  MPI_File_iread_at(3)
	  MPI_File_iwrite(3)
	  MPI_File_iwrite_at(3)
	  MPI_File_open(3)
	  MPI_File_preallocate(3)
	  MPI_File_read(3)
	  MPI_File_read_all(3)
	  MPI_File_read_at(3)
	  MPI_File_read_at_all(3)
	  MPI_File_seek(3)
	  MPI_File_set_atomicity(3)
	  MPI_File_set_info(3)
	  MPI_File_set_size(3)
	  MPI_File_set_view(3)
	  MPI_File_sync(3)
	  MPI_File_write(3)
	  MPI_File_write_all(3)
	  MPI_File_write_at(3)
	  MPI_File_write_at_all(3)
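
     As an illustration of the explicit-offset routines listed above, the
     following minimal sketch has each process read its own section of an
     existing file with MPI_File_read_at(3); the file name and element count
     are placeholders.

          MPI_File   fh;
          MPI_Status status;
          MPI_Offset offset;
          int        rank, buf[1000];

          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_File_open(MPI_COMM_WORLD, "datafile", MPI_MODE_RDONLY,
                        MPI_INFO_NULL, &fh);
          offset = (MPI_Offset)rank * 1000 * sizeof(int);
          MPI_File_read_at(fh, offset, buf, 1000, MPI_INT, &status);
          MPI_File_close(&fh);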
