appschema man page on YellowDog

APPSCHEMA(5)		       LAM FILE FORMATS			  APPSCHEMA(5)

NAME
       appschema - LAM application schema format

SYNTAX
       #
       # comments
       #
       [<where>] [-np #] [-s <where>] [-wd <dir>] [-x <env>] <program> [<args>]
       [<where>] [-np #] [-s <where>] [-wd <dir>] [-x <env>] <program> [<args>]
	...

DESCRIPTION
       The application schema is an ASCII file containing a description of the
       programs which constitute an application.  It is used by mpirun(1),
       MPI_Comm_spawn, and MPI_Comm_spawn_multiple to start an MPI application
       (the MPI_Info key "file" can be used to specify an app schema to
       MPI_Comm_spawn and MPI_Comm_spawn_multiple).  All tokens after the
       program name will be passed as command line arguments to the new
       processes.  Ordering of the other elements on the command line is not
       important.

       The meaning of the options is the same as in mpirun(1).  See the
       mpirun(1) man page for a lengthy discussion of the nomenclature used
       for <where>.  Note, however, that if -wd is used in the application
       schema file, it will override any -wd value specified on the command
       line.

       For each program line, processes will be created on LAM nodes according
       to the presence of <where> and the process count option (-np).

       only <where>  One process is created on each specified node.

       only -np      The specified number of processes are scheduled across
                     all LAM nodes/CPUs.

       both          The specified number of processes are scheduled across
                     the specified nodes/CPUs.

       neither       One process is created on the local node.
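
       The four cases can be illustrated with the following hypothetical
       schema lines (the program names "foo" and "bar" are placeholders; the
       node/CPU syntax for <where> is described in mpirun(1)):

              # only <where>: one foo process on each of nodes n0 and n1
              n0-1 foo
              # only -np: four bar processes scheduled across all nodes/CPUs
              -np 4 bar
              # both: four bar processes scheduled across nodes n0 and n1
              n0-1 -np 4 bar
              # neither: one foo process on the local node
              foo

       Note that comments must occupy their own lines, since all tokens after
       a program name are passed to the new processes as arguments.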

   Program Transfer
       By default, LAM searches for executable programs on the target node
       where a particular instantiation will run.  If the file system is not
       shared, the target nodes are homogeneous, and the program is frequently
       recompiled, it can be convenient to have LAM transfer the program from
       a source node (usually the local node) to each target node.  The -s
       option specifies this behaviour and identifies the single source node.
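
       For example, the following hypothetical schema line starts eight
       "worker" processes across all available CPUs, transferring the
       executable from node n0 to each target node first:

              # send the worker program from n0 before starting it
              C -np 8 -s n0 worker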

EXAMPLE
       #
       # Example application schema
       # Note that it may be necessary to specify the entire pathname for
       # "master" and "slave" if you get "File not found" errors from
       # mpirun(1).
       #
       # This schema starts a "master" process on CPU 0 with the argument
       # "42.0", and then 10 "slave" processes (that are all sent from the
       # local node) scheduled across all available CPUs.
       #
       c0 master 42.0
       C -np 10 -s h slave
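
       Assuming this schema is saved in a file named "appfile", the
       application could then be launched by passing the file name to
       mpirun(1):

              % mpirun appfile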

SEE ALSO
       mpirun(1), MPI_Comm_spawn(2), MPI_Comm_spawn_multiple(2),
       MPIL_Spawn(2), introu(1)

LAM 7.1.2			  March, 2006			  APPSCHEMA(5)