lucreate(1M)		System Administration Commands		  lucreate(1M)

NAME
       lucreate - create a new boot environment

SYNOPSIS
       /usr/sbin/lucreate [-A BE_description] [-c BE_name]
	    [-C ( boot_device | - )] -n BE_name
	    [-D file_system]
	    [-f exclude_list_file] [-I] [-l error_log]
	    [-o outfile] [-s ( - | source_BE_name )]
	    [ [-M slice_list_file [-M]...]
	    [-m mount_point:device [,volume]:fs_options[:zonename] [-m...]]] |
	    [-P][-p zfs_root_pool]
	    [-x exclude [-x]...] [-X] [-y include [-y]...]
	    [-Y include_list_file] [-z filter_list]

DESCRIPTION
       The  lucreate  command  is part of a suite of commands that make up the
       Live  Upgrade  feature  of  the	Solaris	 operating  environment.   See
       live_upgrade(5)	for  a description of the Live Upgrade feature and its
       associated terminology.

       The lucreate command offers a set of command line options  that	enable
       you to perform the following functions:

	   o	  Create a new boot environment (BE), based on the current BE.

	   o	  Create a new BE, based on a BE other than the current BE.

	   o	  Join or separate the file systems of a BE onto a new BE. For
		  example, join /var and  /opt	under  /,  or  separate	 these
		  directories to be mounted under different disk slices.

	   o	  Specify separate file systems belonging to a particular zone
		  inside of the new BE. (See zones(5).)

	   o	  Create the file systems for a BE, but leave those file  sys‐
		  tems unpopulated.

       If  lucreate  is	 invoked  without the -m, -M, or -p options (described
       below), it brings up an FMLI-based interface that provides curses-based
       screens	for  Live  Upgrade  administration.  Note  that the FMLI-based
       interface does not support all of the Live Upgrade  features  supported
       by  the command-line version of lucreate. Also, Sun is not committed to
       ongoing development of the FMLI-based interface.

       With the -p option, lucreate supports the creation of BEs on  ZFS  file
       systems. The source BE can be a UFS root file system on a disk slice or
       a ZFS file system in an existing ZFS storage pool. lucreate provides  a
       convenient means of migrating a BE from a UFS root file system to a ZFS
       root file system. You cannot create a BE on a UFS file  system  from  a
       source BE on a ZFS file system.

       The  creation  of a BE includes selecting the disk or device slices for
       all the mount points of the BE. Slices can be physical disks or logical
       devices,	 such  as  Solaris Volume Manager volumes. You can also change
       the mount points of the BE using the SPLIT and MERGE functions  of  the
       FMLI-based configuration screen.

       Upon  successful creation of a BE, you can use lustatus(1M) to view the
       state of that BE and lufslist(1M) to view the BE's  file	 systems.  You
       can  use	 luupgrade(1M) to upgrade the OS on that BE and luactivate(1M)
       to make a BE active, that is, designate it as the BE to	boot  from  at
       the next reboot of the system.

       Note -

	 Before	 booting  a new BE, you must run luactivate to specify that BE
	 as active. luactivate performs a number of tasks that ensure  correct
	 operation  of the BE. In some cases, a BE is not bootable until after
	 you have run the command. See luactivate(1M) for a list of the opera‐
	 tions performed by that command.

       The  lucreate command makes a distinction between the file systems that
       contain the OS—/, /usr, /var, and /opt—and those that do not,  such  as
       /export,	 /home, and other, user-defined file systems. The file systems
       in the first category cannot be shared between the source BE and the BE
       being  created; they are always copied from the source BE to the target
       BE. By contrast, the user-defined file systems are shared  by  default.
       For  Live  Upgrade  purposes,  the file systems that contain the OS are
       referred to as non-shareable (or critical)  file	 systems;  other  file
       systems	are  referred  to  as  shareable.  A non-shareable file system
       listed in the source BE's vfstab is copied to a new BE. For a shareable
       file  system,  if  you  specify a destination slice, the file system is
       copied. If you do not, the file system is shared.

       When migrating from a UFS-based	BE  to	a  ZFS-based  BE,  you	cannot
       migrate	shared UFS file systems to ZFS. Also, when the source and des‐
       tination BEs are both ZFS-based, you cannot copy shared	file  systems.
       Such file systems can only be shared.

       The lucreate command copies all non-global zones from the current BE to
       the BE being created. For non-global zones  residing  in	 a  non-shared
       file  system, the new BE gets a copy of the zone in its non-shared file
       system. For non-global zones residing in a shared file system, lucreate
       makes  a copy of the zone for the new BE in that shared file system and
       uses a different zonepath (see zoneadm(1M)) for the zone. The  zonepath
       used  is of the form zonepath-newBE. This prevents BEs from sharing the
       same non-global zone in the shared file system. When the	 new  BE  gets
       booted,	the zone in the shared file system belonging to the new BE has
       its zonepath renamed to zonepath and the zone in the shared file system
       belonging  to  the  original  BE	 has its zonepath renamed to zonepath-
       origBE.

       If a zone exists in a non-shared file system, the zone is automatically
       copied  when  the  UFS  root file system is migrated to a ZFS root file
       system. If a zone exists in a shared UFS file system, to migrate	 to  a
       ZFS  root  file system, you must first upgrade the zone, as in previous
       Solaris releases. A zone in a non-shared file system within a ZFS BE is
       cloned when upgrading to a ZFS BE within the same ZFS pool.

       The  lucreate  command supports a limited subset of Solaris Volume Man‐
       ager functions. In particular, using lucreate with the -m  option,  you
       can:

	   o	  Create a mirror.

	   o	  Detach existing Solaris Volume Manager concatenations from
		  mirrors. Similarly, you can attach existing Solaris Volume
		  Manager concatenations to mirrors. These can be mirrors
		  that were created in Solaris Volume Manager or those
		  created by lucreate.

	   o	  Create a single-slice concatenation and attach a single disk
		  slice to it.

	   o	  Detach a single disk slice from a single-slice
		  concatenation.

	   o	  Attach multiple single-slice	concatenations	to  a  mirror.
		  lucreate  can	 attach as many of these concatenations as are
		  allowed by Solaris Volume Manager.

       lucreate does not allow you to attach multiple disk slices or  multiple
       storage devices to a concatenation. Similarly, it does not allow you to
       detach multiple slices or devices from a concatenation.

       If you use Solaris Volume Manager volumes for boot environments, it  is
       recommended  that  you  use lucreate rather than Solaris Volume Manager
       commands to manipulate these volumes. The Solaris Volume Manager	 soft‐
       ware  has  no knowledge of boot environments, whereas the lucreate com‐
       mand contains checks that prevent you from inadvertently	 destroying  a
       boot  environment  by,  for  example, overwriting or deleting a Solaris
       Volume Manager volume.

       If you have already used Solaris Volume Manager software to create com‐
       plex Solaris Volume Manager volumes (for example, RAID-5 volumes), Live
       Upgrade will support the use of these. However, to create  and  manipu‐
       late  these  complex objects, you must use Solaris Volume Manager soft‐
       ware. As described above, the use of Solaris Volume  Manager  software,
       rather than the lucreate command, entails the risk of destroying a boot
       environment. If you do use Solaris Volume Manager software,  use	 lufs‐
       list(1M) to determine which devices are in use for boot environments.

       Except  for  a  special use of the -s option, described below, you must
       have a source BE for the creation of a new BE. By default,  it  is  the
       current	BE.  You  can use the -s option to specify a BE other than the
       current BE.

       When creating a new BE on a UFS file system, lucreate  enables  you  to
       exclude	and include certain files from the source BE. You perform this
       inclusion or exclusion with  the	 -f,  -x,  -y,	-Y,  and  -z  options,
       described below. See the subsection on combining these options, follow‐
       ing OPTIONS, below.

       By default, all swap partitions on a UFS-based  source  BE  are	shared
       with  a	UFS-based target BE. For UFS-based target BEs, you can use the
       -m option (see below) to specify an additional or new set of swap  par‐
       titions	on the source BE for sharing with the target. When a UFS-based
       source BE is copied to a ZFS target BE, lucreate creates in the new  BE
       a  swap	area  and a dump device on separate ZFS volumes. When both the
       source and target BEs are ZFS-based and are in the same pool, both  BEs
       use  the same swap volume. If source and target are in different pools,
       a new swap volume is created in the pool of the target BE.

       The lucreate command allows you to assign a  description	 to  a	BE.  A
       description  is an optional attribute of a BE that can be of any format
       or length. It might be, for example, a  text  string  or	 binary	 data.
       After  you  create  a  BE,  you	can  change  a BE description with the
       ludesc(1M) utility.

       The lucreate command requires root privileges or that  you  assume  the
       Primary Administrator role.

OPTIONS
       The  lucreate command has the options listed below. Note that a BE name
       must not exceed 30 characters  in  length  and  must  consist  only  of
       alphanumeric characters and other ASCII characters that are not special
       to the Unix shell. See the "Quoting" section of sh(1). The BE name  can
       contain	only  single-byte,  8-bit characters; it cannot contain white‐
       space characters.

       Omission of the -m, -M, and -p options (described below) in a lucreate
       command line invokes the FMLI-based interface, which allows you to
       select disk or device slices for a UFS-based BE.

       -A BE_description

	   Assigns the BE_description to a BE. BE_description can  be  a  text
	   string  or  other  characters that can be entered on a Unix command
	   line. See ludesc(1M) for additional information on BE descriptions.
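
	   For example, a command of the following form (the description
	   text, device, and BE names here are only illustrative) assigns a
	   description to the BE being created:

	     # lucreate -A "test BE for patch evaluation" \
	     -m /:/dev/dsk/c0t4d0s0:ufs -n second_disk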

       -c BE_name

	   Assigns the name BE_name to the current BE. This option is not
	   required and can be used only when the first BE is created. The
	   first time you run lucreate for a UFS-based BE, if you omit -c,
	   lucreate supplies a default name according to the following rules:

	       1.     If  the physical boot device can be determined, the base
		      name of that device is used to name the new  boot	 envi‐
		      ronment.	For  example,  if  the physical boot device is
		      /dev/dsk/c0t0d0s0, lucreate names the new boot  environ‐
		      ment c0t0d0s0.

	       2.     If  the  physical	 boot device cannot be determined, the
		      operating system name (from uname -s) and operating sys‐
		      tem  release  level (from uname -r) are combined to pro‐
		      duce the name of the new boot environment. For  example,
		      if uname -s returns SunOS and uname -r returns 5.9, then
		      lucreate assigns the name SunOS5.9 to the new boot envi‐
		      ronment.

	       3.     If  lucreate can determine neither boot device nor oper‐
		      ating system name, it assigns the name  current  to  the
		      new boot environment.
	   For	a  ZFS-based  BE,  the default BE name is the base name of the
	   root file system.

	   If you use the -c option after the first boot environment  is  cre‐
	   ated,  the  option  is ignored if the name specified is the same as
	   the current boot environment name. If the name is different, lucre‐
	   ate displays an error message and exits.

       -C (boot_device | -)

	   Provided for occasions when lucreate cannot figure out which physi‐
	   cal storage device is your boot device. This might occur, for exam‐
	   ple,	 when  you  have a mirrored root device on the source BE on an
	   x86 machine. The -C option specifies the physical boot device from which
	   the	source BE is booted. Without this option, lucreate attempts to
	   determine the physical device from which a BE boots. If the	device
	   on  which  the  root	 file system is located is not a physical disk
	   (for example, if root is on a Solaris Volume	 Manager  volume)  and
	   lucreate  is	 able  to  make	 a reasonable guess as to the physical
	   device, you receive the query:

	     Is the physical device devname the boot device for
	     the logical device devname?

	   If you respond y, the command proceeds.

	   If you specify -C boot_device, lucreate  skips  the	search	for  a
	   physical  device  and  uses	the device you specify. The - (hyphen)
	   with the -C option tells  lucreate  to  proceed  with  whatever  it
	   determines  is  the	boot  device.  If  the command cannot find the
	   device, you are prompted to enter it.

	   If you omit -C or specify -C boot_device and lucreate cannot find a
	   boot device, you receive an error message.

	   Use	of  the	 -C  -	form is a safe choice, because lucreate either
	   finds the correct boot device or gives you the opportunity to spec‐
	   ify that device in response to a subsequent query.
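
	   For example, a command of the following form (the device names
	   here are illustrative) names the physical boot device explicitly:

	     # lucreate -C /dev/dsk/c0t0d0s0 -m /:/dev/dsk/c1t0d0s0:ufs \
	     -n second_disk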

       -D file_system

	   Specify  a  separate	 dataset for /var during UFS to ZFS migration.
	   Valid values for file_system are /var or any other mount point  not
	   containing OS deliverables. For example, /data.

	   While  the  -D  option is mainly intended for specifying a separate
	   dataset for /var, it can also be used  for  other  non-OS  critical
	   file	 systems.  For	example, you can create a separate dataset for
	   /data under the root dataset in a ZFS root BE.

	   Note that all shareable file systems in the UFS root BE that are
	   not explicitly migrated to separate datasets with the -D option
	   continue to be shared with the ZFS root BE.

	   See "Examples" for the migration and non-migration use  of  the  -D
	   option.

       -f exclude_list_file

	   Use	the  contents  of  exclude_list_file to exclude specific files
	   (including	directories)	from	the    newly	created	   BE.
	   exclude_list_file contains a list of files and directories, one per
	   line. If a line item is a file, only that file is  excluded;	 if  a
	   directory,  that  directory	and  all files beneath that directory,
	   including subdirectories, are excluded.

	   This option is not supported when the source BE is on  a  ZFS  file
	   system.
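
	   For example, an exclude_list_file (the path and entries shown here
	   are only illustrative) might contain:

	     /var/crash
	     /export/home/tempuser
	     /data/scratch

	   and be supplied to lucreate as follows:

	     # lucreate -f /etc/lu/exclude_list \
	     -m /:/dev/dsk/c0t4d0s0:ufs -n second_disk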

       -I

	   Ignore  integrity  check. Prior to creating a new BE, lucreate per‐
	   forms an integrity check, to prevent you from  excluding  important
	   system  files  from	the  BE.  Use  this  option  to	 override this
	   integrity check. The trade-off in use of this option is  faster  BE
	   creation  (with  -I) versus the risk of a BE that does not function
	   as you expect.

       -l error_log

	   Error messages and other status messages are sent to error_log,  in
	   addition to where they are sent in your current environment.

       -m mount_point:device[,volume]:fs_option[:zonename]
       [-m mount_point:device:fs_option[:zonename]] ...

	   Specifies  the  vfstab(4)  information  for a new UFS-based BE. The
	   file systems specified as arguments to -m can be on the  same  disk
	   or can be spread across multiple disks.

	   The	-m  option is not supported for BEs based on ZFS file systems.
	   This option also does not support EFI-labeled disks.

	   mount_point can be any valid mount point or - (hyphen),  indicating
	   a swap partition. The device field can be one of the following:

	       o      The name of a disk slice, of the form
		      /dev/dsk/cnumtnumdnumsnum.

	       o      The name of a Solaris Volume Manager volume, of the form
		      /dev/md/dsk/dnum.

	       o      The  name	 of  a Solaris Volume Manager disk set, of the
		      form /dev/md/setname/dsk/dnum.

	       o      The  name	  of   a   Veritas   volume,   of   the	  form
		      /dev/vx/dsk/dgname/volname.

	       o      The name of a ZFS dataset. The ZFS dataset name is valid
		      only when creating a ZFS-based BE from a	UFS-based  BE.
		      Only  user-created file systems can be migrated to a ZFS
		      dataset. The ZFS dataset name cannot be  specified  when
		      the  mount  point is an OS-critical file system, such as
		      /usr or /opt. The -m option is not valid when the
		      source BE is ZFS-based.

	       o      The  keyword  merged, indicating that the file system at
		      the specified mount point is to be merged with its  par‐
		      ent.

	       o      The keyword shared, indicating that all of the swap par‐
		      titions in the source BE are to be shared with  the  new
		      BE.
	   The	-m  option  enables  you to attach a physical disk device to a
	   Solaris Volume  Manager  single-slice  concatenation	 or  attach  a
	   Solaris  Volume  Manager  volume  to	 a mirror. Both operations are
	   accomplished with the attach keyword, described  below.  With  this
	   option, you have the choice of specifying a concatenation or mir‐
	   ror or allowing lucreate to select one for you. To specify  a  con‐
	   catenation  or  mirror,  append a comma and the name of the Solaris
	   Volume Manager logical device to the device name to which the logi‐
	   cal	device	is  being  attached.  If  you omit this specification,
	   lucreate selects a concatenation or mirror  from  a	list  of  free
	   devices. See EXAMPLES.

	   The	fs_option  field  can  be  one	or more of the keywords listed
	   below. The first two keywords specify types of  file	 systems.  The
	   remaining  keywords	specify	 actions to be taken on a file system.
	   When you specify multiple keywords, separate these with a comma.

	   ufs	       Create the file system as a UFS volume.

	   vxfs	       Create the file system as a Veritas device.

	   preserve    Preserve the file  system  contents  of	the  specified
		       physical	 storage  device. Use of this keyword presumes
		       that the device's file  system  and  its	 contents  are
		       appropriate  for the specified mount point. For a given
		       mount point, you can use preserve with only one device.
		       This keyword enables you to bypass the default steps of
		       creating a new file system  on  the  specified  storage
		       device,	then copying the file system contents from the
		       source BE to the specified device. When	you  use  pre‐
		       serve, lucreate checks that the storage device's
		       contents are suitable for the specified file system.
		       This check is limited and cannot guarantee
		       suitability.

	   mirror      Create  a  mirror  on the specified storage device. The
		       specified storage device must be a correctly named (for
		       example, /dev/md/dsk/d10) logical device that can serve
		       as a mirror. In subsequent -m options, you must specify
		       attach  (see  below)  to	 attach	 at least one physical
		       device to the new mirror.

	   attach      Attach a physical storage device, contained by  a  vol‐
		       ume,  to the mirror or single-slice concatenation asso‐
		       ciated with a specified mount point. When using attach,
		       if  you	want  to attach a disk to a specific mirror or
		       concatenation, you append a comma and the name of  that
		       logical	device	to  the	 device	 name. If you omit the
		       comma and the concatenation name, lucreate selects a
		       free  mirror  or single-slice concatenation as the con‐
		       tainer volume for the storage device. See EXAMPLES.

		       lucreate allows you to create only concatenations  that
		       contain	a  single  physical  drive  and	 allows you to
		       attach up to four such concatenations to a mirror.

	   detach      Detach a physical storage device	 from  the  mirror  or
		       concatenation associated with a specified mount point.

	   The optional zonename field specifies the name of an installed non-
	   global zone. It is used to specify  a  separate  file  system  that
	   belongs  to the particular zone, named zonename, that exists in the
	   new BE being created.

	   At minimum, you must specify one disk or device  slice,  for	 root.
	   You can do this with -m, -M (described below), or in the FMLI-based
	   interface. You must specify an -m argument for each file system you
	   want	 to  create  on	 a new BE. For example, if you have three file
	   systems on a source BE (say, /, /usr,  and  /var)  and  want	 these
	   three entities as separate file systems on a new BE, you must spec‐
	   ify three -m arguments. If you were to specify  only	 one,  in  our
	   example,  /,	 /usr,	and  /var would be merged on the new BE into a
	   single file system, under /.

	   When using the -m option to specify swap partition(s), you can des‐
	   ignate  device(s)  currently used for swap on any BE and any unused
	   devices.  Regarding	swap  assignments,  you	 have  the   following
	   choices:

	       o      Omit  any	 specification	of swap devices, in which case
		      all swap devices associated with the source BE  will  be
		      used by the new BE.

	       o      Specify  one or more swap devices, in which case the new
		      BE will use only the  specified  swap  devices  and  not
		      automatically share the swap devices associated with the
		      source BE.

	       o      Specify one or more swap devices and use the  syntax  -m
		      -:shared:swap,  in  which	 case  the new BE will use the
		      specified swap devices and will share swap devices  with
		      the source BE.
	   See EXAMPLES, below.

       -M slice_list

	   List of -m options, collected in the file slice_list. Specify these
	   arguments in the format specified for -m. Comment lines,  beginning
	   with	 a  hash  mark (#), are ignored. The -M option is useful where
	   you have a long list of file systems for a BE. Note	that  you  can
	   combine  -m	and -M options. For example, you can store swap parti‐
	   tions in slice_list and specify / and /usr slices with -m.

	   The -M option is not supported for BEs based on ZFS file systems.

	   The -m and -M options support the listing of multiple slices for  a
	   given  mount	 point. In processing these slices, lucreate skips any
	   unavailable slices and selects the first available slice. See EXAM‐
	   PLES.

       -n BE_name

	   The name of the BE to be created. BE_name must be unique on a given
	   system.

       -o outfile

	   All command output is sent to outfile, in addition to where	it  is
	   sent in your current environment.

       -P

	   Preserves the dump device of the primary boot environment (PBE)
	   for the alternate boot environment (ABE) being created.

       -p zfs_root_pool

	   Specifies the ZFS pool in which a new BE will reside.

	   This	 option can be omitted if the source and target BEs are within
	   the same pool.

	   The -p option does not support the splitting and  merging  of  file
	   systems in a target BE that is supported by the -m option.

       -s (- | BE_name)

	   Source  for	the creation of the new BE. This option enables you to
	   use a BE other than the current BE as the source for creation of  a
	   new BE.

	   If  you specify a hyphen (-) as an argument to -s, lucreate creates
	   the new BE, but does not populate it.  This	variation  of  the  -s
	   option  is  intended for the subsequent installation of a flash ar‐
	   chive on the unpopulated BE using luupgrade(1M). See flar(1M).

       -x exclude

	   Exclude the file or directory exclude from the newly created BE. If
	   exclude  is	a  directory, lucreate excludes that directory and all
	   files beneath that directory, including subdirectories.

	   This option is not supported when the source BE is on  a  ZFS  file
	   system.

       -X

	   Enable XML output. The XML output conforms to the DTD in
	   /usr/share/lib/xml/dtd/lu_cli.dtd.<num>, where <num> is the version
	   number of the DTD file.
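
	   For example, the following command (device and BE names here are
	   illustrative) requests XML-formatted output while creating a BE:

	     # lucreate -X -m /:/dev/dsk/c0t4d0s0:ufs -n second_disk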

       -y include

	   Include  the	 file or directory include in the newly created BE. If
	   include is a directory, lucreate includes that  directory  and  all
	   files beneath that directory, including subdirectories.

	   This	 option	 is  not supported when the source BE is on a ZFS file
	   system.

       -Y include_list_file

	   Use the contents of include_list_file to include specific files
	   (including directories) in the newly created BE.
	   include_list_file contains a list of files and directories, one per
	   line.  If  a	 line item is a file, only that file is included; if a
	   directory, that directory and all  files  beneath  that  directory,
	   including subdirectories, are included.

	   This	 option	 is  not supported when the source BE is on a ZFS file
	   system.

       -z filter_list_file

	   filter_list_file contains a list of items, files  and  directories,
	   one	per  line. Each item is preceded by either a +, indicating the
	   item is to be included in the new BE, or -, indicating the item  is
	   to be excluded from the new BE.

	   This	 option	 is  not supported when the source BE is on a ZFS file
	   system.
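
	   For example, a filter_list_file (the path and entries shown here
	   are only illustrative) might contain:

	     + /var/spool/mail
	     - /var/crash
	     - /export/home/tempuser

	   and be supplied to lucreate as follows:

	     # lucreate -z /etc/lu/filter_list \
	     -m /:/dev/dsk/c0t4d0s0:ufs -n second_disk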

   Combining File Inclusion and Exclusion Options
       When a source BE is on a UFS file system, the lucreate  command	allows
       you  to include or exclude specific files and directories when creating
       a new BE. You can include files and directories with:

	   o	  the -y include option

	   o	  the -Y include_list_file option

	   o	  items with a leading + in the file used  with	 the  -z  fil‐
		  ter_list option

       You can exclude files and directories with:

	   o	  the -x exclude option

	   o	  the -f exclude_list_file option

	   o	  items	 with  a  leading  - in the file used with the -z fil‐
		  ter_list option

       If the parent directory of an excluded item is  included	 with  include
       options	(for  example,	-y  include),  then  only the specific file or
       directory specified by exclude is excluded. Conversely, if  the	parent
       directory of an included file is specified for exclusion, then only the
       file include is included. For example, if you specify:

	 -x /a -y /a/b

       all of /a except for /a/b is excluded. If you specify:

	 -y /a -x /a/b

       all of /a except for /a/b is included.
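
       For example, a command of the following form (device names and paths
       here are illustrative) copies all of /a to the new BE except for /a/b,
       which is excluded:

	 # lucreate -m /:/dev/dsk/c0t4d0s0:ufs -y /a -x /a/b -n second_disk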

EXAMPLES
       The lucreate command produces copious output. In	 the  following	 exam‐
       ples,  this  output  is	not  reproduced, except where it is needed for
       clarity.

       Example 1 Creating a New Boot Environment for the First Time

       The following command sequence creates a	 new  boot  environment	 on  a
       machine on which a BE has never been created. All non-shareable (criti‐
       cal) file systems are mounted under /.

	 # lucreate -c first_disk -m /:/dev/dsk/c0t4d0s0:ufs -n second_disk
	 many lines of output
	 lucreate: Creation of Boot Environment <second_disk> successful.

       The following command, like the preceding, creates a new boot  environ‐
       ment  on	 a  machine on which a BE has never been created. However, the
       following command differs in two respects: the -c option is omitted and
       the /usr file system is mounted on its own disk slice, separate from /.

	 # lucreate -m /:/dev/dsk/c0t4d0s0:ufs -m /usr:/dev/dsk/c0t4d0s1:ufs \
	 -n second_disk
	 lucreate: Please wait while your system configuration is determined.
	 many lines of output
	 lucreate: Creation of Boot Environment c0t4d0s0 successful.

       In  the	absence	 of the -c option, lucreate assigns the name c0t4d0s0,
       the base name of the root device, to the new boot environment.

       The same command is entered, with the addition of -c:

	 # lucreate -c first_disk -m /:/dev/dsk/c0t4d0s0:ufs \
	 -m /usr:/dev/dsk/c0t4d0s1:ufs -n second_disk
	 many lines of output
	 lucreate: Creation of Boot Environment <second_disk> successful.

       Following creation of a BE, you use luupgrade(1M) to upgrade the OS  on
       the new BE and luactivate(1M) to make that BE the BE you will boot from
       upon the next reboot of your machine. Note that the swap partition  and
       all  shareable file systems for first_disk will be available to (shared
       with) second_disk.

	 # luupgrade -u -n second_disk \
	 -s /net/installmachine/export/solarisX/OS_image
	 many lines of output
	 luupgrade: Upgrade of Boot Environment <second_disk> successful.

	 # luactivate second_disk

       See luupgrade(1M) and luactivate(1M) for	 descriptions  of  those  com‐
       mands.

       Example 2 Creating a BE Using a Source Other than the Current BE

       The  following  command uses the -s option to specify a source BE other
       than the current BE.

	 # lucreate -s third_disk -m /:/dev/dsk/c0t4d0s0:ufs \
	 -m /usr:/dev/dsk/c0t4d0s1:ufs -n second_disk
	 many lines of output
	 lucreate: Creation of Boot Environment <second_disk> successful.

       Example 3 Migrating a BE from a UFS Root File System to a ZFS Root File
       System

       The following command creates a BE of a ZFS root file system from a UFS
       root file system. The current BE, c1t0d0s0, containing a UFS root  file
       system,	is  identified by the -c option. The new BE, zfsBE, is identi‐
       fied by the -n option. A ZFS storage pool must exist before the	lucre‐
       ate  operation  and must be created with slices rather than whole disks
       to be upgradeable and bootable.

	 # zpool create rpool mirror c1t0d0s0 c2t0d0s0
	 # lucreate -c c1t0d0s0 -n zfsBE -p rpool

       Note that if the current BE also resides on the ZFS pool rpool, the  -p
       option could be omitted. For example:

	 # lucreate -n zfsBE

       Example	4  Migrating  a	 BE from a UFS Root File System with User File
       System to a ZFS Root File System

       The following command creates a BE of a ZFS root file system from a UFS
       root  file  system  with	 a user file system. The current BE, c1t0d0s0,
       containing a UFS root file system, is identified by the -c option.  The
       new  BE,	 zfsBE,	 is  identified	 by the -n option. The source BE has a
       user-created file system called /data. The /data file system can be
       migrated to a ZFS dataset. A ZFS storage pool must exist before the
       lucreate operation and must be created with slices  rather  than	 whole
       disks to be upgradeable and bootable.

	 # zpool create rpool c2t0d0s0
	 # lucreate -c c1t0d0s0 -n zfsBE -p rpool -m /data:rpool/data:zfs

       Example 5 Creating a BE from a Flash Archive

       Performing  this task involves use of lucreate with the -s - option and
       luupgrade.

	 # lucreate -s - -m /:/dev/dsk/c0t4d0s0:ufs -m /usr:/dev/dsk/c0t4d0s1:ufs \
	 -n second_disk
	 brief messages
	 lucreate: Creation of Boot Environment <second_disk> successful.

       With the -s option, the lucreate command completes its work within sec‐
       onds.  At  this	point,	you can use luupgrade to install the flash ar‐
       chive:

	 # luupgrade -f -n second_disk \
	 -s /net/installmachine/export/solarisX/OS_image \
	 -J "archive_location http://example.com/myflash.flar"

       See luupgrade(1M) for a description of that command.

       Example 6 Creating an Unpopulated BE from a ZFS Root

       The following command uses the -s - option to create an unpopulated  BE
       from  a	ZFS  root. You can subsequently use luupgrade(1M) to install a
       flash archive on the unpopulated BE, as in the previous example.

	 # lucreate -n second_disk -s - -p mypool

       Example 7 Sharing and Adding Swap Partitions

       In the simplest case, if you do not specify any swap partitions	in  an
       lucreate	 command, all swap partitions in the source BE are shared with
       the  new	 BE.  For  example,  assume   that   the   current   BE	  uses
       /dev/dsk/c0t4d0s7 as its swap partition. You enter the command:

	 # lucreate -n second_disk -m /:/dev/dsk/c0t4d0s0:ufs
	 many lines of output
	 lucreate: Creation of Boot Environment <second_disk> successful.

       Upon    conclusion    of	  the	preceding   command,   the   partition
       /dev/dsk/c0t4d0s7 will be used by the BE second_disk when  that	BE  is
       activated and booted.

       If  you	want a new BE to use a different swap partition from that used
       by the source BE, enter one or more -m options to specify a new	parti‐
       tion  or	 new  partitions. Assume, once again, that the current BE uses
       /dev/dsk/c0t4d0s7 as its swap partition. You enter the command:

	 # lucreate -m /:/dev/dsk/c0t0d0s0:ufs -m -:/dev/dsk/c0t4d0s1:swap \
	  -m -:/dev/dsk/c0t4d0s2:swap -n second_disk
	 many lines of output
	 lucreate: Creation of Boot Environment <second_disk> successful.

       Upon  activation	 and  boot,  the   new	 BE   second_disk   will   use
       /dev/dsk/c0t4d0s1    and	   /dev/dsk/c0t4d0s2	and   will   not   use
       /dev/dsk/c0t4d0s7, the swap partition used by the source BE.

       Assume you want the new BE second_disk to share the  source  BE's  swap
       partition and have an additional swap partition. You enter:

	 # lucreate -m /:/dev/dsk/c0t0d0s0:ufs -m -:/dev/dsk/c0t4d0s1:swap \
	  -m -:shared:swap -n second_disk
	 many lines of output
	 lucreate: Creation of Boot Environment <second_disk> successful.

       Upon  activation and boot, the new BE second_disk will use for swapping
       /dev/dsk/c0t4d0s7,  shared  with	 the  source  BE,  and,	 in  addition,
       /dev/dsk/c0t4d0s1.

       Example 8 Using Swap Partitions on Multiple Disks

       The command below creates a BE on a second disk and specifies swap par‐
       titions on both the first and second disks.

	 # lucreate -m /:/dev/dsk/c0t4d0s0:ufs -m -:/dev/dsk/c0t4d0s1:swap \
	  -m -:/dev/dsk/c0t0d0s1:swap -n second_disk
	 many lines of output
	 lucreate: Creation of Boot Environment <second_disk> successful.

       Following completion of the preceding command, the BE second_disk  will
       use  both  /dev/dsk/c0t0d0s1  and /dev/dsk/c0t4d0s1 as swap partitions.
       These swap assignments take effect only after booting from second_disk.
       If  you have a long list of swap partitions, it is useful to use the -M
       option, as shown below.

       Example 9 Using a Combination of -m and -M Options

       In this example, a list of swap partitions is  collected	 in  the  file
       /etc/lu/swapslices. The location and name of this file are user-defined.
       The contents of /etc/lu/swapslices:

	 -:/dev/dsk/c0t3d0s2:swap
	 -:/dev/dsk/c0t4d0s2:swap
	 -:/dev/dsk/c0t5d0s2:swap
	 -:/dev/dsk/c1t3d0s2:swap
	 -:/dev/dsk/c1t4d0s2:swap
	 -:/dev/dsk/c1t5d0s2:swap

       This file is specified in the following command:

	 # lucreate -m /:/dev/dsk/c02t4d0s0:ufs -m /usr:/dev/dsk/c02t4d0s1:ufs \
	 -M /etc/lu/swapslices -n second_disk
	 many lines of output
	 lucreate: Creation of Boot Environment <second_disk> successful.

       The  BE	second_disk  will  swap	 onto  the  partitions	specified   in
       /etc/lu/swapslices.

       Example 10 Copying Versus Sharing

       The following command copies the user file system /home (in addition to
       the non-shareable file systems / and /usr) from the current BE  to  the
       new BE:

	 # lucreate -m /:/dev/dsk/c0t4d0s0:ufs -m /usr:/dev/dsk/c0t4d0s1:ufs \
	 -m /home:/dev/dsk/c0t4d0s4:ufs -n second_disk

       The  following command differs from the preceding in that the -m option
       specifying a destination for /home is omitted. The result  of  this  is
       that  /home  will  be  shared  between  the  current BE and the BE sec‐
       ond_disk.

	 # lucreate -m /:/dev/dsk/c0t4d0s0:ufs -m /usr:/dev/dsk/c0t4d0s1:ufs \
	 -n second_disk

       Example 11 Using Solaris Volume Manager Volumes

       The command shown below does the following:

	   1.	  Creates the mirror d10 and establishes this  mirror  as  the
		  receptacle for the root file system.

	   2.	  Attaches  c0t0d0s0  and  c0t1d0s0 to single-slice concatena‐
		  tions d1 and d2, respectively. Note that  the	 specification
		  of these volumes is optional.

	   3.	  Attaches  the	 concatenations	 associated  with c0t0d0s0 and
		  c0t1d0s0 to mirror d10.

	   4.	  Copies the current BE's root	file  system  to  mirror  d10,
		  overwriting any d10 contents.

	 # lucreate -m /:/dev/md/dsk/d10:ufs,mirror \
	 -m /:/dev/dsk/c0t0d0s0,d1:attach \
	 -m /:/dev/dsk/c0t1d0s0,d2:attach -n newBE

       The  following command differs from the preceding only in that concate‐
       nations for the physical storage devices are  not  specified.  In  this
       example, lucreate chooses concatenation names from a list of free names
       and attaches these volumes to the mirror	 specified  in	the  first  -m
       option.

	 # lucreate -m /:/dev/md/dsk/d10:ufs,mirror \
	 -m /:/dev/dsk/c0t0d0s0:attach \
	 -m /:/dev/dsk/c0t1d0s0:attach -n newBE

       The  following  command differs from the preceding commands in that one
       of the physical disks is detached from a mirror before  being  attached
       to the mirror you create. Also, the contents of one of the physical
       disks are preserved. The command does the following:

	   1.	  Creates the mirror d10 and establishes this  mirror  as  the
		  receptacle for the root file system.

	   2.	  Detaches  c0t0d0s0  from the mirror to which it is currently
		  attached.

	   3.	  Attaches c0t0d0s0 and c0t1d0s0 to concatenations d1 and  d2,
		  respectively. Note that the specification of these con‐
		  catenations is optional.

	   4.	  Preserves the contents  of  c0t0d0s0,	 which	presumes  that
		  c0t0d0s0 contains a valid copy of the current BE's root file
		  system.

	   5.	  Attaches the concatenations  associated  with	 c0t0d0s0  and
		  c0t1d0s0 (d1 and d2) to mirror d10.

	 # lucreate -m /:/dev/md/dsk/d10:ufs,mirror \
	 -m /:/dev/dsk/c0t0d0s0,d1:detach,attach,preserve \
	 -m /:/dev/dsk/c0t1d0s0,d2:attach -n newBE

       The  following  command is a follow-on to the first command in this set
       of  examples.  This  command  detaches  a   concatenation   (containing
       c0t0d0s0)  from	one mirror (d10, in the first command) and attaches it
       to another (d20), preserving its contents.

	 # lucreate -m /:/dev/md/dsk/d20:ufs,mirror \
	 -m /:/dev/dsk/c0t0d0s0:detach,attach,preserve -n nextBE

       The following command creates two mirrors, placing the / file system of
       the new BE on one mirror and the /opt file system on the other.

	 # lucreate -m /:/dev/md/dsk/d10:ufs,mirror \
	 -m /:/dev/dsk/c0t0d0s0,d1:attach \
	 -m /:/dev/dsk/c1t0d0s0,d2:attach \
	 -m /opt:/dev/md/dsk/d11:ufs,mirror \
	 -m /opt:/dev/dsk/c2t0d0s1,d3:attach \
	 -m /opt:/dev/dsk/c3t1d0s1,d4:attach -n anotherBE

       Example 12 Invoking FMLI-based Interface

       This example is included for historical purposes as the lu interface is
       now obsolete.

       The command below, by omitting -m or -M options, invokes lu, the	 FMLI-
       based interface for Live Upgrade operations.

	 # lucreate -n second_disk

       The  preceding command uses the current BE as the source for the target
       BE second_disk. In the FMLI interface, you can specify the target  disk
       slices  for  second_disk.  The  following command is a variation on the
       preceding:

	 # lucreate -n second_disk -s third_disk

       In the preceding command, a source for the target BE is	specified.  As
       before,	the  FMLI  interface  comes up, enabling you to specify target
       disk slices for the new BE.

       Example 13 Merging File Systems

       The command below merges the /usr/opt file system into  the  /usr  file
       system. First, here are the disk slices in the BE first_disk, expressed
       in the format used for arguments to the -m option:

	 /:/dev/dsk/c0t4d0s0:ufs
	 /usr:/dev/dsk/c0t4d0s1:ufs
	 /usr/opt:/dev/dsk/c0t4d0s3:ufs

       The following command creates a BE second_disk and performs  the	 merge
       operation, merging /usr/opt with its parent, /usr.

	 # lucreate -m /:/dev/dsk/c0t4d0s0:ufs -m /usr:/dev/dsk/c0t4d0s1:ufs \
	 -m /usr/opt:merged:ufs -n second_disk

       Example 14 Splitting a File System

       Assume  a source BE with /, /usr, and /var all mounted on the same disk
       slice. The following command creates a BE second_disk that has /, /usr,
       and /var all mounted on different disk slices.

	 # lucreate -m /:/dev/dsk/c0t4d0s0:ufs -m /usr:/dev/dsk/c0t4d0s1:ufs \
	 -m /var:/dev/dsk/c0t4d0s3:ufs -n second_disk

       This separation of a file system (such as root) into components mounted
       on different disk slices is referred to as splitting a file system.

       Example 15 Specifying Alternative Slices

       The following command uses multiple -m options to specify alternative
       disk slices for the new BE second_disk.

	 # lucreate -m /:/dev/dsk/c0t4d0s0:ufs -m /:/dev/dsk/c0t4d0s1:ufs \
	 -m /:/dev/dsk/c0t4d0s5:ufs -n second_disk
	 many lines of output
	 lucreate: Creation of Boot Environment <second_disk> successful.

       The preceding command specifies three possible disk slices, s0, s1, and
       s5 for the / file system. lucreate  selects  the	 first	one  of	 these
       slices that is not being used by another BE. Note that the -s option is
       omitted, meaning that the current BE is the source BE for the  creation
       of the new BE.

       Example 16 Specifying Separate File Systems for Non-Global Zones

       The following command specifies a separate file system belonging to the
       zone, zone1, within the new BE, second_disk.

	 # lucreate -n second_disk -m /:/dev/dsk/c0d0s3:ufs \
	 -m /export/home:/dev/dsk/c0d0s5:ufs:zone1

       The zone named zone1, inside the new BE,	 has  a	 separate  disk	 slice
       allocated for its /export/home file system.

       Example 17 Specifying a Separate Dataset for /var Using -D

       The following command specifies a separate dataset for /var during cre‐
       ation of a ZFS root boot environment from  a  parent  boot  environment
       with a UFS root.

	 # lucreate -n zfsroot_BE -p mypool -D /var

       With  the  preceding  command,  the dataset shown below will be created
       under the root dataset. This dataset will be mounted  as	 /var  in  the
       newly  created  boot  environment. The dataset will be created with the
       canmount=noauto option and will inherit the  mountpoint	property  from
       the root dataset.

	 mypool/ROOT/zfsroot_BE/var

       If you were to omit the -D option from the example command, /var would
       be merged with / (root) in the ZFS root BE.

       Example 18 Specifying a Separate Dataset for /data Using -D

       The following command creates a separate dataset for  /data  under  the
       root dataset in a ZFS root BE.

	 # lucreate -n zfsroot_BE -p mypool -D /data

       The  preceding command presumes that /data resides in a UFS file system
       in the parent BE. The command creates the following dataset.

	 mypool/ROOT/zfsroot_BE/data

       This dataset will be created with mountpoint=legacy and canmount=noauto
       and an entry will be created for /data in /etc/vfstab of the newly cre‐
       ated BE. Also, given that the dataset for /data resides under the  root
       dataset	in the ZFS root BE, /data will no longer be shared between the
       parent BE and the newly created ZFS root BE. If the ZFS root BE becomes
       the source BE in a subsequent lucreate operation, /data will be treated
       the same (for snapshot/cloning or copying) as other datasets under  the
       root dataset holding OS-critical components.

EXIT STATUS
       The following exit values are returned:

       0     Successful completion.

       >0    An error occurred.

FILES
       /etc/lutab

	   list of BEs on the system

       /usr/share/lib/xml/dtd/lu_cli.dtd.<num>

	   Live Upgrade DTD (see -X option)

ATTRIBUTES
       See attributes(5) for descriptions of the following attributes:

       ┌─────────────────────────────┬─────────────────────────────┐
       │      ATTRIBUTE TYPE	     │	    ATTRIBUTE VALUE	   │
       ├─────────────────────────────┼─────────────────────────────┤
       │Availability		     │SUNWlu			   │
       └─────────────────────────────┴─────────────────────────────┘

SEE ALSO
       luactivate(1M),	lucancel(1M), lucompare(1M), lucurr(1M), ludelete(1M),
       ludesc(1M), lufslist(1M), lumake(1M), lumount(1M), lurename(1M), lusta‐
       tus(1M),	 luupgrade(1M),	 zfs(1M),  zpool(1M),  zoneadm(1M),  lutab(4),
       attributes(5), live_upgrade(5), zones(5)

NOTES
       As is true for any Solaris operating system upgrade (and not a  feature
       of  Live	 Upgrade),  when  splitting  a	directory  into multiple mount
       points, hard links are not maintained across file systems. For example,
       if   /usr/test1/buglist	is  hard  linked  to  /usr/test2/buglist,  and
       /usr/test1 and /usr/test2 are split into	 separate  file	 systems,  the
       link  between  the files will no longer exist. If lucreate encounters a
       hard link across file systems, the command issues a warning message and
       creates a symbolic link to replace the lost hard link.

       lucreate	 cannot	 prevent  you  from making invalid configurations with
       respect to non-shareable file systems. For example, you could enter  an
       lucreate	 command  that	would  create  separate file systems for / and
       /kernel—an invalid division of /. The resulting BE would be unbootable.
       When  creating file systems for a boot environment, the rules are iden‐
       tical to the rules for creating file systems for the Solaris  operating
       environment.

       Mindful of the principle described in the preceding paragraph, consider
       the following:

	   o	  In a source BE, you must have valid vfstab entries for every
		  file system you want to copy to or share with a new BE.

	   o	  You cannot create a new BE on a disk with overlapping parti‐
		  tions (that is, partitions that share the same physical disk
		  space).  The	lucreate  command  that	 specifies such a disk
		  might complete, but the resulting BE would be unbootable.

       Note -

	 As stated in the description of the -m option,	 if  you  use  Solaris
	 Volume	 Manager  volumes  for	boot environments, use lucreate rather
	 than Solaris Volume Manager commands to manipulate these volumes. The
	 Solaris  Volume  Manager  software  has no knowledge of boot environ‐
	 ments; the lucreate command contains checks  that  prevent  you  from
	 inadvertently	destroying  a  boot environment by, for example, over‐
	 writing or deleting a Solaris Volume Manager volume.

       For versions of the Solaris operating system prior to Solaris 10,  Live
       Upgrade	supports the release it is distributed on and up to three mar‐
       keting releases back. For example, if you obtained  Live	 Upgrade  with
       Solaris 9 (including a Solaris 9 upgrade), that version of Live Upgrade
       supports Solaris versions 2.6, Solaris 7, and Solaris 8, in addition to
       Solaris	9. No version of Live Upgrade supports a Solaris version prior
       to Solaris 2.6.

       Starting with version 10 of the Solaris operating system, Live  Upgrade
       supports	 the  release  it  is  distributed  on and up to two marketing
       releases back. For example, if you obtained Live Upgrade	 with  Solaris
       10  (including a Solaris 10 upgrade), that version of Live Upgrade sup‐
       ports Solaris 8 and Solaris 9, in addition to Solaris 10. For  instruc‐
       tions  on  adding  Live	Upgrade	 packages  for the release you want to
       install, see Solaris 10 5/08 Installation Guide: Solaris	 Live  Upgrade
       and Upgrade Planning.

       Correct	operation  of Solaris Live Upgrade requires that a limited set
       of patch	 revisions  be	installed  for	a  given  OS  version.	Before
       installing  or  running	Live  Upgrade, you are required to install the
       limited set of patch revisions. Make sure you have  the	most  recently
       updated	patch  list  by consulting http://sunsolve.sun.com. Search for
       the infodoc 72099 on the SunSolve web site.

SunOS 5.10			  2 May 2012			  lucreate(1M)