Technical Tip

QUESTION

How can I migrate my Solaris 10 (Update 6, 10/08 onwards) system to a ZFS-based root file system?

ANSWER
Before you read this, have a look at this page if you are not sure about setting up a ZFS pool.
This is a brief example of migrating an existing UFS-based Solaris 10 system to use a ZFS storage
pool for its root, swap and dump areas.
This is only supported in Solaris 10 Update 6 (10/08) onwards.

This procedure shows the process for migrating using Live Upgrade, and is one of six Live Upgrade scenarios taken from our Solaris Live Upgrade Workshop one-day course.

Note that preparation for performing a Live Upgrade includes installation of specific patches and the correct Live Upgrade software, plus availability of disk storage. See the following link for details:-
https://docs.oracle.com/cd/E26505_01/html/E28038/preconfig-17.html
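As a quick sanity check before starting, you can confirm that the Live Upgrade packages are present (these are the standard Solaris 10 package names; this is only a minimal check, not a substitute for the patch list in the link above):-
# pkginfo  SUNWlucfg  SUNWlur  SUNWluu
.. should list all three Live Upgrade packages; install the versions supplied with the target release media if any are missing.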

Migrating to ZFS has several benefits, including:-
  • Using advanced ZFS facilities, with attendant performance and resilience.
  • Only one pool to maintain, with simple administration and easy replacement of disks (in a mirrored pool) with larger disks if more capacity is needed.
  • New Boot Environments (BE's) are created using ZFS file system snapshot clones, and are created almost instantly; these can then be upgraded, patched and modified (adding or removing packages, etc.) and then activated and booted from, to take over from the currently running BE.
To begin, create a pool on your spare disk(s), preferably a mirror.

The pool must be created from slices on disks with an SMI disk label, rather than whole disks, in order to be bootable and upgradeable, and you must use only single slices or mirrors, not RAID-Z (a ZFS feature roughly equivalent to RAID 5).
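If you are not sure whether a disk carries an SMI label, or how its slices are laid out, something along these lines will show you (the device names are just examples matching the pool created below):-
# prtvtoc  /dev/rdsk/c0t2d0s2
.. displays the slice table and label details for the disk.
# format  -e  c0t2d0
.. expert mode allows the label command to write an SMI label if the disk currently has an EFI label.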

A ZFS root pool needs more space than a UFS root file system because the swap and dump areas must be separate volumes within the pool, whereas swap and dump are usually the same device in a UFS root configuration.

# zpool  create  -f   prawn_root_t2  mirror  c0t2d0s0  c0t3d0s0
# zpool  status
.. displays information about the ZFS pools.
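For a healthy two-way mirror the output should look broadly like this (illustrative only):-
  pool: prawn_root_t2
 state: ONLINE
 scrub: none requested
config:
        NAME            STATE     READ WRITE CKSUM
        prawn_root_t2   ONLINE       0     0     0
          mirror        ONLINE       0     0     0
            c0t2d0s0    ONLINE       0     0     0
            c0t3d0s0    ONLINE       0     0     0
errors: No known data errors
A zpool list will confirm the capacity of the pool; remember to allow for the separate swap and dump volumes mentioned above.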

Now use lucreate to create a new BE on the ZFS pool:-
# lucreate  -n  prawn_zfs_root  -p  prawn_root_t2
Analyzing system configuration.
Comparing source boot environment <c0t0d0s0> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device </dev/dsk/c0t2d0s0> is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <prawn_zfs_root>.
Source boot environment is <c0t0d0s0>.
Creating boot environment <prawn_zfs_root>.
Creating file systems on boot environment <prawn_zfs_root>.
Creating <zfs> file system for </> in zone <global> on <prawn_root_t2/ROOT/prawn_zfs_root>.
Populating file systems on boot environment <prawn_zfs_root>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Copying.
Creating shared file system mount points.
Creating compare databases for boot environment <prawn_zfs_root>.
Creating compare database for file system </usr>.
Creating compare database for file system </prawn_root_t2/ROOT>.
Creating compare database for file system </opt>.
Creating compare database for file system </>.
Updating compare databases on boot environment <prawn_zfs_root>.
Making boot environment <prawn_zfs_root> bootable.
Creating boot_archive for /.alt.tmp.b-4Gc.mnt
updating /.alt.tmp.b-4Gc.mnt/platform/sun4u/boot_archive
Population of boot environment <prawn_zfs_root> successful.
Creation of boot environment <prawn_zfs_root> successful.
#

(Takes about 30-40 minutes.)
Note that data slices separate from those containing the Solaris OS, such as /export/home, will not be migrated (nor can they be, unlike BE's contained on UFS file systems); such slices will be mounted on their original mount points when the new BE is booted.
# lufslist  prawn_zfs_root
.. displays file system information for the new BE
# lustatus
.. displays general BE  information
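At this point lustatus should show both BEs complete, with the original still active (the layout here is illustrative):-
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
c0t0d0s0                   yes      yes    yes       no     -
prawn_zfs_root             yes      no     no        yes    -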

You could now use luupgrade or smpatch (smpatch currently has issues...) to patch the new (ZFS-based) BE before activating it.
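For example, to apply patches to the inactive BE with luupgrade (the patch directory and patch IDs below are purely hypothetical):-
# luupgrade  -t  -n  prawn_zfs_root  -s  /var/tmp/patches  139555-08  139556-08
.. -t applies patches, -n names the target BE and -s gives the directory containing the patches.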

If the original (UFS) BE contained non-global zones in the system slices, they will be copied with the lucreate.
If they exist in a non-system slice, such as /zones mounted on a separate slice, then they will be treated as a shared slice, in a similar manner to a /export/home slice as described above.
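To see which zones a system has and where their zone paths live (and hence how they will be handled), a simple check is:-
# zoneadm  list  -cv
.. lists all configured and installed zones together with their zone paths.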

Looking to the future, when further releases are available, you could upgrade the new BE before booting it:-
# luupgrade -n prawn_zfs_root -u  -s /net/yamaha/software/sol10_u8
Where /net/yamaha/software/sol10_u8 is the path to a valid Solaris 10 distribution image. (Imaginary as at March 2009!)

Now that the new ZFS BE is created, we can activate it, then boot from it:-
# luactivate  prawn_zfs_root
A Live Upgrade Sync operation will be performed on startup of boot environment <prawn_zfs_root>.

******************************************************************
The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.
******************************************************************
In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:

1. Enter the PROM monitor (ok prompt).

2. Change the boot device back to the original boot environment by typing:

     setenv boot-device /pci@1f,0/ide@d/disk@0,0:a

3. Boot to the original boot environment by typing:
   boot
******************************************************************
Modifying boot archive service
Activation of boot environment <prawn_zfs_root> successful.


You can now reboot, but see the output above from luactivate on which commands to use.
# init  6


When the new ZFS root pool BE is booted and running, we can consider our next steps.
We may wish to retain the original BE in case of problems.
Have a look around to see how the system looks with various commands - you will not notice much difference, except with commands such as df.
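For example, df output for the root file system will now name a ZFS dataset rather than a disk slice; you should see something broadly like this (sizes are illustrative):-
# df  -h  /
Filesystem                          size  used  avail capacity  Mounted on
prawn_root_t2/ROOT/prawn_zfs_root    33G  4.5G    27G      15%   /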
Also, it is important to have knowledge of the ZFS zpool and zfs commands in order to maintain the file systems and pool; such topics are covered on both our Solaris 10 Systems Administration Part 2 and Solaris 10 Update Workshop courses.
Note how slices that hold user data, such as /export/home, are not included in the BE, but retain their original slices and mount points.
You may wish to migrate these to ZFS as well, perhaps in a separate pool.
It is possible to place such data within the existing root pool, but this would make it part of any new cloned BE's, with possible complications as a result.
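One possible approach for /export/home, sketched here with invented device names, is to build a small separate pool, copy the data across, and then retire the old slice:-
# zpool  create  prawn_data  mirror  c0t4d0s0  c0t5d0s0
# zfs  create  -o  mountpoint=/export/home2  prawn_data/home
# cd  /export/home  &&  find . | cpio -pdmu /export/home2
# cd  /
# umount  /export/home
.. and remove the /export/home entry from /etc/vfstab
# zfs  set  mountpoint=/export/home  prawn_data/home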

Now we can create a further BE very quickly, which can then be patched or upgraded as required.

To create a new BE from our ZFS BE:-
# lucreate  -n   prawn_root_t2_jan_31
Analyzing system configuration.
Comparing source boot environment <prawn_zfs_root> file systems with the
file system(s) you specified for the new boot environment. Determining
which file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <prawn_root_t2_jan_31>.
Source boot environment is <prawn_zfs_root>.
Creating boot environment <prawn_root_t2_jan_31>.
Cloning file systems from boot environment <prawn_zfs_root> to create boot environment <prawn_root_t2_jan_31>.
Creating snapshot for <prawn_root_t2/ROOT/prawn_zfs_root> on <prawn_root_t2/ROOT/prawn_zfs_root@prawn_root_t2_jan_31>.
Creating clone for <prawn_root_t2/ROOT/prawn_zfs_root@prawn_root_t2_jan_31> on <prawn_root_t2/ROOT/prawn_root_t2_jan_31>.
Setting canmount=noauto for </> in zone <global> on <prawn_root_t2/ROOT/prawn_root_t2_jan_31>.
Population of boot environment <prawn_root_t2_jan_31> successful.
Creation of boot environment <prawn_root_t2_jan_31> successful.

(Takes about 30 seconds.)
# lustatus

A zfs list command will show the BE has been created as a clone of a ZFS snapshot.
# zfs  list
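The relevant part of the listing should look something like this (sizes are illustrative):-
NAME                                                      USED  AVAIL  REFER  MOUNTPOINT
prawn_root_t2/ROOT/prawn_zfs_root                        4.64G  27.1G  4.56G  /
prawn_root_t2/ROOT/prawn_zfs_root@prawn_root_t2_jan_31   76.5K      -  4.56G  -
prawn_root_t2/ROOT/prawn_root_t2_jan_31                   100K  27.1G  4.56G  /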

The new BE can now be patched, have new packages added, and be upgraded.
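If you want to inspect or modify the new BE before activating it, it can be mounted temporarily with lumount (the mount point is just an example):-
# lumount  prawn_root_t2_jan_31  /mnt
.. browse or modify the BE under /mnt, then release it with:
# luumount  prawn_root_t2_jan_31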
For more information on ZFS, why not attend our 4-day Solaris 10 Update
course? See: http://www.firstalt.co.uk/courses/s10up.html
ZFS is also included in our standard Solaris 10 Systems Administration courses.
For full details of Live Upgrade, you can attend our Solaris Live Upgrade Workshop, a one-day course covering 6 different Live Upgrade scenarios, including this one!

