UFS to ZFS Migration Using Live Upgrade


Requirement:
We need a spare physical disk matching the size of the current root disk. If you don't have a spare disk, you can break the current root mirror and use the detached disk for the ZFS conversion.
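If the root disk is mirrored with Solaris Volume Manager, the second submirror can be detached and cleared to free up its disk. A hedged sketch, assuming the root mirror is d0 and the submirror on the second disk is d20; the metadevice names here are placeholders, so check metastat output on your system first:

```shell
# Show the current metadevice layout (names below are assumed examples)
metastat -p

# Detach the second submirror (assumed d20) from the root mirror (assumed d0)
metadetach d0 d20

# Clear the detached metadevice so its slice is free for the new rpool
metaclear d20
```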

Assumptions: 
New disk: c1t1d0
The new disk should be formatted with an SMI label, with all sectors allocated to slice 0 (s0). An EFI label is not supported for the root pool. To apply an SMI label, run format in expert mode:
format -e
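The labeling step is interactive; a hedged sketch of the session, assuming the new disk is c1t1d0:

```shell
# Start format in expert mode against the new disk
format -e c1t1d0

# Within format, choose "label" and select the SMI type when prompted:
#   format> label
#   [0] SMI Label
#   [1] EFI Label
#   Specify Label type[1]: 0
# Then use the "partition" menu to assign all cylinders to slice 0 and label again.

# Verify afterwards that s0 spans the whole disk
prtvtoc /dev/rdsk/c1t1d0s2
```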
Creating rpool:
First, create a zpool named rpool on the newly configured disk.
bash-3.00# zpool create rpool c1t1d0s0
bash-3.00# zfs list
NAME    USED  AVAIL  REFER  MOUNTPOINT
rpool    72K  7.81G    21K  /rpool
Check whether any boot environments already exist on the system:

bash-3.00# lustatus
ERROR: No boot environments are configured on this system 
ERROR: cannot determine list of all boot environment names

Creating the new boot environment using rpool:
Now we can create a new boot environment using the newly configured zpool (i.e. rpool).
 -c — current boot environment name
 -n — new boot environment name
 -p — Pool name
bash-3.00# lucreate -c sol_stage1 -n sol_stage2 -p rpool
Checking GRUB menu...
System has findroot enabled GRUB
Analyzing system configuration.
Comparing source boot environment file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment .
Source boot environment is .
Creating boot environment .
Creating file systems on boot environment .
Creating file system for </> in zone on .
Populating file systems on boot environment .
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Copying.
Updating compare databases on boot environment .
Making boot environment bootable.
Updating bootenv.rc on ABE .
File propagation successful
Copied GRUB menu from PBE to ABE
No entry for BE in GRUB menu
Population of boot environment successful.
Creation of boot environment successful.

bash-3.00# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
sol_stage1                 yes      yes    yes       no     -
sol_stage2                 yes      no     no        yes    -

If required:
A separate dataset can be created for /var by adding the -D /var option to the lucreate command, as shown below.

# lucreate -c c0t0d0 -n new-zfsBE -p rpool -D /var
 Activating the new boot environment:
Once the lucreate is done, activate the new boot environment so that the system boots from the new BE from the next reboot onwards.

Note: Do not use the “reboot” command; use “init 6”.

bash-3.00# luactivate sol_stage2
System has findroot enabled GRUB
Generating boot-sign, partition and slice information for PBE 
A Live Upgrade Sync operation will be performed on startup of boot environment <sol_stage2>.
Generating boot-sign for ABE 
NOTE: File not found in top level dataset for BE
Generating partition and slice information for ABE 
Boot menu exists.
Generating multiboot menu entries for PBE.
Generating multiboot menu entries for ABE.
Disabling splashimage
Re-enabling splashimage
No more bootadm entries. Deletion of bootadm entries is complete.
GRUB menu default setting is unaffected
Done eliding bootadm entries.
**********************************************************************
The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.
**********************************************************************
In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:
1. Boot from the Solaris failsafe or boot in Single User mode from Solaris Install CD or Network.
2. Mount the Parent boot environment root slice to some directory (like /mnt). You can use the following command to mount:

mount -F ufs /dev/dsk/c1t0d0s0 /mnt

3. Run the luactivate utility without any arguments from the Parent boot environment root slice, as shown below:

/mnt/sbin/luactivate

4. luactivate activates the previous working boot environment and indicates the result.
5. Exit Single User mode and reboot the machine.
**********************************************************************
Modifying boot archive service
Propagating findroot GRUB for menu conversion.
File propagation successful
File propagation successful
File propagation successful
File propagation successful
Deleting stale GRUB loader from all BEs.
File deletion successful
File deletion successful
File deletion successful
Activation of boot environment successful.

bash-3.00# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
sol_stage1                 yes      yes    no        no     -
sol_stage2                 yes      no     yes       no     -

Here you can see that “Active On Reboot” is yes for sol_stage2.
 
Reboot the server using init 6 to boot from the new boot environment.
Note: Do not use the “reboot” command; use “init 6”.

bash-3.00# init 6
updating /platform/i86pc/boot_archive
propagating updated GRUB menu
Saving existing file in top level dataset for BE as //boot/grub/menu.lst.prev.
File propagation successful
File propagation successful
File propagation successful
File propagation successful

bash-3.00# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
sol_stage1                 yes      no     no        yes    -
sol_stage2                 yes      yes    yes       no     -
Now you can see that the server has booted from ZFS.
 bash-3.00# zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
rpool                 4.60G  3.21G  34.5K  /rpool
rpool/ROOT            3.59G  3.21G    21K  legacy
rpool/ROOT/sol_stage2 3.59G  3.21G  3.59G  /
rpool/dump             512M  3.21G   512M  -
rpool/swap             528M  3.73G    16K  -
bash-3.00# zpool status
 pool: rpool
 state: ONLINE
 scrub: none requested
config:

NAME STATE READ WRITE CKSUM
 rpool ONLINE 0 0 0
 c1t1d0s0 ONLINE 0 0 0

errors: No known data errors

If everything goes fine, you can remove the old boot environment using the command below:
 bash-3.00# ludelete -f sol_stage1
System has findroot enabled GRUB
Updating GRUB menu default setting
Changing GRUB menu default setting to <0>
Saving existing file in top level dataset for BE as //boot/grub/menu.lst.prev.
File propagation successful
Successfully deleted entry from GRUB menu
Determining the devices to be marked free.
Updating boot environment configuration database.
Updating boot environment description database on all BEs.
Updating all boot environment configuration databases.
Boot environment deleted.

bash-3.00# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
sol_stage2                 yes      yes    yes       no     -

Now we can use the old boot environment's disk for rpool mirroring. Its size should be equal to or greater than that of the existing rpool disk.
 bash-3.00# zpool status
 pool: rpool
 state: ONLINE
 scrub: none requested
config:

NAME STATE READ WRITE CKSUM
 rpool ONLINE 0 0 0
 c1t1d0s0 ONLINE 0 0 0

errors: No known data errors
Copy the partition table to the second disk:
 bash-3.00# prtvtoc /dev/rdsk/c1t1d0s2 |fmthard -s - /dev/rdsk/c1t0d0s2
fmthard: New volume table of contents now in place.
 Initiating the rpool mirroring:
 bash-3.00# zpool attach rpool c1t1d0s0 c1t0d0s0
Please be sure to invoke installgrub(1M) to make 'c1t0d0s0' bootable.
Make sure to wait until resilver is done before rebooting.
bash-3.00# zpool status
 pool: rpool
 state: ONLINE
status: One or more devices is currently being resilvered. The pool will
 continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 0h0m, 1.37% done, 0h18m to go
config:

NAME STATE READ WRITE CKSUM
 rpool ONLINE 0 0 0
 mirror-0 ONLINE 0 0 0
 c1t1d0s0 ONLINE 0 0 0
 c1t0d0s0 ONLINE 0 0 0 56.9M resilvered

errors: No known data errors
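As the zpool attach output above notes, on x86 the GRUB boot loader must be installed on the newly attached disk so that the system can boot from either half of the mirror. A sketch, assuming the stock stage1/stage2 loader paths:

```shell
# Install the GRUB boot blocks on slice 0 of the newly attached mirror disk
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0
```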
Once the resilver completes, the system will be running on ZFS with root mirroring.
After migrating to ZFS, you have to use Live Upgrade for OS patching.
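A hedged outline of that Live Upgrade patching workflow; the BE name and the patch directory below are placeholders:

```shell
# Clone the running BE into a new, inactive BE (name is an example)
lucreate -n sol_patched

# Apply patches to the inactive BE; -s points at a directory of
# unpacked patches (path is a placeholder)
luupgrade -t -n sol_patched -s /var/tmp/patches

# Activate the patched BE and reboot with init 6 (never "reboot")
luactivate sol_patched
init 6
```

If the patched BE misbehaves, the previous BE remains available as a fallback via luactivate, as in the activation step above.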