MIGRATION OF ZPOOL STORAGE

In this example, the global zone oxdglz75c is used.
Log in to the server (global zone) as root.
Check the zpools with the following command.
With no arguments, the command displays all the fields for all pools on the system.
root@oxdglz75c # zpool list
NAME          SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
rpool         136G  27.3G   109G  20%  ONLINE  -
z_data_0x00  1.18T   449G   757G  37%  ONLINE  -
zones         151G  27.8G   123G  18%  ONLINE  -
Once the storage team has assigned the new LUNs, rescan them from the host side (a sketch follows the note below).
Note: The new storage LUNs must match the existing ones in number and size.
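A minimal sketch of the rescan from the host side (assuming the new LUNs are presented over the existing FC paths; the exact steps depend on the connectivity):
Verify the new LUNs are visible on the FC controllers
root@oxdglz75c # cfgadm -al
Build the /dev/dsk and /dev/rdsk entries for the new disks
root@oxdglz75c # devfsadm -c disk
Confirm the new disks now appear in the disk list
root@oxdglz75c # echo | format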
Check the status of the pool:

root@oxdglz75c # zpool status z_data_0x00
pool: z_data_0x00
state: ONLINE
scrub: none requested
config:

NAME STATE READ WRITE CKSUM
 z_data_0x00 ONLINE 0 0 0
 c14t60060E800428E400000028E40000010Cd0 ONLINE 0 0 0
 c14t60060E800428E400000028E400000111d0 ONLINE 0 0 0
 c14t60060E800428E400000028E400000113d0 ONLINE 0 0 0
 c14t60060E800428E400000028E400000115d0 ONLINE 0 0 0
 c14t60060E800428E400000028E400000117d0 ONLINE 0 0 0
 c14t60060E800428E400000028E40000053Ed0 ONLINE 0 0 0
 c14t60060E800428E400000028E400000536d0 ONLINE 0 0 0
 c14t60060E800428E400000028E400000539d0 ONLINE 0 0 0
 c14t60060E800428E400000028E400000547d0 ONLINE 0 0 0
 c14t60060E800428E400000028E400000548d0 ONLINE 0 0 0

errors: No known data errors

Each disk in the ZFS pool has the same size.
We use the prtvtoc (print VTOC, i.e. disk label) command to print the current partition structure.

root@oxdglz75c # prtvtoc /dev/rdsk/c14t60060E800428E400000028E40000010Cd0s2
* /dev/rdsk/c14t60060E800428E400000028E40000010Cd0s2 partition map
*
* Dimensions:
* 512 bytes/sector
* 122177280 sectors
* 122177213 accessible sectors
*
* Flags:
* 1: unmountable
* 10: read-only
*
* First Sector Last
* Partition Tag Flags Sector Count Sector Mount Directory
 0 4 00 34 122160829 122160862
 8 11 00 122160863 16384 122177246

Output from format:
root@oxdglz75c # format c14t60060E800428E400000028E40000010Cd0
selecting c14t60060E800428E400000028E40000010Cd0
[disk formatted]
/dev/dsk/c14t60060E800428E400000028E40000010Cd0s0 is part of active ZFS pool 
z_data_0x00. Please see zpool(1M).


FORMAT MENU:
 disk - select a disk
 type - select (define) a disk type
 partition - select (define) a partition table
 current - describe the current disk
 format - format and analyze the disk
 repair - repair a defective sector
 label - write label to the disk
 analyze - surface analysis
 defect - defect list management
 backup - search for backup labels
 verify - read and display labels
 inquiry - show vendor, product and revision
 volname - set 8-character volume name
 !<cmd> - execute <cmd>, then return
 quit
format> p


PARTITION MENU:
 0 - change `0' partition
 1 - change `1' partition
 2 - change `2' partition
 3 - change `3' partition
 4 - change `4' partition
 5 - change `5' partition
 6 - change `6' partition
 select - select a predefined table
 modify - modify a predefined partition table
 name - name the current table
 print - display the current table
 label - write partition map and label to the disk
 !<cmd> - execute <cmd>, then return
 quit
partition> p
Current partition table (original):
Total disk sectors available: 122160862 + 16384 (reserved sectors)

Part Tag Flag First Sector Size Last Sector
 0 usr wm 34 58.25GB 122160862 
 1 unassigned wm 0 0 0 
 2 unassigned wm 0 0 0 
 3 unassigned wm 0 0 0 
 4 unassigned wm 0 0 0 
 5 unassigned wm 0 0 0 
 6 unassigned wm 0 0 0 
 8 reserved wm 122160863 8.00MB 122177246

Below are the new LUNs allocated from the new storage:
c14t60060E800545AA00000045AA00001134d0 
c14t60060E800545AA00000045AA0000114Fd0 
c14t60060E800545AA00000045AA0000122Bd0 
c14t60060E800545AA00000045AA00001244d0 
c14t60060E800545AA00000045AA00001245d0 
c14t60060E800545AA00000045AA0000130Dd0 
c14t60060E800545AA00000045AA0000134Cd0 
c14t60060E800545AA00000045AA0000140Fd0 
c14t60060E800545AA00000045AA0000147Ed0 
c14t60060E800545AA00000045AA00001529d0
All 10 new LUNs have the same size and sector count as the LUNs currently in the ZFS pool z_data_0x00.
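As a quick check (a sketch only; the loop lists two of the ten new LUN names, extend it to all of them), print the accessible sector count of each new LUN with prtvtoc and compare it to the 122177213 accessible sectors of the existing LUNs:
root@oxdglz75c # for d in c14t60060E800545AA00000045AA00001134d0 \
> c14t60060E800545AA00000045AA0000114Fd0
> do
>   echo "== $d"
>   prtvtoc /dev/rdsk/${d}s2 | grep "accessible sectors"
> done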
The next steps describe how to integrate the new LUNs from the new storage into the existing zpool without impacting the running application.
Edit the MPxIO configuration for the new XIV storage.
Verify the disk information.
format> inq
Vendor:   IBM
Product:  2810XIV
Revision: 10.2

Edit /kernel/drv/scsi_vhci.conf (for example with vi) and add the following entry at the end of the file.
device-type-scsi-options-list =
"IBM     2810XIV", "symmetric-option";
symmetric-option = 0x1000000;
A couple of reboots are required once the multipath configuration has been edited.
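A sketch of the reboot and the follow-up multipath check (assuming MPxIO is already enabled on the FC ports):
Perform a reconfiguration reboot so scsi_vhci picks up the new entry
root@oxdglz75c # reboot -- -r
Once the host is back, confirm the XIV LUNs show up as multipathed logical units
root@oxdglz75c # mpathadm list lu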
Syntax: zpool replace <pool> <device> <new-device>
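For example, one replacement could look like the sketch below (the old-to-new pairing is illustrative only; use your own mapping, replace one device at a time, and let each resilver finish before starting the next):
root@oxdglz75c # zpool replace z_data_0x00 \
> c14t60060E800428E400000028E40000010Cd0 \
> c14t60060E800545AA00000045AA00001134d0
Watch the resilver progress, then repeat for the remaining nine LUN pairs
root@oxdglz75c # zpool status z_data_0x00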

All data in the ZFS file systems of zpool z_data_0x00 remains accessible throughout the migration, with no impact to the production system and no downtime.
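A short verification sketch once the last resilver completes (standard zpool/zfs commands only):
The pool should be reported as healthy
root@oxdglz75c # zpool status -x z_data_0x00
root@oxdglz75c # zpool list z_data_0x00
The file systems and their data remain in place
root@oxdglz75c # zfs list -r z_data_0x00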