ZFS Administration

Contents

Creating pool
Creating File System
Taking Snapshot
Destroying File System
To Create Volume
To Increase Swap Space
To view the properties of the file system
To destroy storage pool
Adding Hot-Spare
Attaching device into the pool
Detaching device from the pool
To offline a device
To online a device
To view the exported pool
Disk Scrubbing
Renaming a file system
To Clone a File system
Unmounting a file system temporarily
To mount a file system
To Set a Value
Striping
Mirroring
Converting from Stripe to Mirror
To mount Legacy mount point
ZFS Made Simple

bash-3.00# uname -a
SunOS test_solaris10 5.10 Generic_118855-33 i86pc i386 i86pc

bash-3.00# zpool status
no pools available

bash-3.00# zpool list
no pools available

bash-3.00# zfs list
no datasets available

bash-3.00# df -h

Filesystem size used avail capacity Mounted on
/dev/dsk/c1t0d0s0 6.9G 4.0G 2.8G 60% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 579M 728K 579M 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
/usr/lib/libc/libc_hwcap1.so.1
 6.9G 4.0G 2.8G 60% /lib/libc.so.1
fd 0K 0K 0K 0% /dev/fd
/dev/dsk/c1t0d0s4 514M 66M 396M 15% /var
swap 579M 84K 579M 1% /tmp
swap 579M 40K 579M 1% /var/run
/dev/dsk/c1t0d0s7 1.9G 12M 1.8G 1% /export/home
/vol/dev/dsk/c0t0d0/sol_10_1106_x86
 3.0G 3.0G 0K 100% /cdrom/sol_10_1106_x86
/hgfs 16G 4.0M 16G 1% /hgfs
/tmp/VMwareDnD 64G 16M 64G 1% /var/run/vmblock

bash-3.00# echo | format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
 0. c1t0d0 <DEFAULT cyl 1302 alt 2 hd 255 sec 63>
 /pci@0,0/pci1000,30@10/sd@0,0
 1. c1t1d0 <DEFAULT cyl 1302 alt 2 hd 255 sec 63>
 /pci@0,0/pci1000,30@10/sd@1,0
 2. c1t2d0 <VMware,-VMware Virtual S-1.0-10.00GB>
 /pci@0,0/pci1000,30@10/sd@2,0
 3. c1t3d0 <VMware,-VMware Virtual S-1.0-1.00GB>
 /pci@0,0/pci1000,30@10/sd@3,0
 4. c1t4d0 <VMware,-VMware Virtual S-1.0-1.00GB>
 /pci@0,0/pci1000,30@10/sd@4,0
 5. c1t5d0 <VMware,-VMware Virtual S-1.0-1.00GB>
 /pci@0,0/pci1000,30@10/sd@5,0
 6. c1t6d0 <VMware,-VMware Virtual S-1.0-1.00GB>
 /pci@0,0/pci1000,30@10/sd@6,0
Specify disk (enter its number): Specify disk (enter its number):

Creating a ZFS Pool

bash-3.00# zpool create pool1 c1t3d0 c1t4d0

bash-3.00# zpool status
 pool: pool1
 state: ONLINE
 scrub: none requested
config:
 NAME STATE READ WRITE CKSUM
 pool1 ONLINE 0 0 0
 c1t3d0 ONLINE 0 0 0
 c1t4d0 ONLINE 0 0 0

errors: No known data errors

bash-3.00# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
pool1 1.97G 58.5K 1.97G 0% ONLINE -
bash-3.00# zfs list
NAME USED AVAIL REFER MOUNTPOINT
pool1 77K 1.94G 24.5K /pool1

bash-3.00# df -h /pool1
Filesystem size used avail capacity Mounted on
pool1 1.9G 24K 1.9G 1% /pool1

Once we create a pool with the zpool create command, its file system and mount point are created automatically.
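
If a different mount point is wanted from the start, zpool create also accepts a -m option that sets it at creation time. A minimal sketch (the pool name pool3 and its mount point are placeholders, not part of this session):

bash-3.00# zpool create -m /export/pool3 pool3 c1t3d0

The next command creates a pool whose two disks form a mirror rather than a stripe: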

bash-3.00# zpool create pool2 mirror c1t5d0 c1t6d0

bash-3.00# zpool status
 pool: pool1
 state: ONLINE
 scrub: none requested
config:

NAME STATE READ WRITE CKSUM
 pool1 ONLINE 0 0 0
 c1t3d0 ONLINE 0 0 0
 c1t4d0 ONLINE 0 0 0

errors: No known data errors

pool: pool2
 state: ONLINE
 scrub: none requested
config:

NAME STATE READ WRITE CKSUM
 pool2 ONLINE 0 0 0
 mirror ONLINE 0 0 0
 c1t5d0 ONLINE 0 0 0
 c1t6d0 ONLINE 0 0 0

errors: No known data errors

bash-3.00# zpool list

NAME SIZE USED AVAIL CAP HEALTH ALTROOT
pool1 1.97G 80K 1.97G 0% ONLINE -
pool2 1008M 77.5K 1008M 0% ONLINE -

bash-3.00# zfs list
NAME USED AVAIL REFER MOUNTPOINT
pool1 77K 1.94G 24.5K /pool1
pool2 74.5K 976M 24.5K /pool2

bash-3.00# df -h /pool*
Filesystem size used avail capacity Mounted on
pool1 1.9G 24K 1.9G 1% /pool1
pool2 976M 24K 976M 1% /pool2

To Create a File System

bash-3.00# zfs create pool1/home
bash-3.00# zfs create pool1/home/user1
bash-3.00# zfs create pool1/home/user2
bash-3.00# zfs create pool1/home/user3
bash-3.00# zfs create pool1/home/user4
bash-3.00# zfs create pool2/home
bash-3.00# zfs create pool2/home/profile1
bash-3.00# zfs create pool2/home/profile2
bash-3.00# zfs create pool2/home/profile3
bash-3.00# zfs create pool2/home/profile4

bash-3.00# zpool status
 pool: pool1
 state: ONLINE
 scrub: none requested
config:

NAME STATE READ WRITE CKSUM
 pool1 ONLINE 0 0 0
 c1t3d0 ONLINE 0 0 0
 c1t4d0 ONLINE 0 0 0

errors: No known data errors

pool: pool2
 state: ONLINE
 scrub: none requested
config:

NAME STATE READ WRITE CKSUM
 pool2 ONLINE 0 0 0
 mirror ONLINE 0 0 0
 c1t5d0 ONLINE 0 0 0
 c1t6d0 ONLINE 0 0 0

errors: No known data errors

bash-3.00# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
pool1 1.97G 271K 1.97G 0% ONLINE -
pool2 1008M 258K 1008M 0% ONLINE -

bash-3.00# zfs list
NAME USED AVAIL REFER MOUNTPOINT
pool1 260K 1.94G 25.5K /pool1
pool1/home 153K 1.94G 30.5K /pool1/home
pool1/home/user1 24.5K 1.94G 24.5K /pool1/home/user1
pool1/home/user2 24.5K 1.94G 24.5K /pool1/home/user2
pool1/home/user3 24.5K 1.94G 24.5K /pool1/home/user3
pool1/home/user4 24.5K 1.94G 24.5K /pool1/home/user4
pool2 256K 976M 25.5K /pool2
pool2/home 153K 976M 30.5K /pool2/home
pool2/home/profile1 24.5K 976M 24.5K /pool2/home/profile1
pool2/home/profile2 24.5K 976M 24.5K /pool2/home/profile2
pool2/home/profile3 24.5K 976M 24.5K /pool2/home/profile3
pool2/home/profile4 24.5K 976M 24.5K /pool2/home/profile4

bash-3.00# df -h /pool*
Filesystem size used avail capacity Mounted on
pool1 1.9G 25K 1.9G 1% /pool1
pool2 976M 25K 976M 1% /pool2

To Check Whether a ZFS File System Has Been Mounted

bash-3.00# df -h | grep pool
pool1 1.9G 25K 1.9G 1% /pool1
pool2 976M 25K 976M 1% /pool2
pool1/home 1.9G 30K 1.9G 1% /pool1/home
pool1/home/user1 1.9G 24K 1.9G 1% /pool1/home/user1
pool1/home/user2 1.9G 24K 1.9G 1% /pool1/home/user2
pool1/home/user3 1.9G 24K 1.9G 1% /pool1/home/user3
pool1/home/user4 1.9G 24K 1.9G 1% /pool1/home/user4
pool2/home 976M 30K 976M 1% /pool2/home
pool2/home/profile1 976M 24K 976M 1% /pool2/home/profile1
pool2/home/profile2 976M 24K 976M 1% /pool2/home/profile2
pool2/home/profile3 976M 24K 976M 1% /pool2/home/profile3
pool2/home/profile4 976M 24K 976M 1% /pool2/home/profile4

Assigning a Quota to a File System

bash-3.00# zfs set quota=50m pool2/home/profile3

bash-3.00# zfs list
NAME USED AVAIL REFER MOUNTPOINT
pool1 260K 1.94G 25.5K /pool1
pool1/home 153K 1.94G 30.5K /pool1/home
pool1/home/user1 24.5K 1.94G 24.5K /pool1/home/user1
pool1/home/user2 24.5K 1.94G 24.5K /pool1/home/user2
pool1/home/user3 24.5K 1.94G 24.5K /pool1/home/user3
pool1/home/user4 24.5K 1.94G 24.5K /pool1/home/user4
pool2 256K 976M 25.5K /pool2
pool2/home 153K 976M 30.5K /pool2/home
pool2/home/profile1 24.5K 976M 24.5K /pool2/home/profile1
pool2/home/profile2 24.5K 976M 24.5K /pool2/home/profile2
pool2/home/profile3 24.5K 50.0M 24.5K /pool2/home/profile3
pool2/home/profile4 24.5K 976M 24.5K /pool2/home/profile4

A quota restricts how much space a particular file system can consume; profile3 is now capped at 50 MB.
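
The setting can be checked with zfs get, and lifted again by putting the property back to none. A sketch (the second command is not run here, since the steps below rely on the 50 MB cap):

bash-3.00# zfs get quota pool2/home/profile3
bash-3.00# zfs set quota=none pool2/home/profile3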

bash-3.00# cd /pool2/home/profile3
bash-3.00# ls -l
total 0

bash-3.00# mkfile 20m file1
bash-3.00# mkfile 20m file2
bash-3.00# ls -l
total 79902
-rw------T 1 root root 20971520 Jan 17 16:58 file1
-rw------T 1 root root 20971520 Jan 17 16:58 file2

bash-3.00# mkfile 20m file3
file3: initialized 10231808 of 20971520 bytes: Disc quota exceeded
bash-3.00# ls -l
total 102443
-rw------T 1 root root 20971520 Jan 17 16:58 file1
-rw------T 1 root root 20971520 Jan 17 16:58 file2
-rw------- 1 root root 20971520 Jan 17 16:59 file3

bash-3.00# rm file3

bash-3.00# ls -l
total 81950
-rw------T 1 root root 20971520 Jan 17 16:58 file1
-rw------T 1 root root 20971520 Jan 17 16:58 file2

bash-3.00# zfs list
NAME USED AVAIL REFER MOUNTPOINT
pool1 260K 1.94G 25.5K /pool1
pool1/home 153K 1.94G 30.5K /pool1/home
pool1/home/user1 24.5K 1.94G 24.5K /pool1/home/user1
pool1/home/user2 24.5K 1.94G 24.5K /pool1/home/user2
pool1/home/user3 24.5K 1.94G 24.5K /pool1/home/user3
pool1/home/user4 24.5K 1.94G 24.5K /pool1/home/user4
pool2 49.3M 927M 25.5K /pool2
pool2/home 49.2M 927M 30.5K /pool2/home
pool2/home/profile1 24.5K 927M 24.5K /pool2/home/profile1
pool2/home/profile2 24.5K 927M 24.5K /pool2/home/profile2
pool2/home/profile3 49.0M 982K 49.0M /pool2/home/profile3
pool2/home/profile4 24.5K 927M 24.5K /pool2/home/profile4

To Take a Snapshot

bash-3.00# zfs snapshot pool2/home/profile3@jan175.06pm
bash-3.00# zfs list
NAME USED AVAIL REFER MOUNTPOINT
pool1 260K 1.94G 25.5K /pool1
pool1/home 153K 1.94G 30.5K /pool1/home
pool1/home/user1 24.5K 1.94G 24.5K /pool1/home/user1
pool1/home/user2 24.5K 1.94G 24.5K /pool1/home/user2
pool1/home/user3 24.5K 1.94G 24.5K /pool1/home/user3
pool1/home/user4 24.5K 1.94G 24.5K /pool1/home/user4
pool2 49.3M 927M 25.5K /pool2
pool2/home 49.2M 927M 30.5K /pool2/home
pool2/home/profile1 24.5K 927M 24.5K /pool2/home/profile1
pool2/home/profile2 24.5K 927M 24.5K /pool2/home/profile2
pool2/home/profile3 49.0M 982K 49.0M /pool2/home/profile3
pool2/home/profile3@jan175.06pm 0 - 49.0M -
pool2/home/profile4 24.5K 927M 24.5K /pool2/home/profile4

Recovering a file system

bash-3.00# cd /pool2/home/profile3
bash-3.00# ls -l
total 100389 
-rw------T 1 root root 20971520 Jan 17 16:58 file1
-rw------T 1 root root 20971520 Jan 17 16:58 file2
-rw------T 1 root root 9437184 Jan 17 17:03 file4

bash-3.00# rm file2
bash-3.00# ls -l
total 59414
-rw------T 1 root root 20971520 Jan 17 16:58 file1
-rw------T 1 root root 9437184 Jan 17 17:03 file4

bash-3.00# zfs rollback pool2/home/profile3@jan175.06pm

bash-3.00# cd /pool2/home/profile3
bash-3.00# ls -l
total 100389
-rw------T 1 root root 20971520 Jan 17 16:58 file1
-rw------T 1 root root 20971520 Jan 17 16:58 file2   <- file2 has been recovered
-rw------T 1 root root 9437184 Jan 17 17:03 file4
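
A related tip: the existing snapshots can be listed at any time, which is worth doing before a rollback. If snapshots more recent than the target exist, zfs rollback refuses to run unless -r is given to destroy them as well:

bash-3.00# zfs list -t snapshot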

ZFS Snapshot

bash-3.00# zfs snapshot pool2/home/profile4@17jan5.12pm
bash-3.00# zfs list
NAME USED AVAIL REFER MOUNTPOINT
pool1 260K 1.94G 25.5K /pool1
pool1/home 153K 1.94G 30.5K /pool1/home
pool1/home/user1 24.5K 1.94G 24.5K /pool1/home/user1
pool1/home/user2 24.5K 1.94G 24.5K /pool1/home/user2
pool1/home/user3 24.5K 1.94G 24.5K /pool1/home/user3
pool1/home/user4 24.5K 1.94G 24.5K /pool1/home/user4
pool2 59.3M 917M 25.5K /pool2
pool2/home 59.2M 917M 30.5K /pool2/home
pool2/home/profile1 24.5K 917M 24.5K /pool2/home/profile1
pool2/home/profile2 24.5K 917M 24.5K /pool2/home/profile2
pool2/home/profile3 49.0M 982K 49.0M /pool2/home/profile3
pool2/home/profile3@jan175.06pm 0 - 49.0M -
pool2/home/profile4 10.0M 917M 10.0M /pool2/home/profile4
pool2/home/profile4@17jan5.12pm 0 - 10.0M -

ZFS Sending and Receiving 

bash-3.00# zfs send pool2/home/profile4@17jan5.12pm > /path1/path2/file100

The zfs send command is used to take backups, powered by snapshots:
 Full backup: any snapshot
 Incremental backup: any snapshot delta (see the sketch after this list)
 Very fast: the cost is proportional to the amount of data changed
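
An incremental stream between two snapshots is produced with -i. A sketch, assuming a later snapshot @18jan of the same file system existed (it is not created in this session):

bash-3.00# zfs send -i pool2/home/profile4@17jan5.12pm pool2/home/profile4@18jan > /path1/path2/file101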

bash-3.00# cd /path1/path2
bash-3.00# ls -l
total 20592
-rw-r--r-- 1 root root 10526728 Jan 22 16:46 file100
bash-3.00# cd

bash-3.00# zfs receive pool2/home/profile417jan < /path1/path2/file100

The zfs receive command is used to restore the file system.
Once the stream has been received successfully, the file system is
mounted automatically.
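
The intermediate file can be skipped entirely by piping zfs send straight into zfs receive, locally or across the network. A sketch; the target dataset backup4, the host remotehost, and the remote pool tank are placeholders:

bash-3.00# zfs send pool2/home/profile4@17jan5.12pm | zfs receive pool2/home/backup4
bash-3.00# zfs send pool2/home/profile4@17jan5.12pm | ssh remotehost zfs receive tank/backup4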

bash-3.00# zfs list
NAME USED AVAIL REFER MOUNTPOINT
pool1 260K 1.94G 25.5K /pool1
pool1/home 153K 1.94G 30.5K /pool1/home
pool1/home/user1 24.5K 1.94G 24.5K /pool1/home/user1
pool1/home/user2 24.5K 1.94G 24.5K /pool1/home/user2
pool1/home/user3 24.5K 1.94G 24.5K /pool1/home/user3
pool1/home/user4 24.5K 1.94G 24.5K /pool1/home/user4
pool2 69.3M 907M 25.5K /pool2
pool2/home 69.2M 907M 30.5K /pool2/home
pool2/home/profile1 24.5K 907M 24.5K /pool2/home/profile1
pool2/home/profile2 24.5K 907M 24.5K /pool2/home/profile2
pool2/home/profile3 49.0M 982K 49.0M /pool2/home/profile3
pool2/home/profile3@jan175.06pm 0 - 49.0M -
pool2/home/profile4 10.0M 907M 10.0M /pool2/home/profile4
pool2/home/profile4@17jan5.12pm 0 - 10.0M -
pool2/home/profile417jan 10.0M 907M 10.0M /pool2/home/profile417jan
pool2/home/profile417jan@17jan5.12pm 0 - 10.0M -

bash-3.00# cd /pool2/home/profile4
bash-3.00# ls -l
total 20495
-rw------T 1 root root 2097152 Jan 22 16:45 test1
-rw------T 1 root root 3145728 Jan 22 16:45 test2
-rw------T 1 root root 5242880 Jan 22 16:45 test3

bash-3.00# cd /pool2/home/profile417jan
bash-3.00# ls -l
total 20495
-rw------T 1 root root 2097152 Jan 22 16:45 test1
-rw------T 1 root root 3145728 Jan 22 16:45 test2
-rw------T 1 root root 5242880 Jan 22 16:45 test3

Setting a Mount Point

To change the default mount point that ZFS creates, set the mountpoint property:

bash-3.00# zfs set mountpoint=/export/profile2 pool2/home/profile2
bash-3.00# zfs list
NAME USED AVAIL REFER MOUNTPOINT
pool1 260K 1.94G 25.5K /pool1
pool1/home 153K 1.94G 30.5K /pool1/home
pool1/home/user1 24.5K 1.94G 24.5K /pool1/home/user1
pool1/home/user2 24.5K 1.94G 24.5K /pool1/home/user2
pool1/home/user3 24.5K 1.94G 24.5K /pool1/home/user3
pool1/home/user4 24.5K 1.94G 24.5K /pool1/home/user4
pool2 69.3M 907M 25.5K /pool2
pool2/home 69.2M 907M 31.5K /pool2/home
pool2/home/profile1 24.5K 907M 24.5K /pool2/home/profile1
pool2/home/profile2 24.5K 907M 24.5K /export/profile2
pool2/home/profile3 49.0M 982K 49.0M /pool2/home/profile3
pool2/home/profile3@jan175.06pm 0 - 49.0M -
pool2/home/profile4 10.0M 907M 10.0M /pool2/home/profile4
pool2/home/profile4@17jan5.12pm 0 - 10.0M -
pool2/home/profile417jan 10.0M 907M 10.0M /pool2/home/profile417jan
pool2/home/profile417jan@17jan5.12pm 0 - 10.0M -

Destroying a ZFS File System

bash-3.00# zfs destroy pool1/home/user4
bash-3.00# zfs destroy pool1/home/user3
bash-3.00# zfs destroy pool1/home/user2

bash-3.00# zfs list
NAME USED AVAIL REFER MOUNTPOINT
pool1 142K 1.94G 25.5K /pool1
pool1/home 52K 1.94G 27.5K /pool1/home
pool1/home/user1 24.5K 1.94G 24.5K /pool1/home/user1
pool2 69.3M 907M 25.5K /pool2
pool2/home 69.2M 907M 30.5K /pool2/home
pool2/home/profile1 24.5K 907M 24.5K /pool2/home/profile1
pool2/home/profile2 24.5K 907M 24.5K /export/profile2
pool2/home/profile3 49.0M 982K 49.0M /pool2/home/profile3
pool2/home/profile3@jan175.06pm 0 - 49.0M -
pool2/home/profile4 10.0M 907M 10.0M /pool2/home/profile4
pool2/home/profile4@17jan5.12pm 0 - 10.0M -
pool2/home/profile417jan 10.0M 907M 10.0M /pool2/home/profile417jan
pool2/home/profile417jan@17jan5.12pm 0 - 10.0M -
pool2/home/profile5 24.5K 907M 24.5K /export/profile5

 

How to Create a Volume

bash-3.00# zfs create -V 10m pool1/home/user2
bash-3.00# zfs list
NAME USED AVAIL REFER MOUNTPOINT
pool1 10.1M 1.93G 25.5K /pool1
pool1/home 10.0M 1.93G 25.5K /pool1/home
pool1/home/user1 24.5K 1.93G 24.5K /pool1/home/user1
pool1/home/user2 22.5K 1.94G 22.5K -
pool2 69.3M 907M 25.5K /pool2
pool2/home 69.2M 907M 30.5K /pool2/home
pool2/home/profile1 24.5K 907M 24.5K /pool2/home/profile1
pool2/home/profile2 24.5K 907M 24.5K /pool2/home/profile2
pool2/home/profile3 49.0M 982K 49.0M /pool2/home/profile3
pool2/home/profile3@jan175.06pm 0 - 49.0M -
pool2/home/profile4 10.0M 907M 10.0M /pool2/home/profile4
pool2/home/profile4@17jan5.12pm 0 - 10.0M -
pool2/home/profile417jan 10.0M 907M 10.0M /pool2/home/profile417jan
pool2/home/profile417jan@17jan5.12pm 0 - 10.0M -

bash-3.00# cd /dev/zvol/dsk/
bash-3.00# ls
pool pool1
bash-3.00# cd pool1
bash-3.00# ls -l
total 2
drwxr-xr-x 2 root root 512 Jan 17 17:33 home
bash-3.00# cd home
bash-3.00# ls -l
total 2
lrwxrwxrwx 1 root root 38 Jan 17 17:33 user2 -> ../../../../../devices/pseudo/zfs@0:1c
bash-3.00# pwd
/dev/zvol/dsk/pool1/home

How to Increase ZFS Swap Space

bash-3.00# swap -l
swapfile dev swaplo blocks free
/dev/dsk/c1t0d0s1 33,1 8 1060280 1060280

bash-3.00# swap -a /dev/zvol/dsk/pool1/home/user2

bash-3.00# swap -l
swapfile dev swaplo blocks free
/dev/dsk/c1t0d0s1 33,1 8 1060280 1060280
/dev/zvol/dsk/pool1/home/user2 181,1 8 20472 20472

bash-3.00# swap -s
total: 228252k bytes allocated + 55628k reserved = 283880k used, 569456k available

bash-3.00# swap -d /dev/zvol/dsk/pool1/home/user2     (to delete the swap device)
bash-3.00# swap -l
swapfile dev swaplo blocks free
/dev/dsk/c1t0d0s1 33,1 8 1060280 1060280
bash-3.00# swap -s
total: 228112k bytes allocated + 55768k reserved = 283880k used, 559220k available
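
swap -a takes effect immediately but does not survive a reboot. To make the zvol swap device permanent, the usual approach is an /etc/vfstab entry of the standard seven-field form (a sketch; see vfstab(4)):

/dev/zvol/dsk/pool1/home/user2   -   -   swap   -   no   -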

To view the properties of the file system

bash-3.00# zfs get all pool2/home/profile2
NAME PROPERTY VALUE SOURCE
pool2/home/profile2 type filesystem - 
pool2/home/profile2 creation Thu Jan 17 16:13 2008 - 
pool2/home/profile2 used 24.5K - 
pool2/home/profile2 available 907M - 
pool2/home/profile2 referenced 24.5K - 
pool2/home/profile2 compressratio 1.00x - 
pool2/home/profile2 mounted yes - 
pool2/home/profile2 quota none default 
pool2/home/profile2 reservation none default 
pool2/home/profile2 recordsize 128K default 
pool2/home/profile2 mountpoint /pool2/home/profile2 default 
pool2/home/profile2 sharenfs off default 
pool2/home/profile2 checksum on default 
pool2/home/profile2 compression off default 
pool2/home/profile2 atime on default 
pool2/home/profile2 devices on default 
pool2/home/profile2 exec on default 
pool2/home/profile2 setuid on default 
pool2/home/profile2 readonly off default 
pool2/home/profile2 zoned off default 
pool2/home/profile2 snapdir hidden default 
pool2/home/profile2 aclmode groupmask default 
pool2/home/profile2 aclinherit secure default
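
zfs get also accepts a comma-separated list of properties, and -r to walk a whole subtree, which is handier than all when checking one setting across many file systems. For example:

bash-3.00# zfs get quota,reservation pool2/home/profile2
bash-3.00# zfs get -r quota pool2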

To view pool I/O statistics (here sampled every 3 seconds, 3 times):

bash-3.00# zpool iostat 3 3
 capacity operations bandwidth
pool used avail read write read write
---------- ----- ----- ----- ----- ----- -----
pool1 170K 1.97G 0 0 209 1.10K
pool2 69.4M 939M 0 7 18 42.2K
---------- ----- ----- ----- ----- ----- -----
pool1 170K 1.97G 0 0 0 0
pool2 69.4M 939M 0 0 0 0
---------- ----- ----- ----- ----- ----- -----
pool1 170K 1.97G 0 0 0 0
pool2 69.4M 939M 0 0 0 0
---------- ----- ----- ----- ----- ----- -----

To view Filesystem statistics

bash-3.00# fsstat -F
 new name name attr attr lookup rddir read read write write
 file remov chng get set ops ops ops bytes ops bytes
 886 110 110 1.59M 292 7.40M 43.5K 3.75M 163M 1.87M 255M ufs
 0 0 0 3.46K 0 5.48K 1.21K 1.63K 543K 0 0 proc
 0 0 0 85 1 98 2 8 19.5K 0 0 nfs
 332 29 12 6.80K 46 48.7K 990 20.4K 27.2M 64.9K 515M zfs
 0 0 0 6 0 0 0 0 0 0 0 hsfs
 25 0 0 12.5K 10 12.7K 6 3.11K 1.15M 106 4.54K lofs
5.38K 3.96K 1.11K 29.2K 123 10.1K 36 58.5K 58.5M 60.9K 52.5M tmpfs
 0 0 0 32.6K 0 0 0 118 14.0K 0 0 mntfs
 0 0 0 0 0 0 0 0 0 0 0 nfs3
 0 0 0 0 0 0 0 0 0 0 0 nfs4
 0 0 0 18 0 0 4 0 0 0 0 autofs

bash-3.00# zfs list
NAME USED AVAIL REFER MOUNTPOINT
pool1 10.1M 1.93G 25.5K /pool1
pool1/home 10.0M 1.93G 25.5K /pool1/home
pool1/home/user1 24.5K 1.93G 24.5K /pool1/home/user1
pool1/home/user2 22.5K 1.94G 22.5K -
pool2 69.3M 907M 25.5K /pool2
pool2/home 69.2M 907M 30.5K /pool2/home
pool2/home/profile1 24.5K 907M 24.5K /pool2/home/profile1
pool2/home/profile2 24.5K 907M 24.5K /export/profile2
pool2/home/profile3 49.0M 982K 49.0M /pool2/home/profile3
pool2/home/profile3@jan175.06pm 0 - 49.0M -
pool2/home/profile4 10.0M 907M 10.0M /pool2/home/profile4
pool2/home/profile4@17jan5.12pm 0 - 10.0M -
pool2/home/profile417jan 10.0M 907M 10.0M /pool2/home/profile417jan
pool2/home/profile417jan@17jan5.12pm 0 - 10.0M -

How to Destroy a Storage Pool

bash-3.00# zpool destroy pool1
bash-3.00# zfs list
NAME USED AVAIL REFER MOUNTPOINT
pool2 69.3M 907M 25.5K /pool2
pool2/home 69.2M 907M 30.5K /pool2/home
pool2/home/profile1 24.5K 907M 24.5K /pool2/home/profile1
pool2/home/profile2 24.5K 907M 24.5K /export/profile2
pool2/home/profile3 49.0M 982K 49.0M /pool2/home/profile3
pool2/home/profile3@jan175.06pm 0 - 49.0M -
pool2/home/profile4 10.0M 907M 10.0M /pool2/home/profile4
pool2/home/profile4@17jan5.12pm 0 - 10.0M -
pool2/home/profile417jan 10.0M 907M 10.0M /pool2/home/profile417jan
pool2/home/profile417jan@17jan5.12pm 0 - 10.0M -

bash-3.00# zpool status
 pool: pool2
 state: ONLINE
 scrub: none requested
config:

NAME STATE READ WRITE CKSUM
 pool2 ONLINE 0 0 0
 mirror ONLINE 0 0 0
 c1t5d0 ONLINE 0 0 0
 c1t6d0 ONLINE 0 0 0

errors: No known data errors
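
zpool destroy asks for no confirmation, so use it with care. As long as the member disks have not been reused, a destroyed pool can often still be found and brought back with import -D. A sketch (in this session pool1's disks are reused below, so this would no longer work for it):

bash-3.00# zpool import -D          (list destroyed pools)
bash-3.00# zpool import -D pool1    (recover pool1 while its disks are intact)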

How to Add a Hot Spare

bash-3.00# zpool add pool2 spare c1t4d0
bash-3.00# zpool status
 pool: pool2
 state: ONLINE
 scrub: none requested
config:

NAME STATE READ WRITE CKSUM
 pool2 ONLINE 0 0 0
 mirror ONLINE 0 0 0
 c1t5d0 ONLINE 0 0 0
 c1t6d0 ONLINE 0 0 0
 spares
 c1t4d0 AVAIL

errors: No known data errors
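
The spare sits idle until a mirror member fails or is swapped out by hand. A sketch of both directions, assuming c1t5d0 had failed (a spare must be inactive before it can be removed):

bash-3.00# zpool replace pool2 c1t5d0 c1t4d0    (press the spare into service)
bash-3.00# zpool remove pool2 c1t4d0            (remove an unused spare from the pool)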

How to Attach / Detach a Device in a Pool

bash-3.00# zpool attach pool2 c1t6d0 c1t3d0
bash-3.00# zpool status
 pool: pool2
 state: ONLINE
status: One or more devices is currently being resilvered. The pool will
 continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress, 92.31% done, 0h0m to go
config:

NAME STATE READ WRITE CKSUM
 pool2 ONLINE 0 0 0
 mirror ONLINE 0 0 0
 c1t5d0 ONLINE 0 0 0
 c1t6d0 ONLINE 0 0 0
 c1t3d0 ONLINE 0 0 0
 spares
 c1t4d0 AVAIL

errors: No known data errors

bash-3.00# zpool detach pool2 c1t3d0
bash-3.00# zpool status
 pool: pool2
 state: ONLINE
 scrub: resilver completed with 0 errors on Thu Jan 17 17:48:52 2008
config:

NAME STATE READ WRITE CKSUM
 pool2 ONLINE 0 0 0
 mirror ONLINE 0 0 0
 c1t5d0 ONLINE 0 0 0
 c1t6d0 ONLINE 0 0 0
 spares
 c1t4d0 AVAIL

errors: No known data errors

How to Offline / Online a Device

bash-3.00# zpool offline pool2 c1t5d0
Bringing device c1t5d0 offline

bash-3.00# zpool status
 pool: pool2
 state: DEGRADED
status: One or more devices has been taken offline by the administrator. Sufficient replicas exist for the pool to continue functioning in a degraded state.

action: Online the device using 'zpool online' or replace the device with 'zpool replace'.
scrub: resilver completed with 0 errors on Thu Jan 17 17:48:52 2008
config:

NAME STATE READ WRITE CKSUM
 pool2 DEGRADED 0 0 0
 mirror DEGRADED 0 0 0
 c1t5d0 OFFLINE 0 0 0
 c1t6d0 ONLINE 0 0 0
 spares
 c1t4d0 AVAIL

errors: No known data errors

bash-3.00# zpool online pool2 c1t5d0
Bringing device c1t5d0 online

bash-3.00# zpool status
 pool: pool2
 state: ONLINE
 scrub: resilver completed with 0 errors on Thu Jan 17 17:50:11 2008
config:

NAME STATE READ WRITE CKSUM
 pool2 ONLINE 0 0 0
 mirror ONLINE 0 0 0
 c1t5d0 ONLINE 0 0 0
 c1t6d0 ONLINE 0 0 0
 spares
 c1t4d0 AVAIL

errors: No known data errors
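
By default an offlined device stays offline across reboots. For a short maintenance window, -t marks the offline state as temporary so that it lasts only until the next reboot:

bash-3.00# zpool offline -t pool2 c1t5d0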

How to Check the Consistency of ZFS (Disk Scrubbing)

Here we deliberately overwrite the start of one half of the mirror with zeros to simulate silent corruption. Note that zpool status reports no errors immediately afterwards: ZFS only detects the damage, through its block checksums, when the affected blocks are actually read (as during the import and scrub that follow).

bash-3.00# dd if=/dev/zero of=/dev/rdsk/c1t5d0 count=10000
10000+0 records in
10000+0 records out

bash-3.00# zpool status
 pool: pool2
 state: ONLINE
 scrub: none requested
config:

NAME STATE READ WRITE CKSUM
 pool2 ONLINE 0 0 0
 mirror ONLINE 0 0 0
 c1t5d0 ONLINE 0 0 0
 c1t6d0 ONLINE 0 0 0
 spares
 c1t4d0 AVAIL

errors: No known data errors

How to Export and Import a Pool

bash-3.00# zpool export pool2
bash-3.00# zpool import
 pool: pool2
 id: 3828820948604928523
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

pool2 ONLINE
 mirror ONLINE
 c1t5d0 ONLINE
 c1t6d0 ONLINE
 spares
 c1t4d0

bash-3.00# zpool import pool2
bash-3.00# zpool status
 pool: pool2
 state: ONLINE
status: One or more devices has experienced an unrecoverable error. An
 attempt was made to correct the error. Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
 using 'zpool clear' or replace the device with 'zpool replace'.
 see: http://www.sun.com/msg/ZFS-8000-9P
 scrub: none requested
config:
NAME STATE READ WRITE CKSUM
 pool2 ONLINE 0 0 0
 mirror ONLINE 0 0 0
 c1t5d0 ONLINE 0 0 59   <- checksum errors caused by the earlier corruption
 c1t6d0 ONLINE 0 0 0
 spares
 c1t4d0 AVAIL

errors: No known data errors

Scrub the pool to verify every block and let ZFS repair the damaged half of the mirror from the good copy:

bash-3.00# zpool scrub pool2

bash-3.00# zpool status
 pool: pool2
 state: ONLINE
status: One or more devices has experienced an unrecoverable error. An
 attempt was made to correct the error. Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
 using 'zpool clear' or replace the device with 'zpool replace'.
 see: http://www.sun.com/msg/ZFS-8000-9P
 scrub: scrub in progress, 85.36% done, 0h0m to go
config:

NAME STATE READ WRITE CKSUM
 pool2 ONLINE 0 0 0
 mirror ONLINE 0 0 0
 c1t5d0 ONLINE 0 0 63
 c1t6d0 ONLINE 0 0 0
 spares
 c1t4d0 AVAIL
errors: No known data errors
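
Once zpool status shows the scrub completed with 0 errors, the checksum-error counters can be reset so that any future problems stand out clearly:

bash-3.00# zpool clear pool2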