Zpool DEGRADED state

HOW TO MANAGE SYSTEMS WITH ZFS IN SOLARIS(TM) CONTAINERS

We came across a file system issue on a local (non-global) zone: multiple file systems showed the same utilization percentage, and we could not find any files under them.

ssdb0184 $ df -h | grep 37%
/apps/stats 7.9G 2.8G 5.0G 37% /apps/stats
/home 7.9G 2.8G 5.0G 37% /home
/home/app 7.9G 2.8G 5.0G 37% /home/app
/opt/app 7.9G 2.8G 5.0G 37% /opt/app
/usr/local 7.9G 2.8G 5.0G 37% /usr/local
/usr/openv 7.9G 2.8G 5.0G 37% /usr/openv
/usr/openv/netbackup/logs 7.9G 2.8G 5.0G 37% /usr/openv/netbackup/logs
ssdb0184 $
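
Since these file systems are ZFS datasets served out of pools managed in the global zone, the pool health has to be checked from the global zone. A quick check (assuming root access on the global zone, vus725pa in this example) is:

# zpool status -x

zpool status -x prints "all pools are healthy" when nothing is wrong, and otherwise reports only the pools that need attention.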

We found that the ZFS pools were in DEGRADED state after the global zone was rebooted.

vus725pa#zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
z3bc587c2-a30d-4806-ad06-3ab9dae30614 50.5G 1.15G 49.3G 2% DEGRADED -
z6f58ca7f-7149-4a19-8ee3-69f288e26391 31.8G 12.6G 19.1G 39% DEGRADED -
z72b624a4-fa69-4f47-ace3-755836b2c9b8 50.5G 1.16G 49.3G 2% DEGRADED -
z8b104848-b899-42d8-b1a6-cd4afb539517 47.8G 18.0G 29.8G 37% DEGRADED -
zee6d0e50-9209-41a0-9a4a-5de3ed083bdb 31.8G 14.2G 17.6G 44% DEGRADED -
vus725pa#
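
Before clearing anything, it is worth examining one pool in detail. zpool status shows which device faulted, the error counts, and a suggested action (pool name taken from the listing above):

# zpool status -v z8b104848-b899-42d8-b1a6-cd4afb539517

If the errors were transient (for example, devices that were briefly unreachable while the global zone rebooted), zpool clear in Step 1 below is enough. If a device has genuinely failed, it must be replaced before the pool will return to ONLINE.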


FIX

Step 1 >> Clear the fault and bring the ZFS pool status back to ONLINE (a loop for clearing several pools at once is sketched after the verification output below)

# zpool clear <poolname>

vus725pa#zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
z3bc587c2-a30d-4806-ad06-3ab9dae30614 50.5G 1.15G 49.3G 2% ONLINE -
z6f58ca7f-7149-4a19-8ee3-69f288e26391 31.8G 12.6G 19.1G 39% ONLINE -
z72b624a4-fa69-4f47-ace3-755836b2c9b8 50.5G 1.16G 49.3G 2% ONLINE -
z8b104848-b899-42d8-b1a6-cd4afb539517 47.8G 18.0G 29.8G 37% ONLINE -
zee6d0e50-9209-41a0-9a4a-5de3ed083bdb 31.8G 14.2G 17.6G 44% ONLINE -
vus725pa#
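
With several affected pools, clearing them one at a time is tedious. A short loop (a sketch, assuming every pool on the host should be cleared) handles all of them:

# for pool in `zpool list -H -o name`; do zpool clear $pool; done

zpool list -H suppresses the header line and -o name prints only the pool names, which makes the output easy to feed into the loop.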

Step 2 >> Halt the zone ( # zoneadm -z <zonename> halt )
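
If you are not sure of the zone name, list all configured zones from the global zone first; zoneadm list -cv shows each zone's ID, state, and zone path:

# zoneadm list -cv
# zoneadm -z <zonename> halt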

Step 3 >> Unmount the ZFS file systems ( # zfs unmount <filesystemname> )
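
The datasets that belong to a pool can be listed first and then unmounted one by one (zfs unmount -f forces the unmount if a file system is reported as busy):

# zfs list -r <poolname>
# zfs unmount <filesystemname>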

Step 4 >> Remount them ( # zfs mount -a )
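
After remounting, zfs mount with no arguments lists every ZFS file system that is currently mounted, which is a quick way to verify that all the mounts came back:

# zfs mount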

Step 5 >> Boot the zone ( # zoneadm -z <zonename> boot ).
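
Finally, confirm that the zone is running and that the file systems inside it now report their real utilization (zlogin runs a command inside a zone from the global zone):

# zoneadm list -v
# zlogin <zonename> df -h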