Solaris Cheat sheets

SVM Cheat sheets
Configuration files

/etc/lvm/md.tab
1. The file is empty by default and is configured manually. It is only read when the administrator issues the metainit command.
2. It can be populated by appending the output of metastat -p, for example # metastat -p >> /etc/lvm/md.tab (sample entries are shown below).
3. It can be used to recreate all the metadevices in one go, which makes it most useful when recovering an SVM configuration.
For example

# metainit -a       (to create all metadevices mentioned in md.tab file)

# metainit dxx      (create metadevice dxx only)

  4. Do NOT use it on the root file system though.
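
For illustration, md.tab entries produced by metastat -p look like the lines below (a sketch based on the d10/d11/d12 mirror created later in this cheat sheet; your metadevice and disk names will differ) :

d10 -m d11 d12 1
d11 1 1 c0t3d0s7
d12 1 1 c0t4d0s7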

/etc/lvm/mddb.cf
SVM uses the configuration file /etc/lvm/mddb.cf to record the locations of the state database replicas. Do not edit this file manually.

/etc/lvm/md.cf
The configuration file /etc/lvm/md.cf contains the automatically generated configuration information for the default (unspecified or local) disk set.
This file can also be used to recover the SVM configuration if your system loses the information maintained in the state database; a recovery sketch is shown below.
Again, do not edit this file manually.
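
A minimal recovery sketch, assuming md.cf still holds a good copy of the configuration (review the copied entries before running metainit, and do not attempt this blindly on the root file system) :

# cp /etc/lvm/md.cf /etc/lvm/md.tab     (reuse the saved configuration as md.tab)
  (edit /etc/lvm/md.tab and keep only the metadevices that need to be recreated)
# metainit -a                           (recreate the metadevices listed in md.tab)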

/kernel/drv/md.conf
The configuration file md.conf contains tunable fields such as nmd (the number of volumes (metadevices) that the configuration supports). The file can be edited to change the default values of such parameters.
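
As an illustration, the md driver entry in /kernel/drv/md.conf typically looks like the line below (nmd=128 and md_nsets=4 are assumed default values); a reconfiguration reboot is needed for changes to take effect :

name="md" parent="pseudo" nmd=128 md_nsets=4;

# reboot -- -r      (reconfiguration reboot to apply the new values)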

/etc/rcS.d/S35svm.init
The RC script configures and starts SVM at boot and can be used to start/stop the daemons.

/etc/rc2.d/S95svm.sync
The RC script checks the SVM configuration at boot, resumes mirror resyncs if necessary and starts the active monitoring daemon (mdmonitord).
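
If needed, the same checks can be run manually; a small sketch using standard SVM commands (nothing here is specific to the RC script itself) :

# metasync -r            (resume any interrupted mirror resyncs)
# pgrep -l mdmonitord    (confirm the monitoring daemon is running)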

Metadb related commands

Metadb command syntax :

# metadb -help
usage: metadb [-s setname] -a [options] mddbnnn
       metadb [-s setname] -a [options] device …
       metadb [-s setname] -d [options] mddbnnn
       metadb [-s setname] -d [options] device …
       metadb [-s setname] -i
       metadb -p [options] [ mddb.cf-file ]
options:
-c count        number of replicas (for use with -a only)
-f              force adding or deleting of replicas
-k filename     alternate /etc/system file
-l length       specify size of replica (for use with -a only)

To create 3 replicas on c0t0d0s7 (-f is required only when creating the very first state database replicas on the system):

# metadb -a -f -c 3 c0t0d0s7

To create 2 more replicas on a second disk slice :

# metadb -a -c 2 c0t1d0s7

To delete the replicas :

# metadb -d c0t1d0s7

To delete the last replica (all SVM configuration will be lost) :

# metadb -d -f c0t0d0s7

To check the status of meta database replicas :

# metadb -i
        flags           first blk       block count
     a m  pc luo        16              8192            /dev/dsk/c0t0d0s7
     a    pc luo        16              8192            /dev/dsk/c0t1d0s7
     W    pc l          unknown         8192            /dev/dsk/c0t2d0s7

Note the W flag and the unknown first block for the third replica; either indicates a failed replica.
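
One hedged way to recover, assuming the underlying slice has been repaired or the disk replaced (c0t2d0s7 simply matches the example output above), is to delete the failed replica and recreate it :

# metadb -d c0t2d0s7          (delete the failed replica)
# metadb -a -c 1 c0t2d0s7     (recreate one replica on the repaired slice)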

Creating different layouts

RAID 0 (stripe and concatenation)
1. Creating a concatenation from slice S2 of 3 disks :

# metainit d1 3 1 c0t1d0s2 1 c1t1d0s2 1 c2t1d0s2

d1 – the metadevice
3  – the number of components to concatenate together
1  – the number of devices for each component

  2. Creating a stripe from slice S2 of 3 disks :

# metainit d2 1 3 c0t1d0s2 c1t1d0s2 c2t1d0s2 -i 16k

d2     – the metadevice
1      – the number of components to concatenate
3      – the number of devices in each stripe
-i 16k – the stripe segment size

  3. Creating three 2-disk stripes and concatenating them together :

# metainit d3 3 2 c0t1d0s2 c1t1d0s2 -i 16k 2 c3t1d0s2 c4t1d0s2 -i 16k 2 c6t1d0s2 c7t1d0s2 -i 16k

d3     – the metadevice
3      – the number of stripes
2      – the number of disks (slices) in each stripe
-i 16k – the stripe segment size

RAID 1 or Mirroring
In SVM, mirroring is a 2-step procedure – first create the 2 submirrors (d11 and d12), then associate them with the mirror (d10).

# metainit -f d11 1 1 c0t3d0s7
# metainit -f d12 1 1 c0t4d0s7
# metainit d10 -m d11
# metattach d10 d12

Here d10 is the device to mount, and d11 and d12 hold the 2 copies of the data.

When mirroring the root partition you need to follow a few more steps. Refer to the post SVM root encapsulation and mirroring [SPARC] for more information.

RAID 5
To set up a RAID 5 volume using 3 disks :

# metainit d1 -r c0t1d0s2 c1t1d0s2 c2t1d0s2 -i 16k

To concatenate a disk slice to the end of the RAID 5 volume :

# metattach d1 c4t3d0s2

To associate the hot spare pool hsp01 with the RAID 5 volume d1 :

# metaparam -h hsp01 d1

Extending a metadevice

To grow a metadevice we need to attach a slice to the end and then grow the underlying filesystem:

# metattach d1 c3t1d0s2

If the metadevice is not mounted :

# growfs /dev/md/rdsk/d1

If the metadevice is mounted :

# growfs -M /export/home /dev/md/rdsk/d1

Removing the metadevices

A metadevice can be removed only if it is not open (i.e. not mounted):

# metaclear d3

To delete all the metadevices (use with care as it blows away the entire SVM configuration):

# metaclear -a -f

View the configuration and status

To view the entire SVM configuration in a concise, md.tab-style format :

# metastat -p

To check the configuration and status of a particular device :

# metastat d3

Hot spare pools

To create a hot spare pool with no disks :

# metainit hsp01

To add a slice/disk to the hot spare pool :

# metahs -a hsp01 c0t1d0s4

Remember to add hot spare disks/slices to the pool in order from smallest to largest, as in the example below. That way, when a hot spare is needed, the smallest slice that is still large enough is taken from the pool to replace the failed component.
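
For example, with two spare slices of different sizes (the device names and their sizes below are assumed purely for illustration), add the smaller one first :

# metahs -a hsp01 c1t2d0s4     (assumed smaller slice, added first)
# metahs -a hsp01 c1t3d0s4     (assumed larger slice, added second)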

To add a slice to all hot spare pools :

# metahs -a all c1t1d0s4

To make hot spare pool hsp01 available to metadevice d1 (submirror or RAID 5) :

# metaparam -h hsp01 d1

Replacing a disk slice in the hot spare pool hsp01 (c1t1d0s4 is replaced by c2t1d0s4) :

# metahs -r hsp01 c1t1d0s4 c2t1d0s4

Remove a disk slice (c1t1d0s4) from all hot spare pools :

# metahs -d all c1t1d0s4

Remove a disk slice from the hot spare pool hsp01 :

# metahs -d hsp01 c1t1d0s4

Re-enable a hot spare that was previously unavailable :

# metahs -e c1t1d0s4

To remove a hot spare pool :

# metahs -d hsp01

To check the status of hot spare pools :

# metahs -i
# metastat

Disksets

The syntax of the commands used on disksets is similar to that of the plain metadevice commands :

# command -s [setname] options

The location of metadevices in the shared disksets is :

/dev/md/[setname]/{dsk|rdsk}/dn

The hot spare pools inside a shared diskset are named as :

[setname]/hspnnn
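
For instance, with a hypothetical diskset named ds1, metadevice d10 and hot spare pool hsp001 would be referred to as :

/dev/md/ds1/dsk/d10
/dev/md/ds1/rdsk/d10
ds1/hsp001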

To add hosts to a set :

# metaset -s [setname] -a -h [hostname1] [hostname2]

To add disks to disk set (Do not specify slices) :

# metaset -s [setname] -a c2t0d0 c2t1d0 c2t2d0 c2t3d0

Similarly, to remove disks and hosts from a disk set :

# metaset -s [setname] -d c2t3d0
# metaset -s [setname] -d -h [hostname]

To take ownership of a disk set :

# metaset -s [setname] -t

To release ownership of a disk set :

# metaset -s [setname] -r

To check the status of metasets :

# metastat -s [diskset]

Troubleshooting commands

Below are some of the troubleshooting commands. Use the -s [setname] option when running them against a diskset.

# metastat
# metastat -t     (with option -t it will print the date/time when the metadevice changed state/status)
# metastat -p
# metadb -i
# prtvtoc         (on relevant devices)
# mount
# iostat -iE
# format

Also check for any changes or errors in the files :

/var/adm/messages
/etc/lvm/md.cf
/etc/lvm/mddb.cf