Two-node VCS cluster on Solaris

Everyone wants to become an expert in at least one cluster technology, especially Veritas Cluster Server (VCS), given its large market share. But not everybody gets the opportunity to work on it, because VCS typically runs in critical production environments. So how do you learn? Will just reading the books and attending a five-day training course give you enough confidence? No. I am sure that without hands-on experience you can't become an expert in VCS.

I tried setting up a VCS cluster on a laptop/desktop using VMware, which hosted two Solaris nodes and Openfiler as shared storage. The setup worked like a charm. Why can't you try it on your laptop?

Hardware requirements:
1. Laptop/PC with at least 6 GB RAM.
2. A processor with VT (virtualization) support, which is required to run 64-bit guest operating systems under VMware. (Most Intel Core 2 Duo and later processors have the VT feature; for older dual-core processors, please check www.intel.com. See the sketch after this list for a quick way to verify.)
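If your host runs Linux, a minimal sketch for checking VT support is to look for the vmx (Intel VT-x) or svm (AMD-V) CPU flags; the laptop# prompt is illustrative. On Windows, Intel's Processor Identification Utility from intel.com reports the same information.

# A non-zero count means the host CPU advertises hardware virtualization
laptop# egrep -c '(vmx|svm)' /proc/cpuinfo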
Required software:
1. VMware (Workstation or Player) for creating the virtual Solaris nodes
2. Solaris 10 – operating system
3. Symantec Storage Foundation HA – VCS cluster software
4. Openfiler – virtual SAN storage (iSCSI shared storage)
Before starting to configure VCS, the prerequisites below need to be completed.
1. Create two virtual Solaris guests, each with three NIC cards, in VMware.
2. Configure passwordless root SSH authentication between the two cluster nodes.
3. Install Symantec Storage Foundation HA on both Solaris nodes.
4. Install Openfiler as a virtual guest in VMware for shared storage.
5. Provision a new LUN in Openfiler to share across the nodes.
6. Add the newly provisioned LUN on both nodes using iscsiadm (see the sketch after this list).
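
For items 2 and 6, a minimal sketch of the commands involved is shown below; the Openfiler target address 192.168.1.50 is an assumption, so substitute your own, and repeat both steps on node2.

# Item 2: generate a key on node1 and append it to node2 (accept defaults, empty passphrase)
karriNode1#ssh-keygen -t rsa
karriNode1#ssh node2 'mkdir -p ~/.ssh'
karriNode1#cat ~/.ssh/id_rsa.pub | ssh node2 'cat >> ~/.ssh/authorized_keys'
# Item 6: point the Solaris iSCSI initiator at the Openfiler target and build device nodes
karriNode1#iscsiadm add discovery-address 192.168.1.50:3260
karriNode1#iscsiadm modify discovery --sendtargets enable
karriNode1#devfsadm -i iscsi
# The new LUN(s) should now appear in the format output
karriNode1#echo | format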

Once you have completed the above prerequisites, you can proceed with the VCS cluster configuration. Run the command below to begin:

karriNode1#/opt/VRTS/install/installsfha601 -configure
 Storage Foundation 5.1 Configure Program
 Logs are being written to /var/tmp/installsfha601-201207292354mTw while installsfha601 is in progress.
 Enter the Solaris x64 system names separated by spaces: [q,?] node1 node2

Storage Foundation 5.1 Configure Program
 node1 node2
 Logs are being written to /var/tmp/installsfha601-201207292354mTw while installsfha601 is in progress
 Verifying systems: 100%
 Estimated time remaining: 0:00 5 of 5
 Checking system communication ....................................Done
 Checking release compatibility .......................................Done
 Checking installed product .............................................Done
 Checking platform version ..............................................Done
 Performing product prechecks .........................................Done
 System verification checks completed successfully

Storage Foundation and High Availability 5.1 Configure Program
 node1 node2
 To configure VCS, answer the set of questions on the next screen.
 When [b] is presented after a question, 'b' may be entered to go back to the first question of the configuration set.
 When [?] is presented after a question, '?' may be entered for help or additional information about the question.
 Following each set of questions, the information you have entered will be presented for confirmation. To repeat the set of
 questions and correct any previous errors, enter 'n' at the confirmation prompt.
 No configuration changes are made to the systems until all configuration questions are completed and confirmed.
 Press [Enter] to continue:

To configure VCS for SF51 the following information is required:
 A unique Cluster name
 A unique Cluster ID number between 0-65535
 Two or more NIC cards per system used for heartbeat links
 One or more heartbeat links are configured as private links
 One heartbeat link may be configured as a low priority link
 All systems are being configured to create one cluster
 Enter the unique cluster name: [q,?] karri
 Enter a unique Cluster ID number between 0-65535: [b,q,?] (0) 5
 Discovering NICs on node1 ............. Discovered e1000g0 e1000g1 e1000g2
 To use aggregated interfaces for private heartbeat, enter the name of an aggregated interface.
 To use a NIC for private heartbeat, enter a NIC which is not part of an aggregated interface.
 Enter the NIC for the first private heartbeat link on node1: [b,q,?] e1000g1
 Would you like to configure a second private heartbeat link? [y,n,q,b,?] (y)
 Enter the NIC for the second private heartbeat link on node1: [b,q,?] e1000g2
 Do you want to configure an additional low priority heartbeat link? [y,n,q,b,?] (n) y
 Enter the NIC for the low priority heartbeat link on node1: [b,q,?] (e1000g0)
 Are you using the same NICs for private heartbeat links on all systems? [y,n,q,b,?] (y)
 Checking Media Speed for e1000g1 on node1 .................................1000
 Checking Media Speed for e1000g2 on node1 ................................ 1000
 Checking Media Speed for e1000g1 on node2 .................................1000
 Checking Media Speed for e1000g2 on node2 .................................1000

Storage Foundation and High Availability 5.1 Configure Program
 node1 node2
 Cluster information verification:
 Cluster Name: karri
 Cluster ID Number: 5
 Private Heartbeat NICs for node1:
 link1=e1000g1
 link2=e1000g2
 Low Priority Heartbeat NIC for node1: link-lowpri=e1000g0
 Private Heartbeat NICs for node2:
 link1=e1000g1
 link2=e1000g2
 Low Priority Heartbeat NIC for node2: link-lowpri=e1000g0
 Is this information correct? [y,n,q,b,?] (y)

Virtual IP can be specified in RemoteGroup resource, and can be used to connect to the cluster using Java GUI
 The following data is required to configure the Virtual IP of the Cluster:
 A public NIC used by each system in the cluster
 A Virtual IP address and netmask
 Do you want to configure the Virtual IP? [y,n,q,?] (n)
 Storage Foundation and High Availability 5.1 Configure Program
 node1 node2
 Veritas Cluster Server can be configured to utilize Symantec Security Services
 Running VCS in Secure Mode guarantees that all inter-system communication is encrypted, and users are verified with security credentials. When running VCS in Secure Mode, NIS and system usernames and passwords are used to verify identity. VCS usernames and passwords are no longer utilized when a cluster is running in Secure Mode.

 Before configuring a cluster to operate using Symantec Security Services, another system must already have Symantec Security Services installed and be operating as a Root Broker. Refer to the Veritas Cluster Server Installation Guide for more information on configuring a Symantec Product Authentication Service Root Broker.

Would you like to configure VCS to use Symantec Security Services? [y,n,q] (n)

Storage Foundation and High Availability 5.1 Configure Program
 node1 node2

The following information is required to add VCS users:
 A user name
 A password for the user
 User privileges (Administrator, Operator, or Guest)

Do you want to set the username and/or password for the Admin user
 (default username = 'admin', password='password')? [y,n,q] (n)
 Do you want to add another user to the cluster? [y,n,q] (n)
 VCS User verification:
 User: admin Privilege: Administrators
 Passwords are not displayed
 Is this information correct? [y,n,q] (y)

Storage Foundation and High Availability 5.1 Configure Program
 node1 node2
 The following information is required to configure SMTP notification:
 The domain-based hostname of the SMTP server
 The email address of each SMTP recipient
 A minimum severity level of messages to send to each recipient
 Do you want to configure SMTP notification? [y,n,q,?] (n)

Storage Foundation and High Availability 5.1 Configure Program node1 node2
 The following information is required to configure SNMP notification:
 System names of SNMP consoles to receive VCS trap messages
 SNMP trap daemon port numbers for each console
 A minimum severity level of messages to send to each console
 Do you want to configure SNMP notification? [y,n,q,?] (n)

All SFHA processes that are currently running must be stopped
 Do you want to stop SFHA processes now? [y,n,q,?] (y)

Storage Foundation and High Availability 5.1 Configure Program
 node1 node2
 Logs are being written to /var/tmp/installsfha601-201207300006zNi while installsfha601 is in progress
 Stopping SFHA: 100%
 Estimated time remaining: 0:00 8 of 8
 Performing SFHA prestop tasks .....................................Done
 Stopping vxatd ....................................................Done
 Stopping had ......................................................Done
 Stopping hashadow .................................................Done
 Stopping CmdServer ................................................Done
 Stopping vxfen ....................................................Done
 Stopping gab ......................................................Done
 Stopping llt ......................................................Done
 Storage Foundation High Availability Shutdown completed successfully

Storage Foundation and High Availability 5.1 Configure Program
 node1 node2
 Logs are being written to /var/tmp/installsfha601-201207300006zNi while installsfha601 is in progress
 Starting SFHA: 100%
 Estimated time remaining: 0:00 19 of 19
 Performing SFHA configuration .........................................Done
 Starting vxdmp ........................................................Done
 Starting vxio .........................................................Done
 Starting vxspec .......................................................Done
 Starting vxconfigd ....................................................Done
 Starting vxesd ........................................................Done
 Starting vxrelocd .....................................................Done
 Starting vxconfigbackupd ..............................................Done
 Starting vxportal .....................................................Done
 Starting fdd ..........................................................Done
 Starting llt ..........................................................Done
 Starting gab ..........................................................Done
 Starting vxfen ........................................................Done
 Starting had ..........................................................Done
 Starting hashadow .....................................................Done
 Starting CmdServer ....................................................Done
 Starting vxdbd ........................................................Done
 Starting odm ..........................................................Done
 Performing SFHA poststart tasks .......................................Done

Storage Foundation High Availability Startup completed successfully

installsfha601 log files, summary file, and response file are saved at:
 /opt/VRTS/install/logs/installsfha601-201207300006zNi
 karriNode1#
karriNode1#/opt/VRTS/bin/hastatus -sum
 -- SYSTEM STATE
 -- System State Frozen
 A node1 RUNNING 0
 A node2 RUNNING 0
Once the configuration has completed as shown above, we need to create a service group.
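
Before building the service group, it is worth confirming that the heartbeat links and cluster membership are healthy. A minimal sketch using the standard LLT/GAB utilities:

# Both e1000g1 and e1000g2 should show UP for node1 and node2
karriNode1#lltstat -nvv
# Port a (GAB) and Port h (VCS) should list both nodes in their membership
karriNode1#gabconfig -a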

Create a disk group for the cluster using VxVM.
 karriNode1#echo |format
 Searching for disks...done
 AVAILABLE DISK SELECTIONS:
 0. c1t0d0
 /pci@0,0/pci15ad,1976@10/sd@0,0
 1. c2t2d0
 /iscsi/disk@0000iqn.2006-01.com.openfiler%3Atsn.58533bc21b9e0001,0
 2. c2t3d0
 /iscsi/disk@0000iqn.2006-01.com.openfiler%3Atsn.58533bc21b9e0001,1
 Specify disk (enter its number): Specify disk (enter its number):
karriNode1#vxdisk list
 DEVICE TYPE DISK GROUP STATUS
 c1t0d0s2 auto:none - - online invalid
 c2t2d0s2 auto:none - - online invalid
 c2t3d0s2 auto:none - - online invalid

Bring the Openfiler LUNs under VxVM control.

 karriNode1#/etc/vx/bin/vxdisksetup -i c2t2d0
 karriNode1#/etc/vx/bin/vxdisksetup -i c2t3d0
 karriNode1#vxdisk list
 DEVICE TYPE DISK GROUP STATUS
 c1t0d0s2 auto:none - - online invalid
 c2t2d0s2 auto:cdsdisk - - online
 c2t3d0s2 auto:cdsdisk - - online

Create a new diskgroup.

karriNode1#vxdg init karridg iscsi1=c2t2d0 iscsi2=c2t3d0
 karriNode1#vxdisk list
 DEVICE TYPE DISK GROUP STATUS
 c1t0d0s2 auto:none - - online invalid
 c2t2d0s2 auto:cdsdisk iscsi1 karridg online
 c2t3d0s2 auto:cdsdisk iscsi2 karridg online

Creating the volume

 karriNode1#vxassist -g karridg make hansvol1 3g
 karriNode1#vxprint -hvt
 Disk group: karridg
 v hansvol1 - ENABLED ACTIVE 6291456 SELECT - fsgen
 pl hansvol1-01 hansvol1 ENABLED ACTIVE 6291456 CONCAT - RW
 sd iscsi1-01 hansvol1-01 iscsi1 0 4050688 0 c2t2d0 ENA
 sd iscsi2-01 hansvol1-01 iscsi2 0 2240768 4050688 c2t3d0 ENA

Creating the filesystem

karriNode1#mkfs -F vxfs /dev/vx/rdsk/karridg/hansvol1
 version 7 layout
 6291456 sectors, 3145728 blocks of size 1024, log size 16384 blocks
 largefiles supported
 karriNode1#
 karriNode1#mkdir /hansvol1
 karriNode1#mount -F vxfs /dev/vx/dsk/karridg/hansvol1 /hansvol1
 karriNode1#df -h /hansvol1
 Filesystem size used avail capacity Mounted on
 /dev/vx/dsk/karridg/hansvol1
 3.0G 18M 2.8G 1% /hansvol1
 karriNode1#umount /hansvol1

The filesystem is unmounted so that VCS, rather than the administrator, controls when and where it is mounted once the service group is online.
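The next section adds the service group through Cluster Manager (the Java GUI). If you prefer the command line, the sketch below shows the equivalent steps; the group and resource names (karrisg, karridg_res, hansvol1_res, hansvol1_mnt) are illustrative, not taken from the original setup.

# Open the configuration for writing and create the group on both nodes
karriNode1#haconf -makerw
karriNode1#hagrp -add karrisg
karriNode1#hagrp -modify karrisg SystemList node1 0 node2 1
karriNode1#hagrp -modify karrisg AutoStartList node1
# DiskGroup -> Volume -> Mount resource stack for /hansvol1
karriNode1#hares -add karridg_res DiskGroup karrisg
karriNode1#hares -modify karridg_res DiskGroup karridg
karriNode1#hares -add hansvol1_res Volume karrisg
karriNode1#hares -modify hansvol1_res Volume hansvol1
karriNode1#hares -modify hansvol1_res DiskGroup karridg
karriNode1#hares -add hansvol1_mnt Mount karrisg
karriNode1#hares -modify hansvol1_mnt MountPoint /hansvol1
karriNode1#hares -modify hansvol1_mnt BlockDevice /dev/vx/dsk/karridg/hansvol1
karriNode1#hares -modify hansvol1_mnt FSType vxfs
karriNode1#hares -modify hansvol1_mnt FsckOpt %-y
# Mount depends on Volume, Volume depends on DiskGroup
karriNode1#hares -link hansvol1_mnt hansvol1_res
karriNode1#hares -link hansvol1_res karridg_res
karriNode1#hares -modify karridg_res Enabled 1
karriNode1#hares -modify hansvol1_res Enabled 1
karriNode1#hares -modify hansvol1_mnt Enabled 1
# Save the configuration and bring the group online
karriNode1#haconf -dump -makero
karriNode1#hagrp -online karrisg -sys node1

If everything is healthy, hastatus -sum should then show the group ONLINE on node1.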

Adding Service Groups using Cluster Manager