The most visible change to the mass storage stack in HP-UX 11i v3 is the addition of agile addressing, also known as persistent LUN binding. With the introduction of agile addressing, there is only a single DSF for each unique LUN in the server, no matter how many lunpaths the LUN has or whether any of those lunpaths change.
In earlier releases of HP-UX, legacy addressing is used and each LUN has a separate device file for every Fibre Channel path available on the server. With agile addressing, a single persistent DSF represents the LUN regardless of how many paths sit behind it.
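To see the mapping on an 11i v3 box, ioscan can list the lunpaths behind a LUN; the disk name below is just an illustration:

# ioscan -m lun /dev/disk/disk12
# ioscan -N -fnC disk

The first command shows the lunpath(s) behind a single LUN, and the second lists all disks in the agile view.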
Before configuring volume groups on a new HP-UX 11.31 server, the SPC-2 (SCSI Primary Commands-2) flag needs to be enabled. This flag is enabled at the port / FA level from the storage side, so the Unix engineer needs to engage the storage team to set it. If the flag is not configured properly for a server, the performance of other 11.31 servers connected to the same FA will degrade even though those servers have the SPC flag enabled. After enabling the flag, the server needs to be restarted.
The SPC setting can be verified with the scsimgr command. Please find below the output from a server where the SPC flag is not enabled.
hp461# scsimgr get_info -D /dev/rdisk/disk3 |grep -i spc
SPC protocol revision = 2
If the SPC flag is enabled, the output will be:
hp461# scsimgr get_info -D /dev/rdisk/disk3 |grep -i spc
SPC protocol revision = 4
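To check the flag on every disk in one go, a small loop over the persistent DSFs works; a rough sketch (skipping partition device files):

for d in /dev/rdisk/disk*
do
    case $d in *_p[0-9]*) continue;; esac
    echo "$d: $(scsimgr get_info -D $d | grep -i 'SPC protocol revision')"
done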
If the SPC flag is not enabled for a server, there will be more than one device file for a LUN, equal to the number of FC paths on the server. Suppose a server has 2 FC cards and the SPC flag is not enabled; the disk details will be as follows.
hp470# ioscan -m dsf /dev/rdisk/disk12
Persistent DSF Legacy DSF(s)
========================================
/dev/rdisk/disk12 /dev/rdsk/c5t14d7
And if we try to create or import a volume group on the server, errors like the one below appear.
#vgimport -v -s -N -m /tmp/hp404_vg125.map /dev/vg125
Beginning the import process on Volume Group "vg125".
Verification of unique LVM disk id on each disk in the volume group /dev/vg125 failed.
Following are the sets of disks having identical LVM disk id
/dev/disk/disk510 /dev/disk/disk740
Root cause:
On an HP-UX server, there is a lunpath to the disk through each active FC card configured for it. In earlier releases of HP-UX, there is a device file for each FC path to a LUN; the server normally treats one path as the primary path and the others as alternate paths to the LUN. In 11.31, if the flag is not enabled, the server treats each path as a separate disk. Because of this, when we try to use the disk for creating the VG, we get the error "Following are the sets of disks having identical LVM disk id".
Once the SPC flag is enabled, there is only one DSF for a LUN irrespective of the number of FC paths configured on the server.
The output of the LUN status from a server where the flag is enabled will be as follows:
hp470# ioscan -m dsf /dev/rdisk/disk12
Persistent DSF Legacy DSF(s)
========================================
/dev/rdisk/disk12 /dev/rdsk/c5t14d7
/dev/rdsk/c4t14d7
Note: This server has 2 FC paths configured for its disks.
Scan and Configure New LUNs on Redhat Linux
Found another useful thing on the web. This is a quick guide to rescanning and configuring newly added LUNs in Linux.
To configure the newly added LUNs on RHEL:
# ls /sys/class/fc_host
host0 host1 host2 host3
# fdisk -l 2>/dev/null | egrep '^Disk' | egrep -v 'dm-' | wc -l
# echo "1" > /sys/class/fc_host/host0/issue_lip
# echo "- - -" > /sys/class/scsi_host/host0/scan
# echo "1" > /sys/class/fc_host/host1/issue_lip
# echo "- - -" > /sys/class/scsi_host/host1/scan
# echo "1" > /sys/class/fc_host/host2/issue_lip
# echo "- - -" > /sys/class/scsi_host/host2/scan
# echo "1" > /sys/class/fc_host/host3/issue_lip
# echo "- - -" > /sys/class/scsi_host/host3/scan
# cat /proc/scsi/scsi | egrep -i 'Host:' | wc -l
# fdisk -l 2>/dev/null | egrep '^Disk' | egrep -v 'dm-' | wc -l
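Since the host numbers differ from server to server, the same rescan can be written as a loop over whatever fc_host entries exist; a minimal sketch:

for h in /sys/class/fc_host/host*
do
    host=${h##*/}
    echo "1" > /sys/class/fc_host/$host/issue_lip
    echo "- - -" > /sys/class/scsi_host/$host/scan
done
fdisk -l 2>/dev/null | egrep '^Disk' | egrep -v 'dm-' | wc -l

The final fdisk count can be compared with the count taken before the scan to confirm the new LUNs arrived.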
Alternatively, we can run the rescan-scsi-bus.sh script.
To scan new LUNs on a Linux operating system that uses the QLogic driver:
You need to find the driver proc file under /proc/scsi/qlaXXX.
For example on my system it is /proc/scsi/qla2300/0
Once the file is identified, type the following commands (logged in as root):
# echo "scsi-qlascan" > /proc/scsi/qla2300/0
# cat /proc/scsi/qla2300/0
Now use the rescan-scsi-bus.sh script to add the new LUN as a device. Run the script as follows:
# ./rescan-scsi-bus.sh -l -w
The output of ls -l /sys/block/*/device should give you an idea about how each device is connected to the system.
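The device symlink ends in the SCSI address in host:channel:target:lun form, so each disk can be matched back to its HBA; the output shape is roughly like this (path shortened for illustration):

# ls -l /sys/block/sdc/device
lrwxrwxrwx 1 root root 0 ... device -> ../../devices/.../host2/target2:0:0/2:0:0:1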
Linux SAN Multipathing
Using device mapper
There are a lot of SAN multipathing solutions on Linux at the moment. Two of them are discussed in this blog. The first one is device mapper multipathing, a failover and load balancing solution with a lot of configuration options. The second one (mdadm multipathing) is just a failover solution with manual re-enabling of a failed path. The advantage of mdadm multipathing is that it is very easy to configure.
Before using a multipathing solution in a production environment on Linux, it is also important to determine whether the chosen solution is supported with the hardware in use. For example, HP doesn't support the device mapper multipathing solution on their servers yet.
Device Mapper Multipathing
Procedure for configuring the system with DM-Multipath:
- Install device-mapper-multipath rpm
- Edit the multipath.conf configuration file:
- comment out the default blacklist
- change any of the existing defaults as needed
- Start the multipath daemons
- Create the multipath devices with the multipath command
Install Device Mapper Multipath
# rpm -ivh device-mapper-multipath-0.4.7-8.el5.i386.rpm
warning: device-mapper-multipath-0.4.7-8.el5.i386.rpm: Header V3 DSA signature:
Preparing...                ########################################### [100%]
   1:device-mapper-multipath########################################### [100%]
Initial Configuration
Set user_friendly_names so the devices are created as /dev/mapper/mpath[n], and comment out the default blacklist that blocks all devices.
# vim /etc/multipath.conf
#blacklist {
#        devnode "*"
#}
defaults {
        user_friendly_names yes
        path_grouping_policy multibus
}
Load the needed module and enable the startup service.
# modprobe dm-multipath
# /etc/init.d/multipathd start
# chkconfig multipathd on
Print out the multipathed device.
# multipath -v2
or
# multipath -v3
Configuration
Configure device type in config file.
# cat /sys/block/sda/device/vendor
HP
# cat /sys/block/sda/device/model
HSV200
# vim /etc/multipath.conf
devices {
        device {
                vendor "HP"
                product "HSV200"
                path_grouping_policy multibus
                no_path_retry "5"
        }
}
Configure multipath device in config file.
# cat /var/lib/multipath/bindings
# Format:
# alias wwid
mpath0 3600508b400070aac0000900000080000
# vim /etc/multipath.conf
multipaths {
        multipath {
                wwid 3600508b400070aac0000900000080000
                alias mpath0
                path_grouping_policy multibus
                path_checker readsector0
                path_selector "round-robin 0"
                failback "5"
                rr_weight priorities
                no_path_retry "5"
        }
}
Add devices that should not be multipathed to the blacklist (e.g. local RAID devices, volume groups).
# vim /etc/multipath.conf
devnode_blacklist {
        devnode "^cciss!c[0-9]d[0-9]*"
        devnode "^vg*"
}
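After editing the blacklist, the existing maps are not rebuilt on their own; assuming the maps are not mounted or otherwise in use, flushing and re-running multipath picks up the change:

# multipath -F
# multipath -v2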
Show Configured Multipaths.
# dmsetup ls --target=multipath
mpath0  (253, 1)
# multipath -ll
mpath0 (3600508b400070aac0000900000080000) dm-1 HP,HSV200
[size=10G][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=4][active]
 \_ 0:0:0:1 sda 8:0  [active][ready]
 \_ 0:0:1:1 sdb 8:16 [active][ready]
 \_ 1:0:0:1 sdc 8:32 [active][ready]
 \_ 1:0:1:1 sdd 8:48 [active][ready]
Format and mount Device
fdisk cannot be used on /dev/mapper/[dev_name] devices. Use fdisk on the underlying disk instead, then execute the following kpartx command so device-mapper multipath creates a /dev/mapper/mpath[n] device for the partition.
# fdisk /dev/sda
# kpartx -a /dev/mapper/mpath0
# ls /dev/mapper/*
mpath0 mpath0p1
# mkfs.ext3 /dev/mapper/mpath0p1
# mount /dev/mapper/mpath0p1 /mnt/san
After that /dev/mapper/mpath0p1 is the first partition on the multipathed device.
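kpartx also has a matching delete flag, handy before removing the partition or tearing down the map; for example:

# umount /mnt/san
# kpartx -d /dev/mapper/mpath0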
Multipathing with mdadm on Linux
The md multipathing solution is a failover-only solution, which means that only one path is used at a time and no load balancing is done.
Start the MD Multipathing Service
# chkconfig mdmpd on
# /etc/init.d/mdmpd start
On the first Node (if it is a shared device)
Make Label on Disk
# fdisk /dev/sdt
Disk /dev/sdt: 42.9 GB, 42949672960 bytes
64 heads, 32 sectors/track, 40960 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdt1               1       40960    41943024   fd  Linux raid autodetect
# partprobe
Bind multiple paths together
# mdadm --create /dev/md4 --level=multipath --raid-devices=4 /dev/sdq1 /dev/sdr1 /dev/sds1 /dev/sdt1
Get UUID
# mdadm --detail /dev/md4
UUID : b13031b5:64c5868f:1e68b273:cb36724e
Set md configuration in config file
# vim /etc/mdadm.conf
# Multiple Paths to RAC SAN
DEVICE /dev/sd[qrst]1
ARRAY /dev/md4 uuid=b13031b5:64c5868f:1e68b273:cb36724e
# cat /proc/mdstat
On the second Node (Copy the /etc/mdadm.conf from the first node)
# mdadm -As
# cat /proc/mdstat
Restore a failed path
# mdadm /dev/md4 -f /dev/sdt1 -r /dev/sdt1 -a /dev/sdt1
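After the re-add, a quick check should show the path active again:

# cat /proc/mdstat
# mdadm --detail /dev/md4 | grep -i state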
Scan and Configure New SAN disks on Redhat Linux
The steps below scan for new LUNs from the SAN after the LUNs have been presented from the storage side. They apply when you are using QLogic HBAs.
The steps below will work on SUSE Linux Enterprise Server (SLES) 10 and Red Hat Enterprise Linux (RHEL) 5.

Create a directory to hold the utilities. In this example we will use the /tmp/ql_utils directory. Change to the /tmp/ql_utils directory:

cd /tmp/ql_utils

Retrieve the utilities from the download above and extract them from the zip file into the /tmp/ql_utils directory. This will put five files into /tmp/ql_utils:

ql-dynamic-tgt-lun-disc.sh
README.ql-dynamic-tgt-lun-disc.txt
Copying
revision.qldynamic.txt
sg3_utils-1.23.tgz

The file that we are going to be using is ql-dynamic-tgt-lun-disc.sh. We need to ensure that it is set executable:

chmod a+x /tmp/ql_utils/ql-dynamic-tgt-lun-disc.sh

The ql-dynamic-tgt-lun-disc.sh script has several options available. You can see what these options are by running:

/tmp/ql_utils/ql-dynamic-tgt-lun-disc.sh -h

The commands to scan for new LUNs are listed below; after the commands we'll describe what each one does. These commands should be run with root privileges while inside the /tmp/ql_utils directory.

powermt display dev=all | egrep "Pseudo|Logical" > before
./ql-dynamic-tgt-lun-disc.sh
Please make sure there is no active I/O before running this script
Do you want to continue: (yes/no)? yes
powermt config
powermt display dev=all | egrep "Pseudo|Logical" > after
diff after before
powermt save

The first line outputs the current list of LUNs to the file named before. This is optional, but it makes it easy to see which new LUNs have been discovered later on. The second line actually does the scan for new LUNs and prompts you to make sure that it is OK to run the script; answer yes to the prompt. The fifth line (powermt config) creates the emcpower devices in /dev. The sixth line outputs the new list of LUNs to the file named after. The seventh line compares the new list of LUNs to the old list; the differences are displayed on screen. Make sure the names of the new LUNs show up in the output. The last line saves the configuration.

Remove LUNs From Linux Safely
To remove a LUN and all associated PowerPath and Linux devices from the host environment, follow these steps.
Note: it is critical to follow the procedures in exact order, because deviating from them can cause the host to panic.
1. Stop any I/O to the device being removed [unmount the filesystem]. It is critical to stop all I/O on the device that is being removed.
2. Run the following command to determine which native SCSI devices are mapped to the pseudo device: powermt display dev=all. Find the name of the LUN to be removed and match it up with the emcpower device name. This needs to be done on each server individually.
3. Run the command: powermt remove dev=emcpowerX, where X corresponds to the LUN to be removed.
4. Run the command: powermt release. Failing to run this command results in the pseudo device still being visible in /dev and /sys/block and may lead to complications when the new devices are dynamically added.
5. In the /tmp/ql_utils directory, there should be a script to rescan the QLogic HBAs, called ql-dynamic-tgt-lun-disc.sh. Run the script: /tmp/ql_utils/ql-dynamic-tgt-lun-disc.sh
6. Now remove the device from the storage array using your array admin utilities. On the Linux server, run powermt display to verify that the device has been removed.
7. Finally, remove the LUN from the Storage Group (CLARiiON) or unmap it from the FA ports (DMX).
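The before/after bookkeeping is easy to forget, so the whole scan can be wrapped in a small script; a minimal sketch that assumes the utilities live in /tmp/ql_utils as above (the ql script still asks its active-I/O question interactively):

#!/bin/bash
# Scan for new LUNs via PowerPath + the QLogic rescan script,
# then show which LUNs appeared.
cd /tmp/ql_utils || exit 1
powermt display dev=all | egrep "Pseudo|Logical" > before
./ql-dynamic-tgt-lun-disc.sh
powermt config
powermt display dev=all | egrep "Pseudo|Logical" > after
diff after before
powermt save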
Quick HOWTO: Reduce SWAP Partition Online Without Reboot in Linux
Recently I had a request to reduce the swap space and allocate that space to some other LV on one of our servers. Below is what I followed, and it worked perfectly for me. :)

Make sure you have enough physical memory to hold the swap contents. Now, turn the swap off:

# sync
# swapoff <swap_partition>

Now check the status:

# swapon -s

Then use the fdisk command:

# fdisk <disk>

List the partitions with the "p" command.
Delete your swap partition with the "d" command.
Create a smaller Linux swap partition with the "n" command.
Make sure it is a Linux swap partition (type 82; change it with the "t" command).
Write the partition table with the "w" command.

Run "partprobe" to push the updated partition table to the kernel. (This is very important before proceeding further.) Then:

# mkswap <swap_partition>
# swapon <swap_partition>

Check to make sure swap is turned on:

# swapon -s

Now you can use the freed space to increase the space of other logical volumes (LVs). Use fdisk to create a new partition, then:

# partprobe
# pvcreate <new_partition>
# vgextend <vg_name> <new_partition>
# lvextend -L +SIZE_TO_INCREASE <lv_path>

Note: it is extremely important to sync and turn the swap off before you change any partitions. If you FORGET TO DO THIS, YOU WILL LOSE DATA!!

Resizing Online Multipath Disk in Linux
Here are the steps to resize an online multipath disk that uses the Linux native device mapper as the multipath solution.
1. Resize your physical disk in the SAN. SAN admins will do this.
2. Use the following command to find the paths to the LUN:
# multipath -l
3. Now resize your paths. For SCSI devices, use the following command on each path:
# echo 1 > /sys/block/<path_device>/device/rescan
4. Resize your multipath device by running the multipathd resize command:
# multipathd -k'resize map mpath0'
5. Resize the filesystem (assuming LVM is NOT used):
# resize2fs /dev/mapper/mpath0
If LVM is used, you need to do the following:
# pvscan
Check that your disk changes are detected under LVM:
# pvs or pvdisplay
# vgscan
Check that the VG size has increased:
# vgs or vgdisplay
Now extend the LV:
# lvextend -L +<size>G <lv_path>
Finally, extend the filesystem:
# resize2fs <lv_path>
That's all. You are done.
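As a worked example of that last LVM extend sequence, with purely hypothetical names (new partition /dev/sda3 carved from the freed swap space, volume group vg00, logical volume lvol5):

# pvcreate /dev/sda3
# vgextend vg00 /dev/sda3
# lvextend -L +2G /dev/vg00/lvol5
# resize2fs /dev/vg00/lvol5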