SANsymphony-V and ALUA enabled storage systems


It's no secret that DataCore's SANsymphony-V itself isn't ALUA-aware for the storage it uses as pool storage. The DataCore drivers that replace the native fibre channel drivers from QLogic, Emulex or whatever vendor you use do not understand ALUA commands. That's why you sometimes end up with strange performance results. Sometimes? Yes, because even though the DataCore drivers do not understand ALUA, they still offer some basic path policy management for the backend storage systems. If you check the disks SSY-V sees as possible pool disks, you will see that they can be reached via several paths. How many paths are available to a single disk depends on the storage system used. The algorithm SSY-V uses to select the active path to a disk is not documented anywhere, nor is what it is capable of. The situations I've seen so far suggest a quite "dumb" algorithm: most of the time all disks are active on the same path, sometimes a second path is chosen, no matter how many paths are available per disk.


This will probably lead to problems when you have entry-level storage systems in the backend like HP's P2000 or Fujitsu's DX80/90 systems. The reason for the problems is the way these dual-head arrays manage the presented storage. RAID sets, so-called vDisks, are owned by one of the two controllers. Volumes (that's what you present to your hosts) created from a single vDisk are owned by the same controller, too. If you present storage to an application server like a DataCore SSY-V server, four paths are created: two from the owning controller and two from the non-owning controller.

If you use a path to the owning controller, performance is perfect. If you use a so-called non-optimized path to the volumes over the non-owning controller, all data has to be redirected via the internal cache mirror to the owning controller. This has severe performance impacts. It's not unusual to lose 50% or more performance when using one of the non-optimized paths.


What does that mean for SSY-V and its quite simple path algorithm? Right, most of the time a non-optimized path to the volumes is chosen. This is not DataCore's fault; they simply cannot support every single path selection technique on the market, so they use the simple algorithm mentioned above. It always works, just probably not in an optimized way.

How can you test if your storage system is affected by the problems mentioned above? Well, that's quite simple. Create a few disks and present them to SSY-V. Make sure that at least one disk is connected via a non-optimized path and one via an optimized path. See option 1 below for how to check and change paths. Now add these new disks to a new pool. Select the pool and go to the "Allocation map" tab. All disks should now be listed with a yellow bar, waiting for the initialization process to start. This should take a minute or two, but then all disks begin to initialize. Wait a few more minutes and check back. If all yellow bars shrink at the same speed, you have no problem. If one or more yellow bars are smaller than the others, you have the mentioned problem. The bars show you how much of each disk is already initialized, and if disks run at different speeds because of non-optimized paths, you will see it here.

Another way is to go to the performance tab of each disk currently initializing and check the write throughput in MB/s.
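If you prefer to watch the write rates outside the SSY-V GUI, a quick sketch is to sample the Windows performance counters directly with the built-in typeperf tool (counter path and intervals below are just an example setup):

```shell
:: Sample the write throughput of all physical disks on the DataCore server,
:: every 5 seconds, 12 samples. Disks that got stuck on a non-optimized path
:: will show clearly lower Bytes/sec during the pool initialization.
typeperf "\PhysicalDisk(*)\Disk Write Bytes/sec" -si 5 -sc 12
```

If one disk consistently writes at half the rate of its siblings while all disks initialize in parallel, that's the same symptom as the slowly shrinking yellow bar.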


So, what to do to get as much performance as possible from the attached storage system?

Currently there are two ways:

1.) Use the DataCore drivers and manually select the preferred path for every single physical disk.

2.) Use native fibre channel drivers and Microsoft's MPIO software with RoundRobin path policy.


The two options in more detail...

If you choose to use the DataCore drivers, you have the most flexible solution. The DataCore drivers allow you to use an FC HBA as frontend (target), backend (initiator) and mirror port (both target and initiator), and even all of these at the same time, so you can use the same HBA for several roles. This keeps the number of needed HBAs at a minimum and is probably the preferred option in small environments where you have only a few physical disks in your pool. Simply go ahead and check on your storage system which volumes (LUNs) are carved from which vDisk (RAID set) and which controller owns it. In the SSY-V GUI, go to "Physical disks" or, if the disks are already assigned to a pool, go to the pool's physical disks, select each disk one after the other, go to "Paths" and select one of the two optimized paths available. The icon for that path should change to a green play symbol.

Do that for ALL physical disks on BOTH/ALL DC servers. As you can see, this task is only acceptable with a few disks. By the way, all changes you make are preserved across a reboot, so you only have to do this once. Make sure to check the paths after a software update, whether of DataCore or of the storage system firmware. I haven't tested it, but such an update could render your manual path selection useless and reset it to the default "non-optimized" setup.


The second option is a bit more tricky and you have to be careful before and during the implementation. It's based on using native fibre channel drivers for the HBAs that provide backend storage connectivity and (if applicable) 3rd-party MPIO software. This option is supported by DataCore (see some notes on the support policy at the end of this article). DataCore actually recommends native FC drivers only if you have problems with backend storage connectivity, but I don't see a general problem in using them for performance improvement as well.

First, open the SSY-V GUI, go to the properties of the HBA you want to use with native drivers and select the "Info" tab. Have a look at the "Location" entry. It is given in the pattern Bus #, Slot #, Function #.


Write down the values for all HBAs whose driver you want to replace. Now stop the virtualization in the SSY-V GUI and then, to be sure, stop the DataCore Executive Service, too. Close all SSY-V GUI windows, go to the Windows device manager and expand the DataCore Channel Adapters section. Check each FC adapter listed in this section to find the adapters to replace.


When you have identified the adapters, select "Update driver" from the context menu and let Windows search for the best driver available. It should now install the native FC driver. If it doesn't find a suitable native driver, provide one in the wizard. Make sure the adapters disappear from the DataCore section and appear under storage adapters.

This procedure is explained in more detail in DataCore's Technical Bulletin #1, "Managing Fibre Channel Host Bus Adaptor Drivers".

Now it's time to install the MPIO drivers for your storage systems. For the storage systems from HP and FTS mentioned above there is no separate MPIO software available; they all use the native Windows MPIO stack. If your storage vendor offers separate MPIO software, install it only if it is based on the native Windows MPIO.
Since MPIO is already enabled by DataCore's SSY-V, there is only a single job left to do: you have to tell Windows MPIO to be active for your storage system. This is done in the MPIO console. There should already be an entry for your storage vendor. If there isn't any, do a rescan for changed hardware in device manager, then go back to the MPIO console, refresh and voila, there it is. Add it to the list of storage systems Windows MPIO is responsible for.
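The same two steps can also be done from an elevated command prompt with the built-in mpclaim tool instead of the MPIO console. This is only a sketch: the feature name may differ between Windows versions, and the device string below is an example, not your real one.

```shell
:: Enable the Windows MPIO feature if it is not already active
:: (feature name may vary slightly between Windows Server versions)
dism /online /enable-feature /featurename:MultipathIo

:: List devices that MPIO can see but has not claimed yet;
:: this shows the exact 8-char vendor + 16-char product ID string
mpclaim -e

:: Tell the Microsoft DSM to claim devices with this hardware ID.
:: "HP      P2000G3  FC/iSCSI" is ONLY an example string -- use the exact
:: string (including padding spaces) reported by "mpclaim -e" for your array.
:: -n suppresses the automatic reboot, -i installs/claims the device.
mpclaim -n -i -d "HP      P2000G3  FC/iSCSI"
```

Whichever way you do it, the vendor/product string must match exactly, padding spaces included, or the DSM will not claim the LUNs.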

Now, go to the services MMC and set the DataCore Executive Service to "Manual", just to be sure. Reboot your DCS.

After the reboot, go to the device manager and expand the hard disk section. You should now see all volumes presented from your storage system. Choose one of the disks, go to its properties and select the "MPIO" tab. It should look like this:


The path policy is set to Round Robin and all paths are optimized.
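You can verify and change the same settings from the command line with mpclaim as well; a sketch (disk number 0 is just an example, take the real numbers from the first command's output):

```shell
:: Show all MPIO-claimed disks with their current load-balance policy
mpclaim -s -d

:: Show path details (path IDs and states) for MPIO disk 0
mpclaim -s -d 0

:: Set the load-balance policy of MPIO disk 0 to Round Robin (policy id 2)
mpclaim -l -d 0 2

:: Or set Round Robin as the global default for all MSDSM-claimed disks
mpclaim -l -m 2
```

This is handy when you have many backend LUNs and don't want to click through the MPIO tab of every single disk.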

Open the services MMC, set the DataCore Executive Service back to "Automatic" and start the service. Open the SSY-V GUI and expand the server ports section. If you formerly used the DataCore drivers and already configured the affected FC HBAs, these cards will now appear as "not present" with a red X. This is expected behavior. If you are sure you will not convert back to the DataCore drivers, you can even remove these entries as they are no longer used.

Next, navigate to your pool/physical disks section. All disks should be visible here, but in the "Paths" tab there isn't a single entry.


That's okay, because the paths are no longer managed by DataCore. To SSY-V these disks look like direct-attached disks. Windows takes care of path selection and failover.

If the disks already belonged to a pool, the pools should be shown as read/write and available. If the disks were unused until now, they appear in the "Physical disks" section and can be grouped into a pool now.

Make these changes for all HBAs on all DataCore servers and you're done. No further special configuration is needed to get the best possible performance.

If you ever update/upgrade your DataCore software (and I'm sure you will, sooner or later), this will not affect your non-DataCore HBA drivers. They will remain on the native FC drivers, as the setup routine of SANsymphony-V skips these adapters.


A word on DataCore's support policy for this kind of setup. DataCore only offers support for their own drivers. They do support this kind of configuration, but as soon as you run into problems they will only help you by checking their own piece of software. They will never help you solve problems caused by the native drivers, the Windows MPIO or whatever 3rd-party MPIO software you use. So I recommend this kind of configuration only for people who have strong knowledge of their storage systems and of DataCore, and who can do some deep troubleshooting themselves.


