DataCore SSY 10 PSP7 Update 2 brings a lot of fixes - and automatic cloud data collection


Normally I don't write blog articles about maintenance releases of software; there are plenty of other blogs on the internet that cover such content.

This time, Update 2 not only fixes quite a number of bugs (I count 16 fixes, 7 of which are marked as critical), it also brings some new functionality to the software. One of the "enhancements" is a new data collector service. The release notes state:

"Enhancement: Collect and transmit machine data to DataCore cloud-based analytics platform. Refer to the Help topic “Data Collection” to disable (opt -out) of this service."

That's interesting: a new data collection service that sends data about my system to the cloud. The release notes don't include any information about what kind of data the service sends out to the internet. That's okay; there is a link to the help topic where one can obviously get more information. As I have already installed PSP7 Update 2 on my demo systems, I have access to these new topics (if not, the online web help is always up to date). As I'm a bit concerned about sending data from and about my SAN to the public cloud, I want to know exactly which data is sent and for what purpose.


Datacore Storage Allocation Units


Storage Allocation Units (SAUs) are DataCore's "block size" when formatting a pool. At pool creation time you have to define the size of each SAU, ranging from 4MB to 1024MB with a default of 128MB. With a SAU of 128MB you can create pools of roughly 1PB. If you want the ability to add more than 1PB of disk space to a pool, you have to choose a larger SAU size.

The SAU size can only be set at pool creation time. There is no way to change it afterwards.

Datacore recommends a SAU of 128MB with SSY-V. In former versions like SANsymphony 7 or SANmelody, the recommendation was to lower the SAU size for virtualized environments, because a lower SAU size results in smaller data chunks.

For example: if you have a SAU of 128MB and a data packet of 1GB in size, this packet will be split into eight chunks, spread across eight disks in your pool. Even if you have more disks in your pool, this 1GB packet will not use more than these eight disks.

If you lower the SAU to 8MB, the same 1GB would be split into 128 chunks spread across all your disks. This way you would use every disk, and the chance to profit from this disk layout is much higher than with a larger SAU.

This example is a bit artificial, but it shows why a lower SAU size was commonly used in the past.
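To make the chunk-spread reasoning above concrete, here is a small sketch. This is a simplified model of my own, not DataCore's actual allocator; it assumes one chunk per SAU, distributed round-robin across the pool's disks:

```python
def chunks_and_disks(data_mb, sau_mb, disks_in_pool):
    """Return how many SAU-sized chunks a data packet occupies and the
    maximum number of pool disks it can be spread across (simplified:
    one chunk per SAU, round-robin distribution over the disks)."""
    chunks = -(-data_mb // sau_mb)  # ceiling division
    return chunks, min(chunks, disks_in_pool)

# A 1GB packet in a 16-disk pool:
print(chunks_and_disks(1024, 128, 16))  # (8, 8): only 8 disks are used
print(chunks_and_disks(1024, 8, 16))    # (128, 16): all 16 disks are used
```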

With this information in mind, we can calculate the maximum pool size each SAU increment supports. Unfortunately, you have to do the calculation yourself because Datacore has not officially released this information.

The following table shows each combination, based on the fact that a 128MB SAU supports a 1PB pool size.

SAU size   Maximum pool size
4MB        32TB
8MB        64TB
16MB       128TB
32MB       256TB
64MB       512TB
128MB      1PB
256MB      2PB
512MB      4PB
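The table can also be computed. The limiting factor appears to be a fixed number of SAUs a pool can address; the value 2**23 below is inferred from the single published data point (128MB SAU supports ~1PB), not an official DataCore figure:

```python
MAX_SAUS_PER_POOL = (1024 ** 3) // 128  # 1PB in MB / 128MB = 2**23 SAUs (inferred)

def max_pool_tb(sau_mb):
    """Nominal maximum pool size in TB for a given SAU size in MB."""
    return sau_mb * MAX_SAUS_PER_POOL // 1024 ** 2  # MB -> TB

for sau in (4, 8, 16, 32, 64, 128, 256, 512):
    print(f"{sau:>4}MB SAU -> {max_pool_tb(sau):>5}TB maximum pool size")
```

As the incident described next shows, the limit the software actually enforces can be lower than these nominal values.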


The problem with this table is that it is not correct. With a SAU size of 8MB, you should be able to create disk pools with ~64TB of capacity.

We recently tried to grow a pool with an initial size of ~31TB by an additional 4TB. As soon as we tried to add a single drive to this pool, the GUI threw an error that the disk couldn't be added.
That alone would be annoying rather than problematic, but the second effect of the operation is that the whole pool goes offline and can't be accessed anymore.

This is really something I hadn't expected. It might be acceptable to deny the disk addition because of some undocumented limit, but why does the pool go offline?

Fortunately, you can bring the pool back online by simply rescanning the storage HBAs, but a pool should never go down just because of a simple "add disk" action.

We tried this procedure with SANsymphony 7 and SANsymphony-V 8/9 and always got the same errors and pool problems.

Calling Datacore support and asking for help didn't get us anywhere; for the moment, the only recommendation from the support engineers is to delete the pool, recreate it with a SAU of 128MB, and rebuild all mirrors.

Funny thing with 31TB of data...



Data corruption during path failover in W2K12 iSCSI environments


Customers running Windows Server 2012 or 2012 R2 on top of any iSCSI-based storage solution that uses multipathing should URGENTLY install the latest hotfix from Microsoft, to be found here. The reason is a critical bug that causes data corruption after a path failover takes place. The available information is quite sparse; here is the official text from the KB article:

"If the connection to an iSCSI array is disconnected because of a certain issue, one or more iSCSI sessions or paths to the array fail. In this situation, Windows iSCSI initiator recovers the connection. However, the recovery causes data corruption."

This doesn't explain much about the background of the problem, but the bug seems to be serious. As already said, a hotfix is available, and all customers running an affected scenario are asked to update their servers as soon as possible.

As most of our installed base runs on FC or uses VMware as a hypervisor between the Windows systems and the storage, we haven't been affected so far. Even the customers running Hyper-V on top of DataCore-based iSCSI storage haven't reported any problems, although they definitely had path changes during recent maintenance windows. So I can't confirm that this issue occurs every time, but just to be sure, plan to install the hotfix.


Datacore: speed up resyncs


Sometimes it's necessary to resync a whole bunch of vDisks/NMVs within DataCore's SANsymphony(-V) software, e.g. because the service crashed or the virtualization wasn't stopped in the right way. This causes DataCore to do a full recovery on every volume whose state is in doubt.

This can lead to running resync operations on many volumes at the same time.

By default, the recovery priority of every volume is set to 0. This causes all volumes to be in resync at the same time.

You might say: "Cool, this will max out the resync speed and the volumes will be resynced in the shortest possible time." Unfortunately, this is not correct. It's true that all volumes run with the same priority and thus at the same speed, but this will not max out your mirror links.


DataCore MPIO (WIK 3.0) doesn't pick up all presented vDisks


Yesterday I saw a strange situation on one of our customers' systems. They have used DataCore almost from the beginning (SANsymphony 4 was the first release they used in production) and upgraded the environment one version after the other, ending with the latest release, SANsymphony-V 10.1.

With the availability of WIK 3.0, the latest version of DataCore's MPIO software for Windows servers, they decided to upgrade their 3-node file cluster based on Windows Server 2012 R2. So far, no problem at all.

After upgrading all hosts, the cluster was working fine. Yesterday they attached a fourth node and requested that all vDisks from the other cluster members be presented to the new host. After a rescan, the DataCore GUI showed only about a quarter of the vDisks (in sum the system has 60 disks attached) as actually presented to the system. Furthermore, the Windows disk manager wouldn't start at all; it crashed while collecting information about the presented disks. So a reboot was done, but after the reboot the situation remained the same.


SSY-V client manual path change


After upgrading from SANmelody/SANsymphony to SANsymphony-V and also updating Datacore's MPIO to v3.0.2 or HIK, manually changing the active path from within a Windows client OS is no longer supported. You can right-click on the available paths, but all choices are greyed out, so you can't select them.

Sometimes you want or need to manually set the active path from the client, so how can you do that with SSY-V? It depends on the client's OS: you can use the native MPIO software on the client. In VMware's ESX, for example, you can do it via the path policy at the datastore/LUN level.

In Linux you can do it by using the multipathd tools.

But what about Windows? Well, this is not supported anymore. If you still want to manually change the active path, you can go to the SSY-V GUI and change the "Preferred server" option. This causes the active path to move to the newly configured preferred server. It works with both ALUA and non-ALUA enabled Windows hosts (by the way, ALUA activation is only supported with the Windows native MPIO software; do not enable ALUA when using Datacore's MPIO/HIK).
After changing the preferred server you do not have to rescan for devices manually; the change takes effect automatically after a few seconds (30-45 seconds in my tests, check within the Datacore MPIO console).
The path change will not interrupt I/O; it is transparent to your OS (tested with a running IOmeter workload on the affected volume).

One short note: if you have multiple paths from the same Datacore server to the client, you can only change which Datacore server serves the disk; you cannot define which path from that server will be active (or better said, I haven't found a solution for this so far).

SAN security


With DataCore (and other products as well) it's quite easy nowadays to deploy a geographically distributed storage system to raise availability beyond the scope of a single datacenter. But because it is so easy to implement, most companies forget to think about the security impact such a scenario brings. When we look at how datacenters are connected, most of the time we're talking about fibre-optic cables connecting the switching infrastructure. This connection is considered safe by itself, but is it really?


Datacore SSY-V 9 vCenter integration


Datacore implemented a VMware vCenter integration in their SSY-V 9 product.

With this integration you can now import your vCenter server into Datacore and manage all storage-related tasks via the SSY-V GUI. Do not confuse this new feature with the vSphere client plugin from Datacore; that is a separate plugin that gives you the ability to create, serve, and unserve vDisks from within the vSphere client. While the vSphere client plugin needs to be installed separately, the new vCenter integration works out of the box without any additional software installation.

