Usually we use DataCore's storage virtualization software SANsymphony-V with entry-level storage systems like HP's P2000 series or Fujitsu Eternus DX80/90 systems, or even direct-attached SAS storage in the storage servers themselves. This is a quite common usage scenario, as the HA features like synchronous mirroring and the performance improvements from large amounts of cache don't have to be provided by the storage system itself but rather by the storage virtualization software. This keeps initial hardware costs as low as possible while still giving you quite decent performance and data safety.
Recently we completed a project for a customer who required a fully mirrored storage system providing an initial 130TB of mixed SAS and SATA capacity, with the ability to scale out to 250TB.
The old-fashioned approach would have been to use several entry-level storage systems equipped with SAS and SATA disks in separate boxes. Using more storage controllers gives us more cache and more CPU power to spread the load. Furthermore, splitting SAS and SATA across different storage systems and creating different pools within DataCore allows a complete storage system to fail without compromising the other pool's/tier's redundancy.
We chose Fujitsu as our hardware partner, as we have been really happy with their products and support over the last few months.
When we drafted the initial configuration with the low-end DX90 S2 systems, FTS asked us why we didn't use the midrange Eternus DX440 S2 instead. Normally the DX440 S2 is much more expensive than the DX90, but since we had planned a total of eight DX90 S2 systems, the prices weren't that different.
The main advantage of the DX440 over other storage vendors is that it is a high-performance system that behaves like an entry-level storage system. There is no "virtualization" of disks happening on the DX440 that could be counterproductive to DataCore. It's a "simple" RAID controller with extremely powerful CPUs and plenty of cache - perfect for SANsymphony.
We equipped each DX440 S2 with 96GB of cache, 16x 8GBit FC modules and >300 SAS and SATA disks. This is a quite impressive hardware constellation. This setup is extremely fast even without SANsymphony and can scale out even further. To give you an idea: most storage vendors restrict the number of SSDs in a single storage system to avoid overwhelming the controller resources. The DX440 S2 controller is so fast that there simply is NO restriction on the number of SSDs you can use. Quite cool....
Back to the story.... we installed two of these systems, each behind a Primergy RX350 S7 server with 8GBit FC cards, 192GB of RAM and two fast quad-core CPUs. SSY-V 9.0 PSP2 on top of Windows Server 2012, and that's it.
To get the most performance out of this box, we decided to use the native FC drivers from QLogic for the backend storage so we could use the Eternus native MPIO software. This way we get excellent load balancing over all available paths without manual interaction. (Side note: since the Eternus has 16 FC ports and the server has 4 FC ports, the theoretical number of paths between the server and each LUN is 64. The Eternus MPIO for the DX440 only supports 8 paths per LUN, so we had to restrict the path count - but even with 8 paths per LUN you have more than enough bandwidth available.)
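The path arithmetic from the side note can be sketched quickly (the port counts are the ones from our setup; the 8-path cap is the Eternus MPIO limit just mentioned):

```python
# Multipath arithmetic for our setup.
server_ports = 4     # FC ports per Primergy RX350 S7 server
array_ports = 16     # FC ports on the Eternus DX440 S2

# In theory, every server port can reach every array port:
theoretical_paths = server_ports * array_ports
print(theoretical_paths)  # 64

# The Eternus MPIO driver for the DX440 only supports 8 paths per LUN,
# so the usable path count per LUN is capped:
MPIO_PATH_LIMIT = 8
usable_paths = min(theoretical_paths, MPIO_PATH_LIMIT)
print(usable_paths)  # 8
```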
Initializing the two storage pools ran at ~3-4GB/s, temporarily saturating all backend FC ports on the DCS. Really impressive, especially the perfect load balancing across all FC ports.
Storage migration was quite simple. The customer already had DataCore in place, so we simply expanded the existing 2-node cluster with the new nodes (keep the needed licenses in mind), split and unserved the vDisks one by one and re-established the mirror with the new nodes. Split & Unserve is necessary because with the current version of SSY-V you can't simply replace one half of a mirrored disk with a new disk from a third server - the SSY-V configuration engine doesn't yet have the intelligence required for that, so it is not possible yet (it will be available in a future release).
With 130TB fully mirrored, you don't want to be caught by the Achilles' heel of SSY-V: a crash or unclean shutdown of a storage server causes a full rebuild - no fun, even with a DX440 in the backend. So we decided to go a new way once more: we set all vDisks to write-through mode, which disables write caching for all vDisks. Why? Because if write caching is disabled and one of your DCS nodes goes down, there is no need for a full resync. Since no write data is cached, there can't be any inconsistency, and SSY-V always has a consistent state.
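To put a number on that Achilles' heel, here is a back-of-the-envelope estimate. It assumes the ~3-4GB/s we saw during pool initialization could be sustained for the whole rebuild - which is optimistic, since a production system has to serve regular I/O at the same time:

```python
# Hypothetical full-rebuild estimate for the mirrored capacity.
# Assumes an (optimistic) sustained resync rate of 3.5 GB/s.
mirrored_capacity_tb = 130
resync_rate_gb_per_s = 3.5

total_gb = mirrored_capacity_tb * 1000        # TB -> GB (decimal units)
rebuild_hours = total_gb / resync_rate_gb_per_s / 3600
print(round(rebuild_hours, 1))  # ~10.3 hours - under ideal conditions
```

In practice, with the resync competing against production load, the rebuild would take considerably longer than these ten-plus hours.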
Phew, this is quite uncommon. DataCore always says it gets its performance from its huge cache. That is true, but only a tiny amount of RAM is actually used for write caching (in every environment we have seen, the write cache was below 1GB); the rest is used as read cache (see my post about caching in SSY-V on this blog).
With our DX440 systems and plenty of cache installed in the controller modules, write caching can easily be handled by the storage system. There is no need for DataCore's write caching.
By the way, you can enable/disable write-through mode on a per-vDisk basis. This way you can enable the SSY-V write cache on your most demanding volumes and disable it on the huge "no matter how fast" archive volumes. Even if one of your DCS nodes crashes, a full rebuild is only done for the vDisks that have write caching enabled. The huge archive volumes are back in sync after a short log recovery.
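A toy illustration of why the per-vDisk setting pays off. The vDisk names and sizes below are made up; the point is that after a DCS crash, only the vDisks with write caching enabled need a full resync:

```python
# Hypothetical vDisk layout: (name, size in TB, write_cache_enabled)
vdisks = [
    ("vm-tier1",  10, True),    # demanding workload: SSY-V write cache on
    ("db-tier1",  10, True),
    ("archive-1", 55, False),   # write-through: short log recovery only
    ("archive-2", 55, False),
]

# After an unclean DCS shutdown, only write-cached vDisks fully rebuild:
full_rebuild_tb = sum(size for _, size, cached in vdisks if cached)
log_recovery_tb = sum(size for _, size, cached in vdisks if not cached)

print(full_rebuild_tb)   # 20  -> full resync needed
print(log_recovery_tb)   # 110 -> in sync after a short log recovery
```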
So for now, the combination of a fast storage system like the DX440 S2 and DataCore's SANsymphony-V looks like a perfect setup. Even under high load, the write cache on the DX440 absorbs nearly 90% of all I/Os, so we see no disadvantage in not using SSY-V's write cache.