Using SANsymphony-V with HP SmartCache

Published: Friday, 10 October 2014 13:54

HP SmartCache for SmartArray RAID controllers is a controller-based caching solution that uses locally attached SSDs on the SmartArray controller as a read cache and, with the new x4x series, also as a write cache.

SmartCache is fully transparent to the application layer, as all the logic lives in the controller chip. The OS and applications only see faster response times but have no idea why :-)

This feature sounds pretty cool, as it is supposed to be an alternative, or even a complement, to auto-tiering features in storage software such as DataCore's SANsymphony-V, where direct-attached storage is used as the main storage source. Furthermore, as it is transparent to the storage hypervisor, no special certification or approval is needed.

Let's have a closer look at what SmartCache really is and how to use it.


First you need a ProLiant Gen8 or Gen9 server and a supported SmartArray controller. SmartCache is supported on the x2x series and above, so you need at least a 420, 421 or 821 controller. The x2x and x3x series can only use SSDs as a read cache; write caching is only supported on the latest x4x series. Besides the controller you need the SmartCache software license. Only on the 800-series controllers is the software already onboard. The license has a list price of $299, including 1 year of technical support. Last but not least, to use the SmartCache feature your SA controller needs at least 1GB of flash-backed write cache (FBWC), as the metadata is stored in the controller cache and needs at least 512MB. HP recommends 1GB of FBWC for every 1TB of SSD storage used as SmartCache. To have some room to grow, and to leave cache genuinely available to the controller for regular caching, the minimum is 1GB. That is not a problem, as 1GB is more or less the standard; the x3x and x4x even come with 2/4GB.
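The FBWC sizing rule above can be sketched as a quick calculation. The 1GB-per-1TB ratio and the 1GB floor come from the text; the function name is my own:

```python
def required_fbwc_gb(ssd_cache_tb: float) -> float:
    """Recommended FBWC size for a given amount of SmartCache SSD capacity.

    HP's rule of thumb: 1 GB of FBWC per 1 TB of SSD cache, with an
    absolute minimum of 1 GB (metadata alone already needs at least 512 MB).
    """
    return max(1.0, ssd_cache_tb * 1.0)

print(required_fbwc_gb(0.4))  # a single 400 GB SSD -> the 1 GB minimum applies
print(required_fbwc_gb(2.0))  # 2 TB of SSD cache -> 2 GB of FBWC recommended
```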

Now that you have met all the prerequisites, it's time to choose an SSD option to store your caching data. Generally speaking, every supported SSD in a Gen8 or Gen9 server can be used for SmartCache. HP distinguishes between a whole bunch of models, ranging from SSDs only meant to be used as boot volumes up to high-performance/high-endurance enterprise models that can handle several full writes per day. Prices climb with every bit of added performance or endurance, so choosing the right SSD for your needs isn't that easy. The most important thing to consider is whether you want to use SmartCache only for read caching or for write caching as well. For read-only caching an SSD failure is not a problem, as all data is also on the bulk storage, so this doesn't require a high-endurance SSD. High performance is important all the time; that's why you chose SmartCache in the first place. If you want write caching as well, a reliable SSD (perhaps in a mirrored configuration) should be the way to go, but that will be quite expensive. Since the HP tools normally tell you how long your SSD will keep on working, I think it is acceptable to go with a mid-endurance, high-performance SSD in a non-mirrored setup.

Looking at SSD sizing, HP recommends a cache of at least 5-10% of the size of the bulk storage. That means if you have 100TB of configured capacity (not raw, this means usable capacity), you should have at least 5-10TB of SSD cache. And this is the minimum; higher ratios can help accelerate things further. That's why SmartCache only seems wise to implement in small to mid-size environments. If you also take the recommendation above into account, you would need 5-10GB of FBWC (1GB of FBWC for every 1TB of SSD) on your controller to support 100TB of bulk storage. That's simply not possible, so the FBWC is the limiting factor here. The biggest FBWC option available is 4GB. Usable for SmartCache is ~3GB, which translates to a maximum of ~3TB of SSD cache. This in turn translates to a maximum bulk storage capacity of ~30TB per SmartArray controller.
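Working the numbers backwards gives the ~30TB ceiling mentioned above. This is only a sketch; the 1GB reserve for ordinary controller caching is my assumption based on the minimum stated earlier:

```python
FBWC_TOTAL_GB = 4.0       # largest FBWC option on these controllers
FBWC_RESERVE_GB = 1.0     # assumed headroom left for regular controller caching
GB_FBWC_PER_TB_SSD = 1.0  # HP's recommended ratio
CACHE_RATIO = 0.10        # SSD cache as a fraction of bulk capacity (upper bound)

usable_fbwc_gb = FBWC_TOTAL_GB - FBWC_RESERVE_GB        # ~3 GB for SmartCache metadata
max_ssd_cache_tb = usable_fbwc_gb / GB_FBWC_PER_TB_SSD  # ~3 TB of SSD cache
max_bulk_tb = max_ssd_cache_tb / CACHE_RATIO            # ~30 TB of bulk storage

print(max_bulk_tb)  # -> 30.0
```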

As you can see, this configuration is not very scalable on a single RAID controller, but it can scale with several controllers in the same server. That's exactly what software like DataCore SANsymphony-V supports, and it is even best practice in a DAS environment.

One more configuration topic: SmartCache uses so-called CacheVolumes. CacheVolumes are logical volumes carved from the SSD(s), freely choosable in size. These volumes are dedicated to the bulk storage volumes as cache areas. This way you can freely choose which volume, at which RAID level, will benefit from SmartCache. The only limitation here: you can create a maximum of 32 cache volumes, dedicated to a maximum of 32 standard volumes. The total number of volumes must not exceed 64.
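As a rough sketch, creating a CacheVolume with the Smart Storage Administrator CLI of that era (hpssacli) looks roughly like this. The slot number, drive IDs and target logical drive are placeholders, and the exact syntax may vary between tool and firmware versions:

```shell
# Placeholder values: controller in slot 0, SSDs at 1I:1:9 and 1I:1:10,
# accelerating the existing logical drive 1.
hpssacli ctrl slot=0 create type=ldcache drives=1I:1:9,1I:1:10 datald=1

# Show the configuration to verify the new cache volume and its association
hpssacli ctrl slot=0 show config detail
```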

Looking at the costs so far: $300 for the license key, $600 for the SmartArray controller (if not already included in the server; starting with Gen9 systems, no SmartArray controller is included anymore except a basic B140i SATA controller) and about $3500 for a 400GB mainstream-endurance SAS SSD. That is $4400 for each SA controller you want to speed up.

With DataCore in place you could also add more RAM to your DCS and use it for read and write caching. ~400GB is not a problem for modern servers and will cost ~$5000 if you go for 24x16GB (384GB) modules.

Another option is the AutoTiering feature, but this requires SSDs or PCIe accelerators as well, pushing costs into the several thousands, plus the AT feature license for SANsymphony-V at an additional $5000 per node.
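Putting the three options side by side, using the 2014 list prices quoted above (purely illustrative, per node; the SSD price for the AutoTiering option is assumed to match the SmartCache SSD):

```python
# Per-node price breakdown for the three acceleration options (USD, 2014)
options = {
    "HP SmartCache": {"license": 300, "controller": 600, "ssd_400gb": 3500},
    "DataCore RAM cache": {"ram_24x16gb": 5000},
    "DataCore AutoTiering": {"at_license": 5000, "ssd_or_pcie": 3500},
}

for name, parts in options.items():
    print(f"{name}: ${sum(parts.values())}")
```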

With these prices in mind, the SmartCache solution seems to be a model worth thinking about. But what about the real performance gain you can expect from SmartCache in DataCore environments? According to internal DataCore sources, the average performance gain with SmartCache will be below 10%. It highly depends on the environment, the workload and the cache/bulk-storage ratio, but you shouldn't expect more than a low two-digit percentage uplift.

The most important thing to note here is: SmartCache is neither a bad idea nor badly implemented. It is a solution built for a special purpose and cannot be considered a general-purpose performance afterburner. You can increase performance for heavily read-oriented workloads that hit the same blocks of storage most of the time, but in a highly random, widely spread environment like a storage virtualization node, the benefit is not as high as you might expect.