SSD usage review with Auto Tiering in SSY-V

With standard SSDs and PCIe accelerators in place as Tier0 in DataCore's Auto Tiering pools, it's time to review the amount of data written to them and estimate the lifetime of both flash media types in production environments.

First, let's have a look at the two environments.

The first setup consists of two DCS with 100% direct attached storage (SAS and MDL SAS) as well as four HP SSDs (SATA 3GBit/s + SAS 6GBit/s) in two RAID1 sets per DCS. The SATA SSDs have a capacity of 200GB each, the SAS SSDs 800GB. The pool size is ~48TB and is 68% used, which translates to a flash/harddisk ratio of 1:33. The allocation map shows that all hot blocks reside on the flash storage.
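
As a back-of-the-envelope check of that ratio (a sketch using the numbers above; the usable flash is the 200GB RAID1 pair plus the 800GB RAID1 pair per DCS):

    # Flash/harddisk ratio for setup 1 (figures from this post)
    pool_size_tb = 48.0      # ~48TB pool
    used_fraction = 0.68     # 68% allocated
    flash_tb = 0.2 + 0.8     # usable flash: 200GB pair + 800GB pair

    used_tb = pool_size_tb * used_fraction   # ~32.6TB of allocated blocks
    print(f"flash/harddisk ratio: 1:{used_tb / flash_tb:.0f}")   # -> 1:33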

The application servers on top are eight VMware vSphere 5 servers as well as 10 Hyper-V and three Citrix XenServers. A total of ~140 VMs run on those hypervisors, providing file, print, database, mail and Citrix provisioning services.


HP specifies the endurance of the SATA SSDs as 1.5PB written or 1.5 drive writes per day (DW/D). For the SAS SSDs the numbers are 14.6PB written and 10 DW/D over a 5-year period.
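
A DW/D rating and a total-bytes-written rating are two views of the same write budget. A minimal sketch with the SAS figures (assuming a 5-year period at 365 days per year), which lines up exactly with HP's number:

    # Endurance: total bytes written = capacity x drive writes per day x days
    capacity_tb = 0.8      # 800GB SAS SSD
    dwpd = 10              # rated drive writes per day
    days = 5 * 365         # 5-year period

    total_pb = capacity_tb * dwpd * days / 1000
    print(f"{total_pb:.1f}PB")   # -> 14.6PB, matching HP's figure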

The HP utilities won't tell you how much data has actually been written to the SSDs, but they do report the power-on hours, the estimated lifetime remaining and a health percentage. For the SATA SSDs the numbers are:

  • Power-On hours: 9433 (393 days)
  • Estimated lifetime remaining: 21938 days
  • Health remaining: 98.24%


For the SAS SSDs the numbers are:

  • Power-On hours: 1798 (75 days)
  • Estimated lifetime remaining: 41545 days
  • Health remaining: 99.82%
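
Interestingly, both "estimated lifetime remaining" values match a simple linear extrapolation of the health counter over the elapsed power-on time. That formula is my assumption, not anything documented by HP, but it reproduces the reported values quite closely:

    # Assumption: lifetime = linear extrapolation of health consumption
    def remaining_days(power_on_days, health_remaining):
        consumed = 1.0 - health_remaining
        total = power_on_days / consumed     # days until health hits 0%
        return total - power_on_days

    print(remaining_days(393, 0.9824))   # ~21937 days (SATA, reported: 21938)
    print(remaining_days(75, 0.9982))    # ~41592 days (SAS, reported: 41545)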

The second setup consists of two DCS with a mix of direct attached SAS and MDL SAS storage and two HGST s1122 1TB accelerator cards in each DCS. The two cards are mirrored by SSY-V to provide a higher level of fault tolerance. The pool size is 46TB and is ~70% used, which means a flash/harddisk ratio of 1:32. A look at the allocation map shows that all hot data is located on the flash cards while the harddisks only contain mid to cold blocks.

The application servers on top are six hypervisor servers running VMware vSphere 5, hosting ~80 VMs with file, print, database and mail server systems. So this can be considered a fairly typical setup, and the data collected should apply to most environments out there.

HGST specifies the endurance as 66PB written per card, which translates to 24 full drive writes per day (DW/D) over a five-year range. The accelerator cards were installed roughly 260 days ago (6209 power-on hours) and show the following statistics:

  • Blocks read: 106,713,684,467 (~107 billion blocks)
  • Blocks written: 518,013,049,924 (~518 billion blocks)
  • Remaining health: 100%

With a block size of 512 bytes this translates to ~55TB read and ~265TB written. This ratio is quite interesting: the flash device writes about five times the amount of data it reads, so Auto Tiering seems to be a rather write-intensive feature.
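
Spelled out as a sketch (the effective drive writes per day over the ~260-day runtime is my own derivation from the counters above, not a vendor number):

    # Convert the block counters to TB and derive the effective DW/D
    BLOCK_BYTES = 512
    blocks_read = 106_713_684_467
    blocks_written = 518_013_049_924

    read_tb = blocks_read * BLOCK_BYTES / 1e12        # ~54.6TB read
    written_tb = blocks_written * BLOCK_BYTES / 1e12  # ~265.2TB written

    days, card_tb = 260, 1.0                          # runtime, card capacity
    print(f"{written_tb / read_tb:.1f}x more written than read")  # ~4.9x
    print(f"{written_tb / days / card_tb:.2f} DW/D")  # ~1 DW/D vs. the rated 24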

Nevertheless, compared to the rated 24 DW/D this write volume (roughly one drive write per day) is extremely low, which is why the health counter is still at 100%.

Conclusion: both SSDs and PCIe accelerator cards are perfectly suited for use as Tier0 in storage systems. Regarding lifetime, PCIe cards will probably offer more than SSDs, but even low-end and midrange SSDs are fine for Auto Tiering solutions. The write rate in environments like the two above isn't that high, so you don't have to fear that your flash devices will fail within a short period of time.

Regarding performance, PCIe cards offer a lot more, and their capacities are higher too. On the other hand, PCIe cards occupy PCIe slots and are therefore less flexible and scalable.

No matter which form factor you use and what reliability you are offered, treat flash the same way you would treat good old harddisks: always use RAID to protect against an unexpected failure, or make sure that the failure of one of your flash devices won't tear your storage down.

Comments

Guest - Marcel Mertens:

This will be exactly the same at our customer installations. Last week I updated our first FusionIO-in-DataCore server customer to SYMV10. Usable space per DataCore server: 33TB, with 1.2TB FusionIO as Tier1 at 80% pool usage. ioSphere shows a write endurance of 17PB; after one and a half years, 0.9PB of that has been used. So I think even read-intensive SSDs are fine for DataCore Tier1.
