3PAR StoreServ peer persistence in iSCSI environments

Published: Tuesday, 15 September 2015 20:05

The peer persistence feature of the 3PAR StoreServ family brings a transparent failover function to the mid-range storage systems from HP, a feature absolutely fundamental for the European, especially the German, market. Nearly two thirds of all our storage projects are based on a two-datacenter concept with synchronous mirroring between the sites and fully automated failover in case of a site failure. Transparent failover on the storage side is one of the key concepts behind these environments.

A second evolving technology is iSCSI. For quite a long time, iSCSI was called the FC successor, but it never caught up with FC if you ask storage admins. They like the ease of FC, the reliability, the small overhead, the raw performance. Especially in the performance area, though, iSCSI has caught up and outpaced FC. While FC is still limited to 16 Gbit/s, Ethernet is available at 100 Gbit/s and evolving further. Don't ask whether a 40 Gbit/s or even 100 Gbit/s SAN makes sense in the mid-range market, but if you want to spend a lot of money and have a storage system capable of saturating 40 Gbit/s and more, feel free to do so. With FC you are still at 16 Gbit/s. More than enough for 99% of all mid-range companies.
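A quick back-of-the-envelope calculation shows what these link speeds mean in practice. The 10% protocol overhead below is a rough assumption for illustration (FC-2/SCSI framing on one side, TCP/IP/iSCSI on the other), not a measured or vendor-published value:

```python
def usable_mb_per_s(nominal_gbit, protocol_overhead=0.10):
    """Rough usable throughput of a link, in MB/s.

    protocol_overhead is a hand-wavy allowance for framing and
    transport overhead; real numbers depend on workload and config.
    """
    return nominal_gbit * 1000 / 8 * (1 - protocol_overhead)

# Compare the links discussed above, using their nominal marketing speeds.
for name, gbit in [("16G FC", 16), ("10GbE", 10), ("40GbE", 40), ("100GbE", 100)]:
    print(f"{name:>7}: ~{usable_mb_per_s(gbit):,.0f} MB/s usable")
```

Even with generous overhead assumptions, a single 40GbE or 100GbE port outruns anything a typical mid-range array can push, which is exactly the point made above.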

So iSCSI is used not only because you can scale it up to the speeds mentioned above, but rather because it is based on Ethernet technology. Most admins are familiar with Ethernet: the cabling, the whole handling, the monitoring and analyzing aspects, and everything else around it. So more and more companies give iSCSI a try and ask for a 10GbE-based SAN.

That's where things get weird with HP's 3PAR systems. iSCSI itself is not a problem for the StoreServ systems: all models can be equipped with additional 10GbE iSCSI cards, although there is not a single model that comes with iSCSI interfaces only. Two FC ports are always available on each controller, fixed on the controller board itself. I also don't want to dwell on the price of the additional 10GbE cards. These are standard cards you could also install in any server system, but HP charges a lot of money for them PLUS an additional charge for support of the NICs. So spending several thousand Euro just for two 10GbE ports is not a problem with a 3PAR.

Back to the main topic. Imagine an environment completely based on iSCSI. You ask for iSCSI support on the 3PAR, you get a "Yes, we can". You ask for the peer persistence technology, you get a "Yes, we can". You ask for a combination of both technologies, you get a "Yes, we can, but...". Hopefully you get the "BUT" and not only the "Yes, we can" statement from your consultant. The BUT means you can combine the two technologies, but only with one of the following limitations:

- the synchronous mirror (Remote Copy) links between the two arrays run over FibreChannel, or
- the mirror links use the single built-in 1GbE Remote Copy port on each controller.

Looking at the first option: no one who wants an Ethernet-only solution will build a second SAN based on FibreChannel just to get the mirror links running. So why not use FC Arbitrated Loop and connect one controller of 3PAR A directly to one controller of 3PAR B? I don't know if this is a supported setup, but I would never recommend it. The mirror links normally get their redundancy from the ability to reach the partner system via multiple paths, and a direct connection won't give you that in the way the storage system needs it. The second issue: what if you only have single-mode fibre between the two storage systems? Even if you are able to replace the default multi-mode SFPs with single-mode versions, this setup is definitely not supported. You won't even get this option from the HP configuration tools, so simply forget it.
The result: if you want a fully supported and fast interconnect between two 3PAR storage systems, you HAVE TO run an additional FC fabric alongside your iSCSI fabric.

Second option. It sounds silly, but at the moment the only way to have Ethernet on the mirror link is to use the single 1GbE port normally used for asynchronous replication between two systems. This 1GbE port is on the controller by default and you can't add more of these ports, so you are bound to 1 Gbit/s of mirror capacity per controller. Would you ever think about spending several tens or hundreds of thousands of Euro/Dollar just to have two perfectly performing storage systems connected to each other at 1 Gbit/s? I don't think so. BUT, and that's the funniest thing of all, this setup is fully supported by HP. LOL!
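To put that 1 Gbit/s figure in perspective: with synchronous mirroring, every host write has to cross the Remote Copy link before it is acknowledged, so the link caps the sustained write throughput of the whole array pair. A small sketch (the 10% TCP/IP overhead factor is an assumption for illustration):

```python
# Why a 1 Gbit/s Remote Copy link throttles a synchronously mirrored
# array: acknowledged writes cannot flow faster than the mirror link.

LINK_GBIT = 1.0    # the single built-in RCIP port per controller
OVERHEAD = 0.10    # assumed TCP/IP framing overhead, for illustration

link_mb_s = LINK_GBIT * 1000 / 8 * (1 - OVERHEAD)

# How many writes of a given size fit through that pipe per second?
for write_kb in (4, 8, 64):
    iops = link_mb_s * 1024 / write_kb
    print(f"{write_kb:>2} KiB writes: ~{iops:,.0f} write IOPS max over the mirror link")
```

Roughly 112 MB/s of sustained writes per controller, on arrays whose flash tiers can do an order of magnitude more. That is the mismatch the paragraph above is complaining about.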

You may ask: why not simply add more of these "cheap" 10GbE cards and use them for mirroring? Sounds easy and logical, but a look at the QuickSpecs of the latest StoreServ 8000 family (it's the same for the older 7000 series) shows:

"Host adapters can be ordered separately to be installed in the field or they can be factory configured into controller nodes. Host adapter cards provide the array with additional FC ports, with 10Gb/s iSCSI/FCoE ports, or with 1GbE/s and 10Gb/s Ethernet ports. The additional FC ports can be used for connection to hosts or used to connect to other HP 3PAR StoreServ Storage systems in a Remote Copy relationship. The iSCSI/FCoE ports permit host connection in iSCSI and FCoE environments. The Ethernet ports can be used only with the HP 3PAR File Persona Software Suite for File services connectivity."

HP simply doesn't support Remote Copy (the technology behind synchronous mirroring and thus peer persistence) over the 10GbE ports. In one of my recent projects this was a showstopper for a 3PAR deployment. I went to the HP Partner Days and asked one of the HP storage guys whether this is a misunderstanding, or when this "restriction" will go away. Obviously I was the first one to ask, because they had to consult several other HP guys, and a few minutes later the simple answer was: "It's not supported because there are other features with higher priority".

I really enjoyed seeing peer persistence come to the 3PAR systems, but it seems the feature is not fully implemented at this time. I understand HP not shouting this restriction to the world, but it's not a new functionality either. I would have expected iSCSI support for this feature to be completed by the time the second generation (the 8000 series) came to market, but obviously adding more theoretical maximum IOPS, or whatever marketing can use to show how cool the new generation is, was more important.