HP StoreVirtual manager requirements


During the last two years I have installed several HP StoreVirtual (formerly Lefthand) P4000 storage systems. These systems are quite easy to set up and manage and provide very decent performance. The only really annoying thing is the misleading information about how many regular/virtual managers or failover managers are required to tolerate system outages.

I checked all the available papers on the subject, spoke to HP storage employees and also ran some tests, and finally I can give you some information on how to set up your StoreVirtual environment to keep it fault tolerant.
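As a quick illustration of what the full article covers: the managers in a management group form a majority-vote quorum, so the number of manager failures a group can tolerate follows directly from the manager count. The sketch below only demonstrates that majority rule; it is not an HP tool, and the example manager counts are my own.

```python
# Minimal sketch of the majority-quorum math for StoreVirtual managers.
# Regular, virtual and failover managers each count as one vote.

def tolerated_manager_failures(manager_count: int) -> int:
    """A management group stays online as long as a majority of its
    managers is reachable, so it can lose floor((n - 1) / 2) of them."""
    if manager_count < 1:
        raise ValueError("a management group needs at least one manager")
    return (manager_count - 1) // 2

for managers in (1, 2, 3, 5):
    print(f"{managers} manager(s): survives "
          f"{tolerated_manager_failures(managers)} manager failure(s)")
```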

Read more ...

SMH username and password for P4000 systems


Ever wanted to use the System Management Homepage (SMH) of a P4300 or P4500 storage node? Since it is a standard HP server with a Linux-based OS installed on it, the SMH is supported as well, and you can reach it at https://ip-of-server:2381.
It is enabled by default and you can use it just as you would on a DL or ML server. But what about the logon credentials?
No dedicated SMH username/password is configured on a P4000 storage node, and even if the node is part of a management group, you cannot use the credentials you entered while adding it to the group.

 

For this case, HP set a default username/password combination that you can use to log on: use "sanmon" as the user and "sanmon" as the password, both without the quotes.
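If you just want to verify that the SMH is answering on a node before opening a browser, a small reachability check like the one below can help. This is only a sketch: the node IP is a placeholder, and certificate verification is disabled because the SMH typically presents a self-signed certificate.

```python
# Quick reachability check for the System Management Homepage of a
# P4000 node. The IP address is a placeholder; adjust it to your node.
import requests
import urllib3

urllib3.disable_warnings()  # the SMH usually uses a self-signed certificate

node = "192.168.1.10"  # placeholder IP of the storage node
url = f"https://{node}:2381"

try:
    response = requests.get(url, verify=False, timeout=5)
    print(f"SMH answered with HTTP {response.status_code} at {url}")
except requests.RequestException as err:
    print(f"SMH not reachable at {url}: {err}")
```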

Quick note on new Lefthand OS 10.0 (formerly SAN/IQ)


Today I upgraded one of our customers' HP Lefthand P4500 installations.

The installation consists of two P4500 nodes and a virtual failover manager running on VMware vSphere 5.

Before the upgrade, all nodes were running version 9.5 with some patches; the target was version 10.0 with the latest patches.

Upgrading the CMC is very easy, either via the CMC upgrade function or manually by downloading the CMC from the HP website and installing it over the existing one.

The very good news for CMC 10.0 is the support for a proxy to download patches and the ability to skip unneeded patches. Most of you familiar with the older CMC versions know that ALL patches available on HP's FTP server were always downloaded, resulting in 2-3 GB for an initial download. The speed of HP's FTP server isn't always good, so this sometimes took several hours. Since you had no option to select which patches were to be downloaded, you even downloaded patches for SAN/IQ version 7 or 8... not very efficient.

With the new options (you can configure them by editing the CMC preferences) you can choose automatic or manual download, all or selected patches, and the bandwidth used for the download (only "fast" or "normal" bandwidth usage). And, as already mentioned, you can now set a proxy, so the CMC no longer needs direct internet access.

 

After upgrading the CMC and downloading all the needed patches, it was time to start the upgrade of the two P4500 nodes and the FOM. The CMC updater told me that "only" the new SAN/IQ version, the firmware update for the P410 controller and the firmware update for the SAS disks would be installed. For the FOM, only the update to version 10.0 had to be applied.

Cool, seems to be a fast update session... unfortunately not. Updating all three nodes with the patches mentioned above took nearly three hours. Most of the time the CMC is simply waiting for a successful connection to the nodes after a reboot, and for each node and reboot this takes more than 15 minutes! It doesn't matter whether SAN/IQ is upgraded as a whole or only the P410 firmware is being updated, you always have to wait for several minutes. I don't want to imagine how long it takes to upgrade a big installation with several nodes and some more patches. I don't know why it takes so long: the nodes are reachable via ping 1-2 minutes after the reboot, so I have no idea what causes the CMC to wait that long.
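To get a feeling for how long a node is actually unreachable during such an upgrade (as opposed to how long the CMC keeps waiting), a simple ping loop is enough. The sketch below is just that: the node address is a placeholder, it uses the Linux ping syntax, and it only measures ICMP reachability, not whether the SAN/IQ services are back.

```python
# Rough sketch: measure how long a storage node stays unreachable
# during a reboot by pinging it once per second (Linux ping syntax).
import subprocess
import time

node = "192.168.1.11"  # placeholder IP of the node being upgraded
down_since = None

while True:
    reachable = subprocess.call(
        ["ping", "-c", "1", "-W", "1", node],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) == 0
    if not reachable and down_since is None:
        down_since = time.time()          # node just went down
    elif reachable and down_since is not None:
        print(f"node answered pings again after "
              f"{time.time() - down_since:.0f} s of downtime")
        break
    time.sleep(1)
```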

So please keep in mind that upgrading P4000 systems from 9.5 to 10.0 will take a serious amount of time!

 

vSphere and P4500 G2


Last month we installed three new vSphere 4.1 U1 servers (HP ProLiant DL380 G7) with plenty of RAM and CPUs on top of an HP Lefthand P4500 G2 SAS storage system. The SAN is iSCSI-based, built on ProCurve 2910al switches. All components are redundantly attached, so we don't have any single point of failure.

 

We had to migrate 3 TB of data from the old storage to the new Lefthand storage, and everything went smoothly. Although we changed from an EVA4100 Fibre Channel storage system to "only" an iSCSI-based Lefthand storage system, the speed was perfect.

In addition to the new vSphere servers and the storage system, we also installed a new backup server. This server needs direct access to the iSCSI storage too, because we want to use SAN-based backups of the VMs.

So we installed the new server with Windows Server 2008 R2 and enabled the built-in software iSCSI initiator. Furthermore, we presented all the VMware storage to the backup server as well.

 

And that is where our problems began...

 

Read more ...

Problems upgrading SAN/IQ from 9.0 to 9.5


After running into the P4000 208-day reboot bug mentioned earlier on this blog, we decided to upgrade to SAN/IQ 9.5.

The install instructions are quite simple:

  • Download the updates via the CMC, or via any CMC that has a direct FTP connection to the internet
  • Upgrade the CMC to version 9.5
  • Check your storage nodes for unresolved errors and solve them
  • Upgrade the storage nodes one after another (this is done automatically; you cannot choose which nodes within a management group are upgraded and which are not, nor can you choose the order in which the upgrade is done)

Read more ...

FlowControl - Must-have or Nice-to-have


You can find multiple posts on the internet regarding HP P4000 (formerly Lefthand) and flow control. One third say flow control is a must-have, one third say it's nice-to-have, and one third tell you not to use it at all.

So what option do you choose?

Read more ...

Funny restart bug in SAN/IQ 9.0


A few days ago one of our customers called me and told me that all the P4000 systems in his environment had rebooted at the same time. As a result, all VMs running on these storage nodes crashed. Sure, if no P4000 is left to take over, then all I/O is blocked.

At first we thought there had been a power outage, but this wasn't the case. The event logs of the P4000 systems gave no reason why the systems rebooted, and especially not why they all rebooted at the same time.

We opened a call with HP and they told me that there is a known bug in SAN/IQ 9.0 that causes the system to reboot after exactly 208 days of uptime. Well, that explains why all the systems went down at the same time... they were all switched on at the same time :-)
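If you cannot install the fix right away, it helps to know when each node will hit the 208-day mark. A trivial calculation like the one below is enough; the node names and boot dates in the example are made up.

```python
# Sketch: given the last boot date of each node, work out when it
# reaches 208 days of uptime (the trigger for the SAN/IQ 9.0 reboot bug).
from datetime import date, timedelta

BUG_TRIGGER = timedelta(days=208)
boot_dates = {                      # example values, not real nodes
    "node-a": date(2012, 3, 1),
    "node-b": date(2012, 3, 1),
}

for node, booted in boot_dates.items():
    reboot_day = booted + BUG_TRIGGER
    remaining = reboot_day - date.today()
    print(f"{node}: hits 208 days of uptime on {reboot_day} "
          f"({remaining.days} days from today)")
```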


HP released a customer advisory that deals with this error. There you can see which versions are affected and how to solve this problem.


If you are still running 9.0 or 9.0.01, please consider upgrading to 9.5, or at least install the patch mentioned in the customer advisory above to prevent a storage outage.
