I give the replication feature a 10 out of 10. The GUI for snapshot replication gives you a lot of flexibility in scheduling asynchronous replication, controlling bandwidth, and allotting disk-rebuild resources.
High Availability is a 10 out of 10, too, for having redundant RAID controllers per tray and the ability to build an HA Multi-Site configuration.
I also like the easy setup of these units. We get project bids with zero lead time, and when you have to build out a facility on a tight schedule, it helps to have a quick, easy install and an intuitive GUI. Running updates on these systems is nice and easy. The support staff are also very good.
How has it helped my organization?
When we bid on projects and scope out the work, we usually follow one of three design iterations -- a Multi-Site, Single-Site P4500, or Single-Site P4300 class setup -- depending on what is needed. All three build-outs use the same CMC and basic setup, which helps standardize our process and get a handle on costs and budgeting for these projects. We’ve built a multi-tiered storage solution for our customers using one product.
What needs improvement?
For disk utilization I give it a 7 out of 10. In the typical network RAID 10 coupled with horizontal shelf RAID 5, you lose over 55% of your disk. But this is a price I am willing to pay to have highly-available storage.
I would look into using some of the technologies used in the 3PAR line. The loss of disk space due to traditional RAIDing methods is wasteful, and when you buy 14TB of disk and have 6TB usable, you sometimes whimper a little.
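To make the capacity math concrete, here is a rough sketch of the kind of calculation behind those numbers. The shelf size and RAID overheads are my assumptions for illustration (RAID 5 loses one disk's worth of capacity per shelf; network RAID 10 keeps two copies of all data), not an official HP sizing tool.

```python
def usable_capacity(raw_tb, disks_per_shelf=12, network_raid_copies=2):
    """Estimate usable TB under shelf RAID 5 plus network RAID 10.

    RAID 5 within each shelf sacrifices one disk's worth of space for
    parity; network RAID 10 then stores `network_raid_copies` full
    copies of the remaining data across nodes.
    """
    after_raid5 = raw_tb * (disks_per_shelf - 1) / disks_per_shelf
    return after_raid5 / network_raid_copies

raw = 14.0
usable = usable_capacity(raw)
print(f"{raw} TB raw -> {usable:.1f} TB usable "
      f"({(1 - usable / raw) * 100:.0f}% lost to redundancy)")
```

With these assumptions, 14 TB raw yields roughly 6.4 TB usable, a loss of about 54%, which lines up with the "over 55%" figure above once hot spares or smaller shelves are factored in.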
They need to create a separate management port to allow sending email alerts over a non-iSCSI network. As it stands, you have to allow routing from your iSCSI network and open relay on your mail server to get alerts. Other storage system models use separate management ports for event notifications.
For how long have I used the solution?
I have used the product for over six years. We previously used it in our main datacenter for four years, but then opted to go with a more enterprise solution and now use it in smaller remote site build-outs, and it's usually two or three nodes per cluster.
What was my experience with deployment of the solution?
Deployment of these devices is easy and very stable. I have added on many different trays with no problems.
What do I think about the stability of the solution?
Stability has been good except in one scenario. We had a vSphere Metro-Cluster with an HP P4000 Multi-Site setup, and the coordinating node (VIP holder) crashed in a bad way. The coordinating node was not able to transfer the VIP to a new node in time, and vSphere recognized it as a PDL (Permanent Device Loss) event. We were operational within eight minutes after vSphere rescanned for storage, although the storage node's motherboard and controller had to be replaced.
What do I think about the scalability of the solution?
For scalability, I give it a 9 out of 10. It is very easy to deploy a new shelf of disk and add a pair of controllers to your environment for increased I/O in a "pay as you grow" fashion. You just plug in the network info and add it to the existing cluster. I can throw another tray of disk into the mix and allocate disk space within a couple of hours.
How are customer service and technical support?
Tech support is good. I have always had good experiences with both phone support and on-site support staff. On-site staff went above and beyond to help with problem tickets I had open.
Which solution did I use previously and why did I switch?
We previously had MSA units, and we chose the P4000 class as the next-step solution for us. We will be evaluating the HP StoreServ 8000 series for these remote-site setups. We currently use the HP StoreServ 7000 series in our main datacenter and may potentially move to that solution if we determine there are cost savings and ease of setup.
How was the initial setup?
It was easy. Single-sites and multi-sites both had a similar setup. From a cabling perspective, you just plug your 10Gb or 1Gb connection into the switch and off you go. Once you install the CMC and plug in the network information on the nodes, the units are found in the CMC and you can build out the site and cluster.
What about the implementation team?
We built ours in-house. The degree to which you are engaged in the setup will determine whether you need outside expertise. Working hand-in-hand with your network team and VMware/server team will help. You basically need the network in place before you configure the nodes. Then, after configuring the nodes and building out clusters/volumes, you need to engage the members of your team who will present the volumes to VMware or Windows servers.
What was our ROI?
When we sign five year contracts to build a facility, we expect the storage units to last that time, and they do.
What's my experience with pricing, setup cost, and licensing?
The price paid for a highly-available solution weighs in here. If one of our facilities is down for an hour, we stand to lose a lot of money (automotive assembly), so uptime matters. Assuming a unit lives out its five years and is then demoted to second-tier storage for other aspects of our company's needs (backups, file retention, etc.), that is really the only way I feel I can determine ROI, and in this regard I feel our ROI is good.
What other advice do I have?
It’s a solid product and you can roll these units out like nothing. We have standardized our deployments on these models. We will be re-evaluating soon, and if we move on, I will miss the easy setup and GUI.
**Disclosure:** IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.