Pavilion HyperParallel Flash Array Review

Good support, improves performance, scales well, and boosts team efficiency


What is our primary use case?

I use this product for high-speed parallel storage, multi-user asset storage, and production storage. Basically, it's the high-speed tier of our production pipeline and backend.

How has it helped my organization?

The solution's performance and density are excellent.

Typically, there is a trade-off. You can have incredibly dense storage in a small footprint, but accessing it then requires a lot of horsepower, which ends up counterbalancing the small footprint. Alternatively, you can have very fast access to a storage array, but that usually requires a much more extensive infrastructure.

Striking that balance in a single chassis, in 4U of rack space, is unheard of. You have the processing needed to access the data and almost a petabyte of flash, all accessible from one box.

It's a very small footprint, which is important to our type of industry because we don't have massive servers.

We have benefited from this technology because we were able to centralize a lot of workflows. There is normally a trade-off, where you can have very fast local storage on the computer, but in a collaborative environment that's counterproductive because it requires people to share files and then copy them onto their system in order to get the very fast local performance. But with Pavilion, basically, you get that local NVMe performance but over a fabric, which makes it easier to keep things in sync.

We have been able to consolidate storage and as part of a multi-layer storage system, it plays a very important part. For us, it cuts down on costs because we essentially get an NVMe tier that's large enough to hold everyone's data, but the other thing for us is time and collaboration. Flexibility is worth a lot to us, as is creativity, so having the resources to do that is incredibly valuable.

If we wanted to do so, Pavilion could help us create a separation between storage and compute resources. It's one of those things where, in some environments, such separation is natural, while in other environments there's an inclination to minimize the separation between compute and data. But to that point, Pavilion has the flexibility to allow you to really do whatever you want.

In that sense, you have some workloads where compute is very close to the data, such as iterative stuff, whereas we have some things where we simply want bulk data processing. You can do any of that but for us, that type of separation is not necessarily something we are concerned with, just given our type of workflows. That said, we have that flexibility if necessary.

This system has allowed us to ingest a lot of data in parallel at once, and that has been very useful because it's a parallel system. It's really helped eliminate a lot of the traditional bottlenecks we've had.

Pavilion could allow for running additional virtual machines on existing infrastructure, although in our case, the limitation is the core density of our hardware. That said, it is definitely useful for handling the storage layer in a lot of our VMs. The constraint on our VM deployments is really just how many other boxes we have to provide the cores and the memory.

What is most valuable?

The most valuable features are the NVMe flash array and the parallel architecture of the underlying system. Instead of having very large gateway nodes or very large servers sitting at the border of a lake of storage, the Pavilion approach is to have many mid-size to smaller server nodes, all of which can access the main flash array. This means there's no bottleneck going into that very high-speed array, and the nodes are better sized to match the size of user requests.

Typically, Pavilion sizes its multi-node system in such a way that each parallel node can actually service requests from individual users and because there are so many of them, everyone can essentially do this in parallel. It eliminates the bottlenecks in that respect.

Pavilion provides us with flexibility in our storage, which is one of the reasons that we've applied this architecture. There's lots of flexibility in how we use the resources while also maintaining a small footprint. Ultimately, Pavilion ensures industry-standard protocols. They present their storage as just NVMe over Fabrics, so it's standard-conforming. That means you can basically hook it into anything you want. It means that you can run GPFS on it, and you can run anything that can talk to NVMe over Fabrics. This means that we can use the Pavilion box as a drop-in replacement for a conventional array.
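Because the array presents itself as standard NVMe over Fabrics, a Linux host can attach to it with the stock nvme-cli tooling. The commands below are a sketch of that workflow; the transport, IP address, and NQN are placeholder values for illustration, not details from our deployment.

```shell
# Discover the NVMe-oF subsystems the array exposes
# (transport type and address are placeholders)
nvme discover -t rdma -a 192.168.10.10 -s 4420

# Connect to a discovered subsystem by its NQN (placeholder NQN)
nvme connect -t rdma -a 192.168.10.10 -s 4420 \
    -n nqn.2019-08.example:pavilion-ns1

# The namespace then appears as an ordinary local block device
nvme list
```

Once connected, the remote namespace behaves like a local NVMe drive, which is what lets a parallel file system such as GPFS, or anything else that talks to a block device, sit on top of it unchanged.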

The fact that this solution enables us to run block, file, and object storage is something that's very important. As the industry changes, there's a tendency towards that type of overall storage solution and there's a lot of competition in that space. It's nice to see Pavilion taking it very seriously. It's one of those things where our needs evolve on a day-to-day basis. While it may not be important now, it will become more important in the future and it's important that anyone in this segment takes that technology seriously.

We haven't deployed Pavilion's HyperOS 3.0 support for a global namespace for files and objects yet, although I have used it in lab environments. I think it's very compelling and I'm very excited about it. Of course, with NVMe, we roll things out slowly. Luckily, we have excellent partners in the area, like AIT, who have a lab deployment where we can actually test these features out. HyperOS 3.0 is the result of a lot of feedback that we provided, and that feedback played a key role in how it was architected. It's nice to see our input reflected in the direction of the software and the hardware.

What needs improvement?

What is very important for us is keeping pace with the evolution of new standards. For example, as PCI Express 4.0 becomes more ubiquitous, moving to PCI Express 5.0 is important. An architecture that can truly utilize 200-gigabit or even 400-gigabit networking, and storage densities in line with what we would expect from Gen 4 and Gen 5 PCI Express, are things I hope the vendor is looking at as they become available. We need this because we're really at the point where our workloads are about to explode outwards.

I would like to see the management layer improved. HyperOS 3.0 is excellent, and this matters because one of the things we noted in the beginning, before HyperOS 3.0 had been released, was that while this is an excellent and very versatile technology, it would be great if we could run certain things directly on the box. It would be helpful if there were more ways to consume the APIs, or ways to get at the hardware and the functionality of the system programmatically. For example, sometimes we just need to create quick namespaces or run something small; we don't want to deploy an entire secondary storage layer on top of it. Having a containerized system, or some sort of first-party support for basic storage functionality and basic extensibility, would be excellent for us. In many ways, these boxes are very malleable, a blank slate, but having a little more support for that kind of directed use would be helpful.

For how long have I used the solution?

We have been using the Pavilion HyperParallel Flash Array for more than six months.

What do I think about the stability of the solution?

We have put this product through a lot and it is still running like it's new, so it's excellent, stability-wise.

What do I think about the scalability of the solution?

This solution allows us to start small and scale up as we need to. That's something that we really like about its architecture. It's very modular, so we had a lot of flexibility in how we could size everything, and how we could deploy it.

We could start small and expand outwards, or we could start big and add even more later if we needed to. Part of our initial discussion was whether to start with a few nodes in a box versus all of the nodes in a box. What we discovered is that when you need more, you just add nodes. Then, when the box is full, you just add another box.

How are customer service and technical support?

Although we work with our partner for support, we do work directly with the vendor, as well. It's a close relationship just because Pavilion will coordinate with our local integrator. Pavilion's always been very active when we have questions, for example. It's nice to have that kind of dialogue.

We have not really needed support directly from Pavilion but in our experience, they are responsive and the support is excellent.

Which solution did I use previously and why did I switch?

When we chose to implement Pavilion, we were augmenting our existing storage. We needed to add a very fast flash layer to our storage strategy. Our existing implementation was excellent and very robust, but what we started to need was a very high-performance, tier-zero layer. It became obvious that we needed something different, and that is where Pavilion came in.

How was the initial setup?

I oversaw the planning, setup, and deployment, and the whole process was excellent.

We worked with an integrator and they started by setting everything up in their lab. We were able to access everything and the changeover was fairly rapid. With an excellent integrator, it's really a minimal deployment. 

For a system with this performance, being such a powerful system and such a big part of our infrastructure, it was a very painless deployment. It felt more like adding a few servers to our compute capacity than adding an entire storage layer.

We spent between one and two months in the planning. There were a lot of delays because this was happening right around when COVID was starting, so it's hard to pin down exact timelines just because there were a lot of periods of time where we couldn't go back into the office. However, when our integrator was finished with it, they came in on a Monday and we were all finished on Tuesday night.

For us, the process ran in parallel. We identified certain functionalities, certain items, that the Pavilion would now be taking over in terms of the responsibility of our workflow. We really developed everything, and then had the Pavilion set up in parallel to our existing systems. Once it was in place and ready, we just did a switch-over.

What about the implementation team?

We partnered with a company called AIT here in LA, and they are partners with Pavilion. They did an excellent job, essentially, getting everything deployed, where we were able to access everything that they had set up in their lab.

They are an excellent integrator.

What was our ROI?

Whether or not we have seen a return on the investment is hard to estimate because my team is a research and development/production group. Our goal is mainly innovating new production technologies for our industry, so for us, the return on investment is really just the rate at which we can iterate and innovate amongst our team.

In that respect, it's been incredibly good. In terms of affordability, I think it's very easy to justify this type of system because what you save in footprint, complexity, and labor, coupled with the boost in team efficiency and creativity, together justify the cost.

What's my experience with pricing, setup cost, and licensing?

The licensing is fairly painless because Pavilion is not in the business of selling you NVMe flash media, which is something that we liked about them. Their stake in the whole thing is their box. This means that if you have a pre-existing relationship with another storage hardware provider and get your flash from somewhere else, their system is flexible and can work with all of these different solutions. There's a lot of flexibility in that you can choose which NVMe goes inside the Pavilion box, how you lay everything out, and how you prioritize density versus parallelism.

The licensing fees are very reasonable.

This solution provides us with DAS performance and SAN manageability at an affordable price. It's like having local, direct-attached storage, whereas SANs typically require a lot of management. They're very hard to deploy in smaller, fast-paced environments because, by nature, they were architected for a different era of processing and requirements.

Pavilion essentially gives us the flexibility of the positive features of a traditional SAN without the massive human and capital expenditure, or the maintenance of it. At the same time, it gives us the positives of what we had traditionally associated with direct-attached storage in terms of performance.

In this respect, it has saved us time, money, and physical space. When you look at this, especially over a period of a few years, it's a very compelling approach for an industry like ours to look at.

Which other solutions did I evaluate?

We looked at a lot of other vendors. There's a lot of very compelling technology in the space, it's just that Pavilion was very interesting because they aren't reselling another OEM's parts. It's not to say that's bad, but Pavilion is focusing a lot on the innovation of first-party hardware, which is unusual in the space, but they're doing it correctly.

That is incredibly compelling because when it comes to innovation, we see a lot has been done on computing hardware and software, but we've really taken for granted the traditional paradigm of the SAN or the JBOD, essentially connected into a few servers, and that's your storage array. From there, Pavilion has come in with this very innovative new approach at the hardware level.

We chose Pavilion because they stand apart from their competitors in the sense that they currently have no equal, in terms of the tier of access. They have a very innovative system, which simply doesn't exist anywhere else. It's not just its metrics, it's the architecture that is truly innovative. It's a different approach, which happens to fit our needs quite well.

What other advice do I have?

We don't have anybody who is dedicated full-time to the management of this solution because it really doesn't need it. We have people doing maintenance on it who are responsible for refining and adding when we need certain things added or taken away. But, it's a fairly robust system that really doesn't need a lot of care, which is good for us because it means we can allocate our resources elsewhere. We have support through the vendor partners, so if there would be an issue, we can have them come out and they'll take care of it.

My advice for anybody who is considering this product is to first look at your networking strategy and if you feel like that's something that needs to be addressed, the Pavilion is an incredibly good box. It's an incredibly powerful system but to get the full value out of it, you need the infrastructure to really utilize such a system.

If you're running on a 10-gigabit network, it's a great system but that's not where it shines. We have an incredibly high-speed network and we were able to get a lot out of it, but I would suggest making sure that you have the means of actually using something like this. It will do amazing things, just don't take for granted your core infrastructure, like your networking. Make sure that you have things in place because a lot of that relies on an excellent network layer and an excellent compute layer.
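As a quick sanity check on that core infrastructure, it's worth confirming the negotiated speed on the storage-facing links before deployment. This is a minimal sketch using standard Linux tooling; the interface name is a placeholder, not one from our environment.

```shell
# Report the negotiated link speed of the storage-facing NIC
# (replace enp65s0f0 with your actual interface name)
ethtool enp65s0f0 | grep -i 'speed'
```

On a 100-gigabit or faster fabric this should report the full rate; a 10-gigabit link will still work, but as noted above, it isn't where a system like this shines.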

The biggest lesson that I have learned from using this solution is that networking is important. In a Windows environment, you can definitely take advantage of this type of system, but it really shines in Linux compute environments. Windows support is in development, but it's not yet at parity with Linux.

In summary, this is an excellent system. I'm excited to see the new technologies as they come out, such as HyperOS 3.0. This is a very compelling next step in technology. As it matures, there is a lot of feedback to be had. As it stands right now, there are areas that can be enhanced and to see that they are being enhanced is very reassuring.

I would rate this solution a nine out of ten.

Which deployment model are you using for this solution?

On-premises
Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.