
Dell EMC PowerScale (Isilon) Overview

Dell EMC PowerScale (Isilon) is the #2-ranked solution among top File and Object Storage tools and the #4-ranked solution among top NAS tools. IT Central Station users give Dell EMC PowerScale (Isilon) an average rating of 10 out of 10. It is most commonly compared to NetApp FAS Series. The solution is popular in the large enterprise segment, which accounts for 65% of users researching it on IT Central Station. The top industry researching this solution is computer software, accounting for 31% of all views.
What is Dell EMC PowerScale (Isilon)?

Dell EMC Isilon scale-out storage solutions are designed for enterprises that want to manage their data, not their storage. Our storage systems are simple to install, manage, and scale to virtually any size. Isilon storage includes a choice of all-flash, hybrid or archive nodes. Isilon solutions stay simple no matter how much storage capacity is added, how much performance is required, or how business needs change in the future.

Dell EMC PowerScale was previously known as Dell EMC Isilon.


Dell EMC PowerScale (Isilon) Customers

OMRF, University of Utah, Translational Genetics Research Institute, Arcis, Geofizyka Torumn, Cyprus E&P Corporation, Colburn School, Columbia Sportswear, Harvard Medical School, University of Michigan, National Library of France

Pricing Advice

What users are saying about Dell EMC PowerScale (Isilon) pricing:
  • "The solution is expensive; it is not the cheapest solution out there. If you look at it from a total cost of ownership perspective, then it is a very compelling solution. However, if you're looking at just dollar per terabyte and not looking at the big picture, then you could be distracted by the price. It is not an amazing price, but it's pretty good. It is also very good when you consider the total cost of ownership and ease of management."
  • "The pricing is expensive, but I think it's a fair value because it does manage itself. It definitely is much simpler than any of the other scale-out storage platforms that we've looked at in the past."
  • "The only drawback for us is that it's a large upfront investment. This was a huge decision for a startup company to make. It took a bit for us to get over the line on it, but we have not regretted it."
  • "The platform is not cheap. However, on the software side, you can choose which features you want to license. You can start your licensing with the features that you need, then add other features after buying the platform."

Dell EMC PowerScale (Isilon) Reviews

System Team Leader at Deakin University
Real User
As you add more nodes in a cluster, you get more effective utilisation

Pros and Cons

  • "The solution has simplified management by consolidating our workloads. Rather than managing all the different workloads on different storage arrays, Windows Servers, etc., we just have one place per data centre where we manage all their unstructured data, saving us time."
  • "The replication could lend itself to some improvement around encryption in transit and managing the re-syncing of large volumes of data. The process of failover and failback can be tedious. Hopefully, you never end up going into a DR. If you do go into a DR, you know the data is there on the remote site. However, the process of setting up the replication and failing back is very tedious and could definitely do with some improvement."

What is our primary use case?

  • Research data
  • Departmental file shares
  • Data centre storage: NFS

We have two data centres in our university. We have Cisco UCS, Pure Storage, and are heavily virtualised with VMware. PowerScale is our unstructured data storage platform. It provides scaled-out storage and our high-level NFS across applications. It also provides all the storage for our researchers and business areas, as well as students, on the network.

With the exception of block workloads, which are primarily VMware, Oracle databases, etc., everything else is on PowerScale. It has definitely allowed us to consolidate and has simplified management.

How has it helped my organization?

With quotas, we can have fewer, larger pools of storage in the data centres; we typically only have one or two Isilon clusters. That gives us the ability to multi-tenant, allocate storage to different applications, and isolate workloads. It is very efficient when managing that volume of storage. We are not tuning it every day or week. The only time we really do anything with it is when we're planning an upgrade of some sort, several times a year. Outside of that, it just does what we want it to do.

We automate the vast majority of the things that we do on the Isilon clusters: provisioning of storage, allocation of storage, management of quotas for tens of thousands of students, and managing permissions. That is the level of support they have for their built-in APIs, which is probably a huge game changer for us in the way we manage the storage. It makes things far more efficient inside of PowerScale.

Compared to doing it manually, what we have been able to automate using the API is saving us at least tens of hours a month versus when we used to get service requests. We have even been able to delegate out to different areas. If we have an area with whom we do file shares, we delegate out the ability for them to create new shares and manage their permissions themselves. 
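As a concrete illustration of the kind of API-driven quota automation described above, the sketch below builds a directory-quota request for the OneFS REST API. The endpoint path, payload field names, and cluster address are assumptions based on typical OneFS Platform API usage, not details taken from this review; check the API documentation for your OneFS version.

```python
# Sketch of automating SmartQuotas provisioning via the OneFS REST API.
# Endpoint path and payload fields are assumptions, not verified against
# a specific OneFS release.

def build_quota_payload(path: str, hard_limit_bytes: int) -> dict:
    """Build a directory-quota request body (hypothetical field names)."""
    return {
        "path": path,                      # e.g. an /ifs directory per student
        "type": "directory",
        "enforced": True,
        "thresholds": {"hard": hard_limit_bytes},
    }

if __name__ == "__main__":
    # The network call is guarded so the helper above stays usable offline.
    import json
    import urllib.request

    cluster = "https://isilon.example.edu:8080"  # hypothetical address
    payload = build_quota_payload("/ifs/home/student42", 10 * 2**30)  # 10 GiB
    req = urllib.request.Request(
        f"{cluster}/platform/1/quota/quotas",    # assumed endpoint path
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # urllib.request.urlopen(req)  # would need auth and a reachable cluster
```

The delegation the reviewer describes (letting departments create their own shares) would layer role-based access control on top of calls like this one.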

The solution allows us to manage storage without managing RAID groups or migrating volumes between controllers. We see this in the big refresh that we did earlier in the year. After you have clicked the "Join" button and joined, you go to the old node and click remove, then wait for it to finish. You don't have to configure anything when you add new node types, they are automatically configured. You can tune them and override things if you want, but there is no configuration required.

PowerScale has enabled us to maximise the business value of our data and gain new insights from it. It gives us the ability to have our data stored and presented via whatever protocol is required. Now, we can look at all these different protocols without having to move or duplicate the data.

The solution allows you to focus on data management, rather than storage management, so you can get the most out of your data. We looked at the types of data that we have on the cluster, then we just target it based on the requirements. We don't have to worry about building up different capabilities, arrays, RAID types, etc. We just have the nodes, and through simple policy, can manage it as data rather than managing it as different RAID pools and capacity levels. If someone needs some data storage, then we ask what their requirements are and we just target based on that. Therefore, we manage it as a workload rather than a disk type. 

What is most valuable?

Their SmartQuotas feature is probably the thing that we use most heavily and consistently. Because it is a scaled-out NAS product, you end up with clusters of multiple petabytes. This allows you to have quotas for people and present smaller chunks of storage to different users and applications, managing oversubscription very easily.

We use policy-based file placement, so we have multiple pools of storage. We use file placement policies to put less-frequently accessed or replicated data onto archive nodes, and high-performance research data onto our high-performance nodes. It is very easy to use and very straightforward.

The node pools give us the ability to non-disruptively replace the whole cluster. With our most recent Gen6 upgrade, we moved from the Gen5 nodes to the Gen6 nodes. In January this year, we ended up doing a full replacement of every component in the system. That included storage nodes, switching, etc., which we were able to replace non-disruptively and without any outages to our end users or applications.

We use the InsightIQ product, which they are now deprecating and moving into CloudIQ. The InsightIQ product has been very good. You can break performance down right to protocol latency by workstation. When we do, infrequently, have issues, we use it to track them down. It also has very good file system reporting.

For maximising storage utilisation, it is very good. As you add more nodes in a cluster, you typically get more effective utilisation. It is incredibly flexible in that you can select different protection levels for different files, not necessarily for file systems or blocks of storage, but actually on a per file basis. Occasionally, if we have some data that is not important, we might need to use a lower protection. For other data that is important, we can increase that. However, we have been very happy with the utilisation.

Dell EMC keeps adding more features to the solution's OneFS operating system. We have used it for about 13 years, and the core feature set has largely stayed the same over that time, while being greatly improved. It has always been solid SMB and NFS storage, and they've broadened their scope with NFSv4, SMB3 Multichannel, etc. They are always adding newer protocols, such as S3. Typically, those new features, such as S3, don't require new licensing. They are just included, which is nice.

Over the years, the improvements to existing protocols have been important to us. When we first started using it, they were running open-source Samba for their SMB implementation under the covers, and they used the NFS server built into FreeBSD. The new implementations they introduced in OneFS 7 brought huge increases in performance and have been very good, even though there weren't necessarily any new features. We even use HDFS on the Isilons at the moment. The continued improvement has been really beneficial.

It is incredibly easy to use the solution for deploying and managing storage at the petabyte scale. Compared with the likes of IBM Spectrum Scale, there just isn't the same overhead at that scale. I couldn't think of an easier way to deploy petabyte NAS storage than using Dell EMC PowerScale.

What needs improvement?

The replication could lend itself to some improvement around encryption in transit and managing the re-syncing of large volumes of data. The process of failover and failback can be tedious. Hopefully, you never end up going into a DR. If you do go into a DR, you know the data is there on the remote site. However, the process of setting up the replication and failing back is very tedious and could definitely do with some improvement.

There is a lack of object support, which they have only just rectified. 

For how long have I used the solution?

About seven years.

What do I think about the stability of the solution?

The stability has been exceptional. I've been very happy with the stability of it. In the last six years, we have pretty much been disruption free. Prior to that, we have had one or two issues, which we worked with their support to fix. 

We had a major refresh at the start of the year when we replaced one petabyte at one site and half a petabyte at another site. This completely replaced everything and took us about a month. It was finished with one staff member overseeing the process, moving the data, and roping in one or two other staff at different times to help with the physical racking.

The nodes are quite heavy, so you always want to have two or three people involved in the racking, but very little staff time is required overall. Once the hardware is racked, it needs just one operator to join the nodes and wait for the data to move over. This is non-disruptive to users. Retiring the old nodes is more of a management task.

What do I think about the scalability of the solution?

Pretty much everyone touches the solution in some way or another. It has been a bit different right now with COVID-19, since a lot of people have been recently working remotely. In any given day, probably 12,000 people have been using it. That is just going by the number of active connections that we have from staff, students, and researchers at any time.

We can't see any way that we would ever reach the limits of the product in terms of scalability and our workloads. We have no concerns around scalability.

It has a back-end network, and the most complicated part of going big is getting switches with enough ports to plug the nodes into, not the actual management of storage. As you add more nodes, the management overhead remains largely the same.

For larger scalability, I would be very comfortable with it. We would just have to do some good site planning to ensure that we have enough room for it.

Our usage is pretty extensive. It touches on almost every area of our organization. With the introduction of object support and support for Red Hat OpenShift in OneFS 9.0, we are very keen to explore and extend the usage in those areas. That is part of the reason we are upgrading our test cluster to OneFS 9.0: specifically to evaluate use with Red Hat OpenShift and Kubernetes. It definitely has a very strong place in the data centre now, and we don't see it going away anytime soon, as we see more workloads going onto it.

How are customer service and technical support?

The support has been mixed. If you get through to the right engineers, you can get problems resolved incredibly quickly. If you don't, you can go around in circles for a long time. We do typically have to escalate support tickets through account managers to get them positioned correctly. However, once that happens, issues are resolved pretty quickly and we're generally happy. 

The technical support is average. They are certainly not the best that we have ever dealt with, but far from the worst. I would not recommend the product based on their tech support alone.

Which solution did I use previously and why did I switch?

Going back 13 years prior, we used to have a lot of Microsoft and Linux-based file servers all over the place. They were all siloed with a lot of wasted capacity. Consolidating all those down into a small handful of Isilon clusters has dramatically reduced the amount of silos that we have in the organization. In terms of reducing waste from having storage stuck in one silo or isolated area, it has made a huge improvement.

We previously used IBM Spectrum, which I don't think you can buy anymore. Briefly, eight years ago, we moved a large portion of the workload off Isilon onto Spectrum. That was the biggest regret of my career; we couldn't get back on the Isilon fast enough. It was a commercial decision to move away from Isilon, which wasn't the cheapest, but Isilon was far more mature than the IBM product. Spectrum cost us so much that what we saved in capital expenditure we then lost in productivity, overhead, and maintenance. It was just a disaster. The support that we received from IBM was the worst support I have ever received. I've been in this industry for about 17 years now, and I have never had a worse support experience than the one I had with IBM. It was a nightmare.

When we needed to get the issue with Spectrum fixed, there was no doubt about getting PowerScale. We couldn't get back on PowerScale fast enough. We just made that happen, and as soon as we did, all the fires were put out.

About 13 years ago, we were using six-terabyte nodes; now they're obviously a lot bigger than that. While scalability was definitely a key interest, the main driver for us was ease of management: consolidating all the separate file servers, with their own operating systems and RAID arrays, into one pool of storage where we could allocate quotas, still manage capacity effectively, centralize it, and reduce waste. The ability to scale out was just icing on the cake, and definitely something we were very interested in and have utilised quite heavily over time, but the ease of management was the main driver.

How was the initial setup?

The initial setup has always been straightforward. The process of creating a new cluster is largely the same now as it was 13 years ago. You get your first node, then connect the serial port to it. You answer about 10 questions, then you're ready to go. The rest of the nodes are added by clicking a button. It's incredibly easy to set up, and it says a lot that the process has been the same for about 13 years. There's not really much to improve or simplify, because it is already incredibly simple.

Assuming the hardware was racked, you could have the cluster set up and your minimum three nodes joined within half an hour to 45 minutes.

The process of adding a node is very straightforward: It is pressing a button. This can take five minutes, then the process is complete. Once you have added new nodes, you can then remove old nodes. 

Understand your workload. Make sure you size and cost it correctly for the amount of metadata you expect to see on it. Don't undersize your SSD.

For the whole replacement this year, I got one of our junior staff members, who had never actually used our PowerScale, to do the whole upgrade process. I just pointed him in the right direction. Because it was very easy, he managed to do it without any issues.

What about the implementation team?

We don't use any professional services. We always do it in-house. 

Two people are needed for racking hardware. Only one person is needed to deploy it, as that process is very straightforward.

What was our ROI?

The solution has simplified management by consolidating our workloads. Rather than managing all the different workloads on different storage arrays, Windows Servers, etc., we just have one place per data centre where we manage all their unstructured data, saving us time.

PowerScale has reduced the number of admins that we need. It has allowed our admins to focus on adding value through automating tasks and streamlining operations for our customers, rather than focusing on the day-to-day and tuning RAID profiles. We can use our APIs to automate workflows for customers and have quicker turnaround times.

What's my experience with pricing, setup cost, and licensing?

The solution is expensive; it is not the cheapest solution out there. If you look at it from a total cost of ownership perspective, then it is a very compelling solution. However, if you're looking at just dollar per terabyte and not looking at the big picture, then you could be distracted by the price. It is not an amazing price, but it's pretty good. It is also very good when you consider the total cost of ownership and ease of management.

We added on a deduplication license. That is the only thing that we have added. That was a decision where it was cheaper for us to license the deduplication than it was to buy more storage, so we went with that approach. We just did an analysis and found this was the case.

We haven't really hit a workload or situation that we have had any issues catering for. With the huge number of different node types now, we could position any sort of performance, from very cheap, deep archive through to high-performance, random workloads. I feel like we could respond very quickly to any business requirement that came up, assuming there was budget. Even without budget, with the way our clusters are configured, we typically mix in high and low performance. We won't buy top-of-the-line, high-performance nodes, but we will buy basic H500 nodes, which have a large number of spinning disks. That is what we standardize on for our high-performance tier.

Which other solutions did I evaluate?

Thirteen years ago, it was called Isilon Systems. They were a startup in Seattle, while we are in Australia, and we were importing the hardware directly. At that time, there was nothing else we were really looking at; we were just caught up in revolutionising the way we would manage one pool of storage. Then, six to eight years ago, when we had that little stint on IBM Spectrum, we didn't go to market. We very heavily evaluated the IBM product and NetApp in cluster mode as alternatives. We ruled out NetApp from a management perspective as far too difficult to manage. The Spectrum product, on paper and from our evaluation of loaned hardware, seemed like it was going to be on par with Isilon. Little did we know the nightmare that would ensue.

The biggest lesson we learned came from moving away from it onto the IBM product: how much time you spend managing a product is directly tied to its maturity. PowerScale is a very mature product. We have been using it for 13 years, and the core has a very solid, mature foundation that has been built up over that time.

We have dealt with Nimble Storage in the past. I would recommend Nimble Storage based on their support (at that time), as they had exceptional support. However, Dell EMC support is no worse than Cisco or any of the other vendors that we have had to deal with, but it is nothing special.

What other advice do I have?

Just don't underestimate how important a mature product is compared to something leading edge or new.

PowerScale is positioned primarily within the data centre. We have PowerScale heavily centralized, both in our IT department and on our campuses. We don't really have any PowerScale storage in the cloud or at the edge, because we have very good network connectivity. In terms of the right tiers of storage, the level of flexibility we have for adding different types of storage with different characteristics to our existing cluster is now the best it's ever been in the 13 years that we've managed it.

Between CloudIQ and DataIQ, they're replacing their legacy InsightIQ product. We haven't moved to CloudIQ yet to start looking at it.

Early on (we have been using the solution for 13 years), if you added a new node type, you had to add three physical nodes to start a new pool, and you only ended up with 66 percent utilisation on that storage pool. With the Gen6 hardware, you can have multiple smaller nodes in one rackmount chassis. Now, you can add a new storage type and get much better storage efficiency off the bat.
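The 66 percent figure above falls out of simple parity arithmetic. The sketch below uses an idealised N+M model (one protection unit per node); actual OneFS protection is per-file FEC and more nuanced, so this is an approximation rather than the real OneFS calculation.

```python
def usable_fraction(nodes: int, protection_units: int) -> float:
    """Idealised usable fraction for N+M protection spread across `nodes` nodes."""
    if nodes <= protection_units:
        raise ValueError("need more nodes than protection units")
    return (nodes - protection_units) / nodes

# Three nodes protecting against one failure: 2/3 usable, i.e. the 66% above.
# Larger pools amortise the same protection over more nodes.
for n in (3, 5, 10, 20):
    print(f"{n} nodes, +1 protection: {usable_fraction(n, 1):.0%} usable")
```

This is why utilisation improves as the reviewer adds nodes: the protection overhead is spread across a larger pool.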

The S3 protocol specifically comes in OneFS 9.0. We have a test cluster for it, which we are in the process of upgrading to have a look at their S3 support. However, I haven't used it yet. Typically, we use something like MinIO, which is an open source object gateway, and put that in front of the PowerScale cluster.
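For the MinIO-in-front-of-the-cluster pattern mentioned above, S3 clients only need to be pointed at the gateway's endpoint instead of AWS. The sketch below shows the client side; the endpoint URL, bucket, and credentials are hypothetical, and deploying the MinIO gateway itself is configured separately.

```python
# S3 client configuration for talking to a MinIO gateway that fronts
# a NAS cluster. Endpoint and credentials below are hypothetical.

def gateway_client_config(endpoint: str, access_key: str, secret_key: str) -> dict:
    """Keyword arguments for an S3 client pointed at a private gateway."""
    return {
        "service_name": "s3",
        "endpoint_url": endpoint,          # the MinIO gateway, not AWS
        "aws_access_key_id": access_key,
        "aws_secret_access_key": secret_key,
    }

if __name__ == "__main__":
    try:
        import boto3  # third-party; only needed for the actual S3 calls
    except ImportError:
        boto3 = None

    cfg = gateway_client_config("https://minio.example.edu:9000", "KEY", "SECRET")
    if boto3 is not None:
        s3 = boto3.client(**cfg)
        # s3.upload_file("results.dat", "research-data", "results.dat")
```

With OneFS 9.0's native S3 support, the same client code would simply point `endpoint_url` at the cluster instead of the gateway.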

On the archive side, we still have the A200 nodes. While you can go with the A2000s, or deeper than that, by not going too extreme in our pools and by positioning data effectively, we can manage pretty much anything thrown our way. I think it's very good.

I would rate the solution as a nine out of 10.

Which deployment model are you using for this solution?

On-premises
Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Network Manager at a government with 1,001-5,000 employees
Real User
Handles data distribution among the nodes internally, making management easy

Pros and Cons

  • "There are also the policies that you set up on replication and purging files, and policies for something called WORM. That's a "write once, read many," where you can't overwrite certain files or certain data. It puts them in a "protected mode" where it becomes very difficult for someone to accidentally delete. We use that for certain files or certain directories, because we're dealing with video and some video has to be protected for chain-of-custody purposes. The WORM feature works great."
  • "Because of the magic that it does 'under the hood,' it is very difficult to find out within the system where all your storage is going. That's a little bit of a ding that we have on it. It does so much magic in order to protect itself from drive failures or multiple drive failures, that it automatically handles the provisioning and storage of your data. But by doing that, finding out why a file of a certain size, or a directory of a certain size, is using more storage than is being reported in InsightIQ, is very difficult to discern."

What is our primary use case?

We are using it for storage of video files, with casual access to them. We needed as much storage as we could get for the best price. If you are looking for a hybrid type of situation, when you want low latency for transactional things, and higher-latency storage for archival things, you can get the hybrid nodes.

Each of our two clusters has the same disk sizes, etc. We did that for interchangeability, in case we wanted to move shelves between the clusters. They act independently, but they replicate between the two. We love the system. That's why we continue to upgrade and buy it.

What is most valuable?

The low latency, the high-capacity connections that we have with the nodes, and the ability to add as needed to a particular system, are all important features for us.

It also handles data distribution among the nodes internally. You really don't have to do anything, so management is easy. If you're someone who really wants to get granular and know where every bit or byte is going, maybe it's not for you because I don't know if you can get that granular.

We have over a petabyte of storage and we've sliced it up. You can't really call them "shares" because it's not really like an NFS mount or CIFS share. But we've sliced it up, and the auditing policies on a particular slice produce a large amount of data. Anytime a file change or any system change happens, it is recorded, and we ingest that into a SIEM. We can crunch it so we know who is changing what file at what time. That gives us auditing capabilities.

The policy-based management that we have, for who accesses what shares, is relatively simple to set up and manage. It's almost like managing an Active Directory file share.

There are also the policies that you set up on replication and purging files, and policies for something called WORM. That's a "write once, read many," where you can't overwrite certain files or certain data. It puts them in a "protected mode" where it becomes very difficult for someone to accidentally delete. We use that for certain files or certain directories, because we're dealing with video and some video has to be protected for chain-of-custody purposes. The WORM feature works great.

The OneFS file system is very simple and has an astronomical number of features that allow us to get very granular with permissions, policies, and archiving of data. It handles everything for you. It's one of the easiest storage solutions that we've ever implemented in the 12 years I've been working in this organization.

I also love the snapshot functionality. It's pretty much what everyone does in backup. It's a backup of your system, but it lets you set the frequency of the snapshots. That's very important to us because we take so many snapshots. That means we can recover up to six months back, if somebody makes a file change or deletes a file. It's like a versioning type of function. It probably isn't really special. A lot of backup software has it. But the snapshot functionality is what we utilize the most within the OneFS file system. In theory, you don't really have to back up your systems if you're taking snapshots.
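A quick way to reason about a schedule like the one described (frequent snapshots with roughly six months of recovery depth) is to compute how many snapshots a given interval and retention imply at steady state. The helper below is generic arithmetic, not OneFS SnapshotIQ syntax.

```python
def snapshots_retained(interval_hours: int, retention_days: int) -> int:
    """Snapshots existing at steady state for a fixed interval and retention."""
    return retention_days * 24 // interval_hours

# Daily snapshots kept for ~six months:
print(snapshots_retained(24, 180))  # 180 snapshots live at once
# Hourly snapshots kept for one week:
print(snapshots_retained(1, 7))     # 168
```

Sizing matters here because each live snapshot holds blocks that have since changed, so deeper retention consumes more capacity on a busy file system.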

What needs improvement?

The only problem with the WORM (write once, read many) feature is it does take up more space than if you just wrote a file, because it writes stuff twice. But it works for us for chain-of-custody scenarios, and it's built into the file system itself.

Also, on the PowerScale system, because of the magic that it does "under the hood," it is very difficult to find out within the system where all your storage is going. That's a little bit of a ding that we have on it. It does so much magic in order to protect itself from drive failures or multiple drive failures, that it automatically handles the provisioning and storage of your data. But by doing that, finding out why a file of a certain size, or a directory of a certain size, is using more storage than is being reported in InsightIQ, is very difficult to discern. It's the secret sauce of protecting your data and that makes it a little disconcerting for someone who is used to seeing if a directory is using 5 MB of space. So if you have a directory using a terabyte of space, it might be using a little bit more because of the way that the system handles data protection. That is something you have to get used to.

Also, a lot of people are not used to the units in the InsightIQ application. We're used to the normal nomenclature of terabyte, petabyte, etc. InsightIQ uses tebibytes (TiB) and pebibytes (PiB), so you have to understand the difference when it is telling you how much storage you have. It uses base-2 and the world is used to base-10. Discerning how much storage you actually have from the information in InsightIQ takes a little bit of math, but it's not very difficult. I wish they had an interface option that reported in the units the industry is used to, terabytes and petabytes. It's nothing major, just something you have to get used to.
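The base-2 versus base-10 gap described above is easy to quantify: a tebibyte is about 10% larger than a terabyte, and the gap widens at pebibyte scale. A small conversion helper using only the standard unit definitions:

```python
def tib_to_tb(tib: float) -> float:
    """Convert tebibytes (base-2, 2**40 bytes) to terabytes (base-10, 10**12 bytes)."""
    return tib * 2**40 / 10**12

def pib_to_pb(pib: float) -> float:
    """Convert pebibytes (2**50 bytes) to petabytes (10**15 bytes)."""
    return pib * 2**50 / 10**15

print(f"1 TiB = {tib_to_tb(1):.4f} TB")  # 1 TiB = 1.0995 TB
print(f"1 PiB = {pib_to_pb(1):.4f} PB")  # 1 PiB = 1.1259 PB
```

So a cluster that InsightIQ reports as 1 PiB actually holds about 1.126 PB in the base-10 units most vendors quote.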

For how long have I used the solution?

We have two clusters. We purchased our first cluster about seven or eight years ago. We've refreshed that particular cluster, where we traded in the old one and brought a whole new cluster. In the midst of that purchase, we also bought a second cluster where we replicate some files between the two. We just refreshed and upgraded that second cluster, which was probably about five or six years old, and bought a whole new set of A200 nodes for it, so the shelf sizes are the same.

What do I think about the stability of the solution?

We've had some bumps and bruises when buying new nodes and adding them to the cluster, but I don't think it was the technology that we really had the problems with. It was, unfortunately, Dell EMC support, where we got a couple of Dell EMC engineers who weren't as familiar with the system as we'd like. Once we kicked it up the chain, and we had an engineer that was more versed, they fixed the problem relatively fast.

When we had the first iteration of PowerScale seven years ago, we added nodes to it. This is how that process went: the node came in already populated with drives; you slotted it into the rack, cabled it up to the networking, and put the back-end networking on the same VLAN. Then you went into the configuration manager in the OneFS file system and told it about the node: "I have a node that I want to join to the cluster." It brought the node in and, for lack of a better term, formatted it, added it to the array, and it was there. The amount of time it took to cable up and join that node was about two hours. Once it's there, the storage just expands.

In theory, what we expected with the newer systems when adding nodes was the same scenario (and that is the way it does work, once they figured out the problem they were having). You rack the system. The networking is really easy to get right: the cluster handles a lot of its internal networking itself, but you need to put the node on the same external VLAN. If you do that right, the OneFS file system just finds the node. You add it, and it assimilates it into the cluster. Once the networking is done, it should take under an hour for the node to be assimilated into the cluster and for the storage to become available.

Most of the problems we had were when we were adding on. We really haven't had any problems after it was up and running. When it's up and running, it's rock-solid. We never really get failures other than drives failing, because all SATA drives fail. But you just pull out a drive and you slap another one in.

What do I think about the scalability of the solution?

We were using it for video storage and we were pretty impressed with its scale-up and scale-out abilities. We are always looking at the ability of a platform for scaling up and scaling out, especially because it's file storage. This was the best thing on the block that was out there.

How are customer service and technical support?

In recent months, their backend technical support has waned a little bit. They need to address the first-line technical support. I used to have a lot of confidence in Dell EMC technical support, but since COVID—and maybe it's the COVID situation—the technical support has fallen short a little bit. We've run into some problems with them.

They stand behind their product. The support that I get from my support group and my enterprise management team is phenomenal. When there's a problem, they address it. It may take them a little bit of time, but they own up to it.

But calling in and getting that first-line technical support needs to be addressed. It's been a little bit of a "hunt and peck" when you have issues, as opposed to just coming up with the actual solution to a problem. That's only been the case in about the last nine months or a year. I continue with Dell EMC because when there's an issue, they back it up and they make it right.

How was the initial setup?

It's one of the easiest things to configure. It's pretty much set-it-and-forget-it.

Initially, because in the first system that we had seven years ago the drive space was so small—I think they were 4 TB drives—there were a lot of shelves. We had over a petabyte of storage, so it was a lot of shelves. The installation, physically, was what took a really long time.

Now, the drive size is much bigger and the density per shelf is much greater. The actual shelf count is a lot smaller, so the physical racking is a lot easier. When we switched over to the new A200 nodes, we went from four shelves to one shelf in the conversion.

With the initial install, it has to format all the drives and that can take some time. It was a long time ago so I'm not sure I remember correctly, but I believe it took us a day or two to format all the drives. But we had 12 shelves. After that we were fine. 

But when you add on, it just brings the new drives up and formats them into the array relatively quickly. The initial install, though, depending on how many shelves you have, can take hours, up to a day, to format everything.

The second installation that we did was a lot quicker. We stood it up, had those initial problems adding the nodes, but then we had to move it because we had to move data centers. When we moved it, it took less than half a day. We actually had to shut it down to move it out of a data center into another data center. We carried it over to the new data center, rack mounted it, fired the thing up, and it just took off like it hadn't even been moved. It handled a good "power-down" situation with no issues.

What about the implementation team?

It was done with two guys from Dell EMC and one of my system engineers. The network guy did some backend configurations. The two guys from Dell EMC came because they were physically mounting all that stuff. When we added the second one they sent two guys, but one guy pretty much just sat around and did nothing while the other did the hands-on-the-keyboard stuff. I had a system guy down there to help with how we wanted it configured. But it's relatively simple.

Overall, the first deployment was phenomenal. Everything worked out great. The training, what they conveyed to us and walked us through, that was phenomenal. The second deployment, on the second array—same thing, when we were running with the older nodes.

Then we did the transition where we swapped out to the A200 nodes. Once again, it was phenomenal; everything worked out great. When we got the A200 nodes for the second cluster and upgraded them, the installation went fine.

When we started adding shelves, that's when the technical support fell on its face because the individuals that were working with it were not well-versed enough. I guess they assumed—and it's how it should be—that when you add a node, it's just rack it and stack it and then turn it on. But it didn't go that easily. There was some low-level engineering trick that you needed to know about, and these particular individuals didn't know about it. They do now, because we had to escalate it. The escalation was a little frustrating because it took about two days to get to the right person. But that right person knew the answer in five minutes.

What was our ROI?

We did an analysis of using cloud storage and on-prem storage. We did a comparison of the total cost of ownership between the two. Every time we have done it, the cost of onsite storage using the PowerScale system is fractions of a penny, per gigabyte, compared to cloud storage. There are no access fees or access charges like you get with cloud storage. If you want to utilize cloud storage, there are retrieval costs sometimes. I know there are different levels of cloud storage where you can archive and then pull up, but it takes about a day to get them to pull that stuff out of archive, and then you can access it. But there's also those access charges. You don't get that with the PowerScale system.
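The kind of total-cost-of-ownership comparison described here can be sketched as a toy model. All of the rates below are purely illustrative assumptions (not quoted prices from any vendor or cloud); the point is that cloud archive tiers add per-GB retrieval charges that on-prem storage does not:

```python
# Toy monthly-cost comparison: cloud archive storage vs. amortized on-prem.
# Every rate here is an illustrative assumption, not a real price.
def cloud_monthly_cost(stored_gb, retrieved_gb,
                       storage_rate=0.004,    # assumed $/GB-month, archive tier
                       retrieval_rate=0.02):  # assumed $/GB retrieved
    """Cloud cost includes a per-GB fee every time data is pulled back."""
    return stored_gb * storage_rate + retrieved_gb * retrieval_rate

def onprem_monthly_cost(stored_gb, amortized_rate=0.002):
    """On-prem cost is hardware amortized over its life; no retrieval fees."""
    return stored_gb * amortized_rate

stored = 1_000_000   # 1 PB expressed in GB
retrieved = 50_000   # 50 TB retrieved per month
print(round(cloud_monthly_cost(stored, retrieved), 2))  # 5000.0
print(round(onprem_monthly_cost(stored), 2))            # 2000.0
```

Under these assumed rates, retrieval traffic is what tips the comparison, which matches the reviewer's point about access charges.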

What's my experience with pricing, setup cost, and licensing?

We're at the A200 version, which is more for online archiving. It's storage-based, but they're called archive nodes; they're all SATA spinning disks. If you need a lot of storage at an economical price, and you're not doing transactional work, these archive nodes are the answer. The PowerScale A200 is more like an online archival system where the nodes are there and you're actively addressing them. It stores data on spinning disk, so you get tons of storage for a good price.

What other advice do I have?

Networking can get a little confusing. The big thing is to make sure you carve out your VLANs to this particular system. Put a lot of thought into the network aspect of it. Don't just slap it into your server network. Carve out an isolated network for your storage subsystems and make sure they have high-speed paths back to wherever you're going to be accessing it from. Don't cheap out on that because this system scales out and scales up. If you start cheaping out on the network part of it, you're not going to be happy with your access to it. The biggest thing is to configure the networking right and give it the unabridged paths that it needs to realize the low-latency, scale-out aspect of the system itself. You can jam yourself up if you neglect the networking aspect of it.

The A2000 system they have now, which we didn't even look into, is more of a non-active archival type system. They also have these hybrid systems where you would have staging areas where you could store on spinning disks and tier. Your storage becomes a tiered storage infrastructure where you have spinning and flash storage. You can put your high access, low latency stuff on your flash storage, and your archival, higher latency stuff, on the spinning disks of the hybrid nodes. We were looking at that, but we're not using this particular system as a low latency, production-type system. 

They also have the all-flash arrays, which give you massive amounts of throughput, but they're obviously more expensive because it's flash. It's a lot more money. We weren't looking into that because we did not need speed; we were just looking for storage options. We have a different Dell EMC product that we use for our day-to-day, low latency, server-based storage. That's where our block storage is. Our file storage is what we use the PowerScale for. We didn't want to go to the all-flash array nodes. They're not cheap and we already had a solution in place for that.

Overall, the hardware itself and the OneFS file system are the best selling points, combined with the delivery and the installation. That's why I continue to buy Dell EMC.

Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Jace Gregg
Information Systems Manager at an energy/utilities company with 1,001-5,000 employees
Real User
Top 20
Ensures our data quality is very high and that our consistency in processing is a lot more static

Pros and Cons

  • "It has allowed us to have more consistent quality controls. It has also allowed us to expand the number of servers and clients processing and accessing data, allowing us to get a lot bigger projects out the door."
  • "It is a bit higher priced than some of the other systems."

What is our primary use case?

We use it for storage in a scale-out data processing system.

It is a physical storage platform. We have several different nodes that all act as one large storage cluster.

How has it helped my organization?

PowerScale has allowed us to bring data acquisition and some of the initial data processing that we would typically do in the field here on-premise. That has let us speed things up from a data delivery standpoint. 

It has let us really optimize our consistency. We've been able to take something that several different people were doing out in the field and maintain it here, with one person able to do a really good job of making sure that our data quality is very high and that our consistency in processing is a lot more static. It has prevented quite a few possible issues and has also allowed us to expand some of our jobs. Where we used to acquire this data in the field on systems with three servers, we're now able to expand up to 10 or 12 servers all processing that data. Therefore, it's made our turnaround on data pretty quick.

PowerScale allows us to manage storage without managing RAID groups or migrating volumes between controllers. It makes it to where we don't have to have a full-time storage guy on-premise. We are able to manage our storage on PowerScale without needing to have a team. 

The solution does provide us the flexibility to add the right tier of storage at the right time for data that resides at the edge, core, or cloud. However, that is not something that we typically do, as we have a fairly large cluster. We did have one instance where we had a very large job that was going to require about two petabytes of data. We were able to purchase that and get it installed pretty quickly, which definitely helped us out.

It is simple to use the solution for deploying and managing storage at the petabyte scale. We have almost three and a half petabytes, and it's a very low impact to our team as far as the amount of effort and babysitting that we have to do on it. This has really changed the way our company can acquire and process data in the field, allowing us to differentiate ourselves against all our competitors. None of the other competitors in our market are able to handle jobs, either in the size or density that we have been able to do so far.

PowerScale allows us to focus on data management, rather than storage management, getting the most of our data. This is mainly because the system almost manages itself. Instead of having to sit and handle storage volumes, RAID groups, LUNs, or things that in traditional storage architecture our group would have to manage, we are able to just create shares. The end user side is able to access those shares just like they would any regular storage or file server. That really helped us make sure that we're not having to manage storage the way we would with a traditional block storage or any other storage that we've tried so far.

It has allowed us to have more consistent quality controls. It has also allowed us to expand the number of servers and clients processing and accessing data, allowing us to get a lot bigger projects out the door.

What is most valuable?

It has the ability to access the file system from multiple hardware platforms from a client perspective. We have Linux and Windows machines able to access the same file system, then we also have the ability for all those systems to be able to access the same data at pretty much the same time. That helps us quite a bit, as it lets us expand the number of processing nodes that we can use to access the data at the same time. This helps us to scale out the front-end data processing to speed things up quite a bit.

We do have some of the policy-based tiering that seems to be working fairly well.

As far as we can tell, it does a really good job of maximizing storage utilization. For us, the storage protection is a bit more important. The protection schemes that we have seen so far have been very effective at ensuring that our data is protected, while still being able to access as much as possible. That is one of the strengths of the OneFS software.

It definitely helps us maximize the value of our data. We don't necessarily try to get any insights into it other than we just acquire the data and process it on our client's behalf.

We have been able to consolidate and centralize our systems into one system. It lets us take data from the field and get it in one spot, where it can get quite a bit bigger. It also has a lot more processing systems to access our data and get it out the door a lot faster.

What needs improvement?

Simplify where you can. If you have a need for tiering, that can be okay, but it can behave in ways that you may not expect. If it's at all possible to simplify and stick with one node type, your consistency will stand up a little better. If you do have a workload where tiering makes sense, PowerScale does a good job of that. That's the only real "gotcha" that we've run into.

For how long have I used the solution?

We are probably on our seventh or eighth year of using PowerScale now.

What do I think about the stability of the solution?

We have had a few issues here lately, as far as power and kind of unusual things in the building. We've been really surprised that PowerScale was able to work around those issues without any sort of data loss, when we have had multiple nodes go offline. After we got everything back online and running again, PowerScale worked without any issues. As far as resiliency and availability go, I am happy with the solution.

What do I think about the scalability of the solution?

PowerScale lets us scale into much larger projects than we have ever been able to do. As far as I know, that is actually what sets us apart from our competition, as they aren't able to do projects as big, dense, or high resolution as what we are able to do.

We didn't have any storage administrators previously. However, from what we've seen on other systems, they would require them. Without growing our staff or expanding, we have been able to just bring this solution on without a lot of impact to the staff that we already had.

We have a small number of actual people using it. It's mostly just different computers accessing it. We have anywhere from 60 to 200 different computers accessing it at any given time. We have a small compute cluster that sort of skews the numbers into that 200 range. Right now, we have 95 connections going into it across our different systems.

How are customer service and technical support?

We have had no issues at all with our technical and customer support. The product watches after itself. If there is a hard drive replacement or anything like that, it phones home and Dell EMC lets us know. So far, we have had good luck getting equipment out and getting service on anything that we've needed.

Which solution did I use previously and why did I switch?

With all the other file systems that I have worked on in the past, the 3.4 petabytes that we have right now would require at least two people to work on them in a mostly full-time capacity. Because of PowerScale's simplicity, we're able to just let our infrastructure team manage it, and it's a really low impact to them. Right now, we have two people who manage it along with all the other storage and networking that we have on-premise.

How was the initial setup?

We have added a node to the solution. We added 12 of the H500 nodes to our cluster about a year and a half to two years ago. The process was really painless. We just physically installed the hardware, so rack it and stack it up, then make sure the hard drives are in place and the network connectivity is there. Once we started powering them on, we were able to quickly add them into the cluster, and the extra storage and performance were apparent very quickly.

The initial set up was straightforward. It was similar to adding the hardware where we just kind of rack and stack and get the back-end and front-end networking configured, then we have pretty much everything right there.

The initial deployment was a lot smaller. It only took a day to a day and a half before we got it going. It was only a 300 terabyte cluster at that point. 

What about the implementation team?

Our vendor helped us out with the deployment, so they were able to send one or two of their engineers (depending on whether it was the addition or initial deployment). One person can do it, but two or three people will help get it done pretty quickly.

What was our ROI?

I think we have seen ROI.

What's my experience with pricing, setup cost, and licensing?

The pricing is expensive, but I think it's a fair value because it does manage itself. It definitely is much simpler than any of the other scale-out storage platforms that we've looked at in the past. 

It is a bit higher priced than some of the other systems. I do think it's worth the value, but it's definitely not cheap.

Which other solutions did I evaluate?

We were looking at large scale storage platforms. We had a good relationship with our storage vendor, who recommended this solution. So we took a look at it and did a bit of a demo, working with our software vendor to ensure everything was working fine, then we were off to the races.

We did not evaluate other options in a side-by-side comparison. We did look at a handful of other vendors. However, we were able to tell just by the specifications of what they had that they weren't really going to work for what we needed. We needed to be able to scale the storage quickly, and also have Windows and Linux access to the same data set.

It was critical for us that we could start with a few nodes and scale very large. That was one of the things that really cemented that decision for us to go with PowerScale. We started out with the 300 terabyte system and were pretty sure at the time that the jobs that we were working on were going to get quite a bit larger and would need to have more crews acquiring that data. We were really planning on being able to grow this solution right from the get-go.

The people whom we have talked to about large-scale storage have typically rolled their own with either Ceph or Gluster. However, those require two or three full-time staff, which we are not going to be able to support.

What other advice do I have?

We have been really happy with it. It is one of the few areas in IT that we don't have a headache. We've liked everything that we have used so far with it. We have been very happy with the feature set that it has right now. It's definitely serving our needs.

We have been using the solution since version 7. It fits our use case without us having to add new features on our side. I don't know that we have necessarily seen or needed very many of the features that they have added.

We have the ability to grow or speed up our cluster easily by adding or replacing new nodes. That makes me pretty confident that if we have a significant change in our data, whether it's the number of crews that we have or number of client servers that we need to deploy, then I'm very confident that PowerScale can handle it.

Which deployment model are you using for this solution?

On-premises
Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Keith Bradley
Director of IT at NatureFresh™ Farms
Real User
Top 20
Allows us to see everything as one large volume, instead of having multiple volumes all over the place

Pros and Cons

  • "The single pane of glass for both IT and for the end-user is a valuable feature. On the IT side, I can actually control where things are stored, whether something is stored on solid-state drives or spinning drives... The single pane of glass makes it very easy to use and very easy to understand. We started at 100 terabytes and we moved to 250 and it still feels like the exact same system and we're able to move data as needed."
  • "There still aren't many templates coming out for it. They need to provide templates so we can copy and paste what we've done in the past into new things."

What is our primary use case?

We used it originally for archiving our video storage, and then we expanded it to include user shares. All of our unstructured data has been moved to PowerScale.

How has it helped my organization?

We moved our shares over. Now, instead of taking up a large amount of space on a virtual machine, our shares live on one appliance. The load on that virtual machine is much less, and it makes it easy to future-proof, because now we don't have to move the shares again in the next migration of servers.

We have saved about 30 percent on storage with it. And as we grow, we get more space, meaning the efficiency improves each time we add a node. We went from 75 percent efficiency to 82.5 percent efficiency when we expanded.
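OneFS protection is more sophisticated than plain RAID, but a simplified N+M erasure-coding model shows why usable efficiency can climb as a cluster grows: wider stripes spread the same protection overhead across more data units. The stripe widths below are illustrative, not the reviewer's actual protection settings:

```python
# Simplified N+M erasure-coding model: each stripe holds `data_units`
# of real data plus `parity_units` of protection overhead.
def efficiency(data_units: int, parity_units: int) -> float:
    """Fraction of raw capacity that is usable for data."""
    return data_units / (data_units + parity_units)

# With 2 protection units per stripe, wider stripes waste less space:
print(efficiency(6, 2))   # 0.75  -> 75% usable on a narrow stripe
print(efficiency(14, 2))  # 0.875 -> 87.5% usable on a wider stripe
```

This is the general mechanism behind the reviewer's observation that efficiency improved from 75 percent to 82.5 percent after expanding; the exact figures depend on the cluster's real protection policy.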

The solution provides us with the flexibility to add the right tier of storage at the right time for data that resides at the edge, core, or cloud. That really is nice. We did one use case where we put it out at the edge, and it was nice to have the Isilon at the edge. It really helped improve things. It helped the storage of the cameras, and it helped get the data back to the core in a reasonable time. It allowed us to go from the edge to the core and then up to the cloud, instead of trying to go from the very edge to cloud.

PowerScale also allows us to manage storage without managing RAID groups or migrating volumes between controllers. It simplifies the storage. It allows us to see everything as one large volume instead of having multiple volumes all over the place.

And when it comes to the business value of our data, it allows us to see what's being used and how it's being used, and we can do so much more quickly and efficiently. As a result, we can better evaluate how we're storing the data.

It has also helped us to reduce data silos. We used to have four video servers out there, all storing data. On the home farm, now, we're down to one server storing data in one location, and that includes all the user shares. 

All our data is in one place and that has increased performance. We could never afford to say, "Let's have all this information on solid-state." Instead, we let OneFS decide, based on usage, whether data is stored on a fast drive or a slow drive. It does that automatically in the background, instead of our having to move data manually and then have the user change where they get the information from.

In addition, it has simplified management by consolidating our workloads. It's all done in the same portal now. And while it hasn't reduced our number of storage admins, it has definitely reduced the time we spend looking at it, so we can focus on other efforts. It saves me about five hours a week.

Another benefit is that it allows us to focus on the data rather than where it's stored. Now, we don't have to worry about moving it around from place to place to get efficiencies out of the data. We just have it all in one place. The single interface, the SmartPools policy, decides where it needs to reside.

What is most valuable?

The single pane of glass for both IT and the end-user is a valuable feature. On the IT side, I can actually control where things are stored, whether something is on solid-state drives or spinning drives, as well as the access users get. But the end-user doesn't see any difference between a file and its folder; they don't have to see the difference.

The single pane of glass makes it very easy to use and very easy to understand. We started at 100 terabytes and we moved to 250 and it still feels like the exact same system and we're able to move data as needed. There are no performance issues based on how large the storage is.

Adding a node is as simple as racking and stacking the items. It takes about two to three hours to put it into the rack. Once you have it all wired up, it takes you about an hour or 90 minutes with Dell, just to configure things and make sure it's all working. Then you just redefine your policy for where you want the items stored. We just expanded to include the solid-state, a full F200 node, and we just redefined where we wanted those files stored, whether on the super-fast solid-state or on the slow archival mode. Then, overnight, it ran that script and moved all the files around to help increase performance.
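The overnight re-tiering described above can be pictured as an age-based rule. This is a hypothetical sketch, not the actual SmartPools mechanism, and the 90-day threshold and tier names are invented for illustration:

```python
import time

# Assumed threshold; a real tiering policy would make this configurable.
ARCHIVE_AFTER_DAYS = 90

def choose_tier(last_access_epoch: float, now: float) -> str:
    """Pick a tier by file age: recently used data stays on fast storage."""
    age_days = (now - last_access_epoch) / 86400
    return "archive" if age_days > ARCHIVE_AFTER_DAYS else "flash"

now = time.time()
# A file last touched 120 days ago lands on the slow archive tier;
# one touched a week ago stays on flash.
print(choose_tier(now - 120 * 86400, now))  # archive
print(choose_tier(now - 7 * 86400, now))    # flash
```

A nightly job applying a rule like this across the file system is conceptually what the reviewer describes: redefine the policy, and the files migrate between node types in the background.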

We also use the CloudIQ feature to monitor performance and other data remotely. It gives us better insight into where the data's stored and the access times involved. It gives me a better understanding of what's really being accessed and helps me decide what I can move to slower drives first, and what needs to stay in the front-end and remain very fast.

What needs improvement?

There still aren't many templates coming out for it. They need to provide templates so we can copy and paste what we've done in the past into new things.

The refresh of the interface with version 9 did help a lot of the things. They are at least improving it.

For how long have I used the solution?

I have been using Dell EMC PowerScale for about a year and a half.

What do I think about the stability of the solution?

It's very stable. It's one of the first solutions that I feel comfortable working with during the business day, while people are using it, knowing that I can change things and it's not going to take the system down.

What do I think about the scalability of the solution?

One of the things I like the most about it is the fact that we can scale out now. If we need more space, we order more nodes and it just changes the file structure; it just expands. There are no more individual drives, new arrays, moving things around. It'll just be there.

The future-proofing of what we're doing is a great thing too, because in five years when we're ready to replace that node, just due to its age, we can put the new one in and tell it to archive the old unit. It will move all the files over, in the background, and then we will just remove the old unit. There's no more having to tell users that, "Oh, this whole share is moving and all this stuff is getting done."

How are customer service and technical support?

The technical support has been really good. It's pretty intuitive to put a ticket in, both through their email and through the calling system. It's usually pretty seamless to get to talk to somebody to actually resolve the issue.

Which solution did I use previously and why did I switch?

Before PowerScale it was just MD Storage Arrays, the standard, and the LUNs that you'd have anywhere. We eliminated that with this. We originally started with PowerScale for our video system. We were looking for a better system, in the long-term, to store our archival video and process it. We looked at unstructured data solutions and picked PowerScale for that and for the future-proofing.

Also, because we are a large Dell EMC shop, it allowed us to keep it all on the same platform. In looking to do things on a larger scale, it allowed us future compatibility, much more easily. Its ability to meet unpredictable future storage needs looks great. It feels like a great solution and it was the right direction for us.

How was the initial setup?

The first setup was pretty complex and a little different to do. Once we had the core system set up, the next deployment was much easier. The complexity came from changing our thought process, internally, regarding how we store files and how unstructured data really works, and then, how to efficiently use this.

Our deployment took about a week. We did a slow move-over, and we still continue to move anything we find over to it.

In terms of administration of the solution, for the most part it's just me who does a lot of the core work. All the users on the farm are using the system now, meaning about 350 people are accessing the data on the Isilon.

What about the implementation team?

We used the reseller, Dell EMC, for the deployment, and it was a great experience. They were there to help us and make sure we understood where we were going and what we were doing.

What was our ROI?

The fact that, with PowerScale, we could start with a few nodes and scale very large made it very cost-efficient for us. It allowed us to start out, see what it can do, and evaluate the product before we actually did a larger investment in it. We invested into it again three months later.

I'd like to say we have seen ROI because we're feeling like we're really starting to store data better and understand what's going on, more than we did a year-and-a-half ago.

What's my experience with pricing, setup cost, and licensing?

It's one of those situations where you have to find the right price for you. When we talked to the reseller, we were able to negotiate the right price for what we needed.

Which other solutions did I evaluate?

We looked at HPE and IBM.

I liked the interface of the PowerScale much better than the other ones. It was more intuitive. I logged on and could almost get to work with it right away. I felt like I could hop on and just start using it, whereas with the other ones I felt that there was a larger, steeper learning curve.

What other advice do I have?

Dell EMC keeps adding more features to the solution's OneFS operating system. The last addition was its CloudPools and that allows us to do backups to the public cloud for the data that we want to keep but don't even need on-prem anymore. It turned the system into a never-ending resource. We can now decide what we want to keep, long-term, without having to expand our storage system.

PowerScale is one of those things that will grow in your environment. Once you start with one use, you'll quickly learn that it can do much more. That's a great thing about starting small with it: you can expand very quickly later on.

Which deployment model are you using for this solution?

On-premises
Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Rachel Bauer
Chief Operations Officer & Acting CFO at Like a Photon
Real User
Simplified our storage and enabled our IT team to move from an operational focus to an optimization focus

Pros and Cons

  • "PowerScale allows us to manage storage without managing RAID groups or migrating volumes between controllers. It has really simplified things. We're not having to worry about the underlying infrastructure. That takes care of itself. We just worry about the data. It's really easy for deploying and managing storage at the petabyte scale."

    What is our primary use case?

    Our motivation for investing in PowerScale was to provide scalable, redundant, and reliable storage for our film production pipelines. We're an animation film studio and our live data sits on it. Firstly, we were up to our capacity. We're a growing startup and we need to be able to scale into the future. The second reason we went with it is that we've got a relatively small IT team, so we needed equipment that's reliable and easy to manage. 

    How has it helped my organization?

    It's really set-and-forget. Over the course of six months, we went from our IT team managing, moving, and deleting data on a day-to-day basis to the point where, for the last 12 months, they have not had to touch it. It's really reliable, and the reporting heads off any issues that you might have.

    In terms of the performance improvement, we've described it as moving from a single-lane highway onto a multi-lane freeway. We've still got speed limits in the individual end-user environment, but now more people can move at the same time without it throttling our system. One result is that, on a local test, we went from 61,000 milliseconds down to 5,500 on the PowerScale, which is a massive improvement. We have been able to leverage the features of the PowerScale to optimize that down, and that was while the pandemic was on and we were moving from 100 percent on-premises to 100 percent off-premises.
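As a rough sanity check on the numbers quoted above (purely illustrative; the review does not say what the local test measured), the drop from 61,000 ms to 5,500 ms works out to roughly an 11x speedup:

```python
# Illustrative arithmetic only: the review quotes a local test going
# from 61,000 ms before PowerScale to 5,500 ms on PowerScale.
before_ms = 61_000
after_ms = 5_500

speedup = before_ms / after_ms          # how many times faster
reduction = 1 - after_ms / before_ms    # fraction of time saved

print(f"Speedup: {speedup:.1f}x")       # -> Speedup: 11.1x
print(f"Time saved: {reduction:.1%}")   # -> Time saved: 91.0%
```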

    It has made a massive difference to how our IT team's time is utilized. It has pretty much been able to move from an operational focus, day-to-day, just keeping systems up in the environment, to now having an optimization mindset where they're looking to add new features to our production pipeline. They've got the time now to do that, whereas managing our storage before was a full-time job for them. It has saved me from having to hire one person over the last six months, and our IT team has gotten about 40 percent of its time back. It's a massive difference. We still have the same number of administrators as we had before, but it has allowed them to move from an operational focus to an optimization focus.

    Before, we had disparate storage systems that we were managing separately, and now all our production storage sits within the one environment.

    PowerScale allows us to manage storage without managing RAID groups or migrating volumes between controllers. It has really simplified things. We're not having to worry about the underlying infrastructure. That takes care of itself. We just worry about the data. It's really easy for deploying and managing storage at the petabyte scale.

    It also provides the flexibility to add the right tier of storage at the right time for data that resides at the edge, core, or cloud and that is one of the reasons we chose the solution. We haven't leveraged that as yet, we're not at that point, but we definitely invested in this asset for that reason. Currently, we can build two concurrent projects, but we expect, by leveraging that technology, that we will be able to get to six concurrent projects, which will have a huge business impact.

    Another benefit is that it has allowed us to better understand our storage usage and cost over a project's duration, and that's helping us to better plan and quote for future productions.

    What is most valuable?

    We have started to leverage the data from InsightIQ to be informed when quoting for future productions, and we're getting a better understanding of our usage and costs over a project duration.

    For how long have I used the solution?

    We've had Dell EMC PowerScale for 12 months.

    What do I think about the stability of the solution?

    We haven't had any issues at all. In fact, every day we say, "Oh my God, this is so amazing."

    What do I think about the scalability of the solution?

    We haven't added a node to the solution yet. We plan on putting in A200s, as we move between productions in our franchise. It's a new product for our team, so we're still trying to optimize what we already have. We haven't really looked to use any of their new features.

    We haven't scaled the solution yet, but the reason that we could convince the board to allow us to invest in this technology was the scalability. One of the next challenges that we're going to have is how to store our historical project data. We need a solution that is going to be cost-effective, yet the data will still have to be readily accessible to our current production pipeline. The PowerScale and file pool policies will enable us to utilize the archiving, so that will likely be the next way we scale.

    How are customer service and technical support?

    We have found the post-installation support to be absolutely fantastic. They have helped us leverage the advanced features and that has hugely improved the performance of our custom applications that we have hosted on the PowerScale. Their tech support is great.

    Which solution did I use previously and why did I switch?

    We didn't have a previous solution. We knew we needed a solution because every four to six weeks we were buying a new storage system. We had to make a decision on what to invest in. We reached out to our IT provider, Davichi, and they introduced us to the Dell EMC PowerScale. When we read up on it, it seemed to meet all our needs.

    As a startup, we needed a technology that we could scale with. Secondly, we've got a small IT team, so the equipment needed to be reliable and easy to use, and there had to be additional support available when we needed it. We were also looking for additional features and options that we could leverage, like cross-platform support for SMB and NFS, because we were after high-speed server and workstation access, which was one of our pain points.

    How was the initial setup?

    Dell EMC were incredibly attentive through the deployment process. They met us on site and they took the time to understand our current environment, our current challenges, and they worked with us to make sure that we bought the licenses that were going to meet our needs for today. They also helped us plan for the licensing that we'll need into the future.

    They met with our tech team and spent a day with us mapping out what our requirements were, looking at our environment, and making sure that we had the right networking, so that our foundation was right when we put it in. They physically installed the equipment and they continued to work with us over the next two weeks, just to make sure that everything was right. We put the PowerScale in when we were in production, at the end of a film, and we had no downtime at all. That was a massive concern for us, doing it while we were live in production, but they helped us move all the data across to the new system and we had no downtime.

    We have a local IT company that introduced us to Dell EMC and this product, and they were also a part of that scoping session. They're called Davichi Computer Services, and they're amazing. We ring if we have a problem, and within a few hours someone's out there. If they can't solve it over the phone or can't remote-in, they'll come on site and help us solve the problem. Fortunately, since we put this gear in, we haven't had many issues. But even as we were learning to understand this gear they worked hand-in-hand with our IT team.

    On our side, the deployment required one person, our IT manager. Everyone uses it, as an end-user, in our organization because all the data for our film production pipeline lives on it.

    What was our ROI?

    The total cost of ownership has been definitely worthwhile, hands-down. I had forecast the need to hire another administrator for our tech team and I haven't needed to do that.

    In addition, our IT team is now concentrating on things that help our business grow and help our business make money and help our team achieve a better end product; things we couldn't have done if we'd invested in other products.

    In addition to the standard fees, we had to buy switches. In the scoping sessions, when Dell EMC came on site, we identified what the capacity was in our server room to put the product in. The only thing that we didn't scope for was that we needed an additional UPS because the UPS that we had couldn't hold the load.

    What's my experience with pricing, setup cost, and licensing?

    The only drawback for us is that it's a large upfront investment. This was a huge decision for a startup company to make. It took a bit for us to get over the line on it, but we have not regretted it.

    Which other solutions did I evaluate?

    We looked at another Dell EMC solution. The reasons we went with PowerScale were the simplicity of managing it, the faster write performance compared to RAID systems, the data access optimization features, and, of course, the fact that it's fully redundant, high-performance, and scalable.

    What other advice do I have?

    Make sure you take the time to understand your current environment and what additional infrastructure you might need to support the device. All that planning made it a seamless implementation for us. Sometimes that part of the process felt like it was taking forever, but it ended up being well worthwhile.

    It's allowed us to consolidate everything in one, large, redundant volume, but we expected that. Nothing has gone wrong. Everything's been exactly what we expected, which is wonderful.

    I imagine that as time goes on it will become more valuable, particularly as we get into a world where we're managing archive data. And we are looking to explore the cloud options as well.

    I would rate PowerScale at nine out of 10 because of the greatly increased performance, the capacity, reliability, and the improved maintainability of our storage.

    Which deployment model are you using for this solution?

    On-premises
    Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
    Maurizio Davini
    CTO at Universita' degli Studi di Pisa
    Real User
    Top 5 Leaderboard
    Our storage I/O performance is three times what we had before

    Pros and Cons

    • "This is the best platform that we could have for storage utilization. It is affordable and scalable. At the end of the day, it's something that we find very easy to use."
    • "Some improvements to the NFS support would be of interest to us."

    What is our primary use case?

    We are using Dell EMC PowerScale as a central storage for our virtual HPC infrastructure based on VMware.

    We have several silos today, as our HPC infrastructure is typically divided between bare-metal and virtual configurations. The storage that we use on the various infrastructures differs, as we typically use a storage style that is different from a production facility's. Until now, the request from our internal users was to keep the data separated in different storage silos; converging on a central storage facility for the virtual HPC is the new request. Therefore, we are experimenting to see how it works.

    We have five nodes of F200s. 

    How has it helped my organization?

    This is the best platform that we could have for storage utilization. It is affordable and scalable. At the end of the day, it's something that we find very easy to use. Our administrators and people are very happy with the platform.

    Now, our storage I/O performance is three times what we had before, even if we had not optimized the networking that is hosting the infrastructure. For this reason, our internal users are very happy.

    What is most valuable?

    We know how to deal with the OneFS system very well. 

    It is easy to use and scale. It is probably the easiest, most scalable storage that we have ever used in our infrastructure, and it improves the performance of our infrastructure. We have some other types of storage, but they are not as simple to use as PowerScale.

    The ease of use and installation have cut the time of putting a new storage solution into production. This has been very useful for us.

    What needs improvement?

    Some improvements to the NFS support would be of interest to us. I think that will be available next year.

    For how long have I used the solution?

    We have been using it for less than a year. We just bought the platform in May, then we did a couple of months of testing. Now, it is in production. We bought the solution as soon as it was announced, but you have to take into account the time of the delivery and testing. With the pandemic, everything is unfortunately slower.

    What do I think about the stability of the solution?

    The stability of PowerScale is incredible. It's not so different from Isilon. PowerScale is a sort of Isilon on steroids. It has the same scalability and reliability of the Isilon platform, but now you have a lot of performance, so it is a sort of super Isilon from a customer usage point of view.

    In the year that we have had it in production, the solution has demonstrated stability and performance. It is something that we rely on for our simulation infrastructure.

    There is a team of three who maintain all the infrastructure for PowerScale. It is easy to manage once you have it set up.

    What do I think about the scalability of the solution?

    It scales seamlessly. We started with three nodes, then we added two, and there were no problems. The impressive part is that creating or expanding a PowerScale cluster is now almost immediate. In the past, you needed more time.

    As of today, we have around 15 research groups doing work on the platform, but we have only started the production phase after weeks of testing.

    How are customer service and technical support?

    The technical support is perfect. We are more than satisfied. They are responsive with good turnaround times.

    We have several Dell EMC solutions. We are familiar with their support and are more than happy with it.

    Which solution did I use previously and why did I switch?

    For NFS and CIFS services, we used Isilon and now PowerScale. We have lengthy Isilon experience in our data center. Today, we still have a Dell EMC Isilon H600 hybrid in production, but we decided to go with PowerScale to host our simulation facility. Typically, the workloads that we host on our virtual HPC environment come from engineering and chemical simulations, as well as the latest AI and deep learning workloads.

    We were beta testers of the first Isilon platform, before it was acquired by Dell EMC. Its scalability, ease of use, and performance were key. When PowerScale came out, we didn't try to buy another platform for this kind of work.

    We have been very satisfied with our Isilon experience as a centralized system for HPC. PowerScale is much better than the Isilon that we had before.

    How was the initial setup?

    The platform is really straightforward to install and use, so we do not lose much time setting up the storage and have more time to deal with the data on it.

    The initial deployment took one day to set up. You do have to do some preparation, especially on the networking side; otherwise, the platform is easy and straightforward to set up. The preparation consisted of readying the networking to which the machines would connect, such as the typical networking configuration and VLANs. Then you are ready to go.

    Adding a new node and putting it inside your configured cluster is immediate. For example, when we installed the new PowerScale, the installation of the operating system was very quick. It was really unbelievable. We came from the first generation of Isilon, where the installation of the operating system was not so fast. The F200 gets up and running on OneFS very quickly. If we could have afforded the F600, that would have been even faster. However, what we can afford is the F200, and we are happy with that.

    We have seen an improvement of performance without losing too much time when setting up the new platform.

    What about the implementation team?

    We did the implementation ourselves with the help of the Dell EMC support team, who set up the system. One person, myself, took half a day to set up the infrastructure and another day to install it, then we put the platform into production.

    Our infrastructure is directly managed by us.

    What was our ROI?

    We have improved the performance and reliability of our HPC storage. We are very happy with it. Our systems are typically used for research. The added value is in the performance. Typically, it's not a problem saving money. It is more a problem of how much research you are able to do, how many jobs you're able to afford, and so on. In this sense, PowerScale, in our infrastructure, is really a winning piece. Today, we have three times the performance on the I/O. The gain that we have with the I/O is significant.

    Isilon was an incredible return on investment. I think PowerScale will be the same because it's giving us the performance that we were looking for at an affordable price. 

    What's my experience with pricing, setup cost, and licensing?

    The platform is not cheap. However, on the software side, you can choose what you want to license. You can start your licensing with the features that you need, then, after buying the platform, add some other features.

    We went for the traditional NFS and CIFS platform. We have also licensed the HDFS platform because we want to do something with the HDFS.

    There are some new features, but we are not using all of them because you need licensing for each of them. However, we are seeing that the platform is growing. At the end of the day, when we need some more features, we will license them, knowing that they will be available.

    The F600 machine of PowerScale is much better than what we have. It has NVMe drives and 100 GbE connectivity with the same software.

    I know that you can license also some enterprise class features on the platform, but we are not using those features today.

    Which other solutions did I evaluate?

    I have a small team who analyzed the market, but it is difficult to find competition for PowerScale at the same performance and price. Something that was important in our decision is that you have to teach your technicians any new platform, and that takes time. In this case, the integration of the PowerScale was almost seamless for both the infrastructure and our internal technicians.

    Apart from Isilon, we are using DDN. We also have some parallel file systems that we use in production with our HPC. However, PowerScale is really the easiest to use.

    What other advice do I have?

    I would recommend going for this solution.

    PowerScale is already at the edge of the technology. If you look at what you find on the market today, from a technology point of view, PowerScale's hardware and software are at the top.

    Eighty percent of our operations are on-premises, especially for HPC, but our organization is moving to the cloud for some services.

    We have discussed with Dell EMC their roadmap of the platform and are very interested in it. We hope we will be able to afford the new features that will come up, like the NVMe nodes.

    We have some projects using the S3 protocol, but not on PowerScale. They are on the old Isilon for HDFS.

    We use the CloudIQ feature to monitor performance and other data remotely. We have two platforms on CloudIQ: PowerScale and PowerStore. We haven't used the platform enough yet for it to be very useful. We have typically been users of the InsightIQ software to monitor infrastructure. Now we are using CloudIQ, but we do not have much experience with it yet.

    We are not thinking about using it as an enterprise platform. However, we do see increasing our usage over time.

    I would rate this solution as a 10 out of 10.

    Which deployment model are you using for this solution?

    On-premises
    Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
    Bill Sharp
    Senior Vice President, Product Development & Strategy at EarthCam, Inc.
    Real User
    Top 20
    Everything is consolidated, simplifying management and decreasing time spent administering the system

    Pros and Cons

    • "For maximizing storage utilization, PowerScale is great. When you write the data to it, it spreads it out to all the nodes, so you get all the performance from the entire pool."
    • "You plug in a new node and data starts migrating over to it, and it spreads out the load. We've added multiple nodes to the system since deploying it. The process is pretty seamless, and we are able to do it with no downtime. It's a very easy process to do."
    • "There is room for improvement with the updates. It can take a significant amount of time to do a major OS update. However, even though it takes multiple reboots, the cluster stays up. If we want to apply a newer version of the OS, we have to roll back some of the patches so that we can upgrade. It requires a few reboots just to do that. The cluster doesn't come down, everything is still running, but it's time-consuming, at times."

    What is our primary use case?

    We’re the world-leaders in webcam technology, content, and services. We do high-resolution imaging from cameras. We have millions of camera images a month coming into our network from our systems in the field. We store all of that image data, and then we edit those images into time-lapse movies.

    How has it helped my organization?

    We've had an 82 percent reduction in our systems administration resources.

    One of the things we have also noticed is about a 20 percent reduction in our video processing time. Our video editors are able to work on editing natively, on the system, and that cuts down on a lot of time that was required to move data around. It helps their workflow. 

    It's also giving us double the capacity in less space. We get about 26 times greater density, compared to our previous storage systems.

    In addition, Dell EMC keeps adding new features and improving on existing ones. When upgrading from the old generation, the redundancy was restructured with the domains and different node schemes, giving us more fault tolerance.

    In terms of flexibility, we have two different types of nodes, and we're able to change the performance on directories, depending on the usage. It allows us to manage the entire system without having to worry about specific LUNs. It literally takes me just a few minutes to configure something and apply it.

    As we expand and have to add new things to our product line, we're able to scale very well, because we have visibility on our storage, our capacity, and our needs. It has definitely helped us from a business standpoint in that we don't have to be concerned about our storage environment. We always know where we stand.

    PowerScale has also helped us to eliminate data silos. Everything is consolidated and, as a result, it has simplified how we manage things and how much time we spend administering the system. With all our data in one place, we don't have to manage different types of storage systems. Everything is just a single brand. We do have different nodes, but they all get administered the same way, so we don't have to relearn different things, such as how to manage the RAIDs, RAID groups, and different protocols.

    The solution has definitely freed up a lot of time. We used to spend a lot of time on our previous system. PowerScale allows us to focus on data management rather than storage management and helps us get the most out of our data.

    What is most valuable?

    The most important things for us are the reliability and the ability to cut down on our system administration resources. It's very easy to manage, and we have very good visibility on how the storage system is being utilized. In addition to the reliability, it's very easy to work with and it's very fast. Its sustained throughput is probably 100 times faster than previous systems.

    For maximizing storage utilization, PowerScale is great. When you write the data to it, it spreads it out to all the nodes, so you get all the performance from the entire pool.

    In addition, managing storage at the petabyte scale is very easy if you go through the user interface. Everything is there. But if we want to do more complex things, we can use the CLI. Since we're very familiar with the Unix/Linux CLI, we feel comfortable making configuration changes there.

    Another thing we particularly like is the documentation available, and how you can self-troubleshoot a lot of things. I like to know why something does not work and Dell EMC provides extensive documentation with technical details of bugs or technical shortcomings.

    What needs improvement?

    There is room for improvement with the updates. It can take a significant amount of time to do a major OS update. However, even though it takes multiple reboots, the cluster stays up. If we want to apply a newer version of the OS, we have to roll back some of the patches so that we can upgrade. It requires a few reboots just to do that. The cluster doesn't come down, everything is still running, but it's time-consuming, at times.

    For how long have I used the solution?

    We have been using PowerScale for over five years.

    What do I think about the stability of the solution?

    We have five nines of uptime: 99.999 percent. We have almost no downtime with the system.
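To put "five nines" in perspective, 99.999 percent availability leaves a downtime budget of only about five minutes per year. A minimal sketch of that arithmetic:

```python
# Downtime budget implied by "five nines" (99.999%) availability.
availability = 0.99999
minutes_per_year = 365 * 24 * 60  # 525,600 minutes in a non-leap year

downtime_min_per_year = minutes_per_year * (1 - availability)
print(f"Allowed downtime: {downtime_min_per_year:.2f} minutes/year")   # -> ~5.26
print(f"That is about {downtime_min_per_year * 60 / 52:.1f} seconds/week")  # -> ~6.1
```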

    What do I think about the scalability of the solution?

    The scalability is great. You plug in a new node and data starts migrating over to spread out the load. We've added multiple nodes to the system since deploying it. The process is pretty seamless, and we are able to do it with no downtime. It's a very easy process to do.

    The fact that we could start with a few nodes and scale very large was one of the great things with this solution. With the other systems you could add "Bricks"—that's what they call them—but you had to set up LUNs, and we spent too much time managing that part of the system. Here, you just add it in and everything just scales up. Being able to add new nodes and increase the storage without having to redo the storage pool is great. That's one of the reasons we went with PowerScale. That was definitely a big selling point.

    We're relying on it completely. I don't know if there's anything that we're not using it for. We're using it in production at full capacity.

    We’re confident about the solution's ability to meet unpredictable future storage needs. I don't think there's been anything that we've needed so far that they haven't been able to accommodate. We're planning on staying with the platform for the future.

    How are customer service and technical support?

    I've used their technical support a few times when I had certain random issues. Sometimes the issue was Windows-related. Even when they were not able to give me an answer immediately, three hours later, after researching things, they got back to me with the correct answer and technical details on why the issue was happening. To me, that's great. That's something our previous vendor wasn't doing.

    Which solution did I use previously and why did I switch?

    I've been with the company for 20 years, and we have had various enterprise-level storage systems over those years, but the immediate predecessor was Pillar Data Systems. The primary reason we switched to PowerScale was its ability to handle the types of data that we manage. We have over a billion very small files, from one megabyte to 24 megabytes, that we are writing to the system continuously. It's an archival storage process, and PowerScale was very well suited to that type of environment.

    What we needed was to simplify our entire system: to have higher throughput, more redundancy, and the ability to scale without having to recreate different storage pools or LUNs, like we were used to doing.

    We went with PowerScale for the reliability, the scalability, and the ease of management. 

    How was the initial setup?

    We had a lot of practice with the simulator, so once we actually had the hardware and the real system in here, we were already familiar with how to manage and do a lot of the configuration. That's something that is not available with other vendors or other systems.

    Moving from the old storage, which was from another vendor, was a significant bottleneck and took months.

    Upgrading from the older generation Isilon was seamless. We just plugged in the new generation nodes and told the OS to evacuate the data from the old nodes and the data migrated without downtime.

    In terms of users of the system, on the management side it's our systems administration teams, so there are a handful of people involved. The people actually using the storage are our customers and our internal teams.

    What about the implementation team?

    Techs from EMC came over and helped us with the physical implementation, while a remote team helped us with configuration and data migration. Our experience with them was good.

    What other advice do I have?

    We would highly recommend PowerScale. We've been very happy with our overall experience.

    Which deployment model are you using for this solution?

    On-premises
    Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
    SL
    Senior Consultant at a tech company with 11-50 employees
    Reseller
    Top 5
    Good stability and performance with the capability to scale

    Pros and Cons

    • "The stability of the solution is good."
    • "The solution can be a bit complex for those not well versed in the technology."

    What is most valuable?

    The solution is easy to use.

    The product has global name recognition.

    The performance, overall, is quite impressive.

    The stability of the solution is good.

    What needs improvement?

    The solution lacks a cloud version.

    It would be useful if the solution could connect to AWS or Google Cloud effectively, or have an AWS version. With the global lockdown conditions, you can't get to the site. It would be easier if it were connected to the cloud.

    The solution can be a bit complex for those not well versed in the technology.

    For how long have I used the solution?

    I've been using this solution for a few years now.

    What do I think about the stability of the solution?

    So far, the solution has been very stable for us. We rarely have issues with Isilon itself, although every once in a while we do run into a few stability issues.

    What do I think about the scalability of the solution?

    The solution is scalable; however, the process is quite complex, so it's not exactly straightforward. For organizations that have a lot of components to upgrade, it's good to have support on hand to help. That said, the solution can scale if a company needs it to.

    We have a few hundred users on the solution right now.

    How are customer service and technical support?

    The technical support aspect of the solution has been good. We've been satisfied with their level of attention and find them to be knowledgeable and helpful.

    Which solution did I use previously and why did I switch?

    We didn't previously use a different solution; however, we are now looking to switch, simply because we would like to migrate to the cloud.

    How was the initial setup?

    The initial setup was a bit complex for personnel not knowledgeable about the solution. Working from just the manual, it takes a while to understand how everything works. It's not exactly straightforward.

    What's my experience with pricing, setup cost, and licensing?

    While the initial setup isn't too expensive, the total cost can grow depending on how many machines you have or how large your deployment is.

    What other advice do I have?

    We're a reseller of Isilon products.

    I'm not sure which version of the solution we are using. It's one of the version seven releases.

    Right now, we are researching a move from on-premises to the cloud, and we want to know whether there is something more convenient than Isilon for running on a cloud server.

    For example, with EMC, if you have something on-premises and want a cloud version, the recommendation is to move to ECS instead. The company finds this concept a bit confusing, so they are looking around for something that is similarly easy to use yet offers a cloud version as an option.

    As it stands now, I'd advise new users to use Dell EMC's services and learn on the job. It will be faster to get set up and become able to handle the solution.

    It's still a fairly good solution. Overall, I'd rate it eight out of ten.

    Which deployment model are you using for this solution?

    On-premises
    Disclosure: My company has a business relationship with this vendor other than being a customer: reseller