2019-01-03T20:10:00Z

What needs improvement with Microsoft Storage Spaces Direct?


Please share with the community what you think needs improvement with Microsoft Storage Spaces Direct.

What are its weaknesses? What would you like to see changed in a future version?

Guest
44 Answers

Top 20 · Real User

There is room for improvement in their network capabilities. Right now, if I'm going with on-prem Storage Spaces Direct, I need to have a ToR switch. They have a requirement: if I'm going for more nodes, they need RDMA traffic — which means RoCE traffic — and that can only go through a ToR switch. All the other OEMs that have hyper-converged offerings do not require a ToR switch; I can just plug into a core or distribution switch. The main reason people are moving away from the existing, traditional converged solution is to replace that ToR switch.

2020-03-19T13:00:00Z
Top 5 · Real User

Actually, the technology is heading in the right direction, so it is a little difficult to criticize the product itself for what we use it for. I think the online documentation needs a lot of work, and so do the sizing tools. Considering what this product is for, those tools are a very important part of it.

I know some of the features that will be coming out because I have the opportunity to check in with some of the people on the product team. For example, it will support stretch clusters, which means I can have two nodes in one location and two nodes in another location belonging to the same cluster. Another feature beyond that is the ability to connect with the cloud, which adds some processing capabilities that are amazing. This is also something that many of the competitors cannot say they have. They just don't have the same capabilities in terms of reach with the services that Microsoft currently has in the cloud. Microsoft's reach in the cloud is really very extensive.

2020-02-13T07:50:00Z
Top 5 · Leaderboard · Real User

With this solution, you have to invest much more in hardware than is required with some other solutions. For example, costly SSD drives are needed for caching. The overall cost of this solution needs to be reduced. More optimization could also be done in terms of mirroring: in order to have 20 terabytes of usable storage, you have to buy about 60 terabytes of raw capacity.
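As a rough illustration of the mirroring overhead this reviewer describes, here is a minimal sketch (my own, not from the review) that assumes a three-way mirror, the common Storage Spaces Direct resiliency setting, where every block is stored three times:

```python
# Rough capacity math for a three-way mirror (an assumed resiliency setting,
# not confirmed by the reviewer): each block is written to three different
# drives, so usable capacity is roughly one third of raw capacity.

def raw_capacity_needed(usable_tb: float, data_copies: int = 3) -> float:
    """Raw terabytes required to expose the given usable terabytes."""
    return usable_tb * data_copies

if __name__ == "__main__":
    usable = 20  # TB of usable storage the reviewer wants
    raw = raw_capacity_needed(usable)
    print(f"~{raw:.0f} TB of raw capacity for {usable} TB usable (3-way mirror)")
```

Under that assumption, 20 TB usable works out to roughly 60 TB raw, which matches the figure in the review; parity (erasure-coded) layouts reduce the overhead at some cost in write performance.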

2020-01-26T09:26:00Z
Real User

RDMA ease of deployment needs improvement. The performance benefits only came with all the new technology, and not only was RDMA a big requirement, it was also the most challenging piece to be fully confident in. We used RoCEv2 and switched to iWARP a year later.

To expand on our challenges: we have the hosts connected via multiple 40Gb connections to Cisco 9396 switches with vPC. We had a lot of experience with Fibre Channel in the past, but using Ethernet for storage was a change we didn't have much practical experience with. Microsoft strongly recommends using RDMA, and we decided to use RoCEv2. After it was all set up, we could see the performance counters confirm that RDMA was being used, but that doesn't mean DCB is working 100% correctly. There aren't a lot of great articles published on end-to-end PFC and DCB configuration, because it depends on your NICs, host OS, switches, etc. Piecing together learnings from Mellanox, Microsoft, and Cisco documents, we believed we had it configured correctly, but we never had 100% confidence, and it is very difficult to find a partner willing to put a stamp of certification on it confirming they believed it was 100% configured correctly (Cisco vPC, DCB/DCBX, LLDP, PFC, SMB Multichannel, RDMA, etc., all in the mix).

When we experienced some unexplained issues that pointed to intermittent network problems, which some errors suggested could be related to RDMA, it was difficult to troubleshoot. When we switched to LACP with vPC (which doesn't work with RDMA/RoCE, so we disabled it), the issues didn't reoccur, but the performance became much less consistent. When we switched to iWARP, the performance was reliably good again and the issues didn't reoccur. It's difficult to be sure where the issue was; my gut says it was the PFC configuration on the Cisco switches, and with iWARP, DCB doesn't need to be 100% right because it uses TCP rather than PFC to tolerate certain network conditions. I think we would have seen similar issues with vSAN, but I can't be certain... it may be more tolerant of the edge cases.

2019-01-03T20:10:00Z