Please share with the community what you think needs improvement with StarWind Virtual SAN.
What are its weaknesses? What would you like to see changed in a future version?
The StarWind Management Console is available only for Microsoft Windows/Windows Server, and should also be available for Linux and macOS, as it would reduce implementation costs.
Perhaps more reporting features on the utilization, usage, and performance of the configured high-availability images and underlying physical disks would be helpful.
The console is something that I feel could be improved. There is nothing technically wrong with it, but it could be jazzed up and/or made a little more intuitive. Perhaps introduce a few right-click options, as that's my general go-to approach as opposed to searching for specific menu items. I also feel that, when dealing with a multitude of tapes, better formatting or color-coding would help locate or identify tapes more easily than scrolling through the list. It would also help if the list's screen real estate could be increased. Other than that, I have no complaints or issues with the software.
If there was a way to automatically put disks in maintenance mode when shutting the host down and exit maintenance mode automatically, that would simplify things. I'm not sure how they would implement that, but I think it would be possible. Another feature they could add would be better integration with Windows networking and disk management tools. Maybe something as simple as shortcuts to those applications within the StarWind management console. Perhaps automatic recommendations when setting up hosts/disks would be helpful as well.
It would be nice to add the ability to use raw partitions instead of file containers. I think that using raw partitions would improve performance by eliminating the additional file system needed to host the container.
This is a tough question, and one that I wouldn't have expected, for the simple fact that I decided to write about a positive experience with StarWind. Now, if I want to be picky, I could simply say that the price is not cheap. I started with the basic package, for 4 TB. Then I had to purchase an upgrade to go up to 8 TB. It now looks like I will have to pay the remaining fee to move to unlimited data. Should I have done so first? Maybe, but things change very rapidly, and what wasn't needed not too long ago is needed now. So overall: price. I would also need to pay extra to have support during the weekend; otherwise, you have support only from Monday to Friday. That can be inconvenient if your business works 24-hour shifts from Sunday to Friday.
I think the setup could be streamlined a bit. I was already familiar with iSCSI and setting up shared storage, but getting from configuring the RAID array on the local host machine, and making sure all the correct values are set for the best performance, was a bit finicky. Then, you had to prepare the appliances on each host and again make sure your VMDKs were properly formatted and then mounted correctly so they could be shared as iSCSI disks. It just felt like a lot of opportunities for a user to make big mistakes that would affect the end performance. I was very impressed with the tech who walked me through the whole process, as I'm just not sure I would have gotten there on my own.
Built-in notifications would really help and, as I understand it, their new release has this now. A mobile app to sync up for overview and status would really be helpful. Also, it would be helpful if the software had a few more guides and links/videos on how-to's. An "Update available" notification within the software would also be helpful, and a guided wizard to do the upgrade properly would be nice and efficient. A better visual of the SAN storage versus the actual storage and how it is used would be good, especially when it comes to where the files are located on the disk.
* Easy migration to the ZFS system being introduced in a new version of StarWind. Migrations look complicated, as this restructures the whole architecture at the RAID level, but it could be a good option just to have it and let the user decide on this new migration feature; based on our experience, ZFS systems are fast and secure. * An Android app for monitoring and receiving push notifications as alarms, or monitoring I/O from any mobile device, would be a nice feature to have, as we are not always at our desks.
Management of VSAN itself could be improved. A Web UI for management would be great rather than an application installation. StarWind is testing a command center virtual appliance that I have installed in my environment. It is very much a step in the right direction to make StarWind Virtual SAN management easier, and even Hyper-V management as a whole. Something Hyper-V has lacked for years is a good Web UI to manage a host or cluster. StarWind has taken it upon themselves to help with this, as well as to manage the Virtual SAN, in a single pane of glass.
Management tools could be improved; sometimes they feel slow and confusing to use. A native web interface could also be an option. I would love to see, in the future, a port of the software to a general Linux distribution like Red Hat or Ubuntu in order to avoid Windows license costs. I would also like to see features like erasure coding implemented. In the VSAN software, I would like to see some improvements in the storage pools (eliminate the use of a file as a data container and use a raw partition).
The product can be improved: * There is no good way to see how all networks are distributed in the console. That said, once they are created and allocated, everything seems to run well. * If we stay with Virtual SAN instead of managed devices, it would be nice to have more automated tools to manage iSCSI connections in Windows, which can be a bit confusing at first. * We would like the documentation to be more complete. Most items are covered, but if you don't know something, you may need to contact their support.
When StarWind Virtual SAN for vSphere nodes go offline unexpectedly, the nodes have to fully re-sync their disks, which takes a long time. We had a power failure, and when both nodes came back online, VMware vSphere didn't see the StarWind disks until I manually re-scanned them from the ESXi administration console, even though it should happen automatically. Maybe I had to wait until all the StarWind disks were fully synchronized, but in our case that took 8+ hours. There are no security and bug fixes without an active support agreement!
A deduplication feature inside of a CSV would be very useful. I'm sure there are a lot of duplicated blocks on a CSV that has 75 Windows Server VMs. StarWind relies on the underlying OS to manage the "SAN files", whether that is a RAID volume, software RAID (such as LVM), etc. It would be useful if StarWind could incorporate the actual physical drive management inside the solution, similar to Storage Spaces Direct. A web interface for management and StarWind SNMP MIBs would also be very useful.
I would like to see all the network adapters in the console with their assigned roles (sync, heartbeat, and iSCSI), their link speeds, and their real-time load. In the next release, they could add graphs of the real-time load, storage speed, and interface speed. Of course, these can be viewed in other places, but in the event of a malfunction or during troubleshooting, this would be convenient.
If a node goes offline unexpectedly, a re-sync between the nodes takes place in order to ensure data integrity. This sync, though necessary, can take several hours.
New versions of this solution should be tested more thoroughly before the release, as we had a few problems with one version due to a bug.
If we could get more within its price, it would be useful to: * Be able to collect operational logs with external dedicated syslog or SNMP servers; * Be able to encrypt separate LUNs/iSCSI targets with a key stored on external KMS/KMIP servers; * Be able to run StarWind vSAN on top of any free UNIX operating system to build a resilient iSCSI/FTP/SMB storage system.
For the StarWind VSA for vSphere solution, I would like to see a simpler and more automated virtual machine installation process in terms of network settings. The areas where this solution should be improved are: * Use as a node server without RAID volumes, to ensure a longer service life for the equipment and faster recovery of the complex; * A separate server responsible for the main node, which is synchronized in case of failure of one of the nodes; * SSD write caching suitable for industrial operation; * Monitoring of the status of server equipment, with the ability to programmatically handle the offline state of disks; * A way around the network speed restrictions when using VMware vSphere VMXNET3 virtual adapters.
If there was one feature I would like to see, it would be a built-in subsystem for managing UPS-triggered shutdown procedures: a way to initiate VM shutdown on all host servers, shut down the host servers, then put the fault-tolerant mirroring in standby, and finally shut down the StarWind SANs.
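The sequence this reviewer describes can be sketched as an ordered orchestration script. The following is a minimal, hypothetical Python sketch; none of these helper functions are real StarWind or hypervisor APIs, and each stub would need to be replaced by vendor-specific calls (PowerShell, SSH, IPMI, and so on).

```python
# Hypothetical sketch of a UPS-triggered shutdown sequence.
# Every helper below is a labeled stand-in, not a real StarWind API.

def shutdown_vms(host):       # stub: stop all guest VMs on a host
    return f"vms-down:{host}"

def shutdown_host(host):      # stub: power off the hypervisor host
    return f"host-down:{host}"

def mirror_standby(node):     # stub: put fault-tolerant mirroring in standby
    return f"standby:{node}"

def shutdown_san(node):       # stub: shut down the StarWind SAN node last
    return f"san-down:{node}"

def ups_shutdown(hosts, san_nodes):
    """Run the steps strictly in order: VMs, hosts, mirroring, SANs."""
    log = []
    for h in hosts:
        log.append(shutdown_vms(h))
    for h in hosts:
        log.append(shutdown_host(h))
    for n in san_nodes:
        log.append(mirror_standby(n))
    for n in san_nodes:
        log.append(shutdown_san(n))
    return log
```

The point of the sketch is the strict ordering: the SAN nodes must go down only after every consumer of their storage is already off.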
I would like to see some additional, and possibly clearer, implementation videos, with slower and more detailed descriptions of the various implementation steps for someone who is unfamiliar with high availability and failover clustering in Windows.
I wish there were online support, because email responses take a long time and a faster solution should be found.
Performance in the compute-and-storage-separated configuration needs to be improved. The virtualization layers, and not having the storage on the same node as compute, take away a lot of the IOPS we could have in an HCI scenario, but we grew out of that.
This solution should be more self-sufficient, running without creating domains or failover clusters.
Maybe in the future, replication will be supported by more cloud providers. I don't think there are features that this product doesn't have. It has all of the things that a system engineer would want.
The product could include a simpler way of synchronizing after a forced shutdown, as the current process has a few extra steps to check that the hosts have synchronized, and this could be automated. The support service should include 24x7 chat support in its basic plan, in addition to email.
Perhaps the developer should refine product management through PowerShell. It is not entirely clear, and there is not much documentation.
The system performs as expected, but we're always looking for performance improvements regarding the best utilization of NVMe disks.
In testing, we found some features were missing in the Linux appliance. So, for full functionality, you will want to use the Windows version. Server-side snapshots are one thing the Linux appliance can't do yet. I hope the feature is added in the next update release. Adding storage after it's all set up is a little difficult.
We would like to see the documentation more fully developed. Most subjects are covered but if you do not know something then you may need to contact their support.
The only thing that I have any difficulty with is that, in order to perform upgrades, the SANs must be detached from the hosts first. In my industry, maintaining one hundred percent uptime means that I must either migrate everything off of the existing SANs so I can perform the upgrade, or shut everything down, detach the SANs, and then reconnect them after the upgrade is complete. At this point, while I am running an older version, the features still work within the latest versions of VMware, so I won't see it as a problem until I add more storage to allow for the proper migration and upgrade to happen.
Some configuration options still demand a service restart, for example, changing cache settings. Multi-tiering needs to be improved; currently, there is only an option to use SSDs for cache.
The ability to manage the SAN with vSphere would be nice. It would also be of benefit to have more vSAN-like features, like not having to worry about creating multiple volumes. It would be nice if we could designate pools, or tiers, for storage of different speeds, and then assign rules to new VMs that would automatically place them into the proper pool.
I would like to see full support for iSER. At this time, technical support does not recommend using iSER on NICs except on internal (StarWind node-to-StarWind node) connections. High availability for direct-attached hardware drives (without a virtual disk layer) could be useful to increase the performance of a StarWind virtual storage cluster.
I would like to see more monitoring and alert tools. StarWind offers a management console to configure and monitor the servers; setting up SMTP alerts would be a plus.
This product could be improved with the inclusion of new health-check procedures. Operations performed in distributed file systems are complex, and in case of a network outage, admins rarely know what is going on. Any kind of simple UI for admins is better than nothing: something that would give admins fast, easily understandable information about status, rebuild progress, etc.
I'd like for it to be more user-friendly in the future.
In all areas, the product could be made faster. If any additional features appear, we would appreciate having the service inform us. We are quite satisfied with the things we have already received, but we also would be glad to see other inspiring features from a new generation.
StarWind currently uses a native Windows application for management. There is no web-based GUI at this time. This may be a choice made to reduce the services running on the storage nodes, but a web GUI does seem like it would be a good alternative.
It would help us if the vendor continued to release software updates for earlier versions of the Windows operating systems, for example, Windows Server 2008 R2. We are staying on earlier Microsoft products while there is still support, because switching to new software would require new hardware.
The cluster configuration is time-consuming and tedious. StarWind's guide for a two-node cluster can be found at (https://www.starwindsoftware.com/resource-library/starwind-virtual-san-hyperconverged-2-node-scenario-with-hyper-v-server-2016) and it is a long document to work through. I am not sure if it is possible to make the configuration process more streamlined on generic hardware. This only affects the cluster configuration, not its production use. I am very satisfied with the StarWind features so far and not interested in additional ones.
They recommend RAID 10 for HDDs, which reduces the usable storage capacity. If they could improve this area, it would be of great benefit in terms of storage.
I would like to have an easy way to get automatic notifications about issues, even when I'm away. The system can send e-mails, but only through an internal mail system: there is simply no way for the e-mail sender to log on to an SMTP server. Several years ago, some SMTP servers didn't require the user to log on, but now it's impossible to send e-mail without logging in. So if I had my own company-internal e-mail server, I could use it; otherwise not. That's not enough for me.
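For context, authenticated SMTP submission is only a few lines in any modern mail library. Here is a minimal Python sketch of the logon-capable sending the reviewer is asking for; the server name, port, and credentials are placeholders.

```python
import smtplib
from email.message import EmailMessage

def build_alert(sender, recipient, subject, body):
    """Assemble a plain-text alert message."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

def send_alert(msg, host, port, user, password):
    """Submit over an authenticated, TLS-protected SMTP session --
    the logon step the reviewer says the product cannot perform."""
    with smtplib.SMTP(host, port) as smtp:
        smtp.starttls()
        smtp.login(user, password)
        smtp.send_message(msg)
```

Usage would be `send_alert(build_alert(...), "smtp.example.com", 587, user, password)`, where the hostname and port 587 (the standard mail-submission port) are illustrative.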
A central management webpage should be a must-have. We have three different sites where we use VSAN, and a single webpage to manage everything is necessary: for example, a website where I can add each cluster and manage everything in one place, rather than on each physical server.
It would be great if the Linux version of the management console offered the same features as Windows.
The price is a bit more expensive than comparable products.
It would be helpful to have a little more insight into what kind of performance the VSAN cluster is utilizing; something that would be more proactive on our side, versus their ProActive Support.
* We would like the price to be lower. * In the next releases, we would like a new improved interface.
There is a limit on HA storage for standard and professional versions which is too low to be very useful for any but the smallest of SMBs or startups. Most SMBs we work with have more than 50TB of data, so the 4TB and 8TB limits are nothing more than a sales gimmick. The enterprise-level supports unlimited HA storage but starts competing with Windows Server (S2D) at the price point. The hardware requirement for S2D, however, puts the Windows HCI out of reach for most SMBs.
The documentation could be a little more concise, but, for the most part, it just works.
For improvement, I would like to see how the software determines which networks to use for which purpose. It seems like the naming terminology changes a bit from here to there. When I access the console on the computer, where is it going in through: * The computer's connections? * The heartbeat connection? * The iSCSI connection? It is a little odd as far as making sure those networks are isolated just for their function. On the console, there is no good way to see how all the networks are allocated. Other than that, once they are set up and allocated, everything seems to run nicely. I just don't want, e.g., my heartbeat network bleeding into other things, like the iSCSI. For this market, in general, it would be nice if I could go to a website where they had all the pricing listed comparatively, then maybe I could shop around.
I'm sure it needs bug fixes, and there are new features coming down the pipe, but it works great.
If there are domain controllers inside the cluster, there needs to be some sort of logic allowing them to boot independently so all the rest of the domain clients can gain the authority they need to come online. We made that mistake at first. We have since moved one of our domain controllers out of the cluster, so everything can obtain whatever authentication it needs on the initial boot. Ultimately, Microsoft says they support it, but we would like to see all of our domain controllers running within the cluster, too. We don't want to have additional hardware just to run domain controllers.
Initially, when we first started, the sync was horrible. It would take about 13 hours. However, they have since improved on it. It also depends on the pipe. We had a small pipe back then, so we would do things at around 8:00 AM, and then by 4:00 or 5:00 the next morning, everything would be back on. Once we upgraded the pipe between them, it was synced within half an hour. StarWind made us understand that we had a small pipe and that our drives were not SSD, but SATA. All these things contributed; they have tons of clients, so if we were the only ones having this issue, then the issue was ours. Once we made the changes, we saw an amazing improvement in the way it synced. Instead of 13 hours, it took five to ten minutes to complete.

For improvement, there should be simpler, user-friendly training about how the system works. I have dabbled in it, but if I need to do anything, I'd rather pick up the phone, call them, and say, "This is what I need to do," and they're more than happy to help. While they do have help documentation, there is a relatively steep learning curve. You need to take into consideration the amount of data that you are syncing, as it will come into play: the amount of data that needs to sync between the two devices and the bandwidth of the pipe.

With data verification, I would like to know how the solution validates the data being synced between two VSANs. If data is corrupt, how does it determine not to sync it? How does any software determine that the data is bad, and then how does it fix it? If data gets corrupted on one server, we don't want to transfer it to the other server.
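The before-and-after numbers above are consistent with simple arithmetic: a full re-sync takes roughly the data size divided by the effective link throughput. A rough Python estimator follows; the 0.7 efficiency factor is an illustrative assumption covering protocol overhead and slow SATA source disks, not a StarWind figure.

```python
def full_resync_hours(data_tb, link_gbps, efficiency=0.7):
    """Estimate full re-sync time: data size over effective link throughput.

    efficiency is an assumed derating factor for protocol overhead and
    slow source disks (e.g. SATA); it is illustrative, not measured.
    """
    data_bits = data_tb * 8 * 1000**4          # decimal TB -> bits
    rate_bps = link_gbps * 1e9 * efficiency    # effective bits per second
    return data_bits / rate_bps / 3600
```

With these assumptions, about 4 TB over a 1 Gbps link works out to roughly 13 hours, close to what the reviewer saw, while the same data over a 10 Gbps pipe drops to a bit over one hour.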
Encryption: I would love to see "at rest" data encryption as a new feature for organizations like mine looking for ways to simplify this mandated compliance issue.
* A detailed performance monitoring of the storage system. * A tool/wizard which analyzes the installed system and its configuration and gives a recommendation for improvement.
* Linux version for vSphere could incorporate deduplication. * Also, the documentation for configuring alerts in vSphere is not simple.
* The documentation is sub-par. The pre-sales documentation and information is sub-par. * StarWind is not cheap. It is not hard to set up but not a cakewalk either. Having tech support set you up is certainly a good value. I would say the performance cost ratio is great, if not fantastic. Be sure to plan well and ask lots of questions. * Next release needs to include complete documentation. Even if it's download only or even optional.
I would like additional documentation regarding possible networking configurations with 10GbE switching.
* It would be great if it provided thin-provisioned virtual disks. * It should reclaim white space after big files are deleted.
* I would like to replace the Windows nodes with Linux. As of today, the Linux nodes are ready for production. * The ability to test the virtual storage network and storage speed from the StarWind Management Console. * The ability to check network card settings (such as MTU), VMware settings, and other settings needed to follow StarWind best practices. I would like to have something like a StarWind Best Practices Analyzer.
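The "Best Practices Analyzer" idea could be as simple as diffing collected node settings against a table of expected values. A small Python sketch follows; the expected values here (jumbo frames, a dedicated sync link) are common iSCSI tuning advice, not official StarWind requirements, and the setting keys are hypothetical.

```python
# Illustrative sketch of a best-practices check: compare collected node
# settings against expected values. Keys and targets are assumptions.

BEST_PRACTICES = {
    "sync_nic_mtu": 9000,         # jumbo frames on the sync network
    "iscsi_nic_mtu": 9000,        # jumbo frames on the iSCSI network
    "dedicated_sync_link": True,  # sync traffic isolated from iSCSI
}

def analyze(node_settings):
    """Return human-readable findings for every out-of-spec setting."""
    findings = []
    for key, expected in BEST_PRACTICES.items():
        actual = node_settings.get(key)
        if actual != expected:
            findings.append(f"{key}: expected {expected}, found {actual}")
    return findings
```

A real analyzer would populate `node_settings` from the OS and hypervisor (e.g. NIC properties, VMware host settings) rather than from a hand-built dictionary.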
One area that could be improved is the reconnection of the attached drives upon a reboot. Rarely, the shared drive will not connect automatically.
I haven't had a lot of interaction with support. My only complaint is that an update caused a syncing issue and it took over a month to resolve it. Every other time I have used support, it has been for a training/configuration question and was not time sensitive. They normally responded to those requests within 24 hours.
It's been a while since I checked the associated PowerShell module, but I would like to see an extensive set of cmdlets that could allow for easier automation as well as status management.
Updates seem to be non-existent. The software is stable, but I do not get any updates or emails about new releases.
It is difficult to control all of the hardware components. I went through a couple of different NICs before I found the one that would work with my hardware and server OS, and yield the necessary throughput for the StarWind portion.
Very few aspects of this solution need improvement. Sometimes documentation on their site can be out of date, and it is always good to check with support to make sure that whatever you are looking at is current. The good news here is that this is very easy to accomplish. If we stay with the Virtual SAN instead of the managed appliances it would be great to have more automated tooling around managing the iSCSI connections in Windows, which can be a bit confusing at first.