The initial learning curve is low. I had a working configuration building fairly complex proprietary Internet servers within a couple of months, well before the rest of our server team was ready for production builds.
The developers are very quick to respond to reported issues and to offer advice for dealing with them (or for correcting something you are not using properly). The couple of times I had to contact them were actually very pleasant.
The relationship between the state files and the actual filesystem being served by the master is as simple and elegant as the way *NIXes treat everything as a file.
The execution capability, both from a shell on the Salt Master and via cmd.script within state files, lets even a novice make things happen the way they want while they learn to use all of the available modules properly. For me, this was a large part of getting up and running fast: it reduced the learning curve tremendously while I got my initial server build framework working. I have been able to keep refining the system in stages since then, and it is easy because of the relationship between the state files and the files they serve.
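As an illustration of that escape hatch, a minimal state file can shell out with cmd.script before you learn the equivalent native modules. This is only a sketch; the file names and paths here are hypothetical:

```yaml
# /srv/salt/bootstrap.sls -- hypothetical state file
# Fetches a script from the master's file server and runs it on the minion.
run_bootstrap_script:
  cmd.script:
    - source: salt://scripts/bootstrap.sh
    - cwd: /tmp
    - creates: /etc/bootstrap.done   # skip the script if this file already exists
```

Applied with `salt 'minion-id' state.apply bootstrap`, this behaves like running the script by hand, but with the idempotence guard that the `creates` argument provides.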
Improvements to My Organization
We have developed a complete, multi-tiered, stable build system for our Internet servers, with SaltStack as its base. It is stable and easy to modify as our needs grow and change.
Room for Improvement
We currently use the Salt Cloud module for integration with Amazon Web Services, but I would like to see deeper AWS integration; specifically, the ability to reliably control an ever-expanding and contracting cloud of EC2 instances in a sane fashion.
SaltStack has many community-maintained modules available. One of them is called EC2 Autoscale Reactor, and its function (alongside the Salt Cloud module) is to control an autoscaling group's instances as they are added and removed. I found this module difficult to configure and unreliable at gaining and maintaining control of new instances as the autoscaling group created them. In fact, the developers themselves labeled it "experimental." I would like to be able to reliably control all instances in an expanding and contracting autoscaling group without manual intervention.
For the record, our cloud has moved away from needing this. We use SaltStack and Salt Cloud strictly as a build management system, and our Internet servers are now strictly "hands-off," except for developer instances. I still want this feature because the ability to manage a dynamic cloud of Internet servers would add a lot of power to SaltStack, and to my workflow.
Use of Solution
I have been using it for 1.5 - 2 years.
I mentioned the initial learning curve elsewhere in this review. Of course I encountered issues when deploying SaltStack. I had never used an infrastructure management system before, so the concepts were a bit foreign. I filed a ticket or two as I learned to get the system running. I also found that the repositories for different Linux distributions sometimes served different versions, so I began building a specific Git revision of SaltStack on all systems.
The only stability issue I encountered in almost two years of use was a version mismatch between the SaltStack packages served to an Ubuntu Salt Master and to Amazon Linux minions. I have since migrated everything to Amazon Linux instances, always building the same Git revision on every instance, and have not seen any instability in the SaltStack system since.
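One way to guarantee the identical-version setup described above is to install from a pinned Git reference on every node using the official salt-bootstrap script. This is a hedged sketch; the tag shown is illustrative, not the revision we actually pinned:

```shell
# Hypothetical: pin every node to the same SaltStack Git tag
# via the official salt-bootstrap installer (tag is illustrative).
curl -L https://bootstrap.saltstack.com -o install_salt.sh
sudo sh install_salt.sh git v2016.3.3
```

Because every master and minion builds from the same revision, the distribution repositories' packaging lag no longer matters.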
I have encountered no scalability issues with SaltStack. Admittedly, I haven't stretched the system very far, but because it supports multiple masters, Syndic, and minions acting as "runners", the scalability and high availability look to be amazing.
Customer Service
A+ for the little time I have spent dealing with support. They were quick to respond and the technical expertise was fantastic.
Technical Support
A+ because the developers are directly involved in the support.
SaltStack was my first choice because it is open source and because reviews consistently described it as a good choice with a low learning curve.
The hardest parts of the initial setup for me were learning some of the intricacies of YAML and Jinja, and figuring out the moving parts on the master so I could get the system to reliably create the minions I wanted. Later, configuring Salt Cloud was a bit tough because of the configuration files required to work with resources on Amazon Web Services. None of these issues were "showstoppers", though, as the online documentation and the configuration examples from other users are excellent.
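The YAML/Jinja intricacies mentioned above mostly come down to rendering order: Jinja is evaluated first, and whatever it emits must still be valid YAML. A small sketch of the pattern (the grain lookup and package names are standard Salt idioms, but the state itself is illustrative):

```yaml
# Hypothetical state showing Jinja templating inside YAML.
# The Jinja block is rendered before the YAML is parsed.
{% set pkg = 'httpd' if grains['os_family'] == 'RedHat' else 'apache2' %}

install_web_server:
  pkg.installed:
    - name: {{ pkg }}
```

Indentation mistakes inside the Jinja output are the classic gotcha: the template can render fine and still produce YAML that fails to parse.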
An in-house team implemented it.
The only ROI calculation I can offer is the countless hours I have NOT spent configuring and deploying servers. I now issue a few commands on the Salt Master, acting as my build server, and the servers are built, Amazon Machine Images are created, and they are blue-green deployed. All I have to do is check the various stages for completion, occasionally check build logs for errors, and make corrections. That leaves me a lot more time to focus on the rest of DevOps.
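The build flow described above boils down to a handful of commands on the master. This is only a sketch of the shape of such a sequence; the profile name, minion ID, and AMI-baking script are all hypothetical:

```shell
# Hypothetical build sequence run on the Salt Master.
salt-cloud -p aws_web_profile web-01      # launch an EC2 instance from a Salt Cloud profile
salt 'web-01' state.apply                 # apply the configured states to the new minion
salt 'web-01' cmd.run '/usr/local/bin/create-ami.sh'   # hypothetical script that bakes the AMI
```

Everything after these commands (image creation, blue-green cutover) runs from the states and scripts themselves, which is where the time savings come from.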
Pricing, Setup Cost and Licensing
As a small start-up, we have not gone to a licensed model yet.
Other Solutions Considered
The only evaluation I did was to spend lots of time reading reviews and asking questions of people I know who are already using configuration management and execution tools. SaltStack was my first choice.
I learned SaltStack through trial and error, consulting the online documentation as needed. If you decide to use SaltStack, buy the O'Reilly book Salt Essentials first. It is not very big, but it explains the concepts required to get a working system very well. I think if I had gotten the book first, I would have cut my initial learning time in half.
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Sep 20 2016