
Automic Workload Automation Alternatives and Competitors


Read reviews of Automic Workload Automation alternatives and competitors

Data Platforms Operations Lead Managed Hosting at a marketing services firm with 1,001-5,000 employees
Real User
Top 5
Dashboards enable tier-one people to monitor multiple jobs and alert when things fail, helping our reliability and our management of SLAs

Pros and Cons

  • "Tidal helps administrators and users to see the information that is relevant to them in that single pane of glass. They can see jobs running, they can see job history, and they can see job progression. If you look at alternatives like Airflow and clouds, you'd have to design your own UI to monitor the progress of the different jobs that you've created in Airflow. So Tidal is huge for us."
  • "One area for improvement is the command-line interface and the API to bulk-load jobs. It's a little bit kludgy, but we still manage without it. They're working on it and it's getting better all the time. In addition, the documentation for their API for creating jobs needs to be updated. It's a bit of a learning curve."

What is our primary use case?

Our use of Tidal is mostly file-event driven. We use it to manage our ingestion, processing, and loading of data. Tidal has a hook and it runs ETL for us. It runs jobs and SQL on some of our database appliances, like IIAS (the new version of Netezza) and Teradata.

We have a file gateway that receives a file and drops it in a location. That file event picks it up and drops it over to the ETL tool. The ETL tool will run and aggregate a number of source files and turn it into a properly formatted input file. That file then goes through data hygiene and data analysis. Then it goes through a matching process. It is then put back out and runs an ETL process to stick it into a SQL database. And then there are a number of jobs that are run in the SQL database to manipulate that file.
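
To make the shape of that chain concrete, here is a minimal sketch, in plain Python, of a file-event-driven pipeline in which each stage runs only after the previous one succeeds. This is not Tidal; the script names, paths, and polling approach are all hypothetical:

```python
# Illustrative sketch of a file-event-driven job chain in the spirit of the
# pipeline described above. This is plain Python, not Tidal; every script
# name and path here is hypothetical.
import subprocess
import time
from pathlib import Path

LANDING = Path("/data/landing")      # where the file gateway drops files
STAGES = [
    "etl_aggregate.sh",              # aggregate source files into one input file
    "run_hygiene.sh",                # data hygiene and data analysis
    "run_matching.sh",               # matching process
    "etl_load_sql.sh",               # ETL the result into the SQL database
]

def run_chain(trigger_file: Path) -> None:
    """Run each stage in order; stop (and alert) on the first failure."""
    for stage in STAGES:
        result = subprocess.run([stage, str(trigger_file)])
        if result.returncode != 0:
            print(f"ALERT: {stage} failed for {trigger_file.name}")
            return

while True:                          # poor man's file event: poll the landing zone
    for f in LANDING.glob("*.csv"):
        run_chain(f)
        f.rename(f.with_suffix(".done"))  # don't pick the same file up twice
    time.sleep(30)
```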

We don't have a lot of calendared events or scheduled windows.

We have a central location for Tidal in our data center, and then we have client-hosted solutions where we run smaller instances of Tidal, and those are in the cloud. We use AWS, Azure, and GCP.

How has it helped my organization?

It reduces our administrative costs. Even though people are in a DevOps model, we can create dashboards for tier-one people to monitor multiple jobs and then alert or call when things fail. It helps us with reliability and with managing SLAs.

It has also helped to reduce weekend and overtime hours because a single person can manage multiple jobs. If we didn't have the single pane of glass and that visibility, people would have to manually look at logs to determine the progress of a job. So it reduces headcount. But when you run 24/7, 365 days a year, you still have people working weekends.

We run 70,000 Tidal jobs a day. It would take a mountain of people months to run that many jobs manually.

What is most valuable?

What we find most useful from the operations side is that it provides a single pane of glass for managing that workstream. It also alerts us on failed jobs, so it's our monitoring and management tool for those workstreams. 

Tidal helps administrators and users to see the information that is relevant to them in that single pane of glass. They can see jobs running, they can see job history, and they can see job progression. If you look at alternatives like Airflow and clouds, you'd have to design your own UI to monitor the progress of the different jobs that you've created in Airflow. So Tidal is huge for us.
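
The point about Airflow is that job definitions, and any monitoring beyond its stock UI, live in code. For contrast, a minimal Apache Airflow 2.x DAG looks like the following; this assumes Airflow is installed, and the task names are made up:

```python
# Minimal Apache Airflow 2.x DAG, for contrast with Tidal's single pane of
# glass: in Airflow the jobs and their ordering are code, and monitoring
# beyond the stock UI is yours to build. Task names are made up.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="etl_chain",
    start_date=datetime(2021, 1, 1),
    schedule_interval=None,   # triggered externally, e.g. by a file sensor
    catchup=False,
) as dag:
    ingest = BashOperator(task_id="ingest", bash_command="echo ingest")
    hygiene = BashOperator(task_id="hygiene", bash_command="echo hygiene")
    load = BashOperator(task_id="load", bash_command="echo load")

    ingest >> hygiene >> load  # each task runs only after the previous succeeds
```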

Most of our stuff is private clouds. We haven't had an issue with its support for private cloud or its migration to the cloud. In our scenarios, we run the masters here and we reach out to agents that are running in the cloud. We also use it to kick off command-line utilities for loading data into BLOB storage and S3 buckets. We use the SFTP utility to move files around.
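
Those cloud-load steps are typically just small scripts that a scheduler job kicks off. As an illustration, an S3 upload step might look like this; boto3's `upload_file` is a real AWS SDK call, but the bucket and paths are placeholders:

```python
# Hypothetical load step a scheduler job might kick off: push a prepared
# output file to an S3 bucket. boto3's upload_file is the real AWS SDK call;
# the bucket name and paths are placeholders.
import sys

import boto3

def upload(local_path: str, bucket: str, key: str) -> None:
    s3 = boto3.client("s3")
    s3.upload_file(local_path, bucket, key)  # handles multipart upload for you

if __name__ == "__main__":
    # e.g. python load_s3.py /data/out/matched.csv my-bucket loads/matched.csv
    upload(sys.argv[1], sys.argv[2], sys.argv[3])
```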

What needs improvement?

One area for improvement is the command-line interface and the API to bulk-load jobs. It's a little bit kludgy, but we still manage without it. They're working on it and it's getting better all the time. In addition, the documentation for their API for creating jobs needs to be updated. It has a bit of a learning curve.
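
Bulk-loading job definitions through a scheduler's REST API is a generic pattern. Purely as a sketch (the endpoint, token, and payload fields below are invented placeholders, not Tidal's actual API; the vendor's documentation defines the real contract), it tends to look something like this:

```python
# Sketch of bulk-loading job definitions through a scheduler's REST API.
# The URL, token, and payload fields below are invented placeholders, NOT
# Tidal's actual API; the vendor's API documentation defines the real contract.
import json

import requests

API = "https://scheduler.example.com/api/jobs"  # hypothetical endpoint
HEADERS = {"Authorization": "Bearer <token>", "Content-Type": "application/json"}

with open("jobs.json") as fh:   # one job definition per list entry
    jobs = json.load(fh)

for job in jobs:
    resp = requests.post(API, headers=HEADERS, json=job, timeout=30)
    if resp.ok:
        print(f"created {job['name']}")
    else:
        print(f"FAILED {job['name']}: {resp.status_code} {resp.text}")
```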

We also wish there was a search functionality for assigning actions to events, and users to workgroups. 

Finally, the S3 data mover jobs are still a little buggy.

For how long have I used the solution?

I've been using Tidal Workload Automation for about 14 to 15 years.

What do I think about the stability of the solution?

After the 6.2 release, the stability became awesome. With 6.6.1 it was a little bit difficult, but everything after that has been solid.

What do I think about the scalability of the solution?

Scaling is easy. You could run these in VMs. We happen to have physical boxes.

We haven't scaled it out, such as by creating a remote master. In instances where we thought we might have to kick off jobs from our Maryland data center, or jobs in our Denver data center, over MPLS, we expected issues, but we didn't have any. We were fine. We've been able to run things centrally.

The databases scale the way SQL scales, either by giving it more memory or more CPU.

As we have brought on clients we've grown over the years. We have a tendency to overbuy for the Client Managers. Our Client Managers are coming up on four years now. In 2021 we'll likely do a tech refresh. We'll stand it up with another version of Tidal and we'll do the migration onto the new platform. At that time we'll look at scaling up the boxes a little bit. You can put a lot more workload, a lot more Tidal jobs, on these without having to increase CPU or memory.

How are customer service and technical support?

Their tech support is awesome. We've had Tidal for a long time. We had Tidal when it was Tidal, and then when it was purchased by Cisco. During the time it was owned by Cisco, support was lacking. But now that it's part of STA Group, it's back to being awesome.

Which solution did I use previously and why did I switch?

We were using a home-grown solution. It was a cron job manager. It didn't do file events very well; it had to monitor system logs. It was tough to schedule tasks. It was purpose-built, so it didn't have a SQL adapter. It didn't have the ability to run on Netezza and things like that.

We switched because programmatically creating the features that come out-of-the-box with Tidal would have been too costly. It would have taken too much time.

How was the initial setup?

We've retooled our environment three times since we first installed it. Our last one was easy, a piece of cake. The ones prior to that were not so good. 

When Tidal was sold to Cisco, and Cisco introduced the concept of a Client Manager, a type of web interface, there was a time when going from one version to another was not good. Now that Tidal is back with the STA Group, our upgrades are much easier.

With our last upgrade, we stood up a whole other set of servers — our servers were old — as well as a database. From the time we got the servers installed, loaded Tidal, and did our initial database export, so we could do testing, it took two to three weeks. It was a piece of cake. And then we did extensive testing.

In terms of the solution's learning curve, from an operations standpoint, teaching people how to search and manage jobs, and start and stop them, put jobs on hold and kill them, we can get someone up to speed in less than a week. For developers, it's a little bit more lengthy. There have been several instances where we have a Tidal developer, a subject matter expert — we've only had one or two of them — who has been able to train multiple people and make them serviceable. We've been doing it for 14 years, so we don't use Tidal training. We've created our own training documentation to get them up to speed for how we use Tidal. We can get them up to speed very quickly. I know people who have joined the company and who are writing and creating Tidal jobs two weeks or three weeks later.

What was our ROI?

For ROI we'd have to figure out how many man-hours we're saving with Tidal versus not having it or having one of the other automation tools. We've grown up with it. I can't imagine being without it. Back in 2016, when we looked at possibly switching over to another solution, there wasn't a clear path to migrate to any of the other tools. We literally run our whole enterprise on this, so if Tidal goes down, the world stops.

We feel we're getting a pretty good deal with Tidal. It's supporting $600 to $700 million in revenue.

What's my experience with pricing, setup cost, and licensing?

The licensing model's flexibility is awesome. The way it's licensed for us is per master and then per agent. We have an enterprise agreement, so we have unlimited agents, and we have it on 500 devices.

I don't know how it could be easier to budget for Tidal, given that there are no costs for upgrades and other enhancements. There are increases over time, but unless you add functionality, such as buying other adapters, it's very easy to manage costs for maintenance and the like.

In terms of the hardware that we purchased (VMs, storage, networking, and the VMs' SQL licensing), it was a little bit below $200,000. That doesn't include the Tidal licensing.

The hardware list includes:

  • a SQL cluster
  • a utility server that we use to migrate jobs from dev to prod
  • two masters in dev
  • a fault manager in both dev and prod
  • three Client Managers in dev and two Client Managers in prod
  • for each of those Client Managers we have a database
  • 11 VMs
  • 12 physical boxes.

So we've got a pretty big environment.

Which other solutions did I evaluate?

There have been a couple of times that we have looked at competitors, especially when we saw that Cisco wasn't really investing time or money into it. It wasn't clear to us if Cisco was going to continue to invest in Tidal. So we went out and looked at the market and did evaluations. 

We looked at Automic, or UC4 as it was then known. We looked at BMC Control-M. Stonebranch was actually interesting, back in 2016.

What it came down to was that Automic was tough because it was changing hands on a regular basis. Stonebranch was more in our price range, but Tidal's price for the way that we use it was cheaper. When we started looking at what it would take to migrate from one to the other, there was no ROI.

The way we evaluated things was to take our use cases, rank them from one to ten, and then look at costs. All of Automic, Stonebranch, and BMC would do what we wanted them to do. I'm sure, if we had dug a little deeper, we'd have found the little idiosyncrasies between them. But the cost for those, plus the cost of migration, was just too much.

We started seeing how Cisco was propping it up a little bit more, right before they sold it to STA. And when STA bought it, they assured us that they would start making improvements. We stopped our analysis of other solutions there.

What other advice do I have?

Tidal's drill-down functionality is one of those things where you get out of it what you put into it. If you program it as fire-and-forget, then there isn't much to drill down into. If you put in result codes and things like that, or if you use the adapter instead of using the agent to kick off the SSRS package in SQL, then you can drill down.
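
One way to read the result-codes point: a job that only reports pass/fail gives the scheduler nothing to drill into. Here is a small sketch of a job script that returns distinct result codes; the codes and their meanings are invented for illustration:

```python
# Sketch: a job script that reports granular result codes instead of a bare
# pass/fail, so the scheduler's drill-down view has something to show.
# The codes and their meanings are invented for illustration.
import random
import sys
from typing import Optional

EXIT_OK = 0
EXIT_PARTIAL_LOAD = 4   # loaded, but some rows were rejected
EXIT_DB_ERROR = 5       # database unreachable or load failed outright

def run_load() -> Optional[int]:
    """Stand-in for the real load step; returns rejected-row count, None on error."""
    return random.choice([0, 12, None])

def main() -> int:
    rejected = run_load()
    if rejected is None:
        return EXIT_DB_ERROR
    return EXIT_PARTIAL_LOAD if rejected > 0 else EXIT_OK

if __name__ == "__main__":
    sys.exit(main())    # the scheduler maps this code to a drill-down status
```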

We have about 100 users using Tidal in our organization. They are anywhere from developers to operations people to administrators. There are only a couple of administrators. There's a bunch of operators because we use this to run 24/7, 365 for 20 or 30 customers. For each of them there may be a couple of operations people and a couple of developers. As for maintenance, we patch our boxes, our masters, our Client Managers, and our databases every month, and it takes one person.

Which deployment model are you using for this solution?

On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Manager at a financial services firm with 1,001-5,000 employees
Real User
Top 20
Enabled us to significantly reduce manual touches in our system, but testing automations is difficult

Pros and Cons

  • "The core system is the most valuable part: being able to view the processes that we've never really been able to view as a whole before. That is super-helpful, as is being alerted when issues arise."
  • "The process of getting automations done and the process of testing them is a little complicated."

What is our primary use case?

We're using it to automate our nightly processing work, such as transfers and the actual integrations into our core banking system. We do a lot of file transfers and complicated job processing. We have a lot of processes in which two jobs have to run before other jobs can run and, based on the output of one job, the workflow may need to do one thing or another. OpCon allows us to build complicated workflows that handle all of that.
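
To picture that dependency-and-branch pattern, here is a plain-Python sketch rather than OpCon's actual workflow definition language; the job script names are made up:

```python
# Illustration of the dependency-and-branch pattern described above, in plain
# Python rather than OpCon's actual workflow definitions. Script names are
# made up.
import subprocess

def run(cmd: str) -> int:
    return subprocess.run([cmd]).returncode

# Two jobs must both succeed before anything downstream runs.
if run("receive_transfers.sh") != 0 or run("stage_core_files.sh") != 0:
    raise SystemExit("prerequisite job failed; halting the workflow")

# Branch on the output of the next job: its exit code picks the follow-on path.
if run("post_to_core.sh") == 0:
    run("reconcile.sh")          # clean post: reconcile as usual
else:
    run("exception_report.sh")   # exceptions: produce a report for review
```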

It performs flawlessly. We were able to go live the first night with zero problems.

How has it helped my organization?

We're able to complete our nightly processing about 10 percent faster. We've also been able to eliminate manual touches on our systems and we're down to five actual touches to make nightly processing go. The ideal is for us to become a "lights-out" organization at nighttime. We're really close to that. Before OpCon, there was a team of five that was doing nightly processing, almost through the night. It's always difficult when you're changing people's processes and you're changing their work, but they've been able to handle the differences in their jobs. Overall, the reception has been positive.

We've automated hundreds of processes since deploying OpCon, and we're up to 78 percent automation of nightly processing. Being able to automate the nightly processing is super-useful; it has been streamlined through automation, and it's easier now.

For daily processing, we haven't seen results yet when it comes to freeing up employees to do more meaningful work, but eventually we will. It's just a matter of getting through the process. Once we get this down we'll be able to free up more people to do more work in different places.

OpCon has also reduced daily processing times; not as much as I would have expected, but that's because we haven't really optimized anything.

What is most valuable?

The core system is the most valuable part: being able to view the processes that we've never really been able to view as a whole before. That is super-helpful, as is being alerted when issues arise.

For example, we've had problems with a vendor that has not been providing files in a timely fashion. OpCon actually alerts our teams that this file has not arrived yet and that allows us to get on the phone with the vendor, make sure we get the file, and get all of that working so that we have accurate records to start with the next morning.
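
That watch-for-a-file-with-a-deadline behavior is easy to picture. A generic sketch, with an invented path, cutoff time, and alert mechanism:

```python
# Sketch of the "alert if the vendor file hasn't arrived by the cutoff" check
# described above. The path, deadline, and alert mechanism are all invented.
from datetime import datetime, time
from pathlib import Path

VENDOR_FILE = Path("/data/inbound/vendor_positions.csv")  # hypothetical path
DEADLINE = time(hour=23, minute=30)                       # hypothetical cutoff

def check_arrival(now: datetime) -> None:
    if VENDOR_FILE.exists():
        print("file arrived; downstream jobs may proceed")
    elif now.time() >= DEADLINE:
        # In a real setup this would page the on-call team or open a ticket.
        print("ALERT: vendor file missing past deadline; call the vendor")
    else:
        print("still waiting; deadline not yet reached")

check_arrival(datetime.now())
```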

We use SMA as a managed-service provider to actually build automated processes. It makes it easy for us to build work orders for them to execute. That is useful.

What needs improvement?

The process of getting automations done and the process of testing them is a little complicated. Anything with daily processing and nightly processing, which is something that's very critical for our organization, is always going to be tough. The testing of it can be really difficult.

The navigation could use some work to be able to get to the flow charts. Coming from the high level, all I want to see are the flow charts and where we are at with the workflow. Whenever I go in there, I have to remember how to do it again. It's not intuitive, at least for me.

Also, we could not use the FTP agent it has. Their protocol and that piece have been difficult to work with; it has definitely been a little bit weird. They did figure out a way to get to ServiceNow, but having some plug-and-play integrations to different ticketing systems would be good. They've been responsive: they did put together that ServiceNow integration, but they had to build it.
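
SMA built that ServiceNow integration for them. For orientation only, opening an incident through ServiceNow's documented Table API looks roughly like the following; the instance URL, credentials, and field values are placeholders:

```python
# Rough sketch of opening a ServiceNow incident when a job fails. The Table
# API endpoint shape is ServiceNow's documented REST API; the instance URL,
# credentials, and field values are placeholders.
import requests

INSTANCE = "https://yourinstance.service-now.com"  # placeholder instance
AUTH = ("api_user", "api_password")                # placeholder credentials

def open_incident(job_name: str, detail: str) -> str:
    resp = requests.post(
        f"{INSTANCE}/api/now/table/incident",
        auth=AUTH,
        headers={"Content-Type": "application/json", "Accept": "application/json"},
        json={
            "short_description": f"Job failed: {job_name}",
            "description": detail,
            "urgency": "2",
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["result"]["sys_id"]  # sys_id of the created incident

# e.g. open_incident("nightly_gl_post", "exit code 5: database unreachable")
```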

For how long have I used the solution?

We started the OpCon project in January and it went live about five months ago in June.

What do I think about the stability of the solution?

OpCon has been incredibly stable. We haven't had any issues with the core OpCon system. It has not died.

What do I think about the scalability of the solution?

We haven't dealt with scalability yet, but I think it would scale relatively well, beyond what we have.

We're continuing our automation process. Any sort of data processing will go through this system. Once we're done with that, then we get to look at anything else that could work with it. That's our plan.

How are customer service and technical support?

Tech support is amazingly responsive. We've had multiple times where they've responded within 20 minutes when we've had an issue with a workflow at night. I've been happy with that.

Which solution did I use previously and why did I switch?

I've used many automation tools in my career and the time to implement OpCon, compared to some of those other tools, is about the same. This is a specialized job-automation tool, instead of a generic automation tool. The way it works is a little bit more job-like than some of the other automation tools. That's really the difference between OpCon and a full-blown orchestrator-type of tool, like Automation Anywhere. It's important to keep those separate and use OpCon for what it's good for and other tools when you need things to be a little bit more diverse.

Other job-automation tools are not specific to credit unions and financials. There are some hooks that OpCon has that other tools don't, which is why credit unions go to them.

Tidal Workload Automation sits in between OpCon and full orchestrator tools. It's not as fully functional as some of those big automation toolsets, but it does some things very well.

The total cost of ownership of OpCon is quite comparable to other automation tools I've used. For a financial institution, in particular, OpCon makes a lot of sense. We're replacing another tool, Automic, that would have been comparable. There are certain things you can't do in Automic, or it's costly to do.

How was the initial setup?

The initial setup is complex. The first pieces of it, while they weren't really easy, went off well. When we got into the FTP processing, it got a little bit more bumpy. The deployment, overall, was an iterative process. We started in January and went live with the first step in June.

It was pretty easy to put our first processes together. It was just a matter of making sure they were fully tested and that we had the right test environment to make it work.

We have about five people who are working on it right now, since our deployment is ongoing.

I would like to have seen a little bit more of a plan at the beginning. SMA should have been guiding us through the process of automating these things in the most efficient way possible.

What was our ROI?

It's going to reduce the time that data processing takes, certainly. We're also going to see a quality improvement, meaning fewer human errors. I expect we'll see a meaningful difference in another year or so.

What's my experience with pricing, setup cost, and licensing?

It's not cheap. It costs money to put it in, it's a subscription-based licensing system, and the managed service costs money on top of that.

Which other solutions did I evaluate?

We looked into a tool called Jantz, which is a competitor. They're great as well. But this made the most sense financially, considering our size.

What other advice do I have?

The biggest lesson I've learned from using it is to plan really well. Line up your resources and don't be afraid to do a big cut-over to it. It's a stable system. But definitely be cognizant of the fact that there are agents involved, and whenever you have agents involved you need to make sure they continue to be stable.

Consider how well you understand the processes that you're looking to automate. This is going to work the best if you have more traditional types of automations that you need to do, like batches. Make sure that you've already detailed what those processes do, because the more detail you have, the quicker you can actually get to automating the work. And make sure you have complete buy-in by everybody in the organization.

When people are working with the SMA product teams it's really important for both sides to be really clear on what the testing scenarios are like. You need to make sure you're really good at writing your work orders in an accurate fashion and recognize that, as a credit union, or any sort of enterprise, you've got things that you need to do as well to make it work. Any time you deal with agents that are sitting on multiple systems it's going to be problematic because you're always going to have agents that fall apart or something happens to them. Keeping on top of that type of thing is important in order to be successful.
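
Keeping on top of agent health can be as simple as a periodic liveness sweep. A generic sketch follows; the host names and port are invented, and real schedulers expose agent status through their own consoles and APIs:

```python
# Generic sketch of an agent health sweep: flag any agent host that no longer
# accepts connections. Host names and the port are invented; real schedulers
# expose agent status through their own consoles and APIs.
import socket

AGENT_HOSTS = ["app01", "app02", "db01"]  # hypothetical agent machines
AGENT_PORT = 7777                         # hypothetical agent listener port

def agent_alive(host: str, port: int, timeout: float = 3.0) -> bool:
    """Cheap liveness probe: can we open a TCP connection to the agent?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host in AGENT_HOSTS:
    status = "up" if agent_alive(host, AGENT_PORT) else "DOWN, investigate"
    print(f"{host}: {status}")
```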

It's not easy to do. I've never seen these types of things be easy. You need to put a lot of effort into it. It requires working a lot with the teams who have some of these processes, who need these types of files, to make sure that everything you automate works and that the output works for them. It definitely isn't simple to implement.

In our organization, there are about 200 people who specifically work with these types of things.

I would rate OpCon at seven out of 10. It's taken a little bit longer than we thought to get it done, but the team on their side has been great.

Which deployment model are you using for this solution?

On-premises
Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.