
ActiveBatch Workload Automation Overview

ActiveBatch Workload Automation is the #3 ranked solution among top Workload Automation tools, #4 among top Managed File Transfer (MFT) tools, and #6 among top Process Automation tools. IT Central Station users give ActiveBatch Workload Automation an average rating of 10 out of 10. ActiveBatch Workload Automation is most commonly compared to Control-M. It is popular among the small-business segment, which accounts for 55% of users researching this solution on IT Central Station. The top industry researching this solution is computer software, accounting for 31% of all views.
What is ActiveBatch Workload Automation?

Orchestrate your entire tech stack with ActiveBatch Workload Automation and Enterprise Job Scheduling. Build and centralize end-to-end workflows under a single pane of glass. Seamlessly manage systems, applications, and services across your organization. Eliminate manual workflows with ActiveBatch so you can focus on higher value activities that drive your company forward.

Limitless Endpoints: Use native integrations and our low-code REST API adapter to connect to any server, any application, any service.

Proactive Support Model: 24/7 US-based support and predictive diagnostics.

Low Code Drag-and-Drop GUI: Easily build reliable, customizable, end-to-end processes.

ActiveBatch Workload Automation was previously known as ActiveBatch.

ActiveBatch Workload Automation Buyer's Guide

Download the ActiveBatch Workload Automation Buyer's Guide including reviews and more. Updated: December 2021

ActiveBatch Workload Automation Customers

Informatica, D&H, ACES, PrimeSource, Sub-Zero Group, SThree, Lamar Advertising, Subway, Xcel Energy, Ignite Technologies, Whataburger, Jyske Bank, Omaha Children's Hospital


Pricing Advice

What users are saying about ActiveBatch Workload Automation pricing:
  • "ActiveBatch is currently redesigning themselves. In the past, they were a low cost solution for automation. They had a nice tool that was very inexpensive. With their five-year plan, they will be more enhancement-driven, so they're trying to improve their software, customer service, and the way that their customers get information from them. In doing that, they're raising the price of their base system. They changed from one pricing model to another, which has caused some friction between ActiveBatch and us. We're working through that right now with them. That's one of the reasons why we're why we were evaluating other software packages."
  • "The pricing was fair. There are additional costs for the plugins. We have the standard licensing fees for different pieces, then we have the plugins which were add-ons. However, we expected that."
  • "If you compare ActiveBatch licensing to Control-M, you're looking at $50,000 as opposed to millions."
  • "The price was fairly in line with other automation tools. I don't think it's exorbitantly expensive, relatively speaking."
  • "I don't think we've ever had a problem with the pricing or licensing. Even the maintenance fees are very much in line. They are not excessive. I think for the support that you get, you get a good value for your money. It's the best value on the market."
  • "It allows for lower operational overhead."

ActiveBatch Workload Automation Reviews

Richard Black
Systems Architect at an insurance company with 201-500 employees
Real User
Top 20
Everything runs automatically from start to finish; we don't have to worry about somebody clicking the wrong button

Pros and Cons

  • "Since we are no longer waiting for an operator to see that a job is finished, we have changed our daily cycle from running in eight hours down to about five. We had a third shift-operator retire and that position was never refilled."
  • "There are some issues with this version and finding the jobs that it ran. If you're looking at 1,000 different jobs, it shows based on the execution time, not necessarily the run time. So, if there was a constraint waiting, you may be looking for it in the wrong time frame. Plus, with thousands of jobs showing up and the way it pages output jobs, sometimes you end up with multiple pages on the screen, then you have to go through to find the specific job you're looking for. On the opposite side, you can limit the daily activity screen to show only jobs that failed or jobs currently running, which will shrink that back down. However, we have operators who are looking at the whole nightly cycle to make sure everything is there and make sure nothing got blocked or was waiting. Sometimes, they have a hard time finding every item within the list."

What is our primary use case?

We are using ActiveBatch to automate as many of our processes as we can, limiting the amount of time operators are running recurring jobs. Included in that is about 99.5 percent of our nightly cycles. We call a mixture of executables: SSIS jobs, SQL queries, and PowerShell scripts. We also call processes in both PeopleSoft and another third-party package software.

How has it helped my organization?

As an IT department, we do solutions for the entire business and control everything. Our nightly cycles affect everybody in the company, so we do have some jobs that we run in one department, then create output which goes to another department. Based on email distribution lists, we can let anybody in the company know when things run.

We don't really use ActiveBatch for sharing knowledge. It's more for sharing output. We have some processes that run SSRS reports that distribute links to many people across the organization all at once, so they all get the same data fed to them simultaneously.

The most complex process that we run is our nightly cycle, which is made up of about 230 individual jobs triggered based on other jobs completing or files showing up in the system. It integrates a mixture of executables, a third-party policy system called LifePRO, and PeopleSoft. With all the handshaking back and forth between the systems, it allows an operator to start a job at around eight o'clock at night. Then, at around two in the morning, the last job finishes with minimal interaction from the operator, who mostly sits there watching to see whether a job fails.

The operator used to run a job for our nightly cycles and go off doing something, then they would come back to see if the job was finished. If it was, they would start the next job. With the operator's intervention, this entire process would run for around eight hours. We have managed to streamline that down, because we're no longer waiting for an operator to look for a job completion, to run in about five hours. This allows us to have one nighttime operator instead of two, so we have cut the number of staff at night in half.

Additionally, daytime jobs are what we are starting to focus on now to allow our daytime operators to basically sit there and watch different jobs run. We'll be retraining both the nighttime and daytime operators to do different jobs. For example, with our nighttime operator, while the job is running in the background, she doesn't have to do anything anymore. She has now been tasked with other systems, like upgrading servers, and doing other things that cannot be done when the majority of our staff are in the building. So, not only have we removed half of our nighttime staff, we have repurposed our one person who's there to do both jobs.

Internally, we ran a number of executables. Our operators used to run these jobs all manually and press buttons within our console. Now, all those processes are automated. The operator doesn't do anything. We have a number of reports that just get generated automatically throughout the course of the night or based on their own dependencies. The operators used to have to wait for a specific job to finish before they could do all these pieces. Currently, those just automatically trigger on their own. In addition to that, with our financial system, PeopleSoft, we can call any job within it automatically, without our operators even opening up the PeopleSoft console. In our LifePRO policy system, we have about 150 jobs that we can call automatically as well, including some daytime jobs that run processes every five minutes. Instead of having the operator sit there like Homer Simpson pressing a button, these jobs trigger automatically, ensuring that all the data is kept updated in real-time.

It is a system that calls other jobs, so it will return error codes from those other systems. If it's a job that is truly an ActiveBatch job doing file manipulation, it will return its own error codes. The logs associated with those error codes are usually in-depth enough to let you know exactly what happened. This has prevented problems from becoming fires. We have an email that goes out every day with a list of all the jobs that failed, to ensure that we hit every single one and can take care of any issues.

We have one job that runs every 30 minutes, handling batch input into our system. If one of those runs fails, it keeps the rest of them from working for the rest of the day, so the entire team that supports that process is notified immediately, giving them the full amount of time to rectify the issue before the next run. In the past, when this was done manually, we would have to wait for someone to notice that there was an error and then find the right person to deal with it. Now, within 10 seconds, an email has been sent out saying, "There is an issue. Fix it."

For our nightly cycles, we have some cycles that will run from start to finish without a single error because it is controlling when jobs run. It does a lot of clean up before the system starts. Therefore, it knows where certain files are supposed to be and where they are. So, we don't have to worry about somebody clicking the wrong button; everything runs from start to finish.

What is most valuable?

We can control the runtime of files, based on timing, by a file showing up. They can be controlled by an email being sent into the system. We get error codes back. Therefore, we have one centralized location where we can see how jobs are running. We have the ability to notify end users when jobs are finished or if there are problems with jobs. It's a very robust system, which allows for a lot of different functionality.

The system is very easy to use. In a short amount of time, we trained a couple people who have been able to create jobs on their own. For the two people whom I have trained so far, I spent about an hour or two with them. They were able to start creating minor jobs themselves by looking at existing jobs. We gave them minor jobs to work on. Then, within a couple hours, they were able to create jobs and processes that work correctly.

A lot of our processes are jobs that we know run one job after another, along with a hierarchical system, e.g., once this one job finishes, it triggers these three. Then, as soon as those three are done, it triggers a fifth job. The scheduling of those in that format is very easy to do. 

You can set up automated controls where as soon as one job finishes, then another one kicks off. You can put in constraints where a job won't start until other situations are met. It's very easy to use.
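The fan-out/fan-in chain described (one job completing triggers three, and those three together trigger a fifth) is a small dependency graph. A minimal sketch in Python, using the standard library's graphlib with hypothetical job names, shows how one valid execution order falls out of the constraints:

```python
from graphlib import TopologicalSorter

# Hypothetical job names; each entry maps a job to the jobs it waits on.
DEPENDENCIES = {
    "extract_a": {"nightly_start"},
    "extract_b": {"nightly_start"},
    "extract_c": {"nightly_start"},
    "load_warehouse": {"extract_a", "extract_b", "extract_c"},
}

def run_order(dependencies):
    """Return one valid execution order that respects every constraint."""
    return list(TopologicalSorter(dependencies).static_order())
```

A scheduler like the one described runs each job as soon as its predecessors finish; the linear order above is just the simplest way to see that the constraints are satisfiable.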

The console is very easy and flexible to use. Whenever we have come up with something that we wanted to try in ActiveBatch, we have managed to find a solution. When you're calling an application, you can call it through a batch job or script. You can also call the executable directly or through PowerShell. Depending on how it's running and how the security needs to pass through the system, there are many different ways to get the processes to work.

ActiveBatch provides a central automation hub for scheduling and monitoring, bringing everything together under a single pane of glass. There is a daily activity screen within ActiveBatch that shows you every job currently running. You can look in the past and future. I think you can set it all the way up to seven days in the future. So, if you have jobs scheduled to run on a timeframe, then those will show up. It will show everything that is on hold. You can limit it down to show only the stuff that has run in the last hour, if you are trying to deal with a specific problem. You can set the ranges, to say, "Okay, show me between 5:00 and 8:00 PM." It is very easy to use in that regard.

It handles a lot of different business-critical systems for us. We have applications that our agents use out in the field which trigger other things that run in the office. Those run every five minutes looking for input to make sure that we can keep things running smoothly. Things that we would have needed the operators, or somebody, to run every couple of minutes, we have about a dozen of those running automatically, looking for input to keep things going. It also allows our financial system to integrate with all our other systems without anybody having to do the work. Our whole nightly cycle is automated through this. We just did an inventory, and I think we have about 500 unique jobs that we run through ActiveBatch now. These are things that somebody would have had to run manually in the past.

You can keep a history of run times, so you can start setting up SLAs on job performance. We have one job set up now where, if it takes more than 15 minutes to run, it automatically aborts and sends an email saying, "This job needs to be looked at, as it's running past its run time."

They have done a pretty good job with the operator interface. There are a number of different screens that can be used to see what is going on. We have chosen the daily activity screen because it gives the most complete view of what's going on: what's finished, what's failed, and what's currently running.

The performance on ActiveBatch has been stellar.

What needs improvement?

There are some issues with this version and finding the jobs that it ran. If you're looking at 1,000 different jobs, it shows based on the execution time, not necessarily the run time. So, if there was a constraint waiting, you may be looking for it in the wrong time frame. Plus, with thousands of jobs showing up and the way it pages output jobs, sometimes you end up with multiple pages on the screen, then you have to go through to find the specific job you're looking for. On the opposite side, you can limit the daily activity screen to show only jobs that failed or jobs currently running, which will shrink that back down. However, we have operators who are looking at the whole nightly cycle to make sure everything is there and make sure nothing got blocked or was waiting. Sometimes, they have a hard time finding every item within the list.

Now, it integrates well with our other solutions. There were some issues initially with getting ActiveBatch to work, but once we found a solution that worked, it was easy to replicate. The initial issues were a mixture of the fact that very few people had done this type of work before, and partly the person we had working on it at the time. We're not sure exactly what the issue was. We actually reached out to ActiveBatch who helped us to get this to work. 

It is a very complex application because the code we are trying to connect to was COBOL based and still dealt with INI files. So, we had to trick the system into thinking it was calling the system the exact same way. Once we did, everything worked fine, including getting the error messages back and being able to display them within ActiveBatch.

It was the connection between systems that became complex. Basically, we had to set about a dozen environment variables within a script in ActiveBatch, so that when we called the outside application, all those variables were set and it could understand what it was trying to do. The complexity was in the actual calling of the third-party application, not on the ActiveBatch side.

You have to be careful with automation tools. We had one job where the person who initially programmed it created an infinite loop, so it kept triggering itself. It ran for less than a second, so we couldn't stop it. 

For how long have I used the solution?

The company has been using ActiveBatch for about five or six years. I have been using it for about three years.

What do I think about the stability of the solution?

Stability-wise, there is only one function that we have had trouble with. We haven't even reached out to ActiveBatch about it because we're still trying to figure out what is causing it. There is one DLL within the system that gets the current date but just stops working from time to time. The rest of the system is very stable. On occasion, we have to reboot a server to release some locks, but that's only about once a month.

I maintain all the jobs in production. There is nothing out of the ordinary that needs to be done. It does its own self-cleanup. It also deletes history periodically on its own.

What do I think about the scalability of the solution?

We started with just a few jobs and are right now up to 500 jobs that we run. When adding new things, it allows you to put everything in its own folders, so you can keep track of different parts. You can flag them as part of different systems, if you want. As we have added more things, we have seen no degradation in the performance.

We use it more as an automation tool, so it is just running jobs. In terms of people who go into ActiveBatch to look at it, we have our two daily operators who go in and look at how things are running. We do have some jobs that they go in and trigger, because we're still automating the actual execution of these jobs, but they're all still controlled from ActiveBatch. We have a number of programmers, probably about a dozen, who will go into ActiveBatch. Some will tinker around with creating jobs that they need in our test system. Some will go into production to see how their jobs ran, if they're supporting the system. They can go in and see what the end result was, whether it came back successful, had a warning, or an error. They can look at the logs to see what the problem was, allowing them to fix the process themselves.

Right now, we don't have any end users going into the system directly. We're building them a web interface front-end where they will be able to trigger specific jobs, so they can see the jobs that they can control. We have it set up through the ActiveBatch API so it returns the results to that web interface, showing how the job ran the previous time and when it last ran.

Our nightly cycle is 99.5 percent automated right now. We're finishing up the last few pieces of that. We have started looking at all of our daytime operator jobs. Those are being worked on next. All of our reports sent out to users on a daily basis are all automated within ActiveBatch to be triggered at specific times and sent out. The next piece that we will be working on is giving our programmers the ability to bring up Azure sites as needed, then we will be starting to add in all of our FTP jobs into ActiveBatch as well.

How are customer service and technical support?

In the past, we haven't used their technical support that much. The few times that I have called and asked them questions, they have been very easy to work with and get answers from. They are in the process of changing their whole structure for how they support their clients, along with changing their pricing structure.

They are trying to make the system more user-friendly from the support side, so you can go and look for the information yourself as opposed to trying to call someone.

Which solution did I use previously and why did I switch?

Previously, we have only used some scheduling through Microsoft Schedulers and SSRS schedulers.

How was the initial setup?

I was not involved in the initial setup, though the installation of ActiveBatch was straightforward.

I was involved in the last upgrade we did. Everything was straightforward, and moving the jobs from one version to the next was relatively simple. The initial application that they picked to interface with was one of our more complex ones. That may have been why the person doing the initial programming had an issue, because nobody had done this before with this type of system.

There are a lot of APIs for packages that you can get with ActiveBatch for doing connections. We don't use a lot of their integration tools, though it does integrate with a lot of different ones. The one we do use right now is PeopleSoft. The issues with the integration of PeopleSoft have been more on the PeopleSoft side, not the ActiveBatch side. We had to reconfigure how we had PeopleSoft set up, so it would allow outside applications to communicate with it.

Once we decided to do the installation, I think it was done in the course of a day over a weekend.

What about the implementation team?

We did the installation ourselves. It was done by our systems department. One of my coworkers did all the work. She installed the new system and exported everything out of the old version into the new one. On top of that, we broke one system into two, because we used to have our model and production on one server. In the course of upgrading to version 12, we put our test and production systems on different servers.

What was our ROI?

Since we are no longer waiting for an operator to see that a job is finished, we have changed our daily cycle from running in eight hours down to about five. We had a third-shift operator retire and that position was never refilled.

The person who used to run all these jobs now just watches the system run. She is doing other stuff while she is working. On top of that, with the pandemic, we have managed to be able to allow our second shift operator to run everything remotely from home. They don't even have to be in the building anymore to run our cycles.

The central automation hub for scheduling and monitoring brings everything together under a single pane of glass by streamlining everything:

  1. It takes less time to run everything.
  2. It's less expensive because we no longer have the extra operator running jobs.
  3. There is less chance of an operator clicking the wrong button. We run both a test system and a production system side by side; in the past, an operator might have run a job in the wrong system, and this makes sure that the correct system runs the right jobs.
  4. It automatically will send an output where it needs to go in real-time. We have management reports that used to have to be run by an operator. Now, if management comes in early, the report is there just waiting for them.

What's my experience with pricing, setup cost, and licensing?

Make sure that the pricing is in the contract.

ActiveBatch is currently redesigning themselves. In the past, they were a low cost solution for automation. They had a nice tool that was very inexpensive. With their five-year plan, they will be more enhancement-driven, so they're trying to improve their software, customer service, and the way that their customers get information from them. In doing that, they're raising the price of their base system. They changed from one pricing model to another, which has caused some friction between ActiveBatch and us. We're working through that right now with them. That's one of the reasons why we were evaluating other software packages. For the time being, we are staying with ActiveBatch because we like it the best of the four.

Up until now, if you wanted to do a training class through them or go into some of their deep dives, you needed to pay additionally for that. The new way that they are doing their structured agreements makes that all part of the contract. Now that we will be paying for it, we will be looking at their deep dives a lot more and seeing the stuff that they have done in the past.

Which other solutions did I evaluate?

It is the only automation tool that we're using. We are actually moving items from other automation tools that we have into this, so we have one central location where everything is automated. In the past, we have used some of our Microsoft Servers' scheduling tools and SQL Servers to automate the distribution of reports. Now, we are moving everything into one place so it's all controlled centrally. Then, you can look in one place to see where everything is. 

We have looked at a few different solutions in the past six months to see if they offer that same type of functionality and evaluated three other ones, which are very similar. I like ActiveBatch the best among the four solutions. The other tools seemed to not have the file manipulation tasks, and kept saying, "Well, you can do that in DOS." I thought, "That's okay. Welcome to the eighties." They basically said, "We don't have any file manipulation tools built in because you can do that other ways." However, we're trying to put everything in one place. There is a lot of archiving of files that we do based on different criteria. For example, there was one job that we wrote which looks at the size of an Access database. When the size of the file gets too large, it notifies that team, saying, "You need to go delete data out of it." Those kinds of things were not available within other solutions.

What other advice do I have?

I would recommend reaching out to a client who has used it, especially if you have questions. While talking with customer support is great, people who actually build with it have better knowledge of how to use it in a business setting.

We haven't used any of the APIs directly through ActiveBatch yet. We have started playing around with having our own little outside website which allows our end users to trigger jobs directly within ActiveBatch. But, we have not fully implemented that yet.

We have started looking at cloud solutions for bringing Azure sites up and down. We have not implemented that yet.

I would rate this solution as a nine out of 10.

Which deployment model are you using for this solution?

On-premises
Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Jason Fouks
Sr Technical Engineer at Compeer Financial
Real User
Top 20
We can automate just about anything

Pros and Cons

  • "ActiveBatch's Self-Service Portal allows our business units to run and monitor their own workloads. They can simply run and review the logs, but they can't modify them. It increases their productivity because they are able to take care of things on their own. It saves us time from having to rerun the scripts, because the business units can just go ahead and log in and and rerun it themselves."
  • "They have some crucial design flaws within the console that still need to be worked out because it is not working exactly how we hoped to see it, e.g., just some minor things where when you hit the save button, then all of a sudden all your job's library items collapse. Then, in order to continue on with your testing, you have to open those back up. I have taken that to them, and they are like, "Yep. We know about it. We know we have some enhancements that need to be taken care of. We have more developers now." They are working towards taking the minor things that annoy us, resolving them, and getting them fixed."

What is our primary use case?

It does a little bit of everything. We have everything from console apps that our developers create to custom jobs built directly in ActiveBatch, which go through the process of moving data off of cloud servers, like SFTP, onto our on-premise servers so we can ingest them into other workflows, console apps, or whatever the business needs.

How has it helped my organization?

We use it company-wide. With us being a financial organization, we rely on a bunch of data from some of our parent companies that process transactions for us. We are able to bring all that data into our system, no matter what department it is from, e.g., we have things from the IT department that we want to do maintenance on, such as clearing out the IIS logs on the Exchange Server, to being able to move millions of dollars with automation.

If there is a native tool for it, then we try to use it. We have purchased the SharePoint, VMware, and ServiceNow modules. Wherever we find that we can't connect in because the native APIs aren't there, we have been using PowerShell to strip those rows out into an array of variables that have worked pretty well. So far, we have not found a spot where we can't hook in to have it do the tasks that we are asking it to do.

We have only really tapped into the SharePoint native integration because we haven't gotten to the depths of being able to use ServiceNow and some of the other integrations. However, being able to use the native plugins has been very helpful. It saves us from having to write a PowerShell script to do the functionality that we are looking for. We are well trained in writing it because, with the old process we used, we did a lot of PowerShell, since the old tool just wouldn't do what we asked of it. We are finding that a lot of processes within ActiveBatch are now replacing those PowerShell scripts because ActiveBatch can just do it. We don't have to teach it how to do it.

We can do things within ActiveBatch without having to teach it everything. That is the biggest thing that we've been learning with it: it's easy to use, and its workflows work a lot better. The other day, we ran into a problem where Citrix ShareFile, which is one of our SFTP locations, was being stupid and disconnecting from the SFTP server; it was all just a timeout. Well, ActiveBatch has a process included where we can troubleshoot the connection failures and have it heal itself enough to get the data off of the SFTP server. Discovering ActiveBatch's self-healing functionality has been a lifesaver for us.
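The self-healing behavior described, retrying a flaky SFTP connection until it comes back, is essentially retry-with-backoff. This is a generic sketch of the idea, not the actual ActiveBatch feature:

```python
import time

def with_retries(step, attempts=3, delay=1.0, backoff=2.0,
                 transient=(ConnectionError, TimeoutError)):
    """Re-run a flaky transfer step, sleeping longer after each failure.

    Only exceptions listed in `transient` are retried; anything else
    (bad credentials, missing file) propagates immediately.
    """
    for attempt in range(1, attempts + 1):
        try:
            return step()
        except transient:
            if attempt == attempts:
                raise  # out of attempts; surface the real error
            time.sleep(delay)
            delay *= backoff
```

The distinction between transient and permanent failures is what makes a loop like this safe: timeouts get another chance, while real errors still fail the job.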

We have so many different processes out there with so many different schedules. My boss looked at it one day and noticed there were somewhere between 1,000 and 2,000 processes a day. The solution gives us that single pane of glass to see everything in one spot, because we have four execution agents constantly running, so there are processes happening at all times of the day and night.

We are actively monitoring all our ActiveBatch processes using SolarWinds Orion. If a process doesn't run, or a service is not running on one particular execution agent, Orion will alert us to that. I don't think that we have set up anything too major within ActiveBatch to figure out what is going on. I know that we have HA across everything. So, we are running four execution agents and two job schedulers. Having all that put together, it does fail over to the other location if there is a problem with one of the sites.

What is most valuable?

The most valuable feature is being able to ingest some PowerShell scripting into variables that we can then utilize in loops. Our first rendition of doing PowerShell into variables was being able to pull some Active Directory computers using a PowerShell script and Active Directory PowerShell modules, then we were able to take that and dump it into a SharePoint list, because we keep inventory of all our servers. It was through the process of trying to understand how to get something out of PowerShell into an array and being able to process that out into something else that it would become useful down the road.

There are some things that ActiveBatch can't do natively, which is no fault to them. It's just the fact that we're trying to do things that just don't exist in ActiveBatch. With us being proficient in PowerShell scripting, we were able to extend the ActiveBatch environment to be able to say, "We'll run this PowerShell script and get the array that we're looking for, but then take that and do something native within ActiveBatch that can ultimately meet our goals."

The ease of use has been pretty good. I have been able to create workflows and utilize different modules within the job library, which has worked out really well. 

ActiveBatch's ability to automate predictable, repeatable processes is good. It does that very nicely. A lot of what we do is pull files down from SFTP servers and put them onto our local file servers. Based on that, we are able to run a console app that developers have written, which is a lot more complicated, for doing various tasks. Our console apps are easy to set up because we have templates already drawn up. So, if we just right-click into our task folder, we can quickly create an item in there that we can start up for doing an automation feature. Just being able to use PowerShell to drop variables into the ActiveBatch process has worked really well now that we understand it.

What needs improvement?

I know that there are some improvements that I have brought back to the development team that they want to work on. The graphical interface has some hiccups that we have been noticing on our side, and it seems a little bit bloated. 

While the console app works well, they have some crucial design flaws within the console that still need to be worked out because it is not working exactly how we hoped, e.g., minor things like hitting the save button and all of a sudden all your job library items collapse. Then, in order to continue on with your testing, you have to open those back up. I have taken that to them, and they are like, "Yep. We know about it. We know we have some enhancements that need to be taken care of. We have more developers now." They are working toward taking the minor things that annoy us, resolving them, and getting them fixed.

For how long have I used the solution?

We did a proof of concept back in April.

We are in the process of migrating all our old processes over to ActiveBatch. The solution is in production, and we do have workloads on it.

What do I think about the stability of the solution?

It is pretty stable. Now that we have worked through the details and ensured that we can do a failover to let the process do what it needs to do, we haven't seen any problems with it.

We are about 90 percent done migrating our processes.

What do I think about the scalability of the solution?

Right now, we have four execution agents, and they are sitting pretty idle for the most part. If we find that we're starting to see taxed resources on our execution agents, then we have the capability of spinning up more. So, we could run hundreds of servers and automations if we wanted to.

There are only three of us who have been working with ActiveBatch, which is a good fit. We have one admin who is a developer first, then admin second. Then, there are two of us, who are server people first and developers second. All three of us manage all the different job libraries out there.

In the entire organization, there are about 1,300 of us using the different processes. A lot of people who would be more hands-on are the IT department, mainly because we are directly involved with all the different console apps. We have actually got a significant number of console apps, just because SCORCH couldn't do some of the things that ActiveBatch can do, so our developer teams went in and created the console app. At this point, all that ActiveBatch really needed to do was to be able to run an executable and provide an exit code on it, then let us know if it fails. There are some other business units who are involved a bit more along the way due to the movement of money, for example.

It is heavily used, at least in terms of what is out there. There is a lot of interest in adopting it in the future, along with a lot of processes that people are really pushing to get put into ActiveBatch. They still have the mentality that a lot of it needs to be done as a console app. However, with us just ending the migration phase, we are trying to get everything moved over so we can shut down the old servers. Then, the next step, probably in 2021, will be focusing on what ActiveBatch can do without us having to write a console app; 75 percent of the time, ActiveBatch could do it natively. It is just a matter of getting a lot of the IT developers to feel comfortable adopting it as a platform.

How are customer service and technical support?

I am working with them on their tech support. We have a customer advocate with whom we have been working. She has been awesome. We have had some issues where tech support will suggest one thing, then we are sitting there scratching our heads, going, "Do we really need to go that complicated on a solution?" Then, we reach out to our customer advocate, who comes back, saying, "No, this is how you really need to do it. I'm going to take this ticket and go train that tech support person, so in the future you don't get the answer you did." Therefore, their tech support is a bit rough around the edges, but I foresee that in the next six months to a year, they will be on their game and able to provide exactly the answers we expect within the timeframe we expect.

Which solution did I use previously and why did I switch?

We see ActiveBatch as the Center of Excellence for all things related to automation for our business. It is the best solution that we have had compared to what we were running before, which was Microsoft System Center Orchestrator (SCORCH). We don't want to have a whole bunch of different solutions out there. Being able to have one solution that can do all our automation is the best way to do it.

We switched over because of the intelligence. We were right in the middle of trying to decide whether we were going to upgrade SCORCH to the latest version or if it was time for us to go a different path. As we started going down through the different requirements that we needed SCORCH to do, we decided that it was time for us to go in a different direction. SCORCH had to be taught everything you wanted it to do, whereas there are a lot of processes that ActiveBatch will just go ahead and handle.

The performance is about the same between the two solutions in terms of doing what they are supposed to do. Where we really have the advantage is that we don't have to reinvent the wheel, e.g., triggers within ActiveBatch are native and can be set up pretty quickly and easily. With SCORCH, we struggled with trying to get a schedule set up for a trigger or being able to rely on constraints. For example, if a file doesn't exist, then you really can't do anything. In SCORCH, we had to teach it that if you don't see a file, then hold on a second because we have to wait, whereas ActiveBatch just says, "Oh, okay. I know how to do that."

In certain cases, ActiveBatch has resulted in an improvement in workflow completion times because of the error retries. We can take care of them by telling ActiveBatch that if you have a problem, go ahead, try it again, and modify this. If a job ran at two o'clock in the morning and failed with SCORCH, we always had to go back, figure out what happened, and how to get it run again. It might have been something as simple as no network connection, because one of our upstream providers had an outage. With ActiveBatch, at least, we have been able to build in that self-healing or error detection. Once it sees the connection, it can go ahead and just correct the problem. For example, the Internet might go down from 2:00 AM to 2:15 AM, then by 2:30 AM, it's all back up and running. ActiveBatch can go ahead and finish the task. With SCORCH, we were finding that it would fail. Then, at seven o'clock in the morning, we would get to troubleshoot any issues that might have come up.

A lot of times, troubleshooting did not take very long; it depended on the process. If it was something downloaded from the SFTP, that relied on several other steps that needed to take place, which might have delayed things a bit because we had to walk through five different processes that normally would have been scheduled to run at 3:00 AM instead of 2:00 AM. So, if the Internet is out between 2:00 AM and 2:15 AM, ActiveBatch heals that first process before the second one runs at 3:00 AM. Then, we don't have to do any added troubleshooting because step one didn't work and step two failed, and wait until we get up and start looking at it that day.

How was the initial setup?

The initial setup was straightforward.

It took two to three hours to deploy, by the time we had all the intricacies done that we wanted.

We knew that we wanted it to be highly available in two data centers for DR purposes, because some of these processes move millions of dollars between accounts (in various pieces for wire transfers). HA was the big thing that we made sure our strategy was based around.

The only other strategy was the fact that we have multiple environments that we go through to test our solution out first. When we are done, we export/promote it up to the production environment.

What about the implementation team?

The good part was that we really didn't have to do the install because we ended up getting a proof of concept setup with one of their engineers. So, we didn't have to do the initial setup ourselves, but we did build two other environments: one in our test environment and one in our development environment. Based on the fact that we walked through it the first time with the proof of concept, I was able to go back and reproduce every step that they walked us through on day one to build out the test and dev environments.

What was our ROI?

I have absolutely seen ROI. Coming from the admin point of view, it has streamlined the process of being able to just implement something instead of having to teach the software how to do its job. From our point, I know that I have implemented a couple of different processes that were not a migration piece, and it's been fairly easy for us to deploy because we know what the business unit wants to do with it. For us to implement, it takes us about 20 minutes to get it perfected on my side, then I can have developers run with it, test it, and figure out what their code was doing to make it happen. So, the biggest thing is that it is easy to use.

I know that there are enough processes out there that it's worth a gold mine. We can automate just about anything that we would ever want to. If we wanted the lights to turn on at a certain time, we could go ahead and turn the lights on at a certain time, and it would just happen.

ActiveBatch's Self-Service Portal allows our business units to run and monitor their own workloads. They can simply run and review the logs, but they can't modify them. It increases their productivity because they are able to take care of things on their own. It saves us time from having to rerun the scripts, because the business units can just go ahead and log in, then rerun it themselves. 

This solution improves our job success rate percentage. The biggest thing is having built-in capabilities of error detection, retries, and the ability to self-heal.

ActiveBatch has saved us man-hours. We don't have to rerun some of these scripts on behalf of the business unit. Or, if there is a script that fails, it can go ahead and self-heal, fixing itself. That eliminates troubleshooting time that otherwise went unaccounted for, all while helping our business units.

What's my experience with pricing, setup cost, and licensing?

The pricing was fair. 

There are additional costs for the plugins. We have the standard licensing fees for different pieces, then we have the plugins which were add-ons. However, we expected that.

Which other solutions did I evaluate?

We had a consultant come in and try to share with us all the different tools. However, there isn't a lot of competition out there for automation capabilities.

A major component was that the vendor is thinking five years ahead, looking to future-proof our business. When we were making our decision, we were ready to either upgrade SCORCH or go a different path. We wanted to be connected with an organization that had a long-term plan; we didn't want to revisit this one to three years down the road.

What other advice do I have?

We have been able to learn it pretty quickly. We were kind of thrown right in after we got the proof of concept up and going. We had a couple of use cases drawn up and implemented, and they showed us how to do it. Our boss ended up buying the software, and said, "Ready, set, go. We're going to start migrating all these different processes over." We really didn't get time to learn it. Based on what we knew about our previous application that we were using for automation, we were able to step right in and do the best we could. We have been doing weekly, one- to two-hour sessions where three of us get together, just understand the solution, and try to work through all the details. We have been able to learn it pretty quickly without having too much training or knowledge.

We have gone through and given the business units a demo of what the possibilities are for sharing knowledge and ideas. At the end of the day, there is a team of three of us who are actually implementing all the processes so we keep a kind of standard. However, to give a business unit an idea of what the functionality is and how we could best utilize it, we at least give them the 30,000 foot view of what ActiveBatch could do, then we build it.

We mainly use it for console apps, but we haven't explored them in real depth. I know that we could get even deeper. At some point down the road, a lot of the console apps that our developer teams create will more than likely become native ActiveBatch processes which we will no longer need the console apps to run.

For the admins, the biggest lesson learned would be to spend those first 30 days going through the Academy. They have an online Academy out on their website. The biggest struggle that we had was that we were trying to do this migration without knowing all the different features of the software. We ran into trouble where we would try to implement something (and we wanted to do it by best practices because we want to get it right the first time), but there were features that we were discovering along the way that we had no idea about until all of a sudden we needed them. Then, we would go back and go, "Oh, you know what? That last procedure that we just implemented? It would've been really cool if we had known that at the time."

If we had taken the first 30 to 60 days, or even a week-long crash course, in ActiveBatch development to get the highlights of everything that the software could do, that would have helped us immensely, just to make sure that we knew what was going on and how it worked. We probably would have implemented some of our migrations a little differently than we have them done today. So, we will have to circle back, revisit some of those processes, and reinvent them.

Take that time and learn the solution. Make sure you understand the software, at least at a higher level, maybe not the 30,000 foot view, but maybe the 1,000 foot view and get through the Academy first. Once you get through the Academy, then you can go ahead and start implementing the job libraries and how you want it to lay out and be implemented. Even after nine months of working with the software, we're still discovering features that we wish we would have known nine months ago coming into the migration.

I would probably rate the software as a nine and a half or 10. I would rate the tech support as probably a six, but they are improving immensely. If I had to give it an overall score, I would go with an eight (out of 10).

Which deployment model are you using for this solution?

On-premises
Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Shaun Guthrie
Senior Operations Administrator at Illinois Mutual
Real User
Top 5
Provides critical functionality in moving from our mainframe to a distributed environment

Pros and Cons

  • "As far as centralization goes it's nice because we can see all these processes that are tied to this larger process. The commissions, FTP processing, the reporting, the file moves to the business users — all that is right there. It's very easy to read. It's easy to tie it together, visually, and see where each of these steps fits into the bigger picture."
  • "The Jobs Library has been a tremendous asset. For the most part, that's what we use. There are some outliers, but we pretty much integrate those Jobs Library steps throughout the process, whether it's REST calls, FTP processes, or file copies and moves... That has helped us to build end-to-end workflows."
  • "One thing I've noticed is that navigation can be difficult unless you are familiar with the structure that we have in place. If someone else had to look at our ActiveBatch console and find a job, they might not know where to find it."

What is our primary use case?

ActiveBatch is used for scheduling our nightly batch processes. That is our main use at this point. It includes billing, processing, claims, commission statements, and a lot of reporting. It's all tied into that batch process.

We do use the built-in REST call process for nightly printing, coming out of that batch cycle. We distribute the nightly reports out of the batch cycle to different departments using ActiveBatch. It's used for FTP processing every week coming out of the weekly commissions process.

The most important part to us is to keep those nightly batch cycles in an easy to read format, which is where ActiveBatch Plans come into play. We run these cycles in four different environments, from development to production and a couple stops in between. Keeping all of those jobs separate from one another is key for us.

Outside of batch, we do run a process every five minutes throughout the day during business hours to scrape data from our mainframe entry system to our new policy administration system. As people enter claims into the mainframe system, those claims get moved over within five minutes, rather than waiting for the mainframe batch cycle to run that night and those claims not being seen until the next day. That saves us up to 24 hours. The business end-users can get that data within five minutes now.
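That five-minute sweep is essentially a fixed-interval polling loop. A minimal Python sketch of the idea follows; the extract and load callables are hypothetical stand-ins for the mainframe read and the policy-system write, not anything ActiveBatch-specific:

```python
import time

def sync_claims(extract, load, interval_seconds=300, max_cycles=None):
    """Every `interval_seconds`, move newly entered records across.

    `extract` returns the claims entered since the last sweep;
    `load` writes one claim to the new policy administration system.
    `max_cycles` bounds the loop (useful for testing); None runs forever.
    """
    cycles = 0
    while max_cycles is None or cycles < max_cycles:
        for claim in extract():
            load(claim)
        cycles += 1
        if max_cycles is None or cycles < max_cycles:
            time.sleep(interval_seconds)

# Demo with in-memory stand-ins for the mainframe and the new system.
pending = [["claim-1", "claim-2"], ["claim-3"]]
migrated = []
sync_claims(extract=lambda: pending.pop(0) if pending else [],
            load=migrated.append, interval_seconds=0, max_cycles=2)
print(migrated)  # ['claim-1', 'claim-2', 'claim-3']
```

The worst-case latency for any one claim is one polling interval, which is where the "within five minutes" guarantee in the paragraph above comes from.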

How has it helped my organization?

ActiveBatch has allowed us to move forward quickly with our modernization effort, to get off of the mainframe and to move that data to a distributed environment. It has been huge for us to use ActiveBatch to run these nightly processes: everything from Dev to QA, UAT, and Production. Those are all cycles that we run every night to allow different users to test processes that they're working on in each of those stages, to get them into production and off the mainframe.

With the systems we're using now, it's a lot easier with ActiveBatch. The mainframe is so manual. If there's a problem with some mainframe code, it requires a call to a developer, but our new system works great with ActiveBatch because everything is built into that system. There's no JCL code or mainframe COBOL code, up front. Our batches just work seamlessly between ActiveBatch and our new administration system. We've had no problem with our batch processing from that point of view. Whereas with the mainframe, it's a struggle at times. If we have a problem with a job and it cancels, we may be waiting three hours for a developer to get online, troubleshoot, test, and get a fix in place so we can finish the cycle. We've not had that issue with ActiveBatch.

What is most valuable?

A lot of the built-in processes are among the most valuable features, because when we were just starting out it was a little overwhelming, not having used the product, even though I went through the ActiveBatch Boot Camp — and a couple of other people went through it as well.

We found it easier once we were using the product and then doing refreshers on the Boot Camp or doing the deep dives that ActiveBatch provides. Even the Knowledge Base articles allow us to grow and let us know what we can use in our environment.

We're able to use the Plans, rather than seeing individual jobs within all four of our environments. Seeing all of these jobs individually would be overwhelming to try to easily decipher workflows, whereas everything is nested nicely within each Plan for us. It makes it very easy to read the next day, and to look at how each cycle ran. It also helps with troubleshooting if there's an issue with one of them at night.

As far as centralization goes it's nice because we can see all these processes that are tied to this larger process. The commissions, FTP processing, the reporting, the file moves to the business users — all that is right there. It's very easy to read. It's easy to tie it together, visually, and see where each of these steps fits into the bigger picture.

Other important features for us are file triggers, file constraints, and job constraints, because of the sequential nature of the batch process. The file triggers have made our processes more efficient and reduced delays. It might be minimal at this point, but it would still be a manual process that would have had to be done. Our second-shift operator would have to wait each night for that mainframe cycle to finish and then manually trigger certain processes within each of our ActiveBatch cycles.

It's also a very flexible product. We're just over a year in and we're still getting our feet wet and realizing its potential. One thing I am anxious to roll out — and I've tried to push some business end-user meetings, but it's still a little early in the process as everyone has been so busy with the overall modernization effort — is the Self-Service Portal. It will allow the business users to run processes on-demand, rather than putting in a ticket to have IT do it for them. This would also allow other IT users to see any processes they may be testing, in the ActiveBatch environment.

In addition, the Jobs Library has been a tremendous asset. For the most part, that's what we use. There are some outliers, but we pretty much integrate those Jobs Library steps throughout the process, whether it's REST calls, FTP processes, or file copies and moves. We do use some process job steps to call out external batch processing through external scripts, but most of what we're using is what is built-in, at this point. That has helped us to build end-to-end workflows.

What needs improvement?

When our mainframe process ends each night it sends out an email to certain users that the system is up, so that they can log on and do work on the mainframe at that point. We tried to use that email as a trigger for our ActiveBatch printing processes but it didn't work out too well. I believe it ended up being a bug that they're going to address in a future release.

But at the same time, that was an easy fix. We were able to change that from an email trigger to a file trigger. Now we have the mainframe job, in addition to sending out that email, create four text files that will trigger our four batch cycles through ActiveBatch. That has worked out great for us.
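A file trigger of that kind boils down to: wait for the file to appear, kick off the cycle, then consume the file so the next run starts clean. A hedged Python sketch of the mechanism (the file name and polling interval are made up, and ActiveBatch's built-in file triggers replace this loop entirely):

```python
import time
from pathlib import Path
from tempfile import TemporaryDirectory

def wait_for_trigger(path, run_cycle, poll_seconds=5, timeout=None):
    """Block until the trigger file appears, run the batch cycle,
    then delete the file so the next night starts clean.

    Returns True if the cycle ran, False if `timeout` expired first.
    """
    trigger = Path(path)
    waited = 0.0
    while not trigger.exists():
        if timeout is not None and waited >= timeout:
            return False
        time.sleep(poll_seconds)
        waited += poll_seconds
    run_cycle()
    trigger.unlink()  # consume the trigger
    return True

# Demo: the "mainframe" drops the file, then the watcher fires.
with TemporaryDirectory() as tmp:
    flag = Path(tmp) / "nightly_cycle_1.trg"
    ran = []
    flag.write_text("")  # stands in for the mainframe job creating the file
    ok = wait_for_trigger(flag, lambda: ran.append("cycle-1"), poll_seconds=0)
    print(ok, ran, flag.exists())  # True ['cycle-1'] False
```

Deleting the trigger after the run is what makes four separate files able to drive four independent cycles, as described above.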

One thing I've noticed is that navigation can be difficult unless you are familiar with the structure that we have in place. If someone else had to look at our ActiveBatch console and find a job, they might not know where to find it. That being said, I have been using that search function a lot lately. That search function is definitely your friend.

For how long have I used the solution?

I have been using ActiveBatch for about a year-and-a-half.

What do I think about the stability of the solution?

I've not had any major issues with ActiveBatch at all. It seems extremely stable. We've not had any downtime. We've had issues here and there with different processes, but nothing that has affected the overall environment. Granted, we don't have very many users on it; it's mostly processing at this point.

What do I think about the scalability of the solution?

In terms of bandwidth, we've not had an issue. There are no limitations that I can see.

How are customer service and technical support?

The email support can be hit-or-miss. Overall, I've had a pretty good experience with it. They're quick to reply and they let you know exactly what they need. You get it to them and they dig into it and get back to you. Sometimes it can be cumbersome emailing back and forth and waiting for replies. Overall, it's been good.

Which solution did I use previously and why did I switch?

We didn't have a previous solution.

We were looking for a product that could handle a company-wide insurance systems modernization project. This project has been in the making for years. It boiled down to putting new products on our distributed systems, migrating data from the mainframe to those distributed systems, and eventually sunsetting the mainframe. This approach makes more sense since it's simpler to start with new products rather than migration, and it also gave us a nice starting point with ActiveBatch.

How was the initial setup?

Out-of-the-box, it was a challenge to understand the best way to structure it for our system. Obviously you don't know what you don't know. Once we started using it, we realized the best way to lay it out for ourselves and it became easier and easier over time. I've had to move things around a great deal to make it easier because we weren't sure, when starting, how to set it up, as far as our environment goes with its file structure and object structure.

As far as objects go, it's pretty straightforward. It's like any other file structure. It's just a matter of knowing what you need for your environment, which is something you learn as you go: You need these things in this folder, you need those items in that folder. Do you want all your FTP processes in one folder or do you want them underneath a certain project that they're tied to?

As far as setup and configuration go, they're very straightforward. I've never seen an issue with that or with upgrading.

The planning stage took a while. We got the product and then I and another operator went through the training, which we did in a week. The actual deployment has been scattered. The initial deployment went well, but it was staggered because there were, and still are, different pieces flowing in, a little at a time. It won't be really set until we get all of our business on this platform. It's as set as it can be right now. The actual deployment slowly fell into place. I hate to say it took two months to deploy this product. It didn't. But to get to where we were comfortable running that first batch cycle, it probably did, but that's no fault of ActiveBatch. That's just developers getting the pieces to us and then us figuring out how to use ActiveBatch in the most efficient manner.

What about the implementation team?

We implemented ActiveBatch on our own, but we did work closely with the provider of our new policy administration system and learning how the two products would work together for batch processing. I have worked very closely with someone there to tie in with ActiveBatch. I don't believe he had experience with ActiveBatch prior to that, but one of his coworkers did and he called on that coworker from time to time. We mostly worked on using ActiveBatch to call those external processes through the scripts that were provided to us. That's where we had to get them involved because that was also a new product to us, and it still is. So we were trying to learn how that product worked, how ActiveBatch worked, and how to get them to work together.

For ActiveBatch there were five or six people within Operations/Infrastructure involved in the deployment. We're a small-to-midsize company with a couple of hundred employees.

What was our ROI?

It's hard to say how many hours it has saved because it is new. There have been a lot of hours put into learning the product. For instance, putting SSIS packages in has required a lot of Knowledge Base research on ActiveBatch's site. The Knowledge Base is tremendous there. I've really never had an issue finding plenty of information, sometimes more than enough information, to decipher. But in terms of man-hours, at this point, it's just figuring out the system and how to set up these jobs to work together. Those savings will definitely really be seen down the road.

But our return on investment is that it has allowed us to move forward with this project. Even with just new business, it has allowed us to move incredibly fast when it comes to putting these batch processes in place. So far there's limited data and each cycle runs in 10-20 minutes, but at the same time, on the back end, it's providing that foundation. So we'll know what we need to do when we have more data. For example, currently, load balancing is counterproductive. There's so little processing going on that it would take longer to load balance this 10-minute cycle than to just run straight through.

What's my experience with pricing, setup cost, and licensing?

The cost is outside the scope of my job responsibilities. Obviously we're using it, so it was worth the cost. I think it's a tremendous product. I don't know what the cost is compared to others, but having seen the results, it's worth it.

We recently signed up for the certification courses and training, which is money well spent. Anything involving training is money well spent, but especially with a new product that is going to be a major part of your environment and your business. From what I've seen, the videos and online training through ActiveBatch are tremendous. They provide examples, and they actually provide a test environment with jobs that you can put into ActiveBatch. You're able to run these jobs, make changes to them and work through the training with them.

Which other solutions did I evaluate?

Maybe at a higher level in our company there was some research into other solutions that settled on ActiveBatch as the best one. As far as I know, it has always been ActiveBatch. I was hearing that name long before we had it in hand.

What other advice do I have?

Jump in. That's what we did and we're seeing the results. I can't stress enough how much it's allowed us to move forward with this modernization project. Overall, it really has been seamless. There have been a lot of hours on my part, learning the system and researching different processes that I need to put in place for the cycles. But to anyone else, the end result probably appears seamless. It is a lot of work learning it, especially if you have no prior knowledge of enterprise job schedulers and that type of flow. But ActiveBatch provides a wealth of information; their Knowledge Base is tremendous. The support gets back to you pretty much immediately. It might take them a couple of days here and there while they're researching or working with their engineers to replicate a problem.

And sign up for the training, for sure, as well as the additional training certification. In the year since I took the Boot Camp and worked my way through putting this in place to meet our immediate needs, when I revisited the Boot Camp, I found there was a ton of stuff that you forget that you can be using. In that initial Boot Camp, you're really not sure exactly what you're going to use it for. Once you start seeing ActiveBatch processes in your system and go through that training again, you realize, "Oh yeah, I can definitely see where I can tie this in," or "Yeah, we can definitely use that here or we could use this function in this way instead of that way." It will definitely help you become more efficient.

It's easy to learn the basics. It's just a matter of knowing what you need to know, what you need to use it for. At that point the ball is in your court because, while it can definitely be challenging, at the same time it's very rewarding to see things fall into place the way you pictured them. It is a very powerful tool and we've only barely scratched the surface. Keep learning. I'm learning more and more processes within ActiveBatch every day. It's definitely an ongoing process.

What I've learned from using ActiveBatch is that the sky's the limit. With all the additional, third-party licenses — Active Directory, System Manager — at this point it seems endless for us. I honestly don't know where we would be without it at this point.

We just started testing SSIS packages, as we're trying to move those off of the SQL environment and into ActiveBatch, rather than setting up schedules within SQL. We started testing one, out-of-the-box, and we're ready to move that to production this week. There will be more after that.

We aren't leveraging the cloud. We are trying to get into that area but, at the same time, we're focused on this part of our modernization project right now, getting off of the mainframe first and onto the distributed systems. Then we can take it another step. We don't have any of those additional licenses for integration with things like SharePoint, Informatica, or ServiceNow. Those options are definitely something my manager has his finger on. He knows those are available and he realizes ActiveBatch can definitely be leveraged to a greater extent.

Our developers work outside of ActiveBatch. It's mostly me who puts together the ActiveBatch jobs. The developers are mainly mainframe developers who don't touch ActiveBatch, or they are application developers who tie everything together into this entire modernization effort. There are a ton of products tied into that effort, ActiveBatch being one. ActiveBatch "brings the others together," such as printing from a third-party vendor and our insurance suite for billing, claims, commissions, etc. A new underwriting tool will also be tied in eventually. So most of the developers are working on those other applications. Direct users of ActiveBatch boil down to me and a couple of others who are familiar with ActiveBatch but not as familiar with it as I am.

Currently, any issues with the batch processes are more the result of a learning curve for us.

I would rate the solution at eight out of 10. I'm a stickler with ratings. Nine would be the highest I would ever give anything because nothing is perfect. Here, it comes down to the fact that the navigation can be clunky at times, but I think that's more on you to learn. One thing ActiveBatch could do is provide more examples of real-life business use and business case examples, that show how others have structured their systems. That would probably be a big help. They do tell you how to organize jobs within Plans and you can nest things that way, but more real-life examples would probably have helped me to see how other businesses are using it or how their folder or their object structures are set up.

I love the product. It's exactly what we were looking for.

Which deployment model are you using for this solution?

On-premises
Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeterBirksmith
Senior System Analyst at an insurance company with 5,001-10,000 employees
Real User
Top 20
Native API calls are very good and very easy, enabling us to tie in to a large range of solutions, including Tableau and ServiceNow

Pros and Cons

  • "The most valuable feature is its stability. We've only had very minor issues and generally they have happened because someone has applied a patch on a Windows operating system and it has caused some grief. We've actually been able to resolve those issues quite quickly with ActiveBatch. In all the time that I've had use of ActiveBatch, it hasn't failed completely once. Uptime is almost 100 percent."
  • "A nice thing to have would be the ability to comfortably pass variables from one job to another. That was one of the things that I found difficult."

What is our primary use case?

We have roughly 8,000 jobs that run every day and they manage anything from SAS to Python to PowerShell to batch, Cognos, and Tableau. We run a lot of plans that involve a lot of constraints requiring them to look at other jobs that have to run before they do. Some of these plans are fairly complicated and others are reasonably simple.

We also pull information from SharePoint and load that data into Greenplum, which is our main database. SharePoint provides the CSV file and we then move it across to Linux, which is where our main agent is that actually loads into the Greenplum environment.

Source systems acquire data that goes into Greenplum. There are a number of materialized views that get populated, and that populating is done through ActiveBatch. ActiveBatch then triggers the Tableau refresh so that the reports that pull from those tables in Greenplum are updated. That means from just a bit after source acquisition, through to the Tableau end report, ActiveBatch is quite involved in that process of moving data.

We have 19 agents if you include the Linux environment, and 23 if you count the dev environments. It's huge.

It's on-prem. We manage the agents and the scheduler on a combination of Windows and Linux.

How has it helped my organization?

We have some critical processes in ActiveBatch that go to finance and to the auditors in our organization. Those processes are highly critical because that allows us to trade. If those reports don't get to them, we get penalized by the government or by APRA or by some financial institutions. ActiveBatch, in this particular case, is absolutely critical for getting those reports out.

We have SLAs requiring us to get reports out by a certain time of day, or by a certain day of the month by a certain time, and we're judged on whether those reports go out. ActiveBatch, being as stable as it is, is only impacted by external factors like network and database performance. Otherwise, we are quite comfortable with the way ActiveBatch handles these jobs without our having to look at them.

Because the connections between ActiveBatch and other tools are automated, it gives us more time to do other things, and more interesting things. If something goes wrong, we can go back and have a look in the logs that are produced and that explain what's going on, and we can then repair it. It's an enabler, and it provides us with more time to get on with other jobs. It's something that's critical and it runs by itself and we're really happy it does that. We have that time available because we're not actually manually babysitting processes.

It provides a central automation hub for scheduling and monitoring, bringing everything together under a single pane of glass, absolutely. There is finance, sales, marketing. Pretty much every department has a job that we deal with. It's quite heavily integrated into our whole stack. As an insurance company, our major events department, for example, is critical because every time there's a storm or a hail event or a cyclone somewhere, those reports must get out in a timely manner. I can't think of any department that isn't impacted by ActiveBatch, running some report for them.

The single pane of glass helps the DataOps team manage all of the processes that are supported by ActiveBatch as the main scheduling tool. We've created a dashboard which pulls information from ActiveBatch, information that we can share with the organization. They can look at jobs and the schedules and, if necessary, run their own jobs from that point. It's like the lungs of our company.

Overall, it has helped to improve workflow completion times by 70 to 80 percent, easily. Once you've built a job, it just runs and no one has to concern themselves with it doing what it's doing. They will get the notification or the file or the email that says it's processed and they move on with their day.

In addition, we had a guy who was spending seven hours in a week to extract, compile, and then export information into a CSV file, and then another few hours to get it transferred to another department. We were able to build a PowerShell script, with a query that could easily be updated, that was automated through ActiveBatch. It takes 10 minutes to run. What that guy was doing in hours, we are now doing within minutes.
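That kind of extract-and-export job reduces to a short script the scheduler invokes on a timer. The reviewer's version was a PowerShell script against their own database; this minimal Python sketch uses sqlite3 as a stand-in, and all table and column names are hypothetical:

```python
import csv
import sqlite3

def export_query_to_csv(conn, query, csv_path):
    """Run a query and write the result set, with a header row, to a CSV file."""
    cur = conn.execute(query)
    headers = [col[0] for col in cur.description]
    with open(csv_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(headers)
        writer.writerows(cur.fetchall())
```

A scheduler job would then just invoke this script and hand the resulting file to a transfer step, which is what turns hours of manual work into a 10-minute run.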

What is most valuable?

One of the valuable features is the ability to tie things in using API calls. The native integrations and REST API adapter for orchestrating the entire tech stack are really good and really easy. We have a product called ServiceNow, which is a call tracking system. If a problem occurs, ActiveBatch will send an API call into ServiceNow, and it will raise a ticket to say that there's a problem. That gives us an auditing process. We're also using API calls for Tableau and we're also using some API calls for SharePoint. We tie ActiveBatch into a lot of different applications.
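As a rough illustration of what such a call can look like, here is a hedged Python sketch of raising a ServiceNow incident through its Table API. The instance URL, credentials, and field values are hypothetical; only the endpoint shape (`/api/now/table/incident`) comes from ServiceNow's documented Table API:

```python
import json
import urllib.request

# Hypothetical instance URL; a real one would be https://<instance>.service-now.com
SN_URL = "https://example.service-now.com/api/now/table/incident"

def build_incident(job_name, error_text):
    """Build the JSON payload for an incident describing a failed job."""
    return {
        "short_description": f"ActiveBatch job failed: {job_name}",
        "description": error_text,
        "category": "software",
    }

def raise_incident(payload):
    """POST the incident to the ServiceNow Table API (not invoked here)."""
    req = urllib.request.Request(
        SN_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # Real use would also attach HTTP basic auth or an OAuth token.
    return urllib.request.urlopen(req)
```

Firing something like this from a job's failure path is what gives the audit trail the reviewer describes: every failed run leaves a ticket behind.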

Also, the overall ease of use is brilliant. It's easy to pick up. We can get a newbie up and running within a day, using ActiveBatch. It's not to the extent where that person will know some of the more complicated issues, but in terms of being able to build a job and export or run the job, it's within a couple of hours. Within a day, people are quite comfortable with the application. We've just signed an agreement with ActiveBatch which gives us all the education materials now. That means we'll be applying more advanced features. It's really good as far as ease of use goes.

We use the solution across all sorts of organizational branches. It's used for SAS and SAP, which is finance. We have fraud and Salesforce, which is for the sales group. It's also used with marketing and major events because, when there's a storm, we need to know what's going on. We also have the ability to pull from external sources, meaning external vendors such as Guidewire. So ActiveBatch is widely utilized and probably more widely utilized than the executives realize. It's well embedded in our company.

What needs improvement?

We are moving to version 12 soon, and I believe that interface is going to be more of a "webbie" look and feel, but I can only comment on version 11 which is what we have. 

A nice thing to have would be the ability to comfortably pass variables from one job to another. That was one of the things that I found difficult. Other than that, it's all good.

For how long have I used the solution?

I've been with this company for almost 10 years and it was already here before I arrived.

What do I think about the stability of the solution?

The most valuable feature is its stability. We've only had very minor issues and generally they have happened because someone has applied a patch on a Windows operating system and it has caused some grief. We've actually been able to resolve those issues quite quickly with ActiveBatch. In all the time that I've had use of ActiveBatch, it hasn't failed completely once. Uptime is almost 100 percent.

With those 8,000 jobs that run in a 16-hour period, the majority of the time we're spending about an hour of the day with ActiveBatch, repairing problems. There are issues where we have to re-run a job because of it exceeding its runtime. Or when a job fails, even though the alert goes out to the end user, we still have to tap the user on the shoulder and say, "Did you look at this alert? We've got a problem here, can you please fix it?" Other than that, it pretty much runs itself. Overall, ActiveBatch saves us a huge amount of time, being as stable as it is.

If we were having to repair everything, on an ongoing basis, we would be spending more than five or six hours a day, so we are saving at least five to six hours a day by using this tool. The improvement to the business is quite substantial. People aren't having to manually do anything that would normally take them two or three hours to do. Those things are being done within a matter of minutes and then passed on. And those five or six hours are just for us in our department. You can multiply that by the number of people who would normally have done something manually and who now have it done through ActiveBatch in minutes.

We're looking at more than a 98 percent success rate for uptime and for running jobs. The only time that something falls over is not to do with ActiveBatch itself, rather it's to do with problems with either the network, the database, or developers.

What do I think about the scalability of the solution?

The scalability is brilliant. We've got 23 machines. We have redundancy integrated into this environment. 

If a server goes down, we can turn that queue off and re-queue those jobs to another server, while we get a new image spun up and restarted. In that situation, the delay is in getting the IT guys to spin up the image. If we could get an image spun up when it failed, it would be a matter of five or 10 minutes to be back in business with that server. As it is, once the IT guys do spin it up, we kick off from there.

The main interface is used by about 12 people. The dashboard that we've built on top of it is probably used by 70 to 80 people. But the number of people it affects is in the thousands across the entire organization.

It's heavily utilized across a number of departments in the organization and they really do rely on ActiveBatch to stay up and stable and to provide their reporting mechanisms.

How are customer service and technical support?

We've had a couple of issues where we've had to log a defect with ActiveBatch. But the guys at ActiveBatch are really responsive. We had things fixed in 24 hours, and they're in a different time zone. The response time is exceptional. This is one of the few vendors that I can say is highly responsive and that shows a level of commitment that I don't think many other organizations show.

Which solution did I use previously and why did I switch?

ActiveBatch replaced Windows Scheduler and cron jobs that had been running on some servers. There was also another scheduling tool that popped up somewhere, but that data was moved into ActiveBatch. The scheduling from Cognos was also moved into ActiveBatch because it was more convenient, and some of the Tableau scheduling was moved into ActiveBatch as well.

How was the initial setup?

The initial setup was straightforward. It's super-easy to install and super-easy to set up. Even on the Linux box, it was really easy to install and set up and run. There was no real complexity in the installation process.

Most of the time with setup or upgrades is spent testing. We usually deploy agents within 20 minutes. The scheduler and the database might take an hour and a half, but because the agents are on virtual machines, we have an image and we just spin that image up. If something goes wrong, we can just spin up a new image and get that agent started straight away. In terms of testing, when we do disaster recovery, we redeploy to a disaster recovery environment and then we test that the connections are working, the jobs are running, and that there are no problems. That's where most of the time is spent, not in the deployment itself.

We usually have two people involved in the process, one who is the primary and one who is the secondary. And then we have a couple of people on standby. The primary does the installation and the secondary is looking over their shoulder for learning purposes. Then we have a few people on the IT side in case there is a problem with the operating system or the network that we have to deal with, but they're not involved until there's a problem. The DBA is also on-call just in case there's an issue with the database.

Maintenance-wise, it's only if something happens that we go and look. We have a job that looks at the health of the database that ActiveBatch uses. It's pretty much all automated, so it looks after itself. We have another job that pings the servers to make sure that all the ports that it needs are running and open. We also have jobs that look at the network latency so that if the network latency is beyond a certain point, it notifies IT and us. It also looks at the operating system and the actual directories. Unless we schedule it for an upgrade, which we do every six months, we don't look at maintenance for that six months unless there's a problem.
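The port-monitoring job described above can be as simple as a timed TCP connect attempt. A minimal Python sketch of that check; the hosts and ports you would feed it are your own, and this is an illustration of the idea rather than the reviewer's actual job:

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A scheduled job that loops over the services the scheduler depends on and alerts when `port_open` returns False covers the "pings the servers to make sure all the ports are open" case.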

What was our ROI?

It pays for itself because it gives the DataOps team more time to be involved in other projects. It allows the organization to move forward without having to worry about doing anything manually. ActiveBatch is performing a huge service to the organization in terms of reducing the number of man-hours required to do manual tasks.

What's my experience with pricing, setup cost, and licensing?

If you compare ActiveBatch licensing to Control-M, you're looking at $50,000 as opposed to millions.

Which other solutions did I evaluate?

ActiveBatch isn't the only scheduling tool that we have. There's also a product called Control-M, but Control-M is a lot more expensive and mostly manages the mainframe. ActiveBatch is at a very modest price for running a very complex process.

We can expand ActiveBatch more readily than Control-M because, with Control-M, you pay for X number of runs in a run book. If you want to extend that run book, they want half-a-million dollars, or more, for 500 jobs. We can expand ActiveBatch. We could go to 10,000 jobs and it wouldn't cost us any more. It's only if we were to add more agents to load balance that we would be charged any more, and it wouldn't be anywhere near what Control-M charges.

I've mainly been involved with ActiveBatch and it's hard to compare another vendor when there hasn't been a vendor to compare against. As far as performance is concerned, Control-M and ActiveBatch are on par, but they're not the same because Control-M is really just moving files and running programs on mainframes, whereas we're running against Windows and Linux environments.

The other one that's being utilized at the moment is Apache Airflow, but that's more for the developers because they like to be able to program the backend, rather than to use a frontend interface. We've been looking at how that works, but we haven't seen it to be very stable for a production environment. You can't compare Airflow with ActiveBatch, in effect.

What other advice do I have?

My advice would be to jump on it straight away. With the ease of installation, the expandability or scalability of the product across multiple servers with different agents, the ability to not only use Windows but Linux as well, and the fact that you can build complex plans that have multiple constraints, multiple types of scheduling, and multiple types of alert mechanisms, it's highly expandable. You're going to have a lot of fun with it.

It's highly flexible and easy to use. We still haven't pushed ActiveBatch to the Nth degree of what it can do. It's incredibly flexible. We're running shell scripts that run Python scripts. We've got PowerShell scripts and batch scripts. We tie into different applications. We still haven't exhausted the potential of ActiveBatch. That's what I've learned.

Predictability is something that is out of the control of ActiveBatch. We can set a job to run against a database, but it's really going to be the network or the database that will impact ActiveBatch. ActiveBatch will continue to run. There is an average run time that we look at, but if the network has high latency or the database is under load, the time will increase. ActiveBatch will continue to run as normal. The frequency of ActiveBatch failing is quite rare.

We use the ActiveBatch interface up to a certain point, and then we start looking at running Python and shell scripts. That's why we have the Linux agent. We call a shell script which runs a Python script that does some manipulation and passes that information back. And then there are a number of plans that manipulate the process. In this particular plan, the CSV file is created and it's dropped into a file location. ActiveBatch is polling for that location. It sees that file. Then a Python script runs and creates an MD5 hash. When you download a file from the internet, there's an alphanumeric checksum that indicates whether that file is valid or not. The MD5 hash is generated on the file and, when it's moved to another location, another MD5 hash is generated to determine whether the file changed when it moved from A to B. It's a validation to make sure that no data was corrupted during the movement from where the file was dropped to where the file landed. Once it has been validated, the file is moved into another location where it's uploaded into the Greenplum database, and a notification is sent to whoever was involved in that particular process. It's quite involved.
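The MD5 validation step described above can be sketched in a few lines of Python. The function names and the copy step here are illustrative stand-ins, not the reviewer's actual scripts:

```python
import hashlib
import shutil

def md5_of(path, chunk_size=1 << 20):
    """Compute the MD5 hex digest of a file, reading it in chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def move_with_validation(src, dst):
    """Copy src to dst and verify the MD5 hashes match before trusting dst."""
    before = md5_of(src)
    shutil.copy(src, dst)
    after = md5_of(dst)
    if before != after:
        raise IOError(f"checksum mismatch moving {src} -> {dst}")
    return after
```

Hashing before and after the transfer, as the plan does, is what guarantees the file that lands next to Greenplum is byte-for-byte the file that was dropped.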

If a job fails, we have set it to wait for a few minutes and to then re-run. If that fails, we can trigger another job to continue on in that process flow, if the failed job isn't critical. Some of the plans are quite complicated and have a certain amount of logic involved, but that enables us to navigate around problems that might otherwise need a developer's assistance, if it doesn't affect the overall plan process. As long as there are no constraints involved that require the next job to run, and it can move around that job and continue on, that's how we set it up.
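The retry-then-continue logic is easy to picture as a small wrapper. This is an illustration of the pattern in plain Python, not how ActiveBatch implements it internally; there it is configured through job properties and plan constraints rather than code:

```python
import time

def run_with_retry(job, retries=1, delay=0.0, critical=True):
    """Run a job callable; on failure, wait `delay` seconds and retry.
    If it still fails and the job is not critical, swallow the error
    so the rest of the plan can continue."""
    for attempt in range(retries + 1):
        try:
            return job()
        except Exception:
            if attempt < retries:
                time.sleep(delay)
            elif critical:
                raise
    return None
```

The `critical` flag mirrors the decision the reviewer describes: a failed non-critical job is routed around, while a failed critical one (with downstream constraints) stops the plan.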

We're looking forward to version 12 to see how that goes as well. We've also mirrored the database, the backend database that ActiveBatch uses. We have a failover process which was just recently installed. If one database fails, we can switch over immediately to the other database in real time.

Overall, we're really comfortable with how ActiveBatch is performing and with what it's doing.

Which deployment model are you using for this solution?

On-premises
Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
JM
Client Service Manager/Programmer at a tech vendor with 51-200 employees
Real User
Top 10
Automation for workflow triggering and stability have increased our efficiency, reduced delays

Pros and Cons

  • "One of the most valuable features is the job templates. If we need to create an FTP job, we just drag over the FTP template and fill out the requirements using the variables that ActiveBatch uses. And that makes it reusable. We can create a job once but use it for many different clients."
  • "It does have a little bit of a learning curve because it is fairly complex. You have to learn how it does things. I don't know if it's any worse than any other tool would be, just because of the nature of what it does... the learning curve is the hardest part."

What is our primary use case?

In our company we deal with a lot of data processing. Clients will send us extract files that we load into our system so that we can run calculations. And all of that is orchestrated using ActiveBatch automation. To summarize, we have software that we use to calculate values, but we need to receive the files from the client, get them to the right spot, and get them ready for processing. All of those steps are done using the automation tool.

The integrations we mainly use it with are FTP and SQL and we use a batch file or a script file to call our internal programs. It does have the ability to call PowerShell scripts and we do use some of those. We just don't have a need to use a lot of PowerShell because most of our software is designed using a different language.

How has it helped my organization?

The biggest example of the way it has improved things — and this is actually why we moved to ActiveBatch — is that most of our jobs are our processes that run overnight. That's the critical time for us because we have to load and calculate this data overnight so that the clients can have it in the morning. Our old automation tool would frequently have jobs that just failed, with no reason given. It would not track the history, so there was no way to determine if there was a pattern of failure. And it was difficult to restart jobs. That's what moved us to ActiveBatch: knowing that the job is going to run, and that if it does fail it's going to give adequate information as to why it failed. Typically, any failure in our case is data-related or due to code on our side. Rarely has it ever been an issue with ActiveBatch itself. Having that stability, especially doing our overnight processing, is the biggest benefit to our business from using ActiveBatch.

If you're a programmer, you can certainly write out scripts and design jobs that are similar to programs. But a lot of our technicians who use it do not have a programming background, and it's simply a matter of using the templates that are already provided. You do not have to have any kind of programming background to be able to use the software. 

While we've never had a whole lot of scripting done, even with our old tool, with ActiveBatch it's very easy to have junior employees log into the system. They can learn how to create jobs. It's definitely something that's accessible by more junior level employees, as well as senior level.

It also has the capability for event-driven automation to trigger workflows based on specific emails, file events, FTP file triggers, message queues, database modifications, tweets, etc. For us, the big one is a file trigger, when a file arrives on our FTP server in a certain location. We'll occasionally use a database trigger as well. And we use the scheduling capability that it has to run a job at a certain date and time. These abilities have definitely increased efficiency and reduced delays, mainly through the stability of the automation. Even the old software had that same capability, but it just wasn't as reliable. It would have odd failures that we never could quite explain, and the vendor could not either. ActiveBatch, having that stability and being able to use those triggers to automatically trigger our jobs and get them running overnight, greatly enhances our efficiency. Having a team manually do those things would take much longer.

I don't know if I could quantify the improvement in job success rate percentage, but when I joined this particular department it was right around the time that the transition was being made from the old automation to ActiveBatch. What I do know is that there were enough failures and instability in the old automation tool to trigger moving to a new tool, which is ActiveBatch. Since then, we have not had those types of issues. It's very reliable and very stable which is exactly what we need. 

I would think there has been improvement in workflow completion times, just from the stability standpoint. The way we create and use jobs in ActiveBatch is similar to what we did before. If everything worked as designed, I imagine that the old tool and ActiveBatch would probably process things in the same timeframe. It's just that ActiveBatch is much more stable. There aren't as many failures. The speed factor, for what we use it for, would probably be similar with any automation tool because we use it for such straightforward, simple tasks. Based on all the other performance indicators, I would imagine it's just as fast, if not faster than other tools.

Because we're a pretty small company, using a tool like this doesn't necessarily reduce headcount, but it allows us to not have to add headcount.

What is most valuable?

We mostly use the fairly straightforward features of the solution:

  • copying and moving files from one location to another
  • FTP processes to send and receive files 
  • database queries to update certain data elements. 

It's nothing super-complex, but these are things we would not be able to do manually without adding a lot more time to the process.

It's also very easy to restart jobs at a certain point in the event of a failure. That overall ease of use is something we didn't have with some of our former automation tools.

In addition, you can go to one screen and see every job that is currently running and what the status of that job is. You can scroll up or down and see jobs that ran in the past and jobs that are scheduled to run in the future. It makes it easier to monitor jobs. A lot of our processes run overnight. We have a team that monitors the automation jobs to make sure everything's running and to correct any failures that may happen. They are able to easily see the status of everything using ActiveBatch, without having to click on multiple jobs to see an individual status. They can get a summary of it on the summary view.

It's pretty customizable, from what I can tell. We haven't had a need to customize a lot of things because most of what we do is pretty straightforward. But you can write a PowerShell script and use some of the internal functions and features of ActiveBatch within the script. You could, theoretically, customize it pretty extensively. We just haven't had a need to do that very much.

What needs improvement?

The only thing is that it does have a little bit of a learning curve because it is fairly complex. You have to learn how it does things. I don't know if it's any worse than any other tool would be, just because of the nature of what it does. Like many things, you learn how to do something initially and then, a year or two later, you might find a better way to do it and you have to adjust how you did it before. So the learning curve is the hardest part. Even then, it isn't bad, because any tool is going to have that type of learning curve.

We're migrating to version 12 and I know they've made a lot of improvements that can help with navigating that application. I expect that would improve it.

For how long have I used the solution?

We started migrating to ActiveBatch around 2012 so we've been using it for about eight years. We are currently on version 10 with plans to upgrade soon to version 12.

What do I think about the scalability of the solution?

We've never run into any bandwidth issues, but we're also a pretty small company. The number of jobs that we run is much smaller than a larger company would run. We've talked with other companies that use ActiveBatch and they have far more jobs running concurrently than we do. They have never expressed any issue with bandwidth either. 

From my experience, it seems like it's very scalable. You can create jobs in a manner that they can be reused for multiple clients, using variables. We've never had any issue with the number of concurrent jobs running.

ActiveBatch is running around 300 jobs for us. As our company grows, we'll use it more and more. It's integral to our processing that we have built our business around. As we get more and more clients, we will be using and creating more and more jobs. Eventually, we'll probably need to add additional resources to help with that. It's as scalable as our company is.

How are customer service and technical support?

Their support is excellent. If we run into any issue, and we can't find a solution on the forums, we'll create a ticket with them and we'll get a response very quickly, especially compared to some of our other vendors. They've always been able to help out and find a solution or answer to our questions, which is great.

Which solution did I use previously and why did I switch?

Our previous solution was AutoMate BPA. 

We switched because we needed stability. We also needed something that was easy to use where we could have certain functionality, like restarting jobs from different points and reusing steps for multiple clients. Those were things we just did not have in the old tool. Having that stability and the ability to see if a job failed and having adequate log information to indicate why it failed are the biggest reasons why we moved over.

How was the initial setup?

The technician who researched solutions and found ActiveBatch was the guiding force as far as getting it installed, set up, and configured. So I don't have a lot of experience with that side of it. I've mostly been designing how jobs should work and be built. The setup seemed like it was straightforward from what I could tell. I don't think it was super-difficult.

It took us a good year or two to fully convert all of our jobs to ActiveBatch. But that was because we had a large number of jobs that were in the old tool and we had to be careful about adjusting things that are in a production environment. We spaced it out a while to get everything converted.

Our implementation strategy was mostly looking at which clients had more complex jobs and which clients had simpler jobs, so that we could start with the simpler ones as we were getting our feet wet using the tool. Then it was just scheduling out which clients would be converted when and creating the jobs to mirror what we already had in the other tool. It was nothing too complicated.

What was our ROI?

We have definitely seen ROI. It's a critical component of how we do things now. It has definitely been worth everything we've paid so far, and more.

What's my experience with pricing, setup cost, and licensing?

From what I recall, the price was fairly in line with other automation tools. I don't think it's exorbitantly expensive, relatively speaking. It's definitely been worth every penny for us. It hasn't been the case that we have thought, "Oh, it's too expensive. We need to find something else." It's worth it for us, by a large margin.

In addition to the licensing fee, I believe there is a cost for how many different agents you need to put on servers. There's some additional licensing that you can get, that we haven't had a need for, where you can add jobs that work with VMware or other third-party tools, to open up that part of it.

Which other solutions did I evaluate?

One of our other technicians was the lead on finding a new automation tool. Along with ActiveBatch, he found three or four others that he thought might have good potential. I was on a few calls where they were demoing the software, and there wasn't really anything that fit for us as well as ActiveBatch did.

What other advice do I have?

Take the time to get a good feel for how it works. That's the biggest thing. Once you have that, start creating the jobs. I would expect that people will be very satisfied with how well it runs and the flexibility that the tool has.

In terms of execution on hybrid machines or across on-prem and cloud systems, it's not applicable for us at this point. All our stuff is hosted. We're not doing anything in the cloud right now, although that may be something that's in our future. But right now, it's just used for servers that we have in our data center.

We have a team of about six or seven people who use ActiveBatch at least a little bit. But only three of us are the "power users." ActiveBatch is designed to have different roles but all three of us do a little bit of all of them. So we haven't divided it out yet in terms of having an operations person or a design person. My role leans more toward designing jobs. The technician that found ActiveBatch, his role leans more towards the operation and administrative side of getting things installed and working on upgrading the application. The third guy does a little of both.

We're pretty satisfied with everything. Their support is great. It does everything we need it to do. There isn't anything that we're having to find workarounds for.

Which deployment model are you using for this solution?

On-premises
Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
BO
Supervisor IT Operations at a insurance company with 501-1,000 employees
Real User
Top 10
Reduction of coding and development costs is substantial

Pros and Cons

  • "The nice thing about ActiveBatch is that once we have created a specific job, it can easily be replicated to another job; then minimal changes will have to be made. This makes things nice. Reduction of coding is substantial in a lot of cases. The replication of one job to another is just doing a few minor tweaks and rolling it into production. This decreases our development costs substantially."
  • "There is this back and forth, where ActiveBatch says, "Your Oracle people should be dealing with this," and Oracle people say, "No, we don't know anything about ActiveBatch." Then, it all falls back on me as to what happens. Nobody is taking responsibility. This is the biggest failing for ActiveBatch."

What is our primary use case?

ActiveBatch controls just about everything in our organization. We do server monitoring with our EDI feeds being inbound and outbound. We do Oracle processing with it. 

It is very comprehensive for what we do and a central point of everything in our organization at this point.

How has it helped my organization?

We have some things coded out to execute processes on systems internal to us, but nothing out of the cloud. We have web-based products that are internal and made available to our internal users. We have some external users who use these web-based products. We control those from within ActiveBatch, where we do remote logins and can control some of the processes. This is for internal and external clients' availability.

It reduces the load and manual efforts on everybody's parts. With a thousand jobs running on a daily basis, it allows our programming staff to focus on other things rather than deal with manual programming efforts, taking quite a load off our programming staff. 

The nice thing about ActiveBatch is that once we have created a specific job, it can easily be replicated to another job; then minimal changes have to be made. Reduction of coding is substantial in a lot of cases. The replication of one job to another is just doing a few minor tweaks and rolling it into production. This decreases our development costs substantially.

Automated integrations have helped us build end-to-end workflows. When we send an ACH to the bank, it used to be that a report would be generated and then somebody had to call the bank and provide the bank with the totals. We are calculating all that now within ActiveBatch, then sending an automated email to the bank informing them of what is contained within the actual ACH. This has eliminated the need for several people in accounting or finance to have to deal with this work. It runs flawlessly. Though it took a while to develop, it's a good case example.

We do have FTP file triggers and file triggers internally. We don't have to wait for somebody to say, "Hey, we've posted a file. Can you process it?"  The nice thing about ActiveBatch is we can specifically look for triggers, pick stuff up, and process it the minute it hits. So, it takes that step out of the equation of using internal or external people, and asking, "Something's been posted. Can you take care of it?" Instead, it's done and out of the way. This reduces delays.
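The file-trigger pattern the reviewer describes boils down to this: instead of waiting for somebody to say a file was posted, detect new files in the drop folder and process them the moment they land. ActiveBatch provides this natively with its file and FTP triggers; the standalone sketch below only illustrates the concept, and all names are invented.

```python
import os

def poll_once(drop_dir, process, seen):
    """One polling pass: hand each not-yet-seen file to `process` exactly once."""
    handled = []
    for name in sorted(os.listdir(drop_dir)):
        path = os.path.join(drop_dir, name)
        if os.path.isfile(path) and path not in seen:
            seen.add(path)
            process(path)        # e.g. load the posted EDI file
            handled.append(name)
    return handled
```

A real trigger would run this in a loop with a sleep (or use OS file-change notifications) and add safeguards for files that are still being written.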

What is most valuable?

I find all the features valuable. 

A lot of our server monitoring has become more critical. We monitor CPU loads and disk space requirements. Those are becoming more helpful to us from an automation standpoint, where it makes business decisions on returns. It really helps out the entire IT department and the entire company, as it takes a lot of the manual effort away from a lot of people.

It takes a lot of the manual effort off a lot of people from having to continually look at information. We make business rules within jobs. If something is wrong, it will get somebody out of bed in the middle of the night and let them know there is a problem. Rather than people coming in the morning, we have people who get up in the middle of the night and start working. Because when there's a server issue, that just creates a whole problem. This eliminates a lot of that since we catch these problems. We're taking a proactive approach to our internal structures.
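The "business rules within jobs" idea reduces to threshold checks that decide when to page somebody. A minimal sketch with invented metric names and thresholds follows; ActiveBatch would express these as alerts and completion rules on the job itself, not as external code.

```python
def evaluate(metrics, rules):
    """Return the names of alerts whose threshold was exceeded."""
    return [name for name, (key, limit) in rules.items()
            if metrics.get(key, 0) > limit]

# Hypothetical rules: page the on-call person when a server metric crosses a limit.
RULES = {
    "page on-call: CPU":  ("cpu_pct", 90),
    "page on-call: disk": ("disk_used_pct", 85),
}
```

Firing only on crossed thresholds is what turns "people come in the morning and look" into "somebody is woken up the moment there is a problem."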

The solution provides us with a single pane of glass for end-to-end visibility of workflows. The nice thing about ActiveBatch is you can see at a glance what is running and what's going to run (future runs). It gives us a good snapshot of everything that's going on, which is something that was lacking for years. With that single pane, we can see exactly everything that will happen at a glance.

The console is extremely flexible. We have incorporated things into ActiveBatch that a lot of people never thought possible, e.g., a lot of the server monitoring stuff, and we have over 1,000 jobs that run out of it on a nightly basis. From an automation standpoint, it is really reducing the need for so much manual effort, which creates its own problems: with a thousand jobs, somebody has to look to determine if there are any issues. So, we have business rules put in place in all our jobs to try to make it easier for everybody. We do banking information, EDIs, specific automation for other applications, service monitoring, and reporting. A lot of the stuff is called from other systems and imported into ActiveBatch, then manipulated. It's so comprehensive.

What needs improvement?

It may require some weird programming of things. However, most of the time, we can solve the problem and set solutions in place, then it's carried forward to other jobs. 

I would really like to get into Active Directory stuff with it, but that creates a problem in our security audits, etc. We have to tread carefully down that road.

Moving to version 12 will be a real challenge for us because we have to put in a whole new server, as we are on one now that is obsolete. Plus, when we build the whole thing out, we will need to: 

  • Build out a test environment. 
  • Go through every single one of the jobs, then test out everything on maneuvers.

We will have to engage ActiveBatch in a contractual relationship to help us with this because it will be a huge project.

For how long have I used the solution?

Eight years.

What do I think about the stability of the solution?

I have a great impression of the stability. We just keep adding to it, and this thing never fails. It just runs. Comparing that to our back-end systems where there are always problems, ActiveBatch just continually runs. That's what I've told our executive team. I said, "The only time there's a failure in this company is when your back-end systems screw up."

What do I think about the scalability of the solution?

We have limited users in this product. We have a couple of developers (EDI specialists) who look at some of this stuff. We probably have several hundred people who end up with the end result (report distribution) of ActiveBatch via email. We distribute mainly via phones.

How are customer service and technical support?

I have emailed ActiveBatch about a couple of things. I have always had great experiences with the technical support guys. Some of them just go above and beyond their call of duty. They are fabulous to work with.

Which solution did I use previously and why did I switch?

Everything was a manual effort before ActiveBatch.

How was the initial setup?

There are so many different components that we had to integrate with Oracle. There was a lot of back-end work which had to be done when the server was originally built out. Missing those steps would have ended up creating some problems. We had to go through it a couple of times before we got everything straightened out. With the Oracle integration, there are a lot of components that have to be installed correctly. Even when migrating to version 10, we had some issues with that too. There are a lot of internal components with Oracle.

This is sort of where the ActiveBatch system falls down just a bit. While it's easy to say, "Your Oracle people need to deal with this," our Oracle people know nothing about ActiveBatch. There is this back and forth, where ActiveBatch says, "Your Oracle people should be dealing with this," and the Oracle people say, "No, we don't know anything about ActiveBatch." Then, it all falls back on me as to what happens. Nobody is taking responsibility. This is the biggest failing for ActiveBatch. It would be nice if Advanced Systems Concepts, Inc. could just say, "We'll help you with this entire process."

What about the implementation team?

We contracted with ActiveBatch to help move us from version 9 to 10. It took us two or three times to get it right because there were components that ActiveBatch wasn't clear on about needing to be installed. They finally came back and helped us on this because we had an engagement contract with them. However, it took a couple of times to do this. The problem in a production environment is you don't have a lot of leeway for downtime. The jobs that we have, they run 24/7/365. Trying to find an open slot to do migrations is pretty difficult.

What was our ROI?

With the automation efforts that we have done over the years, we have gotten our money back. We save thousands of man-hours annually.

The use of the solution resulted in an improved job success rate percentage of 90 percent. It reduces manual efforts. Once you take manual efforts out of the equation and put business rules in, we find the failures that occur are usually external to the company, not internal anymore. Job failures during the day are a handful out of a thousand jobs, and usually an external issue. It is external vendors not following their rules, though we have business rules and alerts set up to inform them. We send emails back to external clients, and say, "Something was supposed to be posted, and it wasn't posted." In that sense, it has eliminated a lot of those manual effort steps as well. It is all self-contained in ActiveBatch.

Use of the solution has resulted in a 60 to 70 percent improvement in workflow completion times. 

What's my experience with pricing, setup cost, and licensing?

I don't think we've ever had a problem with the pricing or licensing. Even the maintenance fees are very much in line. They are not excessive. I think for the support that you get, you get a good value for your money. It's the best value on the market. I've worked with a lot of products in my career, and this is by far one of the best products I've ever seen. You're getting your value.

Which other solutions did I evaluate?

We did evaluate other products before purchasing.

We asked for a proof of concept on this solution that ActiveBatch provided. We looked at the scalability, integration, ease of use, and constructing automated jobs. Those were the driving forces in the selection of these products. Their job libraries are so nice. You don't have to be a rocket scientist to figure some of this stuff out. 

What other advice do I have?

It is a great product. I can't speak enough about it. We haven't found anything that we can't overcome in ActiveBatch. When they put this product out, they thought it out and put a lot of nice stuff into it. There are features we haven't touched yet, even though we have been on it for so many years.

We have never really uncovered anything that's a problem. It is a well-thought-out product and one of the best that I've ever worked with. I would rate this product as a 10 out of 10. I really like this product.

Think about what you want to automate, then put a process flow in place. For somebody who wants to start this, take one job and put a process flow in place, then develop it within the system. Once you get one product in place, it is pretty easy to replicate it. Initially, to get started on some of this, it can be a horrifying effort. It looks overwhelming, but once you get going on this stuff, get one job in place, and figure out what to do, then it's pretty easy to replicate across the board.

All our back-end systems are Oracle driven from an integration standpoint. The Oracle interfaces are very nice, which helps us a lot because we can do a lot of coding and take care of a lot of the back-end Oracle stuff. However, we don't use external things, like Amazon, as that is against our security policies.

We just started looking at email triggers, but have not implemented any at this point.

Which deployment model are you using for this solution?

On-premises
Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Peter MacDonald
Senior IT Architect at a pharma/biotech company with 5,001-10,000 employees
Real User
Top 5
Makes the environmental passback of an SDLC process seamless

Pros and Cons

  • "What ActiveBatch allows you to do is develop a more efficient process. It gave me visibility into all my jobs so I could choose which jobs to run in parallel. This is much easier than when I have to try to do it through cron for Windows XP, where you really can't do things in parallel and know what is going on."
  • "I can't get the cleaning up of logs to work consistently. Right now, we are not setup correctly, and maybe it is something that I have not effectively communicated to them."

What is our primary use case?

We use it for a variety of different tasks, most of which are related to data management, such as scheduling, processes related to updating business intelligence reporting, or general data management work. It's also used for some low-level file transfers and mergers in some cases.

We use the solution for execution on hybrid machines, across on-prem and cloud systems. We have code that is executed in a cloud environment and on various Windows and Unix servers.

We are on version 11, moving to version 12 later this year.

How has it helped my organization?

We found that the solution created simplicity for us with our workflows and process automation. It gives me the folder and job name, then I'm done. I don't have to remember a plethora of things and that makes life a lot easier. Once you get it setup and have it configured, you don't have to remember it anymore. It allows you to focus on doing the right thing. 

I find it super flexible. Every time that I ask if the solution can do something, they say, "Yes." I have not been able to come up with a challenge so far that they have not been able to do.

It definitely allows the ability to develop the workflow. It has reduced the amount of coding. Some groups don't pay attention to that, as they are very much an old school group. I am trying to get people to do things differently, but that's just changing habits.

One process may at some point in time run across five different servers in parallel before coming back to a final point of finishing. They built that in, where it says, "Every time we do certain things, execute this package." All I have to do is drag that package into the master package and master plan. It's very modular.

All our workflows are efficient. This solution allows for tighter integrations across environments where you don't necessarily want developers cross-pollinating each other's code. It's more or less about securing code. I have people who are experts in PowerCenter. They don't have any idea what they're doing in other solutions. You don't want them accidentally editing the wrong code. Therefore, it helps keep related things isolated, but allows them to communicate.

For code maintenance, it's really simplified it. For things that are coded, like day-to-day Unix or Windows level batch type jobs, this means I don't have to rewrite the code and I can easily migrate it between environments. I can do this by leveraging variables and naming practices. I can basically develop code, do development, migrate it through our four environments, and not make changes to the code at all. It makes the environmental passback of an SDLC process seamless.
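The migrate-without-edits approach can be pictured as keeping one command template constant and swapping only an environment-scoped variable set, which is roughly what the reviewer does with ActiveBatch variables and naming practices. A hypothetical sketch (none of these hostnames or paths come from the review):

```python
# Per-environment variable sets; the job body never changes between them.
ENVIRONMENTS = {
    "dev":  {"db_host": "db-dev.internal",  "data_root": "/data/dev"},
    "prod": {"db_host": "db-prod.internal", "data_root": "/data/prod"},
}

def job_command(env):
    """Same command template in every environment; only the variables change."""
    v = ENVIRONMENTS[env]
    return f"load_warehouse --host {v['db_host']} --in {v['data_root']}/daily"
```

Promotion from dev through test to prod then means pointing the same job at a different variable set, so the code itself is never edited during the passback.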

What is most valuable?

One of the great features that they have implemented is called Job Steps. It is a much more mechanical way to control processes. It allows us to connect to external providers. For example, we were a big Informatica shop. Once you have the initial baseline set up, the development time to create a job that can execute a task or workflow is about a minute: I create a new job in Informatica, create an equivalent job to run the batch, and about a minute later it is done. It improves development time to market and getting things done.

What ActiveBatch allows you to do is develop a more efficient process. It gave me visibility into all my jobs so I could choose which jobs to run in parallel. This is much easier than when I have to try to do it through cron for Windows XP, where you really can't do things in parallel and know what is going on.

Improvement in workflow completion times has to do with optimization. The ability to do true parallel submittal of jobs, and then to pay attention to the status of those jobs simultaneously to know when they are done, is what creates the optimization.

The solution provides us with a single pane of glass for end-to-end visibility of workflows. It has a very broad, deep-scale vision of what's going on. You can go down to an individual job level or see across the whole system and different groups. Because we roll out by project area, each project has its own root group folder that it uses to manage its routines. We don't have a master operational group yet that is managing it. Therefore, each group does its own operational support for it. However, if I look at things in it, there are a lot of shared things that we have put in there. If a machine is taking too long, I can go focus on that. E.g., why is it taking so long? Then, I can let people know that we have a particular routine that is running poorly.

What needs improvement?

I can't get the cleaning up of logs to work consistently. Right now, we are not setup correctly, and maybe it is something that I have not effectively communicated to them. This has been my challenge.

For how long have I used the solution?

I have been using this solution since 2007: 13 years.

What do I think about the stability of the solution?

The stability is rock solid. The four failures that we have had are related to things we've done to our server or environment. Mostly, they are self-inflicted failures. There was a bit of cross-pollination between what we were doing with security procedures, where we experienced an interruption; ActiveBatch hadn't updated itself directly to handle that situation.

We use the solution’s API extensibility. It has helped with the stability. It allows us to know when a job fails. If there's a problem connecting to a server or a job fails because something has gone wrong with a server, then we know very quickly. 

Four people are needed for development and maintenance of this solution. I am the primary admin but I don't support the solution on a day-to-day basis. I have a secondary gentleman, who like me, is also an admin. There are two others who primarily deal with the database. There's not a lot to it, except for the log stuff. When it comes to individual job failures, that's not our domain. That's the domain of each group maintaining their space. We also manage security issues.

What do I think about the scalability of the solution?

We are not the biggest shop out there. In our production environment, there are about 10 groups who are doing work on a daily basis. Our user base is primarily developers and a few technical business analysts. There are approximately 50 to 100 users.

We have administrators, operations people, and developers. Administrators have full control across all environments. Operators have the ability to execute and see things across many of the environments. Developers can only work in nonproduction environments.

For what we are doing on a relatively modest machine, ActiveBatch hasn't had any issues.

I haven't had to scale it yet. It has been a simple server for 13 to 14 years now. I haven't had to go to multicluster. We have a failover setup. However, we don't use that for parallel processing. It is more just for failing. 

How are customer service and technical support?

I'm on a first name basis with many of their engineers and developers. I have passed on some challenging things since my history goes so far back. They have always been very responsive to answering questions and providing the right knowledge base article. They are open to suggestions and very interactive.

Which solution did I use previously and why did I switch?

We first implemented this a number of years ago. It took our processes from running several hours overnight, not knowing if those jobs had failed until we checked in the morning, to having an overnight team watching jobs for us in ActiveBatch. Though, sometimes they would take an hour or two before they realized something had failed. Now, we have it so that team is responding within minutes. The alerting that texts and emails you has improved our ability to respond in a timely fashion.

How was the initial setup?

We installed versions 5, 6, 8, 9, and 11. Upgrades have always been seamless. It has been able to recognize code from previous versions, even 10 years ago, and update it.

Every time we do a redeployment, we go through the same process. We develop, upgrade the dev environment, and have people check to make sure their jobs still work. We then take that environment and migrate it to our test environment, where we totally check it. That usually goes faster because we are just moving the database forward, checking to make sure everything works, and then moving on to the next stage. Typically, we do a new server for production. We don't upgrade in place. I've done the upgrade in place without a problem in the dev environment, and it does go faster. I find it very clean, and I've not had a problem. Most of the issues are related to consumers of the tool.

We have only used it in one scenario. It took us a bit of time to get it set up, as we have two halves to our processes. One is the data management process that happens multiple times a day. When that is completed, we want to see reporting based on those processes. What we have is event-based execution. The viewable data sets are in different folders, so these two groups don't actually see each other's work, but they are able to read the data they need and have scheduled events.

What about the implementation team?

I installed it. To install it and get the environment up and running, it takes less than a day. Once my database is up and I have access to install the software, it takes an hour or two for me to get it up and running.

What was our ROI?

Over the years that I have used this, it has probably saved us several hundred hours of development time for other teams and my own. 

The solution has absolutely resulted in an improvement in job success rate percentage. We can see what the problems are and isolate them sooner. We are able to catch these problems and alert people.

It allows for lower operational overhead.

What's my experience with pricing, setup cost, and licensing?

I buy features when I have need of them.

What other advice do I have?

Right now, we only use the Informatica AI and Informatica PowerCenter. We are looking at a ServiceNow integration. Some of the other ones, like Azure, we don't need right now as we continue to grow it organically. It's more as teams migrate technologies. We want to have an opportunity to have a conversation with them, and say, "Hey, come in and do it this way."

We are not using all the features yet. E.g., we don't use any load balancing variables.

I would rate the solution as an eight to nine (out of 10).

Which deployment model are you using for this solution?

Public Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Amazon Web Services (AWS)
Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
MS
Data Warehouse Operations Analyst at a leisure / travel company with 1,001-5,000 employees
Real User
Top 20
Map View feature makes it easy to see what the dependencies are; we get a visual, top-down look at what flows are running

Pros and Cons

  • "One of the valuable features is the ability to trigger workflows, one after another, based on success, without having to worry about overlapping workflows. The ability to integrate our BI, analytics, and our data quality jobs is also valuable."
  • "The thing I've noticed the most is the Help function. It's very difficult, at times, to find examples of how to do something. The Help function will explain what the tool does, but we're not a Windows shop at the data warehouse. Our data warehouse jobs actually run on Linux servers. Finding things for Linux-based solutions is not as easy as it is for Windows-based solutions. I would like to see more examples, and more non-Windows examples as well, in the Help."

What is our primary use case?

We use ActiveBatch to run the data warehouse production batch schedule, which is 24/7. We run, on average, about 200 distinct workflows each day to update the warehouse. And once the warehouse tables are loaded, we trigger our business intelligence reports and our analytics reports. We also use ActiveBatch to run a software tool called iCEDQ for data quality, as well as some Alteryx jobs.

Our production servers are in a co-location, and the solution is deployed onsite there.

How has it helped my organization?

Before we had ActiveBatch, we used the Informatica Workflow Scheduler, and we would have to start a downstream workflow but have it wait for the completion of the first one via a trigger file. So "Workflow B" would be waiting for a control file that said "Workflow A" is done. Sometimes we would create a control file by mistake, which would throw off the next day's run, and we'd have to do manual reruns. With ActiveBatch, it's very easy to say, "Workflow A is done, run B," and onward: "Run C, run D," as soon as they're done. You don't need to worry about whether a control file was created, or how long the job is going to wait. It gives you much simpler and easier-to-understand control of the flow of jobs as they run.
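The difference between the two schemes comes down to chaining on reported success rather than on a marker file. A rough sketch of the success-chaining side (illustrative only; ActiveBatch wires this up with triggers and dependencies in its GUI, not Python):

```python
def run_chain(jobs):
    """Run (name, job) pairs in order; stop at the first failure."""
    completed = []
    for name, job in jobs:
        if not job():                  # each job returns True on success
            return completed, name     # downstream jobs never start
        completed.append(name)
    return completed, None

# Toy jobs standing in for Workflow A, B, C, D.
ok, fail = (lambda: True), (lambda: False)
done, failed_at = run_chain([("A", ok), ("B", ok), ("C", fail), ("D", ok)])
# done == ["A", "B"]; failed_at == "C"; "D" is never attempted
```

Because each step starts only on the previous step's actual success, there is no stale control file that can throw off the next day's run.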

Using ActiveBatch hasn't really reduced our code base because we would be developing these workflows in Informatica if we weren't using ActiveBatch. But the scheduling and integration into the batch schedule for something new are much simpler and save us a little bit of time, now that we have everything developed, for the most part. We may go a month without adding anything to our schedule and we may go four or five months without adding anything to the schedule, but it gives us an easier understanding of the flow of the data and helps us make sure dependencies are met in a more straightforward fashion than through the Informatica scheduler.

ActiveBatch hasn't really improved our job success rate percentage. If a job fails, we still get our failure messages from Informatica, and in some cases from ActiveBatch. The biggest issue we were having was the timing of all of the downstream applications from the warehouse, and ActiveBatch has greatly improved that. That has been the biggest benefit.

And it has saved man-hours, although it has not reduced headcount. It saved man-hours in those situations where we had issues and our old scheduling solution would break down because of them. Now we don't have to worry about how to start the downstream applications based on the warehouse. I would estimate it saves us about 20 hours per month.

What is most valuable?

One of the valuable features is the ability to trigger workflows, one after another, based on success, without having to worry about overlapping workflows. 

The ability to integrate our BI, analytics, and our data quality jobs is also valuable. We used to have everything set up just based on time: Run the data warehouse until five in the morning, run BI at 5:30 in the morning. There were times that we missed the deadline so that when the BI jobs would run, the data would be incomplete, or we had a big gap in time where we were missing out on starting early. It has really saved us a lot of man-hours compared to when we would have a data issue and we would have to manually restart all of the downstream jobs, after the warehouse.

ActiveBatch also provides us with a single pane of glass for end-to-end visibility of workflows. That simplifies the process when we check to see if things have run or how they're running. The Map View feature makes it easy to see what the dependencies are. It's helpful to have a visual, top-down look, from start to finish, at what flows are running when you need to look into that.

In terms of the unlimited bandwidth, as far as I can tell it has handled all of our volume without any issues whatsoever. For the analytics and business intelligence jobs, I don't keep track of how many they have running each day; I can only really check the warehouse. But it has handled the total volume of our needs without any issue.

We use event triggers and file events, and one job we have uses email triggers. Especially for the business side, if they have a list of call center people, a list of promotions, or some costing information that they need loaded into the warehouse, it allows us to say to them, "We don't need a dummy file and we don't need a blank file. Whenever you have a file ready to go, just put it on a shared drive and the job will automatically pick it up." So it simplifies our interactions with the business and allows them more flexibility to get their work done.

The triggering doesn't so much reduce delays as alleviate the need either to have the business create a dummy file or to code the job so that, if it doesn't find a file to run on a given day, it won't error out or have to send an informational message. Whether we get one file a day, five files in a day, or one file every six months, the job just runs when the business has the data available, without our having to worry about it.
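ActiveBatch provides this as a built-in file trigger; the pattern it replaces hand-rolled code for can be sketched like this. The function and file names are hypothetical, purely for illustration:

```python
import os
import tempfile

def new_files(watch_dir, seen):
    """Return paths of files in watch_dir not processed yet; update seen.

    Each poll picks up a new file exactly once, so the job fires
    whether one file arrives per day or one every six months -- no
    dummy or blank files needed from the business.
    """
    found = []
    for entry in os.scandir(watch_dir):
        if entry.is_file() and entry.name not in seen:
            seen.add(entry.name)
            found.append(entry.path)
    return sorted(found)

# Usage: the business drops a file on the shared drive; the next
# poll picks it up, and subsequent polls ignore it.
share = tempfile.mkdtemp()
open(os.path.join(share, "promotions.csv"), "w").close()
seen = set()
batch = new_files(share, seen)
```

A production file trigger would also need to handle partially written files (for example, by watching for a rename or a stable file size), which the scheduler's built-in trigger takes care of.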

What needs improvement?

We have also had inconsistent performance with the Oracle trigger we use. It had to do with the timing of the Oracle logs: the trigger function wouldn't fire because Oracle had a lock on the archive log file. We have had a couple of cases where we had to remove the Oracle trigger from our schedule, but we still use it in some cases.

The thing I've noticed the most is the Help function. It's very difficult, at times, to find examples of how to do something. The Help function will explain what the tool does, but we're not a Windows shop at the data warehouse. Our data warehouse jobs actually run on Linux servers. Finding things for Linux-based solutions is not as easy as it is for Windows-based solutions. I would like to see more examples, and more non-Windows examples as well, in the Help.

For how long have I used the solution?

I have been using ActiveBatch for almost five years.   

What do I think about the stability of the solution?

Stability has been excellent. In the four or five years, I can't think of a time when the scheduler went down. We use two agents for production, and a scheduler and two agents for test, and I can think of maybe three times that we had to reboot one of the agents. But I can't think of a time when the scheduler itself actually went down.

What do I think about the scalability of the solution?

It seems very scalable. We use a very small portion of the functionality and the available types of jobs. Of the job steps in the library, we only use about 2 or 3 percent of them. We bought it for a specific purpose and it served our purpose quite well.

How are customer service and technical support?

We have used the technical support. On a scale of one to 10, I'd give the Knowledge Base a six or seven. I would give the actual support folks an eight-and-a-half or nine.

It just depends on who you get to respond to your question or to your issue. We've had folks that have been excellent and have pinpointed the problem right away and given us a clear solution to our problems. And there have been times when we have gotten someone who doesn't quite understand the product and it feels like we're providing them more answers than they're providing us. That's been rare but I can think of at least one case where we had to say, "Can you put somebody else on or ask for some help on our question?" And they eventually did, but it was kind of frustrating. But for the most part, it's been fine.

Which solution did I use previously and why did I switch?

Ninety-five percent of the warehouse jobs we run that were Informatica jobs have been replaced with ActiveBatch. We have a couple of jobs with some specialized logic that we haven't yet taken the time to figure out how to do in ActiveBatch. Of the 200 workflows we run a day, about 190 run through ActiveBatch.

What was our ROI?

We have seen ROI with the solution. It has simplified the warehouse job flow, our analytics workflow, as well as our business intelligence and data quality workflows. I don't know the exact cost per year of the solution, but it has simplified and made things much easier to understand in terms of dependencies among our data flows.

What other advice do I have?

The breakthrough for us was when we were able to take completely different software tools and integrate them into one long flow of data. We have our Informatica jobs, which then trigger some PL/SQL jobs in ActiveBatch, but they also trigger Alteryx jobs, which is its own software tool. It can also integrate and execute iCEDQ, which is its own software, as well as Tableau. The ability to trigger those jobs from completely different software tools, in one flow, has saved us a lot of time and a lot of headaches.

Don't be afraid to dig in and try things. I said one of the weaknesses is the Help, but the Help function has helped me figure a few things out. We have jobs that update the pager email to go from an offsite pager to an onsite pager and back again. So don't be afraid to take the time to try to figure something different out. There are some useful things in the Help.

I'm the primary person using ActiveBatch in the warehouse. A month ago, we had a lot more people using it, but in the travel industry we've already had some severe layoffs. There were 10 people using ActiveBatch. They were all data analysts or data quality analysts, and I am the data warehouse developer. There were also business intelligence developers.

Which deployment model are you using for this solution?

On-premises
Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.