ActiveBatch Workload Automation Valuable Features

Shaun Guthrie
Senior Operations Administrator at Illinois Mutual
A lot of the built-in processes are among the most valuable features. When we were just starting out, although I went through the ActiveBatch Boot Camp — and a couple of other people here went through it as well — it was a little overwhelming, not having used the product. We found it easier once we were using the product and then doing refreshers on the Boot Camp or the deep dives that ActiveBatch provides. Even the Knowledge Base articles allow us to grow and let us know what we can use in our environment.

We're able to use Plans, rather than seeing individual jobs within all four of our environments. Seeing all of these jobs individually would make the workflows hard to decipher, whereas everything is nested nicely within each Plan for us. It makes it very easy to read the next day and to look at how each cycle ran. It also helps with troubleshooting if there's an issue with one of them at night.

As far as centralization goes, it's nice because we can see all the processes that are tied to a larger process. The commissions, FTP processing, the reporting, the file moves to the business users — all that is right there. It's very easy to read, easy to tie together visually, and easy to see where each step fits into the bigger picture.

Other important features for us are file triggers, file constraints, and job constraints, because of the sequential nature of the batch process. The file triggers have made our processes more efficient and reduced delays. The savings might be minimal at this point, but without them it would still be a manual process: our second-shift operator would have to wait each night for the mainframe cycle to finish and then manually trigger certain processes within each of our ActiveBatch cycles.

It's also a very flexible product. We're just over a year in, and we're still getting our feet wet and realizing its potential.
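ActiveBatch's file triggers are a built-in product feature; to make the idea concrete, here is a minimal, hypothetical Python sketch of what such a trigger replaces — the operator's manual wait for a cycle-completion file. The file name and job callables are illustrative, not anything from the product.

```python
import time
from pathlib import Path

def wait_for_trigger_file(path, poll_seconds=30, timeout_seconds=None):
    """Block until the trigger file appears, then return its path.

    A crude stand-in for a scheduler's file trigger: the second-shift
    operator's manual wait, expressed as a polling loop.
    """
    trigger = Path(path)
    waited = 0
    while not trigger.exists():
        if timeout_seconds is not None and waited >= timeout_seconds:
            raise TimeoutError(f"no trigger file at {trigger} after {waited}s")
        time.sleep(poll_seconds)
        waited += poll_seconds
    return trigger

def run_cycle_when_ready(trigger_path, downstream_jobs, **wait_kwargs):
    """Fire each downstream job once the upstream cycle signals completion."""
    wait_for_trigger_file(trigger_path, **wait_kwargs)
    return [job() for job in downstream_jobs]
```

A product-grade trigger would use filesystem events rather than polling, but the contract is the same: downstream work starts the moment the upstream artifact exists, with no one waiting on shift.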
One thing I am anxious to roll out — I've tried to push some business end-user meetings, but it's still a little early in the process as everyone has been so busy with the overall modernization effort — is the Self-Service Portal. It will allow business users to run processes on demand, rather than putting in a ticket to have IT do it for them. It would also allow other IT users to see any processes they may be testing in the ActiveBatch environment.

In addition, the Jobs Library has been a tremendous asset. For the most part, that's what we use. There are some outliers, but we pretty much integrate those Jobs Library steps throughout the process, whether it's REST calls, FTP processes, or file copies and moves. We do use some process job steps to call external batch processing through external scripts, but most of what we're using is built in, at this point. That has helped us build end-to-end workflows. View full review »
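The Jobs Library steps the reviewer mentions (REST calls, FTP, file moves) are prebuilt in the product. As a rough illustration of what a REST-call step does under the hood, here is a hedged stdlib-only sketch; the URL, payload, and success convention are assumptions for the example, not ActiveBatch's actual implementation.

```python
import json
import urllib.request

def build_rest_step(url, payload, method="POST", timeout=30):
    """Assemble the HTTP request a REST-call job step would issue."""
    data = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        url,
        data=data,
        method=method,
        headers={"Content-Type": "application/json"},
    )

def execute_rest_step(request, opener=urllib.request.urlopen):
    """Run the step; treat any 2xx response as step success."""
    with opener(request) as resp:
        return 200 <= resp.status < 300, resp.status
```

The `opener` parameter exists so the step can be exercised without a live endpoint; a scheduler would wire the boolean result into downstream success/failure constraints.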
Client Service Manager/Programmer at a tech vendor with 51-200 employees
We mostly use the fairly straightforward features of the solution:
* copying and moving files from one location to another
* FTP processes to send and receive files
* database queries to update certain data elements.

It's nothing super-complex, but these are things we would not be able to do manually without adding a lot more time to the process. It's also very easy to restart jobs at a certain point, in the event of a failure. That overall ease of use is something we didn't have with some of our former automation tools.

In addition, you can go to one screen and see every job that is currently running and what its status is. You can scroll up or down and see jobs that ran in the past and jobs that are scheduled to run in the future. It makes it easier to monitor jobs. A lot of our processes run overnight. We have a team that monitors the automation jobs to make sure everything's running and to correct any failures that may happen. They are able to easily see the status of everything using ActiveBatch, without having to click on multiple jobs to see each individual status. They can get a summary on the summary view.

It's pretty customizable, from what I can tell. We haven't had a need to customize a lot of things because most of what we do is pretty straightforward. But you can script out a PowerShell script and use some of the internal functions and features of ActiveBatch within the script. You could, theoretically, customize it pretty extensively. We just haven't had a need to do that very much. View full review »
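The restart-at-a-step behavior this reviewer values comes down to checkpointing which steps already completed. Here is a minimal, hypothetical Python sketch of that idea, with made-up step names (copy, FTP, query) standing in for the reviewer's real job steps; it is not how ActiveBatch itself is implemented.

```python
def run_steps(steps, completed=None):
    """Run named steps in order, skipping any already-completed ones.

    `steps` is a list of (name, zero-arg callable). `completed` is the
    checkpoint a scheduler would persist, so a failed job can be
    restarted at the failing step instead of from the top.
    """
    completed = list(completed or [])
    for name, action in steps:
        if name in completed:
            continue  # already ran in a previous attempt
        try:
            action()
        except Exception:
            # Hand the checkpoint back so the caller can retry from here.
            return False, completed
        completed.append(name)
    return True, completed
```

On retry, the caller passes the returned checkpoint back in, so an expensive file copy is not repeated just because a later FTP step failed.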
Bob Olson
Supervisor IT Operations at an insurance company with 501-1,000 employees
I find all the features valuable. A lot of our server monitoring has become more critical. We monitor CPU loads and disk space requirements. Those are becoming more helpful to us from an automation standpoint, where the solution makes business decisions based on what the monitoring returns. It really helps out the entire IT department and the entire company, as it takes a lot of the manual effort away from a lot of people who would otherwise have to continually look at information.

We build business rules into jobs. If something is wrong, it will get somebody out of bed in the middle of the night and let them know there is a problem. Rather than people coming in in the morning, we have people who get up in the middle of the night and start working, because a server issue left alone just creates a whole problem. This eliminates a lot of that, since we catch these problems early. We're taking a proactive approach to our internal structures.

The solution provides us with a single pane of glass for end-to-end visibility of workflows. The nice thing about ActiveBatch is you can see at a glance what is running and what's going to run (future runs). It gives us a good snapshot of everything that's going on, which is something that was lacking for years. In that single pane, we can see exactly what will happen at a glance.

The console is extremely flexible. We have incorporated things into ActiveBatch that a lot of people never thought possible, e.g., a lot of the server monitoring, and we have over 1,000 jobs that run out of it on a nightly basis. From an automation standpoint, it is really reducing the need for so much manual effort, which creates its own challenge: with a thousand jobs, somebody has to look to determine if there are any issues. So we have business rules in place in all our jobs, which try to make it easier for everybody.
We handle banking information, EDIs, specific automation for other applications, service monitoring, and reporting. A lot of the data is called from other systems and imported into ActiveBatch, then manipulated. It's that comprehensive. View full review »
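The disk-space monitoring with wake-someone-up alerting described above can be sketched in a few lines of stdlib Python. This is a conceptual illustration only; the threshold, paths, and `alert` callback are assumptions, and a real deployment would route the alert through a paging system rather than a Python callable.

```python
import shutil

def check_disk(path, min_free_fraction=0.10):
    """Return (ok, free_fraction) for one mount point."""
    usage = shutil.disk_usage(path)
    free_fraction = usage.free / usage.total
    return free_fraction >= min_free_fraction, free_fraction

def monitor(paths, alert, min_free_fraction=0.10):
    """Check each path and page the on-call (via `alert`) on any breach.

    Returns the list of paths that breached the threshold, so a
    scheduler job can also fail (and escalate) when the list is non-empty.
    """
    breaches = []
    for path in paths:
        ok, free = check_disk(path, min_free_fraction)
        if not ok:
            alert(f"{path}: only {free:.0%} free")
            breaches.append(path)
    return breaches
```

Run nightly (or continuously) as a scheduled job, this is the "business rule in the job" pattern: the check and the escalation live together, instead of someone eyeballing a thousand jobs.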
Learn what your peers think about ActiveBatch Workload Automation. Get advice and tips from experienced pros sharing their opinions. Updated: August 2020.
442,517 professionals have used our research since 2012.
Peter MacDonald
Senior IT Architect at a pharma/biotech company with 5,001-10,000 employees
One of the great features they have implemented is called Job Steps. It is a much more mechanical way to control processes, and it allows us to connect to external providers. For example, we are a big Informatica shop. Once you have the initial baseline set up, the development time to create a job that executes a task or workflow is about a minute: I create a new job in Informatica, create an equivalent job to run the batch, and about a minute later it's done. It improves development time to market and getting things done.

What ActiveBatch allows you to do is develop a more efficient process. It gave me visibility into all my jobs so I could choose which jobs to run in parallel. This is much easier than when I had to try to do it through cron for Windows XP, where you really can't do things in parallel and know what is going on. Improvement in workflow completion times comes down to optimization: the ability to do true parallel submission of jobs, and then to track the status of those jobs simultaneously and know when they are done, is what creates the optimization.

The solution provides us with a single pane of glass for end-to-end visibility of workflows. It has a very broad, deep view of what's going on. You can go down to an individual job level or see across the whole system and different groups. Because we roll out by project area, each project has its own root group folder that it uses to manage its routines. We don't have a master operational group yet that manages it, so each group does its own operational support. However, if I look at things in it, there are a lot of shared things that we have put in there. If a machine is taking too long, I can go focus on that: why is it taking so long? Then I can let people know that we have a particular routine that is running poorly. View full review »
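The "true parallel submission plus simultaneous status tracking" this reviewer describes maps directly onto a worker-pool pattern. Here is a small, generic Python sketch using `concurrent.futures`; the job names and callables are hypothetical, and this stands in for (not reproduces) what the scheduler does internally.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_in_parallel(jobs, max_workers=4):
    """Submit all jobs at once and collect each status as it finishes.

    `jobs` maps a job name to a zero-argument callable; the return value
    maps each name to ("done", result) or ("failed", exception), so a
    caller knows the moment every job has finished, success or not.
    """
    statuses = {}
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(fn): name for name, fn in jobs.items()}
        for future in as_completed(futures):
            name = futures[future]
            try:
                statuses[name] = ("done", future.result())
            except Exception as exc:
                statuses[name] = ("failed", exc)
    return statuses
```

Contrast with cron-style scheduling: there, jobs fire at fixed times with no shared view of completion, which is exactly the visibility gap the reviewer calls out.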
Mike Scocca
Data Warehouse Operations Analyst at a leisure / travel company with 1,001-5,000 employees
One of the valuable features is the ability to trigger workflows, one after another, based on success, without having to worry about overlapping workflows. The ability to integrate our BI, analytics, and data quality jobs is also valuable. We used to have everything set up based purely on time: run the data warehouse until five in the morning, run BI at 5:30. There were times we missed the deadline, so that when the BI jobs ran, the data would be incomplete, or we had a big gap in time where we were missing out on starting early. It has really saved us a lot of man-hours compared to when we would have a data issue and have to manually restart all of the downstream jobs after the warehouse.

ActiveBatch also provides us with a single pane of glass for end-to-end visibility of workflows. That simplifies the process when we check to see if things have run or how they're running. The Map View feature makes it easy to see what the dependencies are. It's helpful to have a visual, top-down look, from start to finish, at what flows are running when you need to look into that.

In terms of the unlimited bandwidth, as far as I can tell it has handled all of our volume without any issues whatsoever. For the analytics and business intelligence jobs, I don't keep track of how many run each day; I can only really check the warehouse. But as far as I can tell it has handled the total volume of our needs without any issue.

We use event triggers and file events, and one job we have uses email triggers. Especially for the business side, if they have a list of call center people, a list of promotions, or some costing information that they need loaded into the warehouse, it allows us to say to them, "We don't need a dummy file and we don't need a blank file. Whenever you have a file ready to go, just put it on a shared drive and the job will automatically pick it up."
So it simplifies our interactions with the business and allows them more flexibility to get their work done. The triggering doesn't so much reduce delays as alleviate the need either to have the business create a dummy file or to code the job so that, if it doesn't find a file to run each day, it won't error out or have to send an informational message. Whether we get one file a day, five files in a day, or only one file every six months, the job just runs when the business has the data available, without our having to worry about it. View full review »
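The shift this reviewer describes, from time-based scheduling to success-based chaining, can be sketched as a simple stage runner: each stage starts only when the previous one reports success, so BI never reads an incomplete warehouse. The stage names below are taken from the review; the callables and success convention are illustrative assumptions.

```python
def chain_on_success(stages):
    """Run stages in order, starting each only after the previous succeeds.

    Replaces time-based scheduling ("warehouse until 5:00, BI at 5:30"):
    there is no deadline to miss and no idle gap when the warehouse
    finishes early. Each stage is (name, callable returning True on
    success). Returns (stages that ran, name of the stage that stopped
    the chain, or None if everything succeeded).
    """
    ran = []
    for name, stage in stages:
        if not stage():
            return ran, name  # stop: everything downstream is skipped
        ran.append(name)
    return ran, None
```

When a stage fails, everything downstream is held rather than run against partial data, which is also what removes the manual restart of downstream jobs after a data issue.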
Georg Johansen
Operations Manager at Statkraft AS
We use the main job-scheduling feature. It's the only thing we use in the tool, and it's the reason we are using it: to reduce costs by replacing manual tasks with automated tasks and to perform regular, repetitive tasks in a more reliable way. It's quite customizable because it supports many different platforms and technologies, and it covers almost everything we need to set up different jobs in our environment. We are using it mostly for our Windows and Unix servers, and we are using different triggers, for example, Apache ActiveMQ. It is used by many different applications and systems. We use various databases, including Oracle and Microsoft SQL Server, as well as Active Directory. We are at the beginning of implementing agents in our Azure cloud. We haven't used that part very much yet, but it will be used. We are moving more and more systems from on-prem to the cloud, so its use will increase gradually. View full review »
DBA at a venture capital & private equity firm with 11-50 employees
The scheduling is good because you don't miss any issues. Let's say you reboot the server while there are still things pending: they will resume. From a scheduling point of view, it is pretty good. View full review »
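The resume-after-reboot behavior this reviewer praises relies on the scheduler persisting its pending work outside of memory. As a hedged illustration of the principle (not of ActiveBatch's actual mechanism), here is a minimal sketch that checkpoints pending job names to a JSON file and reruns them on startup; the file name and runner mapping are invented for the example.

```python
import json
from pathlib import Path

def save_pending(state_file, pending):
    """Persist the still-pending job names before (or during) a run."""
    Path(state_file).write_text(json.dumps(pending))

def resume_after_reboot(state_file, runners):
    """On startup, rerun whatever was pending when the server went down.

    `runners` maps job name -> callable returning True on success.
    Jobs that fail again stay pending for the next attempt.
    """
    state = Path(state_file)
    if not state.exists():
        return []
    pending = json.loads(state.read_text())
    finished = [name for name in pending if runners[name]()]
    save_pending(state_file, [n for n in pending if n not in finished])
    return finished
```

Because the checkpoint lives on disk, a reboot loses nothing: the next startup simply picks the list back up, which is the "nothing gets missed" property the review describes.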