2020-01-15T08:04:00Z

What needs improvement with Tidal Automation?


Please share with the community what you think needs improvement with Tidal Automation.

What are its weaknesses? What would you like to see changed in a future version?

Guest
15 Answers

Top 10 | Real User

They have a bit of work to do on the ServiceNow Adapter. At the moment, with 6.2.1, we can send an SNMP trap to ServiceNow to create an incident when a job fails. However, there is so much more scope for an API interface between the Adapter and everything else you can do in ServiceNow. I would have other use cases for different things within ServiceNow if that were the case. The reporting is kind of lacking and not super awesome, and the administrative overhead isn't that straightforward. Maybe we're using it wrong.

The ability to express jobs as code is something I have wanted for years now, especially as we move into the DevOps space. We have been doing one-touch deploys in our CI/CD pipeline for a while, and we have releases and code deployments that go through environments with a single deployment tool, so SQL code, SSIS packages, and registry entries can all be installed at once. Tidal can't do this for jobs, because they use a Transporter mechanism, which baffles me because the product has SQL Server on the back-end. We would like a developer to be able to push a "Script" button that exports a script for injection from one environment to another. That is what it needs instead of a clunky Transporter tool to move things between environments. If they could just expose the code that they were going to insert into the next environment, then we could express those jobs as code and fold them into our consolidated release process. For me, in the DevOps space, expressing jobs as code would be the way to go.

The solution's current drill-down functionality is alright because the Client Manager is an actual database. With the next version, 6.5.3, they put that into an in-memory database, so you have no real ability to go through and have a look at it. I think there's a gap there.
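
If Tidal exposed the job definitions as plain data, the "jobs as code" workflow described above becomes a small script rather than a Transporter run. The following is a minimal sketch, assuming a read-only account can query the SQL Server back-end; the table and column names (jobmst, jobmst_name, jobmst_owner) and the database name are placeholders for illustration, not the documented schema.

```python
# Hedged sketch: export job definitions from the SQL Server back-end into a
# version-controllable JSON file that a CI/CD pipeline could promote between
# environments. Table/column names below are illustrative assumptions.
import json
import pyodbc

def export_job_definitions(conn_str: str, out_path: str) -> None:
    conn = pyodbc.connect(conn_str)
    cursor = conn.cursor()
    # Hypothetical job-master table; adjust to the actual schema in your install.
    cursor.execute("SELECT jobmst_id, jobmst_name, jobmst_owner FROM jobmst")
    columns = [col[0] for col in cursor.description]
    jobs = [dict(zip(columns, row)) for row in cursor.fetchall()]
    with open(out_path, "w") as fh:
        json.dump(jobs, fh, indent=2, default=str)  # default=str handles datetimes
    conn.close()

if __name__ == "__main__":
    export_job_definitions(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=tidal-db;"
        "DATABASE=scheduler;UID=readonly_user;PWD=example",
        "jobs_dev.json",
    )
```

The exported file could then be diffed and checked in alongside the SQL code and SSIS packages that already go through the one-touch deploy.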

2020-04-05T09:13:00Z
Top 5 | Leaderboard | Real User

Before STA bought this product, Cisco owned it and, unfortunately, did not update things as well as they should have. We're just now seeing improvements to the product and bug fixes. The biggest thing they need to work on is doing better QA checks before they release new patches and service packs. We find that you can't trust taking a new release right away, because they still have to get some bug fixes out; the first iteration tends to have some bugs. In addition, something they already know about is that speed can be a little bit of an issue in the environments and the viewers.

While everything is nice in the GUI, which they recently upgraded, they could take it a step further. I would like it to have more flexibility, and the overall look of the product could be better. Before this recent patch we're applying, 6.5.3, the 6.5 series still looked like a product from the 1990s. They recently did a mini-refresh on the graphical user interface, but it still looks a little bit clunky. It doesn't look as smooth as I would expect from a 21st-century product, but it's getting there. That is a secondary item, though, versus the speed and working on bug fixes.

2020-03-03T08:47:00Z
Top 5 | Leaderboard | Real User

The solution's drill-down functionality, so admins can investigate data or processes, depends on what we are looking at. In some places it is better than others, and it is getting a lot better. In the five years that I've been supporting this solution, I've seen them get much better at allowing us to get more detailed information in the logs and job activity. I'm still hoping, with Explorer, to be able to see end-to-end job streams. That's not really something that's easy to see today in the web client; however, I haven't worked with Explorer yet. One of the things that we have found frustrating is not being able to see an end-to-end job stream across multiple applications within Tidal. We use jobs for that right now, but I have high hopes that we'll be able to see that in Explorer.

The reporting piece needs improvement. They are working to improve it, but it is a piece they can continue to work on. By reporting, I mean things like end-to-end job streams, historical reporting over the long term, and forecasting. Those are some areas where I've told them they need to up their game.

We have the transport functionality, where you move objects from one system to another. Right now, it's a manual process. I would love to have more automated transports. Then, I'd love to be able to tie that into our ITSM system so that once change approvals go through, the transports happen automatically.
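
As a rough illustration of the ITSM tie-in this reviewer is asking for, the sketch below polls a ServiceNow change request through the standard Table API and only then launches the transport. The instance URL, credentials, change number, and the "transporter_cli" command are placeholder assumptions, since today the transport step is manual.

```python
# Hedged sketch: gate an automated transport on a ServiceNow change approval.
import subprocess
import requests

SNOW_INSTANCE = "https://example.service-now.com"   # assumption
CHANGE_NUMBER = "CHG0031234"                        # assumption

def change_is_approved(session: requests.Session) -> bool:
    resp = session.get(
        f"{SNOW_INSTANCE}/api/now/table/change_request",
        params={"sysparm_query": f"number={CHANGE_NUMBER}",
                "sysparm_fields": "approval,state"},
    )
    resp.raise_for_status()
    records = resp.json().get("result", [])
    return bool(records) and records[0].get("approval") == "approved"

def run_transport(export_file: str) -> None:
    # Placeholder for whatever mechanism promotes definitions between masters.
    subprocess.run(["transporter_cli", "--import", export_file,
                    "--target", "PROD"], check=True)

if __name__ == "__main__":
    with requests.Session() as s:
        s.auth = ("svc_account", "example-password")   # assumption
        if change_is_approved(s):
            run_transport("jobs_dev.json")
```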

2020-02-12T17:16:00Z
Top 10 | Real User

From an administrative point of view, I wouldn't give really high marks to the solution. I actually entertained getting the JAWS application at one point. One of the shortcomings with the scheduler is the reporting capabilities. At least at the time, JAWS was the best they had for a third-party integration. I think they've got things in the pipeline to help alleviate that gap.

Also, one of the things I'm concerned about is that, with the security we have, there's a hazard that somebody could go in and accidentally delete a master grouping of definitions out of Tidal. Right now, I don't have an easy way to recover from that. It looks like a couple of things that are in the pipeline with Tidal are going to allow for that kind of recovery. There should eventually be a replacement for the Transporter tool, and it sounds like it's going to have the capability of doing copies out of Tidal. If I scheduled that once a week, it would give me a copy of definitions out of Tidal. If one of the operators who had the rights accidentally deleted a grouping of definitions, I would have a list of definitions that I could go back to and recover.
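
The weekly "copy of definitions" this reviewer wants can be approximated today with a scheduled snapshot job. A minimal sketch follows, assuming read access to the back-end database; the table names are illustrative, not the documented schema.

```python
# Hedged sketch: dump job-definition tables to a dated folder once a week so
# an accidental mass delete can be reviewed and recovered from.
import csv
import os
from datetime import date
import pyodbc

TABLES = ["jobmst", "jobdep", "trgjob"]   # hypothetical definition tables

def snapshot_definitions(conn_str: str, base_dir: str) -> str:
    out_dir = os.path.join(base_dir, date.today().isoformat())
    os.makedirs(out_dir, exist_ok=True)
    conn = pyodbc.connect(conn_str)
    cursor = conn.cursor()
    for table in TABLES:
        cursor.execute(f"SELECT * FROM {table}")
        with open(os.path.join(out_dir, f"{table}.csv"), "w", newline="") as fh:
            writer = csv.writer(fh)
            writer.writerow([col[0] for col in cursor.description])
            writer.writerows(cursor.fetchall())
    conn.close()
    return out_dir
```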

2020-02-09T12:23:00Z
Top 10 | Real User

We've had some quirky stuff happen on an occasional basis where a job does not take off. For example, a job we expected to be finished by 3:00 a.m. is sitting there, not executing, when we come in in the morning. We have to go all the way back through the dependencies, and then we can see that one of the dependencies has become unscheduled, for some reason. No changes were made to the schedule, but this prerequisite job has, all of a sudden, become unscheduled. I have brought this up with Tidal's support but they have never had an answer for it. It would be helpful to be notified ahead of time when something is going to stop the schedule, even if we don't necessarily know what's causing it.

But the main area for improvement is reporting. A lot of our managers would like to have metrics shown in graphs for the products they keep track of. The reporting part of Tidal isn't very useful; when you use the report function, you can't bring that data into an Excel spreadsheet. I understand that in the new release they have something called Explorer, which is a new reporting feature. I think they acquired a product to handle reporting functions, but we haven't gotten it yet.
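
Until the built-in reporting improves, metrics like these can be pulled straight from the back-end database into a spreadsheet. A hedged sketch follows; the history table and column names (jobrun, jobrun_status, and so on) are assumptions for illustration, and the Excel export needs the openpyxl package installed.

```python
# Hedged sketch: query the last week of job history and write an Excel file
# with a raw tab plus a per-day status roll-up for managers.
import pandas as pd
import pyodbc

def job_history_to_excel(conn_str: str, out_path: str) -> None:
    conn = pyodbc.connect(conn_str)
    df = pd.read_sql(
        "SELECT jobrun_name, jobrun_status, jobrun_start, jobrun_duration "
        "FROM jobrun WHERE jobrun_start >= DATEADD(day, -7, GETDATE())",
        conn,
    )
    summary = (df.assign(run_date=pd.to_datetime(df["jobrun_start"]).dt.date)
                 .groupby(["run_date", "jobrun_status"]).size()
                 .unstack(fill_value=0))
    with pd.ExcelWriter(out_path) as writer:   # uses openpyxl for .xlsx
        df.to_excel(writer, sheet_name="runs", index=False)
        summary.to_excel(writer, sheet_name="daily_summary")
    conn.close()
```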

2020-02-05T10:15:00Z
Top 5 | Leaderboard | Real User

One thing I would like to see is better training, both on how to set up and support the product and on how to make use of it, especially regarding the scripting that is available. Another place I'd like to see improvement is that there are certain agents that I don't have access to. It's on the wish list, but they can't do everything for everybody. The one that I'm looking for particularly is IBM's Data Store driver. I understand why they haven't created one, but my life would be better if they did. They also need to make sure they have the adapters, or have a mechanism to get the adapters, that people need. There's an adapter that I would really like. I've even said, "I'll pay for it. Just tell me how much and I'll get it paid for." They're just not in a position to do that.

2020-02-05T10:15:00Z
Top 10 | Real User

The HANA adapter is not available today. If I need to call a procedure in HANA right now, I don't think Tidal has an adapter for it. I know that we do not have a ServiceNow adapter either, but I believe that will come in a new release.

With the client, we have had certain issues. The user interface for Tidal is a little slow; a lot of people would love this tool if it had a faster user interface. The drill-down functionality should be much quicker than it is now. If I fill in some data, it takes a while to get that data back onto the screen. It's not as fast as we were expecting. I would also like to see improvement in terms of performance, meaning that it triggers jobs at the right time. If Tidal improves the performance of the client, that will be really useful for developers and for people doing on-call/production support of jobs.

We are looking for a cloud offering from STA Group. We keep hearing from STA Group that this is in discussion on their end. We are also looking at the SaaS offerings that other customers are using.
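
Until a HANA adapter exists, one workaround (a sketch only, not a supported integration) is to wrap the procedure call in a small script that an ordinary agent runs, using SAP's hdbcli driver. The host, port, schema, and procedure names below are placeholders.

```python
# Hedged sketch: call a HANA stored procedure from a script job and use the
# exit code so the scheduler can mark the job completed or failed.
import sys
from hdbcli import dbapi

def call_hana_procedure() -> int:
    conn = dbapi.connect(address="hana-host.example.com", port=30015,
                         user="BATCH_USER", password="example")
    try:
        cursor = conn.cursor()
        cursor.execute('CALL "MY_SCHEMA"."LOAD_DAILY_SALES"()')
        return 0          # exit 0: job completed normally
    except dbapi.Error as exc:
        print(f"HANA procedure failed: {exc}", file=sys.stderr)
        return 1          # non-zero exit: scheduler flags the job as failed
    finally:
        conn.close()

if __name__ == "__main__":
    sys.exit(call_hana_procedure())
```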

2020-01-30T11:44:00Z
Top 10 | Real User

Their software installation and update process could use some improvements. I'm pretty sure they're working on that, but it's definitely an area that could be streamlined a lot. There's still a lot of manual work that you have to do with the schedule when you deploy masters or agents.

The other thing is that the performance of the web interface has not been great. It's feedback I get quite a bit that the web interface can be sluggish at times; we've got to recycle it to get it to be more responsive. We brought up this issue a while ago. A lot of what we may be dealing with is that we are running an older version, and I suspect a lot of the performance stuff has been corrected in the later versions. We are running on 6.2.1 but they have 6.3.5 out there now.

As for things we'd like to have, I'd love to see the database back-end support PostgreSQL or MySQL. Right now the choices are Microsoft SQL Server or Oracle.

2020-01-29T11:22:00Z
Top 10 | Real User

One area for improvement is the command-line interface and the API to bulk-load jobs. It's a little bit kludgy, but we still manage without it. They're working on it and it's getting better all the time. In addition, the documentation for their API for creating jobs needs to be updated. It has a bit of a learning curve. We also wish there was a search functionality for assigning actions to events, and users to workgroups. Finally, the S3 data mover jobs are still a little buggy.
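
For readers wondering what bulk-loading jobs through an API looks like in practice, here is a hedged sketch that reads definitions from a CSV and posts them one by one. The endpoint path, payload fields, and credentials are placeholders rather than the documented Tidal API; the point is the shape of the loop.

```python
# Hedged sketch: bulk-create jobs from a CSV file via a REST API.
import csv
import requests

API_BASE = "https://tidal-master.example.com/api"   # assumption
AUTH = ("svc_scheduler", "example-password")        # assumption

def bulk_load_jobs(csv_path: str) -> None:
    with open(csv_path, newline="") as fh, requests.Session() as session:
        session.auth = AUTH
        for row in csv.DictReader(fh):
            payload = {
                "name": row["name"],
                "command": row["command"],
                "agent": row["agent"],
                "calendar": row.get("calendar", "daily"),
            }
            resp = session.post(f"{API_BASE}/jobs", json=payload)  # hypothetical endpoint
            resp.raise_for_status()
            print(f"created {row['name']}")

if __name__ == "__main__":
    bulk_load_jobs("jobs_to_load.csv")
```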

2020-01-29T11:22:00Z
Top 5 | Leaderboard | Consultant

I would like more involvement with the cloud. That is something I know we were interested in, as we are moving applications there. One client's management team has told Tidal that they would like to see integration with the new application. They have been doing a pretty good job of improving the product. The update of the client so that it no longer has a separate database has been a big improvement, because that separate database could add another bottleneck. Right now it's a much faster process, because the client uses an in-memory database instead of having to go out to a separate database to read all this stuff.

2020-01-29T08:35:00Z
Top 10 | Real User

Tidal enables admins and users to see the information relevant to them, for the most part. It depends on what you are looking at. One of the weaknesses of the product is that when something happens, it's difficult to find the root cause. There are a lot of logs you can look at in Tidal. Sometimes they are useful, but other times they're not. That is mostly relegated to the administrative team. Users, for the most part, don't see that and don't know anything about it. They just know they have a problem, and then it's up to the administrative team to see what happened and figure out the problem. At the higher levels it tends to be clearer, but when you need to drill further down and get into the guts of the app (the technical level), it is sometimes difficult to find the root cause.

Tidal comes with two front-ends (GUIs): the Java client and the web client. The Java client is a very lightweight client which you install on your desktop or terminal server. The web client just runs in the browser. They are slightly different, and what we are finding is that there are sometimes discrepancies and inconsistencies between the two. One function may work in the Java client but not in the web client. That is because they have two sets of code with different front-ends, so they are inconsistent. I have asked if they can just use one of them. We prefer the web client because it doesn't require any installs on your desktop. However, we also like the Java client because the usability and the look and feel are better than in the web client.

We have been using this solution for a number of years, using both front-ends. Sometimes we see it as an advantage: if there's a problem with the web client, we go use the Java client, so you have two ways of getting in. It's a pain sometimes, because when you have an issue you need to check both and they may behave differently. On the other hand, when you have a problem there is a different way to get in, and you are glad that you have two ways into it rather than just one.

2020-01-29T08:35:00Z
Top 10 | Real User

For the most part, the drill-down and the logging are really good. But take an Informatica job, for example: we, and the operators, have the ability to actually drill down and see, at a session level, where the failure is. There is, unfortunately, no way to extract that into an actual output email or failure email. It's not that the information is not available, but extracting it into an email would be a nice-to-have. It's minor, but it would definitely be a help. In the grand scheme of things, though, you can drill down to session-level failures and get that error message to provide to support.

Another thing has to do with job events. A job event triggers when a job completes; it sends an email or reruns a job. Right now (and I've even talked to Tidal about this) it will run all the events at the same time. It doesn't provide the logic to say, "I want this job to rerun five times. If it fails on the fifth time, then send an email: 'Out for Failure.'"

The only other thing I would like to see is an easy way to flag jobs running longer than a certain percentage of the estimated time they should take. Right now, you can hard-code a maximum expected run-time and trigger a notification off of that. The unfortunate thing is that, in a consumer-product business such as ours, Q3 and Q4 jobs are going to run longer, so you can't really put in a hard-coded expected run-time, because that's going to fluctuate. It would be useful if we could specify something like "Flag this job if it runs 25 percent longer than estimated," based on the history the solution does track for 30 or 35 days. That's what they usually recommend, out-of-the-box, for keeping track of history.
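
The "25 percent longer than estimated" idea boils down to a threshold derived from recent history rather than a hard-coded maximum. A minimal sketch of that calculation, using sample durations in place of the 30-35 days of history the product keeps:

```python
# Sketch: flag a run that exceeds its recent average duration by a tolerance.
from statistics import mean

def over_threshold(history_minutes: list[float], current_minutes: float,
                   tolerance: float = 0.25) -> bool:
    """Return True if the current run exceeds the recent average by `tolerance`."""
    if not history_minutes:
        return False                      # no baseline yet, nothing to flag
    baseline = mean(history_minutes)
    return current_minutes > baseline * (1 + tolerance)

if __name__ == "__main__":
    last_30_days = [42.0, 45.5, 40.2, 44.1, 43.7]   # sample durations (minutes)
    if over_threshold(last_30_days, current_minutes=58.0):
        print("Job is running more than 25% over its recent average; notify.")
```

Because the baseline moves with the history, a seasonal Q3/Q4 slowdown raises the threshold automatically instead of triggering false alarms.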

2020-01-27T06:39:00Z
Top 5 | Leaderboard | Real User

I know they are working on improving this already, but there needs to be better reporting. Currently, there are only three to five reports that we can get off of the system. They already have a solution for this in the new version, i.e., a schedule of all the jobs running for one day, specifically calling out the dependencies that each job relies on. It would be like a flow chart of how the day's jobs would run.
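
That flow-chart style report can also be approximated outside the product. A hedged sketch: given job-to-prerequisite pairs extracted from the schedule (by whatever means are available), emit a Graphviz DOT file and render it as a diagram of how the day's jobs run. The job names here are sample data.

```python
# Sketch: write a Graphviz DOT file describing one day's job dependencies.
def write_dependency_dot(dependencies: list[tuple[str, str]], out_path: str) -> None:
    lines = ["digraph schedule {", "  rankdir=LR;"]
    for prerequisite, job in dependencies:
        lines.append(f'  "{prerequisite}" -> "{job}";')
    lines.append("}")
    with open(out_path, "w") as fh:
        fh.write("\n".join(lines))

if __name__ == "__main__":
    deps = [
        ("extract_sales", "load_warehouse"),
        ("extract_inventory", "load_warehouse"),
        ("load_warehouse", "nightly_reports"),
    ]
    write_dependency_dot(deps, "todays_schedule.dot")
    # Render with: dot -Tpng todays_schedule.dot -o todays_schedule.png
```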

2020-01-27T06:39:00Z
Real User

We started to deploy Azure, and it's still not fully baked. We are struggling with it; it has not worked out-of-the-box. We haven't installed Tidal in the public or private cloud. We have a problem with security: while we can install the entire platform in the cloud to handle a separate workload or entity, if we want to centralize it, it's a little difficult.

They don't have good reporting capabilities. From the user perspective, I have 6,000 jobs running per day, and I would like to track them to know exactly what is going on. E.g., if a manager asks me, "Can you bring me this data, or can you do a dashboard or report?" I need to take a lot of actions in order to do that. It's not easy to compute that data.

We are now testing version 6.5. The speed of its console is much better than 6.2, where the speed has not been sufficient for me.

We are doing a customer-satisfaction review these days, asking our users what they think about Tidal and what the vendor needs to improve. The number one item that comes up is the user experience (UX). The product has a lot of features, which is great. On the other hand, the user experience is a bit old. It is hard to find what you're looking for; the UX is not intuitive for all users. If I'm a user, it might take me some time to know where to find my stuff. It takes a lot of time to learn the product. I have admins and developers who have been working on the product for the last three to four years and still don't know all the functionality. Tidal has really great things about it, but people are focused on their day-to-day job and the solution is not intuitive. We have internal training where we do two weeks of training for three hours each day, so approximately 30 hours of training, and I cannot say that after that users know everything. It takes about six months to ramp up on Tidal and be really good and professional.

2020-01-23T14:08:00Z
Top 5 | Real User

The biggest problem for us was the Transporter tool that works through the API. It's like a GUI onto the API where you can transfer and compare jobs between two Tidal spaces. Up until the last few months, the Transporter tool that was offered was not really good at all. It was hard to take a job in development and promote it to production; there was no really good tool to do that. They offered a tool, but it wasn't that good. But they just put out the Tidal Explorer tool, which is basically a replacement for the Transporter, and that looks promising. I haven't really gotten to use it yet, but it seems to be a better system. That's what people have been requesting for a while now: an easy way to promote and review changes; something like a script-repository type of system, where you can promote something or pull it down, compare it, and then, if you like it, push it. If it doesn't work, you can back it out to previous revisions. It looks like it offers all those features, but I really haven't had a chance to dig into it. I set it up and it does look promising for the future. It's probably something that we're going to try to integrate into the day-to-day processing once it gets released. I don't think it has even been released as general availability yet; it's still in beta. But once it gets to be production-ready, we would definitely love to use it. It's something that's been on our radar for a while now.

Tidal also had a cache database, a copy of the master database, that the web client used. They got rid of that in the latest version, and that is something we had been asking for, for a long time. The way it had been set up didn't really seem optimal. It looks like they're trying to put forth a better tool for certain places that were lacking.

On another topic, we have set up job events that catch jobs that complete abnormally. What we do is send an SNMP trap that gets aggregated into one place where we can see those errors. We try not to use Tidal for monitoring so much as for job launching and tracking. We have a Nagios setup, so that if something fails, the error can be sent to Nagios and checked there. If a job is a long-running job, like an eight-hour job, we don't want that job active in Tidal for the whole time, taking up a job slot. We'll kick the job off in Tidal and it will show that it has completed normally. Then we'll hand it off to another tool to monitor that the process runs for the specified amount of time (see the sketch after this review). I don't know if Tidal wants to get into the business of monitoring long-running jobs, but that could be a feature for the future: a job launching and monitoring tool. Using Tidal for monitoring doesn't seem like a good fit, but if they could offer something that did that as an add-on, or include it, it might be helpful.

Finally, the solution is a little tough to learn. Talking to people who are new to the Tidal interface, it's difficult, but I don't have anything to compare that to. They have said it's not as difficult as Control-M or some of the larger scheduling systems that people have used. Tidal has worked to prevent new users, especially those who aren't exactly sure what they're doing, from hurting themselves too much, which is good. They've put a lot of restrictions in place to prevent people from doing things that weren't intended. There is a learning curve, but I don't think it's steeper than with any other new scheduling system.

In the past, we've downloaded some other options and they had a learning curve too. If you've never used something, there's always a curve, with the terminology, etc. But I don't think it's any harder than any of the others. New users of Tidal need at least a month of working with it a little bit each day. I give people a three-hour introductory course, and every quarter I provide an overview for new users of how things are set up. Luckily, in our company, a lot of these new users are joining groups that already use Tidal on a daily basis. If they have any questions after the initial course, they can talk to their team. Over time, the teams that use Tidal become resources for the new employees, which takes a little bit of training off of my plate. Within a few months, people are confident and moving along. It takes a few hours to pick up, but it would take a few months to be fully confident and really feel that you know what you're doing in the space.
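
The long-running-job hand-off described in this review can be as small as a watcher script: Tidal launches the process, records its PID, and lets the job complete so the slot frees up, while the watcher reports health to Nagios as a passive service check. This is a sketch under local assumptions; the host name, service name, and Nagios command-file path are placeholders.

```python
# Sketch: monitor a detached long-running process and submit Nagios passive
# check results so monitoring happens outside the scheduler.
import os
import time

NAGIOS_CMD_FILE = "/usr/local/nagios/var/rw/nagios.cmd"   # assumption
HOST, SERVICE = "batch-host-01", "warehouse_rebuild"       # assumptions

def pid_alive(pid: int) -> bool:
    try:
        os.kill(pid, 0)        # signal 0 only checks that the process exists
        return True
    except OSError:
        return False

def submit_passive_check(status: int, message: str) -> None:
    # Nagios external-command format: PROCESS_SERVICE_CHECK_RESULT
    line = (f"[{int(time.time())}] PROCESS_SERVICE_CHECK_RESULT;"
            f"{HOST};{SERVICE};{status};{message}\n")
    with open(NAGIOS_CMD_FILE, "a") as fh:
        fh.write(line)

def watch(pid: int, expected_hours: float, poll_seconds: int = 300) -> None:
    deadline = time.time() + expected_hours * 3600
    while time.time() < deadline:
        if not pid_alive(pid):
            submit_passive_check(2, "long-running job exited early")   # CRITICAL
            return
        submit_passive_check(0, "long-running job still active")       # OK
        time.sleep(poll_seconds)
    submit_passive_check(0, "long-running job reached expected duration")
```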
