Dynatrace Review

I can quickly go to the PurePath and find the root cause of the problem in the application

What is our primary use case?

Our primary focus in using this product is to find performance issues and their root causes, so we can provide full recommendations. Coming from a performance engineering background, we love this tool. We focused on PurePaths, looked at hotspots, and this helped us rectify most of the performance issues.

We are using it for production monitoring, mainly testing all the new applications and performance monitoring.

How has it helped my organization?

The core features are the monitoring alerts and an easier, faster way of getting to the problem and identifying what should be fixed.

What is most valuable?

The most valuable feature is that I can quickly go to the PurePath and find the problem in the application. It gives me a way to quickly find the root cause of the problem, and then I can try to tune it. That has been an amazing experience.
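Beyond the UI, the same root-cause data can be pulled programmatically. As a rough illustration, the sketch below builds (but does not send) a request for currently open problems against the Dynatrace Environment API v2; the environment URL and token are placeholders, and the exact selector syntax should be checked against your Dynatrace version's API documentation.

```python
import urllib.parse
import urllib.request

ENV_URL = "https://example.live.dynatrace.com"  # placeholder environment URL
API_TOKEN = "dt0c01.EXAMPLE"                    # placeholder API token

def open_problems_request(env_url, token):
    """Build (but do not send) a GET request for currently open problems."""
    query = urllib.parse.urlencode({"problemSelector": 'status("OPEN")'})
    return urllib.request.Request(
        f"{env_url}/api/v2/problems?{query}",
        headers={"Authorization": f"Api-Token {token}"},
    )

req = open_problems_request(ENV_URL, API_TOKEN)
print(req.full_url)
```

Sending the request with `urllib.request.urlopen(req)` would return a JSON list of problems, each of which links back to the affected entities you would otherwise drill into by hand.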

What needs improvement?

The new Dynatrace release is already fully loaded. I still need to explore it. We are still using it and the new features, like log analysis and session replay, are good. As of now, I don't see any particular feature that is missing in Dynatrace, except one.

I still don't see the full depth of database metrics for database performance management. For example, I use Oracle Enterprise Manager and I use a type of access that provides me a lot of metrics and meaningful ways to evaluate database performance. That is something I don't see in Dynatrace yet.

What do I think about the stability of the solution?

It's very stable. I haven't seen any issues with the stability.

What do I think about the scalability of the solution?

We started with a couple of the applications initially, and it was fast. We wanted this tool to be part of the applications, and that's when we started adding more and more applications into Dynatrace. Then we realized that there were some slowness issues, but we were quickly able to see what was causing them, and we added adequate hardware and storage so that it became scalable.

Now we know the math between the number of applications and the sizing requirements of Dynatrace. Having worked that out, we know how to scale in terms of how many applications we want to put into Dynatrace.
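The "math" between application count and sizing can be captured in a simple linear estimate. The sketch below is purely illustrative: the per-application storage and RAM figures are made-up placeholders, not Dynatrace sizing guidance, and real numbers would come from measuring your own deployment.

```python
# Hypothetical capacity math for an APM deployment. The per-application
# constants are illustrative placeholders, not vendor sizing guidance.

def estimate_sizing(num_apps, storage_gb_per_app=50, ram_gb_per_app=0.5,
                    base_ram_gb=32):
    """Return a rough (storage_gb, ram_gb) estimate for monitoring num_apps apps."""
    storage_gb = num_apps * storage_gb_per_app
    ram_gb = base_ram_gb + num_apps * ram_gb_per_app
    return storage_gb, ram_gb

print(estimate_sizing(100))  # → (5000, 82.0)
```

With a model like this, adding a batch of new applications becomes a capacity-planning calculation rather than a surprise slowdown.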

How is customer service and technical support?

They're very quick, they're always on top of our questions. Most of the issues are getting resolved quickly. I can say the support is fabulous.

Which solutions did we use previously?

We were using Wily Introscope before, and we had a hard time setting up and capturing method-level performance metrics. For example, the way PurePaths in Dynatrace show insight into method-level hotspots and stack traces - that was missing in Wily Introscope. Dynatrace was also more intuitive. These are the things that pushed us to go for it.

How was the initial setup?

It was complicated in the sense of teaching the support team how to do this, because they were new. But once we showed them how to deploy the agents and how to administer it, it was easy for the ops team to take care of supporting, administering, and putting all the applications into one stack.

Which other solutions did I evaluate?

New Relic, and Wily.

What other advice do I have?

What we have seen in the last two or three years is that the technology space has been continuously changing and new features are being added. What we realized in the last quarter was that we should have a better way of identifying end-users' scenarios in production using artificial intelligence. Right now we are executing thousands of test scenarios. It would be better to run focused test scenarios based on artificial intelligence and our log analysis, and to focus our energy on testing the key scenarios that end-users actually perform. I think that is a new space where we need intelligent solutions like artificial intelligence.

The problem with the siloed monitoring tools was that you could not preserve the history of your test results, and they needed a lot of setup. You needed to work with so many tools, and they didn't provide all the key features we were looking for. Maybe one is good for a single thing, just plain CPU and memory. If I need holistic metrics, that's missing in the siloed monitoring tools.

If we had just one solution that could provide real analysis, as opposed to just data, that would be fantastic. We would not need to find so many different tools and capture all the individual bits and pieces of the data. It would be faster and more meaningful.

When picking an APM solution, it should be able to support all heterogeneous applications: it can be mainframe, integration, Java, .NET. The tool should support a wide range of applications. And it should be scalable: as we add more applications, we should not see any slowness issues. It should also be easy to use, since there are so many folks on the performance team and the ops team.

I can definitely say use Dynatrace, but I would say evaluate the space you are looking into and make sure you are able to support it fully. Make sure you evaluate all the technical criteria you are setting, based on your workspace.

Disclosure: I am a real user, and this review is based on my own experience and opinions.