We compared Dynatrace and Grafana across five categories, based on real PeerSpot user reviews. Our conclusion, drawn from all of the collected data, appears below.
Comparison Results: Dynatrace is preferred over Grafana due to its AI capabilities, real user monitoring, session replay, and synthetic monitoring functionality. It offers good visibility and thorough scanning of services and applications, with the ability to drill down and analyze traffic. While Grafana is praised for its customizable, visually appealing graphs and its flexibility in integrating with other tools, it lacks some of Dynatrace's advanced features and capabilities.
"It has created total transparency between technology and business on all aspects of systems and performance, as well as being a proxy for network performance through user experience monitoring. This followed a major performance degradation of our primary frontline system, which highlighted the inadequacy of infrastructure-focused tools, e.g., Nagios and Zabbix. It helped detect and remediate several performance issues in both vendor-supplied packages and in-house developed systems. It also improved the InfraOps and development teams' understanding of system behaviour and performance characteristics."
"The stuff that's coming with the new pieces around the Dynatrace Managed SaaS implementation. The ease of implementation there is significant. We've spent a lot of time with AppMon and DC RUM - that's a lot of time to set up and configure. With the Managed solution, you just drop it in and everything pretty much auto-instruments."
"We use it to monitor over 1,000 servers in AWS."
"Daily metrics let us analyze the page composition and the corresponding performance metrics, so we can quickly and easily determine when something has changed, aiding root cause analysis."
"The most valuable features for us include problem detection, root cause identification, Smartscape, and integration with cloud infrastructure like AWS, Azure, GCP, etc."
"Developers love the tool, because it is easy to use. They can immediately see how their code behaves."
"We can see issues that occur, sometimes before the clients do. Before we have client (or end user) calls for issues, we are able to start troubleshooting and even resolve those issues. We can quickly identify the root cause and impact of the issues as they occur, and this is very helpful for providing the best client experience."
"The initial setup was straightforward. The documentation and Dynatrace University helped."
"What I found most valuable in Grafana is that it has a lot of integrations and features that I need for data processing and visualization."
"It is easy to change and move virtual servers."
"The best feature was the creation of graphs and trends."
"It integrates well with other solutions."
"The best thing about Grafana is the visualization. The colors and the ease of use make it very user-friendly."
"Kubernetes could help us better visualize the trend of our data by recording and displaying our history over a chosen duration, such as the last 30 days."
"Compatibility with Prometheus databases and the Spring Boot application make it the first choice when moving toward an SRE model."
"The integration between Loki and Tempo is valuable."
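The Prometheus compatibility that reviewers highlight typically comes down to registering Prometheus as a Grafana data source. A minimal provisioning sketch is shown below; the data source name, URL, and file path are illustrative assumptions, not details taken from the reviews:

```yaml
# Hypothetical Grafana provisioning file, e.g. provisioning/datasources/prometheus.yaml
apiVersion: 1
datasources:
  - name: Prometheus               # display name shown in Grafana (assumed)
    type: prometheus
    url: http://prometheus:9090    # placeholder address of the Prometheus server
    access: proxy                  # Grafana backend proxies queries to Prometheus
    isDefault: true                # use this source for new panels by default
```

With a file like this in place, Grafana picks up the data source at startup, and dashboards can query Prometheus metrics (such as those exposed by a Spring Boot application via its metrics endpoint) without manual configuration in the UI.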
"One piece that we think is missing: thread names were absent from the analytical information in the Dynatrace solution, versus the AppMon solution. AppMon gives you that information, and it is very helpful for connecting the dots and bringing all the pieces together."
"We have some issues with React user sessions."
"It needs education and training to ensure you get the full value of your purchase. Maybe add in a certification for Dynatrace."
"Custom reporting is still missing."
"The one area that we get value out of now, where we would love to see additional features, is the Session Replay. The ability to see how one individual uses a particular feature is great. But what we'd really like to be able to see is how a large group of people uses a particular feature. I believe Dynatrace has some things on its roadmap to add to Session Replay that would allow us those kinds of insights as well."
"The con of Dynatrace is that, at times, because it has so much information, it becomes difficult to see the root cause of your problem, and then you have to dig around to find the root cause."
"They should provide a guide to arrive at the solution for non-super experts."
"The scalability is there, but it is a headache when you need to compare a lot of servers and do a lot of things at once. The scalability is very difficult to maintain."
"The service dashboard is very hard to use and needs improvement."
"If there was an issue on one node, we couldn't drill down and see all the issues on other nodes."
"It can take a considerable amount of time to learn the graphs if a long duration is selected."
"I have a problem with Grafana in the area of documentation."
"It would be helpful if they simplified the data source."
"The documentation or training provided by Grafana is limited compared to its competitors, like Splunk."
"Lacks in-depth graphs and sufficient AI."
"Combining multiple dashboards into one has slowed things down for us."
Dynatrace is ranked 2nd in Application Performance Monitoring (APM) and Observability with 340 reviews while Grafana is ranked 6th in Application Performance Monitoring (APM) and Observability with 39 reviews. Dynatrace is rated 8.8, while Grafana is rated 8.0. The top reviewer of Dynatrace writes "AI identifies all the components of a response-time issue or failure, hugely benefiting our triage efforts". On the other hand, the top reviewer of Grafana writes "Agent-free with great dashboards and an active community". Dynatrace is most compared with Datadog, New Relic, AppDynamics, Splunk Enterprise Security and Prometheus, whereas Grafana is most compared with New Relic, Azure Monitor, Sentry, Elastic Observability and Honeycomb.io. See our Dynatrace vs. Grafana report.
See our list of best Application Performance Monitoring (APM) and Observability vendors.
We monitor all Application Performance Monitoring (APM) and Observability reviews to prevent fraudulent reviews and keep review quality high. We do not post reviews by company employees or direct competitors. We validate each review for authenticity via cross-reference with LinkedIn, and personal follow-up with the reviewer when necessary.