We performed a comparison between Dynatrace and Selenium HQ based on real PeerSpot user reviews.
Find out what your peers are saying about Datadog, Dynatrace, New Relic and others in Application Performance Monitoring (APM) and Observability.

"Data analytics help us find issues in the short term or the long term."
"The ability to take each individual request and dive in to inspect what methods and calls are being made is extremely helpful."
"The initial setup was straightforward. The documentation and Dynatrace University helped."
"In terms of explaining to a customer how their data flows, it has been a great tool, instead of trying to draw it out and hoping that is exactly where the data goes."
"Quick availability of multiple aspects of performance from infrastructure to application layers."
"Automation and anomaly detection have helped reduce MTTR and MTBF."
"The autodiscovery of service intercommunication has saved countless man hours and is dynamically updated when new services are added."
"The linking is very good in Dynatrace. What happens in other monitoring tools is the linking is not proper. In those solutions, a person has to manually link many of the layers and what is happening in them, while in Dynatrace you get that from the very first visit. For example, if a person is visiting your website, from there it will traverse you to the end. If the application is a Java application, it will traverse you there, to the Method level. So that linking and traversing is better in Dynatrace."
"It is compatible with and supports multiple languages, such as Java and Python. It is open source, and it is widely used."
"It is programming language agnostic, you can write tests in most currently used languages."
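The language-agnosticism praised above comes from the fact that Selenium bindings in every language ultimately speak the same W3C WebDriver protocol: JSON commands sent over HTTP to a driver. As a hedged sketch (no real browser or driver is contacted, and `new_session_payload` is an illustrative helper, not a Selenium API), this is roughly what a "new session" request body looks like in any binding:

```python
import json

def new_session_payload(browser="firefox"):
    # W3C WebDriver "New Session" command body (sent as POST /session).
    # Every language binding -- Java, Python, C#, JavaScript -- builds an
    # equivalent JSON document, which is why tests can be written in any
    # of them against the same browser drivers.
    return json.dumps(
        {"capabilities": {"alwaysMatch": {"browserName": browser}}}
    )

print(new_session_payload("chrome"))
```

Because the protocol, not the binding, defines the behavior, teams can pick whichever language their existing test suite already uses.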
"The testing solution helps us deliver high-quality web applications."
"It is more stable in comparison to other solutions because it has been in the market longer."
"The most valuable feature of Selenium HQ is its support for third-party tools, for capabilities such as capturing screenshots and automating Windows-based applications."
"My customer previously validated every file and it would take almost 15-20 minutes for a document. They used to randomly select and test only 100 out of the thousands, maybe 85,000, files, to pick up sampling. Each file would take around 20 to 25 minutes, so we were not able to do it manually, but with the help of Selenium, we were able to test all the files in two days. It saves a lot of time."
"The stability of the solution has been good; it is reliable, and we have not had any bugs."
"The most valuable features are ExpectedConditions, actions, assertions, verifications, flexible rates, and third-party integrations."
"I would like to have something more along the lines of the old DC RUM, because we have a lot of clients with old, legacy technologies, and we really want to integrate them with Dynatrace so they can use just one single product. So, it needs better integration with legacy products."
"Provide much better alignment between AppMon and Dynatrace."
"I would like a testing module focused on quality gates."
"The analytics feature provides us some information, but is limited for now. We want to see how we can consume the data further down and have analytics guys look at the datacenter information."
"The solution's ability to assess the severity of anomalies based on the actual impact to users and business KPIs is great. In my opinion, it could be extended even more. I would like it to be more configurable for the end-user. It would be nice to have more business rules applicable to the severity. It's already very good as it is now. It is based on the impact on your front-end users. But it would be nice if we could configure it a bit more."
"It would be great to have Synthetic automatically retrieve what the customer sees on his side."
"It definitely needs HA, because we have so many applications that are dependent on AppMon that it has been deemed critical. Any downtime, it just affects so many users. So that's one of our key asks for the future."
"To get the EM data, we have to open a browser. One of the asks from our clients and our engineering team is to change this."
"I don't have that much experience with it, but I know that Selenium is used more for websites. It is not for testing desktop applications, which is a downside. It could support desktop applications better."
"It takes such a long time to use this solution that it may be worth looking into other free solutions such as TestProject or Katalon Studio, or paid solutions to replace it."
"We'd like to see some more image management in future releases."
"The reporting part can be better."
"The drawback is the solution is not easy to learn."
"To simplify the development process, everyone needs to build a Selenium framework to access the web application's functions and features through Selenium methods."
"The solution is open-source, so everyone relies on the community to assist with troubleshooting and information sharing. If there's a complex issue no one has faced, it may take a while to solve the problem."
"There are stability issues with Internet Explorer only."
Dynatrace is ranked 2nd in Application Performance Monitoring (APM) and Observability with 341 reviews while Selenium HQ is ranked 5th in Functional Testing Tools with 102 reviews. Dynatrace is rated 8.8, while Selenium HQ is rated 8.0. The top reviewer of Dynatrace writes "AI identifies all the components of a response-time issue or failure, hugely benefiting our triage efforts". On the other hand, the top reviewer of Selenium HQ writes "Easy to use with great pricing and lots of documentation". Dynatrace is most compared with Datadog, New Relic, AppDynamics, Splunk Enterprise Security and Azure Monitor, whereas Selenium HQ is most compared with Eggplant Test, Tricentis Tosca, Worksoft Certify, Telerik Test Studio and OpenText Silk Test.
We monitor all Application Performance Monitoring (APM) and Observability reviews to prevent fraudulent reviews and keep review quality high. We do not post reviews by company employees or direct competitors. We validate each review for authenticity via cross-reference with LinkedIn, and personal follow-up with the reviewer when necessary.