1. Dynatrace is heavily automated, which is a big advantage. You don't have to configure much; it installs and runs straight away. There's a direct return on investment, and you can focus on more interesting work instead of constantly changing configurations. Detection of issues and problems is also fully automated, which is great.
  2. The most valuable feature is the flow map. From the performance management side, I like everything from business transaction work to tracking. On the database side, we can get a lot of insights from the database. On the server monitoring side, it helped us a lot to find some of the issues on the VM side, because the VMs were giving us a little trouble.
  3. The solution has a very good business event manager tool. The event management part of TrueSight Operations Management, in my experience, is probably the best in the market. You have endless flexibility. You can build your own rules, you have the MRL language, and you can implement any kind of logic on the alerts. It may be correlation, abstraction, or executing something as a result of the alerts. You have almost the whole range of options available for event management using the available customization.
  4. The most valuable features are logging, the extensive set of integrations, and the easy jumpstart. Having a wealth of information has helped us investigate outages, and having historical data helps us tune our system.
  5. Aternity's Digital Experience Management Quadrant (DEM-Q) has been a game changer for us. While knowing your own metrics is nice, if you don't know how you compare to others or what the numbers should be, it doesn't tell you much. This solution puts that into context (whether we are doing better or worse than others), which helps us prioritize where we want to focus and make improvements, versus accepting that's just how slow it's supposed to be. It's also great for communicating what we are doing and why to our IT leadership teams: where we're pretty far behind others in certain categories, the time and changes for our prioritizations are justified.
  6. Geneos is very easy and quick to implement; it's not complicated, and the installation is easy. There are also many resources about ITRS Geneos that explain its features and functions.
  7. The VPN is one of the solution's most valuable features for us. The most valuable feature is application monitoring.
  8. Azure Monitor is really just a source for Dynatrace. It's just collecting data and monitoring the environment and the infrastructure. It is fairly good at that.

Advice From The Community

Read answers to top Application Performance Management (APM) questions.
Ariel Lindenfeld
What is the most important aspect to look for when evaluating Application Performance Management solutions?
it_user364554 (COO with 51-200 employees)
Vendor

Hi,
Full disclosure: I am the COO at Correlsense.
Two years ago I wrote a post about exactly that, "Why APM Projects Fail." I think it can guide you through the most important aspects of APM tools.

Take a look, and feel free to leave a comment:
http://www.correlsense.com/enterprise-apm-projects-fail/
Elad Katav

it_user342780 (Senior Software Engineer Team Lead at THE ICONIC)
Vendor

Speed to get data into the platform is one of our most important metrics. We NEED to know what is going on right now, not 3-4 minutes ago.

David Fourie
Real User

Full stack end-to-end monitoring, including frontend and backend server profiling, real user monitoring, synthetic monitoring, and root-cause deep-dive analysis. Ease of use and an intuitive UX.

it_user229734 (IT Technical Testing Consultant at adhoc International)
Vendor

To evaluate or benchmark APM solutions, we can rely on the five dimensions defined by Gartner:
1. End-user experience monitoring: the capture of data about how end-to-end application availability, latency, execution correctness, and quality appear to the end user.
2. Runtime application architecture discovery, modeling, and display: the discovery of the various software and hardware components involved in application execution, and the array of possible paths across which those components could communicate that, together, enable that involvement.
3. User-defined transaction profiling: the tracing of events as they occur among the components or objects as they move across the paths discovered in the second dimension, generated in response to a user's attempt to cause the application to execute what the user regards as a logical unit of work.
4. Component deep-dive monitoring in an application context: the fine-grained monitoring of resources consumed by, and events occurring within, the components discovered in the second dimension.
5. Analytics: the marshalling of a variety of techniques (including behavior-learning engines, complex event processing (CEP) platforms, log analysis, and multidimensional database analysis) to discover meaningful and actionable patterns in the typically large datasets generated by the first four dimensions of APM.

Separately, we benchmarked some APM solutions internally based on the following evaluation groups:
Monitoring capabilities
Technologies and framework support
Central PMDB (Performance Management Database)
Integration
Service modeling and monitoring
Performance analysis and diagnostics
Alert/event management
Dashboards and visualization
Setup and configuration
User experience
We got interesting results.

it_user178302 (Senior Engineer at a financial services firm with 10,001+ employees)
Real User

Most vendors have similar transaction monitoring capabilities, so I look at the end-user experience monitoring features to differentiate: not only RUM (mobile and web) but also active monitoring through synthetics.

it_user510261 (Services Consultant at a financial services firm with 501-1,000 employees)
Vendor

I check for a customer experience tool. I think organizations have to improve their client experience, and if you don't have a tool to show you the points to improve, that is very hard to do!

it_user308760 (User)
Vendor

Visibility into the application transactions and all the magic behind the curtain.

it_user304704 (IT Release Manager at Ventera)
Consultant

Monitoring end-user experience (including the impact of external partners). It's the price of admission.

Rony_Sklar
With so many APM tools available, it can be hard for businesses to choose the right one for their needs.  With this in mind, what is your favorite APM tool that you would happily recommend to others?  What makes it your tool of choice?
GustavoTorres
User

My favorite APM tool is Dynatrace; its OneAgent handling enables fast and agile deployment.

Hani Khalil
Real User

I have tested a lot of APM tools, and most of them do the same job with different techniques and different interfaces.


One of the best tools I tested is called eG Enterprise. It provided the required information and data to our technical team, and we also found great support from the eG technical team during the implementation. One of the main factors was cost, and they can challenge a lot of vendors on that.

reviewer1397637 (Vice President Derivatives Ops IT at a financial services firm with 10,001+ employees)
Real User

My favourite APM tool is New Relic. The monitoring dashboard shows exact method calls with line numbers, including external dependencies, for apps of any size and complexity.

Pradeep Saxena
Real User

My favourite APM tool is Azure Monitor; from it I can check Application Insights. I can also check when an application has crashed.

Ravi Suvvari
Real User

Agree, well explained.

Mark Kaplan
Real User

Dynatrace is my only APM and has been for 3 years now.

Patrick Stack (Accedian)
Vendor

Some may call me a little biased, but I would strongly suggest taking a look at Accedian, from an overall total cost of ownership perspective as well as ease of use and deployment. This one is tough to beat!

Rony_Sklar
How is synthetic monitoring used in Application Performance Management (APM)? 
Brian Philips
User

There is actually a place and a need for both synthetic and real user experience monitoring. If you look at the question from the point of view of what you are trying to learn, detect, and then investigate, the answer should be that you want to be proactive in ensuring a positive end-user experience.

I love real user traffic. There are a number of metrics that can be captured and measured, and the number of things that can be learned is controlled by the type and kind of data source used: NetFlow, logs, and Ethernet packets. Response time, true client location, the application command executed, the response to that command from the application (including exact error messages), direct indicators of physical or virtual server and client performance; the list goes on and on. Highly valuable information for app-ops, networking, cloud, and data center teams.

Here is the challenge, though: you need real user traffic in order to measure user traffic. The number of transactions and users, the volumes of traffic, and the paths of those connections are great for measuring over time, for baselining and triage, and for finding correlations between metrics when user experience is perceived as poor. The variation in these same metrics, though, makes them poor candidates for measuring efficiency and proactive availability. Another challenge is that real user traffic is now often encrypted, so exposing that level of data has a cost that is prohibitive outside of the data center, cloud, or co-lo. These aspects are often controlled by different teams, so coordinating translations and time intervals of measurements between the different data sources is a C-level initiative.

Synthetic tests, by contrast, are fixed in number, duration, transaction type, and location. A single team can administer them, but everyone can use the data. Transaction types and commands (the tests) can be scaled up and down as needed for new versions of applications and for microservices living in containers, virtual hosts, clusters, physical hosts, co-los, and data centers. These synthetic transactions also determine availability and predict end-user experience long before there are any actual end users. Imagine an organization that can generate transactions, and even make phone calls of all types and kinds in varying volumes, a few hours before a geographic workday begins. If there has been no version change in software and no change control in networking or infrastructure, yet there is a change from baseline or a failure to transact, IT has time to address the issue before a real user begins using the systems or services. Being fixed in number and time, these transactions are very valuable in anyone's math for comparison and SLA measurements, and they do not need to be decrypted to get a command-level measurement.

Another thing to consider is that these synthetic tests also address SaaS and direct cloud access, as well as third-party collaboration access (WebEx, Zoom, Teams, etc.). Some vendors' offerings integrate with their real-user measurements and baselines out of the box, to realize the benefit of both and provide even more measurements, calculations, and faster triage. Others may offer integration points like APIs or webhooks and leave it up to you.

The value and the ROI are not so much one or the other. Those determinations for an organization should be measured by how you responded to my original answer: "you want to be proactive in ensuring a positive end-user experience."
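
To make that concrete, here is a minimal sketch of what a synthetic "robot" transaction can look like under the hood, in Python. The endpoint URL, interval, and latency budget are hypothetical stand-ins; a commercial tool adds multi-location scheduling, history, and alerting on top of this loop.

```python
# Minimal synthetic check (a sketch, not a product): probe an endpoint
# at a fixed interval, time the response, and validate the result,
# the way a synthetic "robot" transaction would.
import time
import urllib.request

CHECK_URL = "https://example.com/health"  # hypothetical endpoint
INTERVAL_SECONDS = 300                    # run every 5 minutes
TIMEOUT_SECONDS = 10
LATENCY_BUDGET_SECONDS = 2.0              # flag responses slower than this

def run_check(url: str) -> None:
    started = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=TIMEOUT_SECONDS) as resp:
            body = resp.read()
            elapsed = time.monotonic() - started
            healthy = resp.status == 200 and b"ok" in body.lower()
    except OSError as exc:  # DNS failure, refused connection, timeout
        print(f"DOWN  {url}  ({exc})")
        return
    if not healthy:
        print(f"FAIL  {url}  {elapsed:.3f}s  (bad status or payload)")
    elif elapsed > LATENCY_BUDGET_SECONDS:
        print(f"SLOW  {url}  {elapsed:.3f}s")
    else:
        print(f"OK    {url}  {elapsed:.3f}s")

if __name__ == "__main__":
    while True:  # a real scheduler would run this from several locations
        run_check(CHECK_URL)
        time.sleep(INTERVAL_SECONDS)
```

Because the same fixed check runs on a fixed schedule, its timings are directly comparable across runs, which is what makes synthetics useful for SLA math even without decrypting real user traffic.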

Diego Caicedo Lescano
Real User

Synthetic monitoring and real user monitoring (RUM) are two very different approaches to measuring how your systems are performing. While synthetic monitoring relies on automated, simulated tests, RUM records the behavior of actual visitors on your site and lets you analyze and diagnose it.


Synthetic monitoring is active, while real user monitoring is passive, which means the two complement each other.

NetworkOb0a3 (Network Operation Center Team Leader at a recruiting/HR firm with 1,001-5,000 employees)
Real User

I think different shops may use the term differently. With regard to an industry standard, the other replies may be more appropriate.


I can tell you that where I work, we refer to SEUM (Synthetic End User Monitoring), UX, and synthetic monitoring (both user experience monitors) as simulating actual human activities and setting various types of validations. These validations may be load times for images, text, or pages, or validating an expected action based on the steps completed by the monitor. We target all aspects of infrastructure and platform for standard monitoring, and then for any user-facing service we try to place at least one synthetic/UX monitor on top of the process. I often find the most value from our synthetics comes in the form of historical trending. A great example of a NOC win: patch X was applied, and we noticed a consistent additional 3 seconds required to complete UX monitor step Y. Another value of synthetics is quickly assessing actual user impact. More mature orgs may have this all mapped out, but I have found that many NOCs will see alarms on several services and not be able to determine what this means to the actual user community until feedback comes in via tickets or user-reported issues. Seeing the standard alarms tells me what is broken; seeing which steps are failing in the synthetics tells me what that means for our users.


I think that one of the great benefits of an open forum like this is getting to consider how each org does things. There are no wrong answers; some info just applies better to what you may be asking.

Sunder Rajagopalan
Real User

Synthetic monitoring helps simulate traffic from various geographic locations, 24/7, at some regular frequency (say, every 5 minutes) to make sure your services are available and performing as expected. In addition, running synthetic monitoring, along with alerts, on critical services that depend on external connections, like payment gateways, will help you catch any issues with those external connections proactively and address them before your users experience any issue with your services.

Michael Sydor
Real User

Synthetics in production are best used when there is little or no traffic, to help confirm that your external access points are functioning. They can also be used to stress-test components or systems, simulating traffic to test firewall capacity or message queue behavior, among many other cases. You can also use synthetics to do availability testing during your operational day, again usually directed at your external points. Technology for cloud monitoring is generally synthetics, and the ever-popular speedtest.net is effectively doing synthetics to assess internet speed. The challenge with synthetics is maintaining those transactions. They need to be updated every time you make changes in your code base (that affect the transactions) and to cover all of the scenarios you care about. There are also the hardware requirements to support the generation and analysis of what can quickly become thousands of different transactions. Often this results in synthetics being run every 30 minutes (or longer), which, of course, defeats their usefulness as an availability monitor.


Real user monitoring is just that: real transactions, not simulated. You use the transaction volume to infer the availability of the various endpoints, and baselines for transaction type and volume to assess that availability. This eliminates the extra step of keeping the synthetics up to date and living with the intervals at which you have visibility into actual traffic conditions. But it will take extra work to decide which transactions are significant and to establish the baseline behaviors, especially when you have seasonality or time-of-day considerations that vary greatly.


However, I'm seeing that the best measure of transaction performance is to add user sentiment to your APM. Don't guess at what the transaction volume means; simply ask the user whether things are going well or not! This helps you narrow down which activities are significant, and thus which KPIs need to be in your baseline.


A good APM practice will use both synthetics and real-user monitoring, where appropriate! You do not choose one over the other. You have to be mindful of where each tool has its strengths, what visibility it offers, and the process it needs for effective use.
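
As a toy illustration of the time-of-day baselining described above (the sample counts and thresholds are hypothetical; real APM products use far more robust statistics than a mean plus k standard deviations):

```python
# Sketch: build per-hour-of-day baselines from real-user response times,
# then flag new samples that deviate strongly from that hour's norm.
import statistics
from collections import defaultdict
from datetime import datetime

# hour of day (0-23) -> observed response times in seconds
history: dict[int, list[float]] = defaultdict(list)

def record(ts: datetime, response_time: float) -> None:
    """Feed a real-user measurement into the baseline."""
    history[ts.hour].append(response_time)

def is_anomalous(ts: datetime, response_time: float, k: float = 3.0) -> bool:
    """True if the sample sits more than k standard deviations
    above the mean for this hour of day."""
    samples = history[ts.hour]
    if len(samples) < 30:  # not enough data yet for a usable baseline
        return False
    mean = statistics.fmean(samples)
    stdev = statistics.stdev(samples)
    return response_time > mean + k * stdev
```

Seasonality beyond hour of day (weekday versus weekend, month-end) would need additional baseline keys, which is exactly the extra work mentioned above.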


ATHANASSIOS FAMELIARIS
Real User

Synthetic monitoring refers to proactive monitoring of the performance and availability of applications' components and business transactions. Using this technique, the availability and performance of specific critical business transactions are monitored per application by simulating user interactions with web applications and by running transaction simulation scripts.


By simulating user transactions, the specific business transaction is constantly tested for availability and performance. Moreover, synthetic monitoring provides detailed information and feedback on the reasons for performance degradation and loss of availability; with this information, performance and availability issues can be pinpointed before users are impacted. Tools supporting synthetic monitoring normally include features like complete performance monitoring, continuous synthetic transaction monitoring, detailed load-time metrics, monitoring from multiple locations, and browser-based transaction recording.


On the other hand, real user experience monitoring (RUM) allows recording and observation of real end-user interactions with the applications, providing information on how users navigate the applications, which URLs and functions they use, and with what performance. This is achieved by recording time-stamped availability (status, error codes, etc.) and performance data from an application and its components. RUM also helps in identifying the most commonly used or most problematic business transactions, so they can be properly configured for synthetic monitoring, as described previously.

Tjeerd Saijoen
User

In real-time monitoring, the load on the systems is different every time, based on the total number of users, applications, batch jobs, etc., while in synthetic monitoring we use what we call a robot, firing the same transaction at a fixed interval (for example, every hour). Because it is the same transaction every time, you can determine the performance of the transaction. If you do this in DevOps, you can monitor the transaction before actually going live and minimize the risk of performance problems before going into production.

SaadHussain
Real User

Synthetic monitoring is a method of monitoring your applications by simulating users, directing the path taken through the application. This provides information on the uptime and performance of your critical business transactions and the most common paths in the application. The simple reality is that there is no easy way to combine the accessibility, coherence, and manageability offered by a centralized system with the sharing, growth, cost, and autonomy advantages of a distributed system. It is here, at this intersection, that businesses turn to IT development and operations teams for guidance; APM tools enable them to negotiate these gaps.

Menachem D Pritzker
Below are the rankings. What do you think? Gartner reports these four solutions as Leaders: Cisco (AppDynamics), Dynatrace, New Relic, and Broadcom. These are the Visionaries: Splunk (SignalFx) and Datadog. There is only one Challenger: Microsoft. And there are eight Niche Players: Riverbed (Aternity), IBM, Instana, Oracle, SolarWinds, Tingyun, ManageEngine, and Micro Focus. Thoughts?
Ruan Van Staden
Real User

In the past we found Gartner useful in deciding where to start an investigation into APM, but the Magic Quadrant view is just not representative of capability. We still had to do the work to find a solution that works for us. One aspect we found annoying is that Gartner does not make a distinction between general APM tools and technology management tools. As an example, in our case: we have been using Dynatrace OneAgent for two years now due to the list of technologies that Dynatrace can monitor out of the box.
We also have Oracle Enterprise Manager (Oracle Cloud Control), which in my mind is not really an APM tool but rather a technology-specific infrastructure management tool. It is great at managing patching and deployments, but it does not have the APM functions that Dynatrace has. We are running an extensive technology stack that we depend on. Since we have a mix of Microsoft, Linux, Solaris, and AIX servers, using Dynatrace to get an application overview is important, but so is using Oracle EM and Microsoft tools to manage the technologies.

Bill Burke
Real User

I don't put too much thought into the Gartner reports anymore. The reviews are based on surveys from individuals provided to Gartner by the vendors.

Rob Salmon
Real User

I think that a major player in this space, eG Innovations, is missing. I have personally used the product to manage our server farms and cloud platforms since 2004 and could not be more pleased. Functionality, cost, ease of use, and ROI are all Magic Quadrant worthy, and they are a leader in root cause analysis. The #1 call is "why is it slow?" and eG gives you the answer, usually in four clicks. Huge value and huge ROI, in my humble opinion.

Radoslaw Runc
User

I cannot speak to the whole range of ranked products because I don't know them well enough, but I can speak about the current Broadcom offering. The platform seems to be strengthening: alongside the recently issued new release of APM, Broadcom introduced a new release of App Synthetic Monitor. The App Experience Analytics module for user experience monitoring is now also part of the APM license. More than that, APM, ASM, and App Experience Analytics are part of a broader AIOps family, next to Unified Infrastructure Management, NetOps (Spectrum), and an ML solution called Data Operational Intelligence (DOI). All these solutions are being glued together with automation mechanisms coming from the Broadcom One Automation platform.

FarruKh Tirmizi
User

I agree with the Gartner list as shared.


What is Application Performance Management (APM)?

Application performance management (APM) solutions are important for proactively monitoring and managing a software application's performance and availability. APM's scope further includes performance measurement for virtually any IT asset that affects end-user experience. The sign of the best APM tools is that they detect application performance issues in order to adhere to an agreed-upon service level. In particular, APM is focused on application response times under various load conditions. As part of this, APM also measures the compute resources required to support a given level of load.

According to members of the IT Central Station community, the best APM vendors serve multiple masters. Developers need to understand app performance characteristics in order to ensure an optimal software experience for end users. Business managers and IT department leaders use APM data to help make decisions about infrastructure and architecture.

As applications grow more complex and interdependent, application performance monitoring users express high expectations for potential APM toolsets. Accessibility, manageability and scalability are essential. Users argue that an effective APM tool must give business stakeholders accurate, understandable data while allowing developers to dive deeply into stored data over the long term.

DevOps users want app performance management tools to measure the deep internal transactions that take place inside an application or between integrated system elements. They want APM data in real time, across multiple application tiers, with transparency along the entire application process chain. Some refer to this as “full stack tracing.”
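
As a rough illustration of what such cross-tier tracing looks like at the code level, here is a minimal OpenTelemetry sketch in Python. The "checkout" flow is hypothetical, and the console exporter stands in for a real APM backend:

```python
# Sketch: nested spans across a hypothetical checkout flow,
# exported to the console instead of a real APM backend.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("apm-demo")

def query_inventory() -> None:
    with tracer.start_as_current_span("db.query_inventory"):
        pass  # the database call would be timed here

def checkout() -> None:
    # Parent span; the child spans below show up as one end-to-end trace.
    with tracer.start_as_current_span("checkout"):
        query_inventory()
        with tracer.start_as_current_span("payment.charge"):
            pass  # the payment gateway call would be timed here

if __name__ == "__main__":
    checkout()
```

Commercial APM agents automate this kind of instrumentation, but the resulting data is the same in spirit: nested, timed spans that expose where time is spent along the process chain.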

Ideally, APM data should be measured against user experience as a key performance indicator. For example, if a bottleneck is being caused by database latency, users want to understand the root cause so they can fix it immediately. This might require alerting based on patterns and “baselining.”

Some expect APM tools to enable the discovery of complex distributed application architecture or even microservices and containers. After all, not all application architecture is known at the outset, and it certainly changes over time. Users need APM tools to be proactive whether they are used in dev, test, QA or production environments.

The APM toolset itself should have low impact on application performance. The measurements it takes have to be easy to interpret and place into a business-friendly reporting output. For instance, IT Central Station members suggest that APM tools should offer a predefined customizable reporting capability, with high visibility and a capacity to export and report on large quantities of raw data.
