Top 8 Application Performance Management (APM) Tools

Dynatrace, Datadog, AppDynamics, BMC TrueSight Operations Management, Aternity, New Relic APM, ITRS Geneos, Azure Monitor
  1. I like the full-stack agents, the OneAgent, and the future dynamics. If you look at the APM sector, it is a very nice package to install.
  2. I have found some of the most valuable features to be the way everything comes together to give us a useful point of view. The panel is very attractive and customizable.
  3. The SAP monitoring element is very helpful. The solution helps us save a lot of time on certain tasks.
  4. The solution has a very good business event manager tool. The event management part of TrueSight Operations Management, in my experience, is probably the best in the market. You have endless flexibility: you can build your own rules, you have the MRL language, and you can implement any kind of logic on the alerts. It may be correlation, abstraction, or executing something as a result of the alerts. You have almost the whole range of options available for event management using the available customization.
  5. It is useful for working out whether there are any issues in the network or between the endpoints, and for working out any performance issues. It has been useful for a lot of stuff around Teams: our customers like to know what's happening with Teams when they call in. It is helpful for easily profiling users; it records all the applications being used by each user, and you can see what users are doing. It is very good in terms of performance: you don't have to wait forever to get reports or results. It is quite quick to get everything that you need out of the software.
  6. Working with the solution is very easy. It's user-friendly. The breakdown of the response time of different components and getting in-depth details of the slow component are the most valuable features. It is easy to use, and it gets the job done.
  7. The solution is used across the entire investment banking division, covering environments such as electronic trading, algo-trading, fixed income, FX, etc. It monitors that environment and enables a bank to significantly reduce downtime. Although hard to measure, since implementation we have probably seen some increased stability because of it, and we have definitely seen teams become a lot more aware of their environment. Consequently, we can be more proactive in challenging and improving previously undetected weaknesses.
  8. Azure Monitor is really just a source for Dynatrace. It's just collecting data and monitoring the environment and the infrastructure. It is fairly good at that.

Advice From The Community

Read answers to top Application Performance Management (APM) questions. 501,499 professionals have gotten help from our community of experts.
Ariel Lindenfeld
What do you think is the most important aspect to look for when evaluating APM solutions?
it_user342780 (Senior Software Engineer Team Lead at THE ICONIC)
Vendor

Speed to get data into the platform is one of our most important metrics. We NEED to know what is going on right now, not 3-4 minutes ago.

it_user364554 (COO with 51-200 employees)
Vendor

Hi,
Full disclosure: I am the COO at Correlsense.
Two years ago I wrote a post about exactly that, "Why APM Projects Fail"; I think it can guide you through the most important aspects of APM tools.

Take a look, and feel free to leave a comment:
http://www.correlsense.com/enterprise-apm-projects-fail/
Elad Katav

David Fourie
Real User

Full-stack end-to-end monitoring, including frontend and backend server profiling, real user monitoring, synthetic monitoring, and root-cause deep-dive analysis. Ease of use and an intuitive UX.

it_user229734 (IT Technical Testing Consultant at adhoc International)
Vendor

To evaluate/benchmark APM solutions, we can use the five dimensions defined by Gartner:
1. End-user experience monitoring: capturing data about how end-to-end application availability, latency, execution correctness, and quality appear to the end user.
2. Runtime application architecture discovery, modeling, and display: discovering the various software and hardware components involved in application execution, and the array of possible paths across which those components could communicate that, together, enable that involvement.
3. User-defined transaction profiling: tracing events as they occur among the components or objects as they move across the paths discovered in the second dimension, generated in response to a user's attempt to have the application execute what the user regards as a logical unit of work.
4. Component deep-dive monitoring in an application context: fine-grained monitoring of the resources consumed by, and events occurring within, the components discovered in the second dimension.
5. Analytics: marshalling a variety of techniques (including behavior-learning engines, complex-event processing (CEP) platforms, log analysis, and multidimensional database analysis) to discover meaningful and actionable patterns in the typically large datasets generated by the first four dimensions of APM.

Separately, we tried to benchmark some APM solutions internally based on the following evaluation groups:
Monitoring capabilities
Technologies and framework support
Central PMDB (Performance Management DataBase)
Integration
Service modeling and monitoring
Performance analysis and diagnostics
Alerts/event Management
Dashboard and Visualization
Setup and configuration
User experience
and we got interesting results.

it_user178302 (Senior Engineer at a financial services firm with 10,001+ employees)
Real User

Most vendors have similar transaction monitoring capabilities, so I look at the end-user experience monitoring features to differentiate: not only RUM (mobile and web) but also active monitoring through synthetics.

Amarnadh Sai, ITIL, PRINCE2
Real User

1. Ability to correlate
2. Machine learning/AI-based thresholds
3. Ease of configuration (in bulk)

Ravi Suvvari
Real User

Tracing ability down to the record level, latency, the capability to make good predictions in advance, history storage, pricing, support, etc.

it_user510261 (Services Consultant at a financial services firm with 501-1,000 employees)
Vendor

I check for a customer experience tool. I think organizations have to improve their client experience, and if you don't have a tool to show the points to improve, that is very hard to do!

Rony_Sklar
With so many APM tools available, it can be hard for businesses to choose the right one for their needs.  With this in mind, what is your favorite APM tool that you would happily recommend to others?  What makes it your tool of choice?
GustavoTorres
User

My favorite APM tool is Dynatrace; the OneAgent handling enables fast and agile deployment.

Hani Khalil
Real User

I have tested a lot of APM tools, and most of them do the same job with different techniques and different interfaces.

One of the best tools I tested is called eG Enterprise; it provided the required info and data to our technical team. We also found great support from the eG technical team during the implementation. One of the main factors was cost, and they can challenge a lot of vendors on that.

Abbasi Poonawala (Yahoo!)
Real User

My favourite APM tool is New Relic. The monitoring dashboard shows exact method calls with line numbers, including external dependencies, for apps of any size and complexity.

Pradeep Saxena
Real User

My favourite APM tool is Azure Monitor; with it I can check Application Insights. I can also check when an application crashed.

Ravi Suvvari
Real User

Agree, well explained.

David Fourie
Real User

Dynatrace has been our tool of choice for many years, since the days of AppMon, and lately Dynatrace OneAgent with AI-driven monitoring and reporting.

Mark Kaplan
Real User

Dynatrace is my only APM and has been for 3 years now.

Patrick Stack (Accedian)
Vendor

Some may call me a little biased, but I would strongly suggest taking a look at Accedian, from an overall total cost of ownership perspective as well as ease of use and deployment. This one is tough to beat!

Rony_Sklar
How is synthetic monitoring used in Application Performance Management (APM)? 
Brian Philips
User

There is actually a place and a need for both synthetic and real-user experience monitoring. If you look at the question from the point of view of what you are trying to learn, detect, and then investigate, the answer should be that you want to be proactive in ensuring a positive end-user experience.

I love real user traffic. There are a number of metrics that can be captured and measured, and what can be learned is controlled by the type and kind of data source used: NetFlow, logs, and Ethernet packets. Response time, true client user location, the application command executed, the response to that command from the application (including exact error messages), direct indicators of server and client physical or virtual performance; the list goes on and on. Highly valuable information for app-ops, networking, cloud, and data center teams.

Here is the challenge, though: you need to have real user traffic to measure user traffic. The number of transactions and users, the volumes of traffic, and the paths of those connections are great for measuring over time, for baselining, for triage, and for finding correlations between metrics when user experience is perceived as poor. The variation in these same metrics, though, makes them poor candidates for measuring efficiency and proactive availability. Another challenge is that real user traffic is now often encrypted, so exposing that level of data has a cost that is prohibitive outside of the data center, cloud, or co-lo; these aspects are often controlled by different teams, so coordinating translations and time intervals of measurements between the different data sources is a C-level initiative.

Synthetic tests are fixed in number, duration, transaction type, and location. A single team can administer them, but everyone can use the data. Transaction types and commands (tests) can be scaled up and down as needed for new versions of applications and microservices living in containers, virtual hosts, clusters, physical hosts, co-los, and data centers. These synthetic transactions also determine availability and predict end-user experience long before there are any actual end users. Imagine an organization that can generate transactions, and even make phone calls of all types and kinds in varying volumes, a few hours before a geographic workday begins. If there is no version change in software or change control in networking or infrastructure, and there is a change from baseline or a failure to transact, IT has time to address the issue before a real user begins using the systems or services. These transactions, fixed in number and time, are very valuable in anyone's math for comparison and SLA measurements, and they do not need to be decrypted to get a command-level measurement.

Another thing to consider is that these synthetic tests also address SaaS and direct cloud access, as well as third-party collaboration access (WebEx, Zoom, Teams, etc.). Some vendors' offerings integrate with their real-user measurements and baselines out of the box, to realize the benefit of both and provide even more measurements, calculations, and faster triage. Others may offer integration points like APIs or webhooks and leave it up to you.

The value and the ROI are not so much one or the other. Those determinations for an organization should be measured by how you respond to my original answer: you want to be proactive in ensuring a positive end-user experience.
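
To make the idea of fixed, repeatable synthetic transactions concrete, here is a minimal Python sketch; the URLs, per-transaction baselines, and the simple 2x-baseline alert rule are hypothetical assumptions, and real synthetic agents run checks like these on a schedule from multiple locations.

```python
import time
import requests

# Fixed synthetic transactions: same endpoints, same order, every run,
# which is what makes the measurements comparable over time.
TRANSACTIONS = {
    "login":   "https://example.com/login",    # hypothetical endpoints
    "catalog": "https://example.com/catalog",
}
BASELINE_S = {"login": 0.8, "catalog": 1.2}    # assumed per-transaction baselines

def run_checks() -> None:
    for name, url in TRANSACTIONS.items():
        start = time.monotonic()
        try:
            resp = requests.get(url, timeout=10)
            elapsed = time.monotonic() - start
            # Alert when the response is bad or latency drifts far from baseline.
            if resp.status_code != 200 or elapsed > 2 * BASELINE_S[name]:
                print(f"ALERT {name}: status={resp.status_code} latency={elapsed:.2f}s")
            else:
                print(f"OK {name}: {elapsed:.2f}s")
        except requests.RequestException as exc:
            # A failed synthetic transaction flags unavailability before real users arrive.
            print(f"ALERT {name}: unreachable ({exc})")

if __name__ == "__main__":
    run_checks()
```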

Diego Caicedo Lescano
Real User

Synthetic monitoring and real user monitoring (RUM) are two very different approaches to measuring how your systems are performing. While synthetic monitoring relies on automated, simulated tests, real user monitoring records the behavior of actual visitors on your site and lets you analyze and diagnose what they experienced.

Synthetic monitoring is active, while real user monitoring is passive; that means the two complement each other.

NetworkOb0a3 (Network Operation Center Team Leader at a recruiting/HR firm with 1,001-5,000 employees)
Real User

I think different shops may use the term differently. In regard to an industry standard, the other replies may be more appropriate.



I can tell you that where I work, we refer to SEUM (synthetic end-user monitoring), UX, and synthetic monitoring (both user experience monitors) as simulating actual human activities and setting various types of validations. These validations may be load times for images, text, or pages, or validating an expected action based on the steps completed by the monitor. We target all aspects of infrastructure and platform for standard monitoring, and then, for any user-facing service, we try to place at least one synthetic/UX monitor on top of the process.

I often find the most value from our synthetics comes in the form of historical trending. A great example of a NOC win: patch X was applied, and we noticed a consistent 3-second additional time required to complete UX monitor step Y. Another value from synthetics is quickly assessing actual user impact. More mature orgs may have this all mapped out, but I have found that many NOCs will see alarms on several services and not be able to determine what this means to an actual user community until feedback comes in via tickets or user-reported issues. Seeing the standard alarms tells me what is broken; seeing which steps are failing in the synthetics tells me what this means to our users.


I think that one of the great benefits of an open forum like this is getting to consider how each org does things. There are no wrong answers; some info just applies better to what you may be asking.
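
As a rough illustration of the per-step validations described above, here is a minimal Python sketch of a SEUM-style journey; the URLs, expected strings, and time budgets are hypothetical, and real SEUM tools drive an actual browser rather than raw HTTP requests.

```python
import time
import requests

STEPS = [
    # (step name, url, expected substring, time budget in seconds) -- all hypothetical
    ("home",   "https://example.com/",       "Welcome", 1.5),
    ("search", "https://example.com/search", "Results", 2.0),
]

def run_journey() -> None:
    session = requests.Session()
    for name, url, expected, budget in STEPS:
        start = time.monotonic()
        resp = session.get(url, timeout=10)
        elapsed = time.monotonic() - start
        passed = expected in resp.text and elapsed <= budget
        # Per-step results show *which* user-facing action degraded, and logging
        # elapsed times over days gives the historical trending mentioned above.
        print(f"step={name} status={resp.status_code} latency={elapsed:.2f}s passed={passed}")

if __name__ == "__main__":
    run_journey()
```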

Sunder Rajagopalan
Real User

Synthetic monitoring helps simulate traffic from various geographic locations 24/7 at some regular frequency, say every 5 minutes, to make sure your services are available and performing as expected. In addition, running synthetic monitoring with alerts on some of your critical services that depend on external connections, like payment gateways, will help you catch any issues with those external connections proactively and address them before your users experience any issue with your services.
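
A minimal Python sketch of the recurring dependency probe just described; the health-check URL and 5-minute interval are hypothetical, and a real deployment would run a check like this from several geographic locations rather than one loop.

```python
import time
import requests

GATEWAY_HEALTH_URL = "https://payments.example.com/health"  # hypothetical dependency
INTERVAL_S = 300  # every 5 minutes, per the cadence suggested above

def probe_forever() -> None:
    while True:
        try:
            resp = requests.get(GATEWAY_HEALTH_URL, timeout=10)
            if resp.status_code != 200:
                # A real setup would page the on-call or open an incident here.
                print(f"ALERT: gateway returned {resp.status_code}")
        except requests.RequestException as exc:
            print(f"ALERT: gateway unreachable ({exc})")
        time.sleep(INTERVAL_S)

if __name__ == "__main__":
    probe_forever()
```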

Michael Sydor
Real User

Synthetics, for production, are best used when there is little or no traffic, to help confirm that your external access points are functioning. They can also be used to stress-test components or systems, simulating traffic to test firewall capacity or message queue behavior, among many other cases. You can also use synthetics to do availability testing during your operational day, again usually directed at your external points. Technology for cloud monitoring is generally synthetics, and the ever-popular speedtest.net is effectively doing synthetics to assess internet speed. The challenge with synthetics is maintaining those transactions. They need to be updated every time you make changes in your code base (that affect the transactions) and to cover all of the scenarios you care about. There are also the hardware requirements to support the generation and analysis of what can quickly become thousands of different transactions. Often this results in synthetics being run every 30 minutes (or longer), which, of course, defeats their usefulness as an availability monitor.


Real user monitoring is just that: real transactions, not simulated. You use the transaction volume to infer the availability of the various endpoints, and baselines for transaction type and volume to assess that availability. This eliminates the extra step of keeping the synthetics up to date and living with the intervals at which you have visibility into actual traffic conditions. But it takes extra work to decide which transactions are significant and to establish the baseline behaviors, especially when you have seasonality or time-of-day considerations that vary greatly.


However, I'm seeing that the best measure of transaction performance is to add user sentiment to your APM. Don't guess at what the transaction volume means; simply ask the user whether things are going well or not! This helps you narrow down which activities are significant, and thus which KPIs need to be in your baseline.


A good APM practice will use both synthetics and real-user monitoring, where appropriate! You do not choose one over the other. You have to be mindful of where each tool has its strengths, what visibility it offers, and the process it needs for effective use.
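
As a toy illustration of the volume-baseline idea above, the Python sketch below builds a per-hour-of-day baseline from historical RUM transaction counts and flags the current interval when volume drops far below normal for that hour; the sample counts and the three-sigma rule are invented for illustration.

```python
from statistics import mean, stdev

# (hour_of_day, transaction_count) pairs from past days -- hypothetical history
history = [(9, 950), (9, 1010), (9, 980), (9, 1020),
           (3, 40), (3, 55), (3, 35), (3, 50)]

def baseline_for(hour: int):
    counts = [c for h, c in history if h == hour]
    return mean(counts), stdev(counts)

def looks_unavailable(hour: int, current_count: int, sigmas: float = 3.0) -> bool:
    # Volume far below the norm for this hour suggests an availability problem,
    # which is how real traffic is used to infer endpoint availability.
    mu, sd = baseline_for(hour)
    return current_count < mu - sigmas * sd

# 100 transactions at 09:00 is far below the ~990 baseline -> likely an outage,
# while the same count at 03:00 is perfectly normal (time-of-day seasonality).
print(looks_unavailable(9, 100))   # True
print(looks_unavailable(3, 100))   # False
```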



ATHANASSIOS FAMELIARIS
Real User

Synthetic monitoring refers to the proactive monitoring of the performance and availability of applications' components and business transactions. Using this technique, the availability and performance of specific critical business transactions per application are monitored by simulating user interactions with web applications and by running transaction simulation scripts.


By simulating user transactions, the specific business function is constantly tested for availability and performance. Moreover, synthetic monitoring provides detailed information and feedback on the reasons for performance degradation and loss of availability, and with this information performance and availability issues can be pinpointed before users are impacted. Tools supporting synthetic monitoring normally include features like complete performance monitoring, continuous synthetic transaction monitoring, detailed load-time metrics, monitoring from multiple locations, and browser-based transaction recording.


On the other hand, real user experience monitoring (RUM) allows recording and observing real end-user interactions with the applications, providing information on how users navigate the applications, which URLs and functions they use, and with what performance. This is achieved by recording time-stamped availability (status, error codes, etc.) and performance data from an application and its components. RUM also helps in identifying the most commonly used or most problematic business transactions, so that they can be properly configured for synthetic monitoring, as described previously.
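
As a toy sketch of the RUM data collection just described, the Python snippet below accepts time-stamped beacons (status, duration) posted by real user sessions; the endpoint, port, and beacon fields are hypothetical, and a real RUM agent would batch and persist these rather than print them.

```python
import json
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

class BeaconHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        beacon = json.loads(self.rfile.read(length))
        # e.g. {"url": "/checkout", "status": 200, "duration_ms": 412}
        record = {"ts": time.time(), **beacon}   # time-stamp each observation
        print("RUM beacon:", record)             # a real collector would aggregate this
        self.send_response(204)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), BeaconHandler).serve_forever()
```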

Tjeerd Saijoen
Vendor

In real-user monitoring, the load on the systems is different every time, based on the total number of users, applications, batch jobs, etc., while in synthetic monitoring we use what we call a robot, firing the same transaction, for example, every hour. Because it is the same transaction every time, you can determine the performance of the transaction. If you do this in DevOps, you can monitor the transaction before actually going live and minimize the risk of performance problems before going into production.
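
A minimal sketch of running that same robot transaction as a pre-release gate in a CI pipeline, in pytest style; the staging URL and latency budget are hypothetical assumptions.

```python
import time
import requests

STAGING_URL = "https://staging.example.com/checkout"  # hypothetical staging endpoint
LATENCY_BUDGET_S = 1.0                                # assumed performance budget

def test_checkout_transaction_performance():
    start = time.monotonic()
    resp = requests.get(STAGING_URL, timeout=10)
    elapsed = time.monotonic() - start
    # Failing the build here surfaces a regression before it reaches production.
    assert resp.status_code == 200
    assert elapsed <= LATENCY_BUDGET_S, f"transaction took {elapsed:.2f}s"
```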

SaadHussain
Real User

Synthetic monitoring is a method of monitoring your applications by simulating users, directing the path taken through the application. This provides information on the uptime and performance of your critical business transactions and the most common paths in the application. The simple reality is that there is no easy way to combine the accessibility, coherence, and manageability offered by a centralized system with the sharing, growth, cost, and autonomy advantages of a distributed system. It is here, at this intersection, that businesses turn to IT development and operations teams for guidance; APM tools enable them to negotiate these gaps.


Application Performance Management (APM) Articles

Tjeerd Saijoen
CEO at Rufusforyou
May 06 2021

How are security and performance related to each other?

Today a lot of monitoring vendors are on the market; most of the time they focus on a particular area, for example APM (application performance monitoring) or infrastructure monitoring. Is this enough to detect and fix all problems?

How are performance and security related?

Our landscape is changing rapidly. In the past, we had to deal with one system. Today we are dealing with many systems in different locations: for example, your own data center (on-premises); then on-premises plus, for example, AWS; and now on-premises plus AWS plus Azure, and it doesn't stop there. Hackers now have more locations and a better chance to find a weak spot in the chain, and if performance slows down, where is the problem?

Because of this, you need many different monitoring tools, and they still don't monitor your application or OS parameter settings. For example, I have a web server with a parameter that sets the number of concurrent users to 30. A monitoring tool will probably tell you more memory is required; you add more expensive memory and you get the same result, while the real solution is to adjust that parameter setting.
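
As an illustrative sketch of that point, the Python snippet below checks whether the configured concurrency limit, rather than memory, is the bottleneck; the config path, parameter name, and alert wording are hypothetical.

```python
import re

CONFIG_PATH = "/etc/mywebserver/server.conf"   # hypothetical config file

def configured_max_users(path: str = CONFIG_PATH) -> int:
    # Read the concurrency limit straight from the server configuration.
    with open(path) as fh:
        for line in fh:
            m = re.match(r"\s*max_concurrent_users\s*=\s*(\d+)", line)
            if m:
                return int(m.group(1))
    raise ValueError("max_concurrent_users not set")

def check(active_users: int) -> str:
    limit = configured_max_users()
    if active_users >= limit:
        # The holistic alert: raise the parameter instead of buying memory.
        return f"At the configured limit ({limit}); raise max_concurrent_users."
    return "Concurrency limit is not the bottleneck."

# e.g. print(check(active_users=30))
```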

We have had several applications running for years while the total number of end users grew rapidly; most people don't adjust the parameters because they are not aware that they exist or what the right values are.

How are performance and security related to each other? If systems are compromised, you will see unusual behavior in performance, for example a performance drop while more CPU is allocated. For this, you need monitors capable of looking holistically at the complete environment, checking parameter settings, and alerting on unusual behavior. Also look for a single dashboard to check your environment, including the cloud; don't look for a sexy dashboard, because a functional dashboard is more important. What matters is whether the tool can advise you on what to do: does it just tell you there is a problem in the database, or does it tell you that the buffer setting on DB xxx needs to be adjusted from 2400 MB to 4800 MB?

If we have the right settings, performance will increase, and better performance means more transactions. More transactions mean more sales and more business.

Caleb Miller
Good article, but the spelling and grammatical errors are pretty blatant.
Tjeerd Saijoen
CEO at Rufusforyou
Mar 29 2021

End-users can connect with different options: via the cloud (AWS, Microsoft Azure, or other cloud providers), via a SaaS solution, or from their own datacenter. The next option is multi-cloud and hybrid; this makes it difficult to find the reasons for a performance problem.

Now users have to deal with many options for their network. You have to take into account problems such as latency and congestion, and a new layer has now been added because of Covid-19. Normally you work in an office as an end-user and your network team takes care of all the problems. Now everybody is working from home, and many IoT devices are connected to our home networks; are they protected? It is easy for a hacker to use these kinds of devices to enter your office network.

How can we prevent all of this? With a security tool like QRadar or Riverbed. The most important thing to know is that you don't need an APM solution only. Many times I hear people say, "We have a great APM solution." Well, this is great for application response times; however, an enterprise environment has many more components, like the network, load balancers, switches, and so on. Also, if you're running Power machines you have to deal with microcode and sometimes with HACMP; an APM solution will not monitor this.

Bottom line: you need a holistic solution.  
