1. The real-user monitoring is mostly used to gauge performance differences across customers in multitenant applications, so we can discern whether there are any local network or client-facing issues when we compare customers. It is quite important for us to be able to distinguish a client-side issue from a problem in a managed feature, because we're essentially providing managed services of business applications.
  2. Provides monitoring around business processes, not just servers, applications, etc. E.g., with complex systems, where a business process passes across multiple applications, the business needs us to monitor the health of the process, not just a segment of the application.
  3. Datadog's ability to group and visualize the servers and the data makes root cause analysis relatively easy. This is definitely a good product and I would consider them one of the leaders within the application monitoring and cloud monitoring space.
  4. The service maps that it creates, the health maps that it creates, the insights that it provides, etc., are all quite useful. End-user Synthetics and monitoring are very good.
  5. The event management part of TrueSight Operations Management, in my experience, is probably the best in the market. You have endless flexibility. You can build your own rules, you have the MRL language, and you can implement any kind of logic on the alerts. It may be correlation, abstraction, or executing something as a result of the alerts. You have almost the whole range of options available for event management using the available customization.
  6. One thing we're utilizing in Geneos is Gateway-SQL. That's really helpful for us. Using Gateway-SQL, we are able to merge two different views into one. Suppose we have to check something in a log and something in a database and do a comparison before publishing a result; we can achieve that using Gateway-SQL (see the sketch after this list).
  7. The feature that I have found the most valuable is its user interface. The features that I find most valuable are related to network monitoring.
  8. Azure Monitor is really just a source for Dynatrace. It's just collecting data and monitoring the environment and the infrastructure. It is fairly good at that.
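The merge-and-compare pattern described in item 6 can be sketched in general-purpose terms as well (an illustrative Python analogue, not Gateway-SQL syntax; the log pattern, table, and file names are hypothetical):

```python
import re
import sqlite3

# Hypothetical check: count "order processed" lines in an application log,
# count completed orders in a database, and compare before publishing a result.
def processed_from_log(path: str) -> int:
    with open(path) as fh:
        return sum(1 for line in fh if re.search(r"order processed", line))

def processed_from_db(db_path: str) -> int:
    with sqlite3.connect(db_path) as conn:
        (count,) = conn.execute(
            "SELECT COUNT(*) FROM orders WHERE status = 'done'"
        ).fetchone()
    return count

def publish(log_count: int, db_count: int) -> None:
    status = "OK" if log_count == db_count else "MISMATCH"
    print(f"log={log_count} db={db_count} -> {status}")

# publish(processed_from_log("app.log"), processed_from_db("orders.db"))  # hypothetical files
```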

Advice From The Community

Read answers to top Application Performance Management (APM) questions.
Rony_Sklar
With so many APM tools available, it can be hard for businesses to choose the right one for their needs.  With this in mind, what is your favorite APM tool that you would happily recommend to others?  What makes it your tool of choice?
Hani Khalil
Real User

I have tested a lot of APM tools and most of them do the same job with different techniques and different interfaces.

One of the best tools I tested is called eG Enterprise; it provided the required info and data to our technical team. We also found great support from the eG technical team during the implementation. One of the main factors was cost, and they can challenge a lot of vendors on that.

Rony_Sklar
Community Manager

@Hani Khalil ​Thanks for sharing your experience with eG Enterprise. If you're willing to, we'd love it if you could leave a review to help other users who are researching their options: https://www.itcentralstation.c...

Ravi Suvvari
Real User

@Hani Khalil Agree, well explained.

reviewer1397637 (Vice President Derivatives Ops IT at a financial services firm with 10,001+ employees)
Real User

My favourite APM tool is New Relic. The monitoring dashboard shows exact method calls with line numbers, including external dependencies, for apps of any size and complexity.

Pradeep Saxena
Real User

My favourite APM tool is Azure Monitor; with it I can check Application Insights. I can also check when an application crashed.

Rony_Sklar
Community Manager

@Pradeep Saxena Thanks for your input! Can you share a bit more about what makes Azure your favorite tool?

GustavoTorres
User

My favorite APM tool is Dynatrace; the OneAgent handling enables fast and agile deployment.

Rony_Sklar
Community Manager

@GustavoTorres ​Thanks for sharing your favorite tool! Do you have any insights to share about other tools that you used in the past?

Patrick Stack (Accedian)
Vendor

Some may call me a little biased, but I would strongly suggest taking a look at Accedian. From an overall total cost of ownership perspective, as well as ease of use and deployment, this one is tough to beat!

Ravi Suvvari
Real User

Agree, well explained.

Rony_Sklar
How is synthetic monitoring used in Application Performance Management (APM)? 
Brian Philips
User

There is actually a place and a need for both synthetic and real-user experience monitoring. If you look at the question from the point of view of what you are trying to learn, detect, and then investigate, the answer should be that you want to be pro-active in ensuring a positive end-user experience.

I love real user traffic. There are a number of metrics that can be captured and measured. What can be learned is controlled by the type and kind of data source used: NetFlow, logs, and Ethernet packets. Response time, true client user location, the application command executed, the response to that command from the application including exact error messages, direct indicators of server and client physical or virtual performance; the list goes on and on. Highly valuable information for APP-OPS, networking, cloud, and data center teams.

Here is the challenge, though: you need to have real user traffic to measure user traffic. The number of transactions and users, the volumes of traffic, and the paths of those connections are great for measuring over time, for baselining, for triage, and for finding correlations between metrics when user experience is perceived as poor. The variation in these same metrics, though, makes them poor candidates for measuring efficiency and pro-active availability. Another challenge is that real user traffic is now often encrypted, so exposing that level of data has a cost that is prohibitive outside of the data center, cloud, or Co-Lo. These aspects are often controlled by different teams, so coordinating translations and time intervals of measurements between the different data sources is a "C"-level initiative.

Synthetic tests are fixed in number, duration, transaction type, and location. A single team can administer them, but everyone can use the data. Transaction types and commands can be scaled up and down as needed for new versions of applications and micro-services living in containers, virtual hosts, clusters, physical hosts, Co-Los, and data centers. These synthetic transactions also determine availability and predict end-user experience long before there are any actual end-users. Imagine an organization that can generate transactions, and even make phone calls of all types and kinds in varying volumes, a few hours before a geographic workday begins. If there has been no version change in software and no change control in networking or infrastructure, yet there is a change from baseline or a failure to transact, IT has time to address the issue before a real user begins using the systems or services. Because these transactions are fixed in number and time, they are very valuable in anyone's math for comparison and SLA measurements, and they do not need to be decrypted to get a command-level measurement.

Another thing to consider is that these synthetic tests also address SaaS and direct cloud access, as well as 3rd-party collaboration access (Webex, Zoom, Teams, etc.). Some vendors' offerings integrate with their real-user measurements and baselines out of the box, to realize the benefit of both and provide even more measurements, calculations, and faster triage. Others may offer integration points like APIs or webhooks and leave it up to you.

The value and the ROI are not so much one or the other. Those determinations for an organization should be measured by how you responded to my original answer: "you want to be pro-active in ensuring a positive end-user experience."

Diego Caicedo Lescano
Real User

Synthetic monitoring and real user monitoring (RUM) are two very different approaches that can be used to measure how your systems are performing. While synthetic monitoring relies on automated, simulated tests, real user monitoring records the behavior of actual visitors on your site and lets you analyze and diagnose what they experience.

Synthetic monitoring is active, while real user monitoring is passive; that means the two complement each other.

NetworkOb0a3 (Network Operation Center Team Leader at a recruiting/HR firm with 1,001-5,000 employees)
Real User

I think different shops may use the term differently. In regard to an industry standard, the other replies may be more appropriate.

I can tell you that where I work we refer to SEUM (Synthetic End User Monitoring), UX, and synthetic monitoring (both are user experience monitors) as simulating actual human activities and setting various types of validations. These validations may be load times for images, text, and pages, or validating an expected action based on the steps completed by the monitor. We target all aspects of infrastructure/platform for standard monitoring, and then for any user-facing service we try to place at least one synthetic/UX monitor on top of the process. I often find the most value from our synthetics comes in the form of historical trending. A great example of a NOC win: patch X was applied and we noticed a consistent 3-second additional time required to complete UX monitor step Y. Another value from synthetics is quickly assessing actual user impact. More mature orgs may have this all mapped out, but I have found that many NOCs will see alarms on several services yet not be able to determine what this means to the actual user community until feedback comes in via tickets or user-reported issues. Seeing the standard alarms tells me what is broken; seeing which steps are failing in the synthetics tells me what this means to our users.

I think one of the great benefits of an open forum like this is getting to consider how each org does things. There are no wrong answers, just some info that applies better to what you may be asking.

Sunder Rajagopalan
Real User

Synthetic monitoring helps simulate traffic from various geographic locations 24/7 at some regular frequency, say every 5 minutes, to make sure your services are available and performing as expected. In addition, running synthetic monitoring with alerts on some of your critical services that depend on external connections, like payment gateways, will help you catch any issues with those connections proactively and address them before your users experience any issue with your services.
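As a rough illustration of that pattern, a bare-bones synthetic check might look like this (a sketch in Python; the endpoints, interval, and latency budget are hypothetical placeholders, not any particular product's probe):

```python
import time
import urllib.request

# Hypothetical endpoints standing in for probes of critical services.
ENDPOINTS = [
    "https://example.com/health",
    "https://example.com/payments/ping",
]
LATENCY_BUDGET_SECS = 2.0   # alert threshold; tune per service
CHECK_INTERVAL_SECS = 300   # e.g., every 5 minutes

def alert(message: str) -> None:
    # Placeholder: in practice this would page on-call or open a ticket.
    print("ALERT:", message)

def check(url: str) -> None:
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            elapsed = time.perf_counter() - start
            if resp.status != 200:
                alert(f"{url} returned HTTP {resp.status}")
            elif elapsed > LATENCY_BUDGET_SECS:
                alert(f"{url} slow: {elapsed:.2f}s > {LATENCY_BUDGET_SECS}s")
    except Exception as exc:  # timeouts, DNS failures, HTTP errors, etc.
        alert(f"{url} unreachable: {exc}")

for url in ENDPOINTS:
    check(url)
# In practice this would run on a schedule (cron, or a loop sleeping CHECK_INTERVAL_SECS).
```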

Michael Sydor
Real User

Synthetics, for production, are best used when there is little or no traffic, to help confirm that your external access points are functioning. They can also be used to stress-test components or systems: simulating traffic to test firewall capacity or message queue behavior, and many other cases. You can also use synthetics to do availability testing during your operational day, again usually directed at your external points. Technology for cloud monitoring is generally synthetics, and the ever-popular speedtest.net is effectively doing synthetics to assess internet speed. The challenge with synthetics is maintaining those transactions. They need to be updated every time you make changes in your code base (that affect the transactions), and they need to cover all of the scenarios you care about. There are also the hardware requirements to support the generation and analysis of what can quickly become thousands of different transactions. Often this results in synthetics being run every 30 minutes (or longer), which, of course, defeats their usefulness as an availability monitor.


Real user monitoring is just that: real transactions, not simulated. You use the transaction volume to infer the availability of the various endpoints, and baselines for transaction type and volume to assess that availability. This eliminates the extra step of keeping the synthetics up to date and living with the intervals at which you have visibility into actual traffic conditions. But it takes extra work to decide which transactions are significant and to establish the baseline behaviors, especially when you have seasonality or time-of-day considerations that vary greatly.
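To make the volume-baseline idea concrete, here is a minimal sketch (Python with made-up counts; real APM baselining handles seasonality far more carefully):

```python
from statistics import mean, stdev

# Observed checkout transactions per 5-minute interval, keyed by hour of day,
# collected over previous weeks (hypothetical data illustrating time-of-day baselines).
history = {9: [120, 131, 118, 125], 13: [210, 198, 220, 205]}

def is_anomalous(hour: int, current_count: int, sigmas: float = 3.0) -> bool:
    """Flag the interval if volume falls well below the hour's baseline."""
    samples = history[hour]
    baseline, spread = mean(samples), stdev(samples)
    return current_count < baseline - sigmas * spread

# A sudden drop at 9am suggests the endpoint may be unavailable,
# inferred purely from real-user volume rather than an active probe.
print(is_anomalous(9, current_count=15))  # True
```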


However, I'm seeing that the best measure of transaction performance is to add user sentiment to your APM. Don't guess at what the transaction volume means; simply ask the user whether things are going well or not! This helps you narrow down which activities are significant, and thus which KPIs need to be in your baseline.


A good APM practice will use both synthetics and real-user monitoring, where appropriate! You do not choose one over the other. You have to be mindful of where each tool has its strengths, what visibility it offers, and the process it needs for effective use.


ATHANASSIOS FAMELIARIS
Real User

Synthetic monitoring refers to proactive monitoring of the performance and availability of applications' components and business transactions. Using this technique, the availability and performance of specific, critical business transactions per application are monitored by simulating user interactions with web applications and by running transaction simulation scripts.


By simulating user transactions, the specific business transaction is constantly tested for availability and performance. Moreover, synthetic monitoring provides detailed information and feedback on the reasons for performance degradation and loss of availability, and with this information performance and availability issues can be pinpointed before users are impacted. Tools supporting synthetic monitoring normally include features like complete performance monitoring, continuous synthetic transaction monitoring, detailed load-time metrics, monitoring from multiple locations, and browser-based transaction recording.
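For instance, a transaction simulation script of that kind might look roughly like the following (an illustrative Python/Selenium sketch; the URL, element IDs, and credentials are hypothetical placeholders, not any specific vendor's recorder output):

```python
import time

# Requires: pip install selenium (plus a ChromeDriver on the PATH).
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    start = time.perf_counter()
    driver.get("https://example.com/login")  # hypothetical app URL
    driver.find_element(By.ID, "username").send_keys("synthetic_user")
    driver.find_element(By.ID, "password").send_keys("not-a-real-password")
    driver.find_element(By.ID, "submit").click()
    # Validate the expected landing page, as a real user would see it.
    assert "Dashboard" in driver.title
    elapsed = time.perf_counter() - start
    print(f"login transaction completed in {elapsed:.2f}s")
finally:
    driver.quit()
```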


On the other hand, real user monitoring (RUM) allows recording and observation of real end-user interactions with the applications, providing information on how users navigate in the applications, what URLs and functions they use, and with what performance. This is achieved by recording time-stamped availability (status, error codes, etc.) and performance data from an application and its components. RUM also helps in identifying the most commonly used or most problematic business transactions so they can be properly configured for synthetic monitoring, as described previously.

Tjeerd Saijoen
User

In real-user monitoring the load on the systems is different every time, based on the total number of users, applications, batch jobs, etc., while in synthetic monitoring we use what we call a robot, firing the same transaction every hour, for example. Because it is the same transaction every time, you can determine the performance of the transaction. If you do this in DevOps, you can monitor the transaction before actually going live and minimize the risk of performance problems before going into production.

SaadHussain
Real User

Synthetic monitoring is a method of monitoring your applications by simulating users, directing the path taken through the application. This provides information on the uptime and performance of your critical business transactions and the most common paths in the application. The simple reality is that there is no easy way to combine the accessibility, coherence, and manageability offered by a centralized system with the sharing, growth, cost, and autonomy advantages of a distributed system. It is here, at this intersection, that businesses turn to IT development and operations teams for guidance; APM tools enable them to negotiate these gaps.

Menachem D Pritzker
Below are the rankings. What do you think? Gartner reports these four solutions as Leaders: Cisco (AppDynamics), Dynatrace, New Relic, Broadcom. These are the Visionaries: Splunk (SignalFx), Datadog. Only one Challenger: Microsoft. Eight Niche Players: Riverbed (Aternity), IBM, Instana, Oracle, SolarWinds, Tingyun, ManageEngine, Micro Focus. Thoughts?
Ruan Van Staden
Real User

In the past we found Gartner useful in deciding where to start an investigation into APM, but the Magic Quadrant view is just not representative of capability. We still had to do the work to find a solution that works for us. One aspect we found annoying is that Gartner does not make a distinction between general APM tools and technology management tools. As an example, in our case: we have been using Dynatrace OneAgent for two years now, due to the list of technologies that Dynatrace can monitor out of the box.
We also have Oracle Enterprise Manager (Oracle Cloud Control), which in my mind is not really an APM but rather a technology-specific infrastructure management tool. It is great at managing patching and deployments, but it does not have the APM functions that Dynatrace has. We run an extensive technology stack that we depend on. Since we have a mix of Microsoft, Linux, Solaris, and AIX servers, using Dynatrace to get an application overview is important, but so is using Oracle EM and Microsoft tooling to manage the technologies.

Bill Burke
Real User

I don't put too much thought into the Gartner reports anymore. The reviews are based on surveys from individuals provided to Gartner by the vendors.

Rob Salmon
Real User

I think that a major player in this space, eG Innovations, is missing. I have personally used the product to manage our server farms/cloud platforms since 2004 and could not be more pleased. Functionality, cost, ease of use, and ROI are all Magic Quadrant-worthy, and they are a leader in root cause analysis. The #1 call is "Why is it slow?" eG gives you the answer, usually in four clicks. Huge value and huge ROI, in my humble opinion.

Radoslaw Runc
User

I cannot speak about the whole range of ranked products because I don't know them well enough, but I can speak about the current Broadcom offering. It seems that this platform is strengthening: next to the recently issued new release of APM, Broadcom introduced a new release of App Synthetic Monitor. The App Experience Analytics module for user experience monitoring is also now part of the APM license. More than that, APM, ASM, and App Experience Analytics are part of the broader AIOps family, next to Unified Infrastructure Management, NetOps (Spectrum), and an ML solution called Data Operation Intelligence (DOI). All these solutions are glued together with automation mechanisms coming from the Broadcom One Automation platform.

FarruKh Tirmizi
User

I agree with the Gartner list as shared.


What is Application Performance Management (APM)?

Application performance management (APM) solutions are important for proactively monitoring and managing a software application's performance and availability. APM's scope further includes performance measurement for virtually any IT asset that affects end-user experience. The sign of the best APM tools is that they detect application performance issues in order to adhere to an agreed-upon service level. In particular, APM is focused on app response times under various load conditions. As part of this, APM also measures the compute resources required to support a given level of load.

According to members of the IT Central Station community, the best APM vendors serve multiple masters. Developers need to understand app performance characteristics in order to ensure an optimal software experience for end users. Business managers and IT department leaders use APM data to help make decisions about infrastructure and architecture.

As applications grow more complex and interdependent, application performance monitoring users express high expectations for potential APM toolsets. Accessibility, manageability and scalability are essential. Users argue that an effective APM tool must give business stakeholders accurate, understandable data while allowing developers to dive deeply into stored data over the long term.

DevOps users want app performance management tools to measure the deep internal transactions that take place inside an application or between integrated system elements. They want APM data in real time, across multiple application tiers, with transparency along the entire application process chain. Some refer to this as “full stack tracing.”
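The mechanism underneath such tracing is usually a correlation ID carried across every tier. Here is a bare-bones sketch of the idea (illustrative Python with a hypothetical downstream service; real APM agents automate this, often via standard headers such as the W3C traceparent):

```python
import uuid
import urllib.request

TRACE_HEADER = "X-Trace-Id"  # simplified stand-in for a standard header like 'traceparent'

def handle_incoming(headers: dict) -> str:
    """Reuse the caller's trace ID if present; otherwise start a new trace at this tier."""
    return headers.get(TRACE_HEADER) or uuid.uuid4().hex

def call_downstream(url: str, trace_id: str) -> bytes:
    """Propagate the same trace ID so spans from every tier can be stitched together."""
    req = urllib.request.Request(url, headers={TRACE_HEADER: trace_id})
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.read()

# A web tier receives a request with no trace header, so it starts the trace,
# then would pass the same ID to a (hypothetical) downstream inventory service.
trace_id = handle_incoming({})
print("span start:", trace_id)
# call_downstream("http://inventory.internal/check", trace_id)  # hypothetical URL
```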

Ideally, APM data should be measured against user experience as a key performance indicator. For example, if a bottleneck is being caused by database latency, users want to understand the root cause so they can fix it immediately. This might require alerting based on patterns and "baselining."
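As a toy example of that kind of baselining, latencies can be compared against a rolling baseline (an illustrative Python sketch with made-up numbers; commercial APM tools apply far more sophisticated statistics):

```python
from collections import deque
from statistics import mean, stdev

WINDOW = 50  # rolling window of recent database query latencies (ms)

class LatencyBaseline:
    def __init__(self) -> None:
        self.samples = deque(maxlen=WINDOW)

    def observe(self, latency_ms: float) -> bool:
        """Record a measurement; return True if it breaches the learned baseline."""
        if len(self.samples) >= 10:  # need some history before judging
            baseline, spread = mean(self.samples), stdev(self.samples)
            breach = latency_ms > baseline + 3 * spread
        else:
            breach = False
        self.samples.append(latency_ms)
        return breach

monitor = LatencyBaseline()
for ms in [12, 14, 11, 13, 12, 15, 13, 12, 14, 13, 95]:
    if monitor.observe(ms):
        print(f"ALERT: db latency {ms}ms far above baseline")  # fires on 95
```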

Some expect APM tools to enable the discovery of complex distributed application architecture or even microservices and containers. After all, not all application architecture is known at the outset, and it certainly changes over time. Users need APM tools to be proactive whether they are used in dev, test, QA or production environments.

The APM toolset itself should have low impact on application performance. The measurements it takes have to be easy to interpret and place into a business-friendly reporting output. For instance, IT Central Station members suggest that APM tools should offer a predefined customizable reporting capability, with high visibility and a capacity to export and report on large quantities of raw data.
