Application Performance Management (APM) Forum

Rhea Rapps
Content Specialist
IT Central Station
Aug 13 2018
One of the most popular comparisons on IT Central Station is Dynatrace vs New Relic APM. One user says about Dynatrace, "Dashboards are one reason, and troubleshooting is another. I come from the monitoring perspective, so the ability to triage quickly is important. What I like about the product is its ability to alert and tell people where a problem is." Another user says about New Relic APM, "It reveals where our code is insufficient or needs to be refactored, which is great. The most important thing is it tells us where the latency in the throughput and response times are." In your experience, which is better and why?
Carolina Perez: New Relic was very easy to both deploy and configure.
Steve Clark: Dynatrace is the best of the best when it comes to APM solutions. Here's why:
1. Dynatrace uses a single agent for everything, which results in fast deployment.
2. Dynatrace offers a fully on-premises solution for organizations needing tight security.
3. Dynatrace does a deep dive into the code running an application and can isolate an application issue to the line of code causing it.
4. Dynatrace has AI built in and learns what is or is not normal behavior, so you only get alerts that are meaningful (a generic sketch of this idea follows this answer).
5. The Dynatrace dashboards are awesome out of the box, and they can be customized to provide specific views of the same information. This is useful when an executive wants to look at the data one way and a programmer wants to look at the same data from a different angle.
6. Dynatrace can also look at the network traffic flowing between servers to provide a true end-to-end solution.
These are just six of the areas where Dynatrace outshines the rest. My opinions come from having worked with New Relic, AppDynamics, CA's APM, and Dynatrace. I have been using Dynatrace for four years.
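To make the baselining idea in point 4 concrete, here is a minimal, generic sketch of learning "normal" response times and alerting only on large deviations. It is not Dynatrace's actual AI; the class name, window size, and 3-sigma threshold are illustrative assumptions.

```python
# Minimal illustration of baseline alerting: learn what "normal" looks like,
# then alert only when a value deviates strongly from that baseline.
# Not any vendor's real algorithm; all names and thresholds are assumptions.
from collections import deque
from statistics import mean, stdev

class BaselineAlerter:
    def __init__(self, window=100, sigma=3.0):
        self.history = deque(maxlen=window)  # recent response times (ms)
        self.sigma = sigma                   # deviations beyond this many std devs alert

    def observe(self, response_time_ms):
        """Return an alert message if the value deviates from the learned baseline."""
        alert = None
        if len(self.history) >= 30:          # need enough samples to form a baseline
            mu, sd = mean(self.history), stdev(self.history)
            if sd > 0 and abs(response_time_ms - mu) > self.sigma * sd:
                alert = (f"Abnormal response time {response_time_ms:.0f} ms "
                         f"(baseline {mu:.0f} +/- {sd:.0f} ms)")
        self.history.append(response_time_ms)
        return alert

alerter = BaselineAlerter()
for t in [120, 130, 125] * 20 + [900]:       # steady traffic, then one slow request
    msg = alerter.observe(t)
    if msg:
        print(msg)
```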
Janet Peng
Manager of IT at a financial services firm with 10,001+ employees
Jun 26 2018
Hi. I hear great things about both Dynatrace and AppDynamics. What is the advantage of Dynatrace over AppDynamics and vice versa? For an enterprise, how do I decide which one is better for my needs?
lroberts: First thought: do not go with CA. CA is horrible with just about everything, with the possible exception of Autosys, and even that is not great. The fact that CA continues to show up as highly rated only tells me they pay out a ton of money. We have a saying where I work: "CA is where software goes to die."

To answer your question: they both really are good. I have looked at Dynatrace, AppDynamics, and New Relic. We have used CA Wily (stay away, trust me) and Foglight, which is now out of the APM game. I tend to lean more towards Dynatrace. This is why:

#1. Adoption. One of the things I often see with APM tools is a struggle with adoption once the tool is purchased. An APM tool can provide all the information you want, but if you need to have worked at NASA to use it, people are not going to use it. Dynatrace has really done a nice job with the GUI. It's very easy to understand and navigate; it has been very well thought out and it shows.

#2. One-agent technology. All you really need to know is the OS, and that's it, with the exception of Solaris. Solaris is still painful and works the way most legacy tools do. If you are using Linux and/or Windows, it's gold. Administration is very easy with Dynatrace.

#3. Mature AI (machine learning). The other area where I think they have a leg up on everyone else is the AI. Tons of vendors claim AI technology, but Dynatrace really has a mature AI. As soon as the agent goes on, it's amazing how many things are detected, mapped, and identified, with no false alarms. The AI has been spot on for us every time. We have taken the approach of letting the AI do its job, and it does; you still have the option to tweak things.

#4. Designed for today's needs from the ground up. Dynatrace is the ONLY company that decided not to try to patch and upgrade their way there. They went back five years ago or so and rewrote Dynatrace from the ground up. I watch other vendors such as CA or IBM keep adding more and more to the technology they originally acquired; their products have become a complete mess, and their support teams increasingly fail to provide support because even they are confused.

Hope that helps!
Scott Farnum: Dynatrace has a forte for providing all the data points a developer could ever want. In essence, it is reasonable to believe that with Dynatrace and an aware dev group, 100% monitoring and tracing coverage could be achieved. However, all of this data comes with a significant cost burden, which is why Dynatrace is also the most expensive in the market. With AppDynamics, you will reduce coverage by some small percentage (realized over a bell curve, truly), with some associated cost savings. Both platforms offer very similar capabilities and indeed leverage some of the same technology to drive their solutions. Both UIs are comprehensive and complex, likely requiring a seasoned monitoring professional to drive value out of the analytics. If your needs are 100% coverage and money grows on trees, go for Dynatrace.
Eric Repec: The answer is "it depends," as you can imagine. Our team supports both technologies for the simple reason that there are use cases that work well with each of the tools. I would start with the following checklist:
1) Compatibility with the technologies to be monitored.
2) Ability to coexist with the current tools used and trusted by your staff.
3) Level of maturity of your staff.
4) Level of access to, and understanding of, the monitored application.
You should have a strategy for implementing any tool. Understand how it will be consumed by your staff and build well-documented processes around the entire toolset. This will ensure that the tool you select is successful and fully utilized. Finally, most failed implementations stem from the tool being underfunded or under-deployed. Make sure you dedicate staff and purchase licenses to cover the entire application. These tools depend on all tiers of the application being instrumented; any missing tier will greatly hinder the tool's success and may delay or block the ROI your management expects. Thanks, Eric Repec
Rhea Rapps
Content Specialist
IT Central Station
Jun 26 2018
One of the most popular comparisons on IT Central Station is Dynatrace vs New Relic APM. One user says about Dynatrace, "Dashboards are one [of the best features], troubleshooting is another. I come from the monitoring perspective, so the ability to triage quickly is important, and the ability to alert and tell people where the problem is, that's what I really like about the product." Another user says about New Relic APM, "The most important thing is that it tells us where the latency in throughput and response time are." In your experience, which is better and why?
Nikhil Mishra: Both are good monitoring tools. But as many people have posted above, Dynatrace gives you an exact drill-down into the application (every API being hit), covering both browser and server requests as well as every third-party/dynamic request. It also provides a waterfall of the documents loaded across the page for every framework, including SPA (single page application) pages. You will not find as much of this in New Relic, and CPU, memory, and throughput are common metrics in every monitoring tool, so do not judge on those counters. Bottom line: go for Dynatrace.
Sandeep Kanchalwar: I would recommend Dynatrace, as it is the leader in the APM space and delivers everything an APM solution should.
Kunal Mattoo: I would go with Dynatrace any day. They have come up with Dynatrace SaaS, which is one of the best tools used in the industry and a big step up over Dynatrace AppMon. The SaaS offering is UI-based, and that is the best part.
Rhea Rapps
Content Specialist
IT Central Station
Jun 21 2018
We all know that it's important to try out software as part of the buying process. Do you have any advice for the community about the best way to conduct a trial or POC? How do you conduct a trial effectively? Are there any mistakes to avoid?
Edward Su: We conducted a POC in our development/test environment, where we manually ran a select set of UAT test cases after installing the APM solution. This was to assure ourselves that the APM solution would not have any negative impact on our application's functionality. I would have preferred to run a complete regression test and would suggest doing so where possible, especially if the regression test suite has already been automated with a testing tool such as HPE UFT or Selenium. I would also watch for the ability to monitor and improve the performance of batch jobs, as different APM solutions support batch jobs to different degrees. Lastly, if you are selecting an APM solution for the enterprise, you might want to look into how it would integrate with your ITSM solution and any IPM (infrastructure performance monitoring) solutions, and whether it is suited to monitoring the different technologies your applications are built on.
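As an illustration of the kind of automated regression check Edward mentions, here is a minimal Selenium sketch that could be run before and after installing an APM agent to confirm the application still behaves the same. It assumes Selenium 4 for Python with a Chrome driver available; the URL, element IDs, credentials, and expected title are hypothetical placeholders.

```python
# Minimal UAT/regression check sketch: run it before and after the APM agent
# is installed and compare the outcome. URL, element IDs, credentials, and the
# expected title below are placeholders, not a real application.
from selenium import webdriver
from selenium.webdriver.common.by import By

APP_URL = "https://app.example.com/login"   # placeholder target application

def run_login_check():
    driver = webdriver.Chrome()
    try:
        driver.get(APP_URL)
        driver.find_element(By.ID, "username").send_keys("uat_user")
        driver.find_element(By.ID, "password").send_keys("uat_password")
        driver.find_element(By.ID, "submit").click()
        # Functional assertion: the page reached after login is the one we expect.
        assert "Dashboard" in driver.title, f"Unexpected page title: {driver.title}"
        print("Login regression check passed")
    finally:
        driver.quit()

if __name__ == "__main__":
    run_login_check()
```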
Cedric Murphy: Because there are so many APM solutions available, you will need to do some preliminary research first. Consider the APM solution currently in your environment, review articles and evaluations such as Gartner's software evaluations, and make note of the good and bad of the tool you are using now. From that research, narrow your prospective trial candidates to perhaps five possible solutions. Sign up for the trial offers and take advantage of the demos and WebEx sessions that engage the support and sales staff for those products. Typically you want to set up any downloaded software in a test or development environment; if the trial is agentless, you should have no problem. Focus the trial or demo on the most problematic applications in your current APM solution. Obviously there are a number of other things to consider, i.e. firewall rules, authentication, security, etc., but I hope this post helps answer your question.
Suresh Ramaswamy: Your questions are about how to trial effectively and how to avoid mistakes, so I will assume that after a lot of research you have narrowed down the tools that fit your organization's specific needs. I will also assume you can procure a free trial license for the time you need, which almost all tool vendors are happy to provide.
- The first task is to form a small team with lead(s) to own the evaluation effort. This can be tricky, but most organizations have top tech folks at various levels who are open to a change in tool direction. For APM tool trials, team members should come from engineering, performance, capacity planning, and development/DevOps; engineering includes platform, network, infrastructure, and service desk SMEs. The team should be a handful of people with the hands-on ability to install, configure, script, test, analyze, and present findings.
- The next task is identifying a sandbox environment for the trials; naturally this should not be production or a replication environment. Performance or staging would be ideal, and the footprint of tool coverage across the CIs (configuration items) should be minimal but representative of the APM operating environment.
- A load/performance test is almost always a requirement for APM tool evaluation and should be used for the trial. Any existing tool will suffice, as the objective is the APM tool trial, not test tool capability (see the sketch after this answer).
- Some criteria for effective evaluation:
a. Issues you have already resolved can be captured by the new tool when the fixes are removed (satisfies current capability).
b. One or more unresolved issues can be unearthed during the POC and corroborated (new capability).
c. Time to resolution meets or exceeds the current state.
d. Tool instrumentation is easy and dynamic, or dynamic enough.
e. The tool footprint on CIs is known and scalable.
f. The tool integrates well with ITSM and platform-specific tools (e.g. VM, SAN, DB).
- Some guidelines to avoid mistakes during an APM tool trial:
a. Focusing too narrowly on application areas can make a choice look better than it actually is.
b. Not differentiating monitoring from debug instrumentation can overstate BAU effort.
c. During trials it may be difficult to determine the false positives and negatives received (or not received) at the ITSM layer, so keep an eye on their number and quality; it is difficult to be objective.
d. All IT teams should have a chance to independently review from their vantage point.
The suggestions here are not about how to select an APM tool.
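For the load/performance test Suresh mentions, any existing tool will do; the following is only a minimal sketch using Python's requests library and a thread pool. The target endpoint, concurrency, and request count are illustrative assumptions.

```python
# Minimal load-driver sketch to exercise an application during an APM trial.
# TARGET_URL, CONCURRENCY, and TOTAL_REQUESTS are placeholders; a real trial
# would use whatever load tool the team already trusts.
import time
from concurrent.futures import ThreadPoolExecutor
import requests

TARGET_URL = "https://staging.example.com/api/health"   # placeholder sandbox endpoint
CONCURRENCY = 10
TOTAL_REQUESTS = 200

def one_request(_):
    start = time.perf_counter()
    resp = requests.get(TARGET_URL, timeout=10)
    return resp.status_code, (time.perf_counter() - start) * 1000  # latency in ms

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    results = list(pool.map(one_request, range(TOTAL_REQUESTS)))

latencies = sorted(ms for _, ms in results)
errors = sum(1 for status, _ in results if status >= 400)
print(f"requests={len(results)} errors={errors} "
      f"p50={latencies[len(latencies) // 2]:.0f}ms "
      f"p95={latencies[int(len(latencies) * 0.95)]:.0f}ms")
```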
Rhea Rapps
Content Specialist
IT Central Station
Jun 21 2018
One of the most popular comparisons on IT Central Station is AppDynamics APM vs ITRS Geneos. One user says about AppDynamics APM, "[It] helps me not just rule in the areas, but rule out where I don't have to talk. More often than not, the rule-out gets hidden away, but it's a really good add-on because I'm only focusing on the problems." Another user says about ITRS Geneos, "It has improved our efficiency in many ways. Management and user information are now on self-service as we can provide them dashboards which give them live updates on what they want to know when they want to know, rather than running a query every day and e-mailing it to them." In your opinion, which is better and why? Thanks! --Rhea
Ariel Lindenfeld
Sr. Director of Community
IT Central Station
Let the community know what you think. Share your opinions now!
Aymen Touzi: To evaluate or benchmark APM solutions, we can base the comparison on the five dimensions defined by Gartner:
1. End-user experience monitoring: the capture of data about how end-to-end application availability, latency, execution correctness, and quality appear to the end user (a toy sketch follows this answer).
2. Runtime application architecture discovery, modeling, and display: the discovery of the various software and hardware components involved in application execution, and the array of possible paths across which those components could communicate that, together, enable that involvement.
3. User-defined transaction profiling: the tracing of events as they occur among the components or objects as they move across the paths discovered in the second dimension, generated in response to a user's attempt to cause the application to execute what the user regards as a logical unit of work.
4. Component deep-dive monitoring in an application context: the fine-grained monitoring of the resources consumed by, and events occurring within, the components discovered in the second dimension.
5. Analytics: the marshalling of a variety of techniques (including behavior-learning engines, complex-event processing (CEP) platforms, log analysis, and multidimensional database analysis) to discover meaningful and actionable patterns in the typically large datasets generated by the first four dimensions of APM.
On the other side, we benchmarked some APM solutions internally based on the following evaluation groups, and got interesting results:
- Monitoring capabilities
- Technologies and framework support
- Central PMDB (performance management database)
- Integration
- Service modeling and monitoring
- Performance analysis and diagnostics
- Alert/event management
- Dashboards and visualization
- Setup and configuration
- User experience
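As a toy illustration of the first dimension (end-user experience monitoring), the sketch below captures availability, latency, and a simple correctness check from the client side. Real APM agents do this automatically and in far more depth; the URL and expected text are hypothetical.

```python
# Toy client-side check of availability, latency, and execution correctness,
# illustrating Gartner's "end-user experience monitoring" dimension.
# The URL and expected_text are placeholders.
import time
import requests

def measure_user_experience(url, expected_text):
    start = time.perf_counter()
    try:
        resp = requests.get(url, timeout=5)
        latency_ms = (time.perf_counter() - start) * 1000
        correct = expected_text in resp.text          # crude correctness/quality check
        return {"available": True, "latency_ms": round(latency_ms, 1),
                "status": resp.status_code, "correct": correct}
    except requests.RequestException as exc:
        return {"available": False, "error": str(exc)}

print(measure_user_experience("https://shop.example.com/checkout", "Order summary"))
```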
it_user364554: Hi, full disclosure: I am the COO at Correlsense. Two years ago I wrote a post about exactly that, "Why APM Projects Fail." I think it can guide you through the most important aspects of APM tools. Take a look, and feel free to leave a comment: http://www.correlsense.com/enterprise-apm-projects-fail/ Elad Katav
Sumalya Patibandla: To know the NFRs (non-functional requirements).
Juliano Souza
Pre-Sales at a tech consulting company with 11-50 employees
Hello, I need a comparison between the AppInternals and Dynatrace APM offerings. What are the pros and cons of each solution, and why would you buy one rather than the other?
Scott Criscillis: Riverbed is more of an NPM, probe-based offering and only has a moderate level of application awareness. Competitively, it is more aligned with Dynatrace's DCRUM offering: https://www.dynatrace.com/topics/performance-test/data-center-monitoring/ At some point DCRUM will be integrated into the Dynatrace product, so one great thing coming is that you will truly have a single platform/pane of glass for everyone, down to full network detail. For now, it is still a separate entity.

Although Riverbed offers strong network analytics, it is not going to provide the application-layer detail or the holistic all-in-one offering that Dynatrace does with RUM, web checks, cloud, containers, network, and infrastructure. It also samples and doesn't provide high-fidelity data. Dynatrace looks at every transaction, so we offer gap-free visibility into performance, bottlenecks, and issues. This is also important if you want to understand user and performance trends and decide where to align resources and focus development. With Riverbed, you'll be making a lot of assumptions based on its sampling/averages output (a synthetic illustration follows this answer).

It's going to take you a month plus to evaluate Riverbed, minimum. It takes weeks to set up and configure, and the cost of services to get installed and trained is high, so ease of use is not comparable. Dynatrace installs in 3 minutes or less, and our ROI is typically within 2-3 months. Maintenance of Riverbed is huge; Dynatrace provides releases and upgrades automatically, with zero maintenance, while Riverbed requires taking things offline, doing the updates, and putting things back online. Lots of manual effort.

Riverbed strongly emphasizes packet-capture capabilities, so it's appealing to the network team. It captures and records everything from the wire and applies very lightweight analytics. It's a very reactive approach. If you want to apply application-layer information to the networking piece, you will be purchasing and managing multiple components instead of just one with Dynatrace. Dynatrace is application-centric, but it does provide some network analytics in relation to the performance of the applications it monitors. This is like comparing an apple to a banana.

So, for me, the top areas in which Dynatrace is better:
• Ease of use, zero maintenance.
• Higher fidelity of data with Dynatrace: no sampling, aggregates, or averages like you'd get from Riverbed.
• Quicker ROI and user adoption.
• We show you every user, every app, everywhere. We provide gap-free data from end user, code, infrastructure, and network. No blind spots, no samples, no averages. No matter what device you are operating on, we'll provide gap-free visibility into performance.
• Zero manual configuration. Just install one agent per host; we monitor everything.
• AI gives you auto-everything. It automates discovery, modeling, analysis, and troubleshooting, and stops you from having to figure this all out manually.
• Automated root-cause analysis. Avoid alert storms; get one single notification.
• Seamless integration for cloud and containers.
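The sampling/averaging point above can be illustrated generically: averages and aggregates can make a service look healthy while a slow tail, visible only in per-transaction data, tells another story. The numbers below are synthetic and are not measurements of either product.

```python
# Synthetic illustration: an average hides the 1% of pathologically slow
# transactions that per-transaction (gap-free) capture would expose.
import random
random.seed(1)

# 100,000 synthetic "transactions": 99% are fast, 1% are very slow.
latencies = [random.gauss(120, 15) for _ in range(99_000)] + \
            [random.gauss(3000, 300) for _ in range(1_000)]
random.shuffle(latencies)

def p99(values):
    ordered = sorted(values)
    return ordered[int(len(ordered) * 0.99)]

slow = sum(1 for ms in latencies if ms > 1000)
print(f"average latency:      {sum(latencies) / len(latencies):.0f} ms")  # looks healthy
print(f"p99 latency:          {p99(latencies):.0f} ms")                   # the tail tells another story
print(f"transactions over 1s: {slow} of {len(latencies)}")
```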
Chuan Wei Yiaw: I don't 100% agree with what @Scott mentions.
First, please refer to the latest Magic Quadrants for NPM and APM. Dynatrace is good in the APM space, but Riverbed offers end-to-end monitoring across both the NPM and APM spaces, and the acquisition of Aternity expands Riverbed's SteelCentral offerings up to end-user performance monitoring, which I personally love. Traditional solutions sit in the data center, analyze packets, and derive end-user performance from those packets; Riverbed Aternity, by contrast, provides real endpoint monitoring rather than monitoring from the DC.
Back to the APM solutions: since the question is about AppInternals and Dynatrace, the main differentiation is language support, depending on which applications you need to monitor and which languages they use.
- Dynatrace supports .NET, J2EE, PHP, mainframe, and Hadoop.
- Riverbed (AppInternals) supports .NET and Java only; it instruments the .NET and Java runtime environments to give you access to the actual code.
I will post other technical differentiators later.
Janet Peng
Manager of IT at a financial services firm with 10,001+ employees
Hi, I work for one of the leading banks in the mid-west US and we are looking at some APM tools (AppD, New Relic, Dynatrace, CA) for an important web application. It seems that price points are all over the place! How much should we be expecting to budget for a good APM solution?
Mike Dieter: Assuming you have identified and prioritized the requirements of your use case, and assuming that all the providers you listed meet those requirements, you've answered your own question already: take the quotes they have provided and use the mean value as your budgetary number. When the time comes to turn that budgetary number into explicit acquisition/implementation/operation dollar values, open a negotiation dialogue with those providers and use that process to see with which provider you can form the most effective partnership and relationship. A few notes to remember: if the cost reached after negotiation is greater than a quantified dollar value of the performance issues you and/or your customers are experiencing, then you may want to consider the wisdom of acquiring and operating the service. And don't confuse cost with value: if one negotiation results in the lowest cost but doesn't deliver the functionality you've prioritized, then it is worthless.
Michael Sydor: There are a couple of considerations before someone can give you a realistic budget.
* What tools have been tried before? Why are they not being considered now? Who is going to approach procurement? You need to make a business case for this next investment, no matter the dollar volume.
* Is your web app on-premises or cloud? How many instances? What volume of transactions or concurrent users? Are there potential mergers or acquisitions upcoming that might introduce a different tool or requirement? This affects the license model and the duration for which you might need it, which directly impacts the price.
* What are your monitoring goals? Are you just looking for up/down status? Are you planning to migrate any other applications to the web? What visibility into performance do you have for non-web systems today? How many other applications would follow on after this initial project? This gives you an idea of the duration and extent of the relationship you might need with a given vendor.
* What skills and organization do you have available? Is there a dedicated team in place? How do you handle turnover? Will you be looking for long-term services support and operation? How do you handle training, internal and external? This gives you an idea of your upfront and ongoing training and services (staff augmentation) needs.
* How much vendor evaluation are you prepared to undertake? Do you have criteria established? Do you have a pilot strategy? Are you following a corporate mandate or enterprise strategy? You need to know what you want before the vendors tell you what "you need"! There is nothing more expensive than a host of capabilities that you will never employ.
* Who are your stakeholders and how will they participate? Do you have visibility into development and testing? Are you bringing operational metrics back to the business? Do you need to integrate alerts and/or data with a third-party operations center? You need to know who really cares about performance in order to get support for any level of investment, and you need to meet their expectations.
The better you understand your organization's prior experiences, the potential scope of what will be monitored long term, and the ability of your current team to support, implement, and maintain the solution for the long term, the better you will be able to evaluate and negotiate with the various vendors and get a solution that fits the way your organization works. For a modest investment, get a copy of the vendor-neutral APM Best Practices: https://www.amazon.com/APM-Best-Practices-Application-Professionals/dp/1430231416 The first third of the book is all about planning an APM initiative, from business justification through pilot evaluation.
Scott Farnum: When it comes to selecting an APM service provider, the costs vary, as do the capabilities. In our organization, we operate with a thin technologist layer and focus on 80/20 in troubleshooting. Generally, the higher the cost, the better the coverage out of the box. Given that, you can find very inexpensive providers when you only need 80/20 coverage, or you can pay for the extra coverage and go for 90/10 or 95/5. As a general rule, I think end-user coverage ranges from $5-10k per 1M monthly active users, and APM coverage can range from $50 to $1,000 per box in your stack. If you want to discuss this more, feel free to drop me a note. I'll send a LinkedIn request.
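Using the rough ranges quoted above, a quick back-of-the-envelope estimate looks like the sketch below. The application profile (users and hosts) is hypothetical, and the post does not specify a billing period, so treat the output as a relative range only.

```python
# Back-of-the-envelope budget range using the rough per-unit figures quoted above:
# $5-10k per 1M monthly active users for end-user monitoring, and $50-$1,000 per
# instrumented host for APM coverage. The profile below is hypothetical.
monthly_active_users = 2_500_000
hosts_to_instrument = 40

eum_low, eum_high = 5_000, 10_000    # per 1M monthly active users
apm_low, apm_high = 50, 1_000        # per instrumented host

low = (monthly_active_users / 1_000_000) * eum_low + hosts_to_instrument * apm_low
high = (monthly_active_users / 1_000_000) * eum_high + hosts_to_instrument * apm_high

print(f"Rough budget range: ${low:,.0f} - ${high:,.0f}")
```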
