Senior Software Engineer | AWS Certified Solutions Architect at Venture Garden Group
2022-07-24T07:18:00Z
When Stackify completes drill-downs, sometimes there is a block of execution pipelines whose details you cannot see. It does not let you analyze that block of code, and we are unsure what it means. There is also a lot of middleware and native framework execution that Stackify does not analyze, which makes it difficult to tell what the issue is.

The search feature could be improved. On the trace dashboard, the search box is not intuitive enough. For example, you see "search by URL," but it is unclear whether you need to search by the absolute URL or by URL segments. A placeholder in that search box would make a big difference.

There could also be better documentation on using Stackify with Docker. It might be a knowledge gap on our side, but sometimes we get it working in one Docker container and then, even with the same configuration, it doesn't work in another, and we end up searching the internet for documentation.

Another improvement would be the agent's memory utilization, which led to our recent reevaluation. For example, you could have a 16 GB server, and the Stackify agent takes up the bulk of that memory.

Regarding additional features, it would be great to add application availability metrics. Other companies use such metrics to measure application stability, showing the system's availability within a given time range.
They need to improve support for non-.NET infrastructure. We always had difficulty with reporting and metrics coming from Linux operating systems and Docker containers. For anything that runs in a Unix environment, we always had problems; however, if it was a .NET-based application, Stackify was 100%, it gave us everything.

Now, the aggregation agent, the metrics agent for Stackify on Linux, collects everything. When I say everything, I mean everything. It collects so much information that we started to call it useless data, because all that ingestion comes in, overwhelms your log retention limit for the month, and really spikes up your cost at the end of the month. You need to do a lot of work to trim down the data coming in from all your Linux environments to get to what you really need, which takes some time as well.

I would like to be able to see metrics for the individual containers running on the host machines. Stackify has not really gotten that right, as far as I'm concerned. Netdata has done a better job, and New Relic has also done a better job. They need to improve on that. We need to be able to see the individual resource usage of containers running within a particular host.
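The per-container visibility asked for above can, as a stopgap, be approximated on the host itself rather than in Stackify. A minimal sketch, assuming the Docker CLI is installed on the host (this is a plain `docker stats` invocation, not a Stackify feature):

```shell
# Workaround sketch: use the Docker CLI's own `docker stats` to get a
# one-shot snapshot of per-container CPU and memory usage on this host.
FORMAT='table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}'

if command -v docker >/dev/null 2>&1; then
    # --no-stream prints a single snapshot instead of a live stream;
    # --format limits the columns to name, CPU %, and memory usage.
    docker stats --no-stream --format "$FORMAT"
else
    # Degrade gracefully on hosts without the Docker CLI.
    echo "docker CLI not found on this host"
fi
```

This only covers ad-hoc inspection on one host; it does not give the historical, aggregated view an APM product would provide.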
Stackify is an application performance management (APM) solution that combines application performance monitoring with logs, errors, and reporting. It is a SaaS solution that is developer-focused. Users can quickly scan, identify, and repair issues with applications. Stackify APM offers valuable tools, such as Prefix and Retrace, which help to make it a comprehensive and valuable APM solution. Stackify is now part of the Netreo family of IT Infrastructure Management (ITIM), which is...