We compared Azure Monitor and Google Cloud's operations suite (formerly Stackdriver) across five categories, based on real PeerSpot user reviews. After reviewing all of the collected data, we reached the conclusion below.
Comparison Results: Azure Monitor is the preferred solution because it offers better customization and integration options, lower pricing, and more out-of-the-box functionality, including AI for event correlation. It also provides a single place to monitor all resources, making it easier to manage cloud resources across multiple subscriptions.
"The initial setup is straightforward."
"The dashboard allows us to easily track various metrics and quickly understand the overall health of our system."
"The most valuable feature is that it ensures our servers are up."
"It is a robust, stable product."
"It has good troubleshooting features."
"I am monitoring all of my Azure resources with Azure Monitor and getting good reports. I can customize the reports to get the information I need. I am also getting emails about which AAS instances are down and everything in the system related to my services. It is easy to use, scalable, and user-friendly. Microsoft has many guides and videos to help you understand how to create and use Azure Monitor."
"In the last company where I worked about a year ago, it looked very simple."
"The solution integrates very easily with Azure services, and with one click you can monitor your resources."
"Provides visibility into performance and uptime."
"We find the solution to be stable."
"I like the monitoring feature."
"The features that I have found most valuable are its graphs - if I need any statistics, in Kubernetes or Kong level or VPN level, I can quickly get the reports."
"It's easy to use."
"The most valuable feature is the multi-cloud integration, where there is support for both GCP and AWS."
"Google's technical support is very good."
"The cloud login enables us to get our logs from the different platforms that we currently use."
"In terms of pricing, Azure Monitor's billing based on data size can sometimes lead to increased costs, especially when developers need to purge data frequently. While there are mechanisms in place to track and manage this, there is room for improvement in terms of optimizing data purging and related processes. Enhancements in this area could help mitigate potential billing concerns and provide a more seamless experience for users."
"The price could be lower, but it is not a must."
"This solution could be improved with more out-of-the-box functionalities and artificial intelligence to complete event correlation."
"Automation related to gathering metrics from more applications could be improved."
"Azure Monitor could improve the visualization aspect and integrate better with other third-party services."
"There are a lot of things that take more time to do, such as charting, alerting, and correlation of data, and things like that. Azure Monitor doesn't tell you why something happened. It just tells you that it happened. It should also have some type of AI. Environments and applications are becoming more and more complex every day with hundreds or thousands of microservices. Therefore, having to do a lot of the stuff manually takes a lot of time, and on top of that, troubleshooting issues takes a lot of time. The traditional method of troubleshooting doesn't really work for or apply to this environment we're in. So, having an AI-based system and the ability to automate deployments of your monitoring and configurations makes it much easier."
"They can simplify the overall complexity since you have multiple data sources in the cloud for monitoring. It's quite simple, but there are so many portals. It takes time to work with it. If they could simplify the user configuration, that would be good."
"In my opinion, they should improve the overall user experience, especially when it comes to indexing and searching collective logs."
"It could be more stable."
"Lacking sufficient operations documentation."
"The product provides only minimal metrics, which are insufficient."
"If I want to track any round-trip or breakdowns of my response times, I'm not able to get it. My request goes through various levels of the Google Cloud Platform (GCP) and comes back to my client machine. Suppose that my request has taken 10 seconds overall, so if I want to break it down, to see where the delay is happening within my architecture, I am not able to find that out using Stackdriver."
"The logging functionality could be better."
"It could be even more automated."
"While we are satisfied with the overall performance, in certain cases we must add additional metrics and additional tools like Grafana and Dynatrace."
"This solution could be improved if it offered the ability to analyze charts, such as a solution like Kibana."
Azure Monitor is ranked 4th in Application Performance Monitoring (APM) and Observability with 44 reviews while Google Cloud's operations suite (formerly Stackdriver) is ranked 27th in Application Performance Monitoring (APM) and Observability with 9 reviews. Azure Monitor is rated 7.6, while Google Cloud's operations suite (formerly Stackdriver) is rated 7.8. The top reviewer of Azure Monitor writes "A powerful Kusto query language but the alerting mechanism needs improvement". On the other hand, the top reviewer of Google Cloud's operations suite (formerly Stackdriver) writes "Good logging and tracing but does need more profiling capabilities". Azure Monitor is most compared with Datadog, Dynatrace, Prometheus, Sentry and SolarWinds Pingdom, whereas Google Cloud's operations suite (formerly Stackdriver) is most compared with AWS X-Ray, Datadog, Amazon CloudWatch, Grafana and New Relic. See our Azure Monitor vs. Google Cloud's operations suite (formerly Stackdriver) report.