
Turbonomic Overview

Turbonomic is the #1 ranked solution among top Cloud Analytics tools and top Cloud Cost Management tools, the #2 ranked solution among top Cloud Migration tools and top Virtualization Management tools, and the #3 ranked solution among top Cloud Management tools. IT Central Station users give Turbonomic an average rating of 8 out of 10. Turbonomic is most commonly compared to VMware vRealize Operations (vROps). Turbonomic is popular among the midsize enterprise segment, accounting for 51% of users researching this solution on IT Central Station. The top industry researching this solution is computer software, with professionals from computer software companies accounting for 31% of all views.
What is Turbonomic?

Turbonomic, an IBM Company, provides Application Resource Management (ARM) software used by customers to assure application performance and governance by dynamically resourcing applications across hybrid and multicloud environments. Turbonomic Network Performance Management (NPM) provides modern monitoring and analytics solutions to help assure continuous network performance at scale across multivendor networks for enterprises, carriers and managed services providers.

For further information, please visit www.turbonomic.com

www.turbonomic.com/resources/case-studies

Turbonomic was previously known as VMTurbo Operations Manager.


Turbonomic Customers

JPMorgan Chase, Bank of America, Citi, ANZ, Credit Suisse, State Street, Morningstar, VOYA, TPICAP, LPL Financial, Cisco, BMC, Hewlett Packard Enterprise, Agilysys, MetLife, Hiscox, Humana, Tokio Marine, Allscripts, SHARP, Providence St. Joseph Health, NBC Universal, pwc, KPMG, Wayfair, Carhartt, Tiffany & Co., UCLA, NASA, NIH

Pricing Advice

What users are saying about Turbonomic pricing:
  • "When we have expanded our licensing, it has always been easy to make an ROI-based decision. So, it's reasonably priced. We would like to have it cheaper, but we get more benefit from it than we pay for it. At the end of the day, that's all you can hope for."
  • "In the last year, Turbonomic has reduced our cloud costs by $94,000."
  • "If you're a super-small business, it may be a little bit pricey for you... But in large, enterprise companies where money is, maybe, less of an issue, Turbonomic is not that expensive. I can't imagine why any big company would not buy it, for what it does."
  • "The pricing and licensing are fair. We purchase based on benchmark pricing, which we have been able to get. There are no surprise charges nor hidden fees."
  • "I know there have been some issues with the billing, when the numbers were first proposed, as to how much we would save. There was a huge miscommunication on our part. Turbonomic was led to believe that we could optimize our AWS footprint, because we didn't know we couldn't. So, we were promised savings of $750,000. Then, when we came to implement Turbonomic, the developers in AWS said, "Absolutely not. You're not putting that in our environment. We can't scale down anything because they coded it." Our AWS environment is a legacy environment. It has all these old applications, where all the developers who have made it are no longer with the company. Those applications generate a ton of money for us. So, if one breaks, we are really in trouble and they didn't want to have to deal with an environment that was changing and couldn't be supported. That number went from $750,000 to about $450,000. However, that wasn't Turbonomic's fault."
  • "We see ROI in extended support agreements (ESA) for old software. Migration activities seem to be where Turbonomic has really benefited us the most. It's one click and done. We have new machines ready to go with Turbonomic, which are properly sized instead of somebody sitting there with a spreadsheet and guessing. So, my return on investment would certainly be on currency, from a software and hardware perspective."

Turbonomic Reviews

RM
Director of Enterprise Server Technology at an insurance company with 10,001+ employees
Real User
Top 20
Helps us optimize cloud operations, reducing our cloud costs

Pros and Cons

  • "The proactive monitoring of all our open enrollment applications has improved our organization. We have used it to size applications that we are moving to the cloud. Therefore, when we move them out there, we have them appropriately sized. We use it for reporting to current application owners, showing them where they are wasting money. There are easy things to find for an application, e.g., they decommissioned the server, but they never took care of the storage. Without a tool like this, that storage would just sit there forever, with us getting billed for it."
  • "The issue for us with the automation is we are considering starting to do the hot adds, but there are some problems with Windows Server 2019 and hot adds. It is a little buggy. So, if we turn that on with a cluster that has a lot of Windows 2019 Servers, then we would see a blue screen along with a lot of applications as well. Depending on what you are adding, cores or memory, it doesn't necessarily even take advantage of that at that moment. A reboot may be required, and we can't do that until later. So, that decreases the benefit of the real-time. For us, there is a lot of risk with real-time."

What is our primary use case?

Our use case: Planning for sizing servers as we move them to the cloud. We use it as a substitute for VMware DRS. It does a much better job of leveling compute workload across an ESX cluster. We have a lot fewer issues with ready queue, etc. It is just a more sophisticated modeling tool for leveling VMs across an ESX infrastructure.

It is hosted on-prem, but we're looking at their SaaS offering for reporting. We do some reporting with Power BI on-premise, and it's deployed to servers that we have in Azure and on-prem.

How has it helped my organization?

The proactive monitoring of all our open enrollment applications has improved our organization. We have used it to size applications that we are moving to the cloud. Therefore, when we move them out there, we have them appropriately sized. We use it for reporting to current application owners, showing them where they are wasting money. There are easy things to find for an application, e.g., they decommissioned the server, but they never took care of the storage. Without a tool like this, that storage would just sit there forever, with us getting billed for it.

The solution handles applications, virtualization, cloud, on-prem compute, storage, and network in our environment, everything except containers because they are in an initial experimentation phase for us. The only production apps we have which use containers are a couple of vendor apps. Nothing we have developed, that's in use, is containerized yet. We are headed in that direction. We are just a little behind the curve.

Turbonomic understands the resource relationships at each of these layers (applications, virtualization, cloud, on-prem compute, storage, and network in our environment) and the risks to performance for each. It gives you a picture across the board of how those resources interact with each other and which ones are important. It's not looking at one aspect of performance; instead, it is looking at 20 to 30 different things to give recommendations.

It provides a proactive approach to avoiding performance degradation. It looks at the trends and when the server is going to run out of capacity. Our monitoring tools tell us when CPU or memory has been at 90 percent for 10 minutes. However, at that point, depending on the situation, we may be out of time. This points out, "Hey, in three weeks, you're not going to be looking good here. You need to add this stuff in advance."
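
As a purely illustrative sketch of that kind of trend-based projection (not Turbonomic's actual analytics), a simple linear extrapolation over recent utilization samples already captures the idea; the sample data and the 90 percent threshold below are assumptions:

```python
# Illustrative sketch only: fit a linear trend to recent daily CPU samples
# and estimate how many days remain before a threshold is crossed.
# The sample data and the 90% threshold are assumptions for the example.
import numpy as np

daily_cpu_pct = [52, 54, 55, 57, 60, 61, 63, 66, 68, 71]  # last 10 days (assumed)
threshold_pct = 90.0                                       # alerting threshold (assumed)

days = np.arange(len(daily_cpu_pct))
slope, _intercept = np.polyfit(days, daily_cpu_pct, 1)     # trend in % per day

if slope <= 0:
    print("Utilization is flat or falling; no capacity action projected.")
else:
    days_left = (threshold_pct - daily_cpu_pct[-1]) / slope
    print(f"Trend is +{slope:.1f}% per day; projected to reach "
          f"{threshold_pct:.0f}% in about {days_left:.0f} days.")
```

The point of the sketch is simply that a forward-looking warning comes from the slope of the history, not from the last sample crossing a threshold.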

We are notifying people in advance that they will have a problem as opposed to them opening tickets for a problem.

We have response-time SLAs for our applications. They are all different; it just depends on the application. Turbonomic has helped our ability to meet those SLAs by catching performance problems before they start to occur. We are getting proactive notifications. If we have a sizing problem, and growth trended over a period of time shows that we're going to run out of capacity, we add capacity rather than waiting for the application team to open a ticket and say, "Hey, we're seeing latency in the application. Let's get 30 people on a bridge to research the latency." The bridge never happens and the 30 people never get on it, because we proactively added capacity before it ever got to that point.

Turbonomic has saved human resource time and cost involved in monitoring and optimizing our estate. For our bridges, when we have a problem, we are willing to pay a little bit extra for infrastructure. We're willing to pull a lot more people than we're probably going to need onto our bridge to research the problem, rather than maybe getting the obvious team on, then having them call two more, and then the problem gets stretched out. We tend to ring the dinner bell and everybody comes running, then people go away as they prove that it's not their issue. So, you could easily end up with 30 to 40 people on every bridge for a brief period of time. Those man-hours rack up fast. Anything we can do to avoid that type of troubleshooting saves us a lot of money. Even more importantly, it keeps us productive on other projects we're working on, rather than at the end of the month going, "We're behind on these three projects. How could that have happened?" Well, "Remember there was that major problem with application ABC, and 50 people sat on a bridge for three days for 20 hours a day trying to resolve it."

In some cases you completely avoid the situation. A lot of our apps are really complex. A simple resource add in advance to a server might save us from a ripple effect later. For example, a major application might call an API in another application to pull data. Eighty percent of the data it asks for is in that app, but 20 percent is in a third app, so there is another API call to get that data, add it to the data from application B, and send it all back to application A. Sometimes a minor performance problem in application C causes an outage in application A, and those problems can be a nightmare to diagnose, especially if the relationships aren't documented well. It is very difficult to quantify the savings, but if we can avoid problems like that, then the savings are big.

We are using monitoring and thresholds to assure application performance. That is great, but by the point our monitoring tools are alerting, we already have a problem in a lot of cases, though not always. The way we have things set up, we get warnings when resource utilization reaches 80 percent, because we try to keep it at 70 percent, and we get alerts, which is kind of like, "Oh no," when applications hit 90 percent, though we can still do something about it at that point. The problem is there are so many alerts in such a huge environment, and with so much other work going on, that they get ignored. So, systems can work their way into the 90s, and you end up in a critical state a lot more often. That's why the proactive monitoring of all our open enrollment stuff is really beneficial to us.

What is most valuable?

You have different groups who probably use almost everything. We use it for sizing of servers, and if somebody feels like their server needs additional resources, we validate it with the solution. We have a key part of the year called "open enrollment", where we really can't afford anything to be down or have any problems. We monitor it on a daily basis, and contact server owners if Turbonomic adds a forward-looking recommendation that they are running low on space. So, it keeps us safe. It is easy to monitor the virtual infrastructure and make sure there is capacity. However, with the individual VMs, in production alone, there are 12,000 of them. How do you keep up with those on an individual basis? So, we use Turbonomic to point out the individual VMs that are a little low.

Turbonomic provides specific actions that prevent resource starvation. They make memory recommendations and are very specific about recommendations. It looks at the individual servers, then it puts them in a cluster. At the end of the day, it comes back, and goes, "I can't fit these on here. There's not enough I/O capacity." Or, "There's just not enough memory, so you need to add two hosts."

What needs improvement?

For implementing the solution’s actions, we use scheduling for change windows and manual execution. The issue for us with the automation is we are considering starting to do the hot adds, but there are some problems with Windows Server 2019 and hot adds. It is a little buggy. So, if we turn that on with a cluster that has a lot of Windows 2019 Servers, then we would see a blue screen along with a lot of applications as well. Depending on what you are adding, cores or memory, it doesn't necessarily even take advantage of that at that moment. A reboot may be required, and we can't do that until later. So, that decreases the benefit of the real-time. For us, there is a lot of risk with real-time.

You can't hot-add resources to a server in the cloud. If you have an Azure VM, you can't just add two cores to it when it doesn't have enough processing power. You have to rebuild that server on a new, larger server size. They have certain sizes available, so instead of an M3 we can pick an M4, but then I need to reboot the server and have it come back up on that new size. As an industry, we need to come up with a way to handle that without an outage. Part of that is just having cloud applications built properly, but we don't. That's a problem, but I don't know if there is a solution for it. That would be the ultimate thing that would help us the most: if we could automatically resize servers in the cloud with no downtime.
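
For context, resizing an Azure VM today is a deallocate/resize/start cycle rather than a hot add, which is exactly why it needs an outage window. The following is only a rough sketch of that cycle using the Azure SDK for Python; the subscription, resource group, VM name, and target size are placeholders, and method names can vary slightly between SDK versions:

```python
# Rough sketch (not Turbonomic's implementation): resize an Azure VM by
# deallocating it, changing its size, and starting it again.
# Subscription, resource group, VM name, and target size are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

subscription_id = "<subscription-id>"
resource_group = "my-resource-group"      # placeholder
vm_name = "my-vm"                         # placeholder
new_size = "Standard_D4s_v3"              # placeholder target size

compute = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

# Resizing requires downtime: deallocate, update the size, then start.
compute.virtual_machines.begin_deallocate(resource_group, vm_name).result()

vm = compute.virtual_machines.get(resource_group, vm_name)
vm.hardware_profile.vm_size = new_size
compute.virtual_machines.begin_create_or_update(resource_group, vm_name, vm).result()

compute.virtual_machines.begin_start(resource_group, vm_name).result()
print(f"{vm_name} resized to {new_size}")
```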

The big thing is the integration with ServiceNow, so that it provides recommendations to configuration owners. If somebody owns a server and Turbonomic makes a recommendation for it, I really don't want to see that recommendation. I want it to give that recommendation to the server owner, then have him either accept or decline that change control. Then, the change takes place during the next maintenance window.
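
An integration along those lines would typically file each recommendation as a ServiceNow change request assigned to the server owner. Here is a hypothetical sketch against ServiceNow's standard Table API; the instance URL, credentials, and field mappings are assumptions for illustration, not the integration Turbonomic ships:

```python
# Hypothetical sketch: file a resize recommendation as a ServiceNow change
# request via the Table API, assigned to the server's owner.
# Instance, credentials, and field values are placeholders.
import requests

SN_INSTANCE = "https://example.service-now.com"   # placeholder instance
AUTH = ("integration.user", "password")           # placeholder credentials

def open_change_request(server, owner, recommendation):
    payload = {
        "short_description": f"Resize recommendation for {server}",
        "description": recommendation,
        "assigned_to": owner,                      # owner accepts or declines
        "cmdb_ci": server,                         # link to the configuration item
        "type": "normal",
    }
    resp = requests.post(
        f"{SN_INSTANCE}/api/now/table/change_request",
        auth=AUTH,
        headers={"Content-Type": "application/json", "Accept": "application/json"},
        json=payload,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["result"]["sys_id"]

change_id = open_change_request(
    "app-db-01", "jane.doe", "Add 8 GB memory during the next maintenance window"
)
print("Opened change request", change_id)
```

In this pattern the change then waits for the owner's approval and a scheduled maintenance window before anything is executed.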

For how long have I used the solution?

Three years.

What do I think about the stability of the solution?

Because of the size of our company, earlier versions were slow. However, they rearchitected the product about a year or 18 months ago and containerized parts of it, so we could expand and contract. Performance has been good since then.

I have a couple of guys who support it. We upgrade six or seven times a year. We are upgrading fairly often, so we stay very close to current.

We have one guy spending maybe three weeks of the year doing upgrades. The upgrades are easy and fairly frequent, but there are almost always enhancements with these releases.

There are probably 50 people using it now. There are a handful who use it almost every day for sizing and infrastructure. We have a capacity management team who uses it all day long, every day. There are also multiple cloud teams and application teams who have been given access, so they can use it to appropriately size and work on their own applications. We are in the process of automating that to get that data out to everybody. There are a lot of other key teams who have found out what we were doing, and are like, "Can we have access to it now? So, we don't have to wait?" We are like, "Sure."

What do I think about the scalability of the solution?

The scalability is good. I don't see any issues at all.

We were initially on the high-end of their customers. We ran two instances of it for a while, just because there was a limit of like 10,000 devices per system, and we were significantly past that.

Just from a server perspective, we are running about 26,000 servers right now, where 97 to 98 percent are virtualized. One person can't get a handle on that. Even figuring out what direction to look, you need to have tools to help you.

How are customer service and technical support?

The technical support is good. We actually rarely call them. We have done quite a bit of work with them. Because of the number of purchases, they provided a TAM to work with us. So, we have kept that TAM around on an ongoing basis. We pretty much just call them, and they handle any support issues. From a support perspective, it has been one of the better experiences.

If it stops doing its thing and moving VMs around, it will be many days before it is going to have any impact on the environment, because everything is configured so well. From that perspective, it is an easier application to score than if you have a VMware host crash and trap a bunch of VMs on it.

Which solution did I use previously and why did I switch?

We started using Turbonomic as a replacement for VMware DRS, which handled the VM placement.

We knew we were having some performance issues and ready queue problems that we felt could be improved. We worked with VMware for a while to tweak settings without a lot of success. So, we saw what Turbonomic said that they could do. We tried it, and it could do those things, so we bought it.

From a compute standpoint, Turbonomic provides us with a single platform that manages the full application stack. When we originally started, we were primarily looking for something that would make better use of our existing infrastructure. Because it does a much better job of putting VMs together on hosts, we were able to save money immediately just by implementing it. At the time, we were non-cloud. There was a period of time where we just couldn't put anything into the cloud for security reasons. We have moved past that now and are moving to the cloud. This solution has a lot more use cases for that, e.g., sizing workloads for the cloud and monitoring workloads in the cloud.

How was the initial setup?

It's incredibly easy to set up. It took a couple of days. You spend more time building servers and getting ready for it.

It gathers its own data from vCenter. It doesn't touch the actual servers at all. Same thing with the different cloud vendors. It looks at your account information. It doesn't actually have to touch the servers themselves.

As far as the product goes, it's not agent-based. It can gather information and start making recommendations within two or three days, then better recommendations within a week. After that, you're good. It doesn't get much easier.
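
That agentless model works because vCenter already exposes inventory and utilization data through its API, so a read-only service account is all a collector needs. As a minimal illustration of the idea (not Turbonomic's own collector), a pyVmomi script can walk the VM inventory like this, with the hostname and credentials as placeholders:

```python
# Illustrative sketch: agentless inventory collection straight from vCenter,
# the way a read-only service account would be used. Host and credentials
# are placeholders; this is not Turbonomic's own collector.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()            # lab-only; use real certs in production
si = SmartConnect(host="vcenter.example.com",     # placeholder vCenter
                  user="readonly@vsphere.local",  # placeholder account
                  pwd="password",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        summary = vm.summary
        print(summary.config.name,
              summary.config.numCpu, "vCPU,",
              summary.config.memorySizeMB, "MB RAM,",
              "powered on" if summary.runtime.powerState == "poweredOn" else "off")
finally:
    Disconnect(si)
```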

What about the implementation team?

We did the implementation ourselves. It took one guy to deploy it.

My group built a couple of the VMs that we needed and installed it. It took a couple of days. As far as gathering information goes, you don't have to put agents on any servers or anything like that. You give it a user ID for vCenter, and we have multiple vCenters.

What was our ROI?

The open enrollment applications are all mission-critical apps. If they go down, then the clock starts ticking on its way to seven-digit sales losses. It helps us avert situations like this multiple times a week. We are constantly using it to watch and notify application owners. If we don't use Turbonomic for this, then what would typically happen is the node recommendations that they would get from Dynatrace would start showing them that there is latency in their app. If they started digging into Dynatrace, then it would come up, going, "I'm running at 90 percent CPU all the time. I better get some more CPU." Well, Turbonomic tells us two weeks before that happens, that, "We need to be adding CPUs." So, it has a proactive nature. There are a lot of other tools in play that are monitoring what is happening. For our managers, Turbonomic helps us figure out what is going to happen.

We use Turbonomic to help optimize cloud operations, and that has reduced our cloud costs. We have a lot of applications that we run which are very cyclical. In the fourth quarter of the year, they get the crap beat out of them. The other three quarters of the year, they are not used a whole lot. Without Turbonomic, would those applications get resized for the nine months of the year when they are quiet? Probably not.

It has helped save cloud costs by seven figures.

The tool itself is not free, but it's easily a positive ROI. It's hard to measure the benefit of just doing the DRS and optimizing our virtual infrastructure. I just can't stress enough how much it does such a better job of stacking VMs onto a set of ESX infrastructure. If you're using Turbonomic and looking at a cluster, you will see pretty much even utilization across a set of hosts. If you let VMware manage it, you will see one host at 95 percent, then another at five percent. Everything is running fine, and that's all they care about. However, if something starts going wrong on the host that is running at 95 percent, then you may see some degradation, just like rats leave the sinking ship trying to get out through that 5 percent host. Because it does a better job of balancing things, it utilizes infrastructure better, so you have fewer servers to host the same amount of VMs.

We have probably reduced our server purchases by a million dollars, just by having Turbonomic manage the VDI infrastructure. Before, it was static: they just put X number of VMs on each host, e.g., there are 70 VMs on that one, then move on to the next one. If we saw hotspots, then we would manually try to move a VM or two around.

We are using Turbonomic now to manage that and the supercluster feature that lets us migrate across clusters, which is really key for the VDIs, because we had infrastructure that wasn't well utilized 24 hours a day. So, we were buying lots of extras. The reason for that was we have developers in India, tons of people offshore, and people in the Philippines. As those people come and go, the utilization of different clusters shifts radically. So, if you're trying to have enough infrastructure to manage each cluster individually, then it takes a lot more than if you're managing it as a whole. That is one of the things that we use it for.

What's my experience with pricing, setup cost, and licensing?

When we have expanded our licensing, it has always been easy to make an ROI-based decision. So, it's reasonably priced. We would like to have it cheaper, but we get more benefit from it than we pay for it. At the end of the day, that's all you can hope for.

We paid for our TAM, but I'm sure it's embedded in the cost. However, that's optional. Obviously, you can do it all yourself: Open all your own support tickets and just send in an email to your TAM. Our TAM has access to log in, because she's set up as a contractor for us. So, she can actually get in and work with us.

Which other solutions did I evaluate?

There weren't a lot of other options available at the time, but we did look at three others. I know there are other companies on the market. I don't remember which ones were competing with it at the time. There was only really one other in that space at the time, and there's a bunch now. Then, VMware was there competing as well, saying, "You just don't have it configured right. We can do better," but they really couldn't.

The model behind the scene that Turbonomic uses to make decisions just has a better way of balancing resources. It considers a lot more factors.

We use other tools to provide application-driven prioritization, to show us how top business applications and transactions are performing.

What other advice do I have?

Unfortunately, a lot of our infrastructure in the cloud is still legacy. So, we can't make full use of it to go out and resize a server, because it will bring the application down. However, what we are doing is setting up integration servers now. This puts a change control out to make the recommended change and the owner of the server can approve that change, then it will take place within a maintenance window.

We don't manage resources in real-time. Most of our applications just don't support that. We don't have enough changes required that it would be mutually beneficial to us, so we aren't doing that yet, but we're headed in that direction.

It would be a big stretch for us to actually use Turbonomic to take resources away from servers. Our company has a philosophy, decided four or five years ago, that the most important thing for us is for our applications to be up. So, if we waste a little money on the infrastructure to bolster applications when there is a problem, that is okay. We even have our own acronym for it: margin of error (MOE). Typically, we are looking to have at least 30 percent free capacity on any server or cluster at any given time, which is certainly not running in the most efficient way possible, but we're okay with that. While we may spend three million dollars more a year on infrastructure, an hour-long outage might cost us a million dollars. So, if there is a major problem with big performance degradation, then we want to have the capacity to step up and keep that application afloat while they figure out the issue.

It projects the outcome if you are going to move from one set of infrastructure to another and makes a recommendation. For example, if I'm moving from one type of server to another type with different core counts, faster cores, and faster memory, then it will tell me in advance, "You need fewer resources to make that happen because you are moving to better equipment."

Biggest lesson learnt: What you should do is obvious; it is just difficult to get people to do it. You need to have servers grouped and reported up to an executive level that can show the waste. Otherwise, you are working with server owners who have multiple priorities. They have a release that's due in two weeks which will impact their bonus at the end of the year, etc. If you hit them up and go, "Hey, you're wasting about a thousand dollars a week on this server, and more on the others, so we need to resize them," they don't care. On an individual application or server basis, it's not a big deal. However, across a 26,000-server environment, $10,000 here or there pretty quickly becomes real money. That is the biggest challenge: competing priorities. You have one group trying to manage infrastructure for the least possible amount while getting the best performance, and you have other people who have to deliver functionality to a business unit. If they don't, the business unit will lose a million dollars a day until they get it. Those are tough priorities to compete with.

Build that reporting infrastructure right from the beginning. Make sure you have your applications divided up by business unit, so you can take that overall feedback and write it up when you are showing it to a senior executive, "Hey look, you are paying for infrastructure. You are spending a million dollars more a month than you should be."

I would rate this solution as an eight (out of 10). It is a great app. The only reason I wouldn't give them a higher rating is the reporting. That's just not their focus, but better reporting would help. We use an app called Cloud Temple with them, which is actually a partner of theirs. Turbonomic will tell you reporting is not what they see as their core competency; their focus is on taking actions to optimize your environment. However, at the same time, they have partnered with another company that does better reporting.

Which deployment model are you using for this solution?

On-premises
Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Matthew Koozer
ICT Infrastructure Team Cloud Engineer at a mining and metals company with 10,001+ employees
Real User
Top 20
Provides recommendations on whether workloads should be scaled up or down

Pros and Cons

  • "The tool provides the ability to look at the consumption utilization over a period of time and determine if we need to change that resource allocation based on the actual workload consumption, as opposed to how IT has configured it. Therefore, we have come to realize that a lot of our workloads are overprovisioned, and we are spending more money in the public cloud than we need to."
  • "There is an opportunity for improvement with some of Turbonomic's permissions internally for role-based access control. We would like the ability to come up with some customized permissions or scope permissions a bit differently than the product provides."

What is our primary use case?

We primarily use it as a cost reduction tool regarding our cloud spend in Azure, as far as performance optimization or awareness. We use Turbonomic to identify opportunities where we can optimize our environments from a cost perspective, leveraging the utilization metrics to validate resources are right-sized correctly to avoid overprovisioning of public cloud workloads. We also use Turbonomic to identify workloads that require additional resources to avoid performance constraints. 

We use the tools to assist in the orchestration of Turbonomic generated decisions so we can incorporate those decisions through automation policies, which allow us to alleviate long man-hours of having someone be available after hours or on a weekend to actually perform an action. The decisions from those actions are scheduled in the majority of cases at a specific date and time. They are executed without having anyone standing by to click a button. Some of those automated orchestrations are performed automatically without us having to even review the decision, based on some constraints that we have configured. So, the tool identifies the resource that has a decision identified to either address a performance issue or take a cost saving optimization, then it will automatically implement that decision at the specific times that we may have defined within the business to minimize impact as much as possible.

There are some cases where we might have to take a quick look at them manually and see if it makes sense to implement that action at a specific date and time. We then place the recommendation into a schedule that orchestrates the automation so we are not tying up essential IT people to take those actions. We take these actions for our public cloud offering within Azure. We don't use it so much for on-prem workloads. We don't have any other public cloud offerings, like AWS or GCP. 

We do have it monitor our on-prem workloads, but we do not really have much of an interest in the on-prem side because we're in the process of a lift-and-shift migration, moving all workloads to the cloud. So, we are not really doing too much with the on-prem stuff. We do use it for some migration planning and cost optimization to see what a workload would look like once we migrated it into the cloud.

From our on-prem perspective, we do use it for some of the migration planning and cost planning. However, most of our implementations with this are for optimization and performance in the public cloud.

It provides application metrics and estimates the impact of taking a suggested action from two aspects: 

  1. It shows you what that impact is from the financial aspect in a public cloud offering. So, it will show you if that action will end up costing you more money or saving you money. It also will show you what that action will look like from a performance and resource utilization perspective. It will tell you, "If you make the change, this is what that resource utilization and consumption will look like from a percentage perspective, whether you will be consuming more or fewer resources, and whether you're going to have enough resource overhead for performance spikes."
  2. It will give you the ability to forecast what the utilization and consumption are going to look like further out. So, you can gauge how the action you're taking now is going to look and work for you in the long term (a rough sketch of this kind of projection follows this list).
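
To make that concrete, here is a toy projection of a single resize action; every number in it is assumed for illustration and is not from the product:

```python
# Toy projection of a single resize action; every number here is assumed.
current_vcpus, proposed_vcpus = 8, 4
current_hourly, proposed_hourly = 0.38, 0.19    # assumed on-demand rates, USD/hour
observed_avg_cpu_pct = 22.0                     # assumed average utilization

# First-order estimate: the same work spread over fewer vCPUs.
projected_cpu_pct = observed_avg_cpu_pct * current_vcpus / proposed_vcpus
monthly_cost_delta = (proposed_hourly - current_hourly) * 730  # ~hours per month

print(f"Projected average CPU after the resize: ~{projected_cpu_pct:.0f}%")
print(f"Estimated monthly cost change: {monthly_cost_delta:+.2f} USD")
```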

How has it helped my organization?

In our organization, optimizing application performance is a continuous process that is beyond human scale. We see tremendous value in Turbonomic to help us close that gap as much as possible within our organization. Essentially Turbonomic will provide us with a recommendation on how to address a workload in real-time based on its actual utilization. Then, we have pre-defined time slots where those actions can be implemented with minimal impact to the business because some of the changes may require rebooting the server. So, we don't want to reboot the server at 2:00 in the afternoon when everyone is using it, but we might have a dedicated time slot that says, "After 5:00 today or 2:00 in the morning when no one is using it, this server can be rebooted to take the action."

We have leveraged Turbonomic not only to ingest the data from the utilization of workloads and come up with performance-driven decisions, but also to help orchestrate and initiate those actions automatically for a very large portion of our organization without us having to be involved at all. For some more sensitive workloads, we look at them and coordinate with the business on whether we will take action at another date and time.

We primarily use it in the public cloud for servers. We also monitor storage and databases within Azure. This is another added benefit that we like about Turbonomic. When we look at a decision, we are looking at what is driving it: from a storage perspective, the IOPS being driven to a specific storage solution within our public cloud offering; from a database perspective, specific DTU utilization; or even just a percentage of memory or CPU consumption. It takes into account all those various aspects and never puts us in a position where we take a decision or action without accommodating these other pieces, which could negatively impact us.

That level of monitoring is what has given us the confidence to allow Turbonomic to implement actions automatically without having IT oversight micromanage decisions, because it provides that holistic view, takes into account all those aspects, and ensures that a decision that is implemented never puts you into a point of contention or concern. We have the confidence to allow the appliance, the software solution, to take actions with little to no IT oversight.

Turbonomic has identified areas within our public cloud where we had storage that was not being used at all. So, it provided us with insight into what that unused storage was so we could delete the unused storage and save on the recurring consumption cost. That was very helpful.

We have identified numerous workloads which have been overprovisioned by an administrator. We were able to essentially right-size workloads to use less resources, which cost us less money in our public cloud offering, e.g., a configuration with less memory or less CPU than what it was originally configured for. That helps us reduce our cloud consumption significantly.

In addition to ensuring that workloads are right-sized correctly, we have been able to save even more with our public cloud consumption by identifying workloads where we could purchase reserved instances, essentially long-term contracts for specific workload sizes. This allows us, on average, to save an additional 33% or more on our server run rates.
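
As a toy illustration of that reserved-instance math (the hourly rates here are invented, not our actual pricing), a roughly one-third reduction falls out directly from the discounted rate:

```python
# Toy reserved-instance comparison; the rates are invented for illustration.
payg_hourly = 0.30        # assumed pay-as-you-go rate, USD/hour
reserved_hourly = 0.20    # assumed effective reserved rate, USD/hour
hours_per_year = 8760

payg_yearly = payg_hourly * hours_per_year
reserved_yearly = reserved_hourly * hours_per_year
savings_pct = 100 * (payg_yearly - reserved_yearly) / payg_yearly

print(f"Pay-as-you-go: ${payg_yearly:,.0f}/year, reserved: ${reserved_yearly:,.0f}/year "
      f"({savings_pct:.0f}% saved)")
```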

Turbonomic provides a proactive approach to avoiding performance degradation. It has allowed us to detect issues before they have actually become issues. Traditionally, in IT, we would not be aware of an issue until someone from the business came to us with an issue, then we would investigate the issue. In some cases, we would spend a couple hours trying to figure out what the issue was, then determine if something needed more resources, like more memory. Since Turbonomic, we have been able to almost immediately identify that our system needs more resources and take the action right then and there. Or, Turbonomic has identified there is an issue and we take an action, then notify the business that an action was taken in order to preemptively avoid a business impact.

Previously, a business impact use case would potentially take us hours. With Turbonomic, whenever we run into a business impact use case now, before we even log into a system to initially troubleshoot it, the first thing we do is go to Turbonomic and see, "What is Turbonomic telling us? What is the workload like now? What has it looked like in the last 24 hours or week? Do we see any trends to help guide us towards identifying where we should go from a troubleshooting perspective?" From that aspect, Turbonomic has definitely helped guide our path to resolution.

What is most valuable?

The ability to look at a workload from an actual consumption perspective for the resources that it's consuming internally is particularly valuable. For instance, when we have a server in the public cloud, we might provision a certain amount of memory resources to it and CPU, e.g., two processors and 24GB of memory. The tool provides the ability to look at the consumption utilization over a period of time and determine if we need to change that resource allocation based on the actual workload consumption, as opposed to how IT has configured it. Therefore, we have come to realize that a lot of our workloads are overprovisioned, and we are spending more money in the public cloud than we need to. 

This solution allows us to have the data to make business decisions without having a concern on whether we are going to be impacting the business negatively by taking the wrong action. We actually have the analytical data to back decisions. This helps us have discussions with the business on if it's the right decision to make or not. 

Turbonomic has the ability to manage the full application stack. We have not plugged in all aspects of our application stacks, but it does provide that. One of the things that we love about Turbonomic is that we're not only ingesting the data into Turbonomic and reviewing the decisions that Turbonomic is providing, but Turbonomic is also essentially providing us a single pane of glass to implement those actions. So, if there is an action that we would like to take, whether it is someone manually clicking a button and taking the action or the action being initiated automatically by Turbonomic, that is all done from within the appliance. We don't have to go and log in somewhere else or log into our public cloud offering and take that action. It can all be done from a single management pane. We can look at our supply chain for a specific application or workload and see if one specific part of the solution is causing a problem, as opposed to having a bunch of people on a bridge call, each looking at the different aspects of the solution that they are more intimate with. Turbonomic shows us, from a service-chain perspective, how things fit together, and helps us identify the single point or bottleneck causing the impact. We have used it from that perspective.

It provides the ability for us to create customized dashboards and custom reports to help showcase info to key stakeholders. We have leveraged the custom reporting for things, like SAP, that we have running in the public cloud to show how SAP is running, both from a performance aspect as well as from a cost perspective.

What needs improvement?

There is an opportunity for improvement with some of Turbonomic's permissions internally for role-based access control. We would like the ability to come up with some customized permissions, or to scope permissions a bit differently than the product provides. We are trying to get broader use of the product within our teams globally. The only thing making mass global adoption a little hard is the question, "How do we provide access to Turbonomic and give people the ability to do what they need to do without impacting others who might be using Turbonomic?" because we have a shared appliance. I also feel that the scenario I'm describing is, in a way, somewhat unique to our organization. It might be something that some others run into, but predominantly, most organizations that adopt Turbonomic probably don't run into the concerns or scenarios that we're trying to overcome in terms of delegating permission access to multiple teams in Turbonomic.

For how long have I used the solution?

It has been somewhere between two and a half and three years since we started our relationship with them.

What do I think about the stability of the solution?

The stability is very good. We have not had to open up any support tickets for the product to troubleshoot or recover the appliance. It has been running just fine. We haven't had to redeploy or recover anything with it, surprisingly, in the two and a half years that we have had it. The code updates are pretty easy to perform as well. Ongoing maintenance is really simple, and our account team helps us with the code updates. They get a meeting invite together, and then it takes less than 10 minutes, but they are there every step of the way.

What do I think about the scalability of the solution?

It is pretty scalable in terms of any concerns that we would have. Right now, we are using on-prem appliances. However, if we needed to, they have the ability to move us to a SaaS-based offering, which would help some of our sister companies adopt it faster, because they would not be limited to network access within this particular data center. We could leverage the same licensing from a SaaS perspective, and they wouldn't have to use a VPN to connect to the appliance to use it.

There are situations from a scalability perspective where we have to take into account things like GDPR. For things where GDPR or data sovereignty come into play, the scalability becomes a bit of a concern because you can only keep the appliance within that specific region. You need separate instances of Turbonomic, but the team has the ability to allow us to tackle that from a licensing perspective. This is a pretty minimal concern. We tackle GDPR or data sovereignty from the perspective that we just apply an instance of Turbonomic within that specific country region.

How are customer service and technical support?

If we have any questions or concerns, the account team as well as the product support team are always there and very accommodating to help us. With any problems that we have, even if they are not built into the product, we have worked with them to give them feedback on the product and on how we would like it to work. They have worked with us to help import some of that functionality into the product so it is available, not just for us, but for other customers who use the product as well.

How was the initial setup?

The initial setup was relatively straightforward. It was a pretty easy setup. I wouldn't say it was any more difficult than any other tool that we set up or have used in our environment. It is pretty easy to deploy, then probably just as easy to configure once it was deployed.

What was our ROI?

It helps us gauge our return on investment for the purchase of Turbonomic, based on the overall actions that we've taken and how much money we have saved by taking those actions over a period of time.

In the last year, Turbonomic has reduced our cloud costs by $94,000. It has identified a lot more cost saving areas, but we haven't taken advantage of those.

The number of tickets that have come in for performance issues has amounted to almost nothing in the calendar year. I don't know what we had before, but now it is fewer than 10 to 12 tickets a year for performance issues.

It has definitely provided a huge benefit in the area of man-hours saved. Without the tool, we would be flying blind on that and would probably be spending a lot of man-hours trying to formulate in-house strategies on how to reduce costs. Our company is a very lean company, in terms of headcount for IT resources as well as cloud skillset awareness. Having a tool like Turbonomic has allowed us to adopt and implement strategies like this, like cost saving measures with the public cloud, probably making us exponentially faster than we could have been without them.

Beyond ingesting workload performance data to provide performance-driven analysis and recommendations on whether a workload should be scaled up or down, a side effect of the ingestion of this data and the business decisions coming out of Turbonomic is that it has been helping us identify workloads which are really not being used at all. From identifying those workloads that are not being used, we are able to go through our lifecycle management faster and more efficiently than we would have in the past. We have been able to decommission servers, essentially deleting them from our public cloud and completely eliminating the operational cost of those workloads altogether. So, it is not just ensuring that a VM is right-sized or locking in a commitment; it is also identifying workloads whose utilization is so low that they are candidates for removal.

We are able to go back to the business and have a discussion with them, based on the utilization of that VM over a period of time and the data that we have, and then have the justification and communication with the business to say, "Yeah, it doesn't make sense to have this workload in the environment anymore. Let's delete it," or, "Yeah, it's something that isn't used at all. Let's go ahead and delete it." It is allowing us to identify areas to save cost, but it's also helping us ask, "This workload is costing us this much money. Is it really worth spending this much money every month for this solution that is running in the public cloud? Is it generating enough revenue for the business to warrant the run rate? Is the solution providing a service to the business that justifies the operational consumption on a monthly basis?" We are able to have these internal discussions within the business based on the data that Turbonomic is providing. This is a side effect of the product, because the product is not providing these decisions and implementing them; rather, the product is providing us the data to have these discussions and net these decisions as an outcome. That ends up saving money in our public cloud offering.

Which other solutions did I evaluate?

We did try some other solutions as PoCs before we worked with Turbonomic. Unfortunately, I am not aware of who those companies were because that was before I came onboard with the team. The big thing that it always came down to was whether we were going to adopt the entire implementation setup and configuration aspect. For example:

  • How much work was it going to take to deploy the appliance? 
  • How many man-hours would it take to configure it? 
  • What would the continuous configuration and management look like?
  • Was it really saving us time and money in the long run?

Other solutions always fell flat because of how much involvement they would require from IT to deploy and operate, as well as the ongoing configuration and maintenance of the appliance.

What other advice do I have?

It doesn't pick up the entire supply chain automatically. It requires minimal effort in configuration. We have to show a relationship in a sense that this workload is associated with another workload. However, once that relationship is established, the solution helps us manage our business-critical applications by understanding the underlying supply chain of resources.

Our capital expenses are relatively flat. We are not purchasing any new equipment. We are actually in a consolidation process. Everything is getting moved to the public cloud. From an operational perspective, with our workloads being in the public cloud, it has provided us:

  1. The ability to identify what we have running in the public cloud and how much it will actually cost us. 
  2. How we can reduce public cloud operational costs, e.g., what actions we can take to help reduce operational expenses in the public cloud.

It identifies areas where we can delete storage that is not being used. We can address right-sizing workloads that are overprovisioned in the public cloud, as well as locking in long-term commitments for workloads in the public cloud and saving, on average for us, 33% or more on those workloads, as opposed to just paying the pay-as-you-go hourly rate with the provider.

Try to look at things not just from a cost-savings perspective, but also from the perspective of avoiding performance issues. We looked at how we quantify our spend in the public cloud and how we avoid spend in the public cloud, but we always forgot that there were workloads out there that do have performance impacts. So, we counted this as a cost savings and cost optimization tool, but it became so much more than that.

We developed a crawl, walk, run approach. We took some workloads in our public cloud and looked at the business decisions. We took the decisions, then we tested to see what the outcomes were. As we went through those actions manually and gained confidence in how those actions were being made and what the impact afterward was, the business became more confident in the tool. We no longer needed to have meetings to discuss why we were doing what we were doing.

It then became a point of communication. An action would be taken because Turbonomic said it was the right thing to do. Nowadays, it's not even questioned. When I talk to people about trying out Turbonomic and how to adopt it for their workloads, I say to look at areas which are current pain points in your environment and see where Turbonomic would fit into them, instead of trying to come up with workloads or use cases to plug into Turbonomic. Instead of trying to figure out what you have or where you could put Turbonomic in your environment, see where your environment fits into Turbonomic. That was the way we were able to drive adoption much faster and use it not just as a reporting tool, but as an orchestration tool as well.

They have some room to grow. I wouldn't give them a perfect 10. I would probably give them an eight and a half or nine (as a whole number).

Which deployment model are you using for this solution?

On-premises
Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
David Grudek
System Engineer at a financial services firm with 201-500 employees
Real User
Top 20 Leaderboard
Looks at history and patterns to understand our spikes and when to move things around, always keeping things in perfect balance

Pros and Cons

  • "I only deal with the infrastructure side, so I really couldn't speak to more than load balancing as the most valuable feature for me. It provides specific actions that prevent resource starvation. It always keeps things in perfect balance."
  • "It would be good for Turbonomic, on their side, to integrate with other companies like AppDynamics or SolarWinds or other monitoring softwares. I feel that the actual monitoring of applications, mixed in with their abilities, would help. That would be the case wherever Turbonomic lacks the ability to monitor an application or in cases where applications are so customized that it's not going to be able to handle them. There is monitoring that you can do with scripting that you may not be able to do with Turbonomic."

What is our primary use case?

We pretty much use it only for load balancing between hosts.

We're a payroll company and Turbonomic is really important for us from about November until March, each year, because our end-of-year processing increases our load by six to seven times. That's especially true in November and December when companies are running their last payrolls. If we're going to be losing any customers, they definitely have to finalize everything all at one shot. In addition, companies that pay out bonuses at the end of the year also have to be running all these extra payrolls. There are a slew of reasons for extra payrolls at that time of year. They may need to do some cleanup if they messed up something and didn't do so all year long. At that point, they have to do it before December 31st. And after December 31st is the beginning of tax preparation, so our systems are very heavily utilized.

It does a great job year-round. We're in a situation where we have plenty of resources during most of the year, but at year-end, depending on how busy it gets, the load can overwhelm the systems if you're not careful about where a VM sits and on which host.

How has it helped my organization?

We have parts of our payroll engine that run pretty hot, depending on what's happening. Those pieces of the payroll engine and the SQL servers tend to overrun a server pretty quickly, if you're not careful. But Turbonomic always does a great job of making sure that that is not happening.

Our company is growing like a monster. Even during COVID we've been growing, where a lot of companies, unfortunately, are not. Because we've gained a lot of new customers during COVID, because other payroll companies have gone under or because our pricing model is good, the result has been a lot more load on our systems. And even though we've had much more load, Turbonomic has done a great job of keeping everything balanced. We're able to completely utilize our systems before having to introduce more hardware.

Its whole job is to manage our business-critical applications by understanding the underlying supply chain of resources. In my opinion, applications are applications. When your application is running, it's reserving a piece of memory, it's utilizing the CPU, and it's utilizing the network interface. There is a certain limit to the hardware that's available to it. Applications need watching. If we were to use the pieces where it digs down inside the application, it would probably even do better, but we're not using those pieces yet. In the end, no matter what the application is doing, it's using resources, either on a regular basis or on a random basis, and Turbonomic looks for patterns and spikes and how long these patterns last. It looks at the risk of a spike only happening for 30 seconds versus 30 minutes. It makes decisions to move other things off or move off the application in question. It does a great job keeping that balance.
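
As a greatly simplified illustration of what such balancing boils down to (Turbonomic's actual model weighs many more dimensions and uses its own analytics), a greedy placement that keeps host utilization even might look like this, with the host capacities and VM demands invented:

```python
# Greatly simplified illustration of balancing VMs across hosts; the real
# product weighs many more dimensions (memory, I/O, network, history).
# Host capacities and VM demands below are invented.
hosts = {"esx-01": 100.0, "esx-02": 100.0, "esx-03": 100.0}   # CPU capacity units
vms = {"sql-01": 38, "sql-02": 30, "payroll-01": 25, "web-01": 12,
       "web-02": 10, "batch-01": 22, "batch-02": 18}          # CPU demand units

# Greedy placement: biggest consumers first, each onto the least-utilized host.
load = {h: 0.0 for h in hosts}
placement = {}
for vm, demand in sorted(vms.items(), key=lambda kv: kv[1], reverse=True):
    target = min(load, key=lambda h: load[h] / hosts[h])      # least-utilized host
    placement[vm] = target
    load[target] += demand

for host in hosts:
    pct = 100 * load[host] / hosts[host]
    members = [v for v, h in placement.items() if h == host]
    print(f"{host}: {pct:.0f}% -> {', '.join(members)}")
```

The takeaway is only that continuous rebalancing keeps every host at a similar utilization instead of letting one run hot while another sits idle.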

We hardly do any manual execution. Maintenance windows are when it's probably most important to us. For example, when we're upgrading or doing a repair on an ESX host, we're really taxing the other servers because those CPUs and all the I/O capability on that host are lost. That's when it's even more important for Turbonomic to keep the rest of the system healthy enough and to keep the system going without showing any degradation. It helps us while we're doing maintenance.

It provides a proactive approach to avoiding performance degradation, absolutely. It always looks at history and looks for patterns. In a lot of cases, it knows that there's a scheduled task or that, on average, a customer is always hopping on at around three o'clock. After a week or two of seeing the exact same things happening (the timeframe really depends on how big your organization is; the bigger your organization, the longer it takes to understand the full dynamic), it really understood all the patterns and all the schedules. In the past, we'd see systems starting to get a little hotter and then a little hotter still. With Turbonomic, that issue would just go away by shuffling stuff around. After a week or two we wouldn't see that thing getting warm anymore. When that application or server got hot, it never put that host into any kind of jeopardy anymore, not even a little bit where we thought, "Hey, we should look at that."

We have definitely seen a reduction in tickets opened for application performance issues. In some cases we get tickets for performance problems because the developer wrote crappy code and wants to blame us in infrastructure. But I can show them, "Look, we're not overloaded." They look at some of the logs we provide them, which show that the servers are nowhere near full capacity, and usually they find that there's some kind of weird issue with a database query that they wrote. The solution has definitely reduced tickets for application performance problems a little bit, but it has mostly decreased troubleshooting time.

It's helped us to always meet our SLAs. In previous companies I've worked at, either the company was so cheap or the boss didn't understand how the infrastructure works, and we got to the point where we were over-utilizing our hardware. That's where Turbonomic really makes a difference, because without it you can be way over-committing what you have versus what you need. You always have servers that are sitting there idle, so it's best if you can balance the ones that run hot with the ones that sit idle and shuffle the load around based on what you're utilizing. If you don't have Turbonomic, then whichever boxes happen to be hosting that load can go into overdrive to the point where you start ballooning and paging. That can cause a denial of service because you don't have enough resources to handle the workload. It's really great to maximize every inch of your infrastructure and make sure you're utilizing everything to its fullest capacity.

Whoever is on call loves Turbonomic because if we didn't have it we'd be getting alerted much more often. That's especially true at three o'clock in the morning when some backup or something that's not even important is running and the pager will go off because it hits some threshold for too long. Turbonomic sees, "Okay, this server does this at this time and it's going to use a lot more resources than anybody else," and it just shuffles everything away.

What is most valuable?

I only deal with the infrastructure side, so I really couldn't speak to more than workload balancing as the most valuable feature for me. It provides specific actions that prevent resource starvation and always keeps things in balance. Most of the time, the workloads that need balancing tend to be our SQL servers.

When I first started at this company, only the boss knew about Turbonomic, and he had totally forgotten to even mention to me that they had it. They weren't using any other software like that. I was always curious and kept asking the other guys how the system always stayed so balanced and why we never seemed to have a host that ran really hot, under any circumstances. Finally, in my first review, I asked my boss, "What are you guys doing in the background that you haven't told me about, that keeps the system so balanced?" And he said, "Oh yeah, we're using Turbonomic." For whatever reason, they didn't have a DNS entry for it. The guy who put it in always just connected to it by IP, and he totally forgot about it because it just works. You don't have to go in there and do a lot to it unless there is a problem, and we never have a problem.

Also, since we're only using it for the infrastructure part, it's not telling us anything about the application. It just tells us about the server that is running the application. But if the application is getting bogged down because you're starting to see disk I/O problems, it does a fabulous job of recommending: "Hey, let's move this here or do that there." In every case that I can ever remember, it has always done a great job.

It doesn't really require any maintenance. Just set it up and let it do its job.

What needs improvement?

On the infrastructure side, they've been doing it long enough. But until I get a better use case for the cloud, the only thing I can think of is that I'd like to see it work with SevOne, when you're doing true monitoring, so that the software packages work together. 

It would be good for Turbonomic, on their side, to integrate with other companies like AppDynamics or SolarWinds or other monitoring software. I feel that the actual monitoring of applications, mixed in with their abilities, would help. That would be the case wherever Turbonomic lacks the ability to monitor an application or in cases where applications are so customized that it's not going to be able to handle them. There is monitoring that you can do with scripting that you may not be able to do with Turbonomic. So if they were able to integrate better with third-party monitoring software—and obviously they can't do them all, but there are a few major companies that everybody uses—and find a way to hook into those a little bit more, the two could work together better.

For how long have I used the solution?

We were actually a customer of theirs even before it was Turbonomic, back when it was VMTurbo. It's been at least 10 years.

What do I think about the stability of the solution?

I've never ever had it crash or go down.

Turbonomic is one of the best software solutions ever written. You just set it up and you only go in there when you're having a problem. The truth is that we don't have that many problems anymore. The only time we had a problem was shortly after I first started, because none of the guys even paid attention to it because it always did its job. We ended up having to change our vCenter server. We moved from a physical box to an appliance and when we did that, the vCenter had to have a new name. Because none of the guys, other than my boss, were around when Turbonomic was set up, they didn't log into it because it always did its job. They totally forgot about it. And for something like two weeks we were seeing some goofiness where boxes were getting hot. I messaged, "Hey, did you put that change in Turbonomic?" and he replied, "Did I put what in what?" Other than that, we just don't have to go in there. It works so well that we just don't do anything to it.

We have it set up to throw out some alerts if things get too haywire, but it didn't even get to that point. We like to be very proactive about keeping resource utilization on our systems low. Had we let the problem go on a little bit longer, it probably would have thrown out an alert, but the root cause was that we didn't have the new vCenter in there.

What do I think about the scalability of the solution?

At the first shop I worked at where we got Turbonomic, we had over 60 hosts and it did a magnificent job. In my current company we're only monitoring 15 or 20. I can't see it having a problem. The more servers you have, the more return you get on it.

In terms of increasing our usage, we're always pushing the developers to do stuff with it. The problem for us is that we're owned by a parent company and IT infrastructure works for and reports to the parent company. But the IT development software group reports to the actual company. They're under a totally different chain of command, so we can't really dictate anything that they do. All we can do is make recommendations, but their director has his own plans. We try to show them the benefits, but it's a lot of work to sit down and configure it, which is not worth it if they're not going to use it. And we have enough other projects that we have to work on, so we have to pick our battles.

How are customer service and technical support?

Their tech support is awesome. We've never had any problems with them. But to be honest with you, we haven't really had to call them in 10 years, except for one time when I called them because I couldn't find our license key. It was not about supporting the software, per se.

Which solution did I use previously and why did I switch?

We were trying to use VMware's version of it, but it's a pile of junk. It doesn't work. It just sees that something is busy and moves it somewhere where it's slower. Turbonomic is better because it trends and it looks at history. If it knows that something is only going to run hot for 30 seconds, but it's going to take 45 seconds to move everything around, then it's not worth moving. You don't get that out of VMware's product. It just saw, "Okay, so it's hot, start moving." But the problem is that when you start doing the move, you're also now utilizing even more resources, and you could save on those when the move is going to take longer than the actual task that's causing the spike. Turbonomic sees that and it doesn't necessarily move stuff just for the sake of moving it. It knows that this is something that happens every day at this time and we're just going to ride it out.
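
The tradeoff described above, not shuffling a workload when the move would outlast the spike, can be expressed in a few lines. This is only an illustrative sketch with assumed numbers, not code from either product.

```python
from datetime import timedelta

def should_move(expected_spike_duration, estimated_move_time):
    """Moving a VM consumes resources itself, so only move when the spike
    is expected to outlast the cost of the move."""
    return expected_spike_duration > estimated_move_time

print(should_move(timedelta(seconds=30), timedelta(seconds=45)))  # False: ride it out
print(should_move(timedelta(minutes=30), timedelta(seconds=45)))  # True: worth moving
```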

We were actually one of their first customers, when they were still ironing out all the wrinkles in their algorithm. My boss and I were both watching it shuffle stuff around non-stop; it would move a workload before it had even finished gathering data on it. We called in, they took notes, and they produced a new version. It felt like beta-testing even though it had been a production product for a while. But they were very responsive and produced patches really quickly. They were a really good company, really friendly, and they always tried to deliver what they promised.

How was the initial setup?

If you can't set this thing up then you shouldn't be in IT. We're only using it from the infrastructure side. You log into it, you set your passwords, import your vCenter into it, put in the credentials, and tell it what vCenters you want it to monitor. And it just starts gathering data.

We did our setup many years ago and, back then, we had a guy there walking us through it because they were a brand-new company. They really gave us the white-glove treatment. I think we may have spent two hours on it, but most of that was talking. If you buckle down and just shut up and do it, it would probably take 15 or 20 minutes. It only takes one person to implement it.

We have six guys who use Turbonomic now. Some are higher-level guys and we have our entry-level guys, but everybody has access to it and looks at it if they need to.

What was our ROI?

I don't know what we would have bought if we didn't have it, so it's hard to say how much we have saved. We may have bought more hardware, thinking we needed it when we didn't; we just needed to shuffle stuff around. So it's hard to say what we would have done without it and without that reporting telling us, "Hey, this is where the problem is."

What's my experience with pricing, setup cost, and licensing?

If you're a super-small business, it may be a little bit pricey for you. The problem with small businesses is that the owners are super-cheap and they don't want to spend anything unless they absolutely have to. It's really hard to explain this solution to people with that mentality. But you can run more servers on less physical hardware because it will keep things balanced based on your usage patterns. 

But in large, enterprise companies where money is, maybe, less of an issue, Turbonomic is not that expensive. I can't imagine why any big company would not buy it, for what it does.

If you didn't have it and you went out and bought a whole new server, you're talking about spending something like $7,000 with Microsoft for a decent license, and then you're talking about a VMware license as well, which I would venture is in the $5,000 to $7,000 range. And memory is so expensive all the time. And another server is going to cost, say, $10,000 to $15,000. If you do that twice over a couple of years, Turbonomic will have paid for itself. And that's not to mention the fact that it's also made things so much better for you because it has kept the system stable. I don't think Turbonomic is expensive, in that sense.

I put in a lot of expense and time in the very beginning, because I was trying to learn it. But if you're smart, you'll look back and see how well it's managing your systems now and you'll feel like a fool that you went out and bought all that new hardware in the past, because you probably could have gotten away without it. If you're truly maximizing your systems, the way that Turbonomic does, you can get away with less hardware with the same infrastructure because it's maximizing the hardware better. And that keeps your licensing costs down.

Which other solutions did I evaluate?

We did a lot of research, especially at that time, and there aren't really a whole lot of other software solutions out there, at least nothing that's comparable to Turbonomic. There are other companies that do something similar, but they don't have anywhere near the level of complexity or the type of algorithms that Turbonomic is using to keep the system stable. 

I don't remember what we evaluated at the time, other than VMware, because everybody works with VMware, but the other solutions we had looked at were not even in the realm of Turbonomic, so we didn't evaluate them for more than a day.

What other advice do I have?

If you're looking into Turbonomic, just do it. You will not regret it.

The biggest thing that I've learned is that you don't realize how much your hardware can do until it's truly balanced. Some people operate foolishly and they just won't step up because they're being cheap. Other people want to be ultra-conservative because they don't ever want to have a problem, but using software like this, you realize that there is a balance. If you trust the software, you get to utilize your hardware better while still feeling like you have those reserves available without putting yourself at risk by being foolish.

It provides a single platform that can manage the full application stacks, but we're not using that aspect. Our developers are not interested in using it yet. We're in the process of looking for a new monitoring software as well, and I'm pushing heavily for them to look at SevOne but we've had some unfortunate experiences with the people at SevOne. If we go down that route and start using SevOne, my boss is going to lean on them much more heavily to start integrating with Turbonomic.

It only handles virtualization right now, for us. It's probably going to start handling cloud soon, because we're just starting to migrate things there. We have some things in the cloud, but we're looking at moving quite a few other servers up to Azure. The solution understands the resource relationships perfectly, on-prem. From what I have seen so far of the cloud piece, it seems to understand that, mostly, although the cloud is still fairly new compared to on-prem infrastructure. I have no doubt that they're going to make huge strides and make that part even better. I don't know that it's as good in the cloud as it is on-prem. We have used it a little bit in some testing, but we haven't run it in production for any long periods of time. But we're really hoping it reduces our cloud cost at some point, because those cloud vendors really take advantage of every ounce of I/O that you use.

Honestly, on a scale of one to 10, I would give Turbonomic a 12. It's way better than a lot of software. Other solutions look really shiny—and if you're like a fish and all you care about is the shiny thing, that's one thing—but when those products are delivered, they don't do half of what they say they're going to do. They'll say, "Oh, that's in the next release," or "Oh, we're working on that." Turbonomic was very upfront about what their software did. Yes, they had a few bugs, but they were also just starting out at the time. We expected that in the beginning because it was a brand-new company. But what they told you it would do, it did, and it did it well. Nowadays, that's hard to find.

Which deployment model are you using for this solution?

On-premises
Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Anita H
AVP Global Hosting Operations at an insurance company with 10,001+ employees
Real User
Saved us a significant amount of money by rightsizing instances

Pros and Cons

  • "In our organization, optimizing application performance is a continuous process that is beyond human scale. We would not be able to do the number of actions that Turbonomic takes on a daily, weekly, and monthly basis. It is humanly impossible with the little micro adjustments that it can make. That is a huge differentiator. If you just figure each action could take anywhere very conservatively from five to 10 minutes to act upon, then you multiply that out by thousands of actions every month, it is easily something where you could say, "I am saving a couple of FTEs.""
  • "It would be nice for them to have a way to do something with physical machines, but I know that is not their strength Thankfully, the majority of our environment is virtual, but it would be nice to see this type of technology across some other platforms. It would be nice to have capacity planning across physical machines."

What is our primary use case?

We wanted the performance assurance because we have seasonal spikes in our volume. One of the use cases was making sure that we could adjust for seasonal spikes in volume. 

Another use case was taking a look at how we increase our density and make a more effective utilization of the assets that we have on the floor. 

The third use case was the planning, being able to adjust for mergers, acquisitions, divestitures, and quickly being able to separate out the infrastructure required to support that workload.

We just upgraded and are using the latest on-prem version. 

We use Turbonomic for our on-prem hosting: servers, storage, and containers. We also use it in Azure. We are trying to use it across multiple hosting environments. The networking team is not really using it. Instead, I am there from a hosting standpoint, where the main focus is on servers and storage, then the linkage to applications with the resources that they are using.

How has it helped my organization?

It integrates into our other tools that we have been able to stitch together. When I take a look at an infrastructure cluster, I can see what applications are running on it. I can see down to the transaction level who is actually causing a performance constraint. We can then go back to our application teams to get that issue resolved.

When I start to take a look at a cluster level, I can look to see which application is running in that cluster. Then, we can get down into specific transactions. We can then watch to see how workload is trending and identify where we may need to add more hosts into the environment. With our transactions, we use Turbonomic linked into AppDynamics. When it links in and pulls the application data, it also helps us dig down. So, if I see my utilization trending up, then is it something on the infrastructure side or the application side? Is it something the application team needs to address? Or, is it something my infrastructure team can address? This allows us to make fact-based decisions.

In our organization, optimizing application performance is a continuous process that is beyond human scale. We would not be able to do the number of actions that Turbonomic takes on a daily, weekly, and monthly basis. It is humanly impossible with the little micro adjustments that it can make. That is a huge differentiator. If you just figure each action could take anywhere very conservatively from five to 10 minutes to act upon, then you multiply that out by thousands of actions every month, it is easily something where you could say, "I am saving a couple of FTEs."
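
As a rough, back-of-the-envelope version of that math (the monthly action count and hours-per-FTE figures below are assumptions; the minutes per action come from the conservative estimate above):

```python
actions_per_month = 3000      # "thousands of actions every month" (assumed figure)
minutes_per_action = 7.5      # midpoint of the 5-10 minute estimate above
hours_per_fte_month = 160     # roughly one full-time person (assumed)

saved_hours = actions_per_month * minutes_per_action / 60
print(f"{saved_hours:.0f} hours/month ≈ {saved_hours / hours_per_fte_month:.1f} FTEs")
# -> 375 hours/month ≈ 2.3 FTEs, i.e. "a couple of FTEs"
```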

With Windows 2008, whenever we did a large-scale OS upgrade, the process used to be looking at what resources were allocated to each of the applications and server instances and then basically replicating that. With Turbonomic, we have been able to quickly go through, take a look, and say, "Okay, wow. This may have been what was previously allocated to you, but we now realize your utilization doesn't require that level." We are able to actually downsize as we go through and rebuild. This part, the planning aspect, is really good.

One of the things that we completed this year was starting to tag applications so we can pull up more critical applications and take a look at their resource needs. We can have a specific dashboard per critical application.

What is most valuable?

For performance assurance, I love the dynamic resource allocations. We don't have any nuisance performance issues. 

When you take a look at the utilization of our resources, it is great that this solution works both on-prem and in the cloud. We have been able to identify some quick saves in the cloud, and then on-prem, with their algorithm. So, we have been able to go ahead and increase our density by about 35 percent, which has delayed purchases of hardware.

Turbonomic provides specific actions that prevent resource starvation. One of the best features of their algorithm is that it can go through and tell me that a specific server instance or virtual image needs more CPU or memory added, and it can also tell us, "These are the ones that aren't using the resources they have." Then, we can decrease the allocations to those server instances. The nice thing is that we can schedule which of these activities we want Turbonomic to do automatically for us.

Monitoring and thresholds are very reactive, so somebody would have to be sitting there with eyes on glass, taking action. Whereas, with Turbonomic, we now have our thresholds set, and it automatically takes those actions.

The reporting is good.

What needs improvement?

It would be nice for them to have a way to do something with physical machines, but I know that is not their strength. Thankfully, the majority of our environment is virtual, but it would be nice to see this type of technology across some other platforms. It would be nice to have capacity planning across physical machines.

For how long have I used the solution?

Between my two companies, I have been using it now for about four or five years.

What do I think about the stability of the solution?

The stability has been wonderful. We have never had any issues.

What do I think about the scalability of the solution?

The scalability is great. There is no problem with scaling.

There are about a dozen people from engineering, operations, and capacity who log in and use the data to make decisions. It is a hands-off type of product. You only need a couple of key people from the different use case areas to use it.

How are customer service and technical support?

What is really impressive with the Turbonomic team is that after you sign the deal, they don't disappear. In the two and a half years in my current position, Turbonomic has been right there, whether we have an issue, which is very rare, or we are trying to still complete the objectives of the purchase, such as integrating our use cases. The Turbonomic team is very supportive and hands-on with you. I can't say enough about their customer support because it helps drive the value faster. They are always right there working with my team as part of the team.

Turbonomic is a real partner, which is a really good thing. I have been in IT my whole life, decades, and there are way too many vendors that once you make the sale, that's it. You are now at the bottom of their pile because they are chasing the next sale.

Which solution did I use previously and why did I switch?

Before I came to this company, my previous company was using this tool extensively. At my previous job, I had seen the benefits of the tool. When I came over to this company, it was one of the first things that I started to champion.

I have been with the company for three years, and we have used a tool called VMware DRS. We are a heavy VMware shop, and vROps wasn't anywhere near the level of automation we needed. DRS, even though it can do some things automatically, is all based on data pulled the night before. We didn't have anything in the environment that could do real-time automated resource moves the way Turbonomic does.

I think DRS is gone now. The engineering team still uses VMware for a couple of things, simply because that is their preference. vROps is still in the environment, but I would love to get to the point where we can continue showing success with Turbonomic and eventually eliminate vROps.

How was the initial setup?

The initial setup was very straightforward. This is one of the very few tools which we were able to stand up and get it running within weeks. 

It is a very simple product to install, then there are just a couple of configurations to tweak. Then, you are up and running. They literally tell you what you need. It's like, "Here are the requirements: You need X number of virtual images - this level." It has very simple instructions. We probably had it installed in one day, then we had everything reporting within a couple of days. After that, we did the tuning, mapping, and everything else. Within 30 days, we were probably getting useful data out of this tool.

What about the implementation team?

We just worked with Turbonomic. Cisco was our reseller, but they actually provide Turbonomic resources.

We have only two people involved with setup and maintenance. I have one main person with a backup person for him. That is how easy it is to set up and maintain. Our future plans are to migrate to the cloud offering probably later this year. Once we do that, that will free up one person.

The main guy is a Windows Server admin who supports the Turbonomic platform, but this isn't his only job. It is something that just takes up a fraction of his time. Once we go to the cloud offering, then the management of the tool goes back to Turbonomic and we will just be a consumer of the data.

What was our ROI?

When I first put the proposal on the table, we put in the proposal that we would get our payback within three years. We got our payback in 15 months. For example, we went through and increased our density, then we were able to delay the purchase of close to 200 servers.

We are very excited about the fact that it does integrate with ServiceNow, our service management ticketing system. It will go out there, and when it says, "I need to add CPU/memory," then it creates the change ticket for us. So, we can have an automated ticket created and get the approvals in place, then it is automatically executed and the ticket is closed off. This saves my team hundreds of actions every year.
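
For illustration only, here is a minimal sketch of what raising a change request against ServiceNow's Table API can look like. The instance URL, credentials, and field values are placeholders, and the real Turbonomic-to-ServiceNow integration is configured inside the products rather than hand-coded like this.

```python
import requests

INSTANCE = "https://example.service-now.com"   # placeholder instance
AUTH = ("api_user", "api_password")            # placeholder credentials

def open_change_request(summary, description):
    """Create a change request via ServiceNow's Table API (change_request table)."""
    resp = requests.post(
        f"{INSTANCE}/api/now/table/change_request",
        auth=AUTH,
        headers={"Content-Type": "application/json", "Accept": "application/json"},
        json={"short_description": summary, "description": description},
    )
    resp.raise_for_status()
    return resp.json()["result"]["sys_id"]

ticket_id = open_change_request(
    "Resize VM app-db-01: add 2 vCPU",
    "Automated rightsizing action recommended by resource-management tooling.",
)
print("Change ticket:", ticket_id)
```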

When the application starts to see performance degradation, those tickets will go to their queue, but then they will get escalated to me. I can tell you that I have received almost no calls about, "My application is running slow." Before Turbonomic, during the busy season, it seemed like almost every day that I was receiving calls. So, there is definitely a huge drop in, "My performance is running slow," where you would then kind of scramble to find out, "Okay, why is it running slow?"

We use Turbonomic to help optimize our cloud operations and it has reduced our cloud costs. We have been able to identify unattached premium storage, paying for storage that we weren't using. We have also been able to identify instances that were assigned a larger template than was actually needed. So, we were able to then downsize them. This ended up saving us a significant amount of money by rightsizing those instances. 

By increasing our level of density, we have been able to delay hardware purchases. So, we have been able to absorb growth without hardware purchases. Without hardware purchases, we also save money on software licensing.

It has allowed us to redeploy where our resources spend their time, focusing on other projects and high-value activities with the business. There is less firefighting and more project work.

What's my experience with pricing, setup cost, and licensing?

The pricing and licensing are fair. We purchase based on benchmark pricing, which we have been able to get. There are no surprise charges nor hidden fees.

Which other solutions did I evaluate?

We did have to go through and do a comparison of vROps, DRS, and Turbonomic in order for me to get it on board at the company.

The performance assurance and automatic allocations (the automation that comes with it) really drove our decision to go with Turbonomic. They have a level of automation that the competitors don't.

Turbonomic understands the resource relationships at each layer of our environment and the risks to performance for each. That is part of what makes it a key differentiator, especially against something like vROps. Its algorithm is based on the moment: what is being used and what is needed. It will not make an automated move that would cause another issue. Whereas VMware DRS would move stuff based on data it had pulled the night before, which may no longer be valid. At that point, you could move something that needed CPU someplace else where there is now a memory constraint instead of a CPU constraint.

A big deciding factor with Turbonomic was you can set how much trending data that you want to keep, whether it is a 30, 60, 90, 120 days, etc. You can set your trending there, then you can schedule your actions based on utilization over that time frame, e.g., the last 90 days.
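
As a sketch of that kind of lookback-based decision, the snippet below sizes a VM from a percentile of its daily peaks over the retained window. The percentile, headroom factor, and sample history are assumptions, not Turbonomic's documented method.

```python
import statistics

def suggested_vcpus(daily_peak_cpu_pct, allocated_vcpus, lookback_days=90):
    """Suggest a vCPU count from utilization over the retained trending window."""
    window = daily_peak_cpu_pct[-lookback_days:]
    p95 = statistics.quantiles(window, n=20)[18]                 # ~95th percentile
    needed = max(1, round(allocated_vcpus * (p95 / 100) * 1.2))  # keep 20% headroom
    return min(needed, allocated_vcpus)                          # only suggest downsizing here

# Hypothetical history: a VM with 8 vCPUs whose daily peak never passes ~40% CPU.
history = [35 + (day % 5) for day in range(120)]
print(suggested_vcpus(history, allocated_vcpus=8))               # -> 4
```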

What other advice do I have?

We are using it mainly to manage the resource utilization for our virtual environment. We are using it for project planning, like the Windows 2008 upgrade with the infrastructure that needs to be built out for that. We are using it to manage our cloud expenses and the utilization within the cloud, which then drives cost reductions there. In the last few months, we started to do the application tagging so we can start to get down to specific application dashboards. This year, we want to start to drive more of the automation to reclaim unused resources, so I can then go ahead and delay further purchases. Our plan is to continue driving up the density of the environment.

Right now, certain tasks get done automatically. We are working on the scheduling piece, using change tickets, because we wanted to ensure there was an audit trail, so we worked out an interface with our ticketing system. We are getting ready to do that. Adding resources throughout the business day is no big deal, but we want to make sure we don't remove any resources during the business day. We want to do that during a maintenance window to ensure there is no business impact. It is just being ultraconservative and sensitive to the business's needs. As they get more comfortable, we will continue ratcheting up the level of automation that we use.

Everything is very specific with Turbonomic. We can take manual action throughout the day, if we see that it is necessary. We can have Turbonomic take certain specific actions automatically, then we can decide which ones we want to actually schedule so we can link them to approve change tickets.

It will show application metrics and estimate the impact a suggested action would have on infrastructure resource utilization. I don't know if it will get down into transaction-level performance. I think the new release does that, but we haven't tested that piece out. This is the planning piece, e.g., if I were to remove CPU, what would the performance and utilization look like? Or, in the case of some stuff I was recently looking at, if I were to add CPU, what does that do to the overall utilization metrics? You can then decide whether you want to take that action.

The biggest lesson learned is probably that people are afraid of change. Our biggest hurdle was getting people to put their faith in automation instead of "we have always done it this way." We have always been oversized so the application teams could be sure we never run out of resources, but they needed to be open to change. My favorite analogy with them is, "I understand it is hard because instead of you telling me, 'I want this many CPUs or this much memory,' I'm telling you to trust me." It's like the gas gauge in your car. Don't look at the gas gauge when you get in your car; just trust me that I have put enough gas in the car for you to get where you are going. It's a very difficult mindset for application teams who are used to saying, "Okay, I have eight CPUs over here. Don't touch them." But Turbonomic actually gives us the data to show them, "You have eight CPUs over here, and you never get above 40 percent utilization, so you are costing us money." So, it is fact-based decision-making.

My advice is, "Go for it." Don't let other teams hold you back because this is how they have always done it. Trust the Turbonomic team because they are great at being able to implement, and they are ready to move fast. Make sure you get all the right stakeholders, because we have had to deal with everything from:

  • Engineering
  • How do we do an internal chargeback?
  • The application team's perception that I can't run with anything less than this. 

Get ready to be able to put some facts on the table and lean on the Turbonomic team because they are just phenomenal at helping put together business cases and doing the implementation. However, also get ready to tell your people to go for it. Don't be saddled with, "This is how we've always done it," because technology changes. I have seen nothing in my infrastructure career that was as great as this product when it comes to resource utilization.

I would give them a 10 (out of 10). The tool does what it says, and the Turbonomic people don't sell it to you, then disappear. They are always there and a pleasure to work with.

Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Sam Beckett
Senior Cloud Engineer at O.C. Tanner Co.
Real User
The cost savings is significant, especially with our AWS computing

Pros and Cons

  • "Turbonomic can show us if we're not using some of our storage volumes efficiently in AWS. For example, if we've over-provisioned one of our virtual machines to have dedicated IOPs that it doesn't need, Turbonomic will detect that and tell us."
  • "The deployment process is a little tricky. It wasn't hard for me because I have pretty in-depth knowledge of Kubernetes, and their software runs on Kubernetes. To deploy it or upgrade it, you have to be able to follow steps and use the Kubernetes command line, or you'll need someone to come in and do it for you."

What is our primary use case?

We have a hybrid cloud setup that includes some on-prem resources, and then we have AWS as our primary cloud provider. We have one or two resources on the Google Cloud Platform, but we don't target those with Turbonomic. Our company has a couple of different teams using Turbonomic. Our on-premise VMware virtualization and Windows group use Turbonomic to manage our on-prem resources. They use it to make sure that they're the correct size. 

I'm on the cloud engineering team, and I use it in a unique way. We use it for right-sizing VMs in AWS. We're using it to improve performance efficiency in our Kubernetes containers and make sure the requests are in line with what they should be. If an application has way more memory allocated than it needs, Turbonomic helps us decide to scale that back.

We have a platform that we use for our internal deployments. I use the Turbonomic API to get data and transform it for use in our platform. I've developed APIs that sit between our internal platform and Turbonomic. When our developers create and release code, these APIs allow them to take advantage of Turbonomic without using it directly. It's built into our platform, so they can benefit from the performance improvements Turbonomic recommends without needing access to Turbonomic itself.
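
A minimal sketch of that pattern, pulling recommended actions over REST and reshaping them for another system, might look like the following. The endpoint path, parameters, and field names are placeholders rather than the actual Turbonomic API, so the real calls and login flow need to come from the product's API documentation.

```python
import requests

BASE = "https://turbonomic.example.internal"   # placeholder appliance URL

def fetch_pending_actions(session):
    """Pull pending actions and reshape them for an internal platform."""
    resp = session.get(f"{BASE}/api/actions", params={"state": "PENDING"})
    resp.raise_for_status()
    return [
        {
            "target": action.get("target_name"),
            "type": action.get("action_type"),
            "estimated_savings": action.get("savings", 0),
        }
        for action in resp.json()
    ]

with requests.Session() as session:
    # session.post(f"{BASE}/api/login", data={...})  # authenticate first (omitted)
    for item in fetch_pending_actions(session):
        print(item)
```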

How has it helped my organization?

Cost savings is a significant benefit, especially with our AWS computing. It cuts down on human error. For example, sometimes someone will spin up some resources in AWS and then forget about it. We can go into Turbonomic's reporting and see that a virtual machine is idle, so you might want to scale it down. If it's not being used, you should delete it. Then you can save X amount of money. Turbonomic will automatically apply those things for you and tell you how much you're going to save. It's integrated with your AWS billing report and everything; it can give you real data. You click a button, and it'll apply all the functions for you, so you save a bunch of money. I would say that that's a huge part of it for us.

We have a couple of different use cases, and it was essential for us to meet all of them without the need to go to several different vendors. Turbonomic can manage on-premises and cloud-native resources all in the same place, providing direct cost benefits through our cloud providers and our on-prem hardware storage.

Turbonomic has also helped us improve our efficiency as an organization. We can better understand the actual cost of our applications and how to optimize, so we've become more efficient and cut down some of the extra expenses. It's also useful for capacity planning. We can understand how much resources we're using right now and how much we'll need when we bring on new clients for our software solution.

Turbonomic has helped us manage multiple facets of our business-critical functions. Our company provides a software platform for our clients. They log into a portal that's hosted either in the cloud or on-premises. Turbonomic can monitor those applications as well as the underlying storage and computing resources. It's monitoring the applications themselves, the production environments, development, and QA for future changes. We can understand how changes are going to impact our production.

It depends on the system we're looking at. We have a change-management process for our business-critical things and our production resources. With those, we either schedule a change or manually execute the change during a planned maintenance window. Our change management board has approved automation for other functions, like development and QA-type resources that aren't in production. We can automate those kinds of things all the time. I know that our storage team automates a ton of tasks, but I'm not exactly sure which ones. I assume they wouldn't be automating production resources either.

We follow some pretty strict change management policies. Applying some of these changes requires restarting the affected process. We would either do it in a change management window that we schedule through Turbonomic or apply it manually.

Turbonomic's application-driven prioritization helps us identify where risks are coming from while proactively preventing performance degradation. It's nice to be able to avoid problems before they happen. I don't have to wake up in the middle of the night and respond to some alert because one of our applications ran out of memory, and people couldn't use our product. It's helped me get some sleep. Our storage teams are super stoked about that, too, because they had all sorts of alarms going off all the time, and they set up a ton of automation with Turbonomic to handle that all for them. We've seen a significant reduction in open tickets for application issues.

What is most valuable?

Turbonomic can show us if we're not using some of our storage volumes efficiently in AWS. For example, if we've over-provisioned one of our virtual machines to have dedicated IOPs that it doesn't need, Turbonomic will detect that and tell us. You can save like a thousand bucks a month by switching the storage class. With a click of a button, it automatically makes the changes for you, and you can go in and save a ton of money on AWS with it. That's one of the primary ways I've used it. 

Kubernetes integration is excellent. Turbonomic helps us right-size deployments and replica sets. They've come a long way since I started here. I've been working with the team that uses Kubernetes or develops the Kubernetes integration, and it's been fantastic. Turbonomic helps prevent resource starvation too. Inside the console, there's a little graph that tells you what your application has been doing over the last week. You know that you need to take action right now before you run out of CPU or memory and your application starts to suffer. 
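
To illustrate the kind of request-versus-usage comparison being described (this is not Turbonomic's implementation), a sketch with the Kubernetes Python client and made-up observed peaks might look like this:

```python
from kubernetes import client, config

# Hypothetical observed peak memory per deployment, e.g. from a metrics pipeline.
observed_peak_mi = {"checkout-api": 300, "report-worker": 1800}

config.load_kube_config()
apps = client.AppsV1Api()

for dep in apps.list_namespaced_deployment("default").items:
    for container in dep.spec.template.spec.containers:
        requests_ = (container.resources and container.resources.requests) or {}
        request = requests_.get("memory", "")
        if not request.endswith("Mi"):
            continue                              # keep the unit parsing trivial here
        requested_mi = int(request[:-2])
        peak = observed_peak_mi.get(dep.metadata.name)
        if peak and peak < 0.5 * requested_mi:
            print(f"{dep.metadata.name}: requests {requested_mi}Mi, peaks at "
                  f"{peak}Mi -> candidate to lower the request")
```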

With Turbonomic, you have everything in one place. There aren't a bunch of different things to worry about or manage. It helps you manage full-stack applications as well. It's a challenge for many of our developers to understand what resources their application needs. We can automate that. Turbonomic processes all of the data, makes intelligent decisions, and automatically applies changes to the application. These are problems that are difficult for humans to solve because of the complexity of taking into account all these variables and determining how much memory to give an application. If you don't make the right decision, Turbonomic can discover that for you and fix it.

You can automate all of these functions. It tracks your application performance, and you can automate everything or have it wait for your input. It'll do it in real-time asynchronously in the background. Turbonomic can predict the impact of any given action, and that's one of the things I like about it. There's a little graph that pops up when you're about to do something. It shows you the history and predicts the future impact of what will happen when you click the button. For example, it can tell you that your utilization of the resource allocation will drop by this much, and you're going to be at about X percent utilization.
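
The projection itself is simple arithmetic. For example, with assumed numbers:

```python
def utilization_pct(used_gb, allocated_gb):
    """Utilization if usage stays flat after the allocation changes."""
    return 100 * used_gb / allocated_gb

# A VM using 6 GB of a 16 GB memory allocation, being scaled down to 8 GB.
print(f"before: {utilization_pct(6, 16):.0f}%")   # before: 38%
print(f"after:  {utilization_pct(6, 8):.0f}%")    # after:  75%
```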

It's reasonably accurate. I haven't had a situation where it told me that everything would be okay, but it didn't work when I applied the change. So far, everything has been smooth sailing. Turbonomic can tell you how everything is currently performing, but we use other tools for that kind of monitoring. It can show you how your system is currently acting. If some things don't need action at the moment, it will tell you why. For example, it'll say you have this much memory allocated, and you're right on target, so you don't need to do anything. 

It's harder to use a monitoring tool to understand how your application performs over time. It depends on the monitoring tool, but often you have to set it up to ingest all this data and pick the right things to look at. Turbonomic does all that for you in the background. You can look at a suggestion, for example, if you need to up your memory allocation by a certain amount — and see all the data Turbonomic has gathered to make that decision. With a standard monitoring tool, you have to make that decision yourself. You're the one ingesting all the data. 

A monitoring tool is probably better if I want to see what my application is doing right this instant. As far as thresholds go, I think that's something I would probably use monitoring tools for. I would set it up to alert me when my resource allocation or memory usage exceeds 80 percent. I haven't used Turbonomic to do things like that. It's more forward-looking. When something is happening, like my application running low on resources at startup, I'll hop on Turbonomic to see if there's a solution that I should apply.

What needs improvement?

It's tough to say how they could improve. They've done a lot better with their Kubernetes integration. If you'd asked me a year and a half ago, I would have said that their Kubernetes integration needed work. They started with more of a focus on on-prem VMware virtual machines. I think it was called VMTurbo at one point. Their main goal was to help you with those virtual machines.

Now they've pivoted to also supporting containers, cloud-native tools, and cloud resources. At first, it was a little hard because they had terminology that didn't translate to cloud-native applications, to the way that Kubernetes deploys things versus a virtual machine.

I was sometimes left wondering whether something referred to a Kubernetes resource, but it's come a long way now. I think they've improved their UX as far as Kubernetes goes. I'm interested in seeing what they do in the future and how they progress with the Kubernetes integration. I would say that's something they've improved on a lot.

For how long have I used the solution?

I've been at my current company for a little over three years, and I believe we started looking into Turbonomic around that time. I would say I've been using it for two to three years.

What do I think about the stability of the solution?

I have never had Turbonomic go down or had a problem with it not being available when I need it, so I would say stability is great.

What do I think about the scalability of the solution?

I've never had an issue where I would need to scale Turbonomic to handle more resources. Knowing what I know about how the solution is deployed, I would say it's scalable since it's built on Kubernetes. You can install the Kubernetes cluster and scale up instantly. Turbonomic has a micro-service architecture, so it appears to be scalable on the backend. I would say it's very scalable, but I haven't had any direct experience with scaling it myself. 

We're using it fairly extensively, but we don't have a ton of people working with it right now. Every relevant team uses it, including my team, cloud engineering, storage, and networking groups. In total, that's around 10 or 15 people using it. We are planning to increase usage. We're working on some new applications for Turbonomic, like integrating some of the data from Turbonomic into our platform as a service. 

I've also worked with some of their engineers on this. It's not necessarily things that I wouldn't figure out on my own, but they've helped to smooth the process along. Every once in a while, one of my contacts at Turbonomic lets us know a new feature is coming and asks us if we want to beta test it. We install it, update to the beta version, then go through and take a look. Some of those things would be cool, like a scaling solution with Istio, a service mesh tool for Kubernetes.

I want to delve into scaling applications horizontally with Turbonomic based on response times and things like that. It would be nice to be able to automate more actions. Right now, I've integrated this into our platform, but in the future, we want to automate some of this more, especially for non-production resources. For example, if a developer decides to spin up a development application using way too many resources, we can automatically scale that down. That's the problem Turbonomic is trying to solve. It's tough to know how much you need. 

How are customer service and support?

I rate Turbonomic support eight out of 10. Their support team has been good. We haven't had many problems, but when we do, they respond quickly. Whenever I've had to reach out for anything, they've been super-responsive, and they'll hop on Zoom call if we need them to troubleshoot something. 

How would you rate customer service and support?

Positive

How was the initial setup?

The deployment process is a little tricky. It wasn't hard for me because I have pretty in-depth knowledge of Kubernetes, and their software runs on Kubernetes. To deploy it or upgrade it, you have to be able to follow steps and use the Kubernetes command line, or you'll need someone to come in and do it for you. We're deploying it to use with our OVA in our VMware environment on-premise, which is a little rough. It's not terrible, and I've had way worse software vendors, but I would say there's probably a little bit of room for improvement as far as upgrades go. We have to schedule a window and then make sure everything's working. With other on-prem services, you just run one command, and everything updates for you. Turbonomic upgrades are a little more involved.

When the guy on our side was going through the install process and setting all of this up, he had to get into the virtual machine environment, do a bunch of stuff, download some things, and then start running scripts. On top of that, he was trying to run Kubernetes commands, and another guy was helping install them. So it felt a little clunky. I don't know how you would improve that unless it was a complete software-as-a-service solution or a simple installer that you download and run.

The total deployment time depends on some different factors. We've deployed Turbonomic a couple of times. When they come out with a new version, we have to do a complete redeployment. I wasn't involved in the initial setup, so it's hard for me to say. But it took a couple of days to deploy the new version, plus a couple of hour-long sessions. It was around 15 hours total. I remember we tried to download a file, and it took two hours. I think that was because of the internet connection on our side. It's hard for me to quantify it. 

What was our ROI?

We've seen a great return on investment. Then again, I'm not sure how much we initially paid for it anyway, but we went through renegotiation. I don't have the numbers, but we bought some additional licenses, so we just expanded our use a little bit two or three weeks ago. I'd say that we got a good return on our investment, and we're excited about expanding our use in the future too. 

It has reduced our capital and operational expenditures. It's hard to estimate it, but the cloud savings have been significant. I can't give a percentage. However, there have been multiple times when I've applied something, and it has cut a considerable portion of our monthly spending on AWS — over 5 percent. Sometimes it's just a little, but all of those actions add up over time. If I apply a bunch of changes at once, it can add up. I can say we reduced 5 percent of our monthly spending just once, and that was pretty huge for us because we spent a ton on AWS resources. That was one time I can remember, but I'm sure it's been more than that, especially our other teams using it. We've also seen some savings in human resources costs, especially on the other team. They're not dealing with alarms going off all day anymore.

What other advice do I have?

I rate Turbonomic 10 out of 10. For anyone thinking about implementing Turbonomic, I would suggest having someone familiar with Kubernetes — the more familiar, the better. You need someone who knows how to run a Kubernetes command to see what's happening with the state of the Turbonomic deployment if necessary. If you've got someone who knows how to use Kubernetes, include them in the deployment process.

Which deployment model are you using for this solution?

On-premises
Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Chris Bannoura
Sr System Engineer at Liquidity Services
Real User
Gives engineers the ability to launch environments without having to create a bunch of changes down the road

Pros and Cons

  • "Turbonomic has helped optimize cloud operations and reduced our cloud costs significantly. Overall, we are at about 40 percent savings, and we spend about three million a year just in Azure. It reduces the size of the VMs, putting them into the right template for usage. People don't realize that you don't have to future-proof a virtual machine in Azure. You just need to build it for today. As the business or service grows, you can scale up or out. About 90 percent of all the costs that we've reduced has been from sizing machines appropriately."
  • "I would love to see Turbonomic analyze backup data. We have had people in the past put servers into daily full backups with seven-year retention and where the disk size is two terabytes. So, every single day, there is a two terabyte snapshot put into a Blob somewhere. I would love to see Turbonomic say, "Here are all your backups along with the age of them," to help us manage the savings by not having us spend so much on the storage in Azure. That would be huge."

What is our primary use case?

There have been quite a few use cases, even some that were probably unintended. 

  1. Reduce our footprint and cost. It handled that perfectly. 
  2. Handle our RI purchasing, which is what we are in the process of doing now. 
  3. Automating shutdowns and startups so we can turn machines off when they are not being used. We have several machines in this category, and we are going to continue adding more once we go through some finalization. We are also using it to delete unattached volumes and to manage databases. (A rough sketch of what the shutdown piece can look like follows this list.)
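
Here is a rough sketch of that shutdown automation with the Azure Python SDK; the subscription ID and tag are placeholders, and in our case the schedule is driven by Turbonomic policies rather than a standalone script like this.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"   # placeholder

client = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

def deallocate_tagged_vms(tag_key="shutdown-at-night", tag_value="true"):
    """Deallocate (stop compute billing for) VMs carrying the shutdown tag."""
    for vm in client.virtual_machines.list_all():
        if (vm.tags or {}).get(tag_key) == tag_value:
            resource_group = vm.id.split("/")[4]           # .../resourceGroups/<rg>/...
            client.virtual_machines.begin_deallocate(resource_group, vm.name)
            print(f"Deallocating {vm.name}")

deallocate_tagged_vms()
```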

The unintended use case was that we started looking at what else could we save. We realized that we had a ton of data in Blob Storage for backups. Turbonomic can't see that, but it brought it to light because we wanted to find a way to look at our overall spending. So, we have saved a bunch of money by reducing that footprint. 

It's on-prem, but we are in the process of moving into the cloud.

How has it helped my organization?

It helped reshape the organization. We used to have around 46 admins who could create virtual machines in Azure, and it was very difficult to manage them. It became exceedingly expensive, getting to a point where it received attention from the president of the company. Installing Turbonomic gave us sort of a governor over our cloud environment, where people started to really understand that everything you do has a cost. Now, we have like four or five admins. If someone wants a VM, we run a plan based on available RIs, the cheapest cost, and whether it can go into shutdown mode at night. This has helped us create a standard of implementation going forward that prevents us from spending the money in the first place. While I'm sure Turbonomic would say that's their overall goal, we just never really saw it that way.

I didn't know it was going to change how we submit tickets and work orders for adding service to the environment. That has all changed for the better.

I know when we are moving workloads from on-prem to the cloud, we have not been utilizing the planning in Turbonomic as much as we should. I think that is changing. People who have access to Turbonomic are now realizing, "Here are the specs of my machine. Turbonomic will put it in the cheapest compute resource template that it can find." 

This is starting to change how we even consider building something. We don't ask the end user, "What do you need?" Instead, we say, "What do you want to host?" Then, we look at other virtual machines out there. If we want to host a website, then it will be X number of users per day, month, or year. We look at some of our other marketplaces, then we make a plan to see what Turbonomic recommends, as it takes into account the disk, IOPS, RI, CPU, RAM, etc. This helps us prevent the spend in the first place. The idea where we build then save on the backend, that is what's changing. We are no longer just building blindly, then going to Turbonomic, and saying, "Okay, fix what we built." We are saying, "Turbonomic, tell us what we should build." I would rather do it when we build a machine than have to take a website down and schedule maintenance at two in the morning on Saturday when I am at the bar.

It gives people like me, who are engineers, the ability to launch environments without having to create a bunch of changes down the road. I think that's a wake-up call for a lot of people at the company, because we used to just build what we always built, spend $5,000 more than we should, and then have to put in change tickets to scale it all back. I don't have to do that anymore.

If we see a system or service increasing usage, we can then anticipate building it out bigger to manage traffic and workflow. With version 8, it will almost be like having a tool to manage our machines actively versus just actions.

What is most valuable?

The Executive Dashboards are probably the best way to showcase what we are spending, what we can save, and how to grease the wheel to make it happen. A lot of times when we say, "Hey, we're spending too much," executives just go, "Yeah, well, it's just the cost of doing business." However, when they see a report that shows, "You can save $8,000 a month," and it actually delivers those results, that is really powerful for upper management because they are very non-technical. They just know this thing exists, we have to pay for it, and it's critical to have, but they don't understand the nuts and bolts of it. The Executive Dashboards are probably the most beneficial overall for the business. However, as a techie guy, I don't think that they are the best for me.

Personally, the most valuable feature is the organization of it all, e.g., being able to drill down into any category and feed the maps. It helps a lot by giving a visual representation of what is dependent on what. With the maps, you can drill into the different sections of the topology and find out what is what.

I like the tool overall from top to bottom. Anything that can save money and preserve productivity is going to get an A-plus in my book. 

Turbonomic provides specific actions that prevent resource starvation. For example, if we see a machine that is being overly utilized, it recommends increasing its disk space, size, RAM, or processor.

It provides a proactive approach to avoiding performance degradation by scanning the environment every 10 minutes. It looks at 30 days' worth of metrics per node. So, if it sees an upward trend on a machine, I will get an alert that says, "You may want to scale up to accommodate the needs of this machine." However, it's not super fast. For example, it's not as fast as if I set a virtual machine to scale up or out as needed on the fly, but it does give us an overview of trends that we can plan for.
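
As a rough illustration of the kind of trend analysis being described (not Turbonomic's actual algorithm), the sketch below fits a simple line to 30 days of per-node utilization samples and flags the node when the projected value crosses a threshold. The sample data, threshold, and projection window are assumptions made for the example.

# A rough sketch of trend-based scaling alerts, assuming 30 days of daily
# CPU-utilization samples per node. This only illustrates the idea described
# above; it is not Turbonomic's actual algorithm.
from statistics import mean

def projected_utilization(samples, days_ahead=7):
    """Fit a simple least-squares line to the samples and project it forward."""
    n = len(samples)
    xs = list(range(n))
    x_bar, y_bar = mean(xs), mean(samples)
    slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, samples)) / sum(
        (x - x_bar) ** 2 for x in xs
    )
    intercept = y_bar - slope * x_bar
    return slope * (n - 1 + days_ahead) + intercept

def needs_scale_up(samples, threshold=80.0):
    """Alert if the projected utilization (percent) crosses the threshold."""
    return projected_utilization(samples) >= threshold

# Example: a node trending upward from ~50% CPU over 30 days.
cpu_history = [50 + i * 1.2 for i in range(30)]
if needs_scale_up(cpu_history):
    print("You may want to scale up to accommodate the needs of this machine.")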

What needs improvement?

There are some issues on that point of it providing us with a single platform that manages the full application stack. I think version 8 is going to solve a lot of those issues. With Turbonomic version 6, nothing gets deleted. So, if I create a VM and then destroy the VM, Microsoft doesn't delete the disk. You have to go in and do that manually. Turbonomic will let you know that the disk is there and needs to be deleted, but it doesn't actually delete it for you. The inherent problem with that is, it will say, "This disk is costing you $200 a month." Then, I go in and delete it. Since this is being done outside of the Turbonomic environment, that savings isn't calculated in the overall savings, because it's an action that was taken outside of Turbonomic. I believe with Turbonomic 8, that doesn't happen anymore.

We are still saving the money, but we can't show it as easily. We have to take a screenshot of, "Hey, you're spending this much on a disk that isn't needed." We then take a screenshot after, and say, "Here is what you're spending your money on," and then do a subtraction to figure it out. So, there are some limitations. 

It is the same with the databases. If a database needs to be scaled up or scaled down, Turbonomic recommends an action. That has to be done manually outside of the Turbonomic environment. Those changes are also not calculated in the savings. So, it doesn't handle the stack 100 percent. However, with version 8 coming out, all of that will change.

I would love to see Turbonomic analyze backup data. We have had people in the past put servers into daily full backups with seven-year retention and where the disk size is two terabytes. So, every single day, there is a two terabyte snapshot put into a Blob somewhere. I would love to see Turbonomic say, "Here are all your backups along with the age of them," to help us manage the savings by not having us spend so much on the storage in Azure. That would be huge.

Resources, like IP addresses, sit there not being used, for example test IP addresses. It is the same with any of the devices that you would normally see attached to a server resource group, such as IP addresses, network cards, etc. Public IP addresses cost about $15 a month, so if you don't have a whole lot of money and you have a hundred public IP addresses sitting there not being used, you're talking $1,500 a month, or $18,000 a year. That becomes quite a big chunk of money. I know that Turbonomic is looking at the lowest-hanging fruit, and a $15-a-month saving is not something worth developing for on its own, but I would love to see Turbonomic manage Azure fully versus just certain components.
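
For context, the arithmetic works out as in the small sketch below; the $15-per-IP figure is the one assumed in this review, and actual Azure public IP pricing varies by SKU and region.

# Back-of-the-envelope math for idle public IPs, using the $15/month figure
# assumed in the review (actual pricing varies by SKU and region).
unused_public_ips = 100
cost_per_ip_per_month = 15  # USD, assumption from the review
monthly_waste = unused_public_ips * cost_per_ip_per_month
print(f"Monthly waste: ${monthly_waste:,}")       # $1,500
print(f"Annual waste:  ${monthly_waste * 12:,}")  # $18,000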

One thing that has always been a bit troublesome is that we want to look at lifetime savings. So, we want to say, "Okay, we installed this appliance in October 2018. We want to know how much money we have saved from 2018 until now." The data is in there; it is just not easy to get to. You have to call an API, which dumps JSON data. Then, you have to convert that to comma-separated values first. After that, you can open an Excel spreadsheet, which has hundreds of rows and columns. You can find the data that you want and get to it, but it is just not easy. However, I believe there is a fix in version 8 to solve this problem.
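
The JSON-to-CSV step can be scripted rather than done by hand. Below is a minimal Python sketch that assumes the API response has already been saved to a local file and that each record is a flat JSON object; the file names shown are hypothetical.

# Minimal sketch: flatten a saved JSON export into a CSV that opens in Excel.
# Assumes the export is a list of flat objects; file names are hypothetical.
import csv
import json

with open("turbonomic_savings_export.json") as f:   # hypothetical export file
    records = json.load(f)

# Use the union of all keys as the CSV header, so uneven records still fit.
fieldnames = sorted({key for record in records for key in record})

with open("savings_export.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(records)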

When we switch to version 8, we can't upgrade our appliance, because it's a new instance. What that means is we will lose all our historical data. This is a bummer for us because this company likes to look at lifetime savings. It means I have to keep my old appliance online, even though we're not using it, just for that data, and I can't import that data into the new appliance. That is kind of a big setback for us. I don't know how other companies are handling it, but I know I will need to keep that old appliance online for about three years. It is unfortunate, but I see what Turbonomic did. They gave us so many new bells and whistles that they probably think people aren't going to care, because there are so many more savings to be had. However, in our particular environment, people like to see lifetime savings. That puts a damper on things, because now I need to go back to the old appliance, pull the reports using an API in a messy way, and then go to the new appliance. I don't even know what I am going to get from that. I don't know if it's going to be an Excel spreadsheet or just a dashboard, and then I will somehow have to combine the two. While we haven't experienced it yet, when we do upgrade, we'll experience that problem. We know it is coming.

For how long have I used the solution?

About two years.

What do I think about the scalability of the solution?

Everything is manual. We don't automate anything, only because our environment isn't that big. If we were to set up automation for all the groups, it would actually take more time than just going in and clicking go on each of the items after I have submitted a change request. We have about 180 VMs in Azure, which is not quite big enough to automate.

What was our ROI?

We broke even after year two. We definitely got our return on investment, but I think there is a lot more to come.

Turbonomic has helped optimize cloud operations and reduced our cloud costs significantly. Overall, we are at about 40 percent savings, and we spend about three million a year just in Azure. It reduces the size of the VMs, putting them into the right template for their usage. People don't realize that you don't have to future-proof a virtual machine in Azure. You just need to build it for today. As the business or service grows, you can scale up or out. About 90 percent of all the costs that we've reduced have come from sizing machines appropriately.

The solution has absolutely helped reduce our IT-related CapEx and OpEx. The money that we have saved by minimizing our costs in the cloud allows us to spend more in the cloud versus buying physical hardware. At one point, we had three data centers and approximately 18 offices with servers in them (all VMware). We are now down to three offices that have servers, and a total of five complete servers. That has all been directly related to our savings in Azure and the ability to continue building without exceeding what we have previously spent. We budget three million a year. If I can shave 30 to 40 percent off of that, then we can build 30 to 40 percent bigger in Azure.

Turbonomic has saved a lot of the human resource time and cost involved in monitoring and optimizing our estate, largely by letting us get rid of physical hardware. Because everything is hosted in the cloud, we don't have to deal with failed disks or power outages. We had servers in 18 offices that required physical contact to replace a disk or troubleshoot a network connection. We had power issues, air conditioning issues, and network issues, as well as having an ISP drop and then the backup ISP drop. I don't even know if I can calculate how much we've been able to save by moving most of that to Azure. We still have outages in Azure, but Azure is way more stable than any physical environment we could build, by a long shot.

What's my experience with pricing, setup cost, and licensing?

I know there have been some issues with the billing, when the numbers were first proposed, as to how much we would save. There was a huge miscommunication on our part. Turbonomic was led to believe that we could optimize our AWS footprint, because we didn't know we couldn't. So, we were promised savings of $750,000. Then, when we came to implement Turbonomic, the developers in AWS said, "Absolutely not. You're not putting that in our environment. We can't scale down anything because they coded it."

Our AWS environment is a legacy environment. It has all these old applications, where all the developers who have made it are no longer with the company. Those applications generate a ton of money for us. So, if one breaks, we are really in trouble and they didn't want to have to deal with an environment that was changing and couldn't be supported. That number went from $750,000 to about $450,000. However, that wasn't Turbonomic's fault.

Which other solutions did I evaluate?

We have monitoring tools out there that can tell us the things that Turbonomic can tell us. I think the difference is the ability to just click a button and have it happen, versus having to do it manually, as well as tracking the costs. While we could use a monitoring tool, we wouldn't be able to track our costs, and that has had a big impact on the company because we've saved a lot of money. So, if someone said, "Hey, we've got this monitoring tool, and we want you to decide between Turbonomic and this monitoring tool," I wouldn't even look at the monitoring tool. I would say, "We are going to stay with Turbonomic."

What other advice do I have?

We are installing the Kubernetes version of Turbonomic now. Then, it will be able to see application issues when they come up. Once we transition to Turbonomic version 8, we will be able to see the application side of things, which we were not able to see before.

Application performance wasn't even something we considered until Turbonomic 8 was announced and revealed to us. This will open a whole new door for us in terms of savings that we probably never even considered in the past.

I am pretty impartial to Turbonomic. I have not used anything to optimize cloud previously, but I'm going to base my rating solely on the support that we have received, the engagements that we had, the attention to detail, and the overall feel of the company and the interactions in the software. I would rate it as a 10 (out of 10).

Which deployment model are you using for this solution?

On-premises
Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Todd Winkler
Principal Engineer at an insurance company with 10,001+ employees
Real User
Top 20
Lets us take a good look at our environment and decide how we will size our workloads into new areas

Pros and Cons

  • "It has automated a lot of things. We have saved 30 to 35 percent in human resource time and cost, which is pretty substantial. We don't have a big workforce here, so we have to use all the automation we can get."
  • "There are a few things that we did notice. It does kind of seem to run away from itself a little bit. It does seem to have a mind of its own sometimes. It goes out there and just kind of goes crazy. There needs to be something that kind of throttles things back a little bit. I have personally seen where we've been working on things, then pulled servers out of the VMware cluster and found that Turbonomic was still trying to ship resources to and from that node. So, there has to be some kind of throttling or ability for it to not be so buggy in that area. Because we've pulled nodes out of a cluster into maintenance mode, then brought it back up, and it tried to put workloads on that outside of a cluster. There may be something that is available for this, but it seems very kludgy to me."

What is our primary use case?

Currently, we're doing migrations from older versions of Windows, both in the Azure Cloud and on-prem in our VMware vCenters. We use this tool to do comparisons between the current and future workloads and what they would look like, based on usage. So, it is kind of a rightsizing exercise, either downsizing or upsizing, depending on the requirements. We just put all that information into Turbonomic, and it builds us out a new VM, exactly the size that we need, based on the trending and analysis. You can also put in some factors, saying, "Look, it was Windows 2008, and we're going to Windows 2019," or, "We're going to grow the database by X amount." This tool helps you do that analysis so that you get the right size right out-of-the-box. We love that.
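
As a rough illustration of that kind of growth-adjusted sizing (not Turbonomic's actual method), the sketch below sizes for observed peak usage plus a stated growth factor and headroom, then picks the cheapest template that fits. The numbers, template names, and prices are made-up assumptions.

# Minimal sketch of growth-adjusted rightsizing, in the spirit of the workflow
# described above. Observed peaks, growth factor, and the template list are
# all illustrative assumptions, not Turbonomic's catalog or algorithm.
def required_capacity(observed_peak, growth_factor=0.25, headroom=0.15):
    """Size for today's peak, expected growth, and a safety margin."""
    return observed_peak * (1 + growth_factor) * (1 + headroom)

def pick_template(required_vcpu, required_gb, templates):
    """Return the cheapest template that satisfies both requirements."""
    candidates = [t for t in templates
                  if t["vcpu"] >= required_vcpu and t["ram_gb"] >= required_gb]
    return min(candidates, key=lambda t: t["monthly_cost"]) if candidates else None

templates = [  # hypothetical sizes and prices
    {"name": "small",  "vcpu": 2, "ram_gb": 8,  "monthly_cost": 70},
    {"name": "medium", "vcpu": 4, "ram_gb": 16, "monthly_cost": 140},
    {"name": "large",  "vcpu": 8, "ram_gb": 32, "monthly_cost": 280},
]

# Example: a Windows 2008 VM peaking at 3 vCPU / 10 GB, with 25% growth expected.
print(pick_template(required_capacity(3), required_capacity(10), templates))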

I oversee a lot of stuff, so I don't really get an opportunity to go in there to point and click. We have people who do that.

It is doing Azure Cloud and VMware. Turbonomic understands the resource relationships at each of these layers and the risks to performance for each. You can compartmentalize your most critical workloads to make sure that they are getting the required resources so the business can continue to run, especially when we get hit by a lot of work at once. 

How has it helped my organization?

The solution provides us with a single platform that manages the full application stack. In our decision to go with this solution, this was critical. We had so many vCenters and physical clusters out there. We had virtual and physical machines all over the place. Turbonomic was the way we were able to centralize all our vCenters and get a good picture of what is going on in the environment. It was all over the place without it, so there was no way that we could centralize and work on getting off of some of the older hardware platforms that we were on and start moving to converged, then eventually hyper-converged. This tool allowed us to take a good look at our environment and decide how we were going to size those workloads into those new areas, off of the old blade chassis and old standalone systems, to the more modern hyper-converged systems.

In our organization, optimizing application performance is a continuous process that is beyond human scale. The reason is that there are times of the year when we get hit with big spikes. It is as if you were Verizon and sold all your cell phones during Christmas time. We have a very similar thing here at our company, a period of time where we basically shut the business down. We have to give critical resources to critical applications, giving them the resources that they need in order to function. To do that, we take critical workloads and put them off into their own area, then determine how much we have to take from the rest of the systems in order to put it into new clusters or systems. That is super critical for us every year.

We use it for management and rightsizing of our platforms, specifically for migration activities, because we're always doing it. The migration has been the biggest thing that I personally use.

What is most valuable?

There are a number of tools that we use within it. Some of the things that I request are the data dumps. They write some kind of scripts inside there where they are actually able to pull CSV files for me. Then, I can go in, take all that information, and build a master gold list for my migration activities.
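
The kind of CSV consolidation being described can be scripted. Here is a minimal Python sketch that merges several exported CSV dumps into one "master gold list" keyed on a VM name column; the directory, file, and column names are hypothetical, not the actual export format.

# Minimal sketch: combine several CSV data dumps into one master gold list,
# keyed on a VM name column. File and column names are hypothetical.
import csv
from pathlib import Path

master = {}
for dump in Path("dumps").glob("*.csv"):     # e.g. per-vCenter exports
    with open(dump, newline="") as f:
        for row in csv.DictReader(f):
            master.setdefault(row["vm_name"], {}).update(row)

with open("master_gold_list.csv", "w", newline="") as f:
    fieldnames = sorted({key for row in master.values() for key in row})
    writer = csv.DictWriter(f, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(master.values())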

Everything that I ask for, I get. I don't know what they are clicking nor do I know what they're doing, but when I request it, I get it. There are all sorts of different ideas and scenarios that I put forth to the developers.

Turbonomic provides specific actions that prevent resource starvation. While I'm not in there banging around on the tool all the time, I can tell you that I do very much benefit from it. On Monday, I was getting additional information from the Turbonomic guys.

We use the solution’s automation mode to continuously assure application performance by having the software manage resources in real-time. 

What needs improvement?

There are a few things that we did notice. It does kind of seem to run away from itself a little bit. It seems to have a mind of its own sometimes; it goes out there and just kind of goes crazy. There needs to be something that throttles things back a little bit. I have personally seen cases where we've been working on things and pulled servers out of the VMware cluster, and found that Turbonomic was still trying to ship resources to and from that node. So, there has to be some kind of throttling, or an ability for it to not be so buggy in that area, because we've pulled nodes out of a cluster into maintenance mode, then brought them back up, and it tried to put workloads onto them outside of the cluster. There may be something available for this, but it seems very kludgy to me.

I would like an easier to use interface for somebody like me, who just goes in there and needs to run simple things. Maybe that exists, but I don't know about it. Also, maybe I should be a bit more trained on it instead of depending on someone else to do it on my behalf.

There are some things that probably could be made a little easier. I know that there is a lot of terminology in the application. Sometimes applications come up with their own weird terminology for things, and it seems to me that is what Turbonomic did. 

For how long have I used the solution?

Three years.

What do I think about the stability of the solution?

I have never had a problem with it. The product is a little overanxious at times.

What do I think about the scalability of the solution?

When we did the rollout in that phased approach, it was not difficult at all to roll new technologies, converged and hyper-converged, into Turbonomic. So, it's definitely scalable. It moved right into the company pretty easily.

There are quite a few people using it, mostly for operations type of work. There are probably 25 users from operations, support, the performance team, and performance planning.

How are customer service and technical support?

I have worked personally with Turbonomic, one of their guys, on some of this stuff. I haven't talked to him in a while, but he helped us develop a lot. The support for Turbonomic is incredible. 

Their technical support is excellent. By far, they are probably the best. It's probably why I am sitting here talking today, because I have to give these guys top props. I think the employee enthusiasm about this product is absolutely top-notch. It would probably be a great place to work.

I've worked with the Tier 1 support and their consultants. We had a consultant here for a year who was absolutely a top-notch fellow. He just became part of the team. He wanted to learn how we were doing things and tool the application to do what we needed it to do, which he did. He also left great instructions. A lot of his legacy is still there and being used today. 

Which solution did I use previously and why did I switch?

We were using a combination of vROps and VMware. We were also using BMC TrueSight, which we still use today. There are a couple of others out there as well, because the network team uses a few things. There are all sorts of tools that I think they were kind of hog-tying together to make them work.

Some of these solutions are ingrained in our processes and have been around literally forever. So, there isn't the staff or the resources right now to rewrite a lot of these things. Currently, we do kind of a side-by-side comparison, and I believe some folks have written some ways to integrate the new data from Turbonomic into the old way of doing things. That's just a culture change at the company. It's just a big place that has been around for a long time, which works slowly.

This solution was brought to us by one of our AVPs. She had worked at HPE, and we didn't know about it. She said, "Let's look at this," because apparently she used it at HPE. We looked at it, and said, "Ah, this is great." Then, we went with it.

Turbonomic is more customizable with a lot more features. 

Even though you can turn on automation in VMware, it's not very good. It's kludgy and has a tendency to break things, where the autobalance of workload management that Turbonomic does within VMware is much better than the VMware tools which are designed for this. That may change, because VMware seems to be doing great with this. However, for right now, Turbonomic is the only way to go. 

TrueSight is just straight up what you see is what you get.

How was the initial setup?

I went to the training when they first rolled it out, but I wasn't involved in the setup.

They did the setup in sections. So, they started off with the lower environments and some of the clusters out there that really needed a lot of attention, mostly blade servers and such. So, it was a gradual rollout. I think the entire rollout was somewhere in the area of a year to a year and a half. However, to get it fully running, where we could use this solution to our benefit, that was at least six months.

We use both real-time scheduling and manual execution for implementing the solution's actions; with manual execution, we schedule change activities for a later date. We have had to enable or disable certain things, and it seems to do that just fine.

What about the implementation team?

We used Turbonomic for everything to do with the setup. On our side, it required about five FTEs, who were engineering and operations personnel. There were folks who were creating the design and where it would be rolled out. That design was passed down to the operations folks who were actually implementing everything. So, it was done in phases.

We only have two engineers doing maintenance, a primary and a backup, and this is like their extracurricular activity.

What was our ROI?

The ROI would be in the retirement of hardware, specifically a lot of the older hardware, as we start to go into converged systems. That is where we are seeing our ROI. We are getting rid of that old, junky hardware and starting to integrate and align things into one specific way of managing all our workloads, but not on old hardware. If anything, the ROI is end-of-life hardware elimination.

We also see ROI in avoiding extended support agreements (ESAs) for old software. Migration activities seem to be where Turbonomic has benefited us the most. It's one click and done. We have new machines ready to go with Turbonomic, properly sized, instead of somebody sitting there with a spreadsheet and guessing. So, my return on investment would certainly be on currency, from both a software and a hardware perspective.

Turbonomic provides a proactive approach to avoiding performance degradation. Our capacity and performance team use this solution as part of other tools that they utilize.

The solution provides application-driven prioritization, with its AppDynamics integration, to show us how top business applications and transactions are performing. If Turbonomic comes back and tells us, "Hey, this application needs more resources," or, "You're coming up on a period where it will need more resources, start planning now," we can act on that. We have certainly used it for that and will continue to use it for that. We have also used it for troubleshooting a couple of times, saving us 25 percent when it comes to performance-based issues.

We have seen a 25 percent reduction in tickets opened for application issues.

Turbonomic has definitely helped to save human resource time and cost involved in monitoring and optimizing our estate. It has automated a lot of things. We have saved 30 to 35 percent in human resource time and cost, which is pretty substantial. We don't have a big workforce here, so we have to use all the automation we can get.

Which other solutions did I evaluate?

We did an architectural review. We had to look at other options, but I don't know exactly what those were.

Some of the tools which already exist are not that great. They really need to up their game if they're going to keep up with something like Turbonomic.

What other advice do I have?

If you have a big shop and it's scattered all over the place, then definitely take a look at this tool. There is probably fit-for-purpose licensing for any size of organization. It's a great automation process.

Turbonomic shows application metrics and estimates the impact of taking a suggested action based on its input from AppDynamics. So, we plug it into AppDynamics, then AppDynamics and Turbonomic seem to work together for that. 

It knows what business-critical applications we have, but I don't think it manages anything specifically within the application itself. It is mostly just resource-driven.

As money starts to get tight and budgets start to get really scrutinized, I think people are going to have to start looking at using Turbonomic to help optimize cloud operations to reduce cloud costs.

We are going to continue to use it going forward. I just don't know at what level. There are a lot of changes being made to the infrastructure, so it's going to depend on the tools and things that become available, like VCF as well as all the products that they have built-in through vROps, enhanced vROps, and things that already come with the software.

I would rate it an eight (out of 10). Personally, there is a lot that it does that a regular person like me does not have the time to sit down and dig into. We expect things to be a little bit more automated. That is why I gave it an eight. I would give it a 10 (out of 10) if I could get in there and it were just click, click, and click. However, I don't know if there is that kind of comfort level here yet to just let this thing go and have its day with the place.

Which deployment model are you using for this solution?

On-premises
Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
DA
Infrastructure Manager at an insurance company with 501-1,000 employees
Real User
Recommendations regarding volumes and family types tell us how much we will be saving by implementing them

Pros and Cons

  • "The recommendation of the family types is a huge help because it has saved us a lot of money. We use it primarily for that. Another thing that Turbonomic provides us with is a single platform that manages the full application stack and that's something I really like."
  • "In Azure, it's not what you're using. You purchase the whole 8 TB disk and you pay for it. It doesn't matter how much you're using. So something that I've asked for from Turbonomic is recommendations based on disk utilization. In the example of the 8 TB disk where only 200 GBs are being used, based on the history, there should be a recommendation like, "You can safely use a 500 GB disk." That would create a lot of savings."

What is our primary use case?

We use the Reserved Instances and the recommendations of sizing of our family types in Azure. We use it for cost optimization for our workloads there.

We started with the on-prem solution, but then we went with the SaaS model. Now, Turbonomic handles the installation and the support of the appliances.

How has it helped my organization?

The volumes feature lets us know which volumes or disks are not attached or that are not being used anymore and that we can go ahead and delete them. It tells us how much money we'll be saving if we delete them. It's the same thing with Reserved Instances. It has that ability, that visibility, with those recommendations. 

There is also the family type that tells you which family the VM is going to and how much you're going to be saving. Disk tiering is one of the latest features. If you go from premium to standard, it shows you just how much you're going to be saving. It makes those decisions based on metrics.

When it comes to cloud costs, to VMs, the solution is saving us about $30,000 a month. It has also definitely reduced our IT-related expenditures by about $40,000 per month. And when it comes to the human resource time involved in monitoring and optimizing our estate, it saves us about 20 hours a week.

What is most valuable?

The recommendation of the family types is a huge help because it has saved us a lot of money. We use it primarily for that. Another thing that Turbonomic provides us with is a single platform that manages the full application stack and that's something I really like. 

One other useful feature in Turbonomic is the support for Kubernetes. That's one of the things that I have worked on with Kevin, our account rep, from Turbonomic. We're going to work on setting that up because our developers are pushing hard for Kubernetes for containers this year. Knowing that it's supporting that is awesome.

Something that Turbonomic started doing, just a couple of months ago with one of their latest releases, is the potential savings when it comes to disks. It is very promising. They make recommendations based on the type of disks. For example, if you're using a premium SSD, it makes recommendations, based on I/O metrics, to go to a standard SSD. Those types of recommendations are very valuable and that's another area where we see cost savings, which is awesome.

What needs improvement?

One ask that I'm waiting for, now that they have the ability to make recommendations for disks, for volumes, and for disk tiering, is all about consumption. For example, we have a lot of VMs now, and these VMs use a lot of disks. Some of these servers have 8 TB disks, but only 200 GB is being used. That's a lot of money that we're wasting. In Azure, you don't pay for what you're using; you purchase the whole 8 TB disk and you pay for all of it, no matter how much you use. So something that I've asked for from Turbonomic is recommendations based on disk utilization. In the example of the 8 TB disk where only 200 GB is being used, based on the history, there should be a recommendation like, "You can safely use a 500 GB disk." That would create a lot of savings. And we would have a higher success rate with that than with the disk tiering, at least in our case.
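
As a sketch of the utilization-based recommendation being asked for: pick the smallest disk size that covers current usage plus some headroom, then estimate the difference in cost. The tier sizes and per-GB price below are illustrative assumptions, not Azure's actual managed-disk tiers or pricing.

# Minimal sketch of a utilization-based disk recommendation. Disk sizes and
# the per-GB price are illustrative assumptions, not Azure's actual tiers.
def recommend_disk(used_gb, headroom=0.5, tiers=(128, 256, 512, 1024, 2048, 4096, 8192)):
    """Pick the smallest tier that fits current usage plus headroom."""
    needed = used_gb * (1 + headroom)
    return next(size for size in tiers if size >= needed)

def monthly_savings(current_gb, recommended_gb, price_per_gb=0.05):
    """Estimate savings at an assumed flat price per provisioned GB per month."""
    return (current_gb - recommended_gb) * price_per_gb

current, used = 8192, 200        # the 8 TB disk with only 200 GB used
target = recommend_disk(used)    # -> 512 GB under these assumptions
print(f"Recommend {target} GB, saving ~${monthly_savings(current, target):.2f}/month")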

Also, unfortunately, there is no support for cost optimization for networking.

For how long have I used the solution?

I've been using Turbonomic for about three years.

What do I think about the stability of the solution?

It was definitely more stable on-prem. The reason I say that is because we've had several times where we have run into licensing issues. I don't know why that has been the case, although they have been few and far between. 

But when it has no issues, it runs just as if it were on-prem. The performance and the stability are not a problem.

What do I think about the scalability of the solution?

It's a mature product. It very quickly detects when new VMs, new workloads, are added. You don't have to wait long. The tool picks things up very quickly in our environment.

How are customer service and technical support?

Their technical support is excellent. I would rate them a nine out of 10. Whenever I send an email, they respond. The only reason I don't give them a 10 is that I have been waiting for some time now for the Reserved Instances feature to work again. That's the only thing that has been a downer, because we rely on it heavily. We now have to use the Azure tool for that, and before the issue with Reserved Instances, we didn't have to. There's a lot of overlap between Azure and Turbonomic, but Turbonomic works better for us.

An aspect of the Turbonomic team that I have found, working with them over the years, is that whenever we've had an issue, they have always been willing to listen and to address it and to add the features we need. For example, when we started, Reserved Instances was really not up to par. But they listened to their customers and they started making changes. As time has gone on, the product has matured. They've incorporated a lot of the features that we've asked for into their appliance.

How was the initial setup?

We tried it first on-prem, years ago. We used to host it. I installed it and updated it, working with the Turbonomic team. When it was hosted in our environment, I was responsible for everything.

The initial setup was straightforward. Because it was an appliance, the deployment took about an hour to stand it up. We use VMware on-prem so it was done with an OVA file, and it was pretty much a "next-next" process because the OVA is already packaged with how the tool should be deployed. There are certain custom inputs needed, like the name of the appliance, and how much storage. But everything else was already pre-packaged. The configuration definitely took a little bit longer.

The only downside was that Turbonomic came out with many releases. The latest releases had the latest features, but that required continuous upgrades. If we wanted to take advantage of a new feature, we had to keep upgrading the appliance on-prem. That is why, when we found out that they have a SaaS model, we went with that instead. We wanted Turbonomic to worry about things like the licensing, the updates, et cetera. We don't have to worry about that at all now, and that has been a huge relief. It has saved us a lot of time, for sure.

We didn't have to do any type of migration to their SaaS offering. They took care of everything in the back end.

We have five engineers who use the product, including a networking engineer, a storage engineer, and our DevOps team.

Which other solutions did I evaluate?

There are competitors out there. Since we're in Azure, which is the only cloud vendor that we use today, there is something called Azure Advisor to help you with costs. I've tried it because it comes with Azure and we're paying for it, but Turbonomic is a better tool for us. We always seem to gravitate more toward it because everything is right there in a single pane of glass. It gives you recommendations based on Reserved Instances, even though right now, unfortunately, that's not working 100 percent. It does a lot of things for us, like the family types, the deleted volumes, and that type of automation, which is awesome. Azure Advisor does give you some of that as well, but it doesn't have everything. We have to drill down in it, and it's not easy to navigate.

What other advice do I have?

At one point, the most valuable feature for us was Reserved Instances. The only problem is that last year we changed from the EA licensing model to an MCA, and at this moment, unfortunately, the Reserved Instances feature is not working. They're still working on it. It's on the roadmap, but that definitely was a big selling point for us. It worked well for us because we purchase a lot of Reserved Instances for our VMs.

Turbonomic makes a lot of recommendations to help prevent resource starvation. We can't implement all of them because it depends on our workloads. Not all the recommendations work for us because workloads on some of our VMs are very seasonal. There may be three times throughout the year, for about two weeks, where those VMs' usage is very high. They have to work at a high level. The solution can only go back a maximum of three months, and it won't work for us in some of those workloads because it doesn't have full visibility into the past year. But for some of our other workloads, those recommendations work.

Optimization of application performance is an ongoing process for us, especially as we move VMs from on-prem to Azure, or even build new VMs in Azure. More apps are being created and more services are being created, and we're taking advantage of that within Azure. However, we don't use Turbonomic's automation mode to continuously assure application performance by having the software manage resources in real-time. Our DevOps team is using Azure to control that automation.

For us, Turbonomic is for infrastructure as a service, VMs. As for applications, not yet, because we're now introducing Kubernetes into our Azure environment. While Turbonomic does have support for that, I don't know what it looks like yet; I have a meeting scheduled with them in order to configure that. It doesn't create it for you automatically in the back end. So it's more for our IaaS. For storage, the closest thing now is the disk tiering, with recommendations for going to and from the different types of standard and premium SSD and HDD disks. Before, there wasn't that level of support. It was just VMs and family types, in our case.

We use manual execution for implementing the solution’s actions. We use manual because it depends on the business. We run a 24/7 shop. That's how it has always been on-prem, and that's how it is now in Azure, for our production VMs. We need to schedule maintenance windows because some of the recommendations from Turbonomic require a reboot. We need to schedule downtime with the application owners within the business.

Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.