Aternity Review

Not only helped us know which devices to refresh, but also helped us determine, with factual data, whether a refresh was even necessary


What is our primary use case?

The initial use case was purely endpoint performance monitoring, but one of the key things that really shone about Aternity as a product was that the use cases were extremely broad. It became, without a doubt, our most important asset management tool.

It was used for productivity management — that was also a very strong use case. 

Another use case was compliance and security, because one of the key things that we started to leverage it for was monitoring when people were turning off or disabling antivirus products. We could measure that with Aternity and then take action. It was really great at compliance and security.

We also used it for application performance, obviously. It provided super-deep levels of insight into applications through performance tracking.

We also used it for cost reduction when it came to unused licensed software. Adobe was a big one; Visio, Project, and Access were others. We managed to drop our spend quite heavily by using it for that.

How has it helped my organization?

One of the key benefits was when it came to buying. From a procurement team perspective, very often what would happen was that when they were going to buy new IT hardware, they would go to a couple of vendors — big names, like Microsoft, Apple, etc. — and the vendors would give them half a dozen test devices each, and then they would deploy those to various people and wait for feedback. Normally, the feedback would be very human and very speculative. More than likely, the person who got the super-shiny, super-sexy MacBook Pro or Surface Pro would say, "Yeah, I really like it. It's amazing. It does the job." But what we were actually able to do with Aternity was scientifically measure which asset was giving us the best performance for the spend.

We actually found, in some instances, that it wasn't always the most expensive laptop that was performing the best. It was the one that actually managed to run the company's image optimally. We were able to really save when it came to spending, and do so scientifically. We did not need to solicit feedback from people. The feedback was present in the tool. So when it came to buying, we knew exactly what to buy, at the right price point, for that performance. There were big savings there.

Another key thing that we weren't anticipating saving a lot of money on was network capacity. There were some really interesting dashboards that you can get to in Aternity, out-of-the-box, no configuration needed. They showed top talkers on a per-site basis. If you've got a really distributed organization — our company had offices in 200 countries — each country will procure network infrastructure from whichever incumbent in that nation is the easiest or the cheapest or the best one to get it from. You end up with a very complicated network. In the third-world regions, it's a lot of ADSL. In the more metropolitan areas, in first-world countries, you're getting expensive leased lines, or fibre, or dark fibre. For traditional network monitoring solutions, it can become quite challenging, especially when bandwidth and things like that are changing regularly. But what Aternity allowed us to do was actually see individuals who were taxing the network from an endpoint perspective, and we could tackle that on an individual-by-individual basis.

We could also give advice to local IT leaders on whether or not their bandwidth was appropriate for what they were doing. In some instances, we were able to tell people that they could actually shrink the capacity that they were paying for because it was unnecessary. There were all sorts of "edge" use cases. Your ability to save money and to improve performance and productivity with Aternity is limited only by the imagination of the team that is in charge of the tool.

The solution also provides metrics about actual employee experience of all business-critical apps, rather than just a few. You need to create signatures so that the tool can monitor them appropriately, but it is very agnostic. You need to point Aternity at the thing that you want visibility into, and it gives you exactly that, and in the ways that you want it. You're measuring it from the user, from the inside out, and from the outside in. It gives you very different levels of perspective compared to standard, traditional IT monitoring tools that you use: SNMP, pings, polls. Those conventional, old-world metrics are very easy to dispute as an end-user. If you're an end-user and your experience is bad, someone telling you that the network is up and running and okay doesn't really help you. Someone telling you that the server is good doesn't help you either. It's the perspective of the monitoring with Aternity that really changed the dynamic, because all of a sudden you're able to see things from the end-user's experience. So there are far fewer occasions when you are arguing with your end-user and saying, "No, we don't see an issue." You're far more a proponent of that person's experience. You can tell much more quickly exactly what those issues are that they are experiencing.

We also used its Digital Experience Management Quadrant (DEM-Q) to see how our digital experience compared to others who use the solution. Aternity was probably one of the earlier adopters of a strategy where they would allow customers to baseline their experience against a wider marketplace. It's becoming more prevalent in other tool sets that I see across big enterprise, but it was at least 18 months ago that we started to see Aternity providing us with that capability. It was very interesting because one of the things that some of the bigger industry consultancies, like Forrester, try to do is create "industry monoliths," where you can baseline against people within your industry. Media companies will look at other media companies; industrial transport and logistics organizations can benchmark against each other. But where Aternity, and some of the other vendors that are doing this at the moment, brings something quite new to the marketplace is that you're benchmarking against everyone. That allows you to really see whether what you're doing is correct for you as an organization. Are you getting the results that you need for the money you're spending?

Using DEM-Q undoubtedly affected our decisions about IT investments. It's always very difficult, especially at a large enterprise, to know that you're doing the right thing. When you go into a big purchase, especially for someone who is head of enterprise or head of IT, a key consideration is, "Am I spending the money wisely? Am I going to get return on investment?" If you are able to benchmark against your industry peers and see that you're doing the right thing, that in itself is a validation. It's a validation that you're headed in the right direction. It's a validation that you're spending the money appropriately for the improvements that you're getting.

It can also potentially help you to avoid spending money unnecessarily, because there are certain components, certain aspects of your stack, where you would need to invest heavily to get a small gain. The tool can allow you to look at whether or not that is a necessary investment. "Do I need to upgrade everyone's memory chips from 8 GB to 16 GB?" If you've got 8,000 devices, and an 8 GB memory chip costs you $100, you're looking at close to a million bucks. The tool can show you through its own metrics, and through the baselines against your industry peers, that maybe that's not a worthwhile investment. That million dollars is going to get you 5 percent, and that 5 percent is not necessarily really worth it. Outlook is going to open one second faster. Do you want to spend a million bucks so that everyone can get their emails one second faster? It's that kind of thing that makes decision making much more clinical, much simpler. When I'm sitting in front of a director and he says, "Why do you want this much money?" I want to be able to stand behind that request and say, "If I spend it, this is what you're going to get." That kind of ability to baseline, not only against your own org, but against industry peers, means that when you have those conversations, you can say those things much more confidently.
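The memory-upgrade arithmetic above can be sketched as a quick back-of-the-envelope calculation. The device count, unit cost, and one-second Outlook gain are the illustrative figures from this example, not data from the tool:

```python
# Back-of-the-envelope cost/benefit for a fleet-wide memory upgrade.
# All figures are illustrative, taken from the example in the text.
devices = 8_000
cost_per_upgrade = 100            # USD per 8 GB -> 16 GB upgrade
total_cost = devices * cost_per_upgrade

launches_per_user_per_day = 1     # e.g. opening Outlook once a day
seconds_saved_per_launch = 1      # the "one second faster" gain
working_days = 250

# Total employee time recovered per year, fleet-wide, in hours.
hours_saved_per_year = (devices * launches_per_user_per_day
                        * seconds_saved_per_launch * working_days) / 3600

print(f"Total cost: ${total_cost:,}")                     # $800,000
print(f"Hours saved per year: {hours_saved_per_year:,.0f}")  # ~556
```

Laying the spend next to the recovered time like this is what makes the "is it worth it?" conversation clinical rather than speculative.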

We saved on hardware refresh by considering the actual employee experience, but it was not only that. Traditionally, with refresh, there is one single metric that IT departments use for going after assets that need refreshing, and that is age. Age is the number-one metric. If you've got 10,000 devices and you get enough budget to replace, say, 1,000 of them, 99 percent of big enterprises are going to go for the oldest 1,000 devices in the estate. That's completely wrong. Just because they're the oldest, it doesn't mean they're the worst. What we were doing with Aternity was targeting the 1,000 least-performant devices, not the oldest. It wasn't guesswork; it was actual science that told us which 1,000 were the worst. The 1,000 human beings using those devices would gain the best levels of productivity from those devices being refreshed.

You can also see whether or not a refresh is actually necessary. This is something like "painting the Forth Bridge": you paint the bridge and then you go back to the beginning and start all over again because it has taken you that long to do it. With traditional refresh programs, you replace those 1,000 devices, and then you start all over the following year, and you replace another 1,000 devices because you get the same budget. And you do that again and again. But with Aternity, you can look at it and say, "Do we need to?" Are the bottom 1,000 devices performing in such a bad way that they need refreshing? Or are they actually performing well enough that maybe you don't need to spend that $10 million this year? And you can roll that money into network upgrades, or server upgrades, or cloud migration, and wait until the end of the next financial year before you look at it again, because you can actually see.

So you're saving money, undoubtedly, but also investing properly. You're now using metrics that provide you with certainty, instead of just something as monolithic as age. "Oh, a device is three years old, let's refresh it." Sometimes a three-year-old device is perfectly adequate.

In our company, we had 55,000 laptops. On average, the refresh spend would be between $50 million and $100 million a year. We were able to turn about 10 percent of that around, meaning savings of between $5 million and $10 million, by making sure that we were not refreshing devices that didn't need to be refreshed, and by targeting the ones that were most appropriate rather than just the oldest.
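The selection logic described above — refresh the measurably worst devices, not the oldest — can be sketched as follows. The `Device` fields and scores are hypothetical, standing in for whatever composite performance metrics the monitoring tool reports:

```python
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    age_years: float
    perf_score: float  # hypothetical composite score; higher = better

def refresh_candidates(fleet, budget_count):
    """Pick devices to refresh by measured performance, not by age."""
    by_performance = sorted(fleet, key=lambda d: d.perf_score)
    return by_performance[:budget_count]

fleet = [
    Device("old-but-fine", age_years=4.0, perf_score=88.0),
    Device("new-but-slow", age_years=1.5, perf_score=41.0),
    Device("mid-life",     age_years=2.5, perf_score=67.0),
]
worst = refresh_candidates(fleet, budget_count=1)
print([d.name for d in worst])  # ['new-but-slow'] — not the oldest device
```

The point of the sketch is the sort key: swapping `d.perf_score` for `-d.age_years` would reproduce the traditional age-based refresh that the review argues against.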

It's true that the simplest way to look at these products is in the monolithic way that a financial analyst would look at return on investment. Did we save money? That's really a small part of the value that you can derive from this. The bigger bit is that if you've just replaced 1,000 old machines, and only 400 of those 1,000 users had bad experiences with their old laptops, those 400 get a slight improvement and they're pretty happy. If you go at it with Aternity, you actually target the 1,000 worst devices, and you're highly likely to get a 100 percent success rate when you give each of those people a new device. All 1,000 of them are going to be happy. Your "net promoter score," your customer satisfaction, is going to be much more accurate, and much higher. It's very easy to focus only on the financials, but there's actually a big chunk of value that doesn't fall into financial buckets. That piece is also very good, given the more accurate, targeted approaches that you can use with Aternity.

When employees complained of trouble with applications or devices, the solution enabled us to see exactly what they saw as they engaged with apps, and hilariously so. We did some travel to remote offices to showcase some of the capabilities, and we would sit in an executive boardroom with 10 to 15 people, and troubleshoot performance issues, in the room, in front of people. There was surprise, amazement, and genuine pleasure that we would see on people's faces when we could resolve issues that they had been facing for months or years. They had been having the same issues, the same performance problems, whether it was Excel taking a long time to load, or network instability, or voice call problems, and we would fix it in minutes, in front of them in a meeting, with absolute confidence. It would just blow their minds. You would see levels of faith and trust build in minutes, because they could see that there were no shadow games. We were not hiding behind a telephone. We were sitting in front of them and fixing it tangibly, right in front of their faces. That level of confidence and trust that we built with them was completely irreplaceable.

What was even better than that was that we set aside small pockets of time each month for people to go and target the worst-performing machines, and then proactively reach out to the users. So instead of waiting for someone to complain, we would reach out to the people who were having the hardest time. We would have an IT rep phone a person and say, "Look, we can see your machine is running like absolute trash and here's a couple of things that we can do to fix it." That's just unheard of. Most people were just completely blown away by the fact that they were getting a call to make their day easier and better, and they didn't have to do anything about it.

What is most valuable?

It was good with the standard, out-of-the-box applications that everyone expects to be able to monitor, but what was really valuable was that we could also monitor home-brewed applications, which big enterprises have a lot of — applications that are not off-the-shelf but are developed in-house. We could monitor those very carefully, and that was incredibly important. It gave us very bespoke, detailed levels of monitoring, and that applied to on-prem, mainframe, cloud — any type of application. That was great.

The most valuable thing that you get from Aternity is very broad visibility. You get visibility of your network, of your endpoints, of your software usage, your application performance, capacity, in one pane of glass. We had 20 to 30 IT tools, including application performance monitoring, network monitoring, security, endpoint detection, network protection, capacity management, service management — every kind of monitoring you can imagine. But Aternity was always the first place that I turned for anything, because you can see everything in it. 

The beauty of it is that it has that really simple Tableau backend, so you can manipulate the data within it incredibly easily. If you can think of something, you can usually find a way to force Aternity to show you that permutation of data, in the way that you want to see it. Its flexibility is great.

The user interface is good. It's elegant, it's quick, it's simple, it's all built on Tableau, so it feels familiar. It's not difficult to learn how to use it.

What needs improvement?

Potentially, the one thing that could help drive better levels of enterprise adoption is the process of creating application monitoring signatures. That process can be a little bit difficult, and if one thing could be simplified, it would be that.

But that's probably quite unfair because it's a super-technical thing, so it's difficult. There is no other tool that can do it in a simpler way. If there were something I would want to simplify or improve, it would be that, but even that would be quite unfair to demand of any product.

For how long have I used the solution?

I have used Aternity for three years. 

What do I think about the stability of the solution?

The stability was very good. It wasn't without teething issues, but comparatively speaking, if you were to line it up against every other product of a similar nature in the industry, it's very stable.

One of the things we were able to do was set maximum load limits on how much CPU, memory, and disk the product would use. If it went over a specific threshold, the sensor would shut itself down, which meant that it would never really impact performance: before it got to the point where it started to affect a machine, the sensor would kill itself. You've got safety nets upon safety nets. From a stability perspective, it was fantastic.
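The self-limiting safety net described above amounts to a simple threshold check. This is a minimal sketch of the idea; the ceilings and the check itself are illustrative assumptions, not Aternity's actual configuration or code:

```python
# Hedged sketch of a self-limiting agent: if the agent's own resource
# use crosses a configured ceiling, it shuts itself down rather than
# degrade the host. Thresholds are illustrative, not product defaults.

MAX_CPU_PCT = 5.0    # assumed CPU ceiling for the sensor process
MAX_MEM_MB = 150.0   # assumed memory ceiling for the sensor process

def should_shut_down(cpu_pct: float, mem_mb: float) -> bool:
    """Return True when the sensor exceeds any configured ceiling."""
    return cpu_pct > MAX_CPU_PCT or mem_mb > MAX_MEM_MB

# Within budget: the sensor keeps running.
print(should_shut_down(cpu_pct=2.1, mem_mb=90.0))   # False
# Memory spike: in this sketch, the sensor would kill itself here.
print(should_shut_down(cpu_pct=2.1, mem_mb=300.0))  # True
```

In a real agent the measurements would come from the operating system and the check would run periodically; the key design point is that the agent polices itself rather than relying on the host to throttle it.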

What do I think about the scalability of the solution?

Scalability was one of the things where I was having to go in and beg my executive for more money because I wanted to put it on every device in the network. I could quite quickly see every possible use case under the sun. The initial business case was only to cover desktops and laptops, but about three months into the project I was back in the executive office asking for more money so that I could deploy it to servers and everything else. I wanted that same visibility across the full enterprise.

The compatibility of the sensor is very broad so we didn't really have an issue when it came to scale. The issue I had was that I wanted it everywhere, and it was a case of having to reformulate the business case and go back to the exec and ask for more money because we identified that it was such a good product. We needed to put it everywhere rather than just on endpoints.

Which solution did I use previously and why did I switch?

We did not have a similar solution previously. We had every other kind of monitoring tool that you can imagine, but they were all for specific use cases: network, database, infrastructure, clouds. They were all point solutions. Aternity was the first solution that was focused on end-user performance monitoring, but it also brought in that breadth of being able to see everything.

How was the initial setup?

I was involved in the initial setup of Aternity, every step of it: the proof of concept, the purchase, the initial roll-outs, the deployment, the management, the training; every facet of it.

The initial setup was fantastically simple. That was one of the things that allowed the business case to go through so quickly and so efficiently. From the proof of concept, the business could immediately see the value in the tool. It was solving problems that had been around for a really long time, and it was solving them in really simple ways. Even though it was a time when the company was going through quite a rigorous digital transformation, we were able to deploy the sensor without creating any disruption. No one really noticed it, to be honest. They didn't even know it was there. And we immediately got the results and the data back.

The thing that took a little bit of time was creating the signatures for our in-house developed applications, but a lot of the out-of-the-box functionality provided immediate value. Two or three days after deployment, we were getting value back. We were seeing data that was interesting and useful and insightful.

We were quite aggressive and were at 99 percent coverage within about three months. That covered just under 60,000 devices, so we were deploying it to a huge enterprise.

Our implementation strategy for Aternity was "concentric circles." We started close to home. We would look at deploying to sites and to teams that we knew and were familiar with, so that we could solicit feedback quickly. We would roll it out and we would give it a little time, with concentric circles of 100, 1,000, and 5,000 users. We'd wait a week, get feedback from people, and see if it had impacted performance. One of the beautiful things is that you can monitor Aternity with Aternity. You can see if it is impacting performance of the machines you deploy it to and you can't say that about a lot of tools. When you're deploying antivirus or EDR or other monitoring solutions, it's very rare that you get to see, first-hand, exactly what impact you are having by deploying your own toolset.
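The concentric-circle strategy above can be sketched as a simple wave generator. The ring sizes are the ones mentioned in the text; the final catch-all wave covering the rest of the estate is an assumption for illustration:

```python
def rollout_waves(total_devices, ring_sizes=(100, 1_000, 5_000)):
    """Yield (wave_number, batch_size) pairs: the fixed rings first,
    then the remaining estate in one final wave. In practice each wave
    would be followed by a feedback/monitoring pause (e.g. a week)."""
    deployed = 0
    for wave, size in enumerate(ring_sizes, start=1):
        batch = min(size, total_devices - deployed)
        if batch <= 0:
            return
        deployed += batch
        yield wave, batch
    if deployed < total_devices:
        yield len(ring_sizes) + 1, total_devices - deployed

# A ~60,000-device estate rolls out in four waves.
print(list(rollout_waves(60_000)))
# [(1, 100), (2, 1000), (3, 5000), (4, 53900)]
```

The generator also degrades sensibly for small estates: a 150-device site gets one full ring of 100 and a final wave of 50.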

That really allowed us to do quite a lot of PR work with the change-management department. We could say, "Look, we've smashed it out onto 1,000 devices and it has caused no impact. You can see that it has caused no performance issues." We could show them baselines and measurements to prove that, and that allowed us to develop trust very quickly with the change team. As a result, we could move quite fast.

Aternity was the one tool that we were able to actually train all people on. Within my team of 100, I would have specific pockets of people who were experts at databases, networks, infrastructure, servers, and endpoints. For those specific skill sets, your network guys, for example, would be trained on SolarWinds and PuTTY, and your database guys on Oracle and SQL. But Aternity was the one tool we could give everyone access to, and train everyone on, and it was useful to all of them.

What about the implementation team?

We had two dedicated Aternity Professional Services people in-house all the time, attached to our purchase. To me, that was fundamental. Having people who are technical experts to help with the deployment and training and application signature-creation was something you can't beat. If you had to buy the software on your own and try to do it on your own, you would move much slower. Having that Professional Services component attached to our purchase of Aternity was a really beneficial situation.

What was our ROI?

When you look at the breadth of usability, the breadth of use cases that you will discover, you start getting into the kinds of volume metrics where you are saving money when it comes to asset management, and you are saving money when it comes to productivity management, and you're saving money when it comes to procurement and compliance and security. That return on investment business case is the easiest one that you're ever going to do.

I was given 12 months to demonstrate return on investment. We finished our ROI business case within four months. It was that convincing.

What's my experience with pricing, setup cost, and licensing?

It's not a cheap product. There are no two ways about that. If you compare it with a couple of the other solutions operating in the space, it might be on the slightly more expensive side, but it is one of those tools where, once you've got it, you understand the true value. You will get that money back.

What I would say to people who are thinking of buying Aternity is that it's not always better to go cheaper. Sometimes you buy cheap and you end up buying twice. What we found with Aternity is that, fine, it's on the expensive side when compared to other products, but it's also 16 times more useful. You will get so much more out of it.

Which other solutions did I evaluate?

We evaluated other similar tools, but ultimately settled on Aternity due to its capabilities and compatibility with our existing tooling stack.

The other thing that was very attractive was how Aternity stitched naturally into the Riverbed ecosystem. We were using some of Riverbed's other programs, like AppInternals and NetIM, among others. Aternity felt like it would fit into that ecosystem much better. Ultimately, that was one of the key considerations. And because of the fact that Aternity was a Riverbed product, we already had relationships with that team. Creating that vendor ecosystem was a simpler situation.

But when it came down to the nuts and bolts of the RFP, when we got into the proof of concept, we could see that, despite what a lot of tools say they can do, this one just did it, simply and well and out-of-the-box, without fussing and messing around and trying to configure the bejesus out of it. That was key: simply put the agent in place and you're good to go.

Some of the other ones said that they were end-user performance monitoring solutions, but they were very focused on some quite simple things — CPU, memory, and disk — and nothing more than that. They were very mechanically simple, and that led to the tool being a little bit useless. Anyone can open up Task Manager and look at how much processor, memory, and disk they're using. But that information isn't really usable and useful until you start to line it up with the other things that really matter. And those include: What does your CPU, memory, and disk utilization mean for the end-user experience? How performant is your operating system, how performant is your image, and how performant is the application stack that you're deploying on top of that specific image? A couple of the other products that we looked at were very heavily focused on extracting kernel data from the machine, but not really looking at the stuff that mattered. Context is very important. You can't really give someone contextual awareness when the product is only looking at a monolithic subset of metrics.

What other advice do I have?

Based on my experience, what was key was having Professional Services for at least a period of time. It might not be necessary for the full, end-to-end life cycle of the product, or for the whole period that you buy licenses for. But having Professional Services people — people who know the product intimately, inside and out, and who have a direct line of communication to the engineering teams within Aternity — come and help you set it up, get it out of the box, and start to think through those use cases is invaluable.

Because they've got a direct link to the engineering team which is also getting requests from all of Aternity's other customers, they have the capability of bringing ideas back to you and saying, "This is what another customer is doing. Why don't we do this?" It makes the speed with which you can start to really leverage the product so much faster. You start to get value from it much quicker. My advice is that when it comes to implementation, a bit of Professional Services will go a long way.

Another big thing for me was that monitoring, prior to us using Aternity, always felt like something that we were doing in very specific ways. If I wanted to look at a network, I would go to one product. And if I wanted to look at application performance, I would go to another product. The thing I learned from Aternity was that if you change the perspective that you are using, you can get a much broader level of visibility. The perspective, in this case, is looking from the end-user or endpoint. Because we had changed that dynamic and we were looking from the endpoint inwards, all of a sudden we could see so much more. That was just "revelationary." I really started to look quite hard at whether or not we needed 10 different monitoring tools. And a couple of those monitoring tools were retired because we found very little need for them after we had built proper levels of monitoring into Aternity. There was just no need to have those point solutions in place because we could already see everything in Aternity. The thing that I learned was, although we bought it because we wanted to see endpoint performance — and that's probably why everyone goes shopping for that type of product in the beginning — what I very quickly learned was that it's much more than that. It's a very wide and capable tool.

If you had to choose one tool, if your organization said, "We're going to stop spending money on IT tools altogether, and you're only allowed to have one thing," I would take Aternity every time, because you can do so much with it. It's like the Swiss Army Knife of IT tools. It's the most useful tool I've ever used by a long, long way. There's nothing that I've used that has ever come close to being as useful as Aternity.

Which deployment model are you using for this solution?

On-premises
Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.