Plixer Scrutinizer Review

Allows for QoS tuning or traffic shaping around what is being used, saving us from unnecessary upgrades


What is our primary use case?

The primary use case is monitoring all of our bandwidth utilization.

Our solution is up-to-date. We're using the standard NetFlow v9 and IPFIX with the products that currently support NetFlow.
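
To make the flow export concrete (this is my own sketch, not anything shipped with Scrutinizer): a NetFlow v9 export packet starts with a fixed 20-byte header defined in RFC 3954, and a collector is, at its core, a UDP listener that decodes that header plus the flowsets behind it. The listen port 2055 below is a common export default and is an assumption here, as is limiting the parse to the header only.

```python
import socket
import struct

# Minimal sketch of a NetFlow v9 listener. Port 2055 is a common export
# default (an assumption here); a real collector would also parse the
# template and data flowsets that follow the header.
LISTEN_ADDR = ("0.0.0.0", 2055)

def parse_v9_header(packet: bytes) -> dict:
    """Decode the 20-byte NetFlow v9 packet header (RFC 3954)."""
    version, count, sys_uptime, unix_secs, sequence, source_id = struct.unpack(
        "!HHIIII", packet[:20]
    )
    return {
        "version": version,          # 9 for NetFlow v9 exports
        "record_count": count,       # records advertised in this packet
        "sys_uptime_ms": sys_uptime, # exporter uptime in milliseconds
        "export_time": unix_secs,    # export timestamp (UNIX seconds)
        "sequence": sequence,        # export packet sequence number
        "source_id": source_id,      # observation domain on the exporter
    }

if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(LISTEN_ADDR)
    print(f"Listening for NetFlow v9 exports on {LISTEN_ADDR}")
    while True:
        data, exporter = sock.recvfrom(65535)
        if len(data) >= 20:
            print(exporter[0], parse_v9_header(data))
```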

How has it helped my organization?

Scrutinizer gives us an answer. Time to resolution for problems has been reduced, because I now have a tool where I can look at historical data. I no longer just say, "Well, you're going to have to call us when it happens again. Maybe we'll catch it." It's pretty much the only tool that gives me this type of visibility.

Our IT department's internal reputation for resolving historical bandwidth problems has improved 100 percent. The general time to resolution has improved by having a tool where we can look and see what is going on, even in just the last half hour, on a line that isn't performing well.

The insight the solution provides as a result of its correlation of traffic flows and metadata is really all that I have, so it is extremely valuable. If I were to give it a number on a scale, I'm probably holding it around a seven or eight, as far as usefulness, compared to my other tools.

We found the solution helps eliminate data silos because we do allow all company access to the product, since it's a read-only tool. We have shown a number of different departments in DevOps how to look at it themselves and diagnose their own problems, e.g., when they're having slowdowns to Azure. We have our express routes tagged to the Scrutinizer product. They can tell when the line is saturated and what's saturating it. This has empowered them to self-police what they're doing on the line, and it reduces the ticket count that we get. This gives us an insight on how to manage the traffic flows. More people can see IT data in real-time without having to ask IT a question and wait.

It is part of the workflow for our basic troubleshooters: anytime someone says there is slowness or a performance loss, you check Scrutinizer for that site to see what it is doing. So, it is in our workflow.

Our biggest lesson from using this solution is how to control and manage Commvault. Our biggest hog of traffic was Commvault backups. There was a lot of stress on the network as backups ran into daytime business hours. We were able to track when and where they were running their backups just based on how NetFlow showed Commvault's usage.

What is most valuable?

The history features are the most useful: going back and looking at the data from when a problem was reported, anything prior to right now. A lot of the time, if we had really slow traffic over the weekend and I come in on a Monday, it will not be slow anymore. So, I have nothing to look at live; it's all in the history.

The solution helps to enrich the data context of our network traffic. It allows me to see which applications are most in use on a slightly historical basis, going back a day or a week at most. It allows me to tune QoS or traffic shaping around what's being used. It saves me from having to upgrade unnecessarily, if I don't need to.
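
As a rough illustration of the kind of roll-up behind that tuning (my own sketch, not Scrutinizer's reporting engine; the application names and byte counts are invented): given flow records tagged with an application and a byte count, you rank applications by their share of traffic over a window and then shape or prioritize around the heaviest talkers.

```python
from collections import defaultdict

# Hypothetical flow summaries as (application, bytes) pairs, e.g. for the
# last day. In practice these would come from whatever flow collector or
# export you have in place.
flows = [
    ("commvault-backup", 9_200_000_000),
    ("https", 3_100_000_000),
    ("o365", 1_400_000_000),
    ("commvault-backup", 7_800_000_000),
]

def top_applications(flow_records, limit=5):
    """Sum bytes per application and return the heaviest talkers first."""
    totals = defaultdict(int)
    for app, nbytes in flow_records:
        totals[app] += nbytes
    total_bytes = sum(totals.values()) or 1
    ranked = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
    return [(app, n, 100.0 * n / total_bytes) for app, n in ranked[:limit]]

for app, nbytes, pct in top_applications(flows):
    print(f"{app:20s} {nbytes / 1e9:6.1f} GB  {pct:5.1f}%")
```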

It is an easy go-to tool.

What needs improvement?

The visual presentation of the data can sometimes be confusing. It takes a bit for people to spin up on how to read the graphs: how they are displayed and how busy the information is. When you first glance at anything that's displayed, other than the single line drawings, there is a lot of information shown, and it can be overwhelming if you're not used to it. In a lot of cases, a product like this only gets looked at when there's a report of a problem. It's not an everyday tool, so most people don't get used to it.

For how long have I used the solution?

I have been using this solution for five to six years.

What do I think about the stability of the solution?

It is extremely stable. I don't ever play with it. I have never had to tweak or tune it. We have had to upgrade it in the past when they have made a major architecture change. Other than that, we sort of forget that the box is running, use the GUI, and go. It does what it does. It doesn't crash.

One person is required for maintenance and deployment. There is a backup guy, but all he does is look at my docs and repeat what I do. He doesn't spend any time on it because I don't spend any time on it. If it were to crash right now and we had to rebuild it, we would just download a new one and start over, because that is how infrequently we touch the product. No one probably remembers the database passwords anymore, because in six years we haven't had to touch it.

What do I think about the scalability of the solution?

I haven't ever used more than a single collector, so I've never really tried to scale. My device input count has always been below 1,000, so I have never pushed the box to its max.

Out of IT, there are about eight technicians who either configure NetFlow on a device or are directly working a ticket. After that, we have about 40 to 50 end users who view data to understand their own areas of the network in the different regions, such as Asia, Europe, etc. These are IT professionals, but they are in monitoring, not networking. For example, "What's my internet usage? What's my MPLS usage, so I can see how my site's doing?" It's become more of an overview.

We are not really looking to create any new usage. In fact, we've pulled back some of its usage only because we have gone away from traditional MPLS and routers and onto an SD-WAN solution that already brings onboard its own version of the same metrics. Therefore, we've reduced the number of inputs to it, but we're almost topped out there. 

That's pretty much the way of our infrastructure: it has pulled back from the use of NetFlow. NetFlow is still being used for most of our major Internet connection points around the globe. It is probably still being used on all of our ties with other vendors, as they're private lines into our company. It also covers anything at the data centers that uses traditional networking. So, we're not really growing it. It's not really shrinking anymore, but it was. Last year, it shrank by quite a bit.

We are primarily a retail shop. We have a lot of little stores which used to be part of a much larger network. Those are all SD-WAN now, so they're not seeing anything with Scrutinizer. However, it's still on all of our Internet lines. So, it's pretty stagnant and stable.

How are customer service and technical support?

The technical support is stellar. It feels like Plixer really has the one product that they're doing, and that's pretty much all they do. They're not overly diversified. When you call them, it's almost as if they're waiting on the hook for someone to have a problem so they have something to do. That's what it feels like.

When I try to contact either Jamie or Jake, it feels like they're ready to start up GoToMeeting within a minute or two of my email going out. It does almost feel like they're on the hook hoping somebody will have a problem somewhere so they have something to do. That's the response level that I get.

Which solution did I use previously and why did I switch?

The company was using the old MRTG, which doesn't really provide application visibility at all. It's not really a commercially supported product. So, if anything went wrong, it was like, "Well, I don't know how it works." We switched to get onto something that was commercial.

How was the initial setup?

The initial setup was straightforward because I've used the product in a number of other companies. I'm very familiar with NetFlow. For me, it was rather easy. Then again, it could have been really complex and I would have thought it was easy.

There was really not a lot to do to get it set up. I would give its construction of maps a bit of a ding for complexity. Trying to get maps and lines to show up so people can look at them and understand what they're seeing was a little on the complex side, because the little drawing manipulator is not exactly the greatest. It's like using crayons.

It wasn't a hard product to set up. The hardest part was getting the resources out of VMware to get it set up on. But that's not their fault. A product like this comes in and says, "I need this much storage." Then, the people who run VMware freak out: "Why would anything need that much storage?"

What about the implementation team?

I was the one who set it up. I came in as the expert. 

I talked to Jamie and Jack directly at Plixer because I already knew them from other jobs. I use them because, as a technical person, I suck at doing reports. So, anytime a boss will ask me for some type of oddball historical reporting on a site, I still go right back to them, and go, "Okay, guys, show me again how this works," because they do it maybe once a year, and we don't have a reseller who does this.

It was probably up and running inside of four hours.

My implementation strategy was just to gain visibility. It was to set up the company's product, send everything at it, and show my employers what they can see. It was to show them a blind spot.

What was our ROI?

For historical questions, it has reduced time to resolution by a significant amount, since we previously didn't have the data. There were some problems that never got resolved because we didn't have the data. In some cases, it has flat out made resolution possible rather than us cancelling the ticket. Easily, it is a solid 70 percent time reduction.

It allows us to show as a department that we can answer some technical problems from past complaints, so we look like we are tracking what has been going on in the network rather than its current state. The goodwill that comes out of seeing the IT department as someone who can solve a problem is where the biggest return on investment has come. 

Just for my team of eight, having something that they can look at and go, "Oh, that's what's taking up the traffic," means we now have a smoking gun to go address. Was it backups? Was it someone's download? That's another good return on investment, rather than, "I don't know. Let's try this." We are not taking the shotgun approach to troubleshooting anymore.

Which other solutions did I evaluate?

I saw a gap in our visibility, and I already knew what solution would make that work. This solution was something I knew we needed to bring in. Because Plixer is dedicated to the idea of NetFlow, I don't think there is anything out there that could be gleaned from NetFlow that they haven't thought of or built into their product. So, I'm comfortable giving them a leader role in that technology because that is where they're focused.

We did evaluate other products. We had minimal capital to spend on a tool, and I was put up against the guy who does all the Voice over IP. He wanted Actionware's QoS manager, which looks at all of the QoS network-wide and keeps it tuned, so we were at least flowing the right data for the right reasons in all the right places and everything matched. He wanted a tool that kept all of that in place. I felt that watching the data flow outside of where QoS ran would be a bigger bang for the buck. I won out on this one.

The difference between the two products is that they serve different masters. They're not apples to apples by any means. One is just making sure that your QoS policies are uniform and balanced across all of your products. Whereas, Scrutinizer is there to show you what your network is actually doing. It can be used for tuning QoS if you wish to, but then you would be doing that part manually. It can be used for telling you how your site has been doing over the last week or month, as it does capacity planning. It's really easy for the end user to look at, too. It gives them a view so they get that self-help. High-level management can build their own views and look at it, whereas nobody else can really look at the QoS tool, because it actively changes the network, so you don't want to give that tool out. Therefore, it really wasn't apples to apples.

For our business, it was a question of which direction was the right way to go, for the money involved, to make our department more visible. It made better sense to have this solution than just something that helped our one engineer tune QoS better.

Our SD-WAN is not directly a product that needs Scrutinizer to be effective. I would almost consider it a slight competitor. Its internal metrics and tools provide a very similar insight to what Scrutinizer does. It is the only product that I probably have in my entire architecture that doesn't need Scrutinizer to watch it. It watches itself with a little better clarity, but that is only because it knows itself really well. 

Our SD-WAN solution, CloudGenix, is able to do some IPFIX. We don't send it to Scrutinizer, because its data is just as good, and there is no need to duplicate it on the network.

What other advice do I have?

I would strongly advise that you look at selling the tool as a self-visibility tool to other departments and areas of your business. It makes a great internal status page that others can look at. If an end user or manager hears a complaint about something, then they have a page that they can go to, to say, "How's the network doing?" It saves a lot of calls. I think for the tool to be its own internal health selling point is something to not overlook.

I would rate the product as a 10 (out of 10).

Which deployment model are you using for this solution?

On-premises

Disclosure: I am a real user, and this review is based on my own experience and opinions.
