Vectra AI Overview

Vectra AI is the #2 ranked solution in our list of Network Traffic Analysis tools. It is most often compared to Darktrace.

What is Vectra AI?

Vectra® is the leader in network detection and response – from cloud and data center workloads to user and IoT devices. Its Cognito® platform accelerates threat detection and investigation using artificial intelligence to collect, store and enrich network metadata with the right context to detect, hunt and investigate known and unknown threats in real time. Vectra offers three applications on the Cognito platform to address high-priority use cases. Cognito Stream™ sends security-enriched metadata to data lakes and SIEMs. Cognito Recall™ is a cloud-based application to store and investigate threats in enriched metadata. And Cognito Detect™ uses AI to reveal and prioritize hidden and unknown attackers at speed.

Vectra AI is also known as Vectra Networks, Vectra AI NDR.

Vectra AI Buyer's Guide

Download the Vectra AI Buyer's Guide including reviews and more. Updated: October 2021

Vectra AI Customers

Tribune Media Group, Barry University, Aruba Networks, Good Technology, Riverbed, Santa Clara University, Securities Exchange, Tri-State Generation and Transmission Association

Pricing Advice

What users are saying about Vectra AI pricing:
  • "We are running at about 90,000 pounds per year. The solution is a licensed cost. The hardware that they gave us was pretty much next to nothing. It is the license that we're paying for."
  • "The pricing is very good. It's less expensive than many of the tools out there."
  • "From a pricing perspective, they are very commercially competitive. From a licensing perspective, just be conscious that some of their future cloud solutions come with additional subscriptions. Also, if you're outside of the US, you will get charged freight for the device back to your country."
  • "We have a desire to increase our use. However, it all comes down to budget. It's a very expensive tool that is very difficult to prove business support for. We would like to have two separate networks. We have our corporate network and PCI network, which is segregated due to payment processing. We don't have it deployed in the PCI network. It would be good to have it fully deployed there to provide us with additional monitoring and control, but the cost associated with their licensing model makes it prohibitively expensive to deploy."
  • "At the time of purchase, we found the pricing acceptable. We had an urgency to get something in place because we had a minor breach that occurred at the tail end of 2016 to the beginning of 2017. This indicated we had a lack of ability to detect things on the network. Hence, why we moved quickly to get into the tool in place. We found things like Bitcoin mining and botnets which we closed quickly. In that regard, it was worth the money."
  • "Their licensing model is antiquated. I'm not a fan of their licensing model. We have to pay for licensing based on four different things. You have to pay based on the number of unique IPs, the number of logs that we send through Recall and Stream, and the size of our environment. They need to simplify their licensing down to just one thing. It should be based on the amount of data, the number of devices, or something else, but there should be just one thing for everything. That's what they need to base their licensing on. Cost-wise, they're not cheap. They were definitely the most expensive option, but you get what you pay for. They're not the cheapest option."
  • "The license is based on the concurrent IP addresses that it's investigating. We have 9,800 to 10,000 IP addresses."
  • "There are additional features that can be purchased in addition to the standard licensing fee, such as Cognito Recall and Stream."

Vectra AI Reviews

Head of Information Security at an insurance company with 1,001-5,000 employees
Real User
Top 10
Gives us that extra chance to stop a disaster before it happens

Pros and Cons

  • "One of the key advantages for us is we define a 24/7 service around it. We use far more of Vectra alerts than we do with our SIEM product because we understand that when we get an alert from Vectra we actually need to do something about it."
  • "The solution has not reduced the security analyst workload in our organization because we still need our SIEM. Unfortunately, while Vectra, for us, is a brilliant tool for network investigations, giving wonderful visibility, it doesn't go the whole way to replace our SIEM, which is needed for compliance. So, I still have the same amount of alerting and logging that I did before. It gives us a more defined ability to see incidents, but it doesn't give us enough information to satisfy a PCI or 27001 audit."

What is our primary use case?

One of the biggest things is the visibility of stopping or identifying any infection as soon as possible. In this case, if someone downloads something malicious to their workstation, we have a number of controls in place. However, it wasn't so much the endpoint. It was the spreading of a worm-type scenario or a WannaCry-type thing. Anything that could potentially spread after the initial infection is where we wanted it to come in and give us that visibility.

It was key for us to have something we could use to identify, as soon as possible, anything that was call-center initiated. That was probably our biggest thing: to push it in that direction, as we're a company regulated by the FCA. They drive us continually for improvement and behavioral analysis, and network analysis sort of falls into that bucket.

We already have a SIEM, which some people would argue gives us a lot of that visibility. It doesn't tend to give it the focus that we need. From Vectra, we get a lot of alerts of, "This is happening," or, "This is unusual." This is a lot easier than waiting for a couple of logs to come in, then a bit of AI logic at the back of it to potentially push it in that direction. It's very much for us to get a view of a potential attack, then deal with it as quickly as possible. To pinpoint where it's coming from, and where it is going to go.

One of the biggest things that I wanted to ensure is that it covered our call centers because that is where I see my biggest risk. So, I was really key on getting sensors across all geographic locations within the UK and in all of our small communication rooms.

It is all on-premise. We have a number of call centers spread around the UK. We look at all east-west traffic, as well as north-south. It all goes into our brain in our data center. We do have some branches out in Azure, but we're waiting on the new plugin that they are trying to develop. We are just starting our cloud journey and most of our infrastructure is still in private cloud. We haven't really gotten to the point where we have public cloud.

We're up-to-date, but I don't know the exact version number that we are on.

How has it helped my organization?

The key improvement for us were:

  1. The additional 24/7 monitoring, using the high-fidelity alerting from Vectra rather than the SIEM. This was our biggest change. We have managed to leverage it far more than our SIEM, which just throws out loads of spam. 
  2. The FCA requirements to build on behavior monitoring.
  3. The use case of the call center with its high turnaround of staff who are perhaps not as clued in or engaged in our user awareness program as they could be. 
  4. The lack of end-user deployment is another big improvement. We wanted something that was easy to deploy and get up and running really quickly. It took a couple of weeks to get rid of the alerts that we didn't want, but the actual involvement from the network teams was minimal, which was really good for us because we just don't have the resources to spend a lot of time trying to configure devices.

We use the solution’s Privileged Account Analytics for detecting issues with privileged accounts, although I haven't seen a huge number of alerts. We have a quarterly QBR, and the day before the last one they mentioned noticing an alert pop up.

One of the key things for us is we have an annual pen test (an internal one) that's not as involved as a full red team. But it's enough for the pen testers to sit with the SOC guys, then we put the different tool sets together: what they're doing and how that registers with our Vectra, SIEM, and endpoint AV. We see what picks up where, so it gives us an ability to check those tool sets that we have.

From a Vectra point, it will pick up a number of different things. But, it will also miss a number of different things. That's how pen testers work. They work covertly. So, it's really good for us to see what we can do and what we can't. Then, that feedback goes back to Vectra. We say, "Okay, well why didn't we pick up this?" They'll come up with a reason or they'll take it away and find something out about it. That's really good and a nice part of the service. We get to check to make sure the tool sets are working, but we also provide feedback and they're very open to that type of feedback.

I believe the solution has increased our security efficiency. It's hard to prove without having a direct attack. But, I get challenged about ransomware from my board, to say, "How do we defend against ransomware?" That's a big topic. One of the key things was when Vectra went in, it saw a developer run a script, which essentially changed the names for a number of files and put a different extension on, but they were doing some development type work. That's how their script ran, and it identified that as ransomware, which is a great thing to say. 

Although there was no encryption or malice involved, it did create new files, rename files, and delete old files, which essentially is what ransomware does anyway. It followed the same sort of logic, so I can report that back: "We do have some protection. It wouldn't stop it, but we could limit the amount of damage that it may do." 
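The behavior the reviewer describes, a script creating new files, renaming files, and deleting the originals at volume, is the classic file-activity pattern that ransomware detection keys on. As an illustration only (this is a toy heuristic, not Vectra's actual AI model, which draws on far more network signals), flagging that pattern from hypothetical file-operation events might look like:

```python
from collections import Counter

# Hypothetical event shape: (host, operation), where operation is one of
# "create", "rename", or "delete", the three behaviors the reviewer notes
# that both real ransomware and the developer's script exhibited.
def looks_like_ransomware(events, threshold=100):
    """Flag hosts whose file activity shows the create/rename/delete
    pattern at high volume. A toy sketch: real detectors also weigh
    entropy, timing, and SMB access patterns."""
    per_host = {}
    for host, op in events:
        per_host.setdefault(host, Counter())[op] += 1
    flagged = []
    for host, ops in per_host.items():
        # Require all three operations to be present and the total volume
        # to exceed the threshold before raising an alert.
        if all(ops[o] > 0 for o in ("create", "rename", "delete")) \
                and sum(ops.values()) >= threshold:
            flagged.append(host)
    return flagged
```

Note that, exactly as in the reviewer's story, a benign batch-rename script trips this logic too, which is why such detections are triaged by humans rather than auto-blocked.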

I don't know about other companies, but I get the feeling most people look to identify rather than block. We're not a high-end bank. We are not going to stop people working. We're going to investigate what they're trying to do. That's just our risk appetite. We have to work. Unless it's absolutely 100 percent, we won't stop them. We would just look at it afterwards. So, all our alerting, we don't have any orchestration at the back of it to say, "Okay, if this happens, then I'm going to play that port in a firewall or I'm going to drop that from there." We won't do that. Humans will all be part of that process. We'll get a call, then we will make a crisis management team decision, etc. That's how we operate.

If, for instance, our AV doesn't pick it up. I think that is where Vectra will come in. So, if somebody gets infected and maybe hasn't picked it up. That's where, if that worm spread and our endpoint signatures weren't up-to-date, they went into zero day, and nobody knew about it. Vectra would give us that opportunity. It would potentially give us something that would say, "Well, this is not normal. This machine does not communicate with all these other machines like it is now." That's where we see it coming in. It gives us that extra chance to stop a disaster before it happens, or at least limit the amount of potential output of damage that that an incident can do.

Zero days are always very difficult. If the AV vendor doesn't know about it, it's not going to be able to tell me about it, stop it, quarantine it, or do anything. Having a tool set like this, which monitors network traffic for anomalies, it gives us that chance. I can't say that it definitely will pick it up, but there's another opportunity for us to reduce the amount of damage that can be done.
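The signal described above, a machine suddenly talking to many hosts it has never communicated with before, is at heart a peer-baseline anomaly. A minimal sketch of that idea, assuming hypothetical (source, destination) flow records and making no claim to resemble Vectra's real models:

```python
def unusual_peer_spike(baseline_peers, window_flows, ratio=3.0, min_new=10):
    """Flag hosts that suddenly contact many machines outside their
    historical peer set. baseline_peers maps host -> set of peers seen
    historically; window_flows is an iterable of (src, dst) pairs for
    the recent observation window. Toy logic for illustration only."""
    recent = {}
    for src, dst in window_flows:
        recent.setdefault(src, set()).add(dst)
    flagged = []
    for host, peers in recent.items():
        known = baseline_peers.get(host, set())
        new = peers - known
        # Alert only when the host contacts both an absolute number of new
        # peers (min_new) and a multiple of its historical peer count.
        if len(new) >= min_new and len(new) >= ratio * max(len(known), 1):
            flagged.append(host)
    return flagged
```

The two thresholds keep a single new server connection from alerting while still catching worm-like fan-out from a normally quiet workstation.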

What is most valuable?

It gives us the point of where something is happening, which is the key thing for us. (I know that there is a back-end recall, which probably gives a lot more data, but we don't use that.) We then leverage our SIEM product to provide us logs from those specific sources that it's talking about, giving us that information. It is the accuracy of: It is happening here and on this particular host, then it's going to here to this particular host. It's that focus which is probably the most advantageous to us.

Vectra's ability to reduce alerts by rolling numerous alerts up into a single incident or campaign for investigation means the severity grows as additional alerts appear around that particular host. This is a useful feature, rather than spamming alerts. But we've never really had an issue with a lot of alerts. We really do triage our alerts quite well and have a good understanding of what does what. 

One of the key advantages for us is we define a 24/7 service around it. We use far more of Vectra alerts than we do with our SIEM product because we understand that when we get an alert from Vectra we actually need to do something about it. You can't really say you don't get false positives, as the action has happened. It's whether we consider that action as a concern rather than a SIEM that sort of gives you a bit of an idea of, "That may be something you're interested in." Whereas, Vectra says, "This has happened. Is this something you would consider normal?" I think that's the bit that we like. It just says, "Is this normal behavior or isn't this normal?" Then, it's up to us to define whether that is or isn't, which we like. 

The solution provides visibility into behaviors across the full lifecycle of an attack in our network, beyond just the internet gateway, because we do east-west traffic. So, it looks at the entire chain across there. We're fortunate enough not to be in a position that we've seen a meaningful attack. When we do have pen testers come in, we can see quite clearly how they pick traffic up and how it develops from a small or medium alert to go to higher severity, then how it adds all those events together to give more visibility. 

The solution does a reasonable job of prioritizing threats and correlating them with compromised host devices. We use that as how we react to it, so we leverage their rating system. We are reasonably comfortable with it. At the end of the day, we actually spend a lot of time and effort to tweak it. It's never going to be right for every company because it depends on what your priorities are within the company, but we do leverage what they provide. If it is a high, we will treat it as a high, and we will have SLAs around that. If it's a low, we'll be less concerned, and the events that come out pretty much lead to that. The events that we see and the type of activity going on, it makes sense why it's a low, medium, or high. Just because a techie has done a port scan, that doesn't mean we need to run around shouting, "Who has done this?"

When we originally put it in, it was really quite interesting to see. Picking up the activities from the admin user and what they were doing, then going, "By the way, why have you done that?" Then looking at a scan and going, "Well, how did you know that?" So, it's sort of cool to pick up that type of stuff. We tend to trust what it tells us.

What needs improvement?

Room for improvement depends on how their strategy and roadmap develop, as they have a lot of third parties that they integrate with, e.g., more orchestration around alerts and what to do with them afterwards. They don't pretend to be working in that space; that is a third-party type activity.

There are always the little things that they could do a bit better, like grouping or triage filters. Clearly, they've taken that onboard and developed those over the course of the last 18 months to two years to put these additional functions in. My guys are constantly saying, "Oh, it'd be useful to do this and useful to do that."

The solution has not reduced the security analyst workload in our organization because we still need our SIEM. Unfortunately, while Vectra, for us, is a brilliant tool for network investigations, giving wonderful visibility, it doesn't go the whole way to replace our SIEM, which is needed for compliance. So, I still have the same amount of alerting and logging that I did before. It gives us a more defined ability to see incidents, but it doesn't give us enough information to satisfy a PCI or 27001 audit. 

For how long have I used the solution?

I have been using this solution for about two years.

What do I think about the stability of the solution?

Interestingly enough, when we first got Vectra, we had a number of problems with it. The guys were all over the solution trying to fix it. It turned out to be a hardware issue; I think they ended up changing their supplier. They just ripped everything out and put in a load of new equipment. This was identified about three months after it was installed. 

These things happen. There's not a lot you can do about it. However, they were really good and didn't make any excuses, apart from, "It can only be the hardware," which it was. Once they put the new hardware in, everything went really well.

Very few people are required for maintenance. We just generally run the alerts now. I have a guy spend probably less than an hour a day, maybe less than that, putting out fires and alerts. Then we investigate that, depending on its severity. The actual hardware maintenance is nothing. We'll just keep an eye on it or get an alert if an interface has gone down.

What do I think about the scalability of the solution?

One of the biggest things that we wanted to implement was something that was easy to do. Our problem, as well as I'm sure a number of other companies, is the amount of resources to install these new technologies, then how the resource center operates and uses these technologies. It's great having all these additional add-ons here, there and everywhere, but my team is quite small. So, it had to be quite easy. It has to be quite focused. Hence, we went with Vectra.

At the moment, we have a hardware brain and are not near the limit of that. To go from that, I think Vectra was looking at some sort of applied solution, but it would then be a change. So, we're down to limitations of the hardware. I always say, "If we bought a massive company, we would probably have to redesign and architect the solution." At the moment, they made sure that we have some growing room. 

Our purchase was a one time thing for the entire company, otherwise we would be leaving ourselves exposed. Just this week, I took a Vectra device up to a new company that we purchased and stuck it in there. It is really that simple. We'll probably end up with a bit of traffic because we will see a lot of new servers and workstations that we have to do triage around.

We have probably 3,500 to 4,000 users across the UK. My team is quite small. I have a couple of guys who are cyber-related.

How are customer service and technical support?

The technical support is brilliant and really responsive. That is probably down to the fact that they are a small company. Their guys respond instantly, normally within the day that they have somebody online and having a look at it, or they're putting it away and the communication is excellent. They will say, "Okay, we'll put it back to the developers," and then they give us updates, which is really efficient.

Vectra is growing at the moment. They support us very well. They do seem to rely on key people. Would my service be the same if they got rid of our technical manager? I don't know. They are a small, close family team, which is really good. Whether that would change if a few key people left, I don't know. But I know they are growing as a company as well, so let's hope they scale in proportion to their customer base. Only time will tell. Other companies I've dealt with grew their services too quickly and service suffered as a result.

Which solution did I use previously and why did I switch?

It isn't a tool set to replace a current tool set. It's just an additional feature. For me, it has only increased our workload, but that's because we had nothing there before.

We did not previously have a network monitoring solution. We have a toolset that does event log monitoring, but nothing across the network itself. I think we have basic flow visibility, and the network team use that. However, there is no real way of investigating individual network packets, then using them for anything in particular.

How was the initial setup?

The initial setup was easy. We have multiple sites, so we had to go around and travel to different sites. However, the actual brain was configured in a few hours. Once it was up, it was up. The network guys did nothing after that point. My guys probably spent a couple of days, over the course of a month, just tweaking it. Then the effort gradually goes down; a new server popping up might add a bit of additional alerting, but once we get a handle on that, it comes down to something really quite manageable.

The priority for us was to get the main call center up and running at the start. We needed the brain up and do the implementation to see the east-west traffic in our call center. Then, we brought on additional sites, depending on the size around the UK, as we monitored it. 

What about the implementation team?

We used the Vectra guys for the implementation. Our technical engineer came in, going into the data centers with our network engineer (or remotely), then set it up.

For the actual deployment within the data center and around the sites, just two people were needed, one from each company. After that, it was the configuration of the alerting, which took one of the SOC guys, using Vectra for reference.

They provide us a health check and provide us with recommendations on what we need to do every quarter, which is perfect. There is nobody else who does that. That is probably part of the advantage of being a smaller company. 

Once every quarter, they'll do a health check and say, "Alright, these are the new functions. This is what you need to turn on. That's not quite working. Those haven't fired. You might want to look at removing those." This is really good to see, because I get a lot of vendors who, once they've sold you a technology, don't really care. They go, "Yep, there you go." They don't look at what you installed, how you've installed it, provide any recommendations, or look at how it's performing. 

This gives me the assurance that my SOC guys are doing a good job, that we are on top of any changes, and that we are getting the most out of the solution.

Vectra has pretty much forced this upon us, which is really good because everyone is very busy. Before you know it, the months turn to years and disappear. 

What was our ROI?

ROI is a difficult one for security tools. You can argue that if you don't see anything where you did investment, this is the reason to have good security tools: not to have an incident. You only really know when bad things happen, and you're in the middle of it. Otherwise, it's doing what it needs to do to stop or identify an issue in the first place.

What's my experience with pricing, setup cost, and licensing?

We are running at about 90,000 pounds per year. The solution is a licensed cost. The hardware that they gave us was pretty much next to nothing. It is the license that we're paying for. I think if we outgrow our current hardware, then we will have a look at bigger hardware or some sort of distribution. I'm sure they have a number of different options for larger companies. I don't see that being a major issue for us in the next three to five years.

We don't have complete visibility because we don't have all of that metadata surrounding it. Sometimes there might be more metadata before, something afterwards, or something missing, but we accept that because we don't have the funds to pay for the additional functionality that it can provide. It's a trade-off.

Which other solutions did I evaluate?

When we started off, apart from money, we had to look at behavioral analysis. We weren't sure where we wanted to go with the solution, whether we wanted to look at the endpoint or network. So, after a RFI, to define which direction we wanted to go, we thought that we would go down the network analysis route.

Because we have call centers, there is normally a high turnover of staff. The jobs themselves are quite intense and people move around quite a lot, so it was key for us to get some visibility into what those guys are doing. We thought, "Although we do a lot of user awareness and logging, this is probably where our weakest link is." It was a case of somebody potentially clicking on a malicious link, some sort of phishing attack, which was probably, or is probably, going to cause us the most pain.

We looked at Darktrace and there was another option that dropped out. So, we looked at the main players in that area. We decided on the behavior analysis for network, then we took the top three: Vectra, Darktrace, and another solution. 

It came down to Darktrace and Vectra. Darktrace looked much prettier than Vectra; unfortunately, the support that we'd heard about and the reviews that we read amounted to, "Here's the new tool set. Off you go." This is what we didn't want. We wanted somebody to hold our hand, then give us the support we needed to ensure we get the best out of the tool set.

It obviously comes down to price as well, and we feel we picked the best product that fitted us. We did quite a lot of due diligence on both. I went to different places that had both installed and got references for both. I firmly believe that both products would have done the job well. However, given the support from Vectra, along with their customers' references saying how good it was, I think we made the right decision.


What other advice do I have?

People do a lot more than we actually see. Looking at the test and development guys, sometimes they do things that they don't understand. They will do it because it works; the actual things happening behind the scenes are what they don't really understand. If there's something that's really complicated, the people who initiated it often don't really know what it is. That is always a problem, because in our sort of company we have a lot of developers who are doing a lot of coding and things like that, but they're not 100 percent on all the other things that they affect, such as the supporting applications underneath it. 

They are making a change on one particular app, but it's using the other apps underneath it to develop that and push that across to something else. All these extra, different steps that they are completely oblivious to where we go, "Actually, you've just done this." They go, "Well, I don't know, I just ran the script over here. I don't know why that would happen." But, it'll do a LDAP lookup or connect to a share. Those are the sort of things that you get a lot of visibility from people who don't understand. So, that can become tricky. That's pretty much par for the course for a lot of security tool sets. Where you have a couple of people who know one particular aspect, but don't really understand everything that's going on. To be fair, IT is a big area. You can't expect everyone to know everything of everything, not when you're not working in a massive IT structure, and the security team is a small department.

You need to be quite clear on your business case and what you're expecting from it. Be 100 percent sure on your use cases. It's an excellent tool. It doesn't create a huge amount of overhead, but it is a tool that you need to keep on top of. The more you keep on top of it and get it right at the start, the easier it will make your life going forward. Don't just stick it in, then leave it to whirl away as a lot of people do. You have to spend that bit of extra time, and it's not a huge amount of time, and leverage other teams. 

The way they do their customer success is really good. There's nothing bad that I've got to say apart from the costs, but nothing's free, is it?

It has to be up there with my favorite security tool set at the moment. I am quite lean on scores, but the solution is definitely nine (out of 10). If I look at all my other security tool sets, this is the one that my guys value the most.

Which deployment model are you using for this solution?

On-premises
Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Operational Security Manager at a financial services firm with 1,001-5,000 employees
Real User
Top 20
Using Recall and Detect we have been able to track down if users are trying to bypass proxies

Pros and Cons

  • "The most valuable feature for Cognito Detect, the main solution, is that external IDS's create a lot of alerts. When I say a lot of alerts I really mean a lot of alerts. Vectra, on the other hand, contextualizes everything, reducing the number of alerts and pinpointing only the things of interest. This is a key feature for me. Because of this, a non-trained analyst can use it almost right away."
  • "The key feature for me for Detect for Office 365 is that it can also concentrate all the information and detection at one point, the same as the network solution does. This is the key feature for me because, while accessing data from Office 365 is possible using Microsoft interfaces, they are not really user-friendly and are quite confusing to use. But Detect for Office 365 is aggregating all the info, and it's only the interesting stuff."
  • "Vectra is still limited to packet management. It's only monitoring packet exchanges. While it can see a lot of things, it can't see everything, depending on where it's deployed. It has its limits and that's why I still have my SIEM."
  • "The main improvement I can see would be to integrate with more external solutions."

What is our primary use case?

Vectra was deployed to give us a view of what is happening on the user network. It helps us to check what is being done by users, if that is compliant with our policies, and if what they're doing is dangerous. It covers cyber security stuff, such as detecting bad proxies, malware infections, and using packet defense on strange behaviors, but it can also be used to help with the assessment of compliance and how my policies will apply.

We also use Vectra to administer servers and for accessing restricted networks.

There are on-prem modules, which are called Cognito Detect, the NDR/IDS solution, which captures traffic. We also have the SaaS data lake, and we also have the Cognito Detect for Office 365, which is a SaaS-type sensor within the O365 cloud.

How has it helped my organization?

If we didn't have Vectra and the Detect for Office 365, it would be very difficult to know if our Office 365 was compromised. We tried, in the past, to do it with a SIEM solution consuming Office 365 logs and it was really time-consuming. The Office 365 Detect solution has the exact same "mindset" as the Detect solution for networks. It's almost like we can deploy it in the fire-and-forget mode. You deploy the solution and everything is configured. You have all the relevant alerts out-of-the-box. If you want to, you could tweak, configure, contextualize, rewrite the parser (because some things might be out of date), and customize the solution. For a big company with a large team it might be feasible, but for small companies, it's an absolute showstopper. The Detect for Office 365 gives us a lot of visibility and I'm very pleased with the tool.

We use three services from Vectra: Cognito Detect, Detect for Office 365, and Cognito Recall, and we leverage all of these services within the SOC team to do proper assessments. We even use these tools to prepare the new use cases that we want to implement in our SIEM solution. Recall stores all the metadata brought up from Cognito Detect at a central point, data-lake style, with an Elastic stack and a Kibana interface available for everybody. Using this, we can try to see what the general patterns are.
Without this, I would not have been able to have my SOC analyst do the job. Creating a data lake for cyber security would be too expensive and too time-consuming to develop, deploy, and maintain. But with this solution, I have a lot of insight into my network.

Another very convenient thing about the Recall and Detect interfaces is that you can build use cases in Recall and have them trigger alerts in Detect. For example, we found ways to track down users who are trying to bypass proxies, which can be quite a mess in a network. We built a search within Recall and have it triggering alerts in Detect. As a result, these things can be managed.
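As an illustration of the kind of Recall hunt described above, the logic of a proxy-bypass search can be sketched roughly like this. This is a hypothetical sketch: the Zeek-style field names (`orig_ip`, `dest_ip`, `dest_port`) and the proxy address are assumptions for illustration, not Vectra's actual Recall schema.

```python
# Hypothetical sketch: flag workstations that bypass the web proxy by
# connecting directly to external hosts on web ports. Field names mimic
# Zeek-style network metadata; the real Recall schema may differ.
from ipaddress import ip_address

PROXY_IPS = {"10.0.0.80"}          # assumption: the sanctioned proxy
WEB_PORTS = {80, 443, 8080}

def is_internal(ip: str) -> bool:
    return ip_address(ip).is_private

def proxy_bypass_events(records):
    """Return events where an internal host talks to an external
    web port without going through the proxy."""
    return [
        r for r in records
        if is_internal(r["orig_ip"])
        and not is_internal(r["dest_ip"])
        and r["dest_port"] in WEB_PORTS
        and r["orig_ip"] not in PROXY_IPS
    ]

events = [
    {"orig_ip": "10.0.1.5", "dest_ip": "93.184.216.34", "dest_port": 443},
    {"orig_ip": "10.0.0.80", "dest_ip": "93.184.216.34", "dest_port": 443},
    {"orig_ip": "10.0.1.5", "dest_ip": "10.0.2.7", "dest_port": 443},
]
print(proxy_bypass_events(events))  # only the first event is a bypass
```

In practice this kind of filter would be expressed as a saved Kibana/Elasticsearch query in Recall rather than Python, but the matching logic is the same.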

It's so efficient that I'm thinking about removing my SIEM solution from our organization. Ours is a small organization and having a SIEM solution is really time-consuming. It needs regular attention to properly maintain it, to keep it up and running, consume all the logs, etc. And the value that it's bringing is currently pretty low. If I have to reduce costs, I will cut costs on my SIEM solution, not on Vectra.

The solution also provides visibility into behaviors across the full life cycle of an attack in our network, beyond just the internet gateway. It provides a lot of insight into how an attack might be progressing; there are multiple phases of an attack that can be detected. There is also a new feature where it can consume intelligence feeds from Vectra, and we can push our own threat-intelligence feeds, although those have to be tested. The behavioral model of the Detect solution also covers major malware and CryptoLockers. I know it's working; we tested some cases and they showed up properly in the tool. I'm quite reassured.

It triages threats and correlates them with compromised host devices. One of the convenient things about Detect is that it can be used by almost anybody. It's very clear and quite self-explanatory. It shows quadrants that state what is low-risk and what is high-risk, and it is able to automatically pinpoint where to look. Every time we have had an internal pen test campaign, the pen-test workstation has popped up right away in the high-risk quadrant, in a matter of seconds. To filter out false positives, you can also define rules that state, "Okay, this is standard behavior. This subnet or this workstation can do this type of thing," which means we can triage automatically. It also has some less obvious features, hidden within the interface, that help you define triage rules and lower the number of alerts. It looks at your whole alert landscape and says, "Okay, you have many alerts coming from these types of things; this group of workstations is using this type of service. Consider defining a new, automated triage rule to reduce the number of alerts."

To give you numbers: with my SIEM, I'm monitoring IDS alerts from my network, and everything is concentrated within my SIEM. Across my entire site, the IDS gives me about 5,000 more alerts than my Vectra solution. Of course, it depends on how each is configured and what types of alerts it is meant to detect, but Vectra is humanly manageable. You don't have to do time-consuming, expert fine-tuning of the solution to make the triage manageable. This is a really strong point with Vectra: you deploy it, everything is done automatically, and you have very few alerts.

Its ability to reduce false positives and help us focus on the highest-risk threats is quite amazing. I don't know how they built their behavioral and detection models, but they're very efficient. Each alert is scored with a probability and a criticality, and using this combination, it provides insights into alerts and the risks related to alerts or to workstations. For example, a workstation with a large number of low-criticality alerts might be pinpointed as a critical workstation to look at. In fact, in the previous pen test we launched, the testers knew the Vectra solution was deployed, so they tried some less obvious techniques, such as not crawling all the domain controllers. Because there were multiple small alerts, those workstations were still pinpointed in the high-risk quadrant. This capability is honestly quite amazing.
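The quadrant behavior described above can be sketched roughly as follows. This is a hypothetical illustration only: the 50-point cutoffs, the score scale, and the way per-alert scores are aggregated into a host score are my assumptions, not Vectra's proprietary model.

```python
# Hypothetical sketch of the threat/certainty quadrant logic.
# The 50-point cutoffs and the aggregation are assumptions for
# illustration; the vendor's actual scoring model is proprietary.
def quadrant(threat: int, certainty: int) -> str:
    high_threat = threat >= 50
    high_certainty = certainty >= 50
    if high_threat and high_certainty:
        return "critical"        # top-right: investigate immediately
    if high_threat:
        return "high"            # severe if real, but less certain
    if high_certainty:
        return "medium"          # confident detection, lower impact
    return "low"

# A host with many small, low-criticality alerts can still accumulate
# enough combined score to land in a high-risk quadrant.
alerts = [(20, 70), (25, 80), (30, 90)]   # (threat, certainty) pairs
host_threat = min(99, sum(t for t, _ in alerts))
host_certainty = max(c for _, c in alerts)
print(quadrant(host_threat, host_certainty))  # critical
```

The key point the sketch captures is that host risk is cumulative, which is why the pen testers' "quiet" approach of many small actions still surfaced in the high-risk quadrant.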

And, of course, it has reduced the security analyst workload in our organization on the one hand, but on the other it has increased it. It reduces the amount of attention analysts have to pay to things because they rely on the tool to do the job; we have confidence in its ability to detect and warn only on specific things of interest. But it also increases the workload because, as the tool is quite interesting to use, my guys tend to spend time in Recall checking and fixing things and trying to define new use cases. Previously, I had four analysts in my shop, every one of them monitoring everything happening on the network and in the company on a daily basis. Now, I have one analyst who is specialized in Vectra and uses it more than the others, focusing on tweaking the rules and trying to find new detections. It brings us new opportunities, in fact. But it has really reduced the workload around NDR.

In addition, it has helped move work from our Tier 2 to our Tier 1 analysts. Previously, with my old IDS, all the detection had to be cross-checked multiple times before we knew if it was something really dangerous or if it was a false positive or a misconfiguration. Now, all the intelligence steps are done by the tool. It does happen that we sometimes see a false positive within the tool, but one well-trained analyst can handle the tool. I would say about 20 to 30 percent of work has moved from our Tier 2 to our Tier 1 analysts, at a global level. If I focus on only the network detections, by changing all my IDS to Vectra, the number is something like more than 90 percent.

It has increased our security efficiency. If I wanted to have the same type of coverage without Vectra, I would need to almost double the size of my team. We are a small company and my team has five guys in our SOC for monitoring and Tier 1 and Tier 2.

It reduces the time it takes for us to respond to attacks. It's quite difficult to say by how much; it depends on the detections and threat types. Previously, our antivirus might only warn us about malicious files that had been sitting on a workstation for up to a year. Now, we can detect them within a few minutes, so the response time can be greatly enhanced. And the response time on a high-criticality incident would go from four hours to one hour.

What is most valuable?

The most valuable aspect of Cognito Detect, the main solution, is how it handles alerts. External IDSs create a lot of alerts, and when I say a lot of alerts, I really mean a lot of alerts. Vectra, on the other hand, contextualizes everything, reducing the number of alerts and pinpointing only the things of interest. This is a key feature for me. Because of this, a non-trained analyst can use it almost right away.

It's very efficient. It can correlate multiple sources of alerts and process them through specific modules. For example, it has some specific patterns to detect data exfiltration and it can pinpoint, in a single area, which stations have exfiltrated data, have gathered data, and from which server at which time frame and with which account. It indicates which server the data is sent to, which websites, and when. It's very effective at concentrating and consolidating all the information. If, at one point in time, multiple workstations are reaching some specific website and it seems to be suspicious, it can also create detection campaigns with all the linked assets. Within a single alert you can see all the things that are linked to the alert: the domains, the workstation involved, the IPs, the subnets, and whatever information you might need.

The key feature for me in Detect for Office 365 is that it concentrates all the information and detections at one point, the same as the network solution does. This is key because, while accessing data from Office 365 is possible using Microsoft's interfaces, they are not really user-friendly and are quite confusing to use. Detect for Office 365 aggregates all the info and surfaces only the interesting stuff.

We are still in the process of deploying the features of Detect for Office 365, but it already helps us see mailbox configurations. For example, the boss of the company had his mailbox reconfigured by an employee who added other people with the right to send emails on his behalf. It was a misconfiguration, and the solution was able to pinpoint it. Without it, we would never have seen that. The eDiscovery tracking can follow all the accesses, and it even helped us open an incident with Microsoft, because some discoveries made by an employee were not present in the eDiscovery console on the Office 365 protection portal. That was pinpointed by Vectra. When we asked the user, it turned out he was doing things without having the proper rights to do so. We were able to mitigate that risk.

It also correlates behaviors in our network and data centers with behaviors we see in our cloud environment. When we first deployed Vectra, I wanted to cross-check the behavioral detection. After cross-checking everything, I saw that everything was quite relevant. On the behavioral side, the Office 365 module can alert us if an employee is trying to authenticate using non-standard authentication methods, such as validating an SMS as a second factor or authenticating on the VPN instead of the standard way. The behavioral model is quite efficient and quite well deployed.

What needs improvement?

Vectra is still limited to packet management. It's only monitoring packet exchanges. While it can see a lot of things, it can't see everything, depending on where it's deployed. It has its limits and that's why I still have my SIEM.

I am in contact with the Vectra team, if not weekly then on a monthly basis, to propose improvements. For the time being, the main improvement I can see would be to integrate with more external solutions. Since Vectra provides an API, that should be quite easy to handle. For example, we're using an open-source ticketing system within our team and I want it handled properly by Vectra. We'll go forward on that with the API.
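The kind of API-driven glue described above might look like the following sketch. Caveats: the `/api/v2/detections` path and token header are modeled on Vectra's REST API but should be checked against the official API documentation, the hostname is a placeholder, and the ticket schema is entirely made up for illustration.

```python
# Hypothetical sketch of gluing Vectra detections to an open-source
# ticketing system via the REST API. Endpoint path and auth header
# mimic Vectra's v2 API (verify against the real docs); the ticket
# schema is invented for this example.
import json
import urllib.request

VECTRA_URL = "https://vectra.example.local"   # placeholder hostname
API_TOKEN = "REDACTED"

def fetch_detections_request(min_threat: int = 50) -> urllib.request.Request:
    """Build (but do not send) the request for high-threat detections."""
    url = f"{VECTRA_URL}/api/v2/detections?threat_gte={min_threat}&state=active"
    return urllib.request.Request(
        url, headers={"Authorization": f"Token {API_TOKEN}"}
    )

def detection_to_ticket(d: dict) -> dict:
    """Map a Vectra detection to our (made-up) ticket schema."""
    return {
        "title": f"[Vectra] {d['detection_type']} on {d['src_host']['name']}",
        "priority": "P1" if d["threat"] >= 80 else "P2",
        "body": json.dumps(d, indent=2),
    }

sample = {"detection_type": "Hidden HTTPS Tunnel",
          "threat": 85, "src_host": {"name": "WS-042"}}
print(detection_to_ticket(sample)["title"])
# [Vectra] Hidden HTTPS Tunnel on WS-042
```

A real integration would poll (or receive webhooks for) new detections, de-duplicate against open tickets, and post the payload to the ticketing system's own API.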

Another area for improvement I have pinpointed is that the Office 365 solution and the Detect solution cannot match up the same users. That means we currently have two "different worlds": the Office 365 world, which raises alerts based on users' email addresses, and the network world, which gives an Active Directory view. On one hand we see email addresses, and on the other we see things like logons to the domain controller. From time to time they do not match, and the tool cannot currently cross-check this info and consolidate everything. I would like to see detections related to one workstation and its user in one place: what he is using, what services he is using, and what he did with his Office 365 account and configuration. That would help.

Another major feature would be to have all logs pushed to Cognito Detect, and all of these logs should also be pushed to Recall. Currently, within Recall, I can't call up the Office 365 detections, and I would love to be able to do so.

The last point would be an automated IoT threat feed consumption by the tool.

For how long have I used the solution?

I have been using Vectra for two years.

What do I think about the stability of the solution?

The stability is absolutely flawless. The last time it was rebooted was almost two years ago. 

The only thing we have seen was some interruption in log feeding to the Recall instance, the SaaS solution. I had a quick call with a product manager in Europe and he was very keen to share information about this issue and willing to improve it.

So, within two years we have faced one stability incident. This incident lasted less than two hours and it was not on the monitoring solution but more on the data lake solution.

What do I think about the scalability of the solution?

The scalability is very good. From the financial perspective, we are not limited by the number of sensors. We can deploy as many virtual sensors as we want. The key factor is the IP addresses that are being monitored. In terms of technical scalability, we have one brain appliance, one very big sensor, and multiple virtual sensors, and I don't see any limits with this solution.

We are currently using all the things that it's possible to use in this solution. One thing I like with Vectra is that it's updated very frequently. Almost every month new features are popping up: new detections, new dashboards, new ways to handle things. That's quite good. I work with our SOC team so that they can use everything right away.

How are customer service and technical support?

The tech support is surprisingly good. We had questions, we faced some slight issues, and we always got very quick answers. Things are taken into account within a few minutes and answers usually come in less than two hours.

How was the initial setup?

To deploy Recall, which is the data lake in SaaS, or to deploy the Office 365 sensor, it was effortless. It was just a quick call and, within minutes, everything was set up.

The setup matched the way the solution behaves: it's a turnkey solution. You deploy it and everything works; the configuration steps are minimal. It's exactly the same for the SaaS solution: you deploy the tool and you just have to accept and do very basic configuration. For Office 365, you have to grant rights for the sensors to be able to consume API logs and so on. You grant the rights and everything is properly set up. It's exactly the same for Recall. It was a matter of minutes, not a matter of days and painful configuration.

In terms of maintenance, it is very easy and takes no time. It's self-maintaining, aside from checking that backups have completed properly. In terms of deployment, when we add a network segment we have to work a bit to determine where to deploy the new sensors, but the deployment model is quite easy. The Vectra console provides the OVA for deploying a virtual sensor. It can also automate sensor deployment if you link it with vCenter, which we have not done. But it's very easy and absolutely not time-consuming.

If I compare the deployment time to other solutions, it's way easier and way quicker. If I compare it to my standard IDS, in terms of deployment and coverage, it's twice or three times better.

What about the implementation team?

We were in contact with Vectra a lot at the beginning to plan the deployment, to check if everything was properly set up. But the solution is quite easy to set up. The next decisions we had were focused on how to enhance the solution: what seemed to be missing from the tool and what we needed for better efficiency.

The guys from Vectra were more providing guidance in terms of where the sensors needed to be deployed and that was about it.

We had a third-party integrator, Nomios, that provided the appliances, but they did not do anything aside from the delivery of appliances to our building. Our team took the hardware and racked it into the data center on its own. With just a basic PDF, we set up the tool within minutes. The integrator was quite unnecessary.

Nomios are nice guys, but we have deployed some other solutions with them and we were not so happy about the extra fees; we were not the only ones unhappy about that. We tried to deploy ForeScout products with Nomios and it was quite a mess. But they have helped us with other topics and have been quite efficient with those. So they are good at some things and not good at others.

What was our ROI?

It's ineffective to speak just about the cost of the solution, because all the solutions are costly. They are too costly if we are only looking at them from a cost perspective. But if I look at the value I can extract from every Euro that I spend on Vectra, and compare it to every Euro I spend on other solutions, the return on investment on Vectra is way better.

ROI is not measurable in my setup, but I can tell you that Vectra is far more cost-efficient than my other solution. The other solution is not expensive, but it's very time-consuming, and the hardware on which it runs is quite expensive. If I look at the global picture, Vectra is three or four times more cost-efficient than my other solution.

What's my experience with pricing, setup cost, and licensing?

The pricing is very good. It's less expensive than many of the tools out there.

Which other solutions did I evaluate?

I evaluated Darktrace but it wasn't so good. Vectra's capabilities in pinpointing things of interest are way better. With Darktrace, it is like they put a skin of Kibana on some standard IDS stuff.

Vectra enables us to answer investigative questions that other solutions are unable to address. It provides an explanation of why it has detected something, every time, and always provides insights about these detections. That's very helpful. Within the tool there are small question marks everywhere that you can click for a full explanation of everything that has been detected: why it has been detected and what the recommended course of action is. This approach is very helpful because I know that if I ask somebody new on our team to use Vectra, I don't have to spend days or months training him to handle the solution properly. It's guided everywhere. It's very easy to use.

What other advice do I have?

Do not be afraid to link Vectra to the domain controller, because doing so can bring a lot of value. It can provide a lot of information. It gets everything from the domain controller and that is very efficient.

You don't need any specialized skills to deploy or use Vectra. It's very intuitive and it's very efficient.

We are in the process of deploying the solution's Privileged Account Analytics for detecting issues with privileged accounts. We use specific accounts and want to know whether they have reached certain servers. With all these tools, it's quite easy to check whether a given access to a server is legitimate.

We don't use the Power Automate functionality in our company, but I was very convinced by their demonstration, and an analyst in my team played with it a bit to check whether or not it was working properly. These are mostly advanced cases for companies that are using Office 365 in a mature manner, which is not the case for our company at the moment.

In our company, less than 10 people are using the Detect solution, and five or six people are using Recall. But we are also extracting reports that are provided to 15 to 20 people.

Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
DW
Operations Manager at a healthcare company with 51-200 employees
Real User
Gives us a greater level of confidence that we will be able to detect threats more quickly

Pros and Cons

  • "One of the core features is that Vectra AI triages threats and correlates them with compromised host devices. From a visibility perspective, we can better track the threat across the network. Instead of us potentially finding one device that has been impacted without Vectra AI, it will give us the visibility of everywhere that threat went. Therefore, visibility has increased for us."
  • "I would like to see data processed onshore. Right now, the cloud components, like Office 365, must be processed on servers outside of Australia. I would like to see a future adoption of onshore processing."

What is our primary use case?

The key challenges are employee weaknesses, getting alerted as soon as possible to anything suspicious happening on our network and infrastructure, and policy enforcement.

The challenge that it tends to solve is visibility. We put a lot of controls in place for what we suspect will be a risk. However, something like Vectra gives us more visibility and confidence that we have a better understanding of what is actually happening, rather than just the things that we have already planned for.

How has it helped my organization?

We adopted an Office 365 add-in with the product that looks over the Office 365 suite and the data traversing that platform. In the future, we see this as a valuable asset, already in place, for better monitoring and detection of that type of information. We don't have an environment with many true positives, which is good; that has been consistent across the old and new tools. Our detections have usually been benign, or configuration-based rather than some sort of attack. Because it provides more context and raises things in a way that makes them more actionable, it helps you understand an anomaly on a deeper level: it is not just a log that is being forwarded on; it has context around it. Vectra AI also does a good job of providing the model information upfront about how its detections work, which is helpful.

We have an external SOC, and most of the data and detections from Vectra now flow to them. The final design is that they receive those alerts in parallel with us; we also receive them directly at times, depending on the criticality. What it does for us is improve the information and context they are getting upfront, which means fewer questions for our internal IT team about what these assets are and what they are doing. Because the analysts at the SOC have more information to work from, it has reduced wasted time and improved the path we take to a resolution, if there is a problem. It is more straightforward when you are getting quality information upfront about what you are actually investigating and why. Rather than just, "This particular activity was detected on the network. Go and work out everything about it," Vectra gives you some context and a little bit of direction when you see these things, e.g., this is potentially what could be causing it. This improves workflow, reduces wasted time, and makes everyone's life a little bit easier.

It has given us an increased level of confidence in our information security to have a tool like Vectra backing up some of the incidents that could take place, knowing we will detect them as quickly as possible and have them identified to us. Nowadays, with threats like ransomware and other information security attacks, we believe Vectra gives us a greater level of confidence that we will detect them more quickly. If they do occur, we can shut them down more quickly, preventing further risk or damage to our systems and infrastructure.

Vectra AI provides visibility into behaviors across the full lifecycle of an attack in our network, beyond just the Internet gateway. It spells that out quite clearly in each detection. It is not just in the detection. You can look at detections individually, which are essentially individual events. Also, when you are looking at an asset that has multiple detections attached to it, you can see where those sit in the lifecycle of an attack. This gives you an idea of how far Vectra thinks that it has progressed. Having the ability to know where you are in an attack helps you prioritize things a bit better.

The solution correlates behaviors in our enterprise network and data centers with behaviors that we see in our cloud environment. In terms of a specific example, it links cloud identities to on-prem identities. This is something that we have never really had before, because we didn't have that visibility in our cloud environment. Now, it improves the visibility that we have of our security operations as a whole. Rather than sometimes viewing these things in silos and objects as individual objects, we are now viewing them as what they are, which is people undertaking action in our network and the pathways that they are taking to get to certain resources. By combining the cloud and on-prem data, it gives us context and helps us to get a proper view of what is actually going on.

What is most valuable?

An attractive thing about Vectra AI is the AI component that it has over the top of the detections. It will run intelligence over detections coming across in our environment and contextualize them a bit and filter them before raising them as something that the IT team or SOC need to address. 

While the device itself is deployed on-prem, the hybrid nature of what it can monitor is important to us.

Its ability to group detections for us in an easier way to better identify and investigate is beneficial. It also provides detailed descriptions on the detection, which reduces our research time into what the incident is. 

There are also some beneficial features around integration with existing products, like EDR, Active Directory, etc., where we can get some hooks to use the Vectra product to isolate devices when threats are found.

On a scale of good to bad, Vectra AI is good at having the ability to reduce alerts by rolling up numerous alerts to create a single incident or campaign for investigation. My frame of reference is another product that we had beforehand, which wasn't very good at this side of things. Vectra AI has been a good improvement in this space. In our pretty short time with it so far, Vectra AI has done a lot to reduce the noise and combine multiple detections into more singular or aggregated alerts that we can then investigate with a bit more context. It has been very good for us.

There is a level of automation that takes place where we don't have to write as many rules or be very specific around filtering data. It starts to learn, adapt, and automate some of the information coming in. It works by exception, which is really good. Initially, you get a little bit more noise, but once it understands what is normal in your environment, some of the detections are based on whether an action or activity is more than usual. It will then raise it. Initially, you are getting everything because everything is more than nothing, but now we are not getting much of that anymore because the baseline has been raised for what it would expect to see on the network.
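The "alert only on more than usual" behavior described above can be sketched with a very simple baseline model. This is a minimal illustration under my own assumptions (a mean-plus-three-sigma threshold); the product's actual learning models are far more sophisticated and proprietary.

```python
# Minimal sketch of "alert only on more than usual": learn a baseline
# from history and flag values that exceed it by k standard deviations.
# The mean + 3-sigma rule is an assumption for illustration only.
from statistics import mean, stdev

def is_anomalous(history, value, k=3.0):
    """Flag a value that exceeds the learned baseline by k std devs."""
    if len(history) < 2:
        return True   # no baseline yet: everything is "more than nothing"
    mu, sigma = mean(history), stdev(history)
    return value > mu + k * max(sigma, 1e-9)

# Daily bytes (in MB) uploaded by one workstation
history = [120, 130, 110, 125, 118, 122]
print(is_anomalous(history, 128))    # False: within normal variation
print(is_anomalous(history, 5000))   # True: far above baseline
```

This also mirrors the reviewer's observation that the tool is noisier at first: with little or no history, everything exceeds the (empty) baseline, and the noise drops as the baseline fills in.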

We use the solution’s Privileged Account Analytics for detecting issues with privileged accounts. Privileged accounts are one of the biggest attack vectors that we can protect ourselves against. This is one of the few solutions that gives you true insight into where some of those privileged accounts are being used and when they are being used in an exceptional way.

We have found that Vectra AI captures network metadata at scale and enriches it with security information. We have seen that data enriched with integrations has been available and implemented. This comes back to the integration of our EDR solution. It is enriching its detection with existing products from our EDR suite, and probably some other integrations around AWS and Azure. In the future, we will see that improve even further. 

One of the core features is that Vectra AI triages threats and correlates them with compromised host devices. From a visibility perspective, we can better track the threat across the network. Instead of us potentially finding one device that has been impacted without Vectra AI, it will give us the visibility of everywhere that threat went. Therefore, visibility has increased for us.

What needs improvement?

I would like to see data processed onshore. Right now, the cloud components, like Office 365, must be processed on servers outside of Australia. I would like to see a future adoption of onshore processing. 

For how long have I used the solution?

I have been using it for two to three months.

What do I think about the stability of the solution?

We have only a few months of history with it, but the solution has been rock solid. I don't think it has gone down yet.

What do I think about the scalability of the solution?

We have the ability to add agents in Azure and AWS Cloud if we want, but we still haven't made a decision yet. We can also add more agents or sensors on-prem with the VMware virtual machine that they provide. It is scalable in that way, but at some point, you will hit the limit of the device.

One of the selling points for us was, down the track, we can just add additional agents to the box from other sources without the need for additional licensing costs.

Internal to the business, there are only two users. External to the business (the SOC), there could be a team of up to 10 people who are watching alerts day-to-day as well as using the product and logging into the product to better identify what those alerts are. Being the owners of the system, we use it when we are triggered by alerts about something significant.

We have a small IT team with fewer than 10 staff, where there are only one to two information security focused staff. We leverage an external SOC, i.e., a third-party.

Vectra AI has enabled us to do things now that we could not do before. We are able to give our SOC a tool that can both reduce their time and potentially allow them to do more on our network. Potentially, they will look into isolating the threat a lot quicker. They can use some of the integrations to turn off endpoints when a threat, which is significant, is detected.

How are customer service and technical support?

Through the different phases of deployment that we have gone through so far, we have been mainly assigned one technical resource to assist us with everything from beginning to end. He has been very knowledgeable and responsive. I can't say anything really negative about him. 

In terms of the ongoing support, we haven't had to leverage it much yet. We are now in the production phase, so we have been handed over to the main support desk, but I haven't had to use them yet.

Through deployment, the technical support was very responsive. I think every question that I asked, if it wasn't able to be answered, got passed onto someone who could then come back with something. I think they were pretty upfront as well when the solution couldn't do what we were after. We were told that they would go away and check, then they would come back with an answer about whether what we were asking for could be done. It has all been pretty good so far.

Which solution did I use previously and why did I switch?

We already had a solution like this one in place, which was another competitor's product, where the three-year contract for that product was up. We wanted to retain the level of detection that the product provided, but adapt to the way our network had changed over three years to adopt a more hybrid cloud technology. This device sits on our internal network watching for any threats to our internal network. It looks at our Office 365 threats as well.

We were previously using DarkTrace. We went back to the market because our network had matured over time. We wanted a product that could adapt to a hybrid working model and to the cloud technology that we were adopting. We also wanted something commercially competitive. After three years, DarkTrace came back asking for a 20% increase in their renewal fees, which wasn't acceptable.

One of the main things that Vectra has brought to the table for us, over what we were previously using, was the ability to combine our on-prem packet data that we were watching with the cloud data that we needed to start including. We have one system monitoring a hybrid environment, rather than having separate systems for separate environments. That is a key thing that Vectra does that others might not. It comes back to visibility with network monitoring.

For critical alerts, there has been a huge reduction compared to our previous solution, approximately 80% less. What our previous tool would mark as high, we wouldn't, and Vectra AI aligns with that. Vectra gave us some classifications of the threats, where our previous tool would just trigger high risks on a lot of things that, to us as a business, were not high risk. This is fundamentally because of the way that Vectra looks at detections compared to the way that our previous product did. In the previous product, every detection was its own entity. Whereas, with Vectra AI, it is all about combining the detections and getting a more complete picture. When you are looking for more than just one indicator of compromise, and you are not viewing these things in isolation, you start to realize that one indicator oftentimes doesn't mean critical. That is what Vectra does pretty well.
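The idea of combining detections per entity, rather than alerting on each indicator in isolation, can be sketched roughly as follows. This is a hypothetical illustration, not Vectra's actual algorithm; the detection names, severities, and weighting are invented.

```python
from collections import defaultdict

# Hypothetical detection records: (host, detection_type, base_severity 0-10).
detections = [
    ("host-a", "port_scan", 4),
    ("host-a", "suspicious_dns", 3),
    ("host-a", "data_staging", 5),
    ("host-b", "port_scan", 4),
]

def entity_severity(host_detections):
    """Score a host on the combination of indicators, not any single one.
    A lone indicator stays low; multiple distinct indicator types compound,
    because a chain of different behaviours is a stronger sign of an attack."""
    types = {d_type for d_type, _ in host_detections}
    top = max(sev for _, sev in host_detections)
    return top + 2 * (len(types) - 1)

# Group detections by host (the entity), then score each entity.
by_host = defaultdict(list)
for host, d_type, sev in detections:
    by_host[host].append((d_type, sev))

for host, ds in sorted(by_host.items()):
    score = entity_severity(ds)
    label = "critical" if score >= 8 else "review"
    print(host, score, label)
```

With this toy scoring, host-b's lone port scan stays below the critical threshold, while host-a's chain of three different behaviours crosses it, which mirrors the "one indicator doesn't mean critical" point above.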

How was the initial setup?

The initial setup was straightforward. We had the existing competitor already in place, and it was architected in a pretty similar way. Someone without a device like this one in place would need to spend a little bit of time on the setup. However, that is not so much about Vectra as it is with the type of device that it is. No matter which device does this sort of thing, when you put it in place, you will need to set certain things up.

We unboxed the device, plugged it in, and it pretty much turned on. We didn't have to do much at all. Then, there was the config after the fact, which was all supported.

The initial deployment really only took a couple of weeks to get it to the point that we were relatively comfortable with what we were receiving. In terms of getting the box plugged in, that took a day. Then, we finished the whole deployment phase, which was to fine-tune some of our detections and config. That has really been finalized in the last few weeks.

Vectra was extremely easy and quick to get into place. It was able to run alongside DarkTrace while we were evaluating it. Also, the implementation was not heavy in any way.

What about the implementation team?

We went through a proof of concept with Vectra. We had already identified our functional requirements for the product and entered into our proof of concept arrangement with Vectra to assess that they could achieve all the functional requirements that we had.

Vectra's deployment support was ready to assist further, if needed, with the deployment. In our case, it was very straightforward and very quick to implement. The support that they gave us week-to-week kept us moving, and they were able to implement it alongside us.

Deployment and maintenance need about a tenth of a staff member, and we mostly handle this ourselves. To be effective with the alerts that you are getting, you need security staff or people who are dedicated to this kind of thing. It is one thing to maintain and deploy the device.

It is another thing to action the information that the solution is giving you. We outsource that, so we don't do it in-house.

What was our ROI?

The capturing of network metadata at scale reduces the time of investigations when researching incidents. Instead of having to look over multiple tools, that data can be somewhat aggregated, from a Vectra perspective. The time to detect and understand a threat has been reduced.

Vectra AI has reduced the time it takes us to respond to attacks. The amount of time depends on the specific detection or circumstance around it. Some things have been raised previously, then we would have good knowledge about what that detection meant and how to investigate it effectively. Other times, a detection might be viewed as more novel, where there may not be the immediate skills in place to investigate it effectively, whether that is the security team or me. There is a whole lot of research that needs to go into this to make sure that you have the knowledge to actually verify whether a thing needs to be dealt with.

Vectra AI provides you this information very well, with more context around the detection. Someone with more general knowledge of these things can look at all the factors, rather than just the detection, to make a determination of how risky it is and how you might start investigating it. For example, with a detection on an account, if it was just that detection, then your initial response might be to lock that account out. However, if you get a bit more context about it and can see what other activities were happening on the same asset around the same time, then you might not lock that account. You might just reach out to that user and say, "Hey, what was this about?" because you are not so concerned about an immediate threat.

There is ongoing maturity from our security strategy, which this solution introduces. Down the track, we could look to extend this from an agent perspective to our cloud platforms in a more rigorous way than what has already been implemented. It gives us increased confidence over time as we do get these detections and alerts that are valid, so we are able to accurately resolve and stop them quite quickly. That is where we will see the bigger benefit. It will tick something and alert us as quickly as possible, then we can get to it and shut it down as quickly as possible. That means our security maturity is only strengthening, and we can respond and have visibility over events in the future.

The return on investment was passed over to our SOC. They were using our previous tool, DarkTrace, and now they are using Vectra. There will be a lot less in future reports because there will be a lot less that they are actually investigating.

What's my experience with pricing, setup cost, and licensing?

From a pricing perspective, they are very commercially competitive. From a licensing perspective, just be conscious that some of their future cloud solutions come with additional subscriptions. Also, if you're outside of the US, you will get charged freight for the device back to your country. I tried to negotiate getting rid of this, but unfortunately, it just wasn't something they would take off the table.

I would like to see them find ways to bring out new cloud functionality without introducing additional costs as separate subscriptions. They're about to bring out their AWS add-in, which has an additional cost. I would like to see them roll that into the product, as opposed to offering it as a separate subscription service. The more that happens, the further it moves away from the core functionality of the product if we are just buying a lot of separate cloud processing pieces doing different functions. Why is that not part of the core product?

They also have some additional threat hunting tools that I would like to at least consider leveraging, but the cost is just prohibitive.

Which other solutions did I evaluate?

After deploying this solution in our network, it began to add value to our security operations straightaway. We ran the Vectra product alongside DarkTrace and watched the alerts from both. Because I was sometimes getting exactly the same detections on both platforms, the Vectra information was actually assisting me in understanding what DarkTrace was doing and what it was warning me about. Straightaway, I started to get a better understanding of the alerts that we had been receiving for a long time.

It pays to evaluate the market regularly on products like this. The industry and platforms change very rapidly, and there is always new technology coming out. Three years ago, these guys probably wouldn't have been around or been looked at. Now, they are. Therefore, going out to the market and actually assessing our existing investment against what is out there today was very worthwhile.

For EDR, we are using CrowdStrike.

What other advice do I have?

The visibility of your threats will be easier to understand with Vectra AI. It provides you with a centralized dashboard of those threats and alerts. It gives you detailed descriptions for quicker research into what the identified threats and alerts are. It will integrate with existing products you may already be using. Overall, it reduces a lot of time spent on chasing false positives.

Right now, we are leveraging the on-prem appliance and the Office 365 Cloud component. We want to look to the future around potentially extending this to further parts of Office 365 and cloud environments, like Azure and AWS.

We haven't adopted Power Automate into our environment as of yet.

I would rate this solution as eight and a half out of 10.

Which deployment model are you using for this solution?

On-premises
Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
SR
Global Security Operations Manager at a manufacturing company with 5,001-10,000 employees
Real User
Top 20
Aggregates information on a host-by-host basis so you can look at individual detections and how they occur over time

Pros and Cons

  • "One of the most valuable features of the platform is its ability to provide you with aggregated risk scores based on impact and certainty of threats being detected. This is both applied to individual and host detections. This is important because it enables us to use this platform to prioritize the most likely imminent threats. So, it reduces alert fatigue follow ups for security operation center analysts. It also provides us with an ability to prioritize limited resources."
  • "You are always limited with visibility on the host due to the fact that it is a network-based tool. It gives you visibility on certain elements of the attack path, but it doesn't necessarily give you visibility on everything. Specifically, on the initial intrusion side of things, it doesn't necessarily see the initial compromise. It doesn't see stuff that goes on on the host, such as where scripts are run. Even though you are seeing traffic, it doesn't necessarily see the malicious payload. Therefore, it's very difficult for it to identify these types of host-driven complex attacks."

What is our primary use case?

We use Vectra with the assumption that our other defensive controls are not working. We rely on it to be able to detect anomalous activities on our network and trigger investigation activities. It's a line of detection assuming that a breach occurred or has been successful in some way. That's our primary use case.

We have some other use cases, like detecting anomalous network activity. For example, we are trying to refine our detection of suspicious internal behaviours because we are a development technology company. We have developers doing suspicious things all the time. Therefore, we use it to help us identify when they are not behaving correctly and improve our best practices.

We have it predominantly on-prem, which is a combination of physical and virtual sensors. We also have a very minor element on the cloud where we are trialing a couple of components that are not fully deployed. For the cloud deployment, we are using Azure.

We are on the latest version of Cognito.

How has it helped my organization?

We have a limited use of Vectra Privileged Account Analytics for detecting issues with privileged accounts at the moment. That is primarily because our identity management solution is going through a process of improving our privileged account management process, so we are getting a lot of false positives in that area. Once our privileged account management infrastructure is fully in place and live, then we will be taking on more privileged account detections and live SOC detections to investigate. However, at the moment, it has limited applicability.

We have a lot of technically capable people with privilege who are able to do things they should or should not be able to do, as they're not subject-matter experts when it comes to things like security. They may make a decision to implement or download a piece of software, implement a script, or do something that gets the job done for them. However, this opens us up to major security risk. These are the types of activities that the tool has been able to identify, enabling us to improve communication with those individuals or teams so they improve their business process to a more secure or best practice approach. This is a good example of how the solution has enabled us to identify when people are engaging in legitimate risky activities, and we're able to identify and engage with them to reduce risk within the network.

It has enabled our security analysts to have more time to look at other tools. We have many tools in place, and Vectra is just one of them. Their priority will always be to deal with intrusion attempt type of alerts, such as malware compromise or misuse of credentials. Vectra was able to simplify the process of starting a threat hunting or investigation activity on an anomaly. Previously, we weren't able to do this because the amount of alerts and volume of data were just too large. Within our security operations, they can now review large volumes of data that provide us with indicators of compromise or anomalous behaviour. 

By reducing false positives, we are able to take on more procedures and processes. We have about seven different tools providing alerts and reporting to the SOC at any one time. These range from network-based to host-based to internet-based alerts and detections. We are more capable to cover the whole spectrum of our tooling. Previously, we were only able to deal with a smaller subset due to the sheer workload. 

In some regards, I find that Vectra probably creates more investigative questions, e.g., we need to find answers from other solutions. So, it is raising more questions than it is specifically answering. However, without Vectra, we wouldn't know the questions to ask in the first place. We wouldn't know what anomalies were occurring on our network.

Vectra data provides us with an element of enrichment for other detections. For example, if we see a detection going onto a single host, we could then look at that activity in Vectra to see whether there are suspicious detections occurring. This would give us the high percentage of confidence that the compromise was more severe than a normal malware alert, e.g., destructive malware or commander control malware enabling someone to pivot horizontally across the network. Vectra provides us with that insight. This enables us to build up an enriched view quickly.

What is most valuable?

One of the most valuable features of the platform is its ability to provide you with aggregated risk scores based on impact and certainty of threats being detected. This is both applied to individual and host detections. This is important because it enables us to use this platform to prioritize the most likely imminent threats. So, it reduces alert fatigue follow ups for security operation center analysts. It also provides us with an ability to prioritize limited resources.

It aggregates information on a host-by-host basis so you can look at individual detections and how they occur over time. Then, you can look at the host scores too. One of the useful elements is that it aggregates scores together to give you a realistic view of the current risk that the host poses in your network. It also ages out detections over time. If a host has not been seen doing anything else that fits a suspicious detection, its risk score is reduced, and it falls off the quadrant where you are monitoring the critical hosts that you're trying to detect.
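Score aging of this kind can be sketched as a simple time decay. This is a hypothetical model, not Vectra's implementation; the half-life value and the per-detection scores are invented for illustration.

```python
HALF_LIFE_DAYS = 7.0  # invented: how fast a stale detection loses weight

def aged_score(score, age_days, half_life=HALF_LIFE_DAYS):
    """Halve a detection's contribution every `half_life` days since last seen."""
    return score * 0.5 ** (age_days / half_life)

def host_score(detections):
    """Aggregate per-detection (score, age_days) pairs into one host risk score."""
    return round(sum(aged_score(s, a) for s, a in detections), 1)

fresh = [(40, 0), (30, 1)]    # active host: recent detections dominate the score
quiet = [(40, 14), (30, 21)]  # same detections, but weeks old: the score decays

print(host_score(fresh))
print(host_score(quiet))  # much lower: the quiet host drifts off the watch quadrant
```

The point of the decay is exactly the behaviour described above: a host that stops doing suspicious things falls out of the critical quadrant on its own, without an analyst having to clear it.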

When you are analyzing and triaging detections and looking for detection patterns, you are able to create filters and triage detections out. Then, in the future, those types of business-as-usual or expected network behaviours don't create false positive triggers which would then impact risk scores.

Without the detection activities that come from Vectra, we wouldn't have been able to identify the true cause of an event's severity by relying on other tools. This would have slipped under the radar or taken a dedicated analyst days to look for it. Whereas, Vectra can aggregate the risk of multiple detections, and we are able to identify and find them within a couple of hours. 

What needs improvement?

You are always limited with visibility on the host due to the fact that it is a network-based tool. It gives you visibility on certain elements of the attack path, but it doesn't necessarily give you visibility on everything. Specifically, on the initial intrusion side of things, it doesn't necessarily see the initial compromise. It doesn't see stuff that goes on on the host, such as where scripts are run. Even though you are seeing traffic, it doesn't necessarily see the malicious payload. Therefore, it's very difficult for it to identify these types of host-driven complex attacks.

It only shows us a view of suspicious behaviours. It doesn't show us a view of key or regularly attacked company targets. This could be because we don't have one of the other tools or products that Vectra provides, such as Stream or Recall. 

My challenge with the detection alerting platform, Cognito, is that it tells us this host is behaving suspiciously and is targeting these other machines, but it won't give you a view of when a host is the target of multiple attacks. This is because you may have key assets, such as domain controllers or configuration management servers, which may get targeted. If you're a savvy attacker, you spread out your attack across multiple sources to try and hide it across the network. That is where the solution falls a bit short. It is trying to build that chain of relationships across detections and also trying to show detections from the perspective of a victim rather than the perspective of an attacker. I have expressed these concerns to Vectra, and they are currently logged as feature requests.

There is another feature in place which takes additional data feeds, such as DHCP IP allocation data. Its inputs are taken from Windows event logs, and that's the only format they have in place. They use that to provide a more accurate view of host identities. If you are only relying on IP addresses, and IP addresses change over time, it's sometimes very difficult to show a consistent view of a system's behaviour over time, as the IP can change from month to month. Unfortunately, because their DHCP data is taken from Windows host events and our DHCP data is taken from a Palo Alto system that generates the IP leasing, the formats are incompatible. Accepting different formats for that type of data is something else we have a feature request in for. At the moment, we don't have an accurate view, or confidence, that they are resolving when an IP address changes from host to host. So, we may be missing an accurate view of risk on some of those hosts.
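The normalization being requested here amounts to mapping each vendor's lease records into one common shape, then resolving which host held an IP at a given time. This is a hypothetical sketch; the field names for both log formats are invented, not the real Windows or Palo Alto schemas.

```python
from bisect import bisect_right

def from_windows_event(evt):
    # Invented Windows DHCP event field names, for illustration only.
    return (evt["IpAddress"], evt["HostName"], evt["TimeCreated"])

def from_palo_alto(row):
    # Invented Palo Alto lease log field names, for illustration only.
    return (row["ip"], row["hostname"], row["lease_start"])

class LeaseTable:
    """Resolve which host held an IP address at a given time, regardless of
    which format the lease record originally arrived in."""
    def __init__(self, leases):
        self.by_ip = {}
        for ip, host, start in sorted(leases, key=lambda l: l[2]):
            starts, hosts = self.by_ip.setdefault(ip, ([], []))
            starts.append(start)
            hosts.append(host)

    def host_at(self, ip, ts):
        starts, hosts = self.by_ip.get(ip, ([], []))
        i = bisect_right(starts, ts) - 1  # latest lease starting at or before ts
        return hosts[i] if i >= 0 else None

leases = [
    from_windows_event({"IpAddress": "10.0.0.5", "HostName": "lap-01", "TimeCreated": 100}),
    from_palo_alto({"ip": "10.0.0.5", "hostname": "lap-02", "lease_start": 500}),
]
table = LeaseTable(leases)
print(table.host_at("10.0.0.5", 200))  # host holding the IP before reassignment
print(table.host_at("10.0.0.5", 600))  # host holding it afterwards
```

In practice this needs one small parser per vendor format feeding a shared table, which is essentially the feature request described above.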

We also have the same problem with VPN and Citrix. E.g., if you're on the network and on IP address A, then you come in via the VPN, you're now on IP address B. Thus, if you're spreading your suspicious behaviour across both the internal network and VPN, then across Citrix, we don't get to join all that information up. They are seen as three different systems, so it causes a bit of a problem trying to correlate that type of event data.

For how long have I used the solution?

If you include the proof of concept, I have been using Vectra for three years.

What do I think about the stability of the solution?

There are no concerns regarding the stability. It seems to be very reliable. I've had one sensor in two and a half years become corrupt and need to be rebuilt. That's it.

Day-to-day upkeep takes half an FTE to one FTE. There is no maintenance really required on the platform itself. All we need to do is monitor for when a health alarm occurs (a sensor is not working), then we raise the relevant request with the teams to investigate. Maintaining the health of the platform requires a feed into our operations team so they can monitor when the health is degrading. General upkeep, like reviewing detection filters and triage filters, looking for patterns and anomalies, and creating new filters, needs a daily dedicated FTE.

What do I think about the scalability of the solution?

The scalability is brilliant. It is able to cope with virtual sensors. You can increase the hardware that supports the image and it will work with the high bandwidth of the data going through. There are no concerns in terms of the scalability.

It does capture network data at scale because we have it deployed at over 100 geographically split sites, with over 8,000 users on cloud. It's able to deal with the network traffic very easily, providing us with additional information. If we were just relying on things like firewalls and packet capture applications, we wouldn't get that enrichment of a security context on top of normal network traffic.

Mainly, there are five people dedicated to using the platform: Tier 2 security analysts and an operations director. However, that widens out to whomever we raise support requests to, like the Tier 3s. When a request is raised, we also enable the shared link so they can go into the platform and look at the data associated with the detection on that host. So, there is a wider group of people who use the solution to get information for specifically requested cases.

How are customer service and technical support?

The technical support is very good. They always respond within a short amount of time to provide expert information and have always been helpful in trying to work through problems to find a good solution.

Which solution did I use previously and why did I switch?

Previously, we had a general sensor solution taking logs. We didn't have an equivalent detection platform for our network, nor did we have a tool capable of providing us with competent intrusion detection capabilities post-breach. Our main SIEM logging platform was generating over 1,000 alerts a day. It was bloated and unusable when trying to identify events/anomalies that were occurring. Once we implemented Vectra, it was able to give us a refined view and tell us which things we needed to prioritize, so we were able to reduce our workload from 1,000 alerts a day down to 10.

How was the initial setup?

The initial setup was relatively straightforward. It was pretty much plug and play.

The initial pilot deployment took weeks, but that was because the scope kept on changing. However, the initial deployment only took hours. 

It has not helped us move work from our Tier 2 to Tier 1 analysts, but this is a fault in our implementation. The structure of our organization hasn't necessarily changed. We don't have Tier 1 security analysts. Therefore, we don't have the capacity or capability for them to deal with these types of detections. We have to leave our Vectra detection and activities with our Tier 2s.

We now have an implementation strategy. We have virtualized sensors in most locations rather than physical sensors. We only have physical sensors in areas where there is high-bandwidth traffic, such as key internal data centers. The virtual sensors for local offices are sufficient for the volume of traffic there. We only deploy in areas that are key risks, and we only monitor network zones which are of significant risk, so we don't monitor our guest WiFi subnet, nor do we monitor our development network subnets. Therefore, we keep our segregated networks and zoning structure consistent, so we are able to monitor only the priority areas.

What about the implementation team?

Vectra had an engineer come down. They plugged the device in and set it up. Since the firewall rules were already in place, it was working.

Assuming the firewall rules are already in place for the physical sensor, it needs one person plugging it in and putting it into a rack. If it is a virtual sensor, then it is just somebody who can deploy the virtual image onto the virtual infrastructure and switch it on. It takes two dedicated people to deploy. If you have a network team and a server team, then you will need one of each of those skill sets to be able to deploy the tool. It all depends on how your organization is structured.

What was our ROI?

It has increased our security efficiency because we can now do more with the tool. E.g., if we had a data analyst who was creating models and searching the data to identify the same types of the numbers/behaviours within Vectra, we would need at least two or three FTEs.

Vectra has reduced the time it takes us to respond to attacks. In 2019, we conducted a red team activity. The Vectra appliance was able to alert the red team on activity within three hours of the test starting. Prior tests to that, in real life or red team scenarios, we were potentially looking at days. However, we also tightened controls prior to that testing period. While Vectra has done an amazing job in reducing the time to respond, there are so many other things that we also have put in place which have contributed towards it.

Vectra has saved us weeks, if not months, in terms of the ability to identify a breach. Our process has been reduced down to hours, which is a potentially massive return on investment if we were compromised. From an insurance perspective, the return on investment is fantastic.

From an FTE perspective, while it reduces the number of events that we have to look up and the number of alerts, we now have very specific things where we need to ask questions. Therefore, it's creating more work which we weren't capable of doing. 

What's my experience with pricing, setup cost, and licensing?

At the time of purchase, we found the pricing acceptable. We had an urgency to get something in place because we had a minor breach that occurred between the tail end of 2016 and the beginning of 2017. This indicated that we lacked the ability to detect things on the network, hence why we moved quickly to get the tool in place. We found things like Bitcoin mining and botnets, which we closed down quickly. In that regard, it was worth the money. Three years later, the license is now due for renewal, so we will need to review it and see how competitive it is versus other solutions.

When we implemented the physical sensors, there were costs for support in terms of detection review sessions. We had a monthly session where an analyst would talk through the content, types of detections that they were seeing, etc. 

We have a desire to increase our use. However, it all comes down to budget. It's a very expensive tool that is very difficult to prove business support for. We have two separate networks: our corporate network and our PCI network, which is segregated due to payment processing. We don't have it deployed in the PCI network. It would be good to have it fully deployed there to provide us with additional monitoring and control, but the cost associated with their licensing model makes it prohibitively expensive to deploy.

Which other solutions did I evaluate?

We did review the marketplace and look around. For example, we looked online at Darktrace, but we didn't run a side-by-side comparison to see which one would work better.

Vectra was the only tool in which we did a physical pilot or proof of concept. Vectra stood out for its simplicity and the general confidence that I had with the people whom I was engaging and having conversations with at that time. I am very much a people person. If I talk to people and don't get the impression they know what they're talking about, then that will reduce my confidence in their product. E.g., our initial engagement with Darktrace wasn't good enough to provide confidence in their platform, and we had to move quickly.

What other advice do I have?

Make sure you have a dedicated resource committed to daily use of the tool. The selling point is that it frees up your time, reducing the amount you need to spend on it, so you don't have to commit resources. Then you find yourself two years into an implementation without committed resources who use it daily or full-time, which means you don't maintain things like the triage rules and filters. Even though the sales material says it makes things easier and reduces alert fatigue, it doesn't give you more time. You still need a dedicated resource to operate the tool, which we never committed at the beginning.

Having an established, mature team structure is really important as well. Making sure people are aware of their role and how it fits into the use of the tool is key. In our case, we were building a security operations center (SOC) at the same time that we took on the tool, so our analyst activities have evolved around the incorporation of the tool into the organization, and it's not necessarily a mature approach.

I would rate this solution as an eight (out of 10).

Which deployment model are you using for this solution?

On-premises
Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
TS
Senior Security Engineer at a manufacturing company with 10,001+ employees
Real User
Top 20
Easy to deploy and maintain, gives us ML, AI, and custom detection options for rule detection, and saves storage cost and time

Pros and Cons

  • "It does a reliable job of parsing out the logs of all the network traffic so that we can ingest them into our SIEM and utilize them for threat hunting and case investigations. It is pretty robust and reliable. The administration time that we spend maintaining it or troubleshooting it is very low. So, the labor hour overhead is probably our largest benefit from it. We spend 99% of our time in Vectra investigating cases, responding to incidents, or hunting, and only around 1% of our time is spent patching, troubleshooting, or doing anything else. That's our largest benefit from Vectra."
  • "They use a proprietary logging format that is probably 90% similar to Bro Logs. Their biggest area of improvement is finishing out the remaining 10%. That 10% might not be beneficial to their ML engine, but that's fine. The industry standard is Zeek Logs or Bro Logs, or Bro or Zeek, depending on how old you are. While they have 90% of those fields, they're still missing some fields. In very rare instances, some community rules do not have the fields that they need, and we had to modify community rules for our logs. So, their biggest area of improvement would be to just finish their matching of the Zeek standard."

What is our primary use case?

In terms of deployment, we have one brain and seven physical sensors. We're currently working on deploying a large number of virtual sensors, but those aren't done yet. We also have a SIEM and an EDR.

How has it helped my organization?

There are a large number of difficult-to-manage devices on a network. Traditional security vendors do a great job of making sure that workstations and servers are properly protected, secured, and observed, but they fall short when we're talking about odd peripherals, such as printers, scan guns, tablets, guest devices, and things like that. That's what Vectra helps us see. I can't count the number of employees' personal phones that just show up on the network, and they're infected because they're not managed by us and people do things with their phones. Now, we're able to actually see those devices hit our internal LAN instead of our guest networks, and we can properly move them over, whereas earlier, we were blind. Now, we have some reasonable assurance that our internal tablets, scan guns, and things like that are not performing abnormal network behavior. So, that's what we use Vectra for.

We've got a centralized data center with a large number of physical locations throughout the country, so our network is very distributed, very much like a campus. Vectra is really good at reducing the complication of deploying an NDR solution, and that really helps us because we have over 175 stores that we need to capture traffic from, as well as a number of sales offices, regular employee offices, and distribution centers spread across the country. Vectra makes it really easy. We just ship a sensor over, and it is up and running quickly once it gets there; shipping takes longer than configuration.

It provides visibility into behaviors across the full lifecycle of an attack in our network, beyond just the internet gateway. We tap client-to-server, server-to-server, and client- and server-to-internet traffic, and it does a good job; it doesn't have an issue with internal traffic. In terms of the full lifecycle of an attack, Vectra is not designed to interface with or inspect the host, so we're obviously not seeing host activity; that's what our EDR is for. Vectra does an okay job there. If we get a weird detection, we're also able to see a large number of other activities that happened just before and just after the attack and relate those to it.

Before we deployed Vectra, we were not monitoring network traffic. So, there was definitely a need and a gap, and Vectra has filled it. We have reliable network logs that are readable, and it does a good job of doing a default set of detections for us. We're very happy with the gap that it has filled.

It has overall reduced the time to respond to attacks, especially with the PCAP function on detections: when Vectra triggers a detection, it captures the packets for that session. So, we're able to get a lot of context for alerts that we couldn't get before, because we weren't doing full packet capture.

Because Vectra only PCAPs the session when it triggers a detection, we didn't have to deploy hundreds of terabytes of storage across our network, so we saved a lot of money there. We estimate $50,000 to $100,000 in storage cost savings because it only captures full packets for traffic that triggers detections.

In terms of time, it has saved hundreds of hours. I can't even explain how happy we are with the amount of time it has saved us. Imagine the amount of time it would have taken us to deploy to 175 stores plus dozens of distribution centers and dozens of remote offices. Even at just one hour per location for deployment, that is hundreds of hours. Vectra, being so easy to deploy, maintain, and administer, has saved us hundreds of hours on deployment and standing up the environment alone, and I'm not even counting the ongoing maintenance and administration.

What is most valuable?

It does a reliable job of parsing out the logs of all the network traffic so that we can ingest them into our SIEM and utilize them for threat hunting and case investigations. It is pretty robust and reliable. The administration time that we spend maintaining it or troubleshooting it is very low. So, the labor hour overhead is probably our largest benefit from it. We spend 99% of our time in Vectra investigating cases, responding to incidents, or hunting, and only around 1% of our time is spent patching, troubleshooting, or doing anything else. That's our largest benefit from Vectra.

We've got machine learning and AI detections, but we also have the traditional ability to create our own custom detections and rules that are important to us for compliance. When we were demoing other vendors, a large number of vendors let you make your own rules, but they don't provide their own rules and ML and AI rule engine, or they provide AI and ML, but they don't allow you to make your own rules. Vectra is very nice in that sense. We have detection rules that Vectra provides that are very common to the security industry, such as whenever there's a major event like the SolarWinds event. Those rules get built and deployed for us really quickly. We can manage our own, but then we also have the ML and the AI engine. We really like that. It is one of the few platforms that we've found to be supporting all three options.

What needs improvement?

They use a proprietary logging format that is probably 90% similar to Bro Logs. Their biggest area of improvement is finishing out the remaining 10%. That 10% might not be beneficial to their ML engine, but that's fine. The industry standard is Zeek Logs or Bro Logs, or Bro or Zeek, depending on how old you are. While they have 90% of those fields, they're still missing some fields. In very rare instances, some community rules do not have the fields that they need, and we had to modify community rules for our logs. So, their biggest area of improvement would be to just finish their matching of the Zeek standard.
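As a stopgap while the vendor's schema catches up, one practical approach is to normalize the vendor's field names to their Zeek equivalents before the logs reach the SIEM, so community rules match without modification. The sketch below is hypothetical: the left-hand field names are illustrative stand-ins, not Vectra's actual schema, and a real mapping would have to be built from your own deployment's metadata.

```python
import json

# Hypothetical mapping from vendor-specific metadata field names to the
# Zeek conn.log names that community detection rules expect. The keys here
# are illustrative only, not the actual proprietary schema.
FIELD_MAP = {
    "session_id": "uid",
    "source_ip": "id.orig_h",
    "source_port": "id.orig_p",
    "dest_ip": "id.resp_h",
    "dest_port": "id.resp_p",
    "protocol": "proto",
}

def to_zeek(record: dict) -> dict:
    """Rename known fields to their Zeek names; pass unknown fields through."""
    return {FIELD_MAP.get(k, k): v for k, v in record.items()}

raw = '{"session_id": "C1a2b3", "source_ip": "10.0.0.5", "dest_port": 443}'
print(json.dumps(to_zeek(json.loads(raw)), sort_keys=True))
```

A translation layer like this can live in the log pipeline (for example, as a SIEM ingest transform), so community rules written against Zeek field names keep working unmodified.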

They could provide distributed endpoint logging capability. We have a lot of remote workers nowadays in the day of the pandemic. If they're not connected to our VPN, then we're not capturing that traffic. So, the ability to do the traffic analysis for endpoints that are distributed would be cool. I have no idea how they would do that. I'm not aware of a single vendor that does that, but it would be cool if they could do that. To my knowledge, that's not really possible with the amount of compute power it would take on endpoints. It would be ridiculous. They'd have to really invent something new and novel that doesn't exist today in order to accomplish that. If they do, that would be great. Because I'm a customer already, I would use it. 

Cost-wise, they're not cheap. They were definitely the most expensive option. Their licensing model is antiquated. We have to pay for licensing based on four different things. They need to simplify their licensing down to just one thing.

For how long have I used the solution?

We have been using this solution for around 18 months.

What do I think about the stability of the solution?

I'm very happy with it. In the 18 months, I cannot recall any outage. We keep up on all the patching and maintenance, and there have been very few bugs. The SaaS product, Recall, has always been there when we use it. Our on-prem version has never broken. It seems very stable.

What do I think about the scalability of the solution?

It has got no problem with scaling. We picked Vectra because it was able to scale up to our size fairly easily without scaling up the deployment and administration overhead. So, it scales really well. It has no problem handling our volume of data.

How are customer service and technical support?

Their technical support is pretty good. They're very responsive. Nine out of 10 times, they understand my problem. They're not perfect, obviously, but at the end of the day, I got answers for the few issues for which I've had to use support. I can only think of one instance where it was painful, and that's why I say nine out of 10 instead of 10 out of 10. The guy just didn't understand what I was asking, and about seven emails later, it got triaged, and the next guy figured it out. Other than that, the first person I email at support is able to answer my question in the initial response or just one extra email.

Which solution did I use previously and why did I switch?

We did not use any similar solution. 

How was the initial setup?

We have a couple of SaaS-based products. We use Cognito, Recall, and Stream. Recall is their SaaS-based product where all the logs go into their hosted elastic search instance, which allows us to search and create custom rules and everything like that, and then we pull data from that environment into our on-prem environment. In terms of the deployment of the brain, that's all on-prem. All the sensors are on-prem obviously, but we do use Recall.

In terms of the effort involved in deployment considering that some of the pieces we use are SaaS-based, it was literally just a toggle switch and an API client and key in the interface, and then it was working. We had to wait for accounting to approve it, and it added a little bit more time to our deployment because of paperwork, but technically, it was pretty simple. We told them we wanted this, and by the time that we got our paperwork done, everything at their end was stood up and ready to go for us.

It does take two to three weeks for the brain to establish its machine learning baseline. The moment that two-to-three-week learning period was done, it was good. So, it started providing value three to four weeks after deployment.

What's my experience with pricing, setup cost, and licensing?

Their licensing model is antiquated. I'm not a fan of their licensing model. We have to pay for licensing based on four different things. You have to pay based on the number of unique IPs, the number of logs that we send through Recall and Stream, and the size of our environment. They need to simplify their licensing down to just one thing. It should be based on the amount of data, the number of devices, or something else, but there should be just one thing for everything. That's what they need to base their licensing on. 

Cost-wise, they're not cheap. They were definitely the most expensive option, but you get what you pay for. They're not the cheapest option. I know that their prices scared away a couple of people who have demoed it in the past. Once they got their quote, they were like, "Well, see you later. We can't do this." So, that is an area that they come up short against other people.

Which other solutions did I evaluate?

We did evaluate other options. We evaluated rolling Bro or Zeek on our own. We evaluated Security Onion. We also evaluated Corelight and almost picked them. We also investigated a couple of solutions that are significantly more involved than Vectra, just like full managed solutions, but we decided not to do that.

The main reason for choosing Vectra over all the other solutions was twofold. One was the deployment time and routine administration costs. Its deployment was very simple. The amount of time it would take to deploy and configure was very low. The time it would take to maintain the environment was significantly lower than the other solutions and on par with Corelight.

The second reason for picking it up is that it allowed us to create our own detection rules. They build rules for us when there are major events, as well as they have the ML and AI engine. This was the only solution that was easy and fast to deploy and maintain, and that was giving us all three options for rule detection. That's why we went with them. Some of the solutions provided all three options, but they were a pain to configure and maintain, and some of them were easy to deploy and maintain, but they didn't provide all three options.

What other advice do I have?

It is pretty straightforward. Plug it in and use aggregators in front of the sensors to combine multiple tap sources into a single sensor feed. The sensors can handle it; they de-duplicate everything. There is no need to purchase a sensor for every tap. Trunk all that traffic into an aggregator and have it come out as one feed into the sensor. The Vectra sensors have no issue carving all of that out; they're powerful enough. Vectra recommends this, so anyone purchasing Vectra is going to hear it from them. With Vectra, out of cheap, reliable, and fast, you're picking reliable and fast.

In terms of Vectra's ability to reduce alerts by rolling up numerous alerts to create a single incident or campaign for investigation, we do not generate a lot of incidents. We're pretty quick off the gun on detections. We're responding to detections before subsequent detections are detected and become an incident. We maybe get one incident a week, so I don't know if I can comment on that effectively.

We don't use privileged account analytics from Vectra for detecting issues with privileged accounts. In terms of its detection model for providing security around things like Power Automate or other anomalies at a deeper level, we don't use Power Automate, but we use their anomaly detection, and it is very interesting. It always gives us something interesting to look at, but more often than not, the anomalous activity turns out to be one of our IT admins. So, we learn a lot, and it brings odd things to our attention, but the source has usually been our IT admin.

In terms of Vectra helping our network's cybersecurity and risk-reduction efforts in the future, I'm hoping that one day, we can achieve even client-to-client inspection. Vectra should stay up with the times, and they shouldn't start coasting, which I don't see at all. They fill a good gap, and they do that well. We're just going to leave them filling that gap until the time comes where that is no longer a need, which I don't foresee. So, I don't know if they're going to do anything more than inspect network traffic and provide us an alerting engine on anomalous or malicious network traffic. That's their niche, so that's what they're going to do, probably just more of it. As we grow, we'll deploy more Vectra sensors to capture that extra traffic. I see them scaling very well.

I would rate this solution a solid eight out of 10. It loses a star for not adhering to Bro Logs in my book, and there is no perfect 10.

Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
AG
Sr. Specialist - Enterprise Security at a mining and metals company with 5,001-10,000 employees
Real User
Top 20
Scoring and correlation really help in focusing our security operations on critical issues

Pros and Cons

  • "The solution's ability to reduce alerts, by rolling up numerous alerts to create a single incident or campaign, helps in that it collapses all the events to a particular host, or a particular detection to a set of hosts. So it doesn't generate too many alerts. By and large, whatever alerts it generates are actionable, and actionable within the day."
  • "One thing which I have found where there could be improvement is with regard to the architecture, a little bit: how the brains and sensors function. It needs more flexibility with regard to the brain. If there were some flexibility in that regard, that would be helpful, because changing the mode of the brain is complex. In some cases, the change is permanent. You cannot revert it."

What is our primary use case?

Our main intention was to see what type of visibility, in terms of detections, Vectra could give us. 

We use it on both our manufacturing perimeter and at the internet perimeter. That's where we have placed the devices. We have placed it across four sites, two in the UAE and two outside the UAE.

How has it helped my organization?

What we have seen over the course of the three to four months it has been in place is that it has not found anything bad. That's good news because nothing specific has happened. But we have identified a lot of misconfigurations as well as some information on how applications are working, which was not known earlier. The misconfigurations that became known because of Vectra have been corrected.

It has given us the opportunity to understand some of the applications better than we had understood them before because some of the detections required triage and, while triaging, or in that investigation, we found how applications work. That is one of the main benefits.

We did a red team penetration exercise and almost all the pen activities were picked up by Vectra. That is another big benefit that we have seen through the deployment of the device.

Apart from the network traffic, a lot of the privileged accounts get monitored. It focuses on the service, the machine, and the account. We have seen many of the privileged accounts flagged with alerts whenever they're doing any activity which they do not normally do. We can see that it is the admin accounts or our support team accounts where the activity is happening. It is important because any privileged access which sees increased activity becomes a cause for suspicion. It's something that we need to be watchful for. It's a very useful feature because a privileged account can propagate more easily than an account that is not privileged.

These are all examples of the kind of information which is of great value, information that we didn't have earlier.

The detections, as well as the host ratings, allow us to focus in cases where we are pressured for time and need to do something immediately. We can focus on the critical and high hosts, or on the detections that have a very high score. If you do a good job in the rules and policy configuration, the alerts are not too numerous. A person can easily focus on all the alerts. But as of now we focus on the critical, high, and medium. The scoring and the correlation really help in focusing the security operations.

While I wouldn't say Vectra AI has reduced our security analyst's workload, it allows him to focus. It's a new tool and it's an additional tool. It's not like we implemented this tool and removed another one. It doesn't necessarily reduce his total time, but what it definitely does is it allows him to prioritize more quickly. Previously, he would be looking at all the other tools that we have. Here, it allows him to focus so things of serious concern can be targeted much faster and earlier. The existing tools remain. But Vectra is something to help give more visibility and focus. In that sense, it saves his time. Vectra is very good for automated threat-hunting, so you get to pick out things faster. All the other tools give you a volume of data and you have to do the threat-hunting manually.

Also, the technical expertise required to do the hunting part is much less now, because the tool does it for you. I wouldn't say that it has moved work from tier 2 to tier 1, but both of them can use their time and efforts for resolving problems rather than searching for actual threats. You cannot do away with tier 2 people, but they can have a more focused approach, and the tier 1 people can do less. It reduces the work involved in all their jobs.

In addition, it has definitely increased our security efficiency. The red team exercise is a very clear-cut example of how efficiency has been enhanced, because none of the other tools picked these things up. Vectra was the only tool that did.

It makes our workforce more efficient, and makes them target the actual threats, and prioritizes their efforts and attention. Whether that eventually leads to needing fewer people is a different question. Quantifying it into a manpower piece is probably more an HR issue. But improved efficiency is definitely what it provides. If I needed three or four tier 2 people before, I can manage with one or two now.

And Vectra has definitely reduced the time it takes us to respond to attacks. It's a significant reduction in time. In some cases, the key aspect is that, more than saving time, it detects things which other tools don't. It helps us find things before they actually cause damage. The other tools are more reactive. If your IPS and your signatures are getting hit, then you're already targeted. What Vectra achieves is that it alerts us at the initial phase, during the pre-damage phase. During the red team exercise we had, it alerted us at their initial recon phase, before they actually did anything. So more than saving time, it helps prevent an attack.

What is most valuable?

The solution's ability to reduce alerts, by rolling up numerous alerts to create a single incident or campaign, helps in that it collapses all the events to a particular host, or a particular detection to a set of hosts. So it doesn't generate too many alerts. By and large, whatever alerts it generates are actionable, and actionable within the day. With the triaging, things are improving more and more because, once we identify and investigate and determine that something is normal, or that it is a misconfiguration and we correct it, in either of these two instances, gradually the number of alerts is dropping. Recently, some new features have been introduced in the newer versions, like the Kerberos ticketing feature. That, obviously, has led to an initial spike in the number of tickets because that feature was not there. It was introduced less than a month back. Otherwise, the tickets have been decreasing, and almost all the tickets that it generates need investigation. It has very rarely been the situation that a ticket has been raised and we found that it was not unique information.

Also, we have seen a lot of detections that are not related to the internal network. Where we have gained extra value on the internet side is with data exfiltration and suspicious domain access.

The detections focus on the host, and the host's score depends on how many detections it triggers. We have seen that, without triaging, hosts running our probing tools pretty quickly land in the high-threat quadrant. Its intelligence comes from identifying vulnerable hosts, along with the triaging part. That's something that we have seen.

What needs improvement?

One thing which I have found where there could be improvement is with regard to the architecture, a little bit: how the brains and sensors function. It needs more flexibility with regard to the brain. If there were some flexibility in that regard, that would be helpful, because changing the mode of the brain is complex. In some cases, the change is permanent. You cannot revert it. I would like to see greater flexibility in doing HA without having to buy more boxes just to do it.

Another area they could, perhaps, look at is with OT (operational technology) specifically. Vectra is very specific to IT-related threats. It really doesn't have OT in its focus. We are using another tool for that, but maybe that is another area they can consider venturing into.

It's being used by my team of four or five people. Once we hand it over to operations, then the team size will increase significantly. It will grow to about 10 to 15 people.

For how long have I used the solution?

We have been using Vectra AI for about four months.

What do I think about the stability of the solution?

Stability-wise, we've not had any issues, although it has only been three or four months. We had some slight bugs in there, bugs that were related to the triaging and how we used the conditions. But stability-wise, we've had no problem. 

There were some minor software bugs, but nothing major: cosmetic and syntax-based issues while creating the triage conditions. Apart from that, no issues with stability.

What do I think about the scalability of the solution?

Currently, we are in the process of expanding it to two more remote sites: one in West Africa, in Guinea, and one in the U.S. Those are more recent deployments, in place for less than a month. We are in the process of creating the policies, triage rules, and investigations for those; that's ongoing. With those sites, the benefit realization is still pending because we just started loading traffic.

The scalability part is where the architecture comes in. That's one of the areas for improvement I would recommend. Unless you have dedicated brains doing nothing but brain functions, it doesn't scale well. If you have a brain in mixed mode, your scalability is limited. The brain's capacity also depends on its role: in mixed mode the capacity is less, in brain mode it is more, and in sensor mode it is different again. That makes scalability difficult, unless you go for two of the highest-capacity devices as dedicated brains and then keep adding sensors.

When I spoke to our internal success team at Vectra, they mentioned that this is something that they're planning to fix in the near future with an upgrade.

How are customer service and technical support?

Whenever we have raised issues, we have gotten timely responses. Getting support is fairly easy compared to some of the other technologies that we have. A simple email is sufficient to get attention from their support team. They have a remote access feature wherein we don't necessarily have to set up a WebEx. We simply enable remote access on the device, and the remote team can log in, take a look, and understand what the problem is.

How was the initial setup?

The problem was the architecture. Once we arrived at an architecture, it was simple. What takes time is to build the architecture plan because of the way the brains work. We had to agree on a design. Once you agree on the architecture, the implementation is pretty straightforward.

The initial architecture design took some time, a week or so. The implementation was done within a day.

Our implementation strategy was to have an HA setup for each site. We put two brains into mixed mode, but then we found out that if we put it in mixed mode, HA is not possible. So we set it up as a standby and we configured manual scripts to transfer the file from one brain to the other brain. That's how we are managing it now. If we want to go live on the standby brain, we just import the configuration and go live, if there is a failure.

It's a bit of a manual process for us. For it to be automated, I believe the brains cannot be in mixed mode; that was where we faced the initial problem with the architecture. So, we have two brains configured in mixed mode and a couple of sensors on the OT side talking to those brains. The sensors sit on the OT connectivity, at the active/standby firewalls, and this is repeated on the other site as well.
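Our manual failover arrangement is essentially a copy-and-verify job: export the active brain's configuration, stage it for the standby, and confirm it arrived intact. A minimal sketch of that idea is below; the file name and the checksum step are assumptions about a setup like ours, not anything Vectra ships, and in practice the staged file would then be moved to the standby brain over SSH or similar.

```python
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    """Hash a file so we can verify the copy landed intact."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def sync_config(export: Path, standby_dir: Path) -> Path:
    """Copy the latest brain config export to the standby staging area
    and confirm the checksum matches before trusting it for failover."""
    standby_dir.mkdir(parents=True, exist_ok=True)
    dest = standby_dir / export.name
    shutil.copy2(export, dest)
    if sha256(export) != sha256(dest):
        raise RuntimeError(f"checksum mismatch copying {export} -> {dest}")
    return dest
```

Run on a schedule, a script like this keeps the standby's import-ready configuration reasonably fresh, so going live on the standby is just an import away.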

Two or three people are enough for the deployment. They should have a sound understanding of the network and an idea of how the architecture and the applications function. One person from the architecture team and one person from the network or security team are sufficient to understand how to get maximum utilization from Vectra.

What was our ROI?

In terms of visibility and security improvement, we have definitely seen a return on our investment.

What's my experience with pricing, setup cost, and licensing?

We have a one-year subscription that covers support and everything. There is no other overhead.

Which other solutions did I evaluate?

We evaluated Darktrace, in addition to Vectra, each in a PoC. We chose Vectra because the things that Vectra picked up were far more useful, and necessary from an enterprise point of view. Darktrace was a bit noisier.

What other advice do I have?

One thing we have learned using Vectra is that anomaly detection is a critical component of security; a non-signature-based technology is very critical. It helps pick up things that other tools, which are more focused on active threats, will miss. That is one major lesson that we have picked up from Vectra.

My advice would be that you need to focus, because the licensing is based heavily on IPs and area of coverage, although predominantly IPs. You need to have a very clear idea of what areas you want to cover, and plan according to that. Full coverage, sometimes, may not be practical because, since it's a detection tool, covering everything for large organizations is complicated. Focus on critical areas first, and then expand later on.

Also, the architecture part needs to be discussed and finalized early on, because there is limited flexibility, depending on which model you choose.

The solution captures network metadata at scale and enriches it with security information, but the full realization of that will come with Cognito Stream, which we have yet to implement. Right now we are on Cognito Detect. Cognito Stream is something that we are working on implementing, hopefully within the next month or so. Once that comes online, the enriched metadata will have greater value. As of now, the value is there and it's inside Vectra, but we don't see that information — such as Kerberos tokens, or certificates, or what the encryption is — unless it leads to a detection. Only in that event do we currently see that information.

The Cognito Stream can feed into our SIEM and then we will have rich information about all the metadata which Vectra has in our data lake.
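Conceptually, consuming that stream amounts to reading Zeek-style metadata records and forwarding the fields the SIEM cares about into the data lake. The sketch below is a hypothetical filter, assuming newline-delimited JSON records; the actual field names depend on the Stream schema in a given deployment.

```python
import json
from typing import Iterable, Iterator

# Fields we might keep when forwarding enriched metadata to the SIEM.
# These names are illustrative, not the actual Cognito Stream schema.
KEEP = {"ts", "id.orig_h", "id.resp_h", "service", "ja3"}

def filter_stream(lines: Iterable[str]) -> Iterator[dict]:
    """Parse newline-delimited JSON metadata and project the fields of interest."""
    for line in lines:
        line = line.strip()
        if not line:
            continue  # skip blank keep-alive lines
        record = json.loads(line)
        yield {k: v for k, v in record.items() if k in KEEP}
```

A projection step like this keeps the data lake lean while still preserving the enriched attributes (certificates, encryption details, Kerberos artifacts) that are worth querying later.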

Which deployment model are you using for this solution?

On-premises
Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
JM
Manager, IT Security at an energy/utilities company with 201-500 employees
Real User
Top 20
Produces actionable data using automation, reducing our security team's workload

Pros and Cons

  • "Vectra produces actionable data using automation. That has helped us. It's less manpower now to look at incidents, which has definitely increased efficiency. Right now, in a lot of cases, our mean time to detection is within zero days. This tells me by the time something happened, and we were able to detect it, it was within the same day."
  • "I would like to see a bit more strategic metrics instead of technical data. Information that I could show to my executive management team or board would be valuable."

What is our primary use case?

The Detect platform that we have is on-prem. We have what's called "the brain", then we have sensors placed in different key/strategic areas in the organization. It is helping us do a lot of the monitoring. We also have some SaaS offerings from the Recall platform, which look at some of the metadata, etc. If we were doing things like incident response, it gives us a bit more granular type of information to query. However, the Cognito Detect platform is all on-prem.

We are using the latest version.

How has it helped my organization?

We had a gap where we didn't necessarily have a managed service, which we do today, but at the time we needed something that would help us detect malicious behavior and anomalies within the organization. We found that Vectra solved this. We were able to find issues within minutes or hours of them occurring, then we were able to action them rather quickly.

Some of the metrics that we try to show from an incident response perspective are the effectiveness of our controls, like mean time to detect and mean time to remediate. E.g., mean time to detect shows how quickly the organization detects an issue from when it first occurred, which then drives the remediation aspect as well. We take those numbers and correlate them back to how effective our tools are in our organization. Vectra has really helped in the sense that our mean time to detect is within zero days the majority of the time, meaning that the time from occurrence to detection is within the same day. This demonstrates how effective our controls are.
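The metric itself is simple arithmetic: for each incident, subtract the time the activity first occurred from the time it was detected, and look at the number of whole days elapsed. A minimal illustration (the incident timestamps below are made up for the example):

```python
from datetime import datetime
from statistics import mean

def detection_lag_days(occurred: str, detected: str) -> int:
    """Whole days between first occurrence and detection (ISO 8601 timestamps)."""
    delta = datetime.fromisoformat(detected) - datetime.fromisoformat(occurred)
    return delta.days

# Illustrative incidents, not real data. Detection within 24 hours of
# occurrence yields a lag of zero days.
incidents = [
    ("2021-09-01T08:15:00", "2021-09-01T13:40:00"),
    ("2021-09-03T22:05:00", "2021-09-04T01:10:00"),
]
lags = [detection_lag_days(occurred, detected) for occurred, detected in incidents]
print(mean(lags))
```

Averaged over a reporting period, this is the "mean time to detect within zero days" figure described above.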

When we get an alert, we're not wasting hours or so trying to determine, "I need to find more logs. I need to correlate the data." We're getting actionable data that we can act on right away. I have found value in that.

We can find things quickly that users shouldn't have been doing in the organization. Simple things, e.g., all of a sudden we have a user who is exfiltrating a lot of gigs of data. Why are they doing that? We found value there. My very small team does not have to waste cycles on investigating issues when we get a good sense of exactly what is occurring fairly quickly.

We have the solution’s Privileged Account Analytics. We have seen detection on certain cases, and it's been good. It actually is a good feature. We already have an organizational approach to privileged accounts, so we have seen a few detections on it but haven't necessarily seen abuse of privilege because of the way our organization handles privilege management. We are an organization where users don't run with privilege. Instead, everybody runs with their basic user account access. Only those that need it have privileges, like our IT administrators and a few others, and those people are very few and far between. 

If we are investigating something, we may be investigating user behavior. Using the metadata, we can find exactly, "What are all the sites he's going to? Is he exfiltrating any information? Internally, is he trying to pivot from asset to asset or within network elements?" Using that rich set of information, we can find pretty much anything we need now.

The solution provides visibility into behaviors across the full lifecycle of an attack in our network, beyond just the internet gateway. It augments what we are doing within the organization now. Being able to discover/find everything that is occurring within the kill chain helps us dive down to find the root of the problem. It's been beneficial to us because that's a gap we've always had in the past. While we may have gotten an alert in a certain area, trying to find exactly where it originated from or how it originated was difficult. Now, by utilizing the information that Vectra produces, we can find exactly what the root cause is, which helps with discovering exactly how it originated in the first place.

With a lot of the detections or things that are happening, I would not say they're necessarily malicious. Where I find it very valuable is that it gives us an opportunity to understand exactly how users are sometimes operating as well as how systems are operating. In a lot of cases, we have had to go back and reconfigure things because, "Oh, this was not done." We realized that maybe systems were not setup correctly. I really liked this aspect of the solution because we don't like false positives. We don't want Vectra to produce things that are just noise, which is something that it doesn't do. 

Vectra produces actionable data using automation. That has helped us. It's less manpower now to look at incidents, which has definitely increased efficiency. Right now, in a lot of cases, our mean time to detection is within zero days. This tells me that from the time something happened to the time we detected it was within the same day.

What is most valuable?

It gives you a risk score of everything that you just found. The quadrant approach is useful because if there are things in the lower-left quadrant, then we don't necessarily need to look at them immediately. However, if there's something with a high impact and high risk score, then we will want to start looking at that right away. We found this very valuable as part of our investigative analysis approach.
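The quadrant approach described above is essentially a two-axis triage rule: detections scoring high on both axes get looked at first. A toy sketch of that prioritization logic (the axis names and the threshold of 50 are assumptions for illustration, not Vectra's actual scoring):

```python
def quadrant(threat, certainty, threshold=50):
    """Bucket a detection by its threat and certainty scores (assumed 0-99 scales).

    The 50-point threshold is a hypothetical cut-off chosen for this sketch.
    """
    if threat >= threshold and certainty >= threshold:
        return "critical: investigate immediately"
    if threat >= threshold:
        return "high threat, low certainty: triage soon"
    if certainty >= threshold:
        return "low threat, high certainty: review"
    return "low: defer"  # lower-left quadrant, per the review's description

print(quadrant(80, 90))  # -> critical: investigate immediately
print(quadrant(10, 20))  # -> low: defer
```

The value for a small team is that the lower-left bucket can safely wait, which is exactly the workflow the reviewer describes.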

The solution’s ability to reduce alerts by rolling up numerous alerts to create a single campaign for investigation is very good. Once it starts adding multiple detections, those are correlated to a campaign. Then, all of a sudden, this will increase the risk score. I've found that approach helps us with understanding exactly what we need to prioritize. I find it very useful.

The amount of metadata that the Recall solution produces is enormous. What we can find from that metadata is exceptional. Once you get to know how to use the tool, it's much simpler and more intuitive to use when finding information than using a traditional SIEM, where you have to build SQL type commands in order to retrieve data. So, I do find it very valuable.
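The contrast drawn above, attribute-style filtering over enriched metadata versus building SQL-type queries in a SIEM, can be illustrated with a small sketch. The records and field names below are hypothetical stand-ins for enriched network metadata, not Vectra Recall's actual schema or API:

```python
# Hypothetical records mimicking enriched network metadata.
records = [
    {"src": "10.0.0.5", "dst": "203.0.113.7", "bytes_out": 5_000_000_000, "proto": "https"},
    {"src": "10.0.0.8", "dst": "10.0.0.9",    "bytes_out": 12_000,        "proto": "smb"},
]

# Attribute-style filter: "which internal hosts sent more than 1 GB outbound?"
big_senders = [r["src"] for r in records if r["bytes_out"] > 1_000_000_000]
print(big_senders)  # -> ['10.0.0.5']
```

The point is that an analyst filters on named attributes directly rather than composing joins across raw log tables, which is where the "simpler and more intuitive" claim comes from.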

What needs improvement?

I would like to see a bit more strategic metrics instead of technical data. Information that I could show to my executive management team or board would be valuable. 

I would like to see some improvements on the integration aspects of it. They are getting better in this. However, most organizations have a plethora of cybersecurity solutions that they run, and I think that there is a bit more that could be done on the integration side. 

For how long have I used the solution?

About four years.

What do I think about the stability of the solution?

The stability is good. I don't think we've ever had an issue with it at all. I don't think I've ever seen it misbehave, crash, or anything like that.

It is continuously updated. Whenever they release a new patch or updates, they push it to the brain (the centralized management).

What do I think about the scalability of the solution?

We have never seen an issue from a scaling perspective. It is not an issue for us.

We have a team of less than four people. We don't really have a Tier 1 or Tier 2. We just have people working in cyber.

There are areas where we would like to increase our capabilities. We have 100 percent visibility for anything leaving the organization. There are some areas within the organization where we would like to monitor some of the internal workings. One of the places where we are looking to expand is into our OT segment. We do have a path for where we would like to see this go.

How are customer service and technical support?

They are very competent and good. They are always able to solve problems.

Which solution did I use previously and why did I switch?

A few years ago when we were looking at this, we had a gap in the organization. We didn't have a managed service offering. We had an on-prem SIEM, but we didn't have a large team, so we didn't have resources fully dedicated to looking for threats and correlating them with other event logs to see exactly what was occurring. The reason that we didn't have a managed service previously was cost. Therefore, we looked for alternative ways to solve the gap, lower the resource count, and be able to automate and integrate within our enterprise solutions.

How was the initial setup?

It was pretty straightforward. You can plug the appliances in, whether it is into a switch, router, or some other demarc point from a SPAN port, then you let it learn. That is it. There's nothing really you have to do.

Our deployment took days at most. Once you configure it, you just let the system learn. Usually, within a week, it starts to detect things. For it to be effective, it needs to know what the known baseline is.

You plug it in, let it learn, and it's up and running.

What was our ROI?

We saw ROI within the first six months due to the reduced impact on our staff, and we have had it deployed for years.

Vectra has absolutely reduced security analyst workload in our organization. This was the real thing that we were trying to find: How can we do this? With a small team, it is very hard. We have a small team with a large stack of solutions. Therefore, we were looking for the best way to reduce the amount of manual effort that's required of an individual. We've found Vectra has significantly reduced the workload, by probably 200 percent, for our staff.

Which other solutions did I evaluate?

We looked at NextGen traffic analysis types of solutions, like Darktrace. Then, we looked at Vectra. I found Vectra was a bit more intuitive. I think both products had some really good offerings. What really helped us make a decision was that we were trying to find things that help us produce actionable items. I liked Vectra because the one thing it set out to do was show you exactly what is happening in the kill chain. The whole premise behind it was, "These are things that are actually occurring in your network, and they're following a specific pattern." I really liked it because, in my view, it was very actionable and automated.

I don't want to have to spend cycles on unnecessary things. One thing I found with Darktrace was that it produces a lot of good things, but it's too much in certain cases. Whereas, I like the way Vectra tells you exactly the things that are happening right now in your network, then groups them based on exactly what the type is, providing you a risk score.

Also, it did seem like it was like a resource built into a box with AI capabilities. I found that the amount of effort we have to spend on analysis from it is a low cost to us. Vectra just fit in well with my team mandate.

I found Darktrace was a bit noisier than Vectra. Sometimes, when you deal with products like this, the noise is time and effort that you may not necessarily have.

Once we started to do the PoCs, we ran Vectra in certain use cases with the sense of, "Okay, let us know exactly what's going on within the network." What we found in a lot of cases, and these weren't just cybersecurity incidents, was that Vectra gave us a good sense of how a lot of our solutions were operating. We ended up finding out, "This is exactly what this solution may be doing. Maybe there is a misconfiguration here or there."

What other advice do I have?

There was no complexity with Vectra; it is very simplistic. However, for the tool to be effective, you want to make sure that you place your sensors in appropriate places. Other than that, you let the tool run and do its thing. There's really no overhead.

I would probably rate it as a nine or 10 (out of 10). We have been extremely happy with the solution. It's been one of the best solutions we have in our enterprise. I would put it at the top of the list.

Which deployment model are you using for this solution?

On-premises
Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
RM
Cyber Security Analyst at a financial services firm with 1,001-5,000 employees
Real User
Top 20
Reduces the times between an alert and a ticket coming up

Pros and Cons

  • "It is doing some artificial intelligence. If it sees a server doing a lot of things, then it will assume that is normal. So, it is looking for anomalous behavior, things that are out of context which helps us reduce time. Therefore, we don't have to look in all the logs. We just wait for Vectra to say, "This one is behaving strange," then we can investigate that part."
  • "We would like to see more information with the syslogs. The syslogs that they send to our SIEM are a bit short compared to what you can see. It would be helpful if they send us more data that we can incorporate into our SIEM, then can correlate with other events."

What is our primary use case?

The original use case was because we had some legacy stuff that doesn't do encryption at rest. Compliancy-wise, we had to put in some additional mitigating actions to protect it. That was the start of it. Then, we extended it to check other devices/servers within our network as well.

We are on the latest version.

How has it helped my organization?

It is doing some artificial intelligence. If it sees a server doing a lot of things, then it will assume that is normal. So, it is looking for anomalous behavior, things that are out of context which helps us reduce time. Therefore, we don't have to look in all the logs. We just wait for Vectra to say, "This one is behaving strange," then we can investigate that part.

We have implemented it fully now. We have done some training and filtering on it. Now, every alert that we see means that we need to investigate. It sees roughly 300 events a day. The majority are normal behavior for our company. So, there are about 10 to 15 events a day that we need to investigate.

The solution triages threats and correlates them with compromised host devices. It looks at a certain IP address, and if you're doing something strange, then it will give us an alert. E.g., normally John Doe is logged into it for four days, going to server XYZ. If all of a sudden, it's in a different timescale, going to server B, then it will send us an alert.
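The behavior described above, learning which user/server pairs are normal and alerting on deviations, amounts to baselining. A toy sketch of that idea (this is a simplification for illustration, not Vectra's actual AI models):

```python
from collections import defaultdict

class Baseline:
    """Learn which (user, server) pairs are normal; flag unseen pairs as anomalous."""

    def __init__(self):
        self.seen = defaultdict(set)

    def learn(self, user, server):
        # Record the pair during the learning period.
        self.seen[user].add(server)

    def is_anomalous(self, user, server):
        # Anything outside the learned baseline is worth an alert.
        return server not in self.seen[user]

b = Baseline()
b.learn("jdoe", "server-xyz")
print(b.is_anomalous("jdoe", "server-xyz"))  # -> False
print(b.is_anomalous("jdoe", "server-b"))    # -> True
```

A real system also weighs time-of-day and traffic volume, which is why the review mentions "a different timescale" triggering alerts as well.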

We have privileged accounts. They have specific names, and if I see those names, then I investigate a bit more thoroughly. That's our policy. I don't know whether Vectra does anything different with them.

The solution gives us more tickets. If we did not have Vectra, we wouldn't have those tickets. So, it's actually increasing them. However, it is improving our security with a minimum amount of work. That's the whole purpose of the device. We have 10 to 15 events that we need to look into a day, and that is doable.

The solution creates more work for us, but it is work that we are supposed to do. We need more FTEs because we need more security.

What is most valuable?

We mainly use it for the detection types, checking dark IPs or command-and-control traffic.

We bought Recall so we can have more information. Recall is an addition onto Vectra. We haven't enabled Recall yet, but we will. So, if there is an incident, we can investigate it a bit further with Vectra devices before going into other tools and servers. This gives us the metadata for network traffic. So, if we have a detection, we can check with Recall what other traffic we are seeing from that device, if there is anything else. It's mainly a quick and dirty way of looking at it and getting some extra information to see whether it's malicious.

We found that the solution captures network metadata at scale and enriches it with security information. This is one of the reasons why we added Recall: the alert gives us information on where we need to look, then we can investigate a bit further. For example, if a certain device is sending data to a command-and-control server, we can investigate whether that is really happening or just a false alarm with the metadata in Recall. It makes it easier to find out.

What needs improvement?

We would like to see more information with the syslogs. The syslogs that they send to our SIEM are a bit short compared to what you can see. It would be helpful if they send us more data that we can incorporate into our SIEM, then can correlate with other events. We have mentioned this to Vectra.
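When a short syslog event does arrive at the SIEM, the usual approach is to parse its key=value fields so they can be correlated with other sources. A sketch of that parsing step (the line format below is a hypothetical example, not Vectra's actual syslog output):

```python
import re

# Hypothetical short syslog line of the kind a detection appliance might forward to a SIEM.
line = '<13>Oct 12 10:03:01 vectra-brain detection: host=10.0.0.5 type="Hidden HTTPS Tunnel" score=72'

# Extract key=value pairs, allowing quoted values with spaces.
fields = dict(re.findall(r'(\w+)=("[^"]*"|\S+)', line))
fields = {k: v.strip('"') for k, v in fields.items()}

print(fields["host"], fields["type"], fields["score"])  # -> 10.0.0.5 Hidden HTTPS Tunnel 72
```

The reviewer's complaint is that lines like this carry less detail than the appliance's own UI shows, so the SIEM-side correlation has less to work with.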

It does some things that I find strange, which might be the artificial intelligence. E.g., sometimes you have a name for a device, then it creates another: it detects the same device under another name, and that's strange behavior. This is one of the things that we have open with Vectra support at the moment, because the solution is seeing the device twice.

For how long have I used the solution?

We started the pilot roughly a year ago. So, we started small with a pilot on part of the systems, alongside two other vendors. Afterwards, we decided to buy it.

Now, it's almost in production. It's still a project in the end phase, as we are still implementing it. But, most of it has been running for a year.

What do I think about the stability of the solution?

So far, the stability has been good. There are no issues. It's never been down. It has been updating automatically on a regular basis and there are no issues with that where it has stopped working.

One person will be responsible for the deployment, maintenance, and physical upkeep; a person from the service delivery team will keep the device up and running. The security analysts (my team) deal with the alerts and filtering.

What do I think about the scalability of the solution?

The part that we designed is not really scalable. They have options, and there is some room for improvement. If we need to scale up, which we have no intention of doing, then the physical devices need to be swapped over for a bigger one. Other than that, we have some leeway. This came up in the design with, "What are your requirements?" and those requirements have been met, so that's fine. They will probably be met for the foreseeable future.

At the moment, we don't have Tier 1 and Tier 2. Instead, we have a small team who does everything. I am mostly using it. There will be three security analysts. Then, we have a number of information security officers (ISOs) who will have a read-only role, where they can see alerts to keep an eye on them, if they want, and be able to view the logging and see if they need more information. But, there are three people who will be working with Vectra alerts.

How are customer service and technical support?

We are in contact with the Vectra service desk. If you send them ideas, they talk about them and see if they can incorporate them.

Which solution did I use previously and why did I switch?

We decided that we wanted to have an alert within 30 minutes, which is doable with this solution. It fulfills our needs. However, we didn't have this before, so it has increased our time, but for things we need to do.

How was the initial setup?

The initial setup is relatively straightforward. They have security at a high level. There are a lot of logins with passwords, and very long passwords, which made the setup take a bit longer. However, the implementation is relatively easy compared to other devices.

We made a design. That's what we implemented.

What about the implementation team?

Initially, it was set up in conjunction with Vectra. When we put it into production, the majority was done by me, then checked by a Vectra engineer. If I had issues, I just contacted Vectra support and they guided me through the rest of it.

The Vectra team is nice and helpful. The service desk is fast. They know what they are doing, so I have no complaints on that part. We have a customer service person who knows about our environment and can ask in-depth questions. He came over as well for the implementation to check it, and that was fine. The work was well done.

What was our ROI?

The solution has reduced the time it takes us to respond to attacks. It sends an email to our SIEM solution. From that SIEM solution, we get emails and tickets. Therefore, the time between an alert coming up and a ticket is reduced. This is for tickets that we monitor regularly. Within 15 to 20 minutes, it gives us an alert for the things that we want. Thus, it has greatly reduced our measurable baseline.

As for the return on investment, we have tested it: sometimes we have auditors who do pen tests, and we see them. That's the goal. It seems to be working. We haven't found any actual hackers yet, so I'm not completely 100 percent certain. However, we have caught auditors doing pen tests, which is essentially the same thing.

What's my experience with pricing, setup cost, and licensing?

The license is based on the concurrent IP addresses that it's investigating. We have 9,800 to 10,000 IP addresses. 

There are additional features that can be purchased in addition to the standard licensing fee, such as Cognito Recall and Stream. We have purchased these, but have not implemented them yet. They are part of the licensing agreement.

Which other solutions did I evaluate?

We investigated Darktrace, Vectra, and Cisco Stealthwatch.

Darktrace and Vectra plus Recall were similar in my opinion. Darktrace was a bit more expensive and complex. Vectra has a very nice, clean web GUI. It is easier to understand and cheaper, which is one of the main reasons why we chose Vectra over Darktrace.

Darktrace and Vectra are very different, but eventually for what we wanted it to do, they almost did the same thing. Because Darktrace was a bit more expensive, it was a financial decision in the end.

I did the comparison between Darktrace and Vectra. They did almost the same thing. Sometimes, there were differences: things that Darktrace detected and Vectra didn't. For the majority, we didn't find any actual hackers, so it's all false positives, eventually. Both of them are very similar. The big thing is the hacker activity. They both detected it in the same way. But, in the details, they were different.

The options for Stealthwatch were a bit limited in our opinion for what we wanted it to do. Stealthwatch is network data, and that's it.

What other advice do I have?

Start small and simple. Work with the Vectra support team.

The solution’s ability to reduce false positives and help us focus on the highest-risk threats is the tricky part because we are still doing the filtering. The things it sees are out of the ordinary and anomalous. In our company, we have a lot of anomalous behavior, so it's not the tool. Vectra is doing what it's supposed to do, but we need to figure out whether that anomalous behavior is normal for our company. 

The majority of the findings are misconfigurations of servers and applications. That's the majority of things that I'm investigating at the moment. These are not security risks, but they need to be addressed. We have more of those than I expected, which is good, but not part of my job. While it's good that Vectra detects misconfigurations, they are not our primary goal.

The solution is an eight (out of 10). 

We don't investigate our cloud at the moment.

Which deployment model are you using for this solution?

On-premises
Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.