What is our primary use case?
I run an incident response, digital forensics team for OpenText. We do investigations into cyber breaches, insider threats, network exploitation, etc. We leverage Devo as a central repository to bring in customer logging in a multi-tenant environment to conduct analysis and investigations.
We have a continuous monitoring customer for whom we stream all of their logging in on sort of a traditional Devo setup. We build out the Activeboards, dashboards, and everything else. The customer has the ability to review it, but we review it as well, acting as a managed security service offering for them.
We use Devo in traditional ways and in some home-grown ways.
For example, if there is a current incident response, I need to see what's going on in their environment. I'll stream logs via syslog into Devo and review those. For the different tools that we use to do analytics and forensics, we'll parse their output and send that up to Devo as well. We can correlate findings across multiple forensic tools against log traffic, network traffic, and cloud traffic. We can do it all with Devo.
It's all public cloud, multi-factor authentication, and multi-tenant. We have multiple tenants built in as different customers, labs, etc. Devo has us set up in their cloud, and we leverage their instance.
We are using their latest version.
How has it helped my organization?
Being able to build and modify dashboards on the fly with Activeboards streamlines my analysts' time because they aren't working across spreadsheets or five different tools to try to build a timeline themselves. They can just ingest it all and build a timeline out across all the logging and all the different information sources in one dashboard. So, it's a huge time saver. It also brings the accuracy of being able to look at all those data sources in one view. Log analysis that would take 40 hours, we can probably get through in about five to eight hours using Devo.
When you deal with logs, the log fields from different vendors often have partial data. For instance, an endpoint log may have the domain user name as Jay Grant, whereas the network log has it as example.com/jaygrant. Because of the way that you can manipulate the log sources and do the search, you can search for Jay Grant across all these log sources, even though the fields are a bit different. That is something very difficult to do in a one-off scenario, whereas you are able to do it with Devo. Then, once you have things built out on the Activeboards, you can build out alerts and automation processes, where you can right-click and execute other tools based on data sets that you have found.
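To illustrate the underlying idea of correlating one user across differently formatted log fields, here is a minimal Python sketch. This is not Devo's query language; the field names, log records, and normalization rules are assumptions for illustration only:

```python
import re

def normalize_user(value: str) -> str:
    """Reduce different vendor formats for the same account
    (e.g. 'Jay Grant' vs 'example.com/jaygrant') to one
    canonical key that can be used for correlation."""
    # Strip a leading domain prefix such as 'example.com/' or 'EXAMPLE\'
    value = re.split(r"[/\\]", value)[-1]
    # Drop whitespace and case so 'Jay Grant' matches 'jaygrant'
    return re.sub(r"\s+", "", value).lower()

def correlate(events, user):
    """Return all events whose user field matches after normalization."""
    key = normalize_user(user)
    return [e for e in events if normalize_user(e["user"]) == key]

# Two hypothetical log records with differently formatted user fields
endpoint_log = {"source": "endpoint", "user": "Jay Grant"}
network_log = {"source": "network", "user": "example.com/jaygrant"}

hits = correlate([endpoint_log, network_log], "Jay Grant")
```

Both records match the one search term, which is the behavior the review describes when searching across log sources with differing field formats.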
As far as reporting to our customers, it gives us time back where traditionally we would have to sit and write out written reports and take snapshots to illustrate things to our customer. It's easy, so I can give role-based access to my customers directly to the data. I can render it to them, visualizing it in the way that we want them to see it, and they're able to export that on their own. It largely takes away the need for my analysts to write reports like they have in the past. We can let the customer log in and see rendered results in real time, without stopping to write reports and then picking up analysis again. It has easily saved us 60 percent of our time from a log analysis, correlation, and timeline perspective.
I can bring cloud, on-prem, static security tool, and static forensic tool data into it. This has greatly affected our visibility into key business functions. It's the cross-correlation of real-time data coming in with investigative findings: being able to overlay them and see, in real time, what's going on based on the investigative findings that we've made.
What is most valuable?
The Activeboards are the most valuable feature. Given multiple different types of unstructured and structured data, we can then build Activeboards that can do queries across all those data sources with one query, being able to visualize the data from multiple different sources. That is probably the most useful thing that we find in Devo.
The visual analytics are extremely easy to understand. You have to learn how the queries need to be built and how to do that in an effective manner, but once you have someone trained in how to do the queries and Activeboards, it's very easy for that person to build them and render the data in whatever manner you need. If I bring in forensic memory analysis, forensic hard drive analysis, and network data, I can point it to specific fields in each of those logs and have it correlated altogether.
The solution is very nice because of the Activeboards that we build out. It's multi-tenant and easy for us to pull the code into other tenants and leverage them for other customers. From an attack perspective, Devo also allows us to scan across multiple tenant environments to see if the same attack is occurring against multiple different customers. Then, it also keeps their data isolated from each other, in conformity with compliance requirements. This is a huge factor for us, and one of the reasons why we looked at Devo originally. They were the only ones that we saw who offered that multi-tenant environment.
Devo manages 400 days of hot data, which is obviously great because you have the ability to go back in logs and correlate against things that you've seen. If you have a web attack come in on day 300, you can go back across all the logs with Activeboards and look for that same artifact for almost a year's time. So, it's very effective in what it can do. Depending on the logs themselves, it could be even longer than those 400 days. It just depends on how deep and rich those logs are.
I like the UI. It's simple to use. When you get into the advanced features, once you have some training, it's very easy to toggle around. But, even from a novice standpoint, you can definitely get in there, find information and data that you're looking for, and everything else, which is good.
What needs improvement?
The only downfall that I have is that it is browser-based. So, when you start doing some larger searches, it will cause the browser to lock up or shut down. You have to learn the sweet spot of how much data you can actually search across. The workaround we found is to build out really good Activeboards; then it doesn't render as much data in the browser. As far as ingestion, recording, and retention, I've seen no issues.
It comes down to some feature requests here and there, which is normal stuff with software. As a user, I may want to scroll through the filters, but the filters didn't allow scrolling at first. That's a feature that came in with version 6.
For how long have I used the solution?
We've been playing around with it since June and had it fully deployed since August.
What do I think about the stability of the solution?
I've had no stability issues.
What do I think about the scalability of the solution?
Scaling has been easy. It's cloud, so you just keep dumping data at it. I haven't seen any issues.
I have six or seven people who maintain and log into it, using it for analysis and everything else. Everyone is capable of doing the same thing on it. We also have customers who log into it to look at their data. There are about 25 people who have access over all the tenants.
It's definitely being fully utilized. It is a core tool for us in looking at logs, because logs are the starting point in any investigation. So, leveraging Devo from start to finish in any investigation is basically what we do.
How are customer service and technical support?
Their tech support is average. You are going to open up a ticket and wait a while. They need to go through their scripts, like everyone else. If there is an issue, you have to push to have it escalated, then go from there.
The support is average, but the Professional Services is above average.
Two areas of improvement would be their tech support and documentation. Their documentation could be better. They are growing quickly and need to have someone focused on tech writing to ensure that all the different updates, how to use them, and all the new features and functionality are properly documented.
Which solution did I use previously and why did I switch?
I've used a ton of other solutions: ELK Stack, Kibana, and Splunk. The cost of Devo, as it relates to Splunk, is significantly less with higher value. Its capabilities of ingesting so many different types of structured and unstructured data beats out the other tools that I've used. The pre-built parsers also beat out what we've used. Overall, it's far more advanced and user-friendly than the other competitive log analysis and SIEM tools. I've used these tools at OpenText and in different roles as well.
We're on the professional services side. This isn't OpenText IT services. This is us providing service to customers who are doing investigations. As investigators, we use whatever tool is out there that's best-of-breed. We came across Devo, then PoC'd and liked it. That's why we brought it into the toolbox.
How was the initial setup?
The initial setup is easy. They just send you your credentials, you log in and go to their user docs, grab the relay, bring it down, and you can point the data to it. Or, you could do direct ingestion of CSV files or other data sources directly to the cloud. Setup-wise, there's very little that you actually have to set up.
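As a rough illustration of pointing data at a relay, here is a minimal Python sketch that builds a simplified syslog-style line and sends it over UDP. The relay hostname, the app name, and the message layout are assumptions for illustration, not Devo specifics:

```python
import socket

def build_syslog_line(hostname: str, app: str, msg: str,
                      facility: int = 1, severity: int = 6) -> bytes:
    """Build a simplified syslog-style line.
    Priority = facility * 8 + severity (1, 6 -> <14>, i.e. user.info)."""
    pri = facility * 8 + severity
    return f"<{pri}>{hostname} {app}: {msg}".encode()

def send_to_relay(line: bytes, relay_host: str, relay_port: int = 514):
    """Fire-and-forget UDP send of one log line to the relay."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(line, (relay_host, relay_port))

line = build_syslog_line("forensics01", "casetool", "analysis export complete")
# send_to_relay(line, "relay.example.internal")  # hypothetical relay host
```

In practice you would point your existing syslog daemon at the relay rather than hand-rolling messages; this sketch just shows the shape of what travels over the wire.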
Anytime we deploy into an environment, we could have a relay setup in 20 to 30 minutes.
What about the implementation team?
We PoC'd the tool, then I built my own deployment strategy as to how my team leverages it, as we leverage it in a different manner than its intended use.
I was the one that designed and deployed it. It took me maybe a day or two to come up with the exact way that I wanted it to be done and create a document for my team.
We initially had 40 hours of Professional Services that we leveraged to do some customized things that we wanted done. Every customer gets those Professional Services hours with their purchase just to get through the little nuances that are different in every environment. Their Professional Services team was excellent, very responsive, and for anything that we needed, the turnaround time was minimal. So, it was good.
What was our ROI?
The solution has definitely decreased our MTTR. The faster you can get through data, the faster you can get to the actual root cause and remediation. Identifying a root cause cuts time down by maybe 50 percent. As far as getting to remediation, I'd put it at about the same.
We have seen ROI. It's the fact of having a tool that you can build a repeatable process off of for your analysts. To be able to provide repeatable investigative capabilities is a big return on investment for us.
What's my experience with pricing, setup cost, and licensing?
It's a per gigabyte cost for ingestion of data. For every gigabyte that you ingest, it's whatever you negotiated your price for. Compared to other contracts that we've had for cloud providers, it's significantly less.
Which other solutions did I evaluate?
We have used everything out there. We have used Splunk, ArcSight, and LogRhythm. We've used all those tools. We have leveraged them from customer environments and used them as tools. So, we have exposure to all of those.
Devo is used on every investigation that we do. It's a core tool for us. Without Devo, it would be very difficult for my analysts to do the same investigation from a threat hunt or incident response perspective repeatedly. Because we're consistently using the same tool, we consistently know how it's set up. We've set it up that way. So, it's very easy for us, customer-to-customer, to repeat the process. Whereas, if I had LogRhythm with one customer, then Splunk with another customer, it's not a repeatable process.
What other advice do I have?
Definitely get training and professional services hours with it. It is one of those tools where the more you know, the more you can do. Out-of-the-box, there is a lot of stuff that you can just do with very little training. However, to get to the really cool features and setups, you'll need the training and a bit of front-end assistance to make sure it's customized for your environment the right way.
You need to have a tool of this capability in your environment, whether you're providing service for someone else or if it's your own internal environment that you're working in. It is a core piece of functionality.
I would rate the solution between an eight point five and nine (out of 10). The only two things that stop it from getting a 10 are that they need to improve their documentation and customer service. That's just customer service from the standpoint of support. It's just your generic, outsourced, call-in support, where they read through a script and go, "Did you try this? Or, did you try that?" Then, they open up a ticket, and you're waiting for a period of time. If they can improve their support process and documentation, they would very easily push toward a 10.