What is our primary use case?
We needed a SOC operation, and we weren't going to build it in-house, so we were looking for exactly what they offer. They're an MDR service, and we were looking for somebody that would manage the SIEM tool as well as the endpoint management tool and have the ability to take action, when necessary, on endpoints and function as a full, hands-on SOC. That is why we selected them.
The service doesn't require us to run any hardware. The software required is Splunk, as the SIEM tool, which provides options as to how it's managed. We opted to have CRITICALSTART fully manage it, so we're hands-off with the SIEM tool, and it's hosted in AWS. You also have to have an endpoint detection tool that CRITICALSTART has approved. I don't know what their current selection is, but a year and a half ago it was either Cylance or Carbon Black. We're using Cylance.
Our use of the service covers 100 percent of our endpoints: 1,100 in total.
How has it helped my organization?
We didn't have a security team before. If I were to say the service had improved our organization, it might lead you to think we were doing security a certain way before, but we weren't. I came into the company as the first security professional for them.
The service has increased efficiency to the point that I can focus on other areas of the business. As a department of one, not having to attempt a one-person SOC operation means I'm able to focus on the company's strategic security posture and architecture, and on where our keys to the kingdom are. I can also pay attention to compliance, which is part of my role. I'm able to do my job because I have this outsourced SOC.
What is most valuable?
The most valuable part of the service is that they are 100 percent taking care of all first-line alerts. With eyes on glass, fingers on keyboard, they're doing the work. If they have a question, or they haven't seen something in our environment before, then they will escalate it to me. The service takes care of Tier-1 and Tier-2 triage. They actually provide a report that gives details on how much that saves us. I looked at it when we first started, and it was multiple FTEs, on an annual basis, that they're saving us.
I also use their mobile app. It's very easy to use, and it's very convenient to be able to respond to alerts wherever you are. I love the app. You can respond and communicate, per ticket, with their SOC in near real-time, and the response is very quick. I can close tickets and I can escalate them. I have close to all of the capabilities that I have on my desktop; the things I need to do in a ticket, I can typically do from the app. I am a one-man show, the only security analyst for our organization, and I couldn't really do my job without the app. I can't sit in front of a computer all the time, so it's critical for us.
I communicate with CRITICALSTART's security analysts. I haven't spoken with them over the phone, except for one time, in a year-and-a-half, but their accessibility is very high. I always receive quick responses to my escalated tickets. When I'm commenting, they're following up, and they're very fast.
I feel I have full transparency into their SOC. Anything I want to go look at, I can. I can see all of the comments and discussions the SOC team has on our behalf. I have full transparency.
In terms of CRITICALSTART contractually committing to a penalty if it misses a one-hour SLA to resolve an escalated alert, I honestly haven't looked at the contract in a year and a half, so I don't remember whether the penalty is monetary, though I believe it is. They're very proud of their SLA and of not missing it, so I've never had an issue or concern, or had to think about it. That commitment to SLAs was our CIO's primary concern when we were evaluating CRITICALSTART. After seeing their record, 18 months ago, of not having missed a single SLA, it became a moot point. It was a concern at the time, but they satisfied that concern.
What needs improvement?
The updated UI is actually pretty bad. In terms of intuitiveness, it is fairly easy to use, but the responsiveness, on a scale of one to 10, is a one. The performance is really poor.
I have shared this next point with them already, but I would like to see a monthly report to talk about advancements or new alerts, anything to do with what we call IOCs — indicators of compromise. When there is anything that they have changed on behalf of their customers on the backend, they should say, "Hey, we have made these modifications. We're now looking at these types of alerts." It would give the customer a sense that they're actively looking for new IOCs. So I would like a monthly recap of what they have done, not specifically for me, but what they've done for all of their customers. That would be good.
For how long have I used the solution?
I have been using CRITICALSTART for a year and a half.
How are customer service and technical support?
I would rate the customer support, post-deployment, as highly as it can be rated. Their focus on doing the right thing for the customer is how you would hope that every company you deal with would respond to customers. They are 100 percent focused on doing the right thing for the customer, and they back it up. I've seen that multiple times.
In terms of project management: in the lifespan of managed detection and response companies, I'm an old customer now, at 18 months. Back then, the project management was poor, and that was part of the reason our roll-out was delayed. CRITICALSTART took all of the necessary steps to revamp that department and correct their mistakes, and that's also why we were compensated monetarily. It was poor then, but I haven't experienced working with the revamped project management team, because I'm already established.
In terms of delivering services on time, on budget, and on spec, we're a little bit of a unique customer. I know that because we had some early growing pains. They did miss the scoping of our network, which did impact the budget. I brought it to their attention and they stepped up. From a monetary standpoint, they made it right, with no fight. They just recognized it. They have a great ability to put themselves in the customer's shoes and do the right thing on behalf of the customer without any friction.
Which solution did I use previously and why did I switch?
Prior to CRITICALSTART, we were a customer of Arctic Wolf.
It's really not even fair to compare the two companies, because Arctic Wolf was not a 24/7 SOC operation, even though they sold themselves as that. It was more like a managed SIEM service, using a proprietary SIEM. I cannot say anything positive about that company. Not a single thing. From the migration right through to sending the SIEM appliances back to them, it was a very bad experience. They don't do what CRITICALSTART does. Even though they try to market themselves as an MDR, they're really not an MDR. They don't manage the endpoint tool, so it was really apples and oranges.
How was the initial setup?
There wasn't really an initial setup required on our end to use this service. Implementing the endpoint tool, in this case Cylance, was a requirement for us. That involved some GPOs, as well as the Splunk forwarders we implemented in our environment. But as far as man-hours on our side to do the setup, it was very low.
It was straightforward. Pushing out software is something we do. Creating GPOs to make sure that the correct data from servers was being pushed and directed to the Splunk forwarders was all typical sysadmin work. Nothing was complicated.
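To give a sense of the scale of that sysadmin work, here is a minimal sketch of the kind of Splunk universal forwarder configuration such a rollout typically involves. The host name, port, and output-group name below are placeholders for illustration, not our actual settings:

```ini
# inputs.conf - collect Windows Security event logs on each server
# (the WinEventLog stanza is standard Splunk forwarder syntax)
[WinEventLog://Security]
disabled = 0

# outputs.conf - forward collected events to the managed SIEM indexers
# (siem.example.com and "managed_siem" are hypothetical placeholders)
[tcpout]
defaultGroup = managed_siem

[tcpout:managed_siem]
server = siem.example.com:9997
```

In practice, files like these are pushed to servers alongside the forwarder package via the same GPOs used for software deployment.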
There were no data sources that this service wasn't able to integrate with.
From the time we entered into an agreement until we started using the service was about four to five months, but a lot of that depended on our ability to get the product rolled out and to baseline our environment. Some of that time was on us and some of it was on them, but they compensated us monetarily for the delay. While it didn't go as fast as we wanted, the end result was positive.
What was our ROI?
We are absolutely seeing return on our investment from CRITICALSTART's services. They're doing the job of a 24/7 SOC at a fraction of the price that it would cost me to run it myself.
What's my experience with pricing, setup cost, and licensing?
You get what you pay for.
Which other solutions did I evaluate?
Compared to the competitors that we looked at, CRITICALSTART had a longer history, even though they were a young company. I liked that they were not using proprietary tools in the environment. That allowed us the freedom to move, if we wanted to, to another provider. They were just ahead of everybody else in terms of maturity.
What other advice do I have?
In terms of advice, I don't feel that implementing this service is any different than implementing any other system into your environment. A lot relies on your project management skills.
I would advise testing your MDR candidates against a framework. The one that comes to mind is the MITRE ATT&CK framework, which everybody is familiar with. Have realistic expectations about which vulnerabilities your MDR partner is really going to mitigate. That's the lesson I have learned.
In terms of CRITICALSTART's Trusted Behavior Registry and the way it resolves alerts that are known and trusted, so that the focus is on resolving unknown alerts: I'm obviously not looking at all of the alerts they work on, only the small percentage they escalate to me. Based on those, on a scale of one to 10, I'd rate this aspect at eight. A few things slip through, alerts they escalate that I know should not have been escalated, but it's a very small percentage of what they actually escalate. Occasionally I'll have to say, "Hey, did you mean to do this one? We've been through this before," or a VirusTotal scan shows it's 100 percent clean, so why did it get escalated? It's not common, but it does happen.
The service did miss a pen test, but I still have a high level of confidence in the data and the actions they take. We had hired a red team, so the situation was a red-team test, and red teams are generally 100 percent successful, or very close to it. With them, you always expect to uncover the unknown. But I do have confidence in the tool and the data that they are looking at.
The number of escalated alerts we receive, compared to the number the service's Trusted Behavior Registry resolves, is probably less than 5 percent of the total.