
Prisma Cloud by Palo Alto Networks Overview

Prisma Cloud by Palo Alto Networks is the #1 ranked solution in our list of Container Security Solutions. It is most often compared to Aqua Security.

What is Prisma Cloud by Palo Alto Networks?

The move to the cloud has changed all aspects of the application development lifecycle – security being foremost among them. Security and DevOps teams face a growing number of entities to secure as organizations adopt cloud native approaches. Ever-changing environments challenge developers to build and deploy at a frantic pace, while security teams remain responsible for the protection and compliance of the entire lifecycle. Prisma™ Cloud by Palo Alto Networks delivers complete security across the development lifecycle on any cloud, enabling you to develop cloud native applications with confidence.

Prisma Cloud by Palo Alto Networks is also known as Palo Alto Networks Prisma Cloud, Prisma Public Cloud, RedLock Cloud 360, RedLock, Twistlock, Aporeto.


Prisma Cloud by Palo Alto Networks Customers

Amgen, Genpact, Western Asset, Zipongo, Proofpoint, NerdWallet, Axfood, 21st Century Fox, Veeva Systems, Reinsurance Group of America

Pricing Advice

What users are saying about Prisma Cloud by Palo Alto Networks pricing:
  • "From my exposure so far, they have been really flexible on whatever your current state is, with a view to what the future state might be. There's no hard sell. They "get" the journey that you're on, and they're trying to help you embrace cloud security, governance, and compliance as you go."
  • "I don't know a better way to do it, but their licensing is a little confusing. That's due to the breadth of different types of technologies they are trying to cover. The way you license depends on where you're securing. When they were Twistlock it was a simple licensing scheme and you could tell what you were doing. Now that they've changed that scheme with Palo Alto, it is quite confusing. It's very difficult to predict what your costs are going to be as you try to expand coverage."
  • "If you pay for three years of Palo Alto, it's better. If you're planning on doing this, it's obviously not going to be for one year, so it's better if you go with a three-year license... The only challenge we have is with the public cloud vendor pricing. The biggest lesson I have learned is around the issues related to pricing for public cloud. So when you are doing your segmentation and design, it is extremely important that you work with someone who knows and understands what kinds of needs you will have in the future and how what you are doing will affect you in terms of costs."
  • "The pricing and the licensing are both very fair... The biggest advice I would give in terms of costs would be to try to understand what the growth is going to look like. That's really been our biggest struggle, that we don't have an idea of what our future growth is going to be on the platform. We go from X number of licenses to Y number of licenses without a plan on how we're going to get from A to B, and a lot of that comes as a bit of a surprise. It can make budgeting a real challenge for it."
  • "The pricing is good. They gave us some good discounts right at the end of the year based on the value that it brings, visibility, and the ability to build in cloud, compliance, and security within one dashboard."
  • "One thing we're very pleased about is how the licensing model for Prisma is based on work resources. You buy a certain amount of work resources and then, as they enable new capabilities within Prisma, it just takes those work resource units and applies them to new features. This enables us to test and use the new features without having to go back and ask for and procure a whole new product, which could require going through weeks, and maybe months, of a procurement process."
  • "If a competitor came along and said, "We'll give you half the price," that doesn't necessarily mean that's the right answer, at all. We wouldn't necessarily entertain it that way. Does it do what we need it to do? Does it work with the things that we want it to work with? That is the important part for us. Pricing wasn't the big consideration it might be in some organizations. We spend millions on public cloud. In that context, it would not make sense to worry about the small price differences that you get between the products."
  • "The pricing and licensing are expensive compared to the other offerings that we considered."

Prisma Cloud by Palo Alto Networks Reviews

Luke Lynch
Cloud Security Specialist at a financial services firm with 501-1,000 employees
Real User
Top 20 Leaderboard
Gives me a holistic view of cloud security across multiple clouds or multiple cloud workloads within one cloud provider

Pros and Cons

  • "You can also integrate with Amazon Managed Services. You can also get a snapshot in time, whether that's over a 24-hour period, seven days, or a month, to determine what the estate might look like at a certain point in time and generate reports from that for vulnerability management forums."
  • "In addition to that, I can get a snapshot of what I deemed were the priority vulnerabilities, whether it was identity access management, key rotation, or secrets management. Whatever you deem to be a priority for mitigating threats for your environment, you can get that as a snapshot."
  • "It's not really on par with, or catering to, what other products are looking at in terms of SAST and DAST capabilities. For those, you'd probably go to the market and look at something like Veracode or WhiteHat."

What is our primary use case?

Primarily the intent was to have a better understanding of our cloud security posture. My remit is to understand how well our existing estate in cloud marries up to the industry benchmarks, such as CIS or NIST, or even AWS's version of security controls and benchmarks.

When a stack is provisioned in a cloud environment, whether in AWS, Azure, or Google Cloud, I can get an appreciation of how well the configuration aligns with those standards. And if it's out of alignment, I can effectively task those who are accountable for resources in the cloud to remediate any identified vulnerabilities.

How has it helped my organization?

The solution is really comprehensive. Especially over the past three to four years, I was heavily dependent on AWS-native toolsets and config management. I had to be concerned about whether there were any permissive security groups or scenarios where logging might not have been enabled on S3 buckets, or if we didn't have encryption on EBS volumes. I was quite dependent on some of the native stacks within AWS.

Prisma not only looks at the workloads for an existing cloud service provider, but it looks at multiple cloud service providers outside of the native stack. Although the native tools on offer within AWS and Azure are really good, I don't want to be heavily dependent on them. And with Google, where they don't have a security hub where you can get that visibility, then you're quite dependent on tools like Prisma Cloud to be able to give you that. In the past, that used to be Dome9 or Evident.io. Palo Alto acquired Evident.io, and that became rebranded as this cloud posture management solution. It's proven really useful for me.

It integrates capabilities across both cloud security posture management and cloud workload protection. The cloud security posture management is what it was initially intended for, looking at the configuration of cloud service workloads for AWS, Azure, Google, and Alibaba. And you can look at how the configuration of certain workloads aligns with standards such as CIS, NIST, and PCI.

And that brings our DevOps and SecOps teams closer together. The engineering aspect is accountable for provisioning dedicated accounts for cloud consumers within the organization. There might be just an entity within the business that has a specific use case. You then want to go to ensure that they take accountability for building their services in the cloud, so that it's not just a central function or that engineering is solely responsible. You want something of a handoff so that consumers of cloud within the organization can also have that accountability, so that it's a shared responsibility. Then, if you're in operations, you have visibility into what certain workloads are doing and whether they're matching the standards that have been set by the organization from a risk perspective.

You've also got the software engineering side of the business and they might just be focused on consuming base images. They may be building container environments or even non-container environments or hosting VMs. They also have a level of accountability to ensure that the apps or packages that they build on top of the base image meet a certain level of compliance, depending on what your business risk-appetite is. So it's really useful in that you've got that shared accountability and responsibility. And overall, you can then hand that off to security, vulnerability management, or compliance teams, to have a bird's-eye view of what each of those entities is doing and how well they're marrying up to the expected standards.

Prior to Prisma Cloud, you'd have to have point solutions for container runtime scanning and image scanning. They could be coupled together, but even so, if you were running multiple cloud service providers in parallel, you could never really get the whole picture from a governance perspective. You would struggle to actually determine, "Okay, how are we doing against the CIS benchmark for Azure, GCP, and AWS, and where are the gaps that we need to address from a governance and a compliance perspective so as to reduce our risk and the threat landscape?" Now that you've got Prisma Cloud, you can get that holistic view in a single pane of glass, especially if you're running multiple cloud workloads or a number of cloud workloads with one cloud service provider. It gives you the ability to look at private, public, or hybrid offerings. It saves me having to go to market and also run a number of proofs of concept for point solutions. It's an indication of how the market has matured and how Palo Alto, with Prisma Cloud in particular, understands what their consumers and clients want.

It can certainly help reduce alert investigation times, because you've got the detail that comes with the alert, to help remediate. The level of detail offered up by Prisma Cloud, for a given engineer who might not be that familiar with a specific type of configuration or a specific type of alert, saves the engineer having to delve into runbooks or online resources to learn how to remediate a particular alert. You have to compare it to a SIEM solution where you get an event or an alert is triggered. It's usually based on a log entry and the engineer would have to then start to investigate what that alert might mean. But with Prisma Cloud and Prisma Cloud Compute, you get that level of detail off the back of every event, which is really useful.

It's hard to quantify how much time it might save, but think about the number of events and what it would be like if they didn't have that level of detail on how to remediate, each time an event occurred. Suppose you had a threshold or a setting that was quite conservative, based on a particular cloud workload, and that there were a number of accounts provisioned throughout the day and, for each of those accounts, there were a number of config settings that weren't in alignment with a given standard. For each of those events, unless there was that level of detail, the engineer would have to look at the cloud service provider's configuration runbooks or their own runbooks to understand, "Okay, how do I change something from this to this? What's the polar opposite for me to get this right?" The great thing about Prisma Cloud is that it provides that right out-of-the-box, so you can quickly deduce what needs to be done. For each event, you might be saving five or 10 minutes, because you've got all the information there, served up on a plate.

What is most valuable?

For me, what was valuable from the outset was the fact that, regardless of what cloud service provider you're with, I could segregate visibility of specific accounts to account owners. For example, at AWS, you might have an estate that's solely managed by yourself, or there might be a number of teams within the organization that do so.

You can also integrate with Amazon Managed Services. You can also get a snapshot in time, whether that's over a 24-hour period, seven days, or a month, to determine what the estate might look like at a certain point in time and generate reports from that for vulnerability management forums. In addition to that, I can get a snapshot of what I deemed were the priority vulnerabilities, whether it was identity access management, key rotation, or secrets management. Whatever you deem to be a priority for mitigating threats for your environment, you can get that as a snapshot.

You can also automate how frequently you want reports to be generated. You can then understand whether there has been any improvement or reduction in vulnerabilities over a certain time period.
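A minimal sketch of what that kind of automated pull can look like against the Prisma Cloud CSPM API, assuming an access-key pair (the base URL varies by tenant, and the endpoint and field names here are illustrative, so verify them against the current API reference):

```python
import requests

API = "https://api.prismacloud.io"  # base URL varies by tenant/region

# Exchange an access-key pair for a short-lived JWT.
token = requests.post(
    f"{API}/login",
    json={"username": "<access-key-id>", "password": "<secret-key>"},
).json()["token"]
headers = {"x-redlock-auth": token}

# Pull open alerts from the last seven days -- roughly the contents
# of a weekly posture snapshot -- and tally them by policy severity.
alerts = requests.get(
    f"{API}/v2/alert",
    headers=headers,
    params={
        "timeType": "relative", "timeAmount": "7", "timeUnit": "day",
        "alert.status": "open", "detailed": "true",
    },
).json()

counts = {}
for alert in alerts.get("items", []):  # response shape may vary by version
    severity = alert.get("policy", {}).get("severity", "unknown")
    counts[severity] = counts.get(severity, 0) + 1
print(counts)  # e.g. {'high': 12, 'medium': 40, 'low': 7}
```

A script like this, dropped into a scheduler, is one way to produce the kind of recurring severity-trend report described above.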

The solution also enables you to ingest logs to your preferred SIEM provider so that you've got a better understanding of how things stack up with event correlation and SIEM systems.

If you've got an Azure presence, you might be using Office 365 and you might also have a presence in Google Cloud for the data, specifically. You might also want to look at scenarios where, if you're using tools and capabilities for DevOps, like Slack, you can plug those into Prisma Cloud as well to understand how well they marry up to vulnerabilities. You can also use it for driving out instant vulnerabilities into Slack. That way, you're looking at what your third-party SaaS providers are doing in relation to certain benchmarks. That's really useful as well.

In addition, an engineer may provision something like a shared service, a DNS capability, a sandbox environment, or a proof of concept. The ability to filter alerts by severity helps when reporting on the services that have been provisioned. They'll come back as a high, medium, or low severity and then I ensure that we align with our risk-appetite and prioritize higher and medium vulnerabilities so that they are closed out within a given timeframe.

When it comes to root cause, Prisma Cloud is quite intuitive. If you have an S3 bucket that has been set to public but, realistically, it shouldn't have been, you can look at how to remediate that quite intuitively, based on what the solution offers up as a default setting. It will offer up a way to actually resolve and apply the correct settings, in line with a given standard. There's almost no thinking involved. It's on-point and it's as if it offers up the specific criteria and runbooks to resolve particular vulnerabilities.
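To make that concrete, for the public S3 bucket case the remediation the console walks you through amounts to re-applying the bucket's Block Public Access settings. A minimal sketch of the equivalent change using boto3 (an illustration only, not Prisma Cloud's own remediation code; the bucket name is hypothetical):

```python
import boto3

s3 = boto3.client("s3")

# Re-enable all four Block Public Access settings on the offending
# bucket, which is what a "publicly accessible S3 bucket" alert
# typically asks for.
s3.put_public_access_block(
    Bucket="example-audit-logs",  # hypothetical bucket name
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```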

That assists security, giving them an immediate way to resolve a given conflict or misalignment. The time-savings are really incomparable. If you were to identify a vulnerability or a risk, you might have to draw up what the remediation activity should look like. However, what Prisma Cloud does is that it actually presents you with a report on how to remediate. Alternatively, you can have dynamic events that are generated and applied to Slack, for example. Those events can then be sent off to a JIRA backlog or the like. The engineers will then look at what that specific event was, at what the criteria are, and it will tell them how to remediate it without their having to set time aside to explain it. The whole path is really intuitive and almost fully automated, once it's set up.

What needs improvement?

One scenario, in early days, was in trying to get a view on how you could segregate account access for role-based access controls. As a DevSecOps squad, you might have had five or six guys and girls who had access to the overall solution. If you wanted to hand that off to another team, like a software engineering team, or maybe just another cloud engineering team, there were concerns about sharing the whole dashboard, even if it was just read-only. But over the course of time, they've integrated that role-based access control so that users should only be able to view their own accounts and their own workloads, rather than all of the accounts.

Another concern I had was the fact that you couldn't ingest the accounts into Prisma Cloud in an automated sense. You had to manually integrate them or onboard them. They have since driven out new features and capabilities, over the last 12 months, to cater for that. At an organizational level you can now plug that straight into Prisma Cloud, as and when new accounts are provisioned or created. Then, by default, the AWS account or the Azure account will actually be included, so you've got visibility straight away.

The lack of those two features was a limitation as to how far I could actually push it out within the organization for it to be consumed. They've addressed those now, which is really useful. I can't think of anything else that's really causing any shortcomings. It's everything and more at the moment.
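Before that organization-level onboarding existed, the manual workaround could be scripted: enumerate the member accounts and register each one through the API. A rough sketch, assuming AWS Organizations and an assumed /cloud/aws onboarding endpoint (the endpoint and field names are illustrative, and real onboarding also requires a role ARN and external ID, so check the current API reference):

```python
import boto3
import requests

API = "https://api.prismacloud.io"
headers = {"x-redlock-auth": "<jwt-from-login>"}

# Enumerate member accounts via AWS Organizations (the paginator
# handles accounts spread across multiple result pages).
org = boto3.client("organizations")
for page in org.get_paginator("list_accounts").paginate():
    for acct in page["Accounts"]:
        # Assumed per-account onboarding call; field names are
        # illustrative and omit the role ARN / external ID that a
        # real registration requires.
        requests.post(f"{API}/cloud/aws", headers=headers, json={
            "accountId": acct["Id"],
            "name": acct["Name"],
            "enabled": True,
        })
```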

For how long have I used the solution?

I've been using Prisma Cloud for about 12 months now.

How was the initial setup?

It's pretty straightforward to run an automated setup, if you want to go down that route. The capabilities are there. But in terms of how we approached it, it was like a plug-and-play into our existing stack. Within AWS, you just have to point Prisma Cloud at your organizational level so that you can inherit all the accounts and then you have the scanning capability and the enforcement capability, all native within Prisma Cloud. There's nothing that we're doing that's over and above, nothing that we would have to automate other than what is actually provided natively within Prisma Cloud. I'm sure if you wanted to do additional automation, for example if you wanted to customize how it reports into Slack or how it reports into Atlassian tools, you could certainly do that, but there's nothing that is that complex, requiring you to do additional automation over and above what it already provides.

What was our ROI?

I haven't gone about calculating what the ROI might be.

But just looking at it from an operational engineering perspective and the benefits that come with it, and when it comes to the governance and compliance aspects of running AWS cloud workloads, I now put aside half an hour or an hour on a given day of the week, or on alternate days of the week. I use that time to look at what the cloud security posture is, generate a number of reports, and hand them off to a number of engineering teams, all a lot quicker than I was able to two or three years ago.

In the past, at times I would have had to run Trusted Advisor from AWS, to look at a particular account, or run a number of reports from Trusted Advisor to look at multiple accounts. And with Trusted Advisor, I could never get a collective view on what the overall posture was of workloads within AWS. With Prisma Cloud, I can just select 30 AWS accounts, generate one report, and I've got everything I need to know, out-of-the-box. It gives me all the different services that might be compliant/non-compliant, have passed/failed, and that have high, medium, or low vulnerabilities. It has saved me hours being able to get those snapshots.

I can also step back by putting an automated report in place and receiving it on a weekly basis. I've also got visibility into when new accounts are provisioned, without my having to keep tabs on whether somebody has just provisioned a new account or not. The hours that are saved with it are really quite high.

What's my experience with pricing, setup cost, and licensing?

As it stands now, I think things have moved forward somewhat. Prisma and the suite of tools by Palo Alto, along with the fact that they have integrated Prisma Cloud Compute as a one-stop shop, have really got it nailed. They understand that not all clients are running container workloads. They bring together point solutions, like what used to be Twistlock, into that whole ecosystem, alongside a cloud security posture management system, and they'll license it so that it's favorable for you as a consumer. You can think about how you can have that presence and not then be dependent on multiple third-parties.

Prisma Cloud was originally designed for cloud security posture management, to determine how the configuration of cloud services aligns with given standards. Through the evolution of the product, they then integrated a capability they call Prisma Cloud Compute. That is derived from point solutions for container and image scanning. It now has those capabilities on offer within a single pane of glass.

Prior to the given scenario with Prisma Cloud, you'd have to either go to Twistlock or Aqua Security for container workloads. If you were going open source, obviously that would be free, but you'd still have to be looking at independent point solutions. And if you were looking at governance and compliance, you'd have to look at the likes of Dome9, Evident.io, and OpenSCAP, in a combination with Trusted Advisor. But the fact that you can just lean into Prisma Cloud and have those capabilities readily available, and have an account manager that is priced based on workloads, makes it a favorable licensing model.

It also makes the whole RFP process a lot more streamlined and simplified. If you've got a purchasing specialist in-house, and then heads-of-functions who might have a vested interest in what the budget allocation is, from either a security perspective or from a DevOps cloud perspective, it's really quite transparent. They work the pricing model in your favor based on how you want to actually integrate with their products. From my exposure so far, they have been really flexible on whatever your current state is, with a view to what the future state might be. There's no hard sell. They "get" the journey that you're on, and they're trying to help you embrace cloud security, governance, and compliance as you go. That works favorably for them as well, because the more clients that they can acquire and onboard, the more they can share the experience, helping both the business and the consumer, overall.

Which other solutions did I evaluate?

Prior to Prisma Cloud, I was looking at Dome9 and Evident.io. Around late 2018 to early 2019, Palo Alto acquired Evident.io and made it part of their Prisma suite of security tools.

At the time, the two that were favorable were Evident.io and Dome9, side-by-side, especially when running multiple AWS accounts in parallel. At the time, it was Dome9 that came out as more cost-effective. But I actually preferred Evident.io. It just happened to be that we were evaluating the Prisma suite and then discovered that Palo Alto had acquired Evident.io. For me that was really useful. As an organization, if we were already exploring the capabilities of Palo Alto and had a commercial presence with them, to then be able to use Prisma Cloud as part of that offering was really good for me as a security specialist in cloud. Prior to that, if as an organization you didn't have a third-party cloud security posture management system for AWS, you were heavily dependent on Trusted Advisor.

What other advice do I have?

My advice is that if you have the opportunity to integrate and utilize Prisma Cloud, you should, because it's almost a given that you won't find another cloud security posture management system like it. There are competitors striving to achieve the same types of things. However, when it comes to the governance element for a head of architecture, a head of compliance, or even the CSO level, without that holistic view you are potentially flying blind.

Once you've got a capability running in the cloud and the associated demand that comes through from the business to provision accounts for engineers or technical service owners or business users, the given is that not every team or every user that wants to consume the cloud workload has the required skill set to do so. There's a certain element of expertise that you need to securely run cloud workloads, just as is needed for running applications or infrastructure on-premise. However, unless you have an understanding of what you're opening up to—the risk element to running cloud workloads, such as potential attacks or compromise of service—from an organizational perspective, it's only a matter of time before something is leaked or something gets compromised, and that can be quite expensive to have to manage. There are a lot of unknowns.

Yes, they do give you capabilities, such as Trusted Advisor, or you might have OpenSCAP or you might be using Forseti for Google Cloud, and there are similar capabilities within Azure. However, the cloud service providers aren't native security vendors. Their workloads are built around infrastructure- or platform-as-a-service. What you have to do is look at how you can complement what they do with security solutions that give you not just the north-south view, but the east-west as well. You shouldn't just be dependent on everything out-of-the-box. I get the fact that a lot of organizations want to be cloud-first and utilize native security capabilities, but sometimes those just don't give you enough. Whether you're looking at business-risk or cyber-risk, for me, Prisma Cloud is definitely out there as a specialist capability to help you mitigate the threat landscape in running cloud workloads.

I've certainly gone from a point where I understood what the risk was in not having something like this, and that's when I was heavily dependent on native tools that are offered up with cloud service providers. 

The first release that came out didn't include the workload management, because what happened, I believe, was that Palo Alto acquired Twistlock. Twistlock was then "framed" into cloud workload management within Prisma Cloud. What that meant was that you had a capability that looks at your container workloads, and that's called Prisma Cloud Compute, which is all available within a single pane of glass, but as a different set of capabilities. That is really useful, especially when you're running container workloads.

In terms of securing the entire development life cycle, if you integrate it within the Jenkins CI/CD pipeline, you can get the level of assurance needed for your golden images or trusted image. And then you can look at how you can enforce certain constraints for images that don't match the level of compliance required. In terms of going from what would be your image repository, when that's consumed you have the capability to look at what runtime scanning looks like from a container perspective. It's not really on par with, or catering to, what other products are looking at in terms of SAST and DAST capabilities. For those, you'd probably go to the market and look at something like Veracode or WhiteHat.
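As a sketch of what that Jenkins gate can look like: the Console ships a twistcli binary whose "images scan" subcommand exits non-zero when an image violates the Console's vulnerability policy, so a pipeline step only has to propagate the exit code. The Console URL, credentials, and image name below are hypothetical, and the flags should be verified against your twistcli version:

```python
import subprocess
import sys

# Scan the candidate golden image against the Console's policy.
# --details prints per-CVE findings into the build log.
result = subprocess.run([
    "twistcli", "images", "scan",
    "--address", "https://console.example.internal:8083",  # hypothetical Console URL
    "--user", "ci-scanner",                                # hypothetical CI account
    "--password", "<token-or-password>",
    "--details",
    "registry.example.com/base/golden-image:1.2.3",        # hypothetical image
])

# twistcli exits non-zero when the policy threshold is breached,
# so propagating the code fails the stage and blocks promotion.
sys.exit(result.returncode)
```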

It all depends on the way an organization works, whether it has a distributed or centralized setup. Is there like a central DevOps or engineering function that is a single entity for consuming cloud-based services, or is there a function within the business that has primarily been building capabilities in the cloud for what would otherwise be infrastructure-as-a-service for internal business units? The difficulty there is the handoff. Do you look at running it as a central function, where the responsibility and the accountability is within the DevOps teams, or is that a function for SecOps to manage and run? The scenario is dependent on what the skill sets are of a given team and what the priorities are of that team. 

Let's say you have a security team that knows its area and handles governance, risk, and compliance, but doesn't have an engineering function. The difficulty there is how do you get the capability integrated into CI/CD pipelines if they don't have an engineering capability? You're then heavily relying on your DevOps teams to build out that capability on behalf of security. That would be a scenario for explaining why DevOps starts integrating with what would otherwise be CyberOps, and you get that DevSecOps cycle. They work closer together, to achieve the end result. 

But in terms of how seamless those CI/CD touchpoints are, it's a matter of having security experts that understand that CI/CD pipeline and where the handoffs are. The heads of function need to ensure that there's a particular level of responsibility and accountability amongst all those teams that are consuming cloud workloads. It's not just a point solution for engineering, cloud engineering, operations, or security. It's a whole collaboration effort amongst all those functions. And that can prove to be quite tricky. But once you've got a process, and the technology leaders understand what the ask is, I think it can work quite well.

When it comes to reducing runtime alerts, it depends on the sensitivity of the alerting that is applicable to the thresholds that you set. You can set a "learning mode" or "conservative mode," depending on what your risk-appetite is. You might want it to be configured in a way that is really sensitive, so that you're alerted to events and get insights into something that's out of character. But in terms of reducing the numbers of alerts, it all depends on how you configure it, based on the sensitivity that you want those alerts to be reporting on.

I would rate Prisma Cloud at eight out of 10. It's primarily down to the fact that I've got a third-party tool that gives me a holistic view of cloud security posture. At the click of a button I can determine what the current status is of our threat landscape, in either AWS or Azure, at a config level and at a workload level, especially with regards to Prisma Cloud Compute. It's all available within a single pane of glass. That's effectively what I was after about two or three years ago. The fact that it has now come together with a single provider is why I'd rate it an eight.

Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
RM
Director, Cloud Engineering at a pharma/biotech company with 10,001+ employees
Real User
Top 20 Leaderboard
Gives us security control gates and automated notifications in container orchestrator, but deploy is API-driven, not a built-in integration

Pros and Cons

  • "The ability to monitor the artifact repository is one of the most valuable features because we have a disparate set of development processes, but everything tends to land in a common set of artifact repositories. The solution gives us a single point where we can apply security control for monitoring. That's really helpful."
  • "I've been really pleasantly surprised with how Prisma Cloud is, over time, covering more and more of the topics I care about, and listening to customer feedback and growing the product in the right directions."
  • "When it comes to protecting the full cloud-native stack, it has the right breadth. They're covering all the topics I would care about, like container, cloud configuration, and serverless. There's one gap. There could be a better set of features around identity management—native AWS—IAM roles, and service account management. The depth in each of those areas varies a little bit. While they may have the breadth, I think there's still work to do in flushing out each of those feature sets."

What is our primary use case?

There are three pieces to our use case. For the container piece, which used to be Twistlock, we use static scan to scan our artifact repositories and we use that data to remediate issues and provide it back to developers. We also do runtime monitoring on our orchestrators, which are primarily Kubernetes, but some DC/OS as well. Right now, it's all on-premises, although we'll be moving that to the cloud in the future. 

And we use what used to be RedLock, before it was incorporated into the solution.

How has it helped my organization?

Prisma Cloud has definitely enabled us to integrate security into our CI/CD pipeline and add touchpoints into existing DevOps processes for containers. In the container space, those touchpoints are pretty seamless. We've been able to implement security control gates and automate notifications back to teams of vulnerabilities in the container orchestrator. It all works pretty smoothly, but it required a fair amount of work on our part to make that happen. But we did not run into limitations of the tool. It enabled us pretty well. The one part where we have a little bit of a gap is that most of those controls apply at deployment time. We haven't shifted all those controls back to the team level at build time yet. And we haven't really tackled the cloud space in the same way yet.
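The notification half of that can be conceptually simple: take the finding and post a summary to the owning team's channel. A minimal sketch, assuming a Slack incoming webhook (the URL and values are hypothetical):

```python
import requests

SLACK_WEBHOOK = "https://hooks.slack.com/services/T000/B000/XXXX"  # hypothetical

def notify(image: str, critical: int, high: int) -> None:
    """Post a one-line vulnerability summary to the owning team's channel."""
    requests.post(SLACK_WEBHOOK, json={
        "text": f"Scan of {image}: {critical} critical / {high} high findings."
    })

notify("registry.example.com/app/api:2.4.0", critical=1, high=7)
```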

I'm not sure we have SecOps in the container space exactly in the same way we do in other DevOps. We shifted a lot of the security responsibility into the development teams and into the Ops teams themselves. There's less of a separation. But overall, the solution has increased collaboration because of data visibility.

It also does pretty well at providing risk clarity at runtime, and across the entire pipeline, showing issues as they are discovered during the build phases. It does a good job in terms of the speed of detection, and you can look at it in terms of CVSS score or an arbitrary term for severity level. Our developers are able to correct the issues.

We are clearly better off in that we have visibility, where there was a gap before. We know where our container vulnerabilities and misconfigurations are, and even on the cloud side, where cloud misconfigurations are happening. That visibility is a huge benefit. 

The other part is actually using that data to reduce risk and that's happened really well on the container side. On the cloud side, there's still room to grow, but that's not an issue with Prisma Cloud itself. These tools are only a part of the equation. It takes a lot of organizational work and culture and prioritization to address the output of these tools, and that takes time.

What is most valuable?

The ability to monitor the artifact repository is one of the most valuable features because we have a disparate set of development processes, but everything tends to land in a common set of artifact repositories. The solution gives us a single point where we can apply security control for monitoring. That's really helpful.

Another valuable feature is the ability to do continuous monitoring at runtime. We can feed that data back to developers so they can get intelligence on what's actually deployed, and at what level, versus just what's in the artifact repository, because those are different.

In the security space, most security solutions typically do either development-side security, or they do runtime operational security, but not both. One of the relatively unique characteristics of this solution in the marketplace—and it may be that more and more of the container security solutions do both sides—is that this particular solution actually spans both. We try to leverage that.

And for the development side, we utilize both the vulnerability results from the static vulnerability scanning as well as the certain amount of configuration compliance information that you can gather from the static pre-deployment scans. We use both of those and we pay attention to both sides of that. Because this solution can be implemented both on the development side and on the runtime operational side, we look at the same types of insights on the operational runtime side to keep up with new threats and vulnerabilities. We feed that information back to developers as well, so they can proactively keep up.

We have multiple public clouds and multiple internal clouds. Some of it is OpenStack-based and some of it is more traditional VM-based. Prisma Cloud provides security spanning across these environments, in terms of the static analysis. When we're looking at the artifact repository, the solutions we're using Prisma Cloud to scan and secure will deploy to both public cloud and internal cloud. Moving into 2021, we'll start to do more runtime monitoring in public cloud, particularly in AWS. We're starting to see more EKS deployment and that's going to be a future focus area for us. It's extremely important to us that Prisma Cloud provides security across these environments. If Prisma didn't do that, that would be a deal-breaker, if there were a competitor that did. 

Public cloud is strategically very important to our company, as it probably is for many companies now, so we have to have security solutions in that space. That's why we say the security there is extremely important. We have regulatory compliance requirements. We have some contractual obligations where we have to provide certain security practices. We would do that anyway because they are security best practices, but there are multiple drivers.

Applying some of their controls outside of the traditional container space, for example, as we're doing hybrid cloud or container development, is helpful. Those things get their tentacles out to other areas of the infrastructure. An example would be that we look at vulnerabilities and dependencies as we develop software, and we use Prisma Cloud to do that for containers. We use other tools outside of the container space. They're starting to move into that other space so we can point Prisma Cloud at something like a GitHub and do that same scanning outside of the container context. That gives us the ability to treat security control with one solution.

What needs improvement?

When it comes to protecting the full cloud-native stack, it has the right breadth. They're covering all the topics I would care about, like container, cloud configuration, and serverless. There's one gap: there could be a better set of features around identity management, native AWS IAM roles, and service account management. The depth in each of those areas varies a little bit. While they may have the breadth, I think there's still work to do in fleshing out each of those feature sets.

My understanding of Palo Alto's offerings is that they have a solution that is IAM-focused. It's called Prisma Access. We have not looked at it, but I believe it's a separately-licensed offering that handles those IAM cases. I don't know whether they intend to include any IAM-type of functionality in the Prisma Cloud feature set or whether they will just say, "Go purchase this separate solution and then use them next to each other."

Also, I don't think their SaaS offering is adoptable by large enterprises like ours, in every case. There are some limitations on having multiple consoles and on our ability to configure that SaaS offering. We would like to go SaaS, but it's not something we can do today.

We have some capability to do network functions inside of Prisma Cloud. Being able to integrate that into the non-cloud pieces of the Palo Alto stack would be beneficial.

The solution's security automation capabilities are mixed. We've done some API development, and it's good that they have APIs; that's beneficial. But there is still a little disconnect between some of the legacy Twistlock APIs versus some of the RedLock APIs. In some cases the API functionality is not fully fleshed out.

An example of that is that we were looking at integrating Prisma Cloud scans into our GitHub. The goal was to scan GitHub repositories for CloudFormation and Terraform templates and send those to Prisma Cloud to assess for vulnerabilities and configuration. The APIs are a little bit on the beta-quality side. It sounds like in newer versions some of that is handled, but I think there's some room to grow.

Also, our team did run into some discrepancies between what's available, API-wise, that you have to use SaaS to get to, versus the on-premise version. There isn't necessarily feature parity there, and that can be confusing.

For how long have I used the solution?

We've been using Prisma Cloud by Palo Alto for about two-and-a-half years.

What do I think about the stability of the solution?

The stability has been excellent. The solution simply runs. It very seldom breaks and, typically, when it does, it's easy to troubleshoot and get back on track.

What do I think about the scalability of the solution?

The scalability has been good for our use cases.

When we first adopted it, a single console could cover 1,000 hosts that were running container workloads. That was more than enough for us, and to date it has been more than enough for us, because we have multiple network environments that need to stay separated, from a connectivity standpoint. We've needed to put up multiple consoles, one to serve each of those network environments. Within each of those network environments, we have not needed to scale up to 1,000 yet.

There's wide adoption across our organizations, but at the same time there is tremendous room to grow with those organizations. Many organizations are using it somewhat, but we are probably at 20 to 25 percent of where we need to be.

It's safe to say we have several hundred people working with the solution, but it's not 1,000 yet. They are primarily developers. There are some operational folks who use it as well. To me, that speaks to the ease of deployment and administration of this solution. You really don't need a large operational group to deploy. When it comes to security, incident response, and the continuous monitoring aspects that a central security team handles, I don't have insight because I don't work in that area of the company, but I see that as expanding down the road. It's another area of growth for us.

How are customer service and technical support?

Their technical support has been very good. Everyone that I've been involved with has been very responsive and helpful. They have remained engaged to drive resolution of issues that we have found.

Which solution did I use previously and why did I switch?

We did not have a previous solution.

How was the initial setup?

Standing up an instance is quite simple, for an enterprise solution. It has been excellent in that regard.

It's hard to gauge how long our deployment took. We have multiple consoles and multiple network contexts, and a couple of those have different sets of rules and different operational groups to work with. It took us several months across all those network environments that we needed to cover, but that's not counting the actual amount of time it took to execute steps to install a console and deploy it. The actual steps to deploy a console and the Defenders is a very small amount of time. That's the easiest part.

Our implementation strategy for Prisma Cloud was that we wanted to provide visibility across the SDLC: static scan, post-build, as things go to the artifact repository. Our goal was to provide runtime monitoring at our development, test, and production platforms.

What about the implementation team?

We did it ourselves.

What's my experience with pricing, setup cost, and licensing?

I don't know a better way to do it, but their licensing is a little confusing. That's due to the breadth of different types of technologies they are trying to cover. The way you license depends on where you're securing. When they were Twistlock it was a simple licensing scheme and you could tell what you were doing. Now that they've changed that scheme with Palo Alto, it is quite confusing. It's very difficult to predict what your costs are going to be as you try to expand coverage.

Which other solutions did I evaluate?

At the time we looked at our incumbent vendors and others that were container-specific. We were trying to avoid a new vendor relationship, if possible. We looked at Rapid7 and Tenable. Both were starting to get into the container space at the time. They weren't there yet. We did our evaluation and they were more along the lines of a future thought process than an implementable solution.

We looked at Twistlock, which was a start-up at the time, and Aqua because they were in the space, and we looked at a couple of cloud solutions, but they were in cloud and working their way to container. We did the same exercise with Evident.io and RedLock, before they were purchased by Palo Alto. They were the only vendors that covered our requirements. In the case of Twistlock, their contributions in the NIST 800-190 standards, around container security, helped influence our decision a little bit, as did the completeness of their vision and implementation, versus their competitors.

What other advice do I have?

My advice would be not to look at it like you're implementing a tool. Look at it like you're changing your processes. You need to plan for the impact of the data for the various teams across Dev and Security and Ops. Think very holistically, because a lot of this cloud container stuff spans many teams. If you only look at it as "I'm going to plug a tool in and I'm going to get some benefit," I think you'll fail.

Prisma Cloud covers both cloud and container, or could cover either/or, depending on your needs. But in both of those cases, there's often confusion about who owns what, especially as you're creating new teams with the transition to DevOps and DevSecOps. Successful implementation has a lot to do with working out lines of ownership in these various areas and changing processes and even the mindset of people. You have to make strides there to really maximize the effectiveness of the solution.

The solution provides Cloud Security Posture Management in a single pane of glass if you're using the SaaS solution, but we do not. Our use case does not make it feasible for us to use the SaaS solution. But with the Prisma Cloud features and compute features in the self-hosted deployment, you have to go to multiple panes to see all the information.

When it comes to the solution helping us take a preventative approach to cloud security, it's a seven or eight out of 10. The detective side is a little higher. We are using the detective controls extensively. We're getting the visibility and seeing those things. There is a lot of hesitance to use preventative controls here, both on the development side—the continuous integration stuff—and particularly in the runtime, continuous monitoring protection, because you are just generally afraid. This mirrors years and years ago when intrusion prevention first came out at the network level. A lot of people wanted to do detection, but it took quite a few years for enterprises to get the courage to start actively blocking. We're in that same growth period with container security.

When it comes to securing the entire cloud-native development lifecycle, across build, deploy, and run, it covers things pretty well. When I think about it in terms of build, there are integrations with IDEs and development tools and GitHub, etc. Deploy is a little shakier to me. I know we have Jenkins integration. And run is good. In terms of continuous monitoring, it feels build and run are a little stronger than deploy. If we could see better integration with other tools, that might help. If I'm doing that deploy via Terraform or Spinnaker, I don't know how all that plays with the Jenkins integrations and some of the other integrations that Palo Alto has produced.

Overall, it feels like a pretty good breadth of integrations, as far as what they claim. They certainly support some things that we don't use here at build and deploy and runtime. But a lot of what they rely on, in terms of deploy, is API-driven, so it's not an easy-to-configure, built-in integration. It's more like, "We have an API, and if you want to write custom software to use that API, you can." They claim support in that way, but it's not at the same level as just configuring a couple of items and then you can scan a registry.

In the container space, we have absolutely seen benefit from the solution for securing the cloud-native development lifecycle. At the same time, it has required some development on our part to get the integration. Some of that is because we predated some of the integrations they offer. But in the container space, there has definitely been a huge impact. The impact has been less so in cloud configuration, because there are so many competing offerings that can do that with Terraform and Azure Security Center and Amazon native tools. I don't feel like we've made quite the same inroads there.

In terms of it providing a single tool to protect all of our cloud resources and applications, I don't think it does. Maybe that's because of our implementation, but it just doesn't operate at every level. I don't think we'd ever go down that path. We have on-premise tools that have been here a long time. We've built processes around reporting. Vulnerability scanning is an example. We run Nessus on-premise, and we wouldn't displace Nessus with, say, a Twistlock Defender to do host-level scanning in the cloud, because we'd have a disparate tool set for cloud versus on-premise for no reason. I don't ever see Prisma Cloud being the single solution for all these security features, even if they can support them.

It's important that it integrate with other tools. We talked earlier about a single dashboard. A lot of those dashboards are aggregating data from other tools. One thing that has been important to us is feeding data to Splunk. We have a SIEM solution. So I would always envision Prisma Cloud as being a participant in an ecosystem.

In summary, I actually hate most security products because they're very siloed and you have mixed-vendor experiences. I don't think they take a big-picture view. I've been really pleasantly surprised with how Prisma Cloud is, over time, covering more and more of the topics I care about, and listening to customer feedback and growing the product in the right directions. For the most part, it does what they say it will do. The vendor support has also been good. I would definitely give the vendor an eight out of 10 because they've been great in understanding and providing solutions in the space, and because of the reliability and the responsiveness. They've been very open to our input as customers. They take it very seriously and we've taken advantage of that and developed a good relationship with them.

When it comes to the solution itself, I would give the compute solution an eight. But I don't think I would give the Prisma Cloud piece an eight. So overall, I would rate the solution as a seven because the compute is stronger than the other piece, what used to be RedLock.

I would also emphasize what I think is a strong roadmap for the product, and that Palo Alto is really interested in customer feedback. They do seem to incorporate it. That may be our unique experience because our use cases just happen to align with what Palo wants to do, but I think they're heading in the right direction.

Early on in a solution's life cycle or problem space, it's more important to have that responsiveness than it is even to have the fullest of solutions. The fact that we came across this vendor, one that not only mostly covered what we needed when we were first looking for it three years ago, but that has also been as responsive as they have to grow the solution, has been really positive.

Which deployment model are you using for this solution?

On-premises
Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
AM
Security Architect at an educational organization with 201-500 employees
Real User
Top 20
The magic happens with traffic passing through multiple zones and our data center, as we can quickly troubleshoot problems

Pros and Cons

  • "The application visibility is amazing. For example, sometimes we don't know what a particular custom port is for and what is running on it. The visibility enables us to identify applications, what the protocol is, and what service is behind it. Within Azure, it is doing a great job of providing visibility. We know exactly what is passing through our network. If there is an issue of any sort we are able to quickly detect it and fix the problem."
  • "Getting new guys trained on using the solution requires some thought. If someone is already trained on Palo Alto then he's able to adapt quickly. But, if someone is coming from another platform such as Fortinet, or maybe he's from the system side, that is where we need some help. We need to find out if there is an online track or training that they can go to."

What is our primary use case?

We had an internal debate regarding our firewall solution for the cloud. Initially we had a vendor that suggested we could build a whole environment using the Azure firewall, but we had requirements for Zero Trust architecture. We are essentially like a bank. We were planning to host some PCI services in the cloud and we were planning to create all the zones. When we looked at the feature set of Azure, we were not able to find Layer 7 visibility, which we had on our firewalls, and that is where the debate started. We thought it was better to go with a solution that gives us that level of visibility. Our team was comfortable with Palo Alto as a data center firewall, so we went for Prisma Cloud.

How has it helped my organization?

The comprehensiveness of the solution for protecting the full cloud-native stack is pretty good. It is doing a good job in three areas: identification, detection, and response. We are able to see what is wrong, what is happening, and what we allowed, even for troubleshooting. If something goes bad, we need to check where it went bad and where it started. For example, if there is an issue that seems to be performance-related, we are able to look at the logs and the traffic flow and identify if the issue really is performance-related or if it is a security issue. Because we are new to the cloud, we are using a combination of different features to understand what is going on, if the application owner does not know what is wrong. We use the traffic analysis to find out what it was like yesterday or the day before and what is missing. Perhaps it is an authentication issue. We use it a lot for troubleshooting.

We have implemented Palo Alto's SOAR solution, Demisto, and have automated some of the things that our SOC team identified, related to spam and phishing. Those workflows are working very well. Things that would take an analyst between three and six hours to do can now be achieved in five to eight minutes because of the automation capabilities.

Overall, the Palo Alto solution is extremely good for helping us take a preventative approach to cloud security. One of the problems that we had was that, in the cloud, networking is different from standard networking. Although only a portion of our teams is trained on the cloud part, because we had engineers who were using the platform, they were able to quickly adapt. We were able to use our own engineers who were trained in the data center to very quickly be able to work on Prisma Cloud. But when we initially tried to do that with Azure itself, we had a lot of difficulty because they did not have the background in how Azure cloud works.

Also, when you have a hybrid cloud deployment, you will have something on-prem. Maybe your authentication or certain applications are still running on-prem and you are using your gateway to communicate with the cloud. A lot of troubleshooting happens in both the data centers. When we initially deployed, we had separate people for the cloud and for the local data centers. This is where the complication occurred. Both teams would argue about a lot of things. Having a single solution, we're able to troubleshoot very quickly. The same people who work on our Palo Alto data center firewalls are able to use Prisma Cloud to search and find out what went wrong, even though it's a part of the Azure infrastructure. That has been very good for us. They were easily able to adapt and, without much training, they were able to understand how to use Prisma Cloud to see what is happening, where things are getting blocked, and where we need to troubleshoot.

The solution provides the visibility and control we need, regardless of how complex or distributed the cloud environments become. If you have traffic passing through multiple zones and you have your own data center as well, that is where it does the magic. Using Prisma Cloud, we're able to quickly troubleshoot and identify where the problem is. Suppose that a particular feature in Office 365 is not working. The packet capture capability really helps us. In certain cases, we have seen where Microsoft has had bugs and that is one area where this solution has really helped us. We have been able to use the packet capture capability to find out why it was not working. That would not have been possible in a normal solution. We are using it extensively for troubleshooting. We are capturing the data and then going back to the service provider with the required logs and showing them the expected response and what we are getting. We can show them that the issue is on their side.

When it comes to Zero Trust architecture, it's extremely good for compliance. In our data center, we did a massive project on NSX wherein we had seven PCI requirements. We needed to ensure that all the PCI apps pass through the firewall, that they only communicate with the required resources, and that there is no unexpected communication. We used Prisma Cloud to implement Zero Trust architecture in the cloud. Even between subnets, no communication is allowed; only what we explicitly allow passes through the firewall. The rest gets blocked, which is very good for compliance.

If I have to generate a report for the PCI auditor, it is very simple. I can show him that we have the firewall with the vulnerability and IPS capabilities turned on, and very quickly provide evidence to him for the certification part. This is exactly what we wanted and is one of the ways in which the solution is helping us.

Another of the great things about Prisma Cloud is that the management console is hosted, which means we are not managing the backend. We just use Prisma Cloud to find out where an issue is, and we can go back in time much faster. If you have an appliance, its administration and support are also part of your job. With Prisma Cloud, you don't worry about those things; you just focus on the issues and manage the cloud appliances. This is new for us and extremely good. Even though we have a lot of traffic, the search capabilities are very fast, making them extremely good for troubleshooting.

Because the response is much faster, we're able to quickly find problems, even ones that are not related to networking but to an application. We are able to help the developers by telling them where a reset packet is coming from and what is expected.

We are using the new Prisma Cloud 2.0 Cloud Security Posture Management features. For example, there are some pre-built checklists that we utilize. It really helps us identify things, compared to Panorama, the on-prem solution. There are a lot of elements that are way better than Panorama. For instance, it helps us know which things we really need to work on by identifying issues of high importance. The dashboards and the console are quite good compared to Panorama.

If one of our teams is talking about slowness, we are able to find out where this slowness is coming from, what is not responding. If there is a lock on the database, and issues are constantly being reported, we are able to know exactly what is causing the issue in the backend application.

What is most valuable?

The main feature is the management console, which gives us a single place to manage all our requirements. We have multiple zones and, using UDR [user-defined routing], we are sending the traffic back to Palo Alto. From there we define the rules for each application. What we like about it is the ease of use and the visibility.
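
As an illustration of that UDR pattern, the sketch below creates a route table whose default route points at a firewall appliance, using the Azure Python SDK. It is a minimal example, not our production configuration; the subscription ID, resource group, region, and firewall IP are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

# Placeholders: subscription, resource group, region, and the firewall's
# private IP are all specific to your environment.
SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"
FIREWALL_PRIVATE_IP = "10.0.1.4"

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# A route table whose default route hands all traffic to the firewall appliance.
route_table = client.route_tables.begin_create_or_update(
    "rg-network",
    "rt-spoke-via-firewall",
    {
        "location": "westeurope",
        "routes": [
            {
                "name": "default-via-firewall",
                "address_prefix": "0.0.0.0/0",
                "next_hop_type": "VirtualAppliance",
                "next_hop_ip_address": FIREWALL_PRIVATE_IP,
            }
        ],
    },
).result()
print(f"Route table {route_table.name} ready; associate it with each spoke subnet.")
```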

The application visibility is amazing. For example, sometimes we don't know what a particular custom port is for and what is running on it. The visibility enables us to identify applications, what the protocol is, and what service is behind it. Within Azure, it is doing a great job of providing visibility. We know exactly what is passing through our network. If there is an issue of any sort we are able to quickly detect it and fix the problem.

The solution provides Cloud Security Posture Management, Cloud Workload Protection, Cloud Network Security, and Cloud Infrastructure Entitlement Management in a single pane of glass. When it comes to anomaly detection, because we have Layer 7 visibility, we can identify something suspicious using the anomaly detection feature, even if the traffic is allowed. We also wanted to be able to go back in time in terms of visibility. Suppose something happened two hours ago. From the console we can easily search that window and see what happened, what changes might have been made, and where the traffic was coming from. These features are very good for investigation.
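
For readers curious what that kind of look-back query involves, here is a minimal sketch against the Prisma Cloud REST API as publicly documented. The tenant URL, credentials, and RQL string are illustrative placeholders, and exact response field names may differ by API version.

```python
import requests

API = "https://api.prismacloud.io"  # your tenant's API URL may differ

# Exchange access keys for a session token (sent in the x-redlock-auth header).
token = requests.post(
    f"{API}/login",
    json={"username": "<access-key-id>", "password": "<secret-key>"},
    timeout=30,
).json()["token"]

# Ask what a set of resources looked like over the last two hours.
resp = requests.post(
    f"{API}/search/config",
    headers={"x-redlock-auth": token},
    json={
        "query": "config from cloud.resource where cloud.type = 'azure'",
        "timeRange": {"type": "relative", "value": {"amount": 2, "unit": "hour"}},
    },
    timeout=60,
)
for item in resp.json().get("data", {}).get("items", []):
    print(item.get("name"), item.get("regionId"))
```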

In addition, there are some forensic features we are utilizing within the solution, plus data security features. For example, if we have something related to financial information, we can scan it using Prisma Cloud. We are using a mixture of everything it offers, including network traffic analysis, user activity, and vulnerability detection. All these things are in one place, which is something we really like.

Also, if we are not aware of what the port requirements are for an application, which is a huge issue for us, we can put it into learning mode and use the solution to detect what the exact port requirements are. We can then meet to discuss which ones we'll allow and which ones are probably not required.

What needs improvement?

The only part that is actually tough for us is that we have a professional services resource from Palo Alto working with us on customization. One thing we are thinking about is how, if we have similar requirements in the future, we can get that capability in-house. The professional services person is a developer; he takes our requirements and writes the code for the APIs or whatever else he needs to access. We will likely be looking for a resource for the Demisto platform.

The automation also took more time than we thought it would. We had some challenges because Demisto was a third-party product. Initially, the engineer working with us thought that everything was possible, but later on, when he tried to do it all, some things were not achievable and we had to change strategy multiple times. We have now reached a comfort zone and have been able to achieve what we wanted.

Also, getting new people trained on the solution requires some thought. If someone is already trained on Palo Alto, he's able to adapt quickly. But if someone is coming from another platform, such as Fortinet, or from the systems side, that is where we need some help. We need to find out if there is an online track or training they can take.

Related to training is the fact that changes made in the solution are reflected directly in the production environment. As of now, we are not aware of any method for creating a demo environment where we can train new people. These are the challenges we have.

For how long have I used the solution?

We have been using Prisma Cloud by Palo Alto Networks for about eight months.

What do I think about the stability of the solution?

We have not had many issues with the solution's stability, and whatever challenges we have had have been in the public cloud. With the solution itself, there has only been one issue we got stuck on, and that was NAT-ing; it was resolved later. We ran into some design problems because public internet access was an issue, and that took us some time. But it was only the NAT-ing part where we got stuck. The rest has all been smooth.

What do I think about the scalability of the solution?

As of now, we have not put a load on the system, so we will only know how it handles that when we start migrating our services. For now, we've just built the landing zones and only very few services are there. It will take a year or so before we know how it handles our load.

This is our main firewall solution. We are not relying on the cloud-based firewall as of now. All our traffic is going through Prisma Cloud. Once we add our workloads, we will be using the full capacity of the solution.

How are customer service and support?

We have not had any issues up to now.

Which solution did I use previously and why did I switch?

We initially tried to use the Azure Firewall and the virtual network capabilities available in Azure, but they were very limited. It was essentially a packet-filtering solution with a lot of limitations, and we ended up going back to Palo Alto.

How was the initial setup?

The initial setup was straightforward. There was an engineer who really helped us and we worked with them directly. We did not have any challenges.

The initial deployment took us about 15 days and whatever challenges we had were actually from the design side. We wanted to do certain things in a different way and we made a few changes later on, but from the deployment and onboarding perspectives, it was straightforward.

We have a team of about 12 individuals who are using Prisma Cloud, all from the network side, who are involved in the design. On the security side, three people use it. We want to increase that number, but as I mentioned earlier, there is the issue of how we can train people. For maintenance, we have a 24/7 setup and we have at least six to eight engineers, three per shift. Most of them are from the network security side, senior network security engineers, who mainly handle proxy and firewall.

What about the implementation team?

Our implementation strategy included using a third-party vendor, Crayon, who set up the basic design for us. Once the design was ready, we consulted with the Palo Alto team, telling them what we wanted to implement: this many zones and these subnets. It didn't take much time, both because we knew exactly what our subnets were and because the team helping us already had deployment experience.

Our experience with Crayon went well. Our timeline was extremely short and in the time that was available they did an excellent job. We reached a point where the landing zones were ready and whatever issues we had were resolved.

What's my experience with pricing, setup cost, and licensing?

I can't say much about the pricing because we still have not started using the solution to its full capabilities. As of now, we don't have any issues. Whatever we have asked for has been delivered.

If you pay for three years of Palo Alto, it's better. If you're planning on doing this, it's obviously not going to be for one year, so it's better if you go with a three-year license.

The only challenge we have is with the public cloud vendor pricing. The biggest lesson I have learned is around the issues related to pricing for public cloud. So when you are doing your segmentation and design, it is extremely important that you work with someone who knows and understands what kinds of needs you will have in the future and how what you are doing will affect you in terms of costs. If you have multiple firewalls, the public cloud vendor will also charge you. There are a lot of hidden costs.

Every decision you make will have certain cost implications. It is better that you try to foresee and forecast how these decisions are going to affect you. The more data that passes through, the more the public cloud will charge you. If, right now, you're doing five applications, try to think about what 100 or 250 applications will cost you later.

Which other solutions did I evaluate?

If we had gone with the regular Azure solution, our concerns were about the logging, monitoring, and search capabilities. If something was getting blocked, how would we detect it? The troubleshooting was very complicated. That is why we went with Prisma Cloud: for the troubleshooting.

Microsoft is not up to where Palo Alto is, right now. Maybe in six months or a year, they will have some comparable capabilities, but as of now, there is no competitor.

Before choosing the Palo Alto product, we checked Cisco and Fortinet. In my experience, it seemed that Cisco and Fortinet were still building their products; they were not ready. We were lucky that when we went to Palo Alto, they had already done some deployments and had a solution ready on the marketplace. They were quickly able to provide us with a demo license and walk us through the capabilities and our requirements. The other vendors, when we started a year ago, were not ready.

What other advice do I have?

If you have compliance requirements such as PCI or ISO, going with Palo Alto is a good option; it will make your life much easier. If you do not have Layer 7 visibility requirements or auditing and related requirements, then you could probably survive with a traditional firewall. But if you are a midsize or enterprise company, you will need something with the capabilities of Prisma Cloud. Otherwise, you will have issues. It is very difficult to work with a typical solution where there are no logs, you don't know exactly what happened, and there is too much trial and error.

Instead of allowing everything and then trying to limit things from there, if you go with a proper solution, you will know exactly what is blocked, where it is blocked, and what to allow and what not to allow. In terms of visibility, Prisma Cloud is very good.

One thing to be aware of: there is an ongoing debate in our environment. Some engineers from the cloud division say that if we had an Azure-based product, the same engineer who handles the cloud as global administrator would have visibility into where a problem is and could handle that part himself. Because we are using Palo Alto, which has its own administrators, that discussion is still going on.

Prisma Cloud also provides security spanning multi- and hybrid-cloud environments, which is very good for us. We do not have a hybrid cloud as of now, but we are planning, in the future, to host infrastructure on different cloud providers. As of now, we only have Azure.

Because Zero Trust is something new for us, we have seen a significant increase in alerts. Previously, we only had intra-zone traffic; now we have inter-zone traffic. Zero Trust deployments are very different from traditional deployments, and it's something we have to work on. However, because of the increased security, we know when a given computer tried to scan something during office hours, or who was trying to make certain changes. So alerts have increased because of the features we have turned on.

Which deployment model are you using for this solution?

Public Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Microsoft Azure
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Devin Charters
Sr. Security Operations Manager at a healthcare company with 5,001-10,000 employees
Real User
Provides feedback directly to teams responsible for AWS or cloud accounts, enabling them to fix issues independently

Pros and Cons

  • "The policies that come prepackaged in the tool have been very valuable to us. They're accurate and they provide good guidance as to why the policy was created, as well as how to remediate anything that violates the policy."
  • "The integration of the Compute function into the cloud monitoring function—because those are two different tools that are being combined together—could use some more work. It still feels a little bit disjointed."

What is our primary use case?

We are using it for monitoring our cloud environment and detecting misconfigurations in our hosted accounts in AWS or Azure.

How has it helped my organization?

As the security operations team, our job is to monitor for misconfigurations and potential incidents in our environment. This solution does a good job of monitoring those for us and of alerting us to misconfigurations before they become potential security incidents or problems.

We've set the tool up so that it provides feedback directly to the teams responsible for their AWS or cloud accounts. It has been really helpful by getting information directly to the teams. They can see what the problem is and they can fix it without us having to go chase them down and tell them that they have a misconfiguration.

The solution secures the entire spectrum of compute options such as hosts and VMs, containers and Containers as a Service. We are not using the container piece as yet, but that is a functionality that we're looking forward to getting to use. Overall, it gives us fantastic visibility into the cloud environment.

Prisma Cloud also provides the data needed to pinpoint root cause and prevent an issue from occurring again. A lot of that has to do with the policies built into the solution and the documentation around those policies. The policy tells the user what the misconfiguration is and gives them remediation steps to fix it, which speeds up our remediation efforts. In some cases, when my team, the security team, gets involved, we're not necessarily experts in AWS and wouldn't necessarily know how to remediate the issue that was identified. But because the instructions are included as part of the Prisma Cloud product, we can just cut and paste them and provide them to the team. And when the teams address these directly, they also have access to those remediation instructions and can refer to them to figure out what they need to do to remediate the issue, which speeds up remediation of misconfigurations.

In some cases, these capabilities could be saving us hours in remediation work. In other cases, it may not really be of value to the team. For example, if an S3 bucket is public facing, they know how to fix that. But on some of the more complex issues or policies, it might otherwise take a lot more work for somebody to figure out what to do to fix the issue that was identified.
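
As an example of the simple case, remediating a public-facing S3 bucket can be a single API call. The following is an illustrative boto3 snippet (the bucket name is a placeholder), not the exact instructions Prisma Cloud provides.

```python
import boto3

s3 = boto3.client("s3")

# Turn on all four public-access blocks for the flagged bucket.
s3.put_public_access_block(
    Bucket="example-flagged-bucket",  # placeholder name
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```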

In terms of the solution’s ability to show issues as they are discovered during the build phases, I can only speak to post-deployment because we don't have it integrated earlier in the pipeline. But as far as post-deployment goes, we get notified just about immediately when something comes up that is misconfigured. And when that gets remediated, the alert goes away immediately in the tool. That makes it really easy in a shared platform like this, where we have shared responsibility between the team that's involved and my security operations team. It makes it really easy for us to be able to go into the tool and say, "There was an alert but that alert is now gone and that means that the issue has been resolved," and know we don't have to do any further research.

For the developers, it speeds up their ability to fix things. And for my team, it saves us a ton of time in not having to potentially investigate each one of those misconfigurations to see if it is still a misconfiguration or not, because it's closed out automatically once it has been remediated. On an average day, these abilities in the solution save my team two to three hours, due to the fact that Prisma Cloud is constantly updating the alerts and closing out any alerts that are no longer valid.

What is most valuable?

The policies that come prepackaged in the tool have been very valuable to us. They're accurate and they provide good guidance as to why the policy was created, as well as how to remediate anything that violates the policy. 

The Inventory functionality, enabling us to identify all of the resources deployed into a single account in either AWS or Azure, or into Prisma Cloud as a whole, has been really useful for us.

And the investigate function that allows us to view the connections between different resources in the cloud is also very useful. It allows us to see the relationship traffic between different entities in our cloud environment.

What needs improvement?

The integration of the Compute function into the cloud monitoring function—because those are two different tools that are being combined together—could use some more work. It still feels a little bit disjointed.

Also, the permissions modeling around the tool is improving, but is still a little bit rough. The concept of having roles that certain users have to switch between, rather than have a single login that gives them visibility into all of the different pieces, is a little bit confusing for my users. It can take some time out of our day to try to explain to them what they need to do to get to the information they need.

For how long have I used the solution?

I have been using Palo Alto Prisma Cloud for about a year and a half.

What do I think about the stability of the solution?

We really have had very few issues with the stability. It's been up and working. We've had maybe two or three very minor interruptions of the service and our ability to log in to it. In each case it was half an hour or an hour, at most, during which we were unable to get in, and then it was resolved. There was usually information about it in the support portal, including the reason and the expectation of when they would get it back up.

What do I think about the scalability of the solution?

It seems to scale fine for us. We started out with 10 to 15 accounts in there and we're now up to over 200 accounts and, on our end, seemingly nothing has changed. It's as responsive as it's ever been. We just send off our logs. Everything seems to integrate properly with no complaints on our side.

We have nearly 600 users in the system, and they're broken out into two different levels. There are the full system administrators, like myself and my team and the security team that is responsible for our cloud environment as a whole. We have visibility across the entire environment. And then we have the development teams and they are really limited to accessing their specific accounts that are deployed into Prisma Cloud. They have full control over those accounts.

For our cloud environments, the adoption rate is pretty much 100 percent. A lot of that has to do with that automated deployment we created. A new account gets started and it is automatically added to the tool. All of the monitoring is configured and everything else is set up by default. You can't build a new cloud account in our environment without it getting added in. We have full coverage, and we intend to keep it that way.
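
For a sense of what such automated onboarding can look like, here is a minimal sketch against Prisma Cloud's cloud-accounts API as publicly documented. It is not our actual pipeline; the payload fields may vary by API version, the role ARN is a placeholder, and a session token is assumed to have been obtained from the /login endpoint first.

```python
import requests

API = "https://api.prismacloud.io"  # tenant API URL may differ

def onboard_aws_account(token: str, account_id: str, role_arn: str) -> None:
    """Register a freshly created AWS account so monitoring starts by default."""
    resp = requests.post(
        f"{API}/cloud/aws",
        headers={"x-redlock-auth": token},
        json={
            "accountId": account_id,
            "name": f"aws-{account_id}",
            "enabled": True,
            "groupIds": [],       # account groups to attach, if any
            "roleArn": role_arn,  # the cross-account role Prisma Cloud assumes
        },
        timeout=60,
    )
    resp.raise_for_status()
```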

How are customer service and technical support?

Tech support has been very responsive. They are quick to respond to tickets and knowledgeable in their responses. Their turnaround time is usually 24 to 48 hours. It's very rare that we would open anything that would be considered a high-priority ticket or incident. Most of the stuff was lower priority and that turnaround was perfectly acceptable to us.

Which solution did I use previously and why did I switch?

This is our first tool of this sort.

How was the initial setup?

The initial setup was really straightforward. We then started using the provided APIs to do some automated integration between our cloud environment and Prisma Cloud. That has worked really well for us and has streamlined our deployment by a good deal. However, we found that the APIs were changing as we were doing our deployment. We went down the path we had created with some of those integrations, and then there were undocumented changes to the APIs that broke them, so we had to go back and fix those integrations.

What may have happened is that improvements to the API on the backend interfered with what we had been doing. It meant we had to go back and reconfigure the integration to make it work. My understanding from the team responsible is that the new integration works better than the old one did. So the changes Palo Alto made were an improvement and made the environment better, but it was something of a surprise to us, without any obvious documentation or heads-up that things were going to change. That caught us out a little and broke the integration until we figured out what had changed and fixed it.

The only learning curve is on the Compute piece specifically, and in understanding how to pivot between it and the rest of the tool, for users who have access to both. There's definitely a learning curve there because it's not at all obvious when you get into the tool the first time. There is some documentation, but we put together our own internal documentation, which we've shared with the teams to give them more step-by-step instructions on how to get to the information they're looking for.

The full deployment took us roughly a month, including the initial deployment of rolling everything out, and then the extended deployment of building it to do automated deployments into new environments, so that every new environment gets added automatically.

Our implementation strategy was to pick up all of the accounts that we knew that we had to do manually, while we were working on building out that automation to speed up the onboarding of the new accounts that we were creating.

What about the implementation team?

We did all of that on our own, just following the API documentation that they had provided. We had a technical manager from Palo Alto with whom we were working as we were doing the deployment, but the automated deployment work that we did was all on our own and all done internally.

At this point, we really don't have anybody dedicated to deployment because we've automated that process, which has vastly simplified it. Maintenance-wise, as it is a SaaS platform, we don't really have anybody who works on it on a regular basis; it's more ad hoc. If something is down, or we can't get into the portal, or whatever the case may be, somebody will open a ticket with support to see what's going on.

What was our ROI?

We have seen ROI although it's a little hard to measure because we didn't have anything like this before.

The biggest areas of ROI that we've seen with it have been the uptake by the organization, the ease of deploying the tool—especially since we got that full automation piece created and taken care of—as well as the visibility and the speed at which somebody can start using the tool. I generally give employees about an hour or two of training on the tool and then turn them loose on it, and they're capable of working out of it and getting most of the value. There are some things that take more time to get up to speed on, but for the most part, they're able to get up to speed pretty quickly, which is great.

What's my experience with pricing, setup cost, and licensing?

The pricing and the licensing are both very fair.

There aren't any costs in addition to the standard licensing fees, at this time. My understanding is that at the beginning of 2021 they're not necessarily changing the licensing model, but they're changing how some of the new additions to the tool are going to be licensed, and that those would be an additional cost beyond what we're paying now.

The biggest advice I would give in terms of costs would be to try to understand what the growth is going to look like. That's really been our biggest struggle, that we don't have an idea of what our future growth is going to be on the platform. We go from X number of licenses to Y number of licenses without a plan on how we're going to get from A to B, and a lot of that comes as a bit of a surprise. It can make budgeting a real challenge for it. If an organization knows what it has in place, or can get an idea of what its growth is going to look like, that would really help with the budgeting piece.

Which other solutions did I evaluate?

We had looked at a number of other tools. I can't tell you off the top of my head what we had looked at, but Prisma Cloud was the tool that we had always decided that we wanted to have. This was the one that we felt would give us the best coverage and the best solution, and I feel that we were correct on that.

The big pro with Prisma Cloud was that we felt it gave us better visibility into the environment and into the connections between entities in the cloud. That visualization piece is fantastic in this tool. We felt like that wasn't really there in some of the other tools. 

Some of the other tools had a little bit better or broader policy base, when we were initially looking at them. I have a feeling that at this point, with the rate that Palo Alto is releasing new policies and putting them into production, that it is probably at parity now. But there was a feeling, at the time, among some of the other members of the team that Palo Alto came up short and didn't have as many policies as some of the other tools that we were looking at.

What other advice do I have?

I would highly recommend automating the process of deploying it. That has made just a huge improvement on the uptake of the tool in our environment and in the ease of integration. There's work involved in getting that done, but if we were trying to do this manually, we would never be able to keep up with the rate that we've been growing our environment.

The biggest lesson I've learned in using this solution is that we were absolutely right that we needed a tool like this in our environment to keep track of our AWS environment. It has identified a number of misconfigurations and it has allowed us to answer a lot of questions about those misconfigurations that would have taken significantly more time to answer if we were trying to do so using native AWS tools.

The tool has an auto-remediation functionality that is attractive to us. It is something that we've discussed, but we're not really comfortable in using it. It would be really useful to be able to auto-remediate security misconfigurations. For example, if somebody were to open something up that should be closed, and that violated one of our policies, we could have Prisma Cloud automatically close that. That would give us better control over the environment without having to have anybody manually remediate some of the issues.

Prisma Cloud also secures the entire development lifecycle from build to deploy to run. We could integrate it closer into our CI/CD pipeline. We just haven't gone down that path at this point. We will be doing that with the Compute functionality and some of the teams are already doing that. The functionality is there but we're just not taking advantage of it. The reason we're not doing so is that it's not how we initially built the tool out. Some of the teams have an interest in doing that and other teams do not. It's up to the individual teams as to whether or not it provides them value to do that sort of an integration.

As for the solution's alerts, we have them identified at different severities, but we do not filter them based on that. We use those as a way of prioritizing things for the teams, to let them know that if it's "high" they need to meet the SLA tied to that, and similarly if it's "medium" or "low." We handle it that way rather than using the filtering. The way we do it does help our teams understand what situations are most critical. We went through all of the policies that we have enabled and set our priority levels on them and categorized them in the way that we think that they needed to be categorized. The idea is that the alerts get to the teams at the right priority so that they know what priority they need to assign to remediating any issues that they have in their environment.

I would rate the solution an eight out of 10. The counts against it would be that the Compute integration still seems to need a little bit of work, as though it's working its way through things. And some of the other administrative pieces can be a little bit difficult. But the visibility is great and I'm pretty happy with everything else.

Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Alex Jones
Information Security Manager at Cobalt.io
Real User
Provides central visibility across multiple cloud environments in a single pane of glass

Pros and Cons

  • "Prisma Cloud has enabled us to take a very strong preventive approach to cloud security. One of the hardest things with cloud is getting visibility into workloads. With Prisma Cloud, you can go in and get that visibility, then set up policies to alert on risky behavior, e.g., if there are security groups or firewall ports open up. So, it is very helpful in preventing configuration errors in the cloud by having visibility. If there are issues, then you can find them and fix them."
  • "Some of the usability within the Compute functionality needs improvement. I think when Palo Alto added on the Twistlock functionality, they added a Compute tab on the left side of the navigation. Some of the navigation is just a little dense. There is a lot of navigation where there is a tab and dropdowns. So, just improving some of the navigation where there is just a very dense amount of buttons and drop-down menus, that is probably the only thing, which comes from having a lot of features. Because there are a lot of buttons, just navigating around the platform can be a little challenging for new users."

What is our primary use case?

Previously, we were primarily using Amazon Web Services in a product division. We initially deployed RedLock (Prisma Cloud) as a PoC for that product division. Because it was a large organization, we knew there were Azure and GCP workloads as well, so we needed a multi-cloud solution. In my current role, we are primarily running GCP, but we have some presence in Amazon Web Services as well. In both use cases, multi-cloud functionality was a big requirement.

We are on the latest version of Prisma Cloud.

How has it helped my organization?

It is very important that Prisma Cloud provides security spanning multi-cloud environments, where you have Amazon, Azure, and GCP. Being able to centralize all those assets, have visibility, and set policies and rules within one dashboard when you have multiple cloud accounts is a big advantage.

The comprehensiveness of Prisma Cloud for securing the entire cloud-native development lifecycle was shown when Palo Alto bought Twistlock and integrated some of the container security pieces, particularly for containers, Docker, and Kubernetes, building in the Prisma Cloud Compute tab. Having that functionality from Twistlock, more focused on Docker and containers, filled in a space where the original Prisma RedLock piece was a little more focused on just the API, e.g., passive scanning. The integration of Twistlock into Prisma Cloud Compute definitely expanded the functionality into the container and Docker space, which is a big growth area in the cloud as well.

Prisma Cloud has enabled us to take a very strong preventive approach to cloud security. One of the hardest things with cloud is getting visibility into workloads. With Prisma Cloud, you can go in and get that visibility, then set up policies to alert on risky behavior, e.g., if there are security groups or firewall ports open up. So, it is very helpful in preventing configuration errors in the cloud by having visibility. If there are issues, then you can find them and fix them. 

It also educates and trains cloud operators on how to better design their cloud and infrastructure deployments. Prisma Cloud has very good remediation steps built in. If you find an issue, it gives you steps: "Here is how you go into the Console and make this change to close out this issue and prevent it in the future." So it is a strong tool for the prevention and protection of the cloud in general.

We have gone in and done some tuning to remove alerts that were false positives. That reduced some of the alerts. Then, as our team has gone in and fixed issues, we have seen from the metrics and tracking of Prisma Cloud that alerts have been reduced.

What is most valuable?

The compliance tabs were helpful just to have visibility into the assets, as were the asset management tabs. In the cloud, everything is very dynamic and ephemeral, so being able to see a dynamic asset inventory of what we have in cloud environments was a huge plus. It is great to have that visibility in a dashboard instead of having to dump things into a spreadsheet; if you try to do asset inventory in spreadsheets, five minutes later it changes because the cloud is dynamic. So the asset inventory and compliance tabs are strong.

When the cloud team makes a change that may introduce some risk, then we get alerts.

We pretty heavily used the Resource Query Language (RQL) and the investigate tab to find what instances and cloud resources are externally facing and might be higher risk, looking for particular patterns in the resources. 
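
To illustrate, these are the kinds of RQL queries we mean. The attribute paths below are examples only and vary by cloud provider and Prisma Cloud API version.

```python
# Illustrative RQL of the kind used in the Investigate tab (placeholders;
# verify attribute paths against your Prisma Cloud version).
EXAMPLES = {
    "security groups open to the world": (
        "config from cloud.resource where api.name = "
        "'aws-ec2-describe-security-groups' AND json.rule = "
        "ipPermissions[*].ipRanges[*] contains 0.0.0.0/0"
    ),
    "instances with public IPs": (
        "config from cloud.resource where api.name = "
        "'aws-ec2-describe-instances' AND json.rule = publicIpAddress exists"
    ),
}

for label, rql in EXAMPLES.items():
    print(f"{label}:\n  {rql}\n")
```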

Prisma Cloud provides the following in a single pane of glass within a dashboard: Cloud Security Posture Management, Cloud Workload Protection, Cloud Network Security, and Cloud Infrastructure Entitlement Management. Without it, things are particularly challenging in a multi-cloud environment: you have to log in to Google Cloud and look at your infrastructure and alerting there, then switch over and log in to the AWS Console to do some work with Amazon. The clouds' own dashboards will still be separate, but having central visibility across multiple cloud environments within one dashboard is definitely important.

The solution's security automation capabilities are definitely good. We use some of the automation within the alerting: if Prisma Cloud detects a change above a certain threshold, e.g., a medium- or high-risk issue, it sends off an alert to our infrastructure team's Slack channel and creates a Jira ticket. The automation with Slack and Jira has been a very good feature.
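
The sketch below shows the general shape of that severity-based routing. It is illustrative only: the Slack webhook URL and alert field names are hypothetical placeholders, and the Jira step is left as a comment.

```python
import requests

SLACK_WEBHOOK = "https://hooks.slack.com/services/T000/B000/XXX"  # placeholder

def route_alert(alert: dict) -> None:
    """Forward medium-and-above severity alerts to the infrastructure channel."""
    if alert.get("severity") not in ("medium", "high"):
        return  # low-severity alerts stay in the console
    text = (
        f"[{alert['severity'].upper()}] {alert.get('policyName', 'unknown policy')} "
        f"on {alert.get('resourceName', 'unknown resource')}"
    )
    requests.post(SLACK_WEBHOOK, json={"text": text}, timeout=10)
    # A Jira ticket would be created here as well, e.g. via Jira's REST API.

route_alert({"severity": "high",
             "policyName": "Security group open to 0.0.0.0/0",
             "resourceName": "sg-0abc123"})
```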

The Prisma Cloud tool identifies the offending cloud resource for the security team, along with the context: the resource, the cloud account, and the cloud environment the resource is in. There is always very good context on remediation, e.g., how do we go in and fix the issue? Do we go through automation or log in to the Cloud Console to remediate? The alerts include the context that is needed, as well as the risk ranking and severity, whether it is a high, medium, or low issue.

The Prisma Cloud Console always has good remediation steps, whether it is going into the Console, updating a Cloud Formation, or Terraform scripts. The remediation guidance is always very helpful from Prisma Cloud.

What needs improvement?

Some of the usability within the Compute functionality needs improvement. I think when Palo Alto added on the Twistlock functionality, they added a Compute tab on the left side of the navigation. Some of the navigation is just a little dense. There is a lot of navigation where there is a tab and dropdowns. So, just improving some of the navigation where there is just a very dense amount of buttons and drop-down menus, that is probably the only thing, which comes from having a lot of features. Because there are a lot of buttons, just navigating around the platform can be a little challenging for new users.

They could improve a little bit of the navigation, where I have to kind of look through a lot of the different menus and dropdowns. Part of this just comes from it having so many awesome features. However, the navigation can sometimes be a little bit like, "I can't remember where the tab was," so I have to click and search around. This is not a big negative point, but it is definitely an area for improvement.

For how long have I used the solution?

I started using this solution when it was still called RedLock. Before Palo Alto bought RedLock, I used RedLock for about a year and then for another year or two once Palo Alto bought them, rebranding them as Prisma Cloud. So, I have been using it for about three or four years.

What do I think about the stability of the solution?

It is very stable and solid. We haven't really had any issues with the dashboard. The availability is there. The ability to log in and get near real-time data on our cloud environment is very good. Overall, the stability and accessibility has been good.

What do I think about the scalability of the solution?

We use it pretty much daily, several days a week. We are licensed for 200 workloads in Prisma Cloud.

We are definitely still working on maturing some of our operations. We have a pretty small infrastructure team; just two engineers who are focused on infrastructure. We are trying to automate as much as we can, and Prisma Cloud supports most of that. There are still some cases where you have to log into the Console and do some clicking around. However, for the most part, we are trying to automate as much as we can to scale those operations with a very small infrastructure and security team.

How are customer service and technical support?

Their customer and technical support is very good. They helped us with scoping, getting an estimate of how many workloads and resources we had. Their support team helped us through some issues with the API configuration on the Defender side. We had a couple of questions come up, and the customer success and support engineers were very responsive and helpful.

The sales team was really good. We leveraged some of our relationships, working extensively with some of the leadership at Palo Alto in Unit 42, their threat team. The sales team gave us a pretty good deal right before the end of last year, so we got a good discount and were able to complete the purchase. Overall, it was a good experience.

Which solution did I use previously and why did I switch?

This was a new implementation for our company.

How was the initial setup?

Deploying the baseline for Prisma Cloud, its API configuration, was straightforward. To set up the API roles and hook in the API connectivity, we were able to do that within a couple of hours. The Prisma Cloud piece at the API level was very quick. The Defender agents were a bit more complicated because we had to deploy the Compute Defender agents into our containers, Docker, and Kubernetes. That was a little more complex, because we were deploying, not just connecting an API. We were deploying agents within our environment. So, the API side was very simple and fast. The Defender side was a bit more complicated.

We are still working on expanding and deploying some more Defender agents. The API piece was deployed within about a week, which was very fast. On the Defender side, with the infrastructure team's input, it took us several weeks to get the Defender agents deployed.

When we deployed Prisma Cloud, we established security baselines with our infrastructure team for what was running in the cloud. They were using some automation and scripting and thought everything was okay: just run a script and it deploys this server and infrastructure in the cloud. What we found was that there were some misconfigurations. A default script was opening up some ports that were not needed. So we went back to the infrastructure team and said, "These ports were uncovered by our Prisma Cloud scanning. Is there a business use or any valid reason for these ports to be open?" The team said, "No, we don't really need these ports." They were just defaults added by the deployment in Google or AWS. So we worked with them to change their defaults and their scripts. Now, when they deploy the Terraform script, it makes sure those ports are automatically closed.

What about the implementation team?

We purchased directly from Palo Alto. We didn't use a system integrator. We purchased directly from them and went through their support team. I have a good relationship with the sales and customer success team at Palo Alto just from past relationships. So, we did a direct purchase.

What was our ROI?

We will eventually see return on investment just out of the automation and the ability to scale the platform up.

We have reduced alert investigation times by approximately a couple hours a week.

What's my experience with pricing, setup cost, and licensing?

The pricing is good. They gave us some good discounts right at the end of the year based on the value that it brings, visibility, and the ability to build in cloud, compliance, and security within one dashboard. 

Which other solutions did I evaluate?

We did look at a couple other vendors who do similar cloud workload protections. Based on the relationships that we have with Palo Alto, we knew that Palo Alto was kind of the leader in this space. We had hands-on experience with the tool and Palo Alto was also a customer of ours. So, we had some strong relationships and Palo Alto was the leader. 

We did some demos with different tools that were not as comprehensive. We had some tools that we looked at which just focused more on the container side and some that focused more on the cloud API layer. Since Prisma Cloud has unified some of these different pieces into one platform, we ultimately decided that Prisma Cloud was going to be the best solution for us.

What other advice do I have?

It is a good tool. Work with your stakeholders and cloud teams to implement Prisma Cloud within as many environments as you can to get that rich amount of data, then come up with a strong strategy for integrations and alerting. Prisma Cloud has a lot of integrations out-of-the-box, like ServiceNow, Jira, and Slack. Understand what your business teams need as well as what your engineering and developers need. Try to work on the integrations that allow for the maximum amount of integration and automation within a cloud environment. So, work with your business teams to come up with a plan for how to implement it in your cloud, then how to best integrate the tooling and alerting.

While Prisma Cloud does have the ability to do auto-remediation as part of its automation, we haven't turned any of that on for now because those features have a tendency to sometimes break things. For example, automatically shutting down a security group or server can sometimes have an impact on availability. So we don't use any of the auto-remediation features, but we do have automation set up with Jira and Slack to create tickets and events for our ticketing and infrastructure teams' Slack channels.

We definitely want to continue to explore and build in some of the shift-left principles, getting the tool into our dev cycles earlier. We have plans to expand more on the dev side. I am hiring an AppSec engineer who will be focused more on development and AppSec. That is in our roadmap; it has just been something we have been trying to get into our backlog among a lot of other projects.

I would rate this solution as a nine out of 10.

Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
RW
Sr. Information Security Manager at a healthcare company with 201-500 employees
Real User
Integrates into our CI/CD pipeline giving devs near real-time alerting on whether a configuration is good or bad

Pros and Cons

  • "It scans our containers in real time. Also, as they're built, it's looking into the container repository where the images are built, telling us ahead of time, "You have vulnerabilities here, and you should update this code before you deploy." And once it's deployed, it's scanning for vulnerabilities that are in production as the container is running."
  • "The challenge that Palo Alto and Prisma have is that, at times, the instructions in an event are a little bit dated and they're not usable. That doesn't apply to all the instructions, but there are times where, for example, the Microsoft or the Amazon side has made some changes and Palo Alto or Prisma was not aware of them. So as we try to remediate an alert in such a case, the instructions absolutely do not work. Then we open up a ticket and they'll reply, "Oh yeah, the API for so-and-so vendor changed and we'll have to work with them on that." That area could be done a little better."

What is our primary use case?

Our use case for the solution is monitoring our cloud configurations for security. That use case, itself, is huge. We use the tool to monitor security configuration of our AWS and Azure clouds. Security configurations can include storage, networking, IAM, and monitoring of malicious traffic that it detects.

We have about 50 users and most of them use it to review their own resources.

How has it helped my organization?

If someone configures a connection to the internet, like Windows RDP, which is not allowed in our environment, we immediately get an alert that says, "There's been a configuration of Windows Remote Desktop Protocol, and it's connected directly to the internet." Because that violates our policy, and it's also not something we want, we immediately reach out to have that connection taken down.
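
That kind of RDP check is also straightforward to reproduce directly against AWS. Here is an illustrative boto3 spot-check (not how Prisma Cloud implements its policy) that flags security group rules exposing port 3389 to the internet.

```python
import boto3

ec2 = boto3.client("ec2")

# Flag any security group rule that exposes RDP (port 3389) to the internet.
for sg in ec2.describe_security_groups()["SecurityGroups"]:
    for perm in sg.get("IpPermissions", []):
        from_p, to_p = perm.get("FromPort"), perm.get("ToPort")
        if from_p is not None and from_p <= 3389 <= (to_p or from_p):
            for rng in perm.get("IpRanges", []):
                if rng.get("CidrIp") == "0.0.0.0/0":
                    print(f"{sg['GroupId']}: RDP open to the internet")
```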

We're also integrating it into our CI/CD pipeline. There are parts we've integrated already, but we haven't done so completely. For example, we've integrated container scanning into the CI/CD. When they build a container in the pipeline, it's automatically scanned and the results come back to our console, where we monitor them. The beauty of it is that we give our developers access to this information. That way, as they build, they get near real-time alerting that says, "This configuration is good. This configuration is bad." We have found that very helpful because it provides instant feedback to the development team. Instead of doing a review later on where they find out, "Oh, this is not good," they already know: "We should not configure it this way; let's configure it more securely another way." They know because the alerts are in near real-time.
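
For reference, that CI step typically wraps Prisma Cloud Compute's twistcli scanner. The following is a minimal, hypothetical wrapper: the console address, CI user, and image name are placeholders, and credential handling (omitted here) would follow your CI system's secret mechanism.

```python
import os
import subprocess
import sys

# Placeholders; in CI these would come from protected variables.
CONSOLE = os.environ.get("PRISMA_CONSOLE", "https://console.example.com:8083")
USER = os.environ.get("PRISMA_CI_USER", "ci-scan-user")

image = sys.argv[1] if len(sys.argv) > 1 else "myapp:latest"

# Fail the pipeline when the scan exceeds the console's vulnerability thresholds.
result = subprocess.run(
    ["twistcli", "images", "scan",
     "--address", CONSOLE,
     "--user", USER,
     "--details", image],
    text=True,
)
sys.exit(result.returncode)
```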

That's part of our strategy. We want to bring this information as close to the DevOps team as possible. That's where we feel the greatest benefit can be achieved. The near real-time feedback on what they're doing means they can correct it there, versus several days down the road when they've already forgotten what they did.

And where we have integrated it into our CI/CD pipeline, I am able to view vulnerabilities through our different stages of development.

It has enhanced collaboration between our DevOps and SecOps teams by being very transparent. Whatever we see, we want them to see. That's our strategy. Whatever we in security know, we want them to know, because it's a collaborative effort. We all need each other to get things fixed. If they're configuring something and it comes to us, we want them to see it. And our expectation is that, hopefully, they've fixed it by the time we contact them. Once they have fixed it, the alert goes away. Hopefully, it means that everyone has less to do.

We also use the solution's ability to filter alerts by levels of severity. Within our cloud, we have accounts that are managed and certain groups are responsible for them. We're able to direct the alerting and the reporting to the people who manage those groups or cloud accounts. The ability to filter alerts by severity definitely helps our team understand which situations are the most critical. They're rated high, medium, and low. Of course, we go after the "highs" and tell them to fix those immediately, or as close to immediately as possible. We send the "mediums" and "lows" to tickets. In some instances, they've already fixed them because they've seen the issue and know we'll be knocking on the door. They realize, "We need to fix this or else we're going to get a ticket." They want to do it the right way, and this gives them the information to make the proper configuration.

Prisma Cloud also provides the data needed to pinpoint root cause and prevent an issue from occurring again. When there's an alert, the event tells you how to fix the issue. It will say, "Go to this, click on this, do this, do that." It tells you why you got the alert and how to fix it.

In addition, the solution's ability to show issues as they are discovered during the build phases is really good. We have different environments. Our lower environments, dev, QA, and integration, don't have any data, and then we have the upper environment, which has production data. There's a gradual progression from the lower environments until, hopefully, they figure out what to do and then go into the upper environment. We see the alerts come in and we see how they're configuring things. It gives us good feedback, in near real-time, through the whole life cycle as they develop a product.

I don't know if the solution reduces runtime alerts, but its monitoring helps us to be more aware of vulnerabilities that come in the stack. Attackers may be using new vulnerabilities and Prisma Cloud has increased the visibility of any new runtime alerts.

It does reduce alert investigation times because of the information the alerts give us. When we get an alert, it tells us the source, where it comes from. We're able to identify things because it uses NetFlow data. It tracks the network traffic for us and says, "This alert was generated by these attackers," or "It's coming internally from these devices," and it names them. For example, we run vulnerability scanning weekly in our environment to scan for weaknesses and report on them. At times, the vulnerability scanner may trigger an alert in Prisma, and Prisma will say, "Something is scanning your environment." We're able to use that Prisma information to identify the resources that have been scanning our environment, recognize them really quickly as our vulnerability scanner, and dismiss the alert. Prisma also provides the name or ID of a particular service or user that may have triggered an alert, so we are able to reach out to that individual and ask, "Hey, is this you?" without having to look through tons of logs to identify who it was.

Per day, because Prisma gives us the information and we don't have to do individual research, it easily saves us one to two hours, and probably more.

What is most valuable?

One of the most valuable features is monitoring of configurations for our cloud, because cloud configurations can be done in hundreds of ways. We use this tool to ensure that those configurations do not present a security risk by providing overly excessive rights or that they punch a hole that we're not aware of into the internet.

One of the strengths of this tool comes from the fact that we, as a security team, are not configuring everything. We have a decentralized DevOps model, so we depend on individual groups to configure their environments for their development and product needs. That means we're not aware of exactly what they're doing because we're not there all the time. However, we are alerted to things such as opening up a connection that brings traffic in from the internet. We can then ask questions like, "Why do you need that? Did you secure it properly?" We have found it highly beneficial for monitoring those configurations across teams and our DevOps environment.

We're not only using it for configuration monitoring, but also for containers, container security, and serverless functions. Prisma looks to see that a configuration is done in a particular, secure pattern. When it's not done in that pattern, it gives us an alert that is either high, medium, or low. Based on those alerts, we then contact the owners of those environments and work with them on remediating them. We also advise them on their weaker-than-desirable configurations, and they fix them. We have people who monitor this on a regular basis and who reach out to the different DevOps groups.

It scans our containers in real time. Also, as they're built, it looks into the container repository where the images are built, telling us ahead of time, "You have vulnerabilities here, and you should update this code before you deploy." And once a container is deployed, it scans for vulnerabilities that are in production as the container is running. We're also moving into serverless, such as Azure Functions and AWS Lambda, which run small, standalone pieces of code. We're using Prisma to monitor that too, making sure that the serverless side is also configured correctly and that we don't have commands and functions in there that are overly permissive.
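
As a rough illustration of that "scan before you deploy" step, the sketch below shells out to twistcli, the Prisma Cloud Compute scanner CLI, from a CI job and fails the build on a non-zero exit. The flags, image name, and Console URL are assumptions and placeholders; consult the twistcli documentation for your Console version.

```python
# Minimal CI gate: scan a freshly built image with twistcli and let its
# exit code decide whether the build proceeds.
import subprocess
import sys

IMAGE = "registry.example.com/myapp:latest"       # hypothetical image tag
CONSOLE = "https://compute-console.example.com"   # hypothetical Console URL

result = subprocess.run(
    ["twistcli", "images", "scan",
     "--address", CONSOLE,
     "--user", "ci-scan-user",        # placeholder credentials; use a
     "--password", "ci-scan-secret",  # CI secret store in practice
     "--details",
     IMAGE],
)

# twistcli exits non-zero when the image violates the vulnerability or
# compliance thresholds configured in the Console, so a plain exit-code
# check is enough to gate the pipeline.
sys.exit(result.returncode)
```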

What needs improvement?

The challenge that Palo Alto and Prisma have is that, at times, the instructions in an event are a little bit dated and they're not usable. That doesn't apply to all the instructions, but there are times where, for example, the Microsoft or the Amazon side has made some changes and Palo Alto or Prisma was not aware of them. So as we try to remediate an alert in such a case, the instructions absolutely do not work. Then we open up a ticket and they'll reply, "Oh yeah, the API for so-and-so vendor changed and we'll have to work with them on that." That area could be done a little better.

One additional feature I'd like to see is more of a focus on API security. API security is an area that is definitely growing, because almost every web application has tons of APIs connecting to other web applications with tons of APIs. That's a huge area, and I'd love to see a little more growth there. For example, when it comes to the monitoring of APIs within the cloud environment: Who has access to the APIs? How old are the API keys? How often are those APIs accessed? That would be good to know, because there could be APIs that are never really accessed, and maybe we should get rid of them. Also, what roles are attached to those APIs, and which resources are they connected to? An audit and inventory of API usage would be helpful.

For how long have I used the solution?

I've been using Palo Alto Prisma for about a year and a half.

What do I think about the stability of the solution?

It's a stable solution.

What do I think about the scalability of the solution?

The scalability is "average".

How are customer service and technical support?

Palo Alto's technical support for this solution is okay.

Which solution did I use previously and why did I switch?

We did not switch from a previous solution. We were using the same product when it was called RedLock, before it was purchased by Palo Alto.

How was the initial setup?

The initial setup took a day or two and was fairly straightforward.

As for our implementation strategy, it was to:

  • add in the cloud accounts (a scripted sketch of this step follows below)
  • set up alerting
  • fine-tune the alerts
  • create a process to respond to alerts
  • edit the policies.
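
The first step, adding cloud accounts, can be scripted rather than clicked through. Below is a hedged sketch of onboarding one AWS account via the Prisma Cloud API; the /cloud/aws endpoint, payload fields, and ARN are assumptions drawn from the public API docs and placeholders, not a verified recipe.

```python
# Hypothetical sketch of onboarding an AWS account to Prisma Cloud via
# the API. Verify endpoint and payload against your console release.
import requests

API = "https://api.prismacloud.io"

def onboard_aws_account(token: str, account_id: str, role_arn: str) -> None:
    payload = {
        "accountId": account_id,
        "name": f"aws-{account_id}",
        "enabled": True,
        "roleArn": role_arn,   # cross-account role Prisma Cloud assumes
        "groupIds": [],        # account groups later drive alert routing
    }
    resp = requests.post(f"{API}/cloud/aws",
                         headers={"x-redlock-auth": token},
                         json=payload)
    resp.raise_for_status()

# Example (placeholder account ID and role ARN):
# onboard_aws_account(token, "123456789012",
#                     "arn:aws:iam::123456789012:role/PrismaCloudReadOnly")
```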

In terms of maintenance, one FTE would be preferable, but we do not have that.

What about the implementation team?

We implemented it ourselves, with support from Prisma.

What's my experience with pricing, setup cost, and licensing?

One thing we're very pleased about is how the licensing model for Prisma is based on work resources. You buy a certain amount of work resources and then, as they enable new capabilities within Prisma, it just takes those work resource units and applies them to new features. This enables us to test and use the new features without having to go back and ask for and procure a whole new product, which could require going through weeks, and maybe months, of a procurement process.

For example, when they brought in containers, we were able to utilize containers because it goes against our current allocation of work units. We were immediately able to do piloting on that. We're very appreciative of that kind of model. Traditionally, other models mean that they come out with a new product and we have to go through procurement and ask, "Can I have this?" You install it, or you put in the key, you activate it, and then you go through a whole process again. But this way, with Prisma, we're able to quickly assess the new capabilities and see if we want to use them or not. For containers, for example, we could just say, "Hey, this is not something we want to spend our work units on." And you just don't add anything to the containers. That's it.

What other advice do I have?

The biggest lesson I have learned while using the solution is that you need to tune it well.

The Prisma tool offers a lot of functionality and a lot of configuration. It's a very powerful tool with a lot of features. For people who want to use this product, I would say it's definitely a good product to use. But please be aware also, that because it's so feature rich, to do it right and to use all the functionality, you need somebody with a dedicated amount of time to manage it. It's not complicated, but it will certainly take time for dedicated resources to fully utilize all that Prisma has to offer. Ideally, you should be prepared to assign someone as an SME to learn it and have that person teach others on the team.

I would rate Prisma Cloud at nine out of 10, compared to what's out there.

Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
DJ
Security Architect at a computer software company with 11-50 employees
Real User
Top 20
Looks across our various cloud estates and provides information about what's going on, where it is going on, and when it happened

Pros and Cons

  • "One of the main reasons we like Prisma Cloud so much is that they also provide an API. You can't expect to give someone an account on Prisma Cloud, or on any tool for that matter, and say, "Go find your things and fix them." It doesn't work like that... We pull down the information from the API that Prisma Cloud provides, which is multi-cloud, multi-account—hundreds and hundreds of different types of alerts graded by severity—and then we can clearly identify that these alerts belong to these people, and they're the people who must remediate them."
  • "Based on my experience, the customization—especially the interface and some of the product identification components—is not as customizable as it could be. But it makes up for that with the fact that we can access the API and then build our own systems to read the data and then process and parse it and hand it to our teams."

What is our primary use case?

We have a very large public cloud estate. We have nearly 300 public cloud accounts, with almost a million things deployed. It's pretty much impossible to track all of the security and the compliance issues using anything that would remotely be considered homegrown—scripts, or something that isn't fully automated and supported. We don't have the time, or necessarily even the desire, to build these things ourselves. So we use it to track compliance across all of the various accounts and to manage remediation. 

We also have 393 applications in the cloud, all of which are part of various suites, which means there are at least 393 teams or groups of people who need to be held accountable for what they have deployed and what they wish to do. 

It's such a large undertaking that automating it is the only option. To bring it all together, we use it to ensure that we can measure and track and identify the remediation of all of our public cloud issues.

How has it helped my organization?

The solution provides risk clarity at runtime and across the entire pipeline, showing issues as they are discovered during the build phases. Our developers are able to correct them using the tools they use to code. It gives our developers a point to work towards. If the information provided by this didn't exist, then we wouldn't be able to give our developers the direction that they need to go and fix the issues. It comes back to ownership. If we can give full ownership of the issues to a team, they will go fix them. Honestly, I don't care how they fix them. I don't really mind what tools they use.

It is reducing runtime alerts. We're still in the process of working through those, but we have already seen a significant decrease, absolutely.

What is most valuable?

The entire concept is the right thing for us. It's what we need. The application itself is the feature, so to speak. What it does is what we want it for: looking across the various cloud estates and providing us with information about what's going on in our cloud, where it is going on, and when it happened. The product is the most valuable feature. It's not a be-all and end-all product. That doesn't exist. But it's a product with a very specific purpose, and we bought it for that very specific purpose.

When it comes to protecting the full cloud native stack—the pure cloud component of the stack—it is very good.

One of the main reasons we like Prisma Cloud so much is that they also provide an API. You can't expect to give someone an account on Prisma Cloud, or on any tool for that matter, and say, "Go find your things and fix them." It doesn't work like that. We've got to be able to clearly identify who owns what in our organization so that we can say, "Here's a report for your things and this is what you must go and fix." We pull down the information from the API that Prisma Cloud provides, which is multi-cloud, multi-account—hundreds and hundreds of different types of alerts graded by severity—and then we can clearly identify that these alerts belong to these people, and they're the people who must remediate them. That's our most important use case, because if you can't identify users, you can't remediate. No user is going to sit there going through over a million deployed things in the public cloud and say, "That one's mine, that one's not, that's mine, that's not." It's both the technology that Prisma Cloud provides and the ability to identify things distinctly that comprise our use case.
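
A minimal sketch of that ownership mapping, assuming alerts shaped like the API response described earlier and a hand-maintained account-to-team map, might look like this:

```python
# Bucket pulled alerts per owning team, highest severity first, so each
# team receives only its own remediation list. The alert field names and
# the account-to-team mapping are assumptions for illustration.
from collections import defaultdict

ACCOUNT_OWNERS = {  # hypothetical map maintained by the security team
    "111111111111": "payments-team",
    "222222222222": "data-platform-team",
}

def reports_by_team(alerts: list[dict]) -> dict[str, list[dict]]:
    """Group alerts by owning team and sort each group by severity."""
    grouped: dict[str, list[dict]] = defaultdict(list)
    for alert in alerts:
        account = alert.get("resource", {}).get("accountId", "unknown")
        team = ACCOUNT_OWNERS.get(account, "unassigned")
        grouped[team].append(alert)
    rank = {"high": 0, "medium": 1, "low": 2}
    for team_alerts in grouped.values():
        team_alerts.sort(key=lambda a: rank.get(
            a.get("policy", {}).get("severity", "low"), 3))
    return dict(grouped)
```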

It also provides the visibility and control we need, regardless of how complex or distributed our cloud environments become. It doesn't care about the complexity of our environment. It gives us the visibility we need to have confidence in our compliance. Without it, we would have no confidence at all.

It is also part of our DevOps processes and we have integrated security into our CI/CD pipeline. To be honest, those touchpoints are not as seamless as they could be, because our processes rely on multiple tools and multiple teams. But having the compliance component monitored by this tool is one of the key requirements in our DevOps life cycle; it's a 100 percent requirement. The teams must use it all the time and be compliant before they move on to the next stage in each release. It is a bit manual for us, but that's because of our environment. It has given our SecOps teams the visibility they need to do their jobs. There's absolutely no chance those teams would otherwise have that visibility on a normal, day-to-day basis, simply because the SecOps teams are very small, and having to deal with hundreds of development teams would be impossible for them.

What needs improvement?

Based on my experience, the customization—especially the interface and some of the product identification components—is not as customizable as it could be. But it makes up for that with the fact that we can access the API and then build our own systems to read the data and then process and parse it and hand it to our teams. At that point, we realized, "Okay, we're never going to have it fully customizable," because no team can expect an off-the-shelf product to fit itself to the needs of any organization. That's just impossible.

So customization, from our perspective, comes through the API, and that's the best we can do because there is no other sensible way of doing it. The customization effectively lives in the API, because that's what you end up using.

In terms of the product having room for improvement, I don't see any product being perfect, so I'm not worried about that aspect. The RedLock team is very responsive to our requirements when we do point out issues, and when we do point out stuff that we would like to see fixed, but the product direction itself is not a big concern for us.

For how long have I used the solution?

We've been using it since before it was called Prisma Cloud. We're getting on towards two years since we first purchased it.

What do I think about the stability of the solution?

The stability of Prisma Cloud is very good. I have no complaints along those lines. It seems to fit the requirements and it doesn't go down. Being a SaaS product, I would expect that. I haven't experienced any instability, and that's a good thing.

What do I think about the scalability of the solution?

Again, as a SaaS product, I would expect it to just scale.

How are customer service and technical support?

We regularly use Palo Alto technical support for the solution. I give it a top rating. They're very good. They have a very good customer success team. We've never had any issues. All our questions have been answered. It has been very positive.

Which solution did I use previously and why did I switch?

We did not have a previous solution.

How was the initial setup?

The initial setup was very straightforward. It's a SaaS product. All you have to do is configure your end, which isn't very hard. You just have to create a role for the product and, from there on, it just works, as long as the role is created correctly. Everything else you do after that is managed for you.
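
For illustration, the "create a role" step might look roughly like the boto3 sketch below. The trusted account ID, external ID, role name, and attached policy are placeholders; use the exact values your Prisma Cloud console generates during onboarding.

```python
# Create the cross-account IAM role that Prisma Cloud assumes in a
# monitored AWS account. All identifiers below are placeholders.
import json
import boto3

iam = boto3.client("iam")

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        # placeholder: the vendor account allowed to assume this role
        "Principal": {"AWS": "arn:aws:iam::999999999999:root"},
        "Action": "sts:AssumeRole",
        "Condition": {
            "StringEquals": {"sts:ExternalId": "example-external-id"}
        },
    }],
}

iam.create_role(
    RoleName="PrismaCloudReadOnly",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
    Description="Read-only role assumed by Prisma Cloud for monitoring",
)

# Read-only visibility is enough for monitoring-mode onboarding.
iam.attach_role_policy(
    RoleName="PrismaCloudReadOnly",
    PolicyArn="arn:aws:iam::aws:policy/SecurityAudit",
)
```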

We have continuously been deploying it on new accounts as we spin them up. Our deployment has been going on since year one, but we've expanded. Two years ago we probably had about 40 or 50 cloud accounts. Now, we have 270 cloud accounts.

We have a team that is dedicated to managing our security tools. Something this big will always require some maintenance from our side: new accounts, and talking to internal teams. But this is as much about management of the actual alerts and issues as it is about anything else. It's no longer about whether the tool is being maintained. We don't maintain it. What we maintain is our interaction with the tool. We have two people, security engineers, who work with the tool on a regular basis.

What was our ROI?

It's a non-functional ROI. This isn't a direct-ROI kind of tool. The return is in understanding our security postures. That's incredibly important and that's why we bought it and that's what we need from it. It doesn't create funds; it is a control. But it certainly does stop issues, and how do you quantify that?

What's my experience with pricing, setup cost, and licensing?

Pricing wasn't a big consideration for us. Compared to the work that we do, and the other costs, this was one of the regular costs. We were more interested in the features than we were in the price.

If a competitor came along and said, "We'll give you half the price," that doesn't necessarily mean that's the right answer, at all. We wouldn't necessarily entertain it that way. Does it do what we need it to do? Does it work with the things that we want it to work with? That is the important part for us. Pricing wasn't the big consideration it might be in some organizations. We spend millions on public cloud. In that context, it would not make sense to worry about the small price differences that you get between the products. They all seem to pitch it at roughly the same price.

Which other solutions did I evaluate?

Before the implementation of Prisma Cloud, there were only two solutions in the market. The other one was Dome9. We did an evaluation and we chose this one, and they were both very new. This is a very new concept. It pretty much didn't exist until Prisma Cloud came along.

The Prisma Cloud solution was chosen because of the way it helped integrate with our operations people, and our operations people were very happy with it. That was one of the main concerns.

Both solutions are very good at what they do. They approach the same problem from different directions. It was this direction that worked for us. Having said that, certain elements of Prisma Cloud were definitely more attractive to us because they matched up with some of our requirements. I'm very loath to say one product is better than the other, because it does depend on your requirements. It does depend on how you intend to use it and what it is, exactly, that you're looking for.

What other advice do I have?

You need to identify how you'll be using it and what your use cases are. If you don't have a mature enough organizational posture, you're not going to use it to actually fix the issues because you won't have the teams ready to consume its information. You need to build that and that needs to be built into the thinking around that product. There's no point having information if you're not going to act on it. So understand who is going to act on it, and how, and then you've got a much better path to understanding your use for this. There's no point in buying a product for the sake of the product. You need the processes and the workflows that go with it and you need to build those. It's not good enough to just hope that they will happen.

The solution doesn't secure the entire spectrum of compute options; there are other Palo Alto products that secure containers, for example. This one is very specifically focused on the configuration of public cloud instances. It doesn't look inside those instances; you would need something else for that. You don't want to mistake this for something that does everything. It doesn't. It is a very specific product, and it is amazingly good at what it does.

We do integrate it with our workflow as part of the process of getting an application onto the internet. It does integrate with our workflow, giving us a posture as part of the workflow. But it is not a workflow tool.

It definitely does multi-cloud. It does the three major ones plus Alibaba Cloud. It doesn't reach into hybrid cloud, in the sense that it doesn't understand anything non-cloud. We don't use it to provide security, although it is very good for that. We already have an advanced security provision posture, because we are a very large organization. We just use it to inform us of security issues that are outside our other controls.

Prisma Cloud doesn't provide us with a single tool to protect all of our cloud resources and applications in terms of security and compliance reports because we have non-cloud-related tools being folded into the reports as well. Even though it works on the cloud, and is excellent at what it does, we integrate it with our Qualys reports, for example, which is the scanning on our hosts. Those hosts are in the cloud, but this doesn't touch them. There's no such thing as a single security tool, frankly. It's basically part of our portfolio and it's part of what every organization needs, in my opinion, to be able to manage their cloud security postures. Otherwise, it would just never work.

Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
MB
Cloud Security Manager at a manufacturing company with 10,001+ employees
Real User
Top 20
We have identified and secured many misconfigurations and remediated a lot of vulnerabilities

Pros and Cons

  • "The Twistlock vulnerability scanning tool is its most valuable feature. It provides us insight into security vulnerabilities, running inside both on-premise and public cloud-based container platforms. It is filling a gap that we have with traditional vulnerability scanning tools, where we don't have the ability to scan inside containers."
  • "The alignment of Twistlock Defender agents with image repositories needs improvement. These deployed agents have no way of differentiating between on-premise and cloud-based image repositories. If I deploy a Defender agent to secure an on-premise Kubernetes cluster, that agent also tries to scan my ECR image repositories on AWS. So, we have limited options for aligning those Defenders with the repositories that we want them to scan. It is scanning everything rather than giving us the ability to be real granular in choosing which agents can scan which repositories."

What is our primary use case?

Primarily, we are attempting to secure our public cloud security posture through compliance and vulnerability scanning.

How has it helped my organization?

Overall, the solution is effective for helping us take a preventative approach to cloud security. We have managed to remediate thousands of high impact misconfigurations or vulnerabilities that have been detected by the tool.

This is how we secure access to public-facing resources, e.g., how we lock down S3 buckets, RDP access to EC2 instances, or other administrative access that might otherwise allow easy compromise. The value to the business is simply securing these cloud assets in alignment with the security policies and best practices that we have defined.
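
As a simplified stand-in for one class of check being described, locking down public S3 buckets, here is a hand-rolled boto3 sketch that flags buckets whose ACLs grant access to everyone. Prisma Cloud's own policies are much broader (public access blocks, bucket policies, and so on); this only shows the shape of the misconfiguration being caught.

```python
# Flag S3 buckets whose ACLs grant permissions to the AllUsers or
# AuthenticatedUsers groups, i.e., buckets readable/writable by anyone.
import boto3

PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    acl = s3.get_bucket_acl(Bucket=bucket["Name"])
    for grant in acl["Grants"]:
        # Grantee only carries a URI for group grants; .get() skips users.
        if grant["Grantee"].get("URI") in PUBLIC_GROUPS:
            print(f"PUBLIC ACL: {bucket['Name']} grants {grant['Permission']}")
```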

The comprehensiveness of the solution is good for securing the entire cloud-native development lifecycle, across build, deploy, and run. We are exclusively an Azure DevOps shop. Thus, we are well-aligned with the capabilities that Prisma offers. Its ability to participate in and integrate with the DevOps lifecycle has been very good for us.

Prisma Cloud has enabled us to integrate security into our CI/CD pipeline and add touchpoints into existing DevOps processes. We are integrated in a handful of CI/CD pipelines at the moment. These touchpoints are fairly seamless in our DevOps processes. We are performing the scan and failing builds automatically without developer involvement, but we use the Visual Studio plugin. Therefore, developers can self-service scan their work prior to the build process. It is both seamless and on-demand for the people who choose to use it.
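
A generic sketch of such a build gate is shown below: it parses a scanner's JSON report and fails the pipeline when severity thresholds are exceeded. The report layout and thresholds here are hypothetical; map them to whatever your scanner actually emits.

```python
# Fail the CI build when the scan report exceeds per-severity limits.
# The "vulnerabilities" / "severity" report structure is an assumption.
import json
import sys

MAX_ALLOWED = {"critical": 0, "high": 5}  # example policy thresholds

with open("scan-report.json") as fh:
    report = json.load(fh)

counts: dict[str, int] = {}
for finding in report.get("vulnerabilities", []):
    sev = finding.get("severity", "unknown").lower()
    counts[sev] = counts.get(sev, 0) + 1

violations = [f"{sev}: {counts.get(sev, 0)} > {limit}"
              for sev, limit in MAX_ALLOWED.items()
              if counts.get(sev, 0) > limit]

if violations:
    print("Build failed by security gate:", "; ".join(violations))
    sys.exit(1)
print("Security gate passed:", counts)
```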

The integration of security into our CI/CD pipeline has improved collaboration and trust between our DevOps and SecOps teams, though there is some diplomacy that has to occur there. The way it has improved things: historically, we approached vulnerability management and cloud security posture with these teams by presenting them a list of findings, like a laundry list of things they needed to go fix. These teams aren't staffed for moving backwards and fixing old problems, so we established a process for working with them that starts with securing net-new development. We can do that without much of an ask, in terms of their time, by having these integrations into their CI/CD pipeline along with self-service scanning tools. So we have the capability of securing new development while they complete the lengthy task of reviewing and remediating existing deployments.

The solution provides risk clarity at runtime and across the entire pipeline, showing issues as they are discovered during the build phases. We are applying the same secure configuration baseline scans in the pipeline that we're doing for the deployed assets. Most of the time, our developers can correct these issues.

What is most valuable?

The Twistlock vulnerability scanning tool is its most valuable feature. It provides us insight into security vulnerabilities, running inside both on-premise and public cloud-based container platforms. It is filling a gap that we have with traditional vulnerability scanning tools, where we don't have the ability to scan inside containers.

Prisma Cloud provides security spanning multi- and hybrid-cloud environments. This is of critical importance to us because we have workloads in multiple cloud providers as well as having them on-premise.

The solution provides the following in a single pane of glass:

  • Cloud Security Posture Management
  • Cloud Workload Protection
  • Cloud Network Security
  • Cloud Infrastructure Entitlement Management.

These are all critical capabilities, and they address challenges that we have faced and had been unable to solve using native tools from the cloud providers. We use AWS and Azure in production, along with GCP in testing.

Prisma Cloud provides us with a single tool to protect all our cloud resources and applications, without having to manage and reconcile disparate security and compliance reports. The RedLock portion of the tool and its reporting have gotten better. There are still some gaps in our ability to trend over time periods. However, in terms of point-in-time snapshot reporting, the tool is very good. What we have done is automate the process of compiling these trendline reports on a weekly basis to capture those metrics, then take them offline so we can build our own dashboards to fill in the tool's gaps.
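
A minimal sketch of that weekly trendline workaround, assuming the severity counts come from an API pull like the ones shown earlier, could be as simple as appending dated rows to a CSV that an offline dashboard trends over time:

```python
# Append one dated snapshot of alert counts by severity to a CSV file.
# Run weekly from cron or a scheduled pipeline; the counts themselves
# would come from the alert API in practice.
import csv
from datetime import date
from pathlib import Path

TREND_FILE = Path("prisma_trend.csv")

def append_snapshot(counts: dict[str, int]) -> None:
    """Append one dated row of severity counts to the trend file."""
    new_file = not TREND_FILE.exists()
    with TREND_FILE.open("a", newline="") as fh:
        writer = csv.DictWriter(
            fh, fieldnames=["date", "high", "medium", "low"])
        if new_file:
            writer.writeheader()
        writer.writerow({"date": date.today().isoformat(),
                         "high": counts.get("high", 0),
                         "medium": counts.get("medium", 0),
                         "low": counts.get("low", 0)})

# Example weekly run with placeholder numbers:
# append_snapshot({"high": 12, "medium": 40, "low": 97})
```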

We are using the solution's new Prisma Cloud 2.0 Cloud Security Posture Management features. These features give our security teams alerts, with context, so they know exactly which situations are the most critical. This is critical because we have insight into new assets that are deployed out of spec but have not otherwise been enabled for auto-remediation. The challenge has been that we deploy these policies and, if someone's not sitting there watching the console, they might miss these misconfigurations, where time is of the essence. The learning and context are important in order to prioritize how quickly we need to triage these findings.

The new Prisma Cloud 2.0 features provide our security teams with all the data they need to pinpoint the root cause and prevent an issue from recurring. There is less gathering of data that has to happen in the middle of an incident or remediation. If the alerts themselves have all the context you need, there's just less legwork required to find the problem and fix the misconfiguration.

What needs improvement?

The alignment of Twistlock Defender agents with image repositories needs improvement. These deployed agents have no way of differentiating between on-premise and cloud-based image repositories. If I deploy a Defender agent to secure an on-premise Kubernetes cluster, that agent also tries to scan my ECR image repositories on AWS. So, we have limited options for aligning those Defenders with the repositories that we want them to scan. It is scanning everything rather than giving us the ability to be real granular in choosing which agents can scan which repositories. This is our biggest pain point.

There are little UI complexities that we work around through the API or exporting.

For how long have I used the solution?

I have been using it for about nine months.

What do I think about the stability of the solution?

In general, the stability is very good. As a SaaS tool, we have high expectations for how it performs, and we did have some growing pains in that regard around the console upgrade in October. 

The work that we have ongoing maintenance-wise is from a policy perspective. We have custom policies that we deploy above and beyond the CIS Benchmark policies deployed with the tool. As we deploy new services, start to use new tools, and as the cloud vendors roll out new services, there is policy work which goes along with that. However, the bulk of the work is still in meeting with business units who are responsible for deploying these applications and keeping them on track with their remediation activities.

What do I think about the scalability of the solution?

The scalability is very good. The notable exception is on the Lambda function side. We have had some challenges with its ability to scale up and scan all versions of deployed functions in a timely fashion. Otherwise, in the container space and public cloud space on the RedLock side, it has been very good in terms of scaling up to meet our demands.

Twenty-five people use this solution. Seven of them are on the cloud SecOps team, and the rest are a mix of developers, DevOps engineers, and incident-response people.

There are dozens more pipelines for us to integrate with. The bulk of the growth will be organic, coming from new app teams in different business units across the enterprise.

How are customer service and technical support?

The technical support is pretty good. In most instances, they are responsive. They meet their SLAs. They are eager to engage with R&D or their engineering teams when necessary to escalate issues. 

Which solution did I use previously and why did I switch?

Prisma Cloud provides the visibility and control that we need, regardless of how complex or distributed our cloud environments become. Our security and compliance postures have significantly improved through the implementation of this tooling, mostly because we previously had poorly supported open-source tooling acting in this capacity. We were using Scout2, because it was free, and it was not nearly as fully featured or capable.

How was the initial setup?

I have led this team since the beginning. The initial setup was harder when we did it than it is now. We had to go through individual AWS accounts, configuring IAM permissions and things like that, on an account by account basis. Whereas now, that happens automatically through AWS Organizations integration. While the setup was good then, it is better now.

It took us three months to have all the resources onboarded.

Our implementation strategy varied because there are so many elements of the tooling. We started with RedLock and the public cloud compliance pieces, starting with the sandbox accounts and validating the results and things of that nature. We then moved out to the larger Cloud COE as a whole and started onboarding production accounts. After that, we started meeting with the COE and app teams to socialize the findings and explain the remediation steps and go through all of that.

We broke the Twistlock stuff into a separate project phase. The deployment approach there was similar to the implementation strategy. We started with the sandbox teams and public facing apps, socializing the findings, then going through the vulnerability structure and compliance structure with them. Once we had established a rapport with them and they understood the goals of the program, then we started pushing for integration into the CI/CD pipelines, etc.

What was our ROI?

We have seen ROI. I feel like it is a good value. I am not going to say for sure that we couldn't have achieved the same results with one of the competing platforms, but you don't need to prevent many security incidents to realize the value of an investment like this. We have identified and secured so many misconfigurations, and remediated so many vulnerabilities, that I feel we have gotten our value out of the tool.

Prisma Cloud has reduced our runtime alerts by 25 percent by shifting the responsibility for identifying misconfigurations and vulnerabilities to developers, who are able to fix their own code. Fewer alerts are making it to runtime because security and compliance issues are being fixed earlier in the process.

Our alert investigation time is much better and has been reduced by 75 percent.

What's my experience with pricing, setup cost, and licensing?

The pricing and licensing are expensive compared to the other offerings that we considered.

Which other solutions did I evaluate?

We also looked at Aqua Security and Rapid7 DivvyCloud. Capabilities-wise, these commercial solutions have similar offerings. The two primary differentiators with Palo Alto were:

  1. It was by far the most mature solution. They had acquired that maturity by buying the most fully baked startups, then rebranding them and rolling them in under the Prisma banner. So, they were the most mature platform at the time. 
  2. There was an element of wanting to have that single pane of glass management. They had a SaaS solution that we felt would scale to our large cloud environment. 

What other advice do I have?

Have a clear plan for how you will structure your policies, then decide right from the get-go if you will augment the delivered policies with your custom ones to minimize the amount of rework that you need to do. Likewise, make sure that the ticketing application that you are planning to integrate with, if you're going to track remediation activities, is one that is supported. If not, have a plan for getting that integration going quickly.

Biggest lesson learnt: Do better planning for that third-party and downstream integration that you will be doing with your ticketing platform. Right out of the gate, our options were rather limited for integration and ticketing. It seemed to be geared around incident handling or incident response more than compliance management or vulnerability response.

The solution is comprehensive for protecting the full cloud native stack. It covers nearly all of our use cases. The gaps present are more a function of API visibility that we get from Azure, for example. As they roll out or make generally available new services, there is a lag time in the tool's ability to ingest those services. However, I think that is more a function of the cloud platforms than Prisma Cloud.

This solution is a strong eight out of 10.

Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.