- Having the option of static scanning. Most tools of this type are centered on dynamic scanning, so having a static scan is very important.
- Utilizing the software as a service. We scan the compiled code ourselves, but it runs on their servers, which is a plus.
- Technical support is available if needed, which is advantageous.
- Online education and training are also available, which is another advantage.
Application Security Server Reviews
Showing reviews of the top-ranking products in Application Security containing the term "Server"
We use it for security scanning of SaaS and mobile software that we develop: one server-side and two mobile applications. Most customers require SAST and DAST scanning in order to purchase.
We are using the Veracode tools to expose engineers to the security vulnerabilities introduced with new features much sooner in the development life cycle. We rely on this set of tools to automatically scan our artifacts as they move between environments.
We got to the point that, when we were promoting artifacts from the desktop to the server environment, we already had the scans completed. We knew the vulnerabilities we were introducing with the new features ahead of time, i.e. before the QA department found them. That was the main reason we decided to use Veracode, and to use tools for static and dynamic analysis in general.
The setup was easy and straightforward. We had some issues with API calls from our build automation tools, but these were related to networking issues in reaching the Veracode servers on the Internet, not to the Veracode product itself.
We use the Veracode SAST solution to scan our Java, Node.js, and Python microservices as part of our CI/CD pipeline, where our CI/CD servers are Bamboo, Jenkins, and GitLab CI/CD.
We have teams for both our cloud pipeline and our on-prem pipeline, and both teams use this solution. We are using Veracode to constantly scan the internal application source code and ensure the code's security hygiene.
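The promotion gate these reviewers describe can be sketched in a few lines. This is a minimal illustration, not Veracode's actual API: the finding fields, the severity scale, and the threshold are all assumptions.

```python
# A minimal sketch (not Veracode's actual API) of the promotion gate the
# reviews describe: scan results are checked before an artifact moves from
# the desktop build to the server environment. Finding fields, the severity
# scale, and the threshold are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Finding:
    cwe: str       # weakness class reported by the scanner, e.g. "CWE-89"
    severity: int  # assumed scale: 5 = very high ... 1 = informational

def may_promote(findings, max_allowed_severity=3):
    """Allow promotion only if no finding exceeds the severity ceiling.

    Returns (allowed, blockers) so the pipeline can both gate the build
    and report exactly which findings blocked it.
    """
    blockers = [f for f in findings if f.severity > max_allowed_severity]
    return len(blockers) == 0, blockers

# A very-high SQL injection finding blocks promotion; a medium one does not.
scan = [Finding("CWE-89", 5), Finding("CWE-327", 3)]
allowed, blockers = may_promote(scan)
```

Returning the blocking findings alongside the boolean lets the CI job fail the build and still surface exactly what the developers need to fix, before QA ever sees the artifact.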
Qualys Web Application Scanning: Server
One area that could be improved is the data server. That's probably what I most noticed in comparison with Rapid7. Also, the UI is not user-friendly, and there is no yearly reporting facility where you can slice and dice across different jobs. This is not good.
Additionally, there is no recording feature where you can record your screen navigation, like a macro. If you want to capture a full screen flow, they don't provide a tool that can record your navigation and then replay it.
In terms of what should be included in the next release, as I mentioned, it is the UI, the user interface screens. Also, it would be good if they could improve and enrich the reports. These are the fundamental differences with Rapid7.
Acunetix Vulnerability Scanner: Server
Scheduling of testing cuts down on the manual, tedious activities that go into setting up a test site.
One of the features that I feel is groundbreaking, and that I would like to see expanded, is the IAST feature: the Interactive Application Security Testing module that gets loaded onto an application on a server for more in-depth, granular findings. I think that is really neat. I haven't seen a lot of competitors doing that.
While there has not been any real reduction in remediation time, there has been a reduction in scan time. A Burp scan can take a long time, whereas with Acunetix you can basically just set it and it will scan throughout the night.
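The "set it and let it scan overnight" workflow boils down to computing the next unattended kick-off time. A tiny sketch, with the caveat that the 02:00 start time is an arbitrary assumption and the product schedules this itself:

```python
# A small sketch of the "set it and let it scan overnight" workflow: given
# the current time, compute when the unattended scan should next start.
# The 02:00 kick-off time is an arbitrary assumption; the real product
# handles this scheduling internally.

from datetime import datetime, time, timedelta

def next_scan_start(now, start=time(2, 0)):
    """Return the next datetime at or after `now` matching the nightly start time."""
    candidate = datetime.combine(now.date(), start)
    if candidate <= now:
        candidate += timedelta(days=1)  # today's slot already passed
    return candidate

# A scan configured at 18:30 will kick off at 02:00 the next morning.
kickoff = next_scan_start(datetime(2020, 5, 4, 18, 30))
```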
On bigger sites, the speed can be a little tricky unless you narrow the scan down to smaller sections of the site. On small sites, half a million lines of code or less, it has gotten pretty quick, down to a couple of hours now for a whole scan. So, it's getting there. They push out quite a few updates every now and then.
There is something called AcuSensor, which you can install on local servers for a deeper scan. This has worked for us. We haven't installed it on all of our boxes yet, but I think we will pretty soon.
It's been used quite extensively here within our company. Every website is using this along with other scanners.
Initially, I believe Acunetix provided us with two options. One was SaaS, which means that they host it on their cloud. They also provide the option to host Acunetix on our internal servers, behind our firewalls, with an on-premise version.
The problem with the on-premise version is that it works only on Windows Server. I can't install it on a Mac or a Linux-based machine. That was quite challenging for us because all of our cloud infrastructure consists of AWS instances running Linux-based operating systems.
As far as security testing is concerned, we would prefer to host Acunetix on-premise, because everything would be within our firewall. If we wanted to host it on the cloud, then we would have to sign a non-disclosure agreement, because they would know what vulnerabilities exist on our site.
For this reason, we generally prefer to host it on-premise, so that everything is restricted to within our firewall and no one can gain access from outside. Setting up the on-premise version of Acunetix is quite challenging and not that straightforward because it only supports one operating system.
However, we found it so difficult to host on-premise that we actually had to stop. Instead, we have decided to go for the cloud version. All we have to do is send them our application to scan in their cloud.
PortSwigger Burp: Server
We use this solution for the security assessment of web applications before their release to the internet. The security assessment team uses this product to identify vulnerabilities and vulnerable code that developers may introduce. We host all of the beta applications in our internal web servers and then the security team starts assessments when the development freezes.
With the open edition, it's not a problem to install it on any number of machines. With the professional edition, you need a license and you have to pick a license type. The license is tied to a particular machine, from which I run my scans. Let's say I don't find my laptop or computer fast enough and I decide to move to a machine with a faster processor and more memory; I can easily move the license across to the new machine.
As long as I am on that particular license, I have one license that I'm able to move across, to one instance at any given point in time. That is quite stable. Beyond that, for the top-priced edition you can take out multiple contract licenses, something like a license server where you might have five licenses. You might have 10 installations, with different people working on various routes using the tool, and only those five licenses will be needed. In that instance, scalability is definitely a great point for most uses.
Currently, if you look at the users that are linked to roles that we have, one is the security test engineer and one is the security test analyst. At any given point in time, only one person uses the tool for engagement in the professional edition. We have about two to three people working with us on these projects.
Micro Focus Fortify on Demand: Server
In terms of the scalability of the solution, we did not have a centralized server connecting to multiple clients. We did not have scalability issues due to our small-scale use.
Our server infrastructure team handles the deployment and maintenance of this solution. They update it regularly as patches or new versions are released. They look into all of the tools that we use and perform the installation, as well as manage them.
The most valuable feature is that it connects with your development platforms, such as Microsoft Information Server and Jira. When a vulnerability is found then it is classified as a bug and sent to IT.
One of the biggest heartaches that we have is that all of our Windows servers are on an automated upgrade. Whenever Windows upgrades, we lose the order of the ciphers, and that brings down the Checkmarx webpage.
Our company policy is that we upgrade our servers at a minimum of once a month, if not more often. It's a hassle to keep up with that. The ciphers are such a pain to manage.
To set up the cipher configuration, there's a tool out there called IIS Crypto. We just run that tool to set the best practices, and it forces us to reboot the server. We haven't figured out how to automate the whole thing yet.
There have been some Windows updates that haven't triggered this issue of the ciphers getting messed up. The only protocol we're running is TLS 1.2. At that higher level, everything is just a pain.
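One way to cut down on needless reboots is to detect cipher drift first and only reapply the IIS Crypto template when an update has actually changed something. A sketch of that check, where the protocol lists are illustrative assumptions rather than a hardened policy:

```python
# A sketch of deciding when the IIS Crypto template (and the reboot it
# forces) must be reapplied after an automated Windows update. The
# protocol lists are illustrative assumptions, not a hardened policy.

REQUIRED = {"TLS 1.2"}                                 # must stay enabled
FORBIDDEN = {"SSL 2.0", "SSL 3.0", "TLS 1.0", "TLS 1.1"}

def needs_cipher_reset(enabled_protocols):
    """True if an update re-enabled a forbidden protocol or disabled a
    required one, meaning the cipher baseline has drifted."""
    enabled = set(enabled_protocols)
    return bool(enabled & FORBIDDEN) or not REQUIRED <= enabled

# An update that silently re-enabled TLS 1.0 triggers a reset; a clean
# TLS 1.2-only configuration does not.
drifted = needs_cipher_reset(["TLS 1.2", "TLS 1.0"])
clean = needs_cipher_reset(["TLS 1.2"])
```

A scheduled task could run this check after each patch window and invoke the IIS Crypto tool only when the function returns true, so the server reboots only when the baseline has actually drifted.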
All of our servers are built out through code. In other words, we use Ansible and Jenkins to automatically create machines. Everything is virtual these days. It's either virtual in-house or virtual in the cloud.
The next pain point with Checkmarx is that their installation procedure is GUI-based. They've got a command line for upgrades, but I haven't seen a command line for the initial install.
My last statement on Checkmarx is that Windows would not be my choice for any kind of server implementation. I'm not a Windows fan at all. Every other tool in our company is Linux-based, and our target systems are Linux as well.
I don't have the experience and the knowledge of working on a Windows system compared to my Linux knowledge. Checkmarx being Windows only is a hindrance as well.
Another problem is: why can't I choose PostgreSQL? I would like to have an additional feature added to the product to support either PostgreSQL or MySQL. Those are the two free databases that are enterprise-ready.
The way the tool performs scanning at the IDE level is very expensive. It makes it very expensive to deploy this tool onto many different developers' machines. Right now, the scan request is raised from the developer's IDE, but the actual scanning gets done on the centralized scan server. This increases the load on the scanning server, which makes it difficult to use Checkmarx at the developer end. That forces me to look for another solution for the developer IDE level. I would strongly recommend that Checkmarx relook at their approach.
From a technical point of view, it's better to integrate with other systems within my ecosystem. For example, when I'm connecting Checkmarx with my DevSecOps pipeline and then wiring Checkmarx with other security systems as well as the pipeline (and my defect management system), it provides the connectivity to some of the tools, but there are tools which are excluded. It would be nice if they were added to the solution itself, otherwise, it requires us to do custom development.
In terms of dashboarding, the solution could provide a little more flexibility in terms of creating more dashboards. It has some of its own dashboards that come out of the box. However, if I have to implement my own dashboards that are aligned to my organization's requirements, that dashboarding feature has limited capability right now. I would recommend much more flexibility in terms of dashboarding to help us customize more effectively.
Their licensing model is rigid and difficult to navigate.
This solution is expensive.
The customized package allows you to buy additional users at any time.
You could advise the vendor that you are in need of some more resources, and they can send you a trial license which lets you pay later. In the meantime, you can start working with the trial license.
They have subscriptions for licenses, but this is confidential information and I cannot share the price as per our non-disclosure agreement.
If you purchase a typical package then it is clear licensing with no hidden payments. You can add integration services for Checkmarx if you needed to, but it's optional.
The hardware is on the customer site. It could be virtual, or a physical server, or even cloud-based. You can choose what you want to use and there are still no hidden fees. Licensing and policy are clear.
The basic installation is easy for us but in our case, we had some additional configuration that had to be done to access our documents on the server. We were not able to complete it without help from Checkmarx because there are a lot of configuration options, and we had to make manual changes to the database as well.
There are two major use cases. One is to integrate it into the developers' workbench so that they can bench check their code against what will be done in the server-based audit version.
I think if you're going to get the paid model, I get the impression it would do pretty much everything you need as far as metrics go.
A colleague of mine did some work looking at plugins for Visual Studio and things like that, but they weren't going to work out, so we did take a look at some other options where they could have everything done on the desktop. The solution we have in place now requires an infrastructure where it doesn't look at your working code, but rather the code that you last checked in, which adds some levels of complexity that we've kind of built in anyway. How it works is a little less intuitive to the casual observer. It's set up now so that they don't have to know how it works; they can just go to the web interface and see it.
There are about eight programmers in our section of the solution. So we're kind of a smaller shop compared to some, but larger than many.
Certainly right now I think SonarQube is being underutilized, just because old habits die hard. If I had any say, I would like to change that. We had coding standards in place, but they were written documents, whereas SonarQube takes that to another level: you have to look at the specification to see whether you did what you said you were going to do. It also tells you what the industry norms are and whether or not you're meeting them. We have had some discussions about which way we want to do it: whether we want it to happen automatically or whether we want to go look for issues ourselves. I cast my vote for the automatic way, because the research has already been done by the SonarQube community to come up with these rules, coding standards, etc.
It wasn't done in a vacuum. The agile community has been beating on issues like this for a long time, and they're getting to a point that it's becoming a self-sustaining method.
The initial setup was fairly straightforward. It's well documented and the documentation is easy to read.
We rolled it out to one server that was used as a POC, which was later moved into a production environment. We then rolled out a second one for Dev to test doing upgrades, which we do on a regular basis. Every time a new LTS (Long Term Support) version comes out then we run an upgrade.
Only one person is required in order to handle the maintenance. It is easy to maintain.
We have a server license here for two servers and ten users.
We use Klocwork in two different configurations, on-premises and cloud. To summarize the on-premises setup: we connect the client directly to the on-premises server remotely. For certain products and features, we also use a local server that is on-premises but configured differently. In this case, the server is deployed with a specific rule set and configured in a certain manner locally, with the second option of redirecting the connection directly to our headquarters.
I would recommend the latest version. In the roadmap of the product, a lot of improvements have been made. We are currently on hold with moving over to this tool because of the license but once we're able to, we'll import our profiles from the previous version to the new one.
The previous version was not compatible with .NET Framework 4.7.2, and it didn't fully consider the retargeting option for C++.
I would rate Klocwork seven out of ten.
We use WhiteSource mainly to:
- Detect and automate vulnerability remediation. We started to research solutions because our dev teams were unable to both meet sprint deadlines and keep track of product security. Most of our code scans are automated and integrated within our pipeline, which integrates with our CI server. Some scans we run manually using an agent. We recently started using the repository integration with GitHub, too, pre-build.
- License reporting and attribution reports. We use attribution reports and due diligence reports to assess the risks associated with open-source licenses.
We needed a tool to ensure that we are not using vulnerable libraries or open-source libraries with a copyleft license. We integrated WhiteSource with our repositories and CI server and set up automated policies to reject copyleft licensed libraries because our legal department doesn't allow them. We also have it open Jira issues automatically when a vulnerable library is detected and assign it to an engineer so we can shorten our response time to vulnerabilities detected in our applications. It integrates nicely with our existing workflow.
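The automated policy described above — reject copyleft licenses, open a Jira issue for vulnerable libraries — can be sketched as a small decision function. The license list, field names, and issue payload are illustrative assumptions, not WhiteSource's or Jira's actual schema:

```python
# A sketch of the automated policy described above: reject copyleft-licensed
# libraries outright, and open a Jira-style issue for any accepted library
# with a known vulnerability. License list, field names, and issue schema
# are illustrative assumptions, not WhiteSource's or Jira's actual API.

COPYLEFT = {"GPL-2.0", "GPL-3.0", "AGPL-3.0", "LGPL-3.0"}

def evaluate_library(name, version, license_id, cve=None):
    """Return (rejected, issue): rejected is True for copyleft licenses;
    issue is an issue payload when a CVE is reported for an accepted
    library, ready to be assigned to an engineer."""
    rejected = license_id in COPYLEFT
    issue = None
    if not rejected and cve:
        issue = {
            "summary": f"Vulnerable dependency: {name} {version} ({cve})",
            "issuetype": "Bug",
            "labels": ["security", "open-source"],
        }
    return rejected, issue

# A copyleft library is rejected; a vulnerable MIT library opens an issue.
rejected, _ = evaluate_library("somelib", "1.0.0", "GPL-3.0")
ok, issue = evaluate_library("otherlib", "2.1.0", "MIT", cve="CVE-2020-0001")
```

Wiring a function like this into the CI step is what shortens the response time the reviewer mentions: the issue is created and assigned the moment the scan flags the library, rather than after a manual review.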
We use WhiteSource mainly to automate open source vulnerability detection and remediation, as well as for license compliance.
I’m less on the side of the license but mainly use the service to get control over vulnerabilities, detect the ones that affect us and remediate accordingly.
Sonatype Nexus Lifecycle: Server
The scalability is good but it can be improved. I think they're working on it, but it needs to be clusterable. The best case is to have a cluster, a native cluster, for IQ Server, to improve the availability.
Scalability is not an issue. We have a microservices architecture and we've got about 150 applications in there and we scan them quite regularly. When we first started, we had a lot fewer applications, we were sending about five gigs of scanning data requests to the Sonatype servers every day. They were able to handle that. We had issues before, but I think they were more networking configuration issues, and they could have been on our side. But that has all been resolved and there are no issues.
Before, we had the open-source Nexus Repository, but with Lifecycle we have Nexus RM and IQ Server as well, and we can scan .jars. In addition, we have the plugins for individual developers, which benefits us and the developers when they introduce a new artifact into their applications. It helps them identify potential risks and defects, so they can resolve them right there and proceed with their development.
It also brings intelligence to the open-source artifacts: the server scans for all the vulnerabilities, identifies the problems, and then we can ask the individual teams to fix them. That is a plus.
The solution blocks undesirable open-source components from entering our development lifecycle. There are certain .jars which we can block.
In terms of open-source governance, the tool tells us all the threats that are out there in the public repositories, threats which, potentially, no one knows about. We get to know them, and we can use the tool to let other people know which direction to go in.
The solution has improved the time it takes us to release secure apps to market by at least 50 percent. It has also increased developer productivity to some extent because of the plugin which is included for the IDE. It gives a report of the vulnerabilities. It does save time in figuring out the right open-source versions that we need to use. It has helped improve the productivity of the developers by about ten percent.
We haven't had an instance where we have run into such high volume that we needed to scale. The only change we made was to increase memory, because we started utilizing the API. In terms of redundancy, all the data sits in the database. We have backed up the folder structure, and the worst case is that we just restore that folder structure onto any server. You could run it in Docker as well, if you wanted to, so that it is immutable. It's been made to be a lift-and-shift type of product.
We have 100 users actively using it at the moment. They are developers, mostly.
Our primary use case is for SCA testing; this is the software composition analysis that we need to do. In our apps, we do a lot of bespoke development and use a lot of third-party components. Therefore, it is critical to know what is embedded within the third-party components that we may not directly be responsible for. The main use case is scanning and ensuring that the deployments we are adding to our servers are as secure as we can make them.
We use it for scanning alone. That is our way of mitigating risk.
We just upgraded to the latest version.
Overall, the stability is pretty good. I haven't figured this out yet, but occasionally we do see failures in the Jenkins build. I haven't figured out why yet. I don't know if it's an issue with our Jenkins server or if it's with Sonatype. But otherwise, it seems pretty stable.
A bigger, ongoing use case is security. Sonatype checks security vulnerabilities that come up for all these libraries. Oftentimes, as a developer, you add a library that you want to use, and then you might check for security issues. Sometimes a problem comes up after your product is already live. IQ Server checks all libraries that we're using for security issues, reporting these, and allowing us to go through and see them to determine, "Is this something that we can waive?" It might be a very specific use case which doesn't actually affect us or we might have to mitigate it. Also, if a vulnerability or security issue is found in libraries later, it will send out alerts and notifications if a library is being used in our production environment, letting us know there is an issue. This allows us to address it right away, then we can make the decision, "Do we want to do a hotfix to mitigate this? Or is it something that isn't an issue in our case because we're not using it in a way that exposes the vulnerability?" This gives us peace of mind that we will be notified when these types of things occur, so we can then respond to them.
We are using the Nexus Repository Manager Pro as exactly that, as an artifact repository. We tend to store any artifact that our application teams build in the repository solution. We also use it for artifacts that we pull down from open-source libraries that we use and dependencies that come from Maven Central. We use it to proxy a few places, including JCenter. We also use it as a private Docker registry, so we have our Docker images there as well.
We're on version 3.19. We also have Nexus IQ server, which wraps up within it Nexus Firewall.
Before IQ server we used an open-source solution called OWASP Dependency-Check. We wanted something a little more plug-and-play, something a little more intuitive to configure and automate.
One thing that I would like to give feedback on is scanning binary code: the option is very difficult to find. It's under Organization and Policies, behind action buttons that are not very obvious. For people who use the product but are not deeply integrated with it, the button to load a binary and run the scan is not easy to find. This matters when there is no existing continuous integration process, which I believe most people have, but some users don't at the moment. This is the most important function of Nexus IQ, so I expect it to be right on the dashboard, where you can supply your binary and do a quick scan. Right now, it's hidden inside Organization and Policies. If you select the organization, then you can see in the top corner that there is a manual action you can approve. There are multiple steps to reach that important function. When we initially looked at the dashboard, we looked for it and couldn't find it, so we called our coworker who set up the server, and they told us it's not on the dashboard. This comes down to usability.
There is another usability issue in the reports section. When the PDF gets generated, it is different from the web version. Some components from some areas reside only inside the PDF version. When I generated the PDF for my boss to review, she came back with a question about something I hadn't even seen. What I see on the reporting page should be whatever the PDF will generate, but the PDF actually includes more information than the web version. That caught me off guard, because she forwarded it to the security officer, who asked, "Why is this? Why is that?" But she had no idea, and I didn't have an answer handy, because I had only seen the web version, which should be the same as the PDF. This is a bit of a misrepresentation. I would like these two versions to be consistent with each other. Printing a PDF report should generally reflect whatever you have on the page.
It's very stable. I don't recall ever seeing problems. The main concern would be data-disk corruption, but I haven't seen it, even though the server, due to patching, has been rebooted multiple times.
We have a few applications that we're developing that use several different languages. The first ones we did were Python and Yum Repository applications. Recently we've started scanning C and C++ applications that use Conan Package Manager. We will soon start doing node applications with NPM. Our use case is that we primarily rely on the IQ server to ensure we don't have open source dependencies in our applications that have security vulnerabilities, and to ensure that they're not using licenses our general counsel wants us to avoid using.
We have it implemented and integrated into our CI/CD pipeline, for when we do builds. Every time we do a build, Jenkins reaches out and kicks off a scan from the IQ Server.
We use it to automate open source governance and minimize risk. All of our third-party libraries, everything, comes through our Nexus, which is what the IQ Server and Jenkins are hooked into. Everything being developed for our big application comes through that tool.
We have Nexus Firewall on, but it's only on for the highest level of vulnerabilities. We have the firewall sitting in front to make sure we don't let anything real bad into the system.
Our environment is your standard, three-tiered environment. We have the developers develop in their Dev and Test environments, and as the code moves through each environment — Test and a QA environment — it goes through a build process. We build each time we deploy.
We're addressing anything that is a nine and above. If it's a 10, we don't let it into our system; the firewall server stops it. If we have nines we'll let it in, but I'll tag the developers and they'll have to do a little triage to figure out if the problem that is being reported is something we utilize in our system — if it's something that affects us — and if it's not, we flag it as such and let it go. We either waive it or I'll acknowledge it depending on how much it's used throughout the system and how many different components are being built with that bad library.
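The gating policy just described maps naturally onto a three-way decision by CVSS score. This sketch is illustrative only, not Nexus Firewall's actual interface:

```python
# A sketch of the gating policy described above: CVSS 10 components are
# blocked by the firewall, nines are admitted but flagged for developer
# triage, and everything below passes. Illustrative only; not Nexus
# Firewall's actual interface.

def gate_component(cvss_score):
    """Map a component's CVSS score to 'block', 'triage', or 'allow'."""
    if cvss_score >= 10:
        return "block"    # never enters the system
    if cvss_score >= 9:
        return "triage"   # admitted, but developers must assess impact
    return "allow"

decisions = [gate_component(s) for s in (10.0, 9.3, 7.5)]
```

Keeping "triage" distinct from "block" is the key design choice: it lets the team admit a nine while still forcing the waive-or-acknowledge decision the reviewer describes.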
Our whole process of deploying code uses Snyk either as a gateway or just to report on different build entities.
The solution's ability to help developers find and fix vulnerabilities quickly is a great help, depending on how you implement it at your company. The more you empower your developers to fix their stuff, the less policies you will have to implement. It's a really nice feeling and just a paradigm shift. In our company, we had to create the habit of being proactive and fixing your own stuff. Once the solution starts going, it eases a lot of management on the security team side.
Snyk's actionable advice about container vulnerabilities is good. For the Container tool, they'll provide a recommendation about what you can do to fix your Docker, such as change to a slimmer version of the base image. A lot of stuff is coming out for this tool. It's good and getting better.
The solution’s Container security feature allows developers to own security for the applications and the containers they run in in the cloud. That is its aim. Since we are letting the developers do all these things, they are owning the security more. As long as the habit is there to keep your stuff up-to-date, Snyk won't have any effect on productivity. However, it will have a lot of effect on security team management. We put some guardrails on what cannot be deployed. After that, we don't have to check as much as we used to because the team will just update their stuff and try to aim for lower severities.
Our overall security has improved. We are running with fewer severe vulnerabilities in our packages. We fixed a lot of vulnerabilities that we didn't know were there. Some of them, however, were hard to exploit, which mitigated the risks for us, e.g., they were on a firewalled server or in unreachable application code. Though I don't recall finding something where we said, "This is really bad. We need to fix it ASAP."
Contrast Security Assess: Server
The daily reporting of vulnerabilities is very helpful for our development team. They can log in to the Contrast tool and see the vulnerabilities and start working to mitigate them before my test-app team reaches out to them inquiring about when certain vulnerabilities are going to be remediated. A case in point was last week, when I followed up with one of my developers. I said, "We need to mitigate this set of vulnerabilities," and he said, "Well, I've already started mitigating them. You should see the JIRA ticket out pretty soon." It's that type of response that we really like with Contrast. It allows us to move faster than if we were just using a SAST tool.
Before Contrast, everything was done manually. The developers were doing their own code reviews as best they could. When I came in and I started having the application security meetings, I found that most of the developers were very adept at building code for functionality, and testing functionality based on their unit tests. We had something like 25 or 30 developers in my class, and only one person was familiar with application security. That should tell you how far behind we were. So we had a heavy educational push, bringing in Contrast personnel for onsite application security training and to learn how to integrate Contrast into our SDLC. They showed them what the vulnerabilities are and how to mitigate them. The change from three years ago to now is one of the benefits.
Once we got it implemented — deployed the agents onto the application servers and got those vulnerabilities populating into our team server — we came up with a Visio diagram of our processes for Agile development and for Waterfall development, and it really turned around how our company is able to identify, mitigate, and roll out fixes for our security vulnerabilities.
It also helps developers incorporate security elements while they're writing code. Our development team has it on their local box, through the IDE, and as they are building the functionality they're running the scans at that time. They correct some of the vulnerabilities right there before passing it along on the SDLC. Sometimes they will miss things and we'll catch them in our QA environment. It has positively affected our software development because, before that, everything was manual. When we brought in Contrast, it exposed how many vulnerabilities, criticals and highs, had been missed. The difference between doing purely manual reviews and doing a review with instrumentation was very stark.
It's hard to quantify how much time and money it has saved us by fixing software bugs earlier in the software development lifecycle. There's time, cost, and public image. In terms of the costs saved, we had something like 2,000 vulnerabilities — some critical and some high — and I don't even know how to put a price on that. Sometimes a vulnerability can end up costing 100 times what it would cost to fix in a development environment. So you can start to calculate what that cost would be, per vulnerability. And then we're looking at the time to detect, mitigate, validate, and then roll out to production. And correcting these vulnerabilities before they get into our production network is crucial to our image. If we were still doing manual reviews, we probably would not know of the critical and high vulnerabilities that we've found using Contrast. It would just be a matter of time before some hacker exploited those vulnerabilities for PHI data.
Another great benefit that Contrast has allowed us to enjoy is that there was no push-back from our development teams. Normally, in an organization, when you bring up security, developers gripe and moan because they look at security as a hindrance. But they were very receptive, very eager, and asked a lot of questions. We had two or three sessions with Contrast and, even today, developers are highly engaged with using the tool. They have implemented it into their development lifecycle process, both for our Agile teams and our Waterfall teams. It's been a huge turnaround here at FEPOC.
Management loves it. Having Contrast expose so many vulnerabilities that are in the applications means there's heavy pressure now, for 2020, to mitigate them. But it's a funny thing. Normally this task would be very cumbersome and problematic because of the number of vulnerabilities, but everyone loves the tool and the Contrast personnel are very helpful and very responsive. I'm enjoying it and I think our development and our test-app teams are as well. We have a very high adoption rate in our company.
The effectiveness of the solution's automation via its instrumentation methodology is good, although it still has a lot of room for growth. The documentation, for example, is not quite up to snuff. There are still a lot of plugins and integrations coming out from Contrast to help it along the way. It's really geared more toward smaller companies, whereas I'm contracting for a very large organization. A tool's ability to be turnkey is probably the one thing that will set it apart, and Contrast isn't quite to the point where it's turnkey.
Also, Contrast's ability to support upgrades of the actual agents that get deployed is limited. Our environment is pretty much entirely Java, and there is no automatic update mechanism for the agent. You have to actually download a new version of the .jar file yourself and push it out to the servers where your app is hosted. That can be quite cumbersome from a change-management perspective.
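To make the manual upgrade flow concrete, a minimal sketch of what "download a new version of the .jar file and push it out" might look like is below. The version number, hostnames, paths, and download location are illustrative assumptions, not values taken from this review; check the vendor's documentation for the actual distribution channel for your license.

```shell
# Hypothetical manual upgrade of the Contrast Java agent (paths/hosts are examples).
VERSION="6.10.0"   # example version; check with the vendor for the current release
STAGED="contrast-agent-${VERSION}.jar"

# Download the new agent jar (example URL; your source may differ).
curl -fsSL -o "${STAGED}" \
  "https://repo.example.com/contrastsecurity/contrast-agent/${VERSION}/${STAGED}"

# Push it to each application server and swap it into place.
for HOST in appserver01 appserver02; do
  scp "${STAGED}" "${HOST}:/opt/contrast/contrast-agent.jar.new"
  ssh "${HOST}" "mv /opt/contrast/contrast-agent.jar.new /opt/contrast/contrast-agent.jar"
done

# A JVM restart is then needed so the new -javaagent jar is picked up.
```

Because each push and restart goes through change management, automating steps like these is typically what teams do to reduce the overhead described above.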
It depends on how many apps a company or organization has, but whatever apps you have, you can scale it to them. It has wide coverage. Once you install it on an app server, even if the app is very convoluted and has many workflows, that is no problem. Contrast is licensed per app. It's not like source-code tools, which charge by lines of code, per KLOC. Here, it's per app. You can pick 50 apps or 100 apps and then scale it. If the app is complex, that's still no problem, because it's all per app.
We have continuously increased our license count with Contrast, because of the ease of deployment and the ease of remediating vulnerabilities. We had a fixed set for one year. When we updated about six months ago, we did purchase extra licenses and we intend to ramp up and keep going. It will be based on the business cases and the business apps that come out of our organization.
Once we get a license for an app, folks who are project managers and scrum masters, who also have access to Contrast, get emails directly. They know they can file defects straight from Contrast into JIRA. We also integrate with other tools, like ThreatFix, and with governance, risk, and compliance tools. We take the results and upload them to those tools for the audit team to review.
We've been using Contrast Security Assess for our applications that are under more of an Agile development methodology, those that need to deliver on faster timelines.
The solution itself is inherently cloud-based. The TeamServer aspect, the consolidated portal, is hosted by the vendor, and we have the actual Assess agents deployed on-prem in our own application environments.
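For readers unfamiliar with this hybrid model, a sketch of how the on-prem piece typically attaches is below: the Assess agent for a Java app is loaded at JVM startup and reports back to the vendor-hosted TeamServer. The paths, URL, and property name here are assumptions for illustration, not configuration taken from this review; consult the agent's installation guide for the real keys.

```shell
# Hypothetical JVM startup on an on-prem app server (illustrative values only).
# The -javaagent flag loads the locally deployed Assess agent; the agent then
# reports findings to the vendor-hosted TeamServer portal.
java \
  -javaagent:/opt/contrast/contrast-agent.jar \
  -Dcontrast.api.url=https://teamserver.example.com/Contrast \
  -jar /opt/myapp/myapp.jar
```

This split is why the portal upgrades transparently while the agent jars, as noted earlier, have to be upgraded on each server yourself.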
For what it offers, it's a very reasonable cost, and the way it is priced is extremely straightforward: it works on the number of applications you use, and you license a server. It is extremely fair because it doesn't take the number of requests, etc., into consideration; it is priced only on the number of applications. That suits our model as well, because we have huge traffic but our number of onboarded applications is not that large, so the pricing works great for us.
There is a very small fee for the additional web node we have in place; it's a near-nonexistent cost. And if you apply it to existing web nodes, even that is eliminated. It just suits our solution.