
Application Security Jenkins Reviews

Showing reviews of the top-ranking products in Application Security that mention the term "Jenkins"
Veracode: Jenkins
DevSecOps Consultant at a comms service provider with 10,001+ employees

We use the Veracode SAST solution to scan our Java, Node.js, and Python microservices as part of our CI/CD pipeline, where our CI/CD servers are Bamboo, Jenkins, and GitLab CI/CD.

We have teams for both our cloud pipeline and on-prem pipeline, and both teams use this solution. We are using Veracode to continuously scan the internal application source code and ensure the code's security hygiene.
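
As a rough sketch of the kind of pipeline integration described here, a declarative Jenkins pipeline might invoke the Veracode Jenkins plugin's upload-and-scan step as follows. This is illustrative, not the reviewer's actual configuration; the application name and credential ID are placeholders:

    pipeline {
        agent any
        stages {
            stage('Build') {
                steps {
                    sh 'mvn -B clean package'  // produces target/*.jar for upload
                }
            }
            stage('Veracode SAST') {
                steps {
                    // 'veracode' is the step contributed by the Veracode Jenkins plugin
                    withCredentials([usernamePassword(credentialsId: 'veracode-api-creds',
                                                      usernameVariable: 'VID',
                                                      passwordVariable: 'VKEY')]) {
                        veracode applicationName: 'example-microservice',  // placeholder
                                 scanName: "build-${env.BUILD_NUMBER}",
                                 uploadIncludesPattern: 'target/*.jar',
                                 vid: env.VID, vkey: env.VKEY
                    }
                }
            }
        }
    }

Bamboo and GitLab CI/CD have analogous hooks; the same upload can also be driven from Veracode's API wrappers where no plugin exists.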

Principal for the Application Security Program and Access Control at an engineering company with 10,001+ employees

We clearly mentioned during our purchase cycle that we have C++ code, Swift code, Python libraries, etc. We were given assurances that these were absolutely covered under the solution. However, when we started investigating through support tickets, they admitted that these were not supported. We have very limited support for C++ code scans and other things. That was a bummer from my perspective.

The support has been good. However, we work in an agile environment and our release cycles are literally every two weeks. Their response times have been very delayed, especially as we are in the Pacific Time Zone and they are in the Eastern Time Zone. 

They have a great support portal to do self-service. We have been pretty impressed with that, but we soon realized that anything you pick is 10 days to two weeks out. That has been a non-starter for us. We had to constantly escalate through our account team to get an engineer on the call, because we were in the middle of a release and needed to scan the product at the moment.

At this point, we are doing sandbox scanning. We have implemented it with our Jenkins CI/CD tool to really scan the code, upload, etc. It took a while for us to figure it out because the support wasn't really helpful. We had to hack our way into getting through the documentation. Since they acquired SourceClear, they haven't really cleaned up or integrated the documentation well, and that may be one of the reasons. However, we were able to find the right combination of keys to make it work.
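
The sandbox scanning flow the reviewer mentions can be approximated from a Jenkins stage with Veracode's Java API wrapper. A sketch only: the jar name, app name, and sandbox name below are placeholders, and exact arguments vary by wrapper version:

    stage('Veracode sandbox scan') {
        steps {
            withCredentials([usernamePassword(credentialsId: 'veracode-api-creds',
                                              usernameVariable: 'VID',
                                              passwordVariable: 'VKEY')]) {
                // UploadAndScan against a sandbox rather than the main application profile
                sh '''
                    java -jar VeracodeJavaAPI.jar \
                         -action UploadAndScan \
                         -vid "$VID" -vkey "$VKEY" \
                         -appname "example-app" \
                         -sandboxname "feature-branch-sandbox" \
                         -version "build-$BUILD_NUMBER" \
                         -filepath target/app.jar
                '''
            }
        }
    }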

Manager, Information Technology at Broadcom Corporation

The most valuable feature, from a central tools team perspective, which is the team I am part of, being a DevSecOps person, is that it is SaaS hosted. That makes it very convenient to use. There is no initial time needed to set up an application. Scanning is a matter of minutes. You just log in, create an application profile, associate a security configuration, and that's about it. It takes 10 minutes to start. The lack of initial lead time or initial overhead to get going is the primary advantage. 

Also, because it's SaaS and hosted, we didn't have any infrastructure headaches. We didn't have to think about capacity, the load, the scan times, or the distribution of teams across various instances. All of this, the elasticity of it, is a major advantage.

There are two aspects to it. One is the infrastructure. The other one is the configuration. There are a lot of SaaS solutions where the infrastructure is taken care of, but the configuration of the application to start scanning takes some time to gain knowledge about it through research and study. That is not the case with Veracode. You don't have any extensive security profiles to consider. It's a two-pronged advantage.

Veracode also reports far fewer false positives with the static scanning. The scanner just goes through the code and analyzes all the security vulnerabilities. A lot of scanning tools in the market give you a lot of false positives. The false positive rate in Veracode is notably less. That was very helpful to the product teams as they could spend most of their time fixing real issues.

Veracode provides guidance for fixing vulnerabilities and that is one of their USPs—unique selling propositions. They provide security consultations, and scheduling a consultation is very easy. Once a scan is completed, anybody who has a Veracode login can just click a button and have a security consultation with Veracode. That is very unique to Veracode. I have not seen this offered in other products. Even if it is offered, it is not as seamless and it takes some time to get security advice. But with Veracode, it's very seamless and easy to make happen.

Along those lines, this guidance enables developers to write secure code from the start. One of the advantages of Veracode is its ability to integrate the scanning with the DevOps pipeline, as well as into the IDEs of the developers, like Eclipse, IntelliJ, or Visual Studio. This type of guidance helps developers left-shift their secure-coding practices, which really helps in writing far more secure products.

Another unique selling point of Veracode is their eLearning platform, which is available with the cloud-hosted solution. It's integrated into the same URL. Developers log into the Veracode tenant, go through the eLearning Portal, and all the courses are there. The eLearning platform is really good and has helped developers improve their application security knowledge and incorporate it in their coding practices.

One of the things that Veracode follows very clearly is the assignment of a vulnerability to the CWE standard or the OWASP standard. Every vulnerability reported is tied to an open standard. It's not something proprietary to Veracode. But it makes it easy for the engineers and developers to find more information on the particular bug. The adherence to standards helps developers learn more about issues and how to fix them.

We use the Static Analysis Pipeline Scan as part of the CI pipeline in Jenkins or TeamCity or any of the code orchestrators that use scanning as part of the pipeline. There's nothing special about the pipeline scan. It's like our regular Veracode Static Analysis Scan. It's just that if it is part of the pipeline, you are scanning more frequently and finding flaws at an earlier point in time. The time to identify vulnerabilities is quicker.
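
For illustration, the Pipeline Scan is distributed as a standalone jar that can be dropped into any CI stage. A sketch, with an illustrative severity threshold and placeholder credential ID:

    stage('Veracode pipeline scan') {
        steps {
            withCredentials([usernamePassword(credentialsId: 'veracode-api-creds',
                                              usernameVariable: 'VID',
                                              passwordVariable: 'VKEY')]) {
                sh '''
                    curl -sSO https://downloads.veracode.com/securityscan/pipeline-scan-LATEST.zip
                    unzip -o pipeline-scan-LATEST.zip pipeline-scan.jar
                    # Fails the stage when findings at or above the listed severities exist
                    java -jar pipeline-scan.jar \
                         --veracode_api_id "$VID" \
                         --veracode_api_key "$VKEY" \
                         --file target/app.jar \
                         --fail_on_severity="Very High, High"
                '''
            }
        }
    }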

Veracode integrates with the integrated development environments that developers use to write code, including Microsoft Visual Studio, Eclipse, and IntelliJ IDEA. It also integrates with project and portfolio management tools like JIRA and Rally. That way, once vulnerabilities are reported, you can actually track them by exporting them to your project management tools, your Agile tools, or your Kanban boards. The more integrations a scanning tool has, the better it is, because everything has to fit into the DevOps or DevSecOps pipeline. The more integrations it has with the continuous integration tools, the IDEs, and the product management tools, the better it is. It affects the adoption. If it is a standalone system, the adoption won't be great. The integration helps with adoption because you don't need to scan manually. You set it up in the pipeline once and it just keeps scanning.

Acunetix by Invicti: Jenkins
IT Manager at a financial services firm with 1,001-5,000 employees

I would recommend the product. It's very easy to integrate with Jenkins, with ALM. The most important element for us is that it's very easy for developers to use. They don't need to have any knowledge about security, threats or anything. They just run the tool against their application, and that's it. They get the results.

I would rate this product a seven out of 10.

PortSwigger Burp Suite Professional: Jenkins
Lead Security Architect at SITA

Although it provides a great write-up for the identified vulnerabilities, reporting needs to improve, with various reporting templates based on standards like OWASP, SANS Top 25, etc. The tool needs to expand its scope for mobile application security testing, where native mobile apps can be tested, and it should provide an interface to integrate with mobile device platforms or mobile simulators. Burp Suite has a great ability to integrate with Jenkins, Jira, and TeamCity in a CI/CD pipeline, and it should provide better ways of integrating with other, similar platforms.

Checkmarx: Jenkins
CEO at a tech services company with 11-50 employees

The initial setup is pretty simple, it's no problem to start using Checkmarx. It's a very good approach if you compare it with competitors.

It only takes a few hours to tune your Checkmarx solution. You may need more time for deeper integration when it comes to SDLC integration, for example, when using a build-management plugin such as Jenkins.

If you are scanning and you have the source code, then you are good to start scanning in a few hours. Three to four hours are required for tasks done in the source code.
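
For illustration, a Checkmarx scan kicked off from a Jenkins pipeline might look roughly like the following. The CxScanBuilder parameter set varies by plugin version, and all values here are placeholders:

    stage('Checkmarx SAST') {
        steps {
            // Build step contributed by the Checkmarx Jenkins plugin
            step([$class: 'CxScanBuilder',
                  projectName: 'example-project',
                  serverUrl: 'https://checkmarx.example.com',
                  credentialsId: 'cx-credentials',
                  sastEnabled: true,
                  incremental: true,              // full scans can be scheduled separately
                  waitForResultsEnabled: true,
                  vulnerabilityThresholdEnabled: true,
                  highThreshold: 0])              // fail the build on any high finding
        }
    }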

We have one or two engineers who can work with the solution.

Some of our customers have more than 100 developers and a DevOps team.

SonarQube: Jenkins
Devops Engineer at a financial services firm with 10,001+ employees

The most valuable feature is the security hotspot feature that identifies where your code is prone to have security issues.

It also gives you a very good highlight of what's changed, and what has to be changed in the future.

Apart from that, there are many other good features as it's a code analytics platform. It also has a dashboard reporting feature, which is very good. I also like the ease of its integration with Jenkins.

Another valuable feature is the time snapshot that it provides for the code. It provides the code quality, the lagging and the trending features, like what has already gone wrong and what is likely to go wrong. It's a very good feature for a project to have a dashboard where the users can find everything about their project at a single glance.
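
The Jenkins integration the reviewer likes typically amounts to two pipeline stages, sketched below. 'MySonarServer' must match a server configured in Jenkins, and the quality-gate step assumes the SonarQube webhook back to Jenkins is set up:

    stage('SonarQube analysis') {
        steps {
            // withSonarQubeEnv comes with the SonarQube Scanner Jenkins plugin
            withSonarQubeEnv('MySonarServer') {
                sh 'mvn -B sonar:sonar'
            }
        }
    }
    stage('Quality gate') {
        steps {
            timeout(time: 10, unit: 'MINUTES') {
                // Aborts the pipeline if the project fails its quality gate
                waitForQualityGate abortPipeline: true
            }
        }
    }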

Sr DevOps Engineer at incatech

It's convenient due to the fact that it's open-source. 

We're able to identify bugs and those kinds of things before we actually push anything into a staging or production area. It helps our developers work more efficiently as we can identify things in the code prior to it being pushed to where it needs to go. It's a great little loop: you see the issue, fix it, and take it back, versus putting something into an environment and then finding everything is broken. It's a good development test tool.

Nowadays you can add extensions, similar to what you can do with the Jenkins tool, the CI/CD build tool. Jenkins can have a lot of plugins that interface with a lot of vendors, and it can do a lot of things. Just like Google Chrome, where you can bring in an extension, you can do the same here. In SonarQube, you can add something by just adding an extension that you may have to pay extra for. However, that add-on has additional functionality that the base software may not necessarily have in its core.

For example, Fortify has some special checking capability of its own, and SonarQube has created an extension that allows Fortify integration. Right now, I have Fortify; however, it's in this product at a very modular level.

Coverity: Jenkins
Senior Technical Specialist at a tech services company with 201-500 employees

The most valuable feature is the integration with Jenkins. Jenkins can be used to automatically run it to perform the code analysis.

Integration with GitLab is helpful.
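
A minimal sketch of that kind of automation, driving Coverity's command-line tools from a Jenkins stage. The server URL and stream name are placeholders, and flags vary by Coverity version:

    stage('Coverity analysis') {
        steps {
            sh '''
                cov-build --dir cov-idir mvn -B clean package   # capture the build
                cov-analyze --dir cov-idir --all                # run all checkers
                cov-commit-defects --dir cov-idir \
                    --url https://coverity.example.com \
                    --stream example-stream                     # placeholder stream
            '''
        }
    }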

Senior Solutions Architect at a computer software company with 11-50 employees

I used CodeSonar a few years back. Both tools have their advantages. In any static analysis tool, the first stage is the instrumentation of the source code. It'll try to capture the skeleton of your source code. So when I compare them based on the first phase alone, Coverity is far better than CodeSonar. 

They both use a similar technique, but CodeSonar uses up way more storage resources. For example, to scan a 1GB code base, CodeSonar generates more than 5GB of instrumented files for every 1GB of code base. In total, that is 6GB. Coverity generates 500MB extra on top of 1GB, so that equals 1.5GB all in. That's a huge difference. CodeSonar would eat up my disk space and hardware resources when I used it, whereas Coverity is minimal.

In terms of checkers, both CodeSonar and Coverity cover a good length and breadth, especially for the C and C++ programming languages. But CodeSonar focuses on only four languages (C, C++, Java, and C#), whereas Coverity supports more than 20 programming languages.

Also, the two are comparable with respect to their plugin offerings, but there are crucial differences. For example, CodeSonar only focuses on well-known integrations, like Jenkins and JIRA, but you cannot expect all customers to use the same tools. Coverity supports almost all CI/CD tools, including Jenkins and Bamboo. It also integrates with service providers like Azure DevOps Pipelines and AWS CodePipeline, which CodeSonar hasn't added yet. The plugins are available in the marketplace, and you don't have to pay extra. You just download the plugin from the marketplace, hook it into your pipeline, and it's ready to use. So these are three major use cases when you compare apples to apples between CodeSonar and Coverity.

WhiteSource: Jenkins
User at a tech vendor with 1,001-5,000 employees

Our primary use for WhiteSource is security and license risk detection in open-source, third-party libraries and components. We run scans from multiple source control and build systems (TFS, ADO, Jenkins, ...). Some of our scans are automated, while others are done manually with the unified agent in an offline-mode scan, and then the resulting "wsjson" file is uploaded to the WS SaaS portal.
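
A sketch of that manual offline flow, running the Unified Agent from a Jenkins job. The config file name, output path, and flags follow the agent's documented offline mode but may differ by version, and the exact name of the generated "wsjson" file depends on configuration:

    stage('WhiteSource offline scan') {
        steps {
            sh '''
                # Offline mode writes an update-request file locally instead of
                # contacting the WhiteSource service
                java -jar wss-unified-agent.jar -c wss-unified-agent.config \
                     -d . -offline true
                # The generated request file is later uploaded from a connected host:
                java -jar wss-unified-agent.jar -c wss-unified-agent.config \
                     -requestFiles whitesource/update-request.txt
            '''
        }
    }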

Sonatype Nexus Lifecycle: Jenkins
IT Security Manager at an insurance company with 5,001-10,000 employees

The key feature for Nexus Lifecycle is the proprietary data they have on vulnerabilities. The way that they combine all the different sources and also their own research into one concise article that clearly explains what the problem is. Most of the time, and even if you do notice that you have a problem, the public information available is pretty weak. So, if we want to assess if a problem applies to our product, it's really hard. We need to invest a lot of time digging into the problem. This work is basically done by Sonatype for us. The data that it delivers helps us with fixing or understanding the issue a lot quicker than without it.

The solution integrates well with our existing DevOps tools. We have a few different ways of integrating it. The primary point is the Jenkins plugin to integrate it into the pipeline, but we also use the API to feed applications from our self-developed systems. So, the Sonatype API is very valuable to us as well. We've also experimented with IDE plugins and some other features that all look very promising.
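
For illustration, the Jenkins plugin integration the reviewer describes usually reduces to a single policy-evaluation step in the pipeline. A sketch; the application ID and stage are placeholders:

    stage('Nexus IQ policy evaluation') {
        steps {
            // Step contributed by the Nexus Platform Jenkins plugin
            nexusPolicyEvaluation iqApplication: 'example-app',
                                  iqStage: 'build',
                                  failBuildOnNetworkError: true
        }
    }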

DevOps Engineer at Guardhat

We have it running on the majority of our builds for all of our applications and we use Jenkins for our build system. Eventually, the goal is to incorporate this into Jenkins so that if we don't get a good enough result on both Nexus IQ and SonarQube, we'll actually fail the Jenkins build. That way we force ourselves to maintain good metrics on both of them. So Nexus IQ is making sure that we're using dependencies that don't have known vulnerabilities. And SonarQube is making sure that our code maintains a certain level of quality.

Unfortunately, we haven't been able to take full advantage of Nexus. It's set up and it's working, but we haven't rolled it fully into our development process. Our builds use it, but we're not using the information from it a whole lot. The solutions are running, but we're not enforcing the results from them and, therefore, our developers aren't driven to make absolutely sure that they are going well. Hopefully, we'll get there soon.
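
The gate this reviewer is aiming for could be wired up roughly like this, failing the Jenkins build on either a SonarQube quality-gate error or a Nexus IQ policy violation. A sketch under those assumptions; the application ID and timeout are illustrative:

    stage('Quality and security gates') {
        steps {
            timeout(time: 10, unit: 'MINUTES') {
                waitForQualityGate abortPipeline: true        // SonarQube gate
            }
            // Fails the build when the IQ evaluation reports a "fail" action
            nexusPolicyEvaluation iqApplication: 'example-app', iqStage: 'build'
        }
    }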

Sr. DevOps Engineer at Primerica

It's allowed our developers, instead of waiting till the last minute before a release, to know well ahead of time that the components are bad and they are able to proactively select different components that don't have a vulnerability or a licensing issue.

Also, the solution's data quality seems to be good. We haven't had any issues. We're definitely able to solve problems a lot faster and get answers to the developers a lot faster.

And Nexus Lifecycle integrates well with your existing DevOps tools. We were able to put it right into our build pipelines. We use Jenkins and we're able to stop the builds right in the actual build process whenever there's a quarantined item.

In addition, it has brought open-source intelligence and policy enforcement across our SDLC. It has totally changed the way we do our process. We have been able to speed up the approval process of OSS. Given the policies, we're able to say, "These are okay to use." We've been able to put in guardrails to allow development to move faster using the product. Our pipelines are automated and it is definitely a key component of our automation.

Finally, the developers like it because they're able to see and fix their issues right away. That has improved. For example, let's say a developer had to come to us and said, "Hey, scan this. I want to use it," and we scan it and it has a vulnerability. They've already asked us to do something that they could have done through the firewall product or Lifecycle. Suppose it takes us a day and then we turn around and say, "Okay, here are the results," and we say they can use this version of that product. They've got to download it and see if it works. So we're already saving a day there. But then let's say they have to send it off to security to get approval on something that security would probably approve anyways. It's just they didn't know security would approve it. They would have to wait two or three days for security to come back and give them an answer. So we're looking at possibly saving four days on a piece of code.

Sr. Enterprise Architect at MIB Group

We have a lot of legacy applications here and they're all built with Ant scripts and their dependencies come from a shared folder. There's not a lot of "accountability" there. What we get out of using Nexus is that all of our dependencies are in the same place and we can specify a specific version. We no longer have a situation where somebody has pulled down a .jar file and stuck it in this folder and we don't know what the version is or where, exactly, it came from. That's one of the benefits.

Another of the main things we get is what Sonatype calls a "bill of materials." We can go into our Nexus product and say, "Okay, here is our ABC application. What are its dependencies?" And we can be specific down to the version. We know what's in it and, if a vulnerability gets reported, we can look and see if we use that particular component and in which applications, to know if we're vulnerable. If we find we're exposed to that vulnerability we know we need to go and remediate it.

The biggest benefit we get out of it is the overall ease of development. The ability to automate a lot of the build-and-deploy process comes from that.

The data quality helps us solve problems faster, as in the security vulnerability example I just mentioned. In those circumstances, we have to solve that problem. Previously, we wouldn't have seen that vulnerability without a painstaking process. Part of the Nexus product, the IQ Server, will continually scan our components and if a new CVE is reported, we get that update through Nexus IQ. It automatically tells us, "Hey, in this open-source library that you're using, a vulnerability was found, and you use it in these four applications." It immediately tells us we are exposed to risk and in which areas. That happens, not in near real-time, but very quickly, where before, there was a very painstaking process to try to find that out.

A year ago we didn't have DevOps tools. We started building them after I came on. But Nexus definitely integrates very well with our DevOps tools. Sonatype produces plugins for Jenkins to make it seamlessly interact, not only with the repo product, but with the Nexus IQ product that we own as well. When we build our pipelines, we don't have to go through an array of calls. Even their command-line is almost like pipeline APIs that you can call. It makes it very simple to say "Okay, upload to Nexus." Because Jenkins knows what Nexus is and where it is — since it's configured within the Jenkins system — we can just say, "Upload that to Nexus," and it happens behind the scenes very easily. Before, we would have to either have run Maven commands or run Gradle commands via the shell script to get that done. We don't need to do that sort of thing anymore.
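
As a sketch, the "upload to Nexus" step the reviewer describes can be expressed in a pipeline via the same Nexus Platform plugin. The instance ID, repository, and Maven coordinates below are placeholders:

    stage('Publish to Nexus') {
        steps {
            nexusPublisher nexusInstanceId: 'nexus3',          // as configured in Jenkins
                           nexusRepositoryId: 'releases',
                           packages: [[$class: 'MavenPackage',
                                       mavenAssetList: [[filePath: 'target/app.jar',
                                                         extension: 'jar']],
                                       mavenCoordinate: [groupId: 'com.example',
                                                         artifactId: 'app',
                                                         version: '1.0.0',
                                                         packaging: 'jar']]]
        }
    }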

The solution has also brought open-source intelligence and policy enforcement across our SDLC. We have defined policies about certain things at various levels, and what risks we're willing to expose ourselves to. If we're going to proxy a library from Maven Central for example, if the Nexus IQ product says it has a security-critical vulnerability or it's "security high" or it's "component unknown," we can set different actions to happen. We allow our developers to pull down pretty much anything. As they pull something down from say, Maven Central, it is scanned. If it says, "This has a critical vulnerability," we will warn the developer with the report that comes out: "This has a security-critical vulnerability. You're allowed to bring it down in development, but when you try to move to QA or staging, that warning about the 'security-critical' component will turn to a failure action." So as we move our artifacts through that process, there are different stages. When someone tries to move that component to our staging environment, it will say, "Oh no, you can't because of the security-critical thing that we've been warning you about. Now we have to fail you." That's where we get policy enforcement. Before, that was a very manual process where we'd have to go out and say, "Okay, this thing has these vulnerabilities, what do we do with it?" It's much more straightforward and the turnaround time is a whole lot faster.

Automating open-source governance and minimizing risk is exactly what Nexus is for. Our company is very security conscious because we're governed by a number of things including the Fair Credit Reporting Act, which is very stringent in terms of what we can and cannot have, and the level of security for data and information that we maintain. What Nexus does is it allows us to look at the level of risk that we have in an application that we have written and that we expose to the companies that subscribe to us. It's based on the components that we have in the application and what their vulnerabilities are. We can see that very clearly for any application we have. Suppose, all of a sudden, that a Zero-day vulnerability — which is really bad — is found in JAXB today. We can immediately look for that version in Nexus. We can see: Do we have that? Yes, we do. Are we using it? Yes, we are. What applications are we using it in? We can see it's in this and that application and we can turn one of our teams to it and get them to address it right away.

I don't know exactly how much time it has saved us in releasing secure apps to market, but it's considerable. I would estimate it saves us weeks to a month, or more, depending upon the scope of a project.

And it has definitely increased developer productivity. They spend a lot less time looking for components or libraries that they can download. There was a very manual process to go through, before Nexus, if they wanted to use a particular open-source library. They had to submit a request and it had to go through a bunch of reviews to make sure that it didn't have vulnerabilities in it, and then they could get a "yes" or "no" answer. That took a lot of time. Whereas now, we allow them to download it and start working with it while other teams — like our enterprise security team — look at the vulnerabilities associated with it. That team will say, "Yeah, we can live with that," or "No, you have to mitigate that," or "No, you can't use this at all." We find that out very much earlier in the process now.

It allows us to shift gears or shift directions. If we find a component that's so flawed that we don't even want to bring it into the organization from a security standpoint, we can pivot and say, "Okay, we'll use this other component. It doesn't do everything we needed, but it's much more solid."

Security Analyst at a computer software company with 51-200 employees

For the initial deployment, it was in place within a couple of days of starting the trial.

We did have an implementation strategy sketched out as far as requirements for success during the PoC go. The requirements were that it would easily integrate into our pipeline, so that it was very automated and hands-off. Part of the implementation strategy was that we expected to use Jenkins, which is our main build-management tool.

In terms of the integrations of the solution into developer tooling like IDEs, Git repos, etc., I wasn't really part of the team that was doing the integration into the pipeline, but I did work with the team. We didn't have any problems integrating it. And from what I did see, it looks like a very simple integration, just adding it straight into Jenkins. It integrated quite quickly into the environment.

At this point we haven't configured it to do any blocking or build-blocking just yet. But that's something we'll be reviewing, now that we have a good process.

Computer Architecture Specialist at a energy/utilities company with 10,001+ employees

We can automate the pipeline of CI/CD. For example, if an application uses an open source library and it's vulnerable, then the security team will mark it in the Lifecycle suite and it can go through the pipeline without manual interaction by the developer.

I'm not a security guy but I have sat with the security team. Once you set the policies, you won't need to change them. The policies wouldn't change that frequently. It covers the needs that we have.

Using the solution we have been able to clean our environment, providing more protection for our applications. We have a more hygienic environment than before. Before using Lifecycle we were almost blind to whatever we had and didn't look into the vulnerabilities within open source libraries. Now we do.

It has helped to increase our productivity a lot, especially with Nexus Repository Manager. It is way more agile. There is no comparison between our productivity before and now.

In terms of the accuracy of the data from Sonatype, at first the teams were challenging whatever the solution provided, but they then verified with the vendor of the open source libraries or via the related community, and they realized that the data from Sonatype is something that is done carefully. It's accurate and valid data. We are now introducing a security layer for open source. Before, there was no security on open source and they did whatever they wanted but that is no longer the case. They have to fix things before deploying them. It helps them resolve issues. It works most of the time, but sometimes there are challenges for the developer in solving them.

We also use the solution to automate open source governance and minimize risk with policies. Some of our developers, although not all of them, have their own Jenkins installed and they set rules and policies. They have integrated Jenkins with Lifecycle and, whenever they push into production, it verifies they are not violating any policies. Once everything is smooth, it goes into production. We haven't formalized that process yet.

Application Security at a comms service provider with 1,001-5,000 employees

We have it implemented and integrated into our CI/CD pipeline, for when we do builds. Every time we do a build, Jenkins reaches out and kicks off a scan from the IQ Server.

We use it to automate open source governance and minimize risk. All of our third-party libraries, everything, comes through our Nexus, which is what the IQ Server and Jenkins are hooked into. Everything being developed for our big application comes through that tool.

We have Nexus Firewall on, but it's only on for the highest level of vulnerabilities. We have the firewall sitting in front to make sure we don't let anything real bad into the system.

Our environment is your standard, three-tiered environment. We have the developers develop in their Dev and Test environments, and as the code moves through each environment — Test and a QA environment — it goes through a build process. We build each time we deploy.

We're addressing anything that is a nine and above. If it's a 10, we don't let it into our system; the firewall server stops it. If we have nines we'll let it in, but I'll tag the developers and they'll have to do a little triage to figure out if the problem that is being reported is something we utilize in our system — if it's something that affects us — and if it's not, we flag it as such and let it go. We either waive it or I'll acknowledge it depending on how much it's used throughout the system and how many different components are being built with that bad library.

Engineering Tools and Platform Manager at BT - British Telecom

IQ Server is part of BT's central DevOps platform, which is basically the entire DevOps CI/CD platform. IQ Server is a part of it, covering the security vulnerability area. We have also made it available for our developers as a plugin in the IDE. These integrations are good, simplistic, and straightforward. It is easy to integrate with IQ Server and easy to fetch those results during a build and push them onto a Jenkins board. My impression of such integrations has been quite good. I have heard good reviews from my engineers about how the plugins work in the IDE.

It basically helps us in identifying open-source vulnerabilities. This is the only tool we have in our portfolio that does this. There are no alternatives. So, it is quite critical for us. Whatever strength Nexus IQ has is the strength that BT has against any open-source vulnerabilities that might exist in our code.

The data that IQ generates around the vulnerabilities and the way it is distributed across different severities is definitely helpful. It does tell us what decision to make in terms of what should be skipped and what should be worked upon. So, there are absolutely no issues there.

We use both Nexus Repository and Lifecycle, and every open-source dependency, after being approved, gets added onto our central repository, from which developers can access anything. When they are requesting an open-source component, product, or DLL, it has to go through the IQ scan before it can be added to the repo. Basically, in BT, at the first door itself, we try to keep all vulnerabilities away. Of course, there would be scenarios where you make a change and approve something, but the DLL becomes vulnerable. In later stages also, it can get flagged very easily. The flag reaches the repo very soon, and an automated system removes it or disables it so that developers cannot use it. That's the perfect example of integration, and how we are enforcing these policies so that we stay as good as we can.

We are using Lifecycle in our software supply chain. It is a part of our platform, and any software that we create has to pass through the platform. So, it is a part of our software supply chain.

Snyk: Jenkins
Senior Manager, Product & Application Security at a tech services company with 1,001-5,000 employees

The way they are presenting the vulnerabilities after a scan. It's very organized and easy to access. The UI is very organized. I also like that we can use the CLI or commands to run a scan locally or in the pipeline. 

The CLI feature is quite useful because it gives us a lot of flexibility in what we want to do. If you use the UI, all the information is there and you can see what Snyk is showing you, but there is nothing else that you can change. However, when you use the CLI, then you can use commands and get the output or response back from Snyk. You can also take advantage of that output in a different way. For the same reason, we have been using the CLI for the hard gate in the pipeline: obtaining the CVSS score for a vulnerability. Based on that information, we can then decide if we want to block or allow the build. We have more flexibility if we use the CLI.
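
A sketch of that kind of CLI-based hard gate: snyk test --json reports a cvssScore per finding, though the exact JSON layout should be verified against the CLI version in use, and the 9.0 threshold is illustrative:

    stage('Snyk CVSS gate') {
        steps {
            sh '''
                snyk test --json > snyk.json || true   # nonzero exit on findings; gate below
                MAX=$(jq '[.vulnerabilities[]?.cvssScore] | max // 0' snyk.json)
                echo "Highest CVSS score found: $MAX"
                # Block the build when any finding scores 9.0 or higher
                awk -v m="$MAX" 'BEGIN { exit (m >= 9.0) ? 1 : 0 }'
            '''
        }
    }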

For the pipeline, we use Jenkins, and for storing images in the build, we use Artifactory with some Jenkins integrations. This is super easy because we are using the CLI, which is one of the features that I really like because it's super flexible. You can do a lot of things with the CLI. It's easy to integrate. Same thing with the GitHub integration: Snyk provides Broker images that allow you to connect your internal GitHub repository with the cloud solution from Snyk. It's like a proxy.

The UI is super easy to use. I have no issues with the interface.

Senior Director, Engineering at Zillow Group

It is meant to be a less intrusive type of solution. It is easy to integrate and doesn't require a lot of effort. It's more a part of the CI/CD pipelines, which doesn't necessarily interfere with developers other than if there are actions/remediations to be taken. From a development impact, it's very lightweight and minimal. 

It is not noticeable for most engineers since it's part of the pipeline. If no new findings are reported, then it goes through without any signals or noise. If there were findings, these are usually legitimate findings and can be configured in such a way that they can be blocked/stopped in your pipelines or be more informational. The user has all the knobs and screws to turn and tweak it towards their use case because there may be areas where security is more critical than in other parts of the company, like development projects. 

We exclusively use their SDE tools. Our CI/CD environments are powered by source code control systems like GitLab and GitHub. Bitbucket has also been integrated to some extent. There are CI/CD pipelines where we pull in Snyk as part of the pipeline, jobs, Jenkins environment, etc.

Information Security Officer at a tech services company with 51-200 employees

We are using it to identify security weaknesses and vulnerabilities by performing dependency checks of the source code and Docker images used in our code. We also use it for open-source licensing compliance review. We need to keep an eye on what licenses are attached to the libraries or components that we have in use to ensure we don't have surprises in there.

We are using the standard plan, but we have the container scanning module as well, in a hybrid deployment. The cloud solution is used for integration with the source code repository which, in our case, is GitHub. You can add whatever repository you want to be inspected by Snyk and it will identify and recommend solutions for the identified issues. We are also using it as part of our CI/CD pipelines; in our case, it is integrated with Jenkins.

CISO at a tech vendor with 51-200 employees

For a developer, the ease of use is probably an eight out of 10. It is pretty easy to use. There is some documentation to familiarize themselves with the solution, because there are definitely steps that they have to take and understand. However, they are not hard and are documented pretty well.

We have integrated Snyk into our SDE. We have a CI/CD pipeline that builds software, so it's part of that process that we will automatically run. We use Jenkins as our pipeline build tool, and that's what we have integrated. It is pretty straightforward. Snyk has a plugin that works out-of-the-box with Jenkins which makes it very easy to install.
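
For illustration, the out-of-the-box plugin integration the reviewer mentions amounts to one step in the pipeline. A sketch; the installation and credential names are placeholders:

    stage('Snyk scan') {
        steps {
            // Step contributed by the Snyk Security Jenkins plugin
            snykSecurity snykInstallation: 'snyk-latest',
                         snykTokenId: 'snyk-api-token',
                         failOnIssues: true,          // break the build on findings
                         monitorProjectOnBuild: true  // also snapshot results to the Snyk UI
        }
    }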

Snyk's vulnerability database is excellent, in terms of comprehensiveness and accuracy. I would rate it a nine or 10 (out of 10). They have a proprietary database that is very useful. They are also very open to adding additional packages that we use, which might be not widely used across their customer base.
