
Application Security Integration Reviews

Showing reviews of the top-ranking products in Application Security that contain the term "Integration"
Veracode: Integration
Senior Security Analyst at a wellness & fitness company with 1,001-5,000 employees

Veracode has improved our Application Security program by providing numerous integrations and tools to take our AppSec/DevSecOps to the next level. 

Integrations into our developers' IDEs (Greenlight) and the DevOps pipeline (SAST and SourceClear integrations) have particularly improved our time to market and increased our confidence.

In many ways, Veracode has increased productivity, helped build and improve the relationship between our security and development departments, and enabled developers to consider and care about application security.

View full review »
Sr. Security Architect at a financial services firm with 10,001+ employees

We are using Dynamic Application Security Testing (DAST), Static Application Security Testing (SAST), and Software Composition Analysis (SCA). We use different types of scanning across numerous applications. We also use Greenlight IDE integration. We are scanning external web applications, internal web applications, and mobile applications with various types/combinations of scanning. We use this both to improve our application security and to achieve compliance with the various compliance bodies that require code scanning.

View full review »
AS
DevSecOps Consultant at a comms service provider with 10,001+ employees

There are quite a few features that are very reliable, like the newly launched Veracode Pipeline Scan, which is pretty awesome. It supports the synchronous pipeline pretty well. We have also been using the Jira plugin, and that is fantastic.

We are using the Veracode APIs to build Splunk dashboards, which is very nice, as we are able to showcase our application security hygiene to our stakeholders and leadership.
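As a rough illustration of that kind of API-to-dashboard integration (not the reviewer's actual setup), the sketch below pulls findings from an AppSec REST endpoint and writes them as line-delimited JSON that a Splunk forwarder could index; the endpoint path, authentication scheme, and field names are assumptions.

```python
# Hypothetical sketch: export findings from an AppSec REST API so a Splunk
# forwarder can index them. The endpoint, auth scheme, and fields are assumed;
# Veracode's real APIs use HMAC-signed requests rather than a bearer token.
import json
import requests

API_BASE = "https://appsec.example.com"   # placeholder server
API_TOKEN = "REPLACE_ME"                  # keep real credentials in a CI secret

def fetch_findings(app_id: str) -> list:
    """Pull findings for one application profile (paging omitted for brevity)."""
    resp = requests.get(
        f"{API_BASE}/api/applications/{app_id}/findings",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("findings", [])

def write_for_splunk(findings: list, path: str = "appsec_findings.json") -> None:
    """Write one JSON object per line, a format Splunk ingests easily."""
    with open(path, "w", encoding="utf-8") as fh:
        for finding in findings:
            fh.write(json.dumps(finding) + "\n")

if __name__ == "__main__":
    write_for_splunk(fetch_findings("my-app"))
```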

We have been using Veracode Greenlight for the IDE scanning. 

Veracode has good documentation, integrations, and tools, so it has been a very good solution. 

Veracode is pretty good about providing recommendations, remedies, and guidelines on issues that are occurring.

It is an excellent solution. It finds a good number of the security issues, providing good coverage across the languages that we require at our client site.

We have been using the solution’s Static Analysis Pipeline Scan, which is excellent. When we started, it took more time because we were doing asynchronous scans. However, in the last six months, Veracode has come out with the Pipeline Scan, which supports synchronous scans. It has been helping us out a lot. Now, we don't worry when the pentesting report comes in. By using Veracode, the code is secure, and there are no issues that will stop the release later on in the SDLC.

The speed of the Pipeline Scan is very nice. It takes less than 10 minutes. This is very good, because our policy scans used to take hours.
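For context, the Pipeline Scan is distributed as a small JAR that runs against the built artifact inside the CI job and fails the build when blocking flaws are found. A minimal wrapper for such a step might look like the sketch below; the flag names follow Veracode's documented pipeline-scan CLI but should be verified against the current documentation.

```python
# Minimal sketch of a synchronous pipeline-scan step in CI. Flag names are
# taken from Veracode's documented pipeline-scan CLI; verify before use.
import subprocess
import sys

def run_pipeline_scan(artifact: str, api_id: str, api_key: str) -> int:
    """Run the scan; a non-zero exit code fails the CI stage."""
    cmd = [
        "java", "-jar", "pipeline-scan.jar",
        "--veracode_api_id", api_id,
        "--veracode_api_key", api_key,
        "--file", artifact,
    ]
    return subprocess.call(cmd)

if __name__ == "__main__":
    sys.exit(run_pipeline_scan("build/libs/app.jar", "VERACODE_ID", "VERACODE_KEY"))
```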

Veracode is good in terms of giving feedback.

View full review »
MT
Software Architect at Alfresco Software

What could improve a lot is the user interface because it's quite dated. And in general, as we are heavy users of GitHub, the integration with the user interface of GitHub could be improved as well. 

There is also room for improvement in the reporting in conjunction with releases. Every time we release software to the outside world, we also need to provide an inventory of the libraries that we are using, with the current state of vulnerabilities, so that it is clear. And if we can't upgrade a library, we need to document a workaround and the fact that we are not really affected by the vulnerability. For all of this reporting, the product could offer a little bit more in that direction. As it is, we just pull the information and put these reports together manually.

Another problem we have is that, while it is integrated with single sign-on—we are using Okta—the user interface is not great. That's especially true for a permanent link of a report of a page. If you access it, it goes to the normal login page that has nothing that says "Log in with single sign-on," unlike other software as a service that we use. It's quite bothersome because it means that we have to go to the Okta dashboard, find the Veracode link, and log in through it. Only at that point can we go to the permanent link of the page we wanted to access.

Veracode has plenty of data. The problem is the information on the dashboards of Veracode, as the user interface is not great. It's not immediately usable. Most of the time, the best way to use it is to just create issues and put them in JIRA. It provides visibility into the SAST, DAST and SCA, but honestly, all the information then travels outside of the system and it goes to JIRA.

In the end, we are an enterprise software company and we have some products that are not as modern as others. So we are used to user interfaces that are not great. But if I were a startup, and only had products with a good user interface, I wouldn't use Veracode because the UI is very dated.

Also, we're not using the pipeline scan. We upload using the Java API agent and do a standard scan. We don't use the pipeline scan because it only has output on the user interface and it gets lost. When we do it as part of our CI process, all the results are only available in the log of the CI. In our case we are using Travis, and it requires someone to go there and check things in the build logs. That's an area where the product could improve, because if this information was surfaced, say, in the checks of the code we test on GitHub—as happens with other static analysis tools that we use on our code that check for syntax errors and mapping—in that case, it would be much more usable. As it is, it is not enough.

The management of the false positives is better than in other tools, but still could improve in terms of usability, especially when working with multiple branches. Some of the issues that we had already marked as "To be ignored" because they were either false positives or just not applicable in our context come down, again, to the problem of the user interface. It should have been better thought out to make it easier for someone who is reviewing the list of the findings to mark the false positives easily. For example, there were some vulnerabilities mentioning parts of libraries that we weren't actually using, even if we were including them for different reasons, and in that case we just ignore those items.

We have reported all of these things to product management because we have direct contact with Veracode, and hopefully they are going to be fixed. Obviously, these are things that will improve the usability of the product and are really needed. I'm totally happy to help them and support them in going in the right direction, meaning the right direction from my perspective.

View full review »
YT
R&D Director at a computer software company with 201-500 employees

To get into the solution, it took some tries to understand the structure of our repository and the code that we were using to write dependencies, etc. So, it took a bit of time, but then in the end, the solution was easy to connect.

It took about a month to complete the integration of the Veracode tools into our own systems. Ultimately, the tools need to scan our code, which resides on our machines in our on-prem environment. The integration of Veracode in the cloud with the on-prem repository and our processes took time. We worked with the Israeli representative of Veracode, who helped us. Overall, it took about a month until we stabilized it.

View full review »
SS
Head Of Information Security at a media company with 51-200 employees

It's valuable to any business that has software developers or that is producing software that consumers use. You have to do some type of application security testing before allowing consumers to use software. Otherwise, it's risky. You could be publishing software with certain security defects, which would open up your company to the likelihood of a class action lawsuit.

I don't have any examples of how it improved the way our company functions. However, I did use a lot of the findings to put pressure on our vendors to try to improve their security postures.

Veracode has helped with developer security training and helped build developer security skills. Developers who get the tickets can go into it and take a look at the remediation advice. They have a lot of published documentation about different types of security issues, documentation that developers can freely get into and read.

The integration with JIRA helps developers see the issues and respond to them.

View full review »
Manager, Information Technology at Broadcom Corporation

The most valuable feature, from the perspective of a central tools team (the team I am part of, as a DevSecOps person), is that it is SaaS-hosted. That makes it very convenient to use. There is no initial time needed to set up an application. Scanning is a matter of minutes. You just log in, create an application profile, associate a security configuration, and that's about it. It takes 10 minutes to start. The lack of initial lead time or initial overhead to get going is the primary advantage.

Also, because it's SaaS and hosted, we didn't have any infrastructure headache. We didn't have to think about capacity, the load, the scan times, the distribution of teams across various instances. All of this, the elasticity of it, is a major advantage.

There are two aspects to it. One is the infrastructure. The other one is the configuration. There are a lot of SaaS solutions where the infrastructure is taken care of, but the configuration of the application to start scanning takes some time to gain knowledge about it through research and study. That is not the case with Veracode. You don't have any extensive security profiles to consider. It's a two-pronged advantage.

Veracode also reports far fewer false positives with the static scanning. The scanner just goes through the code and analyzes all the security vulnerabilities. A lot of scanning tools in the market give you a lot of false positives. The false positive rate in Veracode is notably less. That was very helpful to the product teams as they could spend most of their time fixing real issues.

Veracode provides guidance for fixing vulnerabilities and that is one of their USPs—unique selling propositions. They provide security consultations, and scheduling a consultation is very easy. Once a scan is completed, anybody who has a Veracode login can just click a button and have a security consultation with Veracode. That is very unique to Veracode. I have not seen this offered in other products. Even if it is offered, it is not as seamless and it takes some time to get security advice. But with Veracode, it's very seamless and easy to make happen.

Along those lines, this guidance enables developers to write secure code from the start. One of the advantages of Veracode is its ability to integrate the scanning with the DevOps pipeline as well as into the IDEs of the developers, like Eclipse, IntelliJ, or Visual Studio. This type of guidance helps developers left-shift their secure-coding practices, which really helps in writing a far more secure product.

Another unique selling point of Veracode is their eLearning platform, which is available with the cloud-hosted solution. It's integrated into the same URL. Developers log into the Veracode tenant, go through the eLearning Portal, and all the courses are there. The eLearning platform is really good and has helped developers improve their application security knowledge and incorporate it in their coding practices.

One of the things that Veracode follows very clearly is the assignment of a vulnerability to the CWE standard or the OWASP standard. Every vulnerability reported is tied to an open standard. It's not something proprietary to Veracode. But it makes it easy for the engineers and developers to find more information on the particular bug. The adherence to standards helps developers learn more about issues and how to fix them.

We use the Static Analysis Pipeline Scan as part of the CI pipeline in Jenkins or TeamCity or any of the code orchestrators that use scanning as part of the pipeline. There's nothing special about the pipeline scan. It's like our regular Veracode Static Analysis Scan. It's just that if it is part of the pipeline, you are scanning more frequently and finding flaws at an earlier point in time. The time to identify vulnerabilities is quicker.

Veracode integrates with the integrated development environments that the developers use to write code, including Microsoft Visual Studio, Eclipse, IntelliJ IDEA, etc. It also integrates with project and portfolio management tools like JIRA and Rally. That way, once vulnerabilities are reported, you can actually track them by exporting them to your project management tools, your Agile tools, or your Kanban boards. The more integrations a scanning tool has, the better it is, because everything has to fit into the DevOps or DevSecOps pipeline. The more integrations it has with the continuous integration tools, the IDEs, and the product management tools, the better it is. It affects adoption. If it is a standalone system, the adoption won't be great. The integration helps with adoption because you don't need to scan manually. You set it up in the pipeline once and it just keeps scanning.

View full review »
HB
Software Engineer at a tech services company with 1,001-5,000 employees

I would like to see them provide more content in the developer training section. This field is changing every day and new flaws are detected every day, so regular updates to the learning content would help.

I would also like to see more integration with other frameworks. There were some .NET Core versions that weren't supported back when we started, but now they're providing more support for it.

View full review »
Automation Practice Leader at a financial services firm with 10,001+ employees

The solution has issues with scanning. It tries to decode the binaries that we are trying to scan; it decodes the binaries and then scans the resulting code. It scans the decoded output for vulnerabilities rather than the code itself. They really need two different ways of scanning, one for static analysis and one for dynamic analysis, and they shouldn't decode the binaries to do the security scanning. It's a challenge for us and doesn't work too well.

As additional features, I'd like to see third-party vulnerability scanning, container image scanning, and interactive application security testing (IAST). Those are some of the features that Veracode needs to improve. Aside from that, the API is very challenging to integrate with the different tools. I think Veracode can do better in those areas.

View full review »
KE
Cybersecurity Executive at a computer software company with 51-200 employees

We utilize it to scan our in-house developed software, as a part of the CI/CD life cycle. Our primary use case is providing reporting from Veracode to our developers. We are still early on in the process of integrating Veracode into our life cycle, so we haven't consumed all features available to us yet. But we are betting on utilizing the API integration functionality in the long-term. That will allow us to automate the areas that security is responsible for, including invoking the scanning and providing the output to our developers so that they can correct any findings.

Right now, it hasn't affected our AppSec process, but our 2022 strategy is to implement multiple components of Veracode into our CI/CD life cycle, along with the DAST component. The goal is to bridge that with automation to provide something closer to real-time feedback to the developers and our DevOps engineering team. We are also looking for it to save us productivity time across the board, including security.

It's a SaaS solution.

View full review »
Acunetix by Invicti: Integration
CEO at a tech consulting company with 11-50 employees

The solution should work on reducing the number of false positives it delivers.

While we do have it integrated with other solutions, it could still offer more integrations.

View full review »
PortSwigger Burp Suite Professional: Integration
Director - Head of Delivery Services at Ticking Minds Technology Solutions Pvt Ltd

The tool comes in three types. First, there is the Open Community Edition, which is meant for people who use it to learn the tool or to secure their own system. This edition does not have scanning features enabled to scan against application URLs or websites. From the standpoint of learning about security tests or assessing the security of an application without scanning, the Community Edition really helps.

Then there is the Professional edition, which is meant for doing comprehensive vulnerability assessment and penetration testing, which is very important, especially for independent teams like ours that rely on such tools. The good part about the Professional edition is that it comes with a term license, which is cost-effective. You pay an annual charge, use it for a year, and then extend it on an as-needed basis.

Apart from these, there is also an Enterprise Edition, which has features like scan schedulers, unlimited scalability to test across multiple websites in parallel, support for multiple users with role-based access control, and easy integration with CI tools.

The best way to use this tool is to understand the application and identify the various roles that exist in it. Then capture the user flows with PortSwigger's Burp Suite and understand the requests, making use of the different features in Burp Suite.

After this, the teams look at and analyze all the requests being sent: observe the requests, exercise the various roles with the tool using Repeater and Intruder, and analyze what breaks through in the application. With Intruder you can quickly see how the application really behaves and how the payload is being sent across. Then you get a quick sense of what is available, which can be checked for false positives before arriving at the final output.

This is how I would like to handle the implementation of the solution.

I would rate this solution 10 out of 10.

View full review »
VR
Director at a consultancy with 10,001+ employees

The Burp Collaborator needs improvement. There also needs to be improved integration.

View full review »
NC
IT Manager at a manufacturing company with 10,001+ employees

We've faced lots of challenges, including the tool slowing down and a lot of error messages, sometimes because of the interface. If we're running a huge number of scans regularly, I think that also slows down the tool, so I'm not sure if it is good for lots of scans. I hope they will work on the number of scans it can handle. There have been improvements in the interface and the reporting structure, but they need to do more. They have a long way to go. For now, if we use the interface directly, we need to use an integration with our web application. We're after value for money.

View full review »
Security Researcher at a financial services firm with 5,001-10,000 employees

It's an amazing tool. We can work with it automatically, or we can work with it manually.

There is no other tool like it. I like the intuitiveness and the plugins that are available.

The plugins are similar to integration. I can create my own login and use it.

View full review »
Lead Security Architect at SITA

Although it provides a great write-up for the identified vulnerabilities, the reporting needs to improve, with various reporting templates based on standards like OWASP, SANS Top 25, etc. The tool needs to expand its scope for mobile application security testing, so that native mobile apps can be tested, and it should provide an interface to integrate with mobile device platforms or mobile simulators. Burp Suite has a great ability to integrate with Jenkins, Jira, and TeamCity in a CI/CD pipeline, and it should provide better ways of integrating with other similar platforms.

View full review »
Micro Focus Fortify on Demand: Integration
BK
Sr. Enterprise Architect at a financial services firm with 5,001-10,000 employees

The initial setup was quite simple.

I performed the deployment a couple of times on different platforms and it did not take much effort to set up. I also did the integration with other platforms like Microsoft Information Server and it was quite easy. You just need to know the platform that you are integrating into.

When it came time to deploy, I just ran through the documentation on the vendor's web site. I spent one day reading it and, on the second day, I did my integration. It took about eight hours that day, and I had challenges, but they came from the platform that I was integrating into, like Microsoft Information Server. There were things to be done, such as converting XML files. The next day I was able to fix the problems, so in total it took me between nine and twelve hours to integrate it.

The second time that I deployed this solution it took me not more than two or three hours to repeat all of these same steps.

View full review »
Co-Founder at TechScalable

You can choose this product for sure with a lot of confidence. It entirely depends on how you are exploring the stuff and trying to integrate it. Designing has to be good. It has all the features, but exploring the features and using it as per your need is important. It is not that features are not there. You just need to explore them and know how to use them. 

I would rate Micro Focus Fortify on Demand an eight out of ten. It is a good product. However, it needs improvements from the security aspect and from the aspect of integrations with other popular tools in the market.

View full review »
DV
Senior System Analyst at Azurian

During development, when our developer makes changes to their code, they typically use GitHub or GitLab to track those changes. However, proper integration between Fortify on Demand and GitHub and GitLab is not there yet. Improved integration would be very valuable to us.

Similarly, I would love to see some kind of tracing solution for use in stress testing. So when we stress the application on a certain page or on a certain platform, we would be able to see a complete stress test report which could quickly tell us about weak points or failures in the application. 

Further potential for improvement is that, when we deploy our Java WAR files for review in the QA area, we want to be able to create a report in Fortify on Demand right from within this deployment stage. So it might inspect or check the solution's Java WAR package directly and come up with a report in this crucial phase of QA. 

View full review »
Project Manager at Everis

There's a bit of a learning curve. Our development team is struggling with following the rules and following the new processes.

The initial setup is a bit complex.

We could have more detailed documentation. They could offer some quick start or some extra guidance regarding the implementation.

I'd like to see more interactive application security testing, more IDE integration, and integration with VS Code and Eclipse. I would like to see more features of this kind.

View full review »
LM
Principal Solutions Architect at a security firm with 11-50 employees

It could have a slightly more streamlined installation procedure. Based on the things that I've done, it could also be a bit more automated. It is kind of taking a bunch of different scanners, and SSC is just kind of managing the results. The scanning doesn't really seem to be fully integrated into the SSC platform. More automation and any kind of integration in the SSC platform would definitely be good. There could be a way to initiate scans from SSC, and more functionality on the server side to initiate DAST scans, if it is not already available.

View full review »
Information Security Engineer at a comms service provider with 501-1,000 employees

I would like to see easier integration to CI/CD pipelines. The reporting format could be more user friendly so that it is easy to read.

View full review »
Checkmarx: Integration
MM
CEO at a tech services company with 11-50 employees

The initial setup is pretty simple, it's no problem to start using Checkmarx. It's a very good approach if you compare it with competitors.

It only takes a few hours to tune your Checkmarx solution. You may need more time for deeper integration when it comes to SDLC integration, for example, when using build-management plugins, such as for Jenkins.

If you are scanning and you have the source code, then you are good to start scanning within a few hours. Three to four hours are required for tasks done on the source code.

We have one or two engineers who can work with the solution.

Some of our customers have more than 100 developers and a DevOps team.

View full review »
Senior Security Engineer at a pharma/biotech company with 501-1,000 employees

You can't use it in the continuous delivery pipeline because the scanning takes too much time. Better integration with the CD pipeline would be helpful.

It reports a lot of false positives so you have to discriminate and take ones that are rated at either a one or a two. The lower-rated problems need to be discarded.

View full review »
Founder & Chairman at Endpoint-labs Cyber Security R&D

Aside from my occupation, I am an academic. Because of our status, we test products as well as their competition, for example, we45, AppScan, SonarQube, etc. I have to point out, from an academic and business point of view, there is a very serious competitive advantage to using Checkmarx. Even if there are multiple vulnerabilities in the source code, Checkmarx is able to identify which lines need to be corrected and then proceeds to automatically remediate the situation. This is an outstanding advantage that none of the competition offers.

The flexibility in regard to finding false positives and false negatives is amazing. Checkmarx can easily manage false positives and negatives. You don't need to generate an additional platform if you would like to scan a mobile application on iOS or Android. With a single license, you are able to scan and test every platform. This is not possible with other competitive products. For instance, say you are using we45 — if you would like to scan an iOS application, you would have to generate an iOS platform first. With Checkmarx you don't need to do anything — take the source code, scan it, and you're good to go. Last but not least, the incremental scanning capabilities are a mission-critical feature for developers.

Also, the API and integrations are both very flexible.


View full review »
MC
Director at a tech services company with 11-50 employees

There is nothing particular that I don't like in this solution. It can have more integrations, but the integrations that we would like are in the roadmap anyway, and they just need to deliver the roadmap. What I like about the roadmap is that it is going where it needs to go. If I were to look at the roadmap, there is nothing that is jumping out there that says to me, "Yeah. I'd like something else on the roadmap." What they're looking to deliver is what I would expect and forecast them to deliver.

View full review »
VS
Procurement Analyst at a pharma/biotech company with 10,001+ employees

The integration could improve by including, for example, DevSecOps.

In an upcoming release, they could improve by adding support for more languages.

View full review »
SonarQube: Integration
Head of Software Delivery at a tech services company with 51-200 employees

Our primary use case is to analyze source code for software bugs, technical debt, vulnerabilities, and test coverage. It provides an automated gated procedure to ensure that engineers are able to deliver great, secure code to production. 

We plug this into our process right from the start, enabling the IDE integrations so that engineers can scan their code before submission. Following on from that, we run the scans on every change that has been submitted for review.

This way we ensure that no core/fundamental issues are added to our codebases. 

View full review »
Software Engineer at Adfolks

The reporting can be improved. In particular, the portability report can be better.

I would like to see better integration with the various DevOps tools.

View full review »
YB
Devops Engineer at a financial services firm with 10,001+ employees

The most valuable feature is the security hotspot feature that identifies where your code is prone to have security issues.

It also gives you a very good highlight of what's changed, and what has to be changed in the future.

Apart from that, there are many other good features as it's a code analytics platform. It also has a dashboard reporting feature, which is very good. I also like the ease of its integration with Jenkins.

Another valuable feature is the time snapshot that it provides for the code. It provides the code quality, the lagging, and the training features like what already has gone wrong and what is likely to go wrong. It's a very good feature for a project to have a dashboard where the users can find everything about their project at a single glance.

View full review »
PC
Engineer at a pharma/biotech company with 201-500 employees

It would be helpful if the library supported more languages.

There are a few clauses that are specific to our organization, and it needs to improve. It's the reason that we are evaluating other solutions. It creates the ability for the person who releases the authorized release, which is not good. We would like to be able to expand on our work.

Micro Focus, as an example, would help us in that area by creating a dependency tree of the code from where it is deployed and branching it into your entire code base. This would be something that is very helpful and has helped in identifying the gaps.

It would be great to have a dependency tree for each line of your code, based on an OWASP Top 10 plugin, that needs to be scanned. For example, a line or branch of code used in a particular site that needs to be branched into my entire codebase, and direct integration with Jira in order to assign that particular root cause to a developer, would be really good.

Automated patching for my library, variable audience, and support for the client in the CI/CD pipeline are all done with a set of different tools, but it would be nice to have it all in a one-stop shop.

I would like to see improvements in defining the quality rule sets and in ensuring that low-performance code does not end up in production. We would also need the ability to edit those rules.

View full review »
SR
Team Lead at a computer software company with 10,001+ employees

The main factor that makes the product valuable for us is that it is free because budget is always an issue. We do not have to pay for it, but there are many cons to using a free product at times. It is a very good tool even if it is free. The dashboard and the media that it provides are all quite helpful.  

We are always using SonarQube, but currently we are evaluating some more tools because the free version of Sonar supports around 10 to 15 languages, while the commercial version supports 27 languages. There are also a lot of limitations in the resources and traditional support, which are not available to free-license users of Sonar.

Integration is there with most of the tools, but we do not have full integration with the free version. That is why we are planning to work with some other commercial tools. But as a whole, Sonar will do what we need it to.

View full review »
TS
Security consultant at a tech services company with 1,001-5,000 employees

If I configure a project in SonarQube, it generates a token. When we're compiling our code with SonarQube, we have to provide the token for security reasons. If IP-based connectivity is established with the solution, the project should automatically be populated without providing any additional token; it would be easier to provide just the IP address. It currently supports this functionality, but it creates a different branch in the project dashboard.
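As a sketch of what providing that token looks like in a build, a CI step typically passes it to the scanner along with the project key and server URL; the project key and URL below are placeholders, and newer scanner versions accept sonar.token in place of sonar.login.

```python
# Hypothetical CI step: invoke sonar-scanner with the token generated for the
# project in SonarQube. Property names follow the standard scanner parameters.
import os
import subprocess

def run_sonar_scan(project_key: str, server_url: str) -> None:
    token = os.environ["SONAR_TOKEN"]   # keep the token in a CI secret, not in code
    subprocess.check_call([
        "sonar-scanner",
        f"-Dsonar.projectKey={project_key}",
        f"-Dsonar.host.url={server_url}",
        f"-Dsonar.login={token}",        # newer scanner versions: -Dsonar.token=...
    ])

if __name__ == "__main__":
    run_sonar_scan("my-service", "https://sonarqube.example.internal")
```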

From the configuration and dashboard point of view, it should have some transformations. There can be dashboard integration so that we can configure the dashboard for different purposes. 

View full review »
AB
Director IT Security, CISO at a transportation company with 10,001+ employees

The interface could be a little better and should be enhanced.

More support for integration with third-party products would be an improvement.

View full review »
KN
Web Developer at a tech services company with 51-200 employees

From a reporting perspective, we sometimes have problems interpreting the vulnerability scan reports. For example, if it finds a possible threat, our analysts have to manually check the provided reports, and sometimes we have issues getting all the data needed to properly verify if it's accurate or not.

This is especially important when considering false positives, and often we have issues getting all the necessary information from SonarQube in order to determine whether it is a true vulnerability or a false positive.

Another suggestion for improvement is that SonarQube could be better when it comes to integration with different development pipelines for continuous monitoring. For example, whether you are scanning manually or on-demand, we would like more ways to integrate SonarQube into our pipeline so that we can get reports quickly and automatically as we work.

View full review »
AJ
DevOps Lead at a marketing services firm with 1,001-5,000 employees

What I like about SonarQube is the integration of the pipelines. It is pretty easy. 

The reporting and the results are quick. It gets integrated within the pipeline well.

The solution is very stable.

The scalability is very good.

We found the initial setup to be straightforward.

View full review »
LM
Systems Analyst at a manufacturing company with 5,001-10,000 employees

I am struggling to come up with an area needing improvement. I am a big fan of SonarQube. I do have familiarity with the solution, but not extensively on a daily basis in respect of development. 

This said, we did have some trouble with the LDAP integration for the console. 

View full review »
HM
Founder at a tech services company with 11-50 employees

One thing to improve would be the integration. There is a steep learning curve to get it integrated.

View full review »
Klocwork: Integration
TMS Product Architect with 10,001+ employees

For an improved product, we'd like to see integration with Agile DevOps and Agile methodologies, and some capability that allows us to trigger the static analysis report based on actions like regular builds. We would like to have better integration with Microsoft Agile DevOps tools. This would save us a lot of time. In addition, we also sometimes experience issues with false-positive detections - phantom issues.

For the previous version, we realized it wasn't possible to have a quick dashboard for the number of violations. A feature like business intelligence or code coverage could be included. 

View full review »
Kiuwan: Integration
Head of Development and Consulting at Logalty

The most valuable feature of the solution is the continuous integration process. This enables us to do our best in terms of the security of our solution and not introduce new mistakes. Problems are solved step by step.

View full review »
RK
Information Security Specialist at a tech company with 51-200 employees

The integration process could be improved. It would also help if it could generate reports automatically, but I'm not sure about the effectiveness of the reports, because in our last project we still found some key issues that weren't captured by the Kiuwan report.

View full review »
Contrast Security Protect: Integration
SW
Senior Customer Success Manager at a tech company with 201-500 employees

It's actually very straightforward to deploy. The complexities generally reflect the complexities of the overall system and environment. For example, the apps may be hosted at many different locations across multiple business units. 

Protect also has integrations with other tools, such as logging and SIEM products.

The solution's setup complications just typically reflect what's unique about the customer's environment due to the nature of the company. 

View full review »
Coverity: Integration
Automation Practice Leader at a financial services firm with 10,001+ employees

I would like to see integration with popular IDEs, such as Eclipse. If Coverity were available as a plugin then developers could use it to find security issues while they are coding because right now, as we are using Coverity, it is a reactive way of finding vulnerabilities. We need to find these kinds of problems during the coding phase, rather than waiting for the code to be analyzed after it is written.

View full review »
SG
Senior Technical Specialist at a tech services company with 201-500 employees

The most valuable feature is the integration with Jenkins. Jenkins can be used to automatically run it to perform the code analysis.

Integration with GitLab is helpful.

View full review »
AT
Sr. QA Engineer at a computer software company with 1-10 employees

I rate Coverity five out of 10, but it's tough for me to judge because we decided to purchase it based on one requirement that no other static analysis tool could satisfy. For that reason, we haven't tried anything else. So, let's make an analogy. Let's say I used Sony TVs my entire life, and someone comes up and says, "Hey, there is a new brand of TVs. What do you think of them? Do you think they are good?" How would I know? By comparison, SonarQube seems to be more feature-rich for a standard programming language, and it works with more continuous integration tools.

View full review »
VV
Senior Solutions Architect at a computer software company with 11-50 employees

I used CodeSonar a few years back. Both tools have their advantages. In any static analysis tool, the first stage is the instrumentation of the source code. It'll try to capture the skeleton of your source code. So when I compare them based on the first phase alone, Coverity is far better than CodeSonar. 

They both use a similar technique, but CodeSonar uses up way more storage resources. For example, to scan a 1GB code base, CodeSonar generates more than 5GB of instrumented files for every 1GB of code base. In total, that is 6GB. Coverity generates 500MB extra on top of 1GB, so that equals 1.5GB all in. That's a huge difference. CodeSonar would eat up my disk space and hardware resources when I used it, whereas Coverity is minimal.

In terms of checkers, both CodeSonar and Coverity cover a good length and breadth, especially for the C and C++ programming languages. But CodeSonar focuses on only four languages—C, C++, Java, and C#—whereas Coverity supports more than 20 programming languages.

Also, the two are comparable with respect to their plugin offerings, but there are crucial differences. For example, CodeSonar only focuses on well-known integrations, like Jenkins and JIRA, but you cannot expect all customers to use the same tools. Coverity supports almost all CI/CD tools, including Jenkins and Bamboo. It also integrates with service providers like Azure DevOps Pipelines and AWS CodePipeline, which CodeSonar hasn't added yet. The plugins are available in the marketplace, and you don't have to pay extra; you just download the plugin from the marketplace, hook it into your pipeline, and it's ready to use. These are some of the major differences, three major ones I would say, when you compare CodeSonar and Coverity apples to apples.

View full review »
WhiteSource: Integration
User at a tech vendor with 1,001-5,000 employees

We moved from Black Duck to WhiteSource as it was a more modern and scalable solution, with better integration support to various build and source environments. The ease of running scans and getting results quickly enables our developers to address issues quicker. 

View full review »
VP R&D at a tech services company with 11-50 employees

The agent usage was not as smooth as the online experience. It lacks in terms of documentation and the errors and warnings it produces are not always very clear. We were able to get it up and running in a short while by getting help from support, which was very approachable and reliable.

If anything, I would spend more time making this more user-friendly, better documenting the CLI, and adding more examples to help expand the current documentation.

I would also like to get better integration with Google Docs.

View full review »
Founder & CEO at Data+

We use WhiteSource mainly to:

  1. Detect and automate vulnerability remediation. We started to research solutions since our dev teams are unable to meet sprint deadlines and keep track of product security. Most of our code scans are automated and integrated within our pipeline, which integrates with our CI server. With some, we run them manually using an agent (see the sketch after this list). We recently started using the repository integration with GitHub, too, pre-build.
  2. License reporting and attribution reports. We use attribution reports and due diligence reports to assess risks associated with open-source licenses.
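As a rough illustration of the agent-based scans mentioned in item 1, the sketch below wraps an invocation of the WhiteSource (Mend) Unified Agent; the JAR name, flag names, and config values are assumptions to be checked against the vendor's documentation, not the reviewer's actual setup.

```python
# Hypothetical wrapper around the WhiteSource/Mend Unified Agent. The JAR name
# and flags reflect the commonly documented CLI but should be verified.
import os
import subprocess
import sys

def run_whitesource_scan(project_dir: str, project_name: str) -> int:
    """Run a dependency scan over a project directory and return the exit code."""
    cmd = [
        "java", "-jar", "wss-unified-agent.jar",
        "-c", "wss-unified-agent.config",        # scan configuration file
        "-apiKey", os.environ["WS_API_KEY"],     # org API key, kept in a CI secret
        "-project", project_name,
        "-d", project_dir,
    ]
    return subprocess.call(cmd)

if __name__ == "__main__":
    sys.exit(run_whitesource_scan(".", "demo-service"))
```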
View full review »
Project Manager at a wellness & fitness company with 11-50 employees

We started using WhiteSource mainly to scan dependencies and detect open-source licenses, copyright information, and vulnerabilities.

We’ve managed to establish an integration with our CI/CD pipelines and use pretty much all of the automation that is offered, including automated policies.

View full review »
VP R&D at a computer software company with 51-200 employees

We use WhiteSource mainly to automate open source vulnerability detection and remediation, as well as for license compliance.

I’m less on the side of the license but mainly use the service to get control over vulnerabilities, detect the ones that affect us and remediate accordingly.

We integrate WhiteSource to our pipeline via CI server integration and now started using the GitHub integration too. We also run an agent in specific use cases.

View full review »
FOSS Coordinator at a manufacturing company with 5,001-10,000 employees

CI/CD integration required the use of a consultant. 

We did not require much of a technical team for this. The team consists of four people.

View full review »
GM
Senior Lead Software Engineer at a tech services company with 10,001+ employees

The integration with Azure DevOps was good.

The results and the dashboard they provide are good.

It was pretty straightforward for me.

View full review »
HCL AppScan: Integration
TD
General Manager at a consultancy with 51-200 employees

There are some false positives, which need to be removed, but this is common with all types of scanners.

One thing which I think can be improved is the CI/CD Integration. There is a CI/CD Integration model, but I guess they are deliberately not using it currently. There are challenges when integrating AppScan with CI/CD because sometimes the activation plus the login mechanism provided doesn't work properly. Sometimes a login mechanism fails and then the whole scan fails. It's difficult to integrate with CI/CD.

View full review »
Sonatype Nexus Lifecycle: Integration
VP and Sr. Manager at a financial services firm with 1,001-5,000 employees

Its core features are the most valuable:

  • protection
  • scanning
  • detection
  • notification of vulnerabilities.

It's important for us as an enterprise to continually and dynamically protect our software development from threats and vulnerabilities, and to do that as early in the cycle as possible.

Also, the onboarding process is pretty smooth and easy. We didn't feel like it was a huge problem at all. We were able to get in there and have it start scanning pretty rapidly.

The data quality is really good. They've got some of the best in the industry as far as that is concerned. As a result, it helps us to resolve problems faster. The visibility of the data, as well as their features that allow us to query and search - and even use it in the development IDE - allow us to remediate and find things faster.

The solution also integrated well with our existing DevOps tool. That was of critical importance to us. We built it directly into our continuous integration cycles and that's allowed us to catch things at build time, as well as stop vulnerabilities from moving downstream.

View full review »
Product Strategy Group Director at Civica

We use Azure DevOps as our application lifecycle management tool. It doesn't integrate with that as well as it does with other tools at the moment, but I think there's work being done to address that. In terms of IDEs, it integrates well. We would like to integrate it into our Azure cloud deployment but the integration with Azure Active Directory isn't quite as slick as we would like it to be. We have to do some workarounds for that at the moment.

Also, the ability of the solution to recognize more of the .NET components would be helpful for us.

View full review »
DevOps Engineer at Guardhat

So far, the information that we're getting out of both the Nexus Lifecycle and SonarQube tools is really great.

And the integration of Lifecycle is really good with Jenkins and GitHub; those work very well. We've been able to get it to work seamlessly with them so that it runs on every build that we have. That part is easy to use and we're happy with that.

We're able to use Jenkins Pipeline and the integrations that are built into Gradle to incorporate that into our build process where we can have control over exactly when Nexus IQ and SonarQube analyses are run — what kinds of builds — and have them run automatically.

View full review »
Sr. DevOps Engineer at Primerica

The proxy repository is probably the most valuable feature to us because it allows us to be more proactive in our builds. We're no longer tied to saving components to our repository.

The default policies are good, they're a good start. They're a great place to start when you are looking to build your own policies. We mostly use the default policies, perhaps with changes here and there. It's deceptively easy to understand. It definitely provides the flexibility we need. There's a lot more stuff that you can get into. It definitely requires training to properly use the policies.

We like the integrations into developer tooling. We use the Lifecycle piece for some of our developers and it integrates easily into Eclipse and into Visual Studio code. It's a good product for that.

View full review »
Software Architect at a tech vendor with 11-50 employees

We filed a ticket for some unknown components and got quick feedback. They gave us pointers on how to figure out what it is. One of the things that we were impressed with was that they wanted to do a review of how we were using it after a few months. I guess this is a problem with us technical people. We often don't like reading manuals and like to figure out how stuff works. I initially was skeptical, but I figured that if they were offering it we should do it.

They had us show them how we had set it up, then they had a number of pointers for how we could improve it. E.g., we weren't fully using the JIRA integration and notifications and they pointed that out. There were a few other things they pointed out as well, such as a list of things for us to double check, like whether all our Javascript libraries and open source Javascripts were indexed correctly. Double checking that is what actually triggered the unknown component notification because we weren't 100 percent sure what it was. They then talked us through how to handle those. I'm happy they reached out to do the review. A lot of times, after you buy a piece of software, you just cost the vendor money every hour that they spend on you. In this case, the review was offered and initiated by them. We really appreciated that and we have had good experiences with them as a company.

It has been fun to work with Sonatype. We have been happy with them as a company.

View full review »
DevOps Engineer at a tech vendor with 51-200 employees

The REST API is the most useful for us because it allows us to drive it remotely and, ideally, to automate it.

We have worked a lot on the configuration of its capabilities. This is something very new in Nexus and not fully supported. But that's one of the aspects we are the most interested in.

And we like the ability to analyze the libraries. There are a lot of filters to output the available libraries for our development people and our continuous integration.

The solution integrates well with our existing DevOps tools. It's mainly a Maven plugin, and the REST API provides the compliance piece, so we have everything in one giant tool.

View full review »
ME
Sr. Enterprise Architect at MIB Group

I won't say there aren't a ton of features, but primarily we use it as an artifact repository. Some of the more profound features include the REST APIs. We tend to make use of those a lot. They also have a plugin for our CI/CD; we use Jenkins to do continuous integration, and it makes our pipeline build a lot more streamlined. It integrates with Jenkins very well.
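As an illustration of scripting against those REST APIs, the snippet below lists the components in one repository via the public v1 components endpoint of Nexus Repository; the server URL, repository name, and credentials are placeholders, not the reviewer's environment.

```python
# Sketch: list components in a Nexus Repository via the public v1 REST API.
# Server URL, repository name, and credentials are placeholders.
import requests

NEXUS_URL = "https://nexus.example.internal"
AUTH = ("ci-user", "REPLACE_ME")

def list_components(repository: str):
    """Yield every component in a repository, following continuation tokens."""
    token = None
    while True:
        params = {"repository": repository}
        if token:
            params["continuationToken"] = token
        resp = requests.get(f"{NEXUS_URL}/service/rest/v1/components",
                            params=params, auth=AUTH, timeout=30)
        resp.raise_for_status()
        data = resp.json()
        yield from data["items"]
        token = data.get("continuationToken")
        if not token:
            break

if __name__ == "__main__":
    for component in list_components("maven-releases"):
        print(component["name"], component.get("version"))
```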

The default policies and the policy engine provide the flexibility we need. The default policy was good enough for us. We didn't really mess with it. We left it alone because the default policy engine pretty much works for our use cases.

The integrations into developer tooling work just fine. We primarily use Gradle to build our applications. We just point the URL to what we call our "public repository group" in Nexus. It's a front for everything, so it can see all of the other underlying repositories. Our developers, in their Gradle builds, just point them to this public repository and they can pull down any dependency that they need. It doesn't really integrate with our IDE. It's just simply that we use Gradle and it makes it very straightforward.

Nexus blocks undesirable open-source components from entering our development lifecycle because of the IQ policy actions. We define what level of risk we're willing to take. For example, for "security-critical," we could just fail them across the board; we don't want anything that has a security-critical finding. That's something we define as a CVSS score of nine or 10. If a component has a known vulnerability of nine or 10, we could even stop it from coming down from Maven Central; it's quarantined because it has a problem that we don't want to even introduce into our network. We've also created our own policy that we call an "architecture blacklist," which means we don't want certain components to be used from an architectural standpoint. For example, we don't want anybody to build anything with Struts 1, so we put it on the architecture blacklist. If a component comes in and it has that tag, it fails immediately.

View full review »
Security Analyst at a computer software company with 51-200 employees

I like the JIRA integration, as well as the email notifications. They allow me to see things more in real-time without having to monitor the application directly. So as new items come in, it will generate a JIRA task and it will send me an email, so I know to go in and have a look at what is being alerted.

The policy engine is really cool. It allows you to set different types of policy violations, things such as the age of the component and the quality: Is it something that's being maintained? Those are all really great in helping get ahead of problems before they arise. You might otherwise end up with a library that's end-of-life and is not going to get any more fixes. This can really help you to try to get ahead of things, before you end up in a situation where you're refactoring code to remove a library. The policy engine absolutely provides the flexibility we need. We are rolling with the default policy, for the most part. We use the default policy and added on and adjusted it a little bit. But, out-of-the-box, the default policy is pretty good.

The data quality is good. The vulnerabilities are very detailed and include links to get in and review the actual postings from the reporters. There have been relatively few that I would consider false positives, which is cool. I haven't played with the licensing aspect that much, so I don't have any comment on the licensing data. One of the cool things about the data that's available within the application is that you can choose your vulnerable library and you can pull up the component information and see which versions of that library are available, that don't have any listed vulnerabilities. I've found myself using that a lot this week as we are preparing for a new library upgrade push.

The data quality definitely helps us to solve problems faster. I can pull up a library and see, "Okay, these versions are non-vulnerable," and raise my upgrade task. The most valuable part of the data quality is that it really helps me fit this into our risk management or our vulnerability management policy. It helps me determine: 

  • Are we affected by this and how bad is it? 
  • How quickly do we need to fix this? Or are we not affected?
  • Is there any way to leverage it? 

Using that data quality to perform targeted, manual testing in order to verify that something isn't a direct issue and that we can designate for upgrade for the next release means that we don't have to do any interim releases.

As for automating open-source governance and minimizing risk, it does so in the sense of auditing vulnerabilities, thus far. It's still something of a reactive approach within the tool itself, but it comes in early enough in the lifecycle that it does provide those aspects.

View full review »
RH
Application Development Manager at a financial services firm with 501-1,000 employees

One thing that I would like to give feedback on is to scan the binary code. It's very difficult to find. It's under organization and policies where there are action buttons that are not very obvious. I think for people who are using it and are not integrated into it, it is not easy to find the button to load the binary and do the scan. This is if there is no existing, continuous integration process, which I believe most people have, but some users don't have this at the moment. This is the most important function of the Nexus IQ, so I expect it should be right on the dashboard where you can apply your binary and do a quick scan. Right now, it's hidden inside organization and policies. If you select the organization, then you can see in the top corner that there is a manual action which you can approve. There are multiple steps to reach that important function that we need. When we were initially looking at the dashboard, we looked for it and couldn't find it. So, we called our coworker who set up the server and they told us it's not on the dashboard. This comes down to usability. 

There is another usability thing in the reports section. When the PDF gets generated, it is different from the web version. There are some components from some areas which only reside inside the PDF version. When I generate the PDF for my boss to review, she comes back with a question that I didn't even see. I see on the reporting page whatever the PDF will be generating. The PDF is actually generating more information than the web version. That caught me off guard because she forwarded this to the security officer, who is asking, "Why is this? Or, why is that?" But, she has no idea. I didn't have anything handy because I saw the PDF version, which should be same as what I see on the web. This is a bit misrepresented. I would like these versions to speak together and be consistent. Printing a PDF report should generally reflect whatever you have on the page.

View full review »
Enterprise Infrastrcture Architect at Qrypt

When I started to install the Nexus products and started to integrate them into our development cycle, it helped us construct or fill out our development process, in general. The build stages are a good template for us to help establish a structure that we could build our whole continuous integration and development process around. Now our git repos are tagged for different build stages that align with the Nexus Lifecycle build stages.

Going to the Nexus product encouraged me to look for a package manager solution for our C and C++ development. My customer success engineer, Derek, recommended one that Sonatype was considering integrating with the product, called Conan Package Manager. I started doing research with Conan and realized how beneficial it would be for our C and C++ development cycle. Transitioning to it has really changed our whole C and C++ development. We needed Nexus scanning for our C applications, and I needed Conan to do that.

It's because of Conan that we've reduced build timelines that used to take weeks, since we have so many architectures that we build for. After we figured out how to use it, we can build everything with only a couple of commands. Now, it's a really integrated process for our C and C++ applications, from development to the build pipelines to the IQ scanning and the Nexus Repository manager repositories that we're using for building and packaging. It's been a fun process.
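
For readers who haven't used Conan, the "couple of commands" workflow is driven by a conanfile.py, which is itself a Python file. The following minimal sketch is purely illustrative and assumes Conan 1.x; the package name, versions, and dependencies are hypothetical, not this reviewer's actual setup:

    # conanfile.py -- illustrative sketch only (Conan 1.x style imports).
    # Package name, versions, and dependencies below are assumptions.
    from conans import ConanFile, CMake

    class ExampleAppConan(ConanFile):
        name = "example_app"
        version = "1.0.0"
        settings = "os", "arch", "compiler", "build_type"
        generators = "cmake"

        # Third-party C/C++ dependencies resolved from a Conan remote
        # (for example, a repository proxy), so they can also be scanned.
        requires = (
            "zlib/1.2.11",
            "openssl/1.1.1k",
        )

        def build(self):
            cmake = CMake(self)
            cmake.configure()
            cmake.build()

With a recipe like this, running "conan install ." followed by "conan build ." (or a single "conan create .") resolves the dependencies and builds the project for a given architecture, which is roughly the "couple of commands" experience described above.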

In terms of the data quality, everything has been really good for our Python and our Yum repositories. I know that they are still building their capability for the Conan repositories, the C dependencies. Right now, what Derek has told me is that Conan applications are analyzed with what they call Low Quality Assessment, or LQA. Essentially, any package that has identified vulnerabilities will show up; otherwise, there's not much information on the package. So scanning for Conan is not as good as it is for Python right now, but I know they're working on higher quality data for Conan packages.

Comparing LQA in Conan to something like the higher quality data available in Python repositories does show a difference. For example, Nexus IQ identified a vulnerability in a Python package that we don't use, but it's a transitive dependency in four packages that we do use. We discovered the root vulnerability causing the problem in our four packages with the higher quality data, but we may not have been able to do that as easily with a vulnerability identified in multiple C packages without the higher quality data. I'm not sure.

Nexus will block undesirable open source components from entering our development life cycle. We've agreed on the governance of our policies for blocking builds automatically and we've set a date, for example, to start failing builds automatically on July 15.

It integrates very well with our existing DevOps tools. The Azure DevOps Nexus IQ plugin was really easy. All we did was go to our DevOps portal, go to the add-ins, and then search the list for Nexus. We just clicked on it and it installed in DevOps. There are a couple of help pages on Sonatype's website; I send those to the developers, they add the IQ plugin to the build pipeline, and it just works. It's really nice also because the IQ plugin for DevOps gets updated before I can even go check on it. They've released two updates since we installed it. Every time I hear from Derek that they've updated the IQ plugin, I go to the IQ plugin page on our DevOps server, and it's already been updated. It's totally seamless for us.

It has brought open-source intelligence and policy enforcement across our software development life cycle for almost all of our applications. We're still integrating it with all of our applications, but it definitely has brought the kind of intelligence that we needed.

View full review »
BS
Application Security at a comms service provider with 1,001-5,000 employees

The biggest thing we've learned from using it is that, from a development point of view, we just never realized what types of badness are in those third-party libraries that we pull in and use. It has been an eye-opener as to just how bad they can be.

As far as Lifecycle's integration into developer tooling like IDEs, Git Repos, etc., I don't set that up. But I have not heard of any problems from our guys, from the team that set that stuff up.

I like the tool overall and would rate it at about nine out of 10. There are a few UI-type things that I don't like, that I would like to work a different way. But overall, the tool is good.

View full review »
Information Security Program Preparer / Architect at Alef Education

We have started rolling out to each of our feature teams and so far we have rolled it out to about 30 percent, but we can already see the benefit. It gives our teams easy visibility into the risk inside our code. "Risk" in this case can mean copyright, which is more along the lines of compliance, as well as security itself, such as vulnerabilities.

From the legal and security perspectives, we have a huge concern about what we use in our product and our platform. Before using Sonatype we had a huge business risk. Since bringing in Sonatype, we have visibility for both the legal and security teams. It enables us to maintain the quality from the third-party libraries.

We follow the CI/CD methodology and Sonatype's impact is really huge, because we are able to maintain our continuous integration flow in the DevOps pipeline. The speed of that flow is noticeable. The impact is on both development and operations, together. The integration with the CI/CD pipeline is easy.

View full review »
RS
Senior Architect at an insurance company with 1,001-5,000 employees

The integration is one sore spot, because when we first bought the tool they said JavaScript wasn't really part of the IDE integration, but it was on the roadmap. I followed up on that, and they said, "Oh, you can submit an idea on our idea site to have that added." The sales team said it was already in the pipeline, but it was actually not in the pipeline. 

Overall it's good, but it would be good for our JavaScript front-end developers to have that IDE integration for their libraries. Right now, they don't, and I'm told by my Sonatype support rep that I need to submit an idea, from which they will submit a feature request. I was told it was already in the pipeline, so that was one strike against sales. Everything else has been pretty good.

Also, when Nexus Firewall blocks a component, it doesn't really give us a message that tells us where to go; at least it doesn't in our setup. I have to tell all the users, "Here's the URL where you can go to look up why Firewall is blocking your stuff." And that is odd, because when it finishes a scan, the scan results give you the URL. But when you get blocked by Firewall, it doesn't give you the URL where you can go look that up. You can definitely work around that, but it's a bit strange. It's almost like something they forgot to include.

View full review »
Engineering Tools and Platform Manager at BT - British Telecom

IQ Server is part of BT's central DevOps platform, which is basically the entire DevOps CI/CD platform. IQ Server is the part of it covering the security vulnerability area. We have also made it available for our developers as an IDE plugin. These integrations are good, simplistic, and straightforward. It is easy to integrate with IQ Server, and easy to fetch the results during a build and push them onto a Jenkins board. My impression of these integrations has been quite good. I have heard good reviews from my engineers about how the IDE plugins work.

It basically helps us in identifying open-source vulnerabilities. This is the only tool we have in our portfolio that does this. There are no alternatives. So, it is quite critical for us. Whatever strength Nexus IQ has is the strength that BT has against any open-source vulnerabilities that might exist in our code.

The data that IQ generates around the vulnerabilities and the way it is distributed across different severities is definitely helpful. It does tell us what decision to make in terms of what should be skipped and what should be worked upon. So, there are absolutely no issues there.

We use both Nexus Repository and Lifecycle, and every open-source dependency, after being approved, gets added to our central repository, from which developers can access anything. When they request an open-source component, product, or DLL, it has to go through the IQ scan before it can be added to the repo. Basically, at BT, at the first door itself, we try to keep all vulnerabilities away. Of course, there are scenarios where you approve something and the DLL later becomes vulnerable. In later stages it can also get flagged very easily. The flag reaches the repo very soon, and an automated system removes it or disables it so developers can't use it. That's the perfect example of integration, and how we are enforcing these policies so that we stay as good as we can.

We are using Lifecycle in our software supply chain. It is a part of our platform, and any software that we create has to pass through the platform, so it is a part of our software supply chain. 

View full review »
IV
Product Owner Secure Coding at a financial services firm with 10,001+ employees

The quality profiles that you can set are most valuable. The remediation of issues that you can do, and how the information is presented, are also valuable.

Its integration with our tool landscape is very valuable, as is the interaction with account management and the technical consultants.

The default policies and the policy engine are very good. Most of what we have is the default. It is also possible to create your own policies and custom rules, but we only do that for a handful of exceptions. We are very pleased with the default policies and settings. It provides us the flexibility we need because we can use it in our own customized settings. It is flexible enough for us to work with.

View full review »
Tenable.io Web Application Scanning: Integration
NC
IT Manager at a manufacturing company with 10,001+ employees

The technical support is responsive and they worked on our problem quickly. That said, it depends on how quickly support is needed. The SLA is one or two days, although that depends on the agreement.

When we contacted support during the integration with ZeroNorth, our agents went down and it took a week for them to come back up. I think that the response and resolution times from technical support could be improved, which would lead to less downtime.

Overall, I would say that they are responsive.

View full review »
Snyk: Integration
SS
Engineering Manager at a comms service provider with 51-200 employees

What is valuable about Snyk is its simplicity, and that's the main selling point. It's understandably also very cheap, because they don't need as many account-management resources to manage the relationship with the customer, and that's a benefit. I also like that it's self-service, with extremely easy integration. You don't need to speak to anybody to get up and running, and they have loads of integrations with source control and cloud CI systems. It is a relatively new product, so it might not have as big a library as competitors, but it's a good product overall.

They do, however, have the option to install Snyk on-prem, but it is much more expensive.

View full review »
AG
Information Security Engineer at a financial services firm with 1,001-5,000 employees

It is pretty easy and straightforward to use, because integration won't take more than 15 minutes, to be honest. After that, developers don't have to do anything. Snyk automatically monitors their projects. All they need to do is wait and see if any vulnerabilities have been reported and, if so, how to fix those vulnerabilities. 

So far, Snyk has given us really good results because it is fully automated. We don't have to scan projects every time to find vulnerabilities, as it already stores the dependencies that we are using. It monitors 24/7 to find out if there are any issues that have been reported out on the Internet.
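
To make the "fully automated" monitoring concrete: the Snyk CLI has a monitor command that records a project's dependency tree with Snyk, which then re-evaluates it whenever new advisories are published, so no repeated scanning is needed on the project side. A minimal sketch in Python, assuming the snyk CLI is installed and authenticated via the SNYK_TOKEN environment variable (the project name is hypothetical, not this reviewer's):

    # Illustrative sketch only: register a project with Snyk so it is
    # monitored continuously. Assumes the snyk CLI is on PATH and
    # SNYK_TOKEN is set; the project name is made up.
    import subprocess

    def register_for_monitoring(project_dir: str) -> None:
        # "snyk monitor" uploads the dependency tree; Snyk then rechecks it
        # against newly published vulnerabilities on its side.
        subprocess.run(
            ["snyk", "monitor", "--project-name=payments-service"],
            cwd=project_dir,
            check=True,
        )

    if __name__ == "__main__":
        register_for_monitoring(".")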

Whenever Snyk reports to us about a vulnerability, it always reports to us the whole issue in detail:

  • What is the issue.
  • What is the fix.
  • What version we should use.

For example, if upgrading to a new version might break an application, developers can easily understand, from the references and details that we receive from Snyk, what could break if we upgrade the version.

The solution allows our developers to spend less time securing applications, increasing their productivity. As soon as there is a fix available, developers don't have to look into what was affected. They can easily upgrade their dependencies using Snyk's recommendation. After that, all they need to do is test their application to determine whether the new upgrade breaks anything. Therefore, they are completely relaxed on the security side. 

Snyk is playing a big role in our security tooling. There were a couple of breaches in the past that exploited vulnerable dependencies. If the teams involved had been using Snyk and had visibility into what vulnerabilities they had in their dependencies, they could have easily patched them and saved themselves from those breaches.

So far, we have really good feedback from our developers. They enjoy using it. When they receive a notification that they have a vulnerability in their project, they like that Snyk gives them a very easy way to fix the issue. They don't have to spend much time on the issue and can fix it themselves. This is the first time in my career that I have seen developers like a security tool.

I'm the only person who is currently maintaining everything for Snyk. We don't need more resources to maintain Snyk or work full-time on it. The solution has Slack integration, which is a good feature. We have a public channel where we are reporting all our vulnerabilities. This provides visibility for our developers. They can see vulnerabilities in their projects and fix them on their own without the help of security.

View full review »
JS
Manager, Information Security Architecture at a consultancy with 5,001-10,000 employees

We previously used Black Duck. We switched to Snyk because of its lower false-positive rate, along with its ease of use, integration, and deployment.

View full review »
JB
Security Analyst at a tech vendor with 201-500 employees

I find many of the features valuable: 

  • The capacity for your DevOps workers to easily see the vulnerabilities that are impacting the code they are writing. This is a big plus. 
  • It has a lot of integrations that you can use, from the IDE all the way up to deployment. It's nice to get a snapshot of what's wrong with the build, rather than just knowing it's broken and not why. 
  • It has a few nice features for managing the tool itself, e.g., it can be integrated with other systems. There are some nice integrations with containers. It was just announced that they have a partnership with Docker, and this is also nice. 

The baseline features like this are nice. 

It is easy to use as a developer. There are integrations that will directly scan your code from your IDE. You can also use the CLI. I can just write one command, and it will scan your whole project and tell you where you have problems. We also managed to integrate it into our build pipeline, and it can easily be integrated using the CLI or the API directly if you have more custom use cases. Its modularity makes it really easy to use.
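
As a rough illustration of that "one command" usage in a build pipeline (a sketch, not this reviewer's actual setup): snyk test exits with a non-zero code when it finds issues, so a small wrapper script can fail the build. The flags and severity threshold below are assumptions chosen for the example:

    # Illustrative CI gate, not the reviewer's actual pipeline.
    # Assumes the snyk CLI is installed and authenticated (SNYK_TOKEN).
    import subprocess
    import sys

    def snyk_gate() -> int:
        # "snyk test" returns a non-zero exit code when vulnerabilities at or
        # above the chosen severity are found, so this step fails the build.
        result = subprocess.run(
            ["snyk", "test", "--all-projects", "--severity-threshold=high"]
        )
        return result.returncode

    if __name__ == "__main__":
        sys.exit(snyk_gate())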

Their API is well-documented. It's not too bad to integrate with for creating custom use cases. It is being extended going forward, so it's getting easier to use. If we have issues, we can contact them and they'll see if they can change some stuff around. It is doing well.

Most of the solution's vulnerability database is really accurate and up-to-date. It has a large database. We do have some missing-license issues, especially with non-SPDX-compliant ones, but we expect this to be fixed soon. However, on the development side, I have rarely had any issues with it. It's pretty granular, and you can see each package that you're using along with specific versions. They also provide some nice upgrade paths. If you want to fix some vulnerabilities, they can provide a minor or major patch where you can fix a few of them.

View full review »
CB
Senior Manager, Product & Application Security at a tech services company with 1,001-5,000 employees

There are two use cases that we have for our third-party libraries:

  • We use the Snyk CLI to scan our pipeline. Every time our developer is building an application and goes through the build process, we scan all the third-party libraries there. We also have a hard gate in our pipeline. E.g., if we see a specific vulnerability above a specific threshold (CVSS score), we can then decide whether we want to allow it or block the build.
  • We have an integration with GitHub. Every day, Snyk scans our repository. This is a daily scan where we get the results every day from the Snyk scan. 

We are scanning Docker images and using those in our pipeline too. It is the same idea as with the third-party libraries, but for now it is a soft gate that we are not blocking on yet. We scan all the Docker images after the build process that creates the images. In the future, we will also create a hard gate for Docker images.
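
To sketch how the image scanning described above can move from an advisory check to a hard gate (the image name and threshold here are hypothetical, not this reviewer's configuration): snyk container test also exits non-zero on findings, so whether it blocks is just a matter of whether the pipeline treats that exit code as fatal.

    # Sketch only: scanning built Docker images with Snyk after the build
    # stage. Image names are hypothetical. With check=True this is a hard
    # gate; removing it keeps the scan advisory (a soft gate).
    import subprocess

    IMAGES = ["registry.example.com/orders-api:1.4.2"]

    for image in IMAGES:
        subprocess.run(
            ["snyk", "container", "test", image, "--severity-threshold=high"],
            check=True,  # raises CalledProcessError and fails the pipeline step
        )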

View full review »
Senior Director, Engineering at Zillow Group

It is a fairly developer-focused product. There are pretty good support and help pages that come with the developer tools, like plugins and modules, which integrate seamlessly into continuous integration/continuous deployment pipelines. For example, as you build your software, you may update your dependencies along with it. It supports CI/CD toolchains, build tools, various platforms, and software/programming languages.

It is one of the best products out there to help developers find and fix vulnerabilities quickly. When we talk about the third-party software vulnerability piece and potential security issues, it takes the load off the user or developer. They even provide auto-mitigation strategies and an auto-fix feature, which seem to have been adopted pretty well. 

Their focus is really towards developer-friendly integrations, like plug and play. They understand the ecosystem. They listen to developers. It has been a good experience so far with them.

View full review »
Information Security Officer at a tech services company with 51-200 employees

We are using it to identify security weaknesses and vulnerabilities by performing dependency checks of the source code and Docker images used in our code. We also use it for open-source licensing compliance review. We need to keep an eye on what licenses are attached to the libraries or components that we have in use to ensure we don't have surprises in there.

We are using the standard plan, but we have the container scanning module as well, in a hybrid deployment. The cloud solution is used for integration with the source code repository which, in our case, is GitHub. You can add whatever repository you want to be inspected by Snyk, and it will identify and recommend solutions for the identified issues. We are also using it as part of our CI/CD pipelines; in our case it is integrated with Jenkins. 

View full review »
Security Software Engineer at a tech company with 10,001+ employees

We use it as a pretty wide-ranging tool to scan for vulnerabilities, from our Docker images to Ruby, JavaScript, iOS, Android, and eventually even Kubernetes. We then feed those findings, through the various integrations, into our teams' workflows to better remediate what Snyk discovers.

View full review »
RA
Application Security Engineer at a tech services company with 501-1,000 employees

We have a lot of code and a lot of microservices and we're using Snyk to test our third-party libraries, all the external dependencies that our code uses, to see if there are any vulnerabilities in the versions we use.

We use their SaaS dashboard, but we do have some internal integrations that are on-prem.

We scan our code and we go through the results on the dashboard and then we ask the teams to upgrade their libraries to mitigate vulnerabilities.

View full review »
User

We have integrated it with our infrastructure, collecting images from there and performing regular scans. We also integrated it with our back-end version control systems.

Some time ago, we deployed a new product based on web technologies. It was a new app for us. From the beginning, we integrated Snyk's code scanning into it. Before the production deployment, we checked the code base with Snyk, and this stopped us from deploying an image of the solution that had some high-severity spots. It saved us from high and critical vulnerabilities that could have been exploited in the future, protecting us from some risks.

It helps find issues quickly because:

  1. All the code changes go through the pipeline.
  2. All new changes will be scanned. 
  3. All the results will be delivered. 

This is about the integration. However, if we're talking about local development, developers can easily run Snyk without any difficulties and get results very quickly. 

It is one of the most accurate databases on the market, based on multiple open source databases. It does good correlation and verification of findings from around the Internet. We are very happy on this front.

The solution's container security feature allows developers to own security for the applications and containers they run in the cloud. They can mitigate vulnerabilities at the beginning of a solution's development. We can correlate the vulnerabilities in our base images and fix the base image, which can influence the multiple services that we provide.

View full review »
Senior Security Engineer at Instructure

We have integrated it into our software development environment. We have it in a couple of different spots. Developers can use it at the point when they are developing: they can test it on their local machine and see whether their setup is producing alerts or whether they need to upgrade or patch. Then, at the testing phase, when a product is being built, the automated testing integrates with Snyk and also produces some checks.

The integration into our software development environment has been easy. We have it on GitHub, and we are also using an open source solution that isn't natively supported, but Snyk provides ways for us to integrate with it regardless. GitHub is very easy. You can do that through the UI and with some commands in the terminal. 

The sooner we can find potential vulnerabilities, the better. Snyk allows us to find these potential vulnerabilities in the development and testing phases. We want to shift things to the left of our software development cycle, and I think Snyk helps us do that.

A lot of the containerization is managed by some of our shared services teams. The solution's container security feature allows those teams to own security for the applications and containers they run in the cloud. Our development operations process is smooth. We are able to address these findings later in the development process and then have the scans at the time of deployment. We are able to avoid time crunches because it allows us to find vulnerabilities earlier and have the time to address them.

It provides better security because we make sure that our libraries, dependencies, and product stay up-to-date and have the most current code available. Yet, we are able to quickly know when something requires urgent attention.

View full review »
RD
VP of Engineering at a tech vendor with 11-50 employees

The core offering of reporting across multiple projects and being able to build that into our build-pipelines, so that we know very early on if we've got any issues with dependencies, is really useful.

We're loving some of the Kubernetes integration as well. That's really quite cool. It's still in the early days of our use of it, but it looks really exciting. In the Kubernetes world, it's very good at reporting on the areas around the configuration of your platform, rather than the things that you've pulled in. There's some good advice there that allows you to prioritize whether something is important or just worrying. That's very helpful.

In terms of actionable items, we've found that when you're taking a container that has been built from a standard operating system, it tends to be riddled with vulnerabilities. It's more akin to trying to persuade you to go for something simpler, whether that's a scratch or an Alpine container, which has less in it. It's more a nudge philosophy, rather than a specific, actionable item.

We have integrated Snyk into our software development environment. The way Snyk works is that, as you build the software in your pipelines, you can have a Snyk test run at that point, and it will tell you if there are newly-discovered vulnerabilities or if you've introduced vulnerabilities into your software. And you can have it block builds if you want it to. Our integrations were mostly a language-based decision. We have Snyk integrated with Python, JavaScript/Node, and TypeScript code, among others, as well as Kubernetes. It's very powerful and gives us very good coverage on all of those languages. That's very positive indeed.

We've got 320-something projects — those are the different packages that use Snyk. It could generate 1,000 or 2,000 vulnerabilities, or possibly even more than that, most of which we can't do anything about, and most of which aren't in areas that are particularly sensitive to us. One of our focuses in using Snyk — and we've done this recently with some of the new services that they have offered — is to partition things. We have product code and we have support tools and test tools. By focusing on the product code as the most important, that allows us to scope down and look at the rest of the information less frequently, because it's less important, less vulnerable.

From a fixing-of-vulnerabilities perspective, often Snyk will recommend just upgrading a library version, and that's clearly very easy. Some of the patching tools are a little more complicated to use. We're a little bit more sensitive about letting SaaS tools poke around in our code base; we want a little more control there, but it works. It's really good to be able to focus our attention in the right way. That's the key thing.

Where something is fixable, it's really easy. The reduction in the amount of time it takes to fix something is in orders of magnitude. Where there isn't a patch already available, then it doesn't make a huge amount of difference because it's just alerting us to something. So where it wins, it's hugely dramatic. And where it doesn't allow us to take action easily, then to a certain extent, it's just telling you that there are "burglaries" in your area. What do you do then? Do you lock the windows or make sure the doors are locked? It doesn't make a huge difference there.

View full review »
DD
Security Engineer at a tech vendor with 201-500 employees

It helps us meet compliance requirements, by identifying and fixing vulnerabilities, and to have a robust vulnerability management program. It basically helps keep our company secure, from the application security standpoint.

Snyk also helps improve our company by educating users on the security aspect of the software development cycle. They may have been unaware of all the potential security risks when using open source packages. During this process, they have become educated on what packages to use, the vulnerabilities behind them, and a more secure process for using them.

In addition, its container security feature allows developers to own security for the applications and the containers they run in the cloud. It gives more power to the developers.

Before using Snyk, we weren't identifying the problems. Now, we're seeing the actual problems. It has affected our security posture by identifying open source packages' vulnerabilities and licensing issues. It definitely helps us secure things and see a different facet of security.

It also allows our developers to spend less time securing applications, increasing their productivity. I would estimate the increase in their productivity at 10 to 15 percent, due to Snyk's integration. The scanning is automated through the use of APIs. It's not a manual process. It automates everything and spits out the results. The developers just run a few commands to remediate the vulnerabilities.

View full review »
MG
Director of Architecture at a tech vendor with 201-500 employees

We have been considering Snyk in order to improve the security of our platform, in terms of Docker image security as well as software dependency security. Ultimately, we decided to roll out only the part related to software dependency security plus the licensing mechanism, allowing us to automate the management of licenses.

We have integrated Snyk in the testing phase, like in the testing environment. We are in the process of rolling the solution out across our entire platform, which we will be doing soon. The APIs have enabled us to do whatever we have needed, and the amount of effort for the integration on our end has been reasonable. The solution works well and should continue to work well after the full-scale roll-out.

View full review »
CAST Highlight: Integration
Digital Solution Architect at a tech services company with 10,001+ employees

I have also used Veracode and I like it much better. Veracode is easier for developers to work with. I have also worked with SonarQube.

The integration with Azure DevOps means that there are things you can do in CAST Highlight that you cannot do using other solutions.

View full review »
Contrast Security Assess: Integration
TM
Director of Innovation at a tech services company with 1-10 employees

The effectiveness of the solution’s automation via its instrumentation methodology is good, although it still has a lot of room for growth. The documentation, for example, is not quite up to snuff. There are still a lot of plugins and integrations that are coming out from Contrast to help it along the way. It's really geared more for smaller companies, whereas I'm contracting for a very large organization. Any application's ability to be turnkey is probably the one thing that will set it apart, and Contrast isn't quite to the point where it's turnkey.

Also, Contrast's ability to support upgrades of the actual agents that get deployed is limited. Our environment is pretty much entirely Java, and there are no automatic updates for that agent. You have to actually download a new version of the .jar file and push that out to the servers where your app is hosted. That can be quite cumbersome from a change-management perspective.

View full review »
Senior Security Architect at a tech services company with 5,001-10,000 employees

It depends on how many apps a company or organization has. But whatever the different apps are that you have, you can scale it to those apps. It has wide coverage. Once you install it in an app server, even if the app is very convoluted and has too many workflows, that is no problem. Contrast is per app. It's not like when you install source-code tools, where they charge by lines of code, per KLOC. Here, it's per app. You can pick 50 apps or 100 apps and then scale it. If the app is complex, that's still no problem, because it's all per app.

We have continuously increased our license count with Contrast, because of the ease of deployment and the ease of remediating vulnerabilities. We had a fixed set for one year. When we updated about six months ago, we did purchase extra licenses and we intend to ramp up and keep going. It will be based on the business cases and the business apps that come out of our organization.

Once we get a license for an app, folks who are project managers and scrum masters, who also have access to Contrast, get emails directly. They know they can push defects right from Contrast into JIRA. We also have other tools that we use for integration, like ThreadFix, and risk, compliance, and governance tools. We take the results and upload them to those tools for the audit team to look at.

View full review »
Technical Information Security Team Lead at Kaizen Gaming

The real-time evaluation and library vulnerability checks are the most valuable features, because we have code that has been inherited from the past and we are trying to optimize it, improve it, and remove what's not needed. In this respect, we have had many unused libraries. That's one of the key things that we are striving to carve out at this point.

An additional feature that we appreciate is the report associated with PCI. We are Merchant Level 1 due to the number of our transactions, so we use it to test application compliance. We also use the OWASP Top 10 type of reports, since they are used by our regulators in some of the markets that we operate in, such as Portugal and Germany.

The effectiveness of the solution's automation via its instrumentation methodology is very good, and it was a very easy integration. Because of the way we have designed our release methodology, it is not affected by how many reviews we perform. So, it has clear visibility over every release that we do, because it is the production code which is being evaluated. 

The solution has absolutely helped developers incorporate security elements while they are writing code. The great part about the fixes is that they come with a lot of guidance, like what you should avoid doing in order to prevent future occurrences in your code. Even though the initial assessment is done by senior, more experienced engineers in our organization, we provide the fixes to more junior staff so they have a clear marker of what they shouldn't do in the future, and they receive a good education from the tool as well.

View full review »
SW
Senior Customer Success Manager at a tech company with 201-500 employees

Start with a small app team initially, before scheduling a larger rollout. Teams that have been using SAST tools find that using Assess changes how they think about appSec in their development workflow and helps them identify process modifications that maximize the value of the tool.

Overall, on a scale from one to ten, I would give this solution a rating of ten. The product is strong and improving, support is responsive and effective, and supported integrations work for many customers.

View full review »
ML
Director of Threat and Vulnerability Management at a consultancy with 10,001+ employees

The initial setup was both straightforward and complex. Getting the agent deployed to environments can be complex when people don't understand how it works. But once that agent is deployed, it's very simple. The agent starts gathering data immediately and the data is presented in a UI in a way that is easily understood. You pretty much have vulnerability data right away. The only hurdle is making sure that you've got the agent deployed correctly. After that, everything is very simple.

Deployment for us is ongoing, as we continue to add applications. If I were to just choose one application and look at how long it takes to deploy to that environment, if the application owner has the resources and the ability to deploy the agent, it could be done in a few hours.

In our case, because deploying the agent is a change to the environment, sometimes that impacts larger processes like change management or making sure that the appropriate resources are assigned to do that work. If you have a large environment with many servers that need to have the agent deployed, it could take days or weeks if you don't have the resources to do it. That's not really a weakness of Contrast, but I think it's important to be aware of that if an organization is going to deploy this. A security team like mine might have external dependencies. When it comes to a legacy scan, we might not need anybody's input for us to run it. But with Contrast, we definitely need other teams to help us deploy the agents. Those teams include application owners, cloud services, server management. Whoever is responsible for installing software on a server in your environment would have to participate in this process. It's not something that the security team can do alone.

A good implementation strategy would be

  • having an application inventory
  • knowing where you're going to deploy this
  • ensuring that your applications are using technologies that are supported by Contrast. 

One of the things that we've done internally to try to simplify the agent deployment process is that we give the development teams a package that includes the agent, instructions for deploying the agent, and a couple of other properties that are included in the agent to help us with overall organization. At that point, it really is just a matter of getting the agents installed.
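
As an illustration of what such a package can boil down to for a Java service, the Contrast agent is attached with the JVM's standard -javaagent flag at startup. The following Python launch-wrapper sketch is hypothetical; the paths, application name, and the specific Contrast configuration properties shown are assumptions, not this organization's actual package:

    # Hypothetical launch wrapper an application team might receive.
    # Paths, app name, and Contrast property names are assumptions.
    import subprocess

    AGENT_JAR = "/opt/contrast/contrast.jar"          # agent shipped in the package
    CONFIG = "/opt/contrast/contrast_security.yaml"   # keys, server URL, etc.
    APP_JAR = "/opt/apps/claims-service.jar"          # the team's application

    subprocess.run(
        [
            "java",
            f"-javaagent:{AGENT_JAR}",                 # standard JVM agent attach
            f"-Dcontrast.config.path={CONFIG}",        # assumed config property
            "-Dcontrast.application.name=claims-service",
            "-jar",
            APP_JAR,
        ],
        check=True,
    )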

Once you're gathering data, you want to work with development teams to make sure that they have access to the data. Once you're gathering data, that's when you can start working with integration points, because Contrast does allow you to create tickets in bug-tracking systems or to send alerts to communications platforms. Gathering the data is just the beginning of the process. There's also the dissemination of that data. That part is really dependent on how your organization utilizes and communicates vulnerability data.

We have under 50 users of the solution and about 80 percent are developers, while 10 percent are program management and the other 10 percent are in security. Aside from security, they're all consumers of data. The security users operate the platform, make sure that everything is in order, that applications are being added correctly, and that integration is being added correctly. All of the other users are people who are logging in to view vulnerabilities or to review the state of their applications or to gather reporting data for some deliverable. They don't actually operate or manage the platform. I'm the primary operator.

In the security department, our role in deployment and maintenance is creating those packages that I referred to earlier, packages that tell the developers or the application owners how to deploy the agents. It's the application owners who are responsible for a lot of the maintenance. They're the ones that have to make sure that the agent is part of their build process, they have to make sure that the agent is reporting correctly, and they have to make sure the agent is deployed to servers that are associated with their application. It's the agent that feeds the platform, so a lot of the maintenance is associated with maintaining the agent.

View full review »
GitGuardian Internal Monitoring: Integration
DC
Chief Software Architect at a tech company with 501-1,000 employees

In general, we use GitGuardian as a safety net. We have our internal tools for validating that there is no sensitive data in there. GitGuardian is a more general and robust solution to double-check our work and make sure that if we are committing something, it only contains development IDs and not anything that is production-centric or customer-centric.

The main way in which we're using it at the moment is that it is connected through the GitHub integration. It is deployed through our code review process. When pull requests are created, they connect with GitGuardian, which runs the scan before there is a review by one of our senior devs. That means we can see if there are any potential risk items before the code goes into the main branch.
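
GitGuardian also ships a CLI, ggshield, that can run essentially the same secret scan locally or in CI before a pull request is even opened. This is a sketch, not part of the setup described above, and it assumes ggshield is installed and a GITGUARDIAN_API_KEY environment variable is available:

    # Sketch only -- not part of the reviewer's described setup.
    # Runs GitGuardian's ggshield CLI against the working tree so that
    # secrets are caught before a pull request is opened. Assumes ggshield
    # is installed and GITGUARDIAN_API_KEY is set.
    import subprocess
    import sys

    result = subprocess.run(
        ["ggshield", "secret", "scan", "path", "--recursive", "."]
    )
    sys.exit(result.returncode)  # non-zero when potential secrets are found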

View full review »