
Application Security API Reviews

Showing reviews of the top-ranking products in Application Security that contain the term API
Veracode: API
Senior Security Analyst at a wellness & fitness company with 1,001-5,000 employees

Improve Mobile Application Dynamic Scanning (DAST) for .ipa and .apk files. Right now I have to jailbreak an iPhone and root an Android device to intercept and fuzz requests with a Burp Suite proxy.

That is a very time-consuming process and there are lots of dependencies. It would be very helpful if we could upload an .ipa or .apk into a Veracode simulator, provide credentials, and run a dynamic scan accordingly. Fuzzing functionality for API resources, HTTP methods, and parameters would also be very useful in testing our web and API application firewalls (WAAF), response pages, and other WAAF actions.
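
As an editor's illustration of the manual fuzzing workflow this reviewer describes (intercepting traffic with a Burp proxy and fuzzing API resources, methods, and parameters against a WAF), here is a minimal sketch. The target URL, parameter names, and payload list are hypothetical placeholders, and it assumes a Burp-style proxy is listening on its default port 8080; this is not a Veracode or Burp feature.

```python
import requests

# Hypothetical API under test, fuzzed through a local intercepting proxy
# (Burp Suite's default listener on 127.0.0.1:8080 is assumed to be running).
TARGET = "https://api.example.com/v1/accounts"      # placeholder URL
PROXIES = {"https": "http://127.0.0.1:8080"}
PAYLOADS = ["' OR 1=1--", "<script>alert(1)</script>", "../../etc/passwd"]
METHODS = ["GET", "POST", "PUT", "DELETE"]

for method in METHODS:
    for payload in PAYLOADS:
        # Fuzz a query parameter and a header with each payload and record how
        # the WAF responds (block page, status code, etc.).
        resp = requests.request(
            method,
            TARGET,
            params={"q": payload},
            headers={"X-Fuzz": payload},
            proxies=PROXIES,
            verify=False,   # the proxy re-signs TLS, so certificate checks are skipped here
            timeout=10,
        )
        print(f"{method:6} payload={payload!r:40} -> {resp.status_code}")
```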

View full review »
Sr. Security Architect at a financial services firm with 10,001+ employees

Being cloud-based is a huge plus. All of our scans always use up-to-date scan signatures and rules, and there is nothing for us to maintain. Veracode has been spot-on with notifying us about planned downtimes for maintenance and upgrades. In my years of using the product, unplanned downtimes have been minimal (in fact, I can't remember one).

The API integration that allows integration with other tools, such as defect trackers and automated build tools, is also a benefit. We also like the integrated, available "in-person" support sessions to review and ask questions on discovered defects.

View full review »
DevSecOps Consultant at a comms service provider with 10,001+ employees

There are quite a few features that are very reliable, like the newly launched Veracode Pipeline Scan, which is pretty awesome. It supports the synchronous pipeline pretty well. We have been using it along with the Jira plugin, and that is fantastic.

We are using the Veracode APIs to build Splunk dashboards, which is very nice, as we are able to showcase our application security hygiene to our stakeholders and leadership.
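
A rough sketch of the kind of glue the reviewer describes: pulling findings from Veracode's REST API and forwarding them to a Splunk HTTP Event Collector for dashboarding. The endpoint path, HAL response layout, HEC URL, token, and application GUID are assumptions or placeholders and should be checked against the current Veracode and Splunk documentation.

```python
import requests
# Veracode publishes an HMAC signing helper on PyPI (veracode-api-signing); the import
# path below is the commonly documented one, but verify it against the package's README.
from veracode_api_signing.plugin_requests import RequestsAuthPluginVeracodeHMAC

FINDINGS_URL = "https://api.veracode.com/appsec/v2/applications/{app_guid}/findings"  # assumed path
SPLUNK_HEC = "https://splunk.example.com:8088/services/collector/event"               # placeholder
SPLUNK_TOKEN = "00000000-0000-0000-0000-000000000000"                                 # placeholder

def forward_findings(app_guid: str) -> None:
    # Pull findings for one application profile; the signing plugin reads API credentials
    # from ~/.veracode/credentials.
    resp = requests.get(FINDINGS_URL.format(app_guid=app_guid),
                        auth=RequestsAuthPluginVeracodeHMAC(), timeout=30)
    resp.raise_for_status()
    findings = resp.json().get("_embedded", {}).get("findings", [])  # assumed HAL layout

    # Send each finding to Splunk's HTTP Event Collector so it can feed a hygiene dashboard.
    for finding in findings:
        requests.post(SPLUNK_HEC,
                      headers={"Authorization": f"Splunk {SPLUNK_TOKEN}"},
                      json={"event": finding, "sourcetype": "veracode:finding"},
                      timeout=10)

forward_findings("11111111-2222-3333-4444-555555555555")  # hypothetical application GUID
```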

We have been using Veracode Greenlight for the IDE scanning. 

Veracode has good documentation, integrations, and tools, so it has been a very good solution. 

Veracode is pretty good about providing recommendations, remedies, and guidelines on issues that are occurring.

It is an excellent solution. It finds a good number of the security issues, providing good coverage across the languages that we require at our client site.

We have been using the solution’s Static Analysis Pipeline Scan, which is excellent. When we started, it took more time because we were doing asynchronous scans. However, in the last six months, Veracode has come out with the Pipeline Scan, which supports synchronous scans. It has been helping us out a lot. Now, we don't worry when the pentesting report comes in. By using Veracode, the code is secure, and there are no issues that will stop the release later on in the SDLC.

The speed of the Pipeline Scan is very nice. It takes less than 10 minutes. This is very good, because our policy scans used to take hours.
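
A sketch of how a synchronous Pipeline Scan like the one described here is typically wrapped in a CI step that gates the build. The jar invocation, the --json_output_file flag, and the results-file layout are taken from Veracode's documented pipeline scanner but should be treated as assumptions and confirmed locally; API credentials are assumed to be supplied through the environment.

```python
import json
import subprocess
import sys

# Run the Veracode Pipeline Scan against the freshly built artifact. Credentials are assumed
# to come from the environment (e.g. VERACODE_API_KEY_ID / VERACODE_API_KEY_SECRET).
ARTIFACT = "build/libs/app.jar"   # placeholder path to the build output
subprocess.run(
    ["java", "-jar", "pipeline-scan.jar",
     "--file", ARTIFACT,
     "--json_output_file", "results.json"],   # flag name is an assumption; confirm locally
    check=False,                               # the gate below decides pass/fail
)

# Fail the build if any finding at severity 4 (High) or 5 (Very High) is reported.
with open("results.json", encoding="utf-8") as fh:
    findings = json.load(fh).get("findings", [])   # assumed report layout

high = [f for f in findings if f.get("severity", 0) >= 4]
print(f"Pipeline Scan reported {len(findings)} findings, {len(high)} high or above")
sys.exit(1 if high else 0)
```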

Veracode is good in terms of giving feedback.

View full review »
Software Architect at Alfresco Software

What could improve a lot is the user interface because it's quite dated. And in general, as we are heavy users of GitHub, the integration with the user interface of GitHub could be improved as well. 

There is also room for improvement in the reporting in conjunction with releases. Every time we release software to the outside world, we also need to provide an inventory of the libraries that we are using, with the current state of vulnerabilities, so that it is clear. And if we can't upgrade a library, we need to document a workaround and that we are not really touched by the vulnerability. For all of this reporting, the product could offer a little bit more in that direction. As it is, we just pull the information out and put these reports together manually.

Another problem we have is that, while it is integrated with single sign-on—we are using Okta—the user interface is not great. That's especially true for a permanent link of a report of a page. If you access it, it goes to the normal login page that has nothing that says "Log in with single sign-on," unlike other software as a service that we use. It's quite bothersome because it means that we have to go to the Okta dashboard, find the Veracode link, and log in through it. Only at that point can we go to the permanent link of the page we wanted to access.

Veracode has plenty of data. The problem is the information on the dashboards of Veracode, as the user interface is not great. It's not immediately usable. Most of the time, the best way to use it is to just create issues and put them in JIRA. It provides visibility into the SAST, DAST and SCA, but honestly, all the information then travels outside of the system and it goes to JIRA.

In the end, we are an enterprise software company and we have some products that are not as modern as others. So we are used to user interfaces that are not great. But if I were a startup, and only had products with a good user interface, I wouldn't use Veracode because the UI is very dated.

Also, we're not using the pipeline scan. We upload using the Java API agent and do a standard scan. We don't use the pipeline scan because it only has output on the user interface and it gets lost. When we do it as part of our CI process, all the results are only available in the log of the CI. In our case we are using Travis, and it requires someone to go there and check things in the build logs. That's an area where the product could improve, because if this information was surfaced, say, in the checks of the code we test on GitHub—as happens with other static analysis tools that we use on our code that check for syntax errors and mapping—in that case, it would be much more usable. As it is, it is not enough.
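
One way to surface scan results on GitHub, as this reviewer asks for, is to have the CI job post a commit status after the scan step instead of leaving the results in the Travis log. This is an editorial sketch, not an existing Veracode integration: the repository, token, commit variable, and results-file layout are placeholders or assumptions; only the GitHub commit-status endpoint itself is standard.

```python
import json
import os

import requests

# Placeholders: repository slug and a token with permission to set commit statuses.
REPO = "example-org/example-repo"
SHA = os.environ["TRAVIS_COMMIT"]       # Travis exports the commit being built
TOKEN = os.environ["GITHUB_TOKEN"]

# Assume the scan step left a results.json with a "findings" list (layout is an assumption).
with open("results.json", encoding="utf-8") as fh:
    findings = json.load(fh).get("findings", [])

state = "failure" if findings else "success"
resp = requests.post(
    f"https://api.github.com/repos/{REPO}/statuses/{SHA}",   # GitHub commit-status API
    headers={"Authorization": f"token {TOKEN}",
             "Accept": "application/vnd.github+json"},
    json={"state": state,
          "context": "security/static-analysis",
          "description": f"{len(findings)} finding(s) reported by the scan"},
    timeout=10,
)
resp.raise_for_status()
```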

The management of the false positives is better than in other tools, but still could improve in terms of usability, especially when working with multiple branches. Some of the issues that we had already marked as "To be ignored" because they were either false positives or just not applicable in our context come down, again, to the problem of the user interface. It should have been better thought out to make it easier for someone who is reviewing the list of the findings to mark the false positives easily. For example, there were some vulnerabilities mentioning parts of libraries that we weren't actually using, even if we were including them for different reasons, and in that case we just ignore those items.

We have reported all of these things to product management because we have direct contact with Veracode, and hopefully they are going to be fixed. Obviously, these are things that will improve the usability of the product and are really needed. I'm totally happy to help them and support them in going in the right direction, meaning the right direction from my perspective.

View full review »
Product Owner - DevOps at Digite

We use Veracode primarily for three purposes:

  1. Static Analysis, which is integrated into our CI/CD pipeline using APIs (a minimal submission script is sketched after this list).
  2. Every release gets certified with a static code analysis and a dynamic code analysis. There is a UAT server where the latest release gets deployed, and then we perform dynamic code scanning on that particular URL.
  3. Software Composition Analysis: We use this periodically to understand the software composition from an open source licensing and open source component vulnerability perspective.
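
Here is a minimal sketch of the API-driven static-scan submission referenced in item 1: uploading a build artifact and starting a scan through Veracode's XML upload APIs. The base URL, endpoint names, and the auto_scan parameter follow Veracode's published upload API but should be treated as assumptions; the application ID and artifact path are placeholders.

```python
import requests
# Veracode's HMAC signing helper (PyPI: veracode-api-signing); verify the import path locally.
from veracode_api_signing.plugin_requests import RequestsAuthPluginVeracodeHMAC

BASE = "https://analysiscenter.veracode.com/api/5.0"   # assumed XML API base
APP_ID = "123456"                                      # hypothetical application ID
ARTIFACT = "dist/release.war"                          # placeholder build artifact

auth = RequestsAuthPluginVeracodeHMAC()

# 1. Upload the build artifact to the application profile.
with open(ARTIFACT, "rb") as fh:
    requests.post(f"{BASE}/uploadfile.do", data={"app_id": APP_ID},
                  files={"file": fh}, auth=auth, timeout=300).raise_for_status()

# 2. Kick off the prescan and let it roll straight into the static scan.
requests.post(f"{BASE}/beginprescan.do",
              data={"app_id": APP_ID, "auto_scan": "true"},
              auth=auth, timeout=60).raise_for_status()
```
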
View full review »
Automation Practice Leader at a financial services firm with 10,001+ employees

The solution has issues with scanning. It tries to decode the binaries that we are trying to scan and then scans the decoded code for vulnerabilities, but the decoding doesn't always work. They really need two different ways of scanning, one for static analysis and one for dynamic analysis, and they shouldn't decode the binaries to do the security scanning. It's a challenge for us and doesn't work too well.

As an additional feature, I'd like to see third-party vulnerability scanning as well as container image scanning and interactive application security testing (IAST). Those are some of the features that Veracode needs to improve. Aside from that, the API is very challenging to integrate with the different tools. I think Veracode can do better in those areas.

View full review »
Cybersecurity Executive at a computer software company with 51-200 employees

We utilize it to scan our in-house developed software, as a part of the CI/CD life cycle. Our primary use case is providing reporting from Veracode to our developers. We are still early on in the process of integrating Veracode into our life cycle, so we haven't consumed all features available to us yet. But we are betting on utilizing the API integration functionality in the long-term. That will allow us to automate the areas that security is responsible for, including invoking the scanning and providing the output to our developers so that they can correct any findings.

Right now, it hasn't affected our AppSec process, but our 2022 strategy is to implement multiple components of Veracode into our CI/CD life cycle, along with the DAST component. The goal is to bridge that with automation to provide something closer to real-time feedback to the developers and our DevOps engineering team. We are also looking for it to save us productivity time across the board, including security.

It's a SaaS solution.

View full review »
Qualys Web Application Scanning: API
Senior Software Developer at a tech vendor with 1,001-5,000 employees

One area that could be improved is the data server. That's probably what I most noticed in comparison with Rapid7. Also, the UI is not user-friendly, and you don't have a yearly reporting facility where you can slice and dice across different jobs. This is not good.

Additionally, you don't have a recording feature where you can record your screen navigation. Like with a macro, you want to capture the full flow, but they don't provide a tool that can record your navigation and then replay it.

In terms of what should be included in the next release, like I mentioned, it's the UI, the user interface screens. Also, it would be good if they could improve and enrich the reports. These are the fundamental differences with Rapid7.

View full review »
PortSwigger Burp Suite Professional: API
Director - Head of Delivery Services at Ticking Minds Technology Solutions Pvt Ltd

In the earlier versions, what we saw was that the REST API was something that needed to be improved upon, but I think that has come in the new edition, based on what I was reading in the available release notes.

There is a certain amount of lead time for tickets to get resolved. The biggest improvement that I would like to see from PortSwigger is for what many people see as a need in their security testing to be prioritized and developed as a feature which can be useful. For example, if they're able to take these kinds of requests, group them, prioritize them, and show how the product is going to develop ("this is what we're going to focus on building in the next six months or so"), that could be something that would be really valuable for testers to have.

View full review »
Security consultant at a manufacturing company with 10,001+ employees

One downside of the solution would be their false positive checks. As with most automated security tools, there is still a high false positive rate. Hopefully they will be able to improve on that in the future. It would also be helpful if the solution had the capability of handling larger reports. Another area of improvement would be to have a customizable dashboard. It's currently restricted to their own interface. If you want to utilize the other features available in their API documentation, then you have to write some code yourself. It would be great if their interface could be somewhat customizable.
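
As an illustration of the "write some code yourself" route the reviewer mentions, the sketch below drives a scan through the local REST API that Burp Suite Professional can expose and tallies issues by severity for a custom dashboard. The base URL, paths, header usage, and response field names are assumptions to be checked against your Burp version; the target URL is a placeholder.

```python
import time

import requests

BURP_API = "http://127.0.0.1:1337/v0.1"     # assumed local REST API base (enable it in Burp)
TARGET = "https://app.example.com"          # placeholder target

# Start a scan; the new task's ID is assumed to come back in the Location header.
resp = requests.post(f"{BURP_API}/scan", json={"urls": [TARGET]}, timeout=10)
resp.raise_for_status()
task_url = f"{BURP_API}/scan/{resp.headers['Location']}"

# Poll until the task finishes (the status and event field names below are assumptions).
while True:
    task = requests.get(task_url, timeout=10).json()
    if task.get("scan_status") in ("succeeded", "failed"):
        break
    time.sleep(30)

# Tally reported issues by severity for a home-grown dashboard.
severities = {}
for event in task.get("issue_events", []):
    sev = event.get("issue", {}).get("severity", "unknown")
    severities[sev] = severities.get(sev, 0) + 1
print("Issue counts by severity:", severities)
```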

View full review »
Senior Technical Architect at Hexaware Technologies Limited

There could be an improvement in the API security testing. There is another tool called Postman, and if we had a built-in portal similar to Postman that captures the API, we would be able to generate the API traffic. Right now we need Postman and Burp Suite to perform API tests. It would be a huge benefit to be able to do it in a single UI.
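
A small sketch of the two-tool workaround described above: driving hand-written API calls through Burp's intercepting proxy (instead of Postman) so the traffic lands in Burp for testing. The API URL, token, and request bodies are placeholders; 127.0.0.1:8080 is Burp's default proxy listener.

```python
import requests

BURP_PROXY = {"https": "http://127.0.0.1:8080"}   # Burp's default intercepting proxy
API = "https://api.example.com/v1/orders"         # placeholder API endpoint

session = requests.Session()
session.proxies.update(BURP_PROXY)
session.verify = False                            # Burp re-signs TLS with its own CA
session.headers["Authorization"] = "Bearer <token>"   # placeholder credential

# Exercise the endpoints you would normally hit from Postman; Burp captures them for scanning.
session.get(API, timeout=10)
session.post(API, json={"sku": "ABC-123", "qty": 1}, timeout=10)
```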

In a future release, some kind of autonomous function or user behavior prediction would be beneficial.

View full review »
Micro Focus Fortify on Demand: API
Information Security Manager at a tech services company with 501-1,000 employees

Reporting could be improved. It would be nice to export to an Excel sheet or another spreadsheet format. At the moment, my only option is a PDF.
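
Until a spreadsheet export exists in the product, one workaround consistent with this request is to pull findings from the Fortify on Demand REST API and write them to CSV, which opens directly in Excel. The base URL and path follow FoD's published API but are assumptions here, as are the response field names; the token and release ID are placeholders.

```python
import csv

import requests

FOD_URL = "https://api.ams.fortify.com/api/v3/releases/{release_id}/vulnerabilities"  # assumed path
TOKEN = "<oauth-token>"        # placeholder bearer token
RELEASE_ID = 12345             # placeholder release ID

resp = requests.get(FOD_URL.format(release_id=RELEASE_ID),
                    headers={"Authorization": f"Bearer {TOKEN}"},
                    params={"limit": 50}, timeout=30)
resp.raise_for_status()
items = resp.json().get("items", [])   # field names below are assumptions

with open("fod_findings.csv", "w", newline="", encoding="utf-8") as fh:
    writer = csv.writer(fh)
    writer.writerow(["id", "category", "severity", "status"])
    for item in items:
        writer.writerow([item.get("vulnId"), item.get("category"),
                         item.get("severityString"), item.get("status")])
```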

Micro Focus Fortify on Demand is tailored more towards web application APIs, and I would like to see mobile applications added in the next release.

View full review »
Security Systems Analyst at a retailer with 5,001-10,000 employees

The basic scanning is not very complex. When you get into more detailed scanning such as APIs, the level of complexity is moderate. However, when you are scanning that type of application, you usually have teams available that know what to do and what the configuration needs to be. We did our first scan within two days.

View full review »
Acquisitions Leader at a healthcare company with 10,001+ employees

It is a very easy tool for developers to use in parallel while they're doing the coding. It does auto scanning as we are progressing with the CI/CD pipeline. It has got very simple and efficient API support.

It is an extremely robust, scalable, and stable solution.

It enhances the quality of code all along the CI/CD pipeline from a security standpoint and enables developers to deliver secure code right from the initial stages.

View full review »
Netsparker by Invicti: API
Lead Security Architect at a comms service provider with 1,001-5,000 employees

Tech support is really wonderful, and they are very helpful and prompt with responses as well. If we have some queries regarding macros, regarding the APIs, the customer support is really good, and they have good recommendations as well.

View full review »
Checkmarx: API
Founder & Chairman at Endpoint-labs Cyber Security R&D

Aside from my occupation, I am an academic. Because of our status, we test products as well as their competitors, for example, we45, AppScan, SonarQube, etc. I have to point out, from an academic and business point of view, there is a very serious competitive advantage to using Checkmarx. Even if there are multiple vulnerabilities in the source code, Checkmarx is able to identify which lines need to be corrected and then proceeds to automatically remediate the situation. This is an outstanding advantage that none of the competition offers.

The flexibility in regards to finding false-positives and false-negatives is amazing. Checkmarx can easily manage false-positives and negatives. You don't need to generate an additional platform if you would like to scan a mobile application from iOS or Android. With a single license, you are able to scan and test every platform. This is not possible with other competitive products. For instance, say you are using we45 — if you would like to scan an iOS application, you would have to generate an iOS platform first. With Checkmarx you don't need to do anything — take the source code, scan it and you're good to go. Last but not least, the incremental scanning capabilities are a mission-critical feature for developers. 

Also, the API and integrations are both very flexible.


View full review »
SonarQube: API
CTO at a computer software company with 11-50 employees

The exporting capability could be improved. Currently, exporting is a bit messy and fully dependent on the SonarQube environment. SonarQube offers a REST API and you can export the results programmatically, but the process is quite slow and limited. You can extract a maximum of 10,000 results per query, which increases the overall execution time tremendously. I guess the majority of users rely on SonarQube's presentation capabilities, which are very restrictive for some use cases.
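
The paging limit the reviewer mentions can be seen in a short export script against SonarQube's issues API: pages are capped at roughly 10,000 results (page size times page number), so larger exports have to be split by filters such as creation date. The server URL, project key, and token below are placeholders.

```python
import requests

SONAR_URL = "https://sonarqube.example.com"   # placeholder server
PROJECT = "my-project-key"                    # placeholder project key
AUTH = ("<user-token>", "")                   # SonarQube tokens go in the username field

issues, page = [], 1
while True:
    resp = requests.get(f"{SONAR_URL}/api/issues/search",
                        params={"componentKeys": PROJECT, "ps": 500, "p": page},
                        auth=AUTH, timeout=30)
    resp.raise_for_status()
    data = resp.json()
    issues.extend(data["issues"])
    # SonarQube rejects requests once page * page-size would exceed 10,000 results.
    if page * 500 >= min(data["total"], 10_000):
        break
    page += 1

print(f"Exported {len(issues)} of {data['total']} issues")
```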

View full review »
Technology Manager at Publicis Sapient

The scalability depends on the use case. You cannot install it with minimal resources and expect it to run thousands of jobs. It is scalable based on your environment. How big is your project? How many APIs do you want to scan? How many APIs per minute, etc. Based on that information you need to first decide upfront how much memory or how much storage you want to give to it. You need to have clear data with you and then use the resources to design accordingly. I think it is highly scalable and can operate seamlessly if you give it the environment that is sufficient. You cannot expect magic from it.

We have some projects that have 150 users with ten teams using the solution.

View full review »
Coverity: API
Automation Practice Leader at a financial services firm with 10,001+ employees

We found that installation and configuration require setting up pipelines for continuous integration and continuous deployment. It was a bit challenging because the necessary base integration was not easy to configure.

It took us slightly over a week to deploy, whereas, with SonarQube, we were able to complete it in less than a day. It was due to complexities in Coverity that it took us more than a week. The complexities were related to missing API features and hooks.

View full review »
WhiteSource: API
Senior Productization Specialist at a tech services company with 51-200 employees

I use this solution for product inventory tracing and handling of third-party products (3PPs), with respect to license compliance and security.

I've been using both the UI & API.

View full review »
Project Manager at a wellness & fitness company with 11-50 employees

We were able to integrate the product naturally into our development process and it provided results really fast. You can easily use the Unified Agent and connect your CI/CD tools. It scans all of your source code quickly; it took us just a few minutes to run. The REST API is really good as well.
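
A minimal sketch of the CI step the reviewer describes, invoking the WhiteSource (Mend) Unified Agent against the checked-out source. The jar name and flags follow the agent's documentation but should be confirmed for your version; the project name is a placeholder and the API key is read from the CI environment.

```python
import os
import subprocess
import sys

cmd = [
    "java", "-jar", "wss-unified-agent.jar",
    "-c", "wss-unified-agent.config",     # scan configuration checked into the repo
    "-d", ".",                            # directory to scan
    "-apiKey", os.environ["WS_API_KEY"],  # organization API key from the CI secret store
    "-project", "my-service",             # placeholder project name
]
result = subprocess.run(cmd)
# The agent's exit code is assumed to be non-zero when policy checks fail, which fails the build.
sys.exit(result.returncode)
```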

In the past, running similar tools or trying to get feedback on our open-source state was almost impossible.

Our primary goal was to get the license reports, but now we have a full end-to-end process that automates all license management, open-source license approval, rejection, ticket assignment, and more.

View full review »
Sonatype Nexus Lifecycle: API
VP and Sr. Manager at a financial services firm with 1,001-5,000 employees

Without it we didn't have any way to detect vulnerabilities except through reactive measures. It's allowed us to be proactive in our approach to vulnerability detection.

Sonatype has also brought open-source intelligence and policy enforcement across our SDLC. It enforces the SDLC contributors to only use the proper and allowed libraries at the proper and allowed time in the lifecycle of development. The solution blocks undesirable open-source components from entering our development lifecycle. That's its whole point and it does it very well.

We use the solution to automate open-source governance and minimize risk. With our leaders across our different organizations, we set policies that govern what types of libraries can be used and what types of licenses can be used. We set those as settings in the tool and the tool manages that throughout the lifecycle, automatically.

It's making things more secure, and it's making them higher in quality, and it's helping us to find things earlier. In those situations where we do find an issue, or there is an industry issue later, we have the ability to know its impact rapidly and remediate more rapidly.

View full review »
IT Security Manager at an insurance company with 5,001-10,000 employees

For the application onboarding, we are focusing on automating that as much as possible. Considering the number of applications that we scan, it's probably not feasible to do all that within the GUI, but the APIs provided by the solution are really good. We have some positive impressions of that. The automatic onboarding seems to work quite well.
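
A sketch of the kind of automated onboarding described here, creating an application in Nexus IQ through its REST API rather than the GUI. The path and payload follow the IQ Server API documentation but should be verified against your version; the server URL, credentials, names, and organization ID are placeholders.

```python
import requests

IQ_URL = "https://iq.example.com"              # placeholder IQ Server
AUTH = ("onboarding-user", "<password>")       # or a user-token pair

payload = {
    "publicId": "payments-service",            # placeholder application ID
    "name": "Payments Service",
    "organizationId": "<organization-internal-id>",
}
resp = requests.post(f"{IQ_URL}/api/v2/applications", json=payload, auth=AUTH, timeout=30)
resp.raise_for_status()
print("Onboarded application:", resp.json().get("id"))   # response field is an assumption
```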

One thing we recently did is we automatically onboarded every application that we deployed to production. We scanned each one of them and now have a complete picture of our estates. Every single vulnerability introduced from an open source component is now visible, and we have a clear number. That number was big. Really, we have a lot of issues which we were unaware of. We suspected that we had them, but we now have a clear number that makes selling the solution internally a lot easier.

The solution brought open source intelligence and policy enforcement to a small extent across our SDLC (software development lifecycle) because we have only fully rolled it out in a small number of teams. However, where we did do this, we have started scanning right at the build phase, seeing issues really early in the lifecycle.

The solution automates open source governance and minimizes risk. We are trying to reduce the amount of vulnerabilities that we introduce using open source codes. The entire goal of why we're doing this solution is to have it in the lifecycle of our software development and reduce risk.

View full review »
DevOps Engineer at a tech vendor with 51-200 employees

The REST API is the most useful for us because it allows us to drive it remotely and, ideally, to automate it.

We have worked a lot on the configuration of its capabilities. This is something very new in Nexus and not fully supported. But that's one of the aspects we are the most interested in.

And we like the ability to analyze the libraries. There are a lot of filters to output the available libraries for our development people and our continuous integration.

The solution integrates well with our existing DevOps tools. It's mainly a Maven plugin, and the REST API provides the compliance data, so we have everything in a single tool.

View full review »
Sr. Enterprise Architect at MIB Group

We have a lot of legacy applications here and they're all built with Ant scripts and their dependencies come from a shared folder. There's not a lot of "accountability" there. What we get out of using Nexus is that all of our dependencies are in the same place and we can specify a specific version. We no longer have a situation where somebody has pulled down a .jar file and stuck it in this folder and we don't know what the version is or where, exactly, it came from. That's one of the benefits.

Another of the main things we get is what Sonatype calls a "bill of materials." We can go into our Nexus product and say, "Okay, here is our ABC application. What are its dependencies?" And we can be specific down to the version. We know what's in it and, if a vulnerability gets reported, we can look and see if we use that particular component and in which applications, to know if we're vulnerable. If we find we're exposed to that vulnerability we know we need to go and remediate it.

The biggest benefit we get out of it is the overall ease of development. The ability to automate a lot of the build-and-deploy process comes from that.

The data quality helps us solve problems faster, as in the security vulnerability example I just mentioned. In those circumstances, we have to solve that problem. Previously, we wouldn't have seen that vulnerability without a painstaking process. Part of the Nexus product, the IQ Server, will continually scan our components and if a new CVE is reported, we get that update through Nexus IQ. It automatically tells us, "Hey, in this open-source library that you're using, a vulnerability was found, and you use it in these four applications." It immediately tells us we are exposed to risk and in which areas. That happens, not in near real-time, but very quickly, where before, there was a very painstaking process to try to find that out.

A year ago we didn't have DevOps tools. We started building them after I came on. But Nexus definitely integrates very well with our DevOps tools. Sonatype produces plugins for Jenkins to make it seamlessly interact, not only with the repo product, but with the Nexus IQ product that we own as well. When we build our pipelines, we don't have to go through an array of calls. Even their command-line is almost like pipeline APIs that you can call. It makes it very simple to say "Okay, upload to Nexus." Because Jenkins knows what Nexus is and where it is — since it's configured within the Jenkins system — we can just say, "Upload that to Nexus," and it happens behind the scenes very easily. Before, we would have to either have run Maven commands or run Gradle commands via the shell script to get that done. We don't need to do that sort of thing anymore.

The solution has also brought open-source intelligence and policy enforcement across our SDLC. We have defined policies about certain things at various levels, and what risks we're willing to expose ourselves to. If we're going to proxy a library from Maven Central for example, if the Nexus IQ product says it has a security-critical vulnerability or it's "security high" or it's "component unknown," we can set different actions to happen. We allow our developers to pull down pretty much anything. As they pull something down from say, Maven Central, it is scanned. If it says, "This has a critical vulnerability," we will warn the developer with the report that comes out: "This has a security-critical vulnerability. You're allowed to bring it down in development, but when you try to move to QA or staging, that warning about the 'security-critical' component will turn to a failure action." So as we move our artifacts through that process, there are different stages. When someone tries to move that component to our staging environment, it will say, "Oh no, you can't because of the security-critical thing that we've been warning you about. Now we have to fail you." That's where we get policy enforcement. Before, that was a very manual process where we'd have to go out and say, "Okay, this thing has these vulnerabilities, what do we do with it?" It's much more straightforward and the turnaround time is a whole lot faster.

Automating open-source governance and minimizing risk is exactly what Nexus is for. Our company is very security conscious because we're governed by a number of things including the Fair Credit Reporting Act, which is very stringent in terms of what we can and cannot have, and the level of security for data and information that we maintain. What Nexus does is it allows us to look at the level of risk that we have in an application that we have written and that we expose to the companies that subscribe to us. It's based on the components that we have in the application and what their vulnerabilities are. We can see that very clearly for any application we have. Suppose, all of a sudden, that a Zero-day vulnerability — which is really bad — is found in JAXB today. We can immediately look for that version in Nexus. We can see: Do we have that? Yes, we do. Are we using it? Yes, we are. What applications are we using it in? We can see it's in this and that application and we can turn one of our teams to it and get them to address it right away.

I don't know exactly how much time it has saved us in releasing secure apps to market, but it's considerable. I would estimate it saves us weeks to a month, or more, depending upon the scope of a project.

And it has definitely increased developer productivity. They spend a lot less time looking for components or libraries that they can download. There was a very manual process to go through, before Nexus, if they wanted to use a particular open-source library. They had to submit a request and it had to go through a bunch of reviews to make sure that it didn't have vulnerabilities in it, and then they could get a "yes" or "no" answer. That took a lot of time. Whereas now, we allow them to download it and start working with it while other teams — like our enterprise security team — look at the vulnerabilities associated with it. That team will say, "Yeah, we can live with that," or "No, you have to mitigate that," or "No, you can't use this at all." We find that out very much earlier in the process now.

It allows us to shift gears or shift directions. If we find a component that's so flawed that we don't even want to bring it into the organization from a security standpoint, we can pivot and say, "Okay, we'll use this other component. It doesn't do everything we needed, but it's much more solid."

View full review »
Senior Architect at an insurance company with 1,001-5,000 employees

We really like the Nexus Firewall. There are increasing threats from rogue npm components, and we've been able to leverage protection there. We also really like being able to know which of our apps has known vulnerabilities.

Specifically, the features that have been good include:

  • the email notifications
  • the API, which has been good to work with for reporting, because we have some downstream reporting requirements
  • that it's been really user-friendly to work with.

Generally speaking, the configuration of all the tools is pretty good; the admin screens are good.

We have been able to use the API for some Excel-based reports to compare how many of our application deployments were covered by scans, and to do charts on that. That has been good and worked really well.
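
A sketch of the Excel-oriented reporting workflow described above: pulling the latest evaluation report metadata per application from the Nexus IQ API and writing it to a CSV that Excel can chart. The endpoint and field names follow the IQ API documentation but are assumptions here; the server URL and credentials are placeholders.

```python
import csv

import requests

IQ_URL = "https://iq.example.com"            # placeholder IQ Server
AUTH = ("reporting-user", "<password>")      # placeholder credentials

resp = requests.get(f"{IQ_URL}/api/v2/reports/applications", auth=AUTH, timeout=30)
resp.raise_for_status()

with open("scan_coverage.csv", "w", newline="", encoding="utf-8") as fh:
    writer = csv.writer(fh)
    writer.writerow(["applicationId", "stage", "evaluationDate"])
    for report in resp.json():               # field names are assumptions
        writer.writerow([report.get("applicationId"), report.get("stage"),
                         report.get("evaluationDate")])
```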

The default policies are also good. We deviated a little bit from those, but we have mostly used them, and they have been good. They provide us with the flexibility that we need and probably more flexibility than we need.

It has brought open source intelligence and policy enforcement across our SDLC. We have policies and SLAs that say, for example, critical findings have to be fixed within 90 days, and "high" findings have to be fixed within 120 days. That's tracked and reported on. We use the API to do some downstream reporting into some executive dashboards and when executives see red and orange they don't like it, and things get done. We've also made it part of our standards to say no components with existing vulnerabilities. Enforcing those standards is integrated into our software development life cycle.

Sonatype also blocks undesirable open source components. That is also done through policies that you can set, and configuration of the repo.

View full review »
Product Owner Secure Coding at a financial services firm with 10,001+ employees

The user interface needs to be improved. It is slow for us. We use Nexus IQ mostly via APIs. We don't use the interface that much, but when we use it, certain areas are just unresponsive or very slow to load. So, performance-wise, the UI is not fast enough for us, but we don't use it that much anyway.

View full review »
Snyk: API
Information Security Engineer at a financial services firm with 1,001-5,000 employees

The initial setup was straightforward. Onboarding projects didn't take me too long. It was pretty straightforward and easy to integrate with event/packet cloud and import all our projects from there. Then, it was easy to generate the organizational ID and API key, then add it to the Snyk plug-in that we are using in our build pipeline.

Snyk was already onboard when I joined. Deployment of my 23 projects took me an hour. 

View full review »
Security Analyst at a tech vendor with 201-500 employees

I find many of the features valuable: 

  • The capacity for your DevOps workers to easily see the vulnerabilities which are impacting the code that they are writing. This is a big plus. 
  • It has a lot of integrations that you can use, from the IDE all the way up to deployment. It's nice to get a snapshot of what's wrong with the build, rather than it just being broken and you not knowing why. 
  • It has a few nice features for us to manage the tool, e.g., it can be integrated. There are some nice integrations with containers. It was just announced that they have a partnership with Docker, and this is also nice. 

The baseline features like this are nice. 

It is easy to use as a developer. There are integrations that will directly scan your code from your IDE. You can also use a CLI. I can just write one command, and it will scan your whole project and tell you where you have problems. We also managed to integrate it into our build pipeline, so it can easily be integrated using the CLI or the API directly if you have some more custom use cases. The modularity of it is really easy to use.
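
A sketch of the one-command CLI usage in a build pipeline: run snyk test with JSON output and gate the build on high-severity findings. It assumes the Snyk CLI is installed and authenticated (e.g. SNYK_TOKEN in the environment); the severity gate is an illustrative policy choice, not a Snyk default.

```python
import json
import subprocess
import sys

# Scan everything in the working directory and capture the machine-readable report.
result = subprocess.run(["snyk", "test", "--all-projects", "--json"],
                        capture_output=True, text=True)
report = json.loads(result.stdout or "[]")
projects = report if isinstance(report, list) else [report]

# Fail the build if any high or critical vulnerability is reported.
high = [v for p in projects for v in p.get("vulnerabilities", [])
        if v.get("severity") in ("high", "critical")]
print(f"Snyk reported {len(high)} high/critical vulnerabilities")
sys.exit(1 if high else 0)
```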

Their API is well-documented. It's not too bad to integrate and for creating some custom use cases. It is getting extended going forward, so it's getting easier to use. If we have issues, we can contact them and they'll see if they can change some stuff around. It is doing well.

Most of the solution's vulnerability database is really accurate and up-to-date. It has a large database. We do have some missing-license issues, especially with non-SPDX-compliant ones, but we expect this to be fixed soon. However, on the development side, I rarely have had any issues with it. It's pretty granular and you can see each package that you're using along with specific versions. They also provide some nice upgrade paths. If you want to fix some vulnerabilities, they can provide a minor or major patch where you can fix a few of them.

View full review »
Information Security Officer at a tech services company with 51-200 employees

In our organization, I ask that things be done and people do them, so I wasn't directly involved in the setup. But the installation seemed to be quite straightforward. I don't get pushback from the dev community. My background is more infrastructure, I'm not a developer, so I can't comment on how easy it is to bring everything together. But when I worked with my devs, when we migrated from Concourse to Jenkins, it wasn't such a huge undertaking and it didn't cause us too many headaches.

In terms of developer adoption, they have to use it because we asked them to use it. And once it's part of the pipeline, everything that they push through the pipeline goes through Snyk. It was a company decision to go that way.

The initial rollout took about one week. Most of the stuff was already in place. We just migrated from one pipeline provider to another. It was quite straightforward.

We have a bit of a hybrid approach. Some of it was in the cloud, and we haven't touched that. The integration of the container bit, the CLI integration, is done on our cloud and it's something we maintain. We tried to use Snyk's recommendations. It has an API that you can call to run some scans, but their full-featured, recommended solution is to use the CLI, using your own instance of Snyk. So we have a container that's running Snyk, and whenever we run the scans we just call on that.

The deployment involved one or two people internally. When it was just GitHub, it was me and one developer. And when it came to infrastructure, it was me with an infra guy. It depends on the level of expertise that you have in-house and how comfortable people are with similar solutions. At the end of the day, to roll up a container image and pull that into your pipeline is quite straightforward. It's not difficult.

We don't do that much maintenance on Snyk. It's integrated. It's running in the background. We only touch it when we need to touch it. It's not like we need dedicated resources for that.

Between 50 and 70 people are using Snyk at a given time in our organization. Most of them are developers. We might have some QAs who look at something.

View full review »
Application Security Engineer at a tech services company with 501-1,000 employees

We tried to integrate it into our software development environment but it went really badly. It took a lot of time and prevented the developers from using the IDE. Eventually, we didn't use it in the development area.

If the plugin for our IDE worked for us, it might help developers find and fix vulnerabilities quickly. But because it's hard to get the developers to use the tool itself, the cloud tool, it's more that we in the security team find the issues and give them to them.

I would like to see better integrations to help the developers get along better with the tool. And the plugin for the IDE is not so good. This is something we would like to have, but currently we can't use it.

Also, the API could be better by enabling us to get more useful information through it, or do more actions from the API.

Another disadvantage is that a scan during CI is pretty slow. It almost doubles our build time.

View full review »
Senior Security Engineer at Instructure

It raises alerts on vulnerable libraries and findings. It scores those alerts and allows us to prioritize them.

It is very easy to use: The UI is very polished and the API is straightforward. Our developers seldom have a thought like, "This is very odd how they are doing this." The solution seems very intuitive.

I am impressed with Snyk's vulnerability database in terms of its comprehensiveness and accuracy. There have been times when I know that brand new vulnerabilities have come out, then it's only taken them a day or two to adopt them and get them processed into their database. I feel pretty confident in the database.

The container security feature is good and straightforward. The solution’s actionable advice about container vulnerabilities is a little more straightforward because, in most cases, you need to upgrade. There is not as much investigation that needs to go into that. So, the decision to upgrade and fix those is straightforward.

Their API and UI are great.

View full review »
Security Engineer at a tech vendor with 201-500 employees

It helps us meet compliance requirements, by identifying and fixing vulnerabilities, and to have a robust vulnerability management program. It basically helps keep our company secure, from the application security standpoint.

Snyk also helps improve our company by educating users on the security aspect of the software development cycle. They may have been unaware of all the potential security risks when using open source packages. During this process, they have become educated on what packages to use, the vulnerabilities behind them, and a more secure process for using them.

In addition, its container security feature allows developers to own security for the applications and the containers they run in the cloud. It gives more power to the developers.

Before using Snyk, we weren't identifying the problems. Now, we're seeing the actual problems. It has affected our security posture by identifying open source packages' vulnerabilities and licensing issues. It definitely helps us secure things and see a different facet of security.

It also allows our developers to spend less time securing applications, increasing their productivity. I would estimate the increase in their productivity at 10 to 15 percent, due to Snyk's integration. The scanning is automated through the use of APIs. It's not a manual process. It automates everything and spits out the results. The developers just run a few commands to remediate the vulnerabilities.

View full review »
Director of Architecture at a tech vendor with 201-500 employees

We have been considering Snyk in order to improve the security of our platform, in terms of Docker image security as well as software dependency security. Ultimately, we decided to roll out only the part related to software dependency security plus the licensing mechanism, allowing us to automate the management of licenses.

We have integrated Snyk in the testing phase, like in the testing environment. We are in the process of rolling the solution out across our entire platform, which we will be doing soon. The APIs have enabled us to do whatever we have needed, and the amount of effort for the integration on our end has been reasonable. The solution works well and should continue to work well after the full-scale roll-out.

View full review »
CAST Highlight: API
Chief Architect at a computer software company with 10,001+ employees

Its price should be better. It is a pretty costly tool.

They have two products: CAST Highlight and CAST AIP. Both are licensed separately. As per CAST, Highlight is for rapid prototyping and AIP is for in-depth, detailed analysis. But then there are areas that Highlight covers (cloud adoption) which AIP does not. Our experience in using AIP is that it also does not look at the entire tech stack and does not provide a list of all technologies present in your application, flagging what is supported and what is not, so that the customer has a clear view. Highlight probably does that. They need to simplify this for customers. I would expect CAST Highlight to have a lighter version of the Health dashboard and the Engineering dashboards. These dashboards are currently part of CAST AIP, and if they were made available in CAST Highlight, customers wouldn't have to use two different products all the time.

View full review »