Is continuous vulnerability scanning necessary? Are there other approaches to vulnerability management that do not involve continuous scanning?
As data increasingly moves from on-prem to the public cloud, we need a complete rethink of how we view and protect our critical databases. It is common for cloud databases to be spun up, have data ingested, and then be torn down again very quickly. In that situation it's clear that continuous scanning is essential to keep your database inventory up to date and vulnerabilities remediated. An hourly, daily, weekly, or monthly scan will not keep you current on what's happening to your most precious resource: your data.

However, this can only be achieved with a product designed specifically for securing cloud databases on a continuous cycle. Trying to repurpose an on-prem tool to handle cloud databases won't work. Ask an auditor whether it's OK to punch a hole in your VPC so your current database security tool can assess your security posture! You can imagine the answer.

It's also worth remembering that the cloud infrastructure itself is handled by the service providers (AWS, Microsoft, Google), so that's not where the problems will come from. The old days of keeping patches updated are largely gone with the move to the public cloud; issues are far more likely to appear on the customer's side of the shared-responsibility model. So continuous scanning for inventory changes, vulnerabilities, and misconfigurations is absolutely essential in my view. If anyone is interested in more detail, we have written a short whitepaper describing the issues and solutions; you can find it on our website at www.secureclouddb.com
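To give a flavour of what "continuous" means in practice, here is a minimal Python sketch that diffs the RDS inventory on a short loop using boto3. The 60-second interval and the specific misconfiguration check are illustrative assumptions, not a product design:

```python
# Minimal sketch: continuously diff the RDS inventory so short-lived
# databases are still caught. Assumes boto3 credentials are already
# configured; the interval and checks are example values only.
import time
import boto3

rds = boto3.client("rds")
known = {}  # DBInstanceIdentifier -> summary dict

def snapshot():
    """Return the current RDS inventory keyed by instance identifier."""
    inventory = {}
    for page in rds.get_paginator("describe_db_instances").paginate():
        for db in page["DBInstances"]:
            inventory[db["DBInstanceIdentifier"]] = {
                "engine": f'{db["Engine"]} {db["EngineVersion"]}',
                "public": db.get("PubliclyAccessible", False),
                "encrypted": db.get("StorageEncrypted", False),
            }
    return inventory

while True:
    current = snapshot()
    for name in current.keys() - known.keys():
        print(f"NEW database: {name} {current[name]}")
        if current[name]["public"]:
            print(f"  MISCONFIGURATION: {name} is publicly accessible")
    for name in known.keys() - current.keys():
        print(f"REMOVED database: {name}")
    known = current
    time.sleep(60)  # far tighter than an hourly or daily scan window
```

A database that lives for twenty minutes would appear and disappear entirely between two daily scans; a loop like this catches both events.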
I believe vulnerability scanning is usually a scheduled activity where you can vary the frequency of scans according to your needs and the performance impact on the target resources. Regular scans ensure you discover new vulnerabilities while measuring your progress in remediating previously highlighted weaknesses. Continuous scanning often implies unauthenticated scans; the alternative is authenticated scans/probes, which produce more accurate data and fewer false positives.
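To make that concrete, one simple way to vary frequency is to tier it by asset criticality. The tiers, cadences, and the authenticated/unauthenticated split below are example policy values, not a standard:

```python
# Illustrative scan-scheduling policy: cadence and scan type vary with
# asset criticality. All values here are assumptions for illustration.
SCAN_POLICY = {
    # criticality: (frequency, use authenticated scan?)
    "critical": ("daily",   True),   # credentialed scans: fewer false positives
    "high":     ("weekly",  True),
    "medium":   ("weekly",  False),
    "low":      ("monthly", False),
}

def plan_scan(asset_name: str, criticality: str) -> str:
    frequency, authenticated = SCAN_POLICY[criticality]
    mode = "authenticated" if authenticated else "unauthenticated"
    return f"{asset_name}: {frequency} {mode} scan"

print(plan_scan("payments-db", "critical"))  # payments-db: daily authenticated scan
```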
Vulnerability management consists of multiple phases, one of which is vulnerability posture acquisition (basically, scanning for vulnerabilities). There are clear advantages to obtaining vulnerability information very frequently (i.e. almost continuously), and this is best done with an agent-based solution.
That said, there is no point in scanning continuously if the process cannot handle the data at the same cadence. For example, triage should be automated so that detected vulnerabilities are categorised immediately and matched against the VM policy to derive the action needed (a rough sketch of this follows below).
Our best practice is to process vulnerabilities in our platform, which can be configured with very granular policies. The key, however, is not to overload the IT organisation with requests to fix vulnerabilities. Use your trust capital carefully and only push for emergency fixes when the risk warrants it.
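Here is a minimal sketch of that kind of automated triage, assuming a simple CVSS-plus-exposure policy. The thresholds and SLAs are illustrative, not our platform's actual policy:

```python
# Minimal triage sketch: map each finding to an action and an SLA as soon
# as the scanner reports it, so continuous scan output never piles up
# untriaged. Thresholds and SLA values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Finding:
    asset: str
    cve: str
    cvss: float
    internet_facing: bool

def triage(f: Finding) -> tuple[str, int]:
    """Return (action, remediation SLA in days) per a sample VM policy."""
    if f.cvss >= 9.0 and f.internet_facing:
        return ("emergency fix", 1)      # spend trust capital only here
    if f.cvss >= 7.0:
        return ("next patch cycle", 30)
    if f.cvss >= 4.0:
        return ("track in backlog", 90)
    return ("accept risk", 0)

f = Finding("web-01", "CVE-2024-0001", cvss=9.8, internet_facing=True)
print(triage(f))  # ('emergency fix', 1)
```

The point is that only the top branch generates an interrupt for IT; everything else flows into the normal patch cadence without a ticket storm.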
Because the technology landscape is constantly changing, the threat landscape is also constantly changing, and we as humans can't be perfect, continuous vulnerability scanning is a must.
Does anyone have recommendations about methodologies (e.g. use of the FAIR framework), plug-ins (ETL schemas, FOSS add-ons), or commercial/free solutions (like Kenna) that can help with the integration, transformation, and consolidation of vulnerabilities into risks (from Tenable.io to Archer)?
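To illustrate the kind of transformation I mean, here is a rough DIY sketch using pyTenable's export API to produce a flat CSV that an Archer data feed could ingest. The risk-scoring formula and CSV columns are my own assumptions to adapt, and the export field names should be verified against your tenant's actual output:

```python
# Sketch of the "integration, transformation, consolidation" step: pull
# findings from Tenable.io via pyTenable and flatten them into a CSV for
# an Archer data feed. Field names follow the Tenable.io vuln export
# schema as I understand it; verify against your own export.
import csv
from tenable.io import TenableIO

tio = TenableIO("ACCESS_KEY", "SECRET_KEY")  # replace with real API keys

with open("archer_feed.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["asset", "cve", "plugin", "cvss", "risk_score", "state"])
    for v in tio.exports.vulns(severity=["critical", "high"]):
        plugin = v.get("plugin", {})
        cvss = plugin.get("cvss3_base_score") or plugin.get("cvss_base_score") or 0
        # Toy consolidation: weight CVSS by reported severity. A FAIR-style
        # approach would replace this with loss-frequency x magnitude estimates.
        weight = {"critical": 3, "high": 2, "medium": 1}.get(v.get("severity"), 1)
        risk_score = float(cvss) * weight
        for cve in plugin.get("cve") or ["N/A"]:
            writer.writerow([
                v.get("asset", {}).get("hostname", "unknown"),
                cve,
                plugin.get("name", ""),
                cvss,
                risk_score,
                v.get("state", ""),
            ])
```

I'd still prefer a maintained connector or a platform like Kenna over maintaining glue code like this, hence the question.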
I'm a Senior System Engineer at a mid-sized enterprise. I am comparing Qualys VM and Tenable Nessus: